While I generally enjoy the articles on Web Design from Scratch, I found a problem with this article about A/B testing. Take the following excerpt…
“We will look at the test results over the coming weeks to see if our prediction is correct, and use these results to formulate possible follow-on tests to further increase conversion rate.”
“Results of the test will be posted here once we can see a clear winner.”
The author then invites the audience to check out the new design, which immediately left me wondering if this might skew the results.
“In the meantime, check out Rankmill.com, and start sharing your own Top Lists.”
It’s my prediction that Rankmill.com will see an increase in traffic from people who might not be its typical audience. This new wave of visitors will arrive via the article itself, RSS feeds, Twitter posts, and other referring sites. Once they get to the site, I predict that they’ll either…
a) See the old design and reload the page to see if the new design shows up. When it doesn’t, they’ll leave the site (inflating the bounce rate for the old design)
b) See the new design and experiment with the new UI for a bit (inflating engagement with the new design)
If my prediction is correct, might an influx of new traffic (from people who know about the test) lead to the conclusion that the new design is more effective? I worry that the experiment is now contaminated, and I’d be skeptical of any results that didn’t properly account for this.
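One way to account for this, as a minimal sketch only: if the session logs record a referrer and the assigned variant, sessions referred by the announcing article (and by links to it) could be excluded before computing per-variant conversion rates. The field names, domains, and data shape below are all hypothetical examples, not anything the Rankmill team has described.

```python
# Sketch: exclude test-aware traffic before comparing variants.
# Assumes each session record carries a referrer, an assigned variant
# ("old" or "new"), and whether the visitor converted. The field names
# and contaminating domains below are hypothetical.

CONTAMINATING_REFERRERS = {
    "webdesignfromscratch.com",  # the article announcing the test
    "twitter.com",               # posts linking to the announcement
}

def is_contaminated(session):
    referrer = session.get("referrer", "")
    return any(domain in referrer for domain in CONTAMINATING_REFERRERS)

def conversion_rates(sessions):
    """Per-variant conversion rate, ignoring sessions from aware visitors."""
    counts = {"old": [0, 0], "new": [0, 0]}  # variant -> [conversions, total]
    for s in sessions:
        if is_contaminated(s):
            continue
        tally = counts[s["variant"]]
        tally[0] += s["converted"]
        tally[1] += 1
    return {v: (conv / total if total else 0.0)
            for v, (conv, total) in counts.items()}

# Example usage:
sessions = [
    {"referrer": "google.com", "variant": "old", "converted": False},
    {"referrer": "twitter.com", "variant": "new", "converted": True},  # excluded
    {"referrer": "", "variant": "new", "converted": True},
]
print(conversion_rates(sessions))  # {'old': 0.0, 'new': 1.0}
```

Even this only catches visitors whose referrer survives; anyone who read the article and typed the URL directly would still slip through, which is why announcing a test mid-run is so hard to undo.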