Top tips and best practices
1. Start with a Solid Plan
Before running a test, create a detailed plan that includes your hypothesis, target metric, timeline, and audience segment, and stick to it. Avoid the temptation to segment your audience or add new metrics once the test is live, as this compromises the integrity of your results. Include before-and-after screenshots of the changes so we can clearly see how the variant looks.
2. Define Clear Goals
Be specific about what you want to achieve. Are you trying to boost add-to-cart rates, reduce checkout abandonment, or increase average order value? Clear goals ensure meaningful insights.
3. Test One Change at a Time
Focus on a single variable per test — such as the CTA button colour, product image layout, or promo banner wording — so you can pinpoint what’s driving the result.
4. Choose the Right Metrics
Select metrics that reflect your business objectives, such as revenue per visitor or conversion rate. Avoid vanity metrics that don’t offer real insight into customer behaviour.
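For reference, both of these metrics are simple to calculate from raw traffic and sales figures. Here is a minimal sketch, using made-up numbers purely for illustration:

```python
# Illustrative figures only: these are not real numbers from any store.
visitors = 12_000          # unique visitors exposed to the variant
orders = 540               # completed purchases from those visitors
revenue = 32_400.00        # total revenue from those purchases

conversion_rate = orders / visitors          # share of visitors who purchased
revenue_per_visitor = revenue / visitors     # revenue generated per visitor

print(f"Conversion rate:     {conversion_rate:.2%}")      # 4.50%
print(f"Revenue per visitor: {revenue_per_visitor:.2f}")  # 2.70
```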
5. Don’t Jump the Gun
Don’t jump to conclusions too quickly. Only stop a test early if the results are clearly worse: for example, if the overall conversion rate or revenue per visitor is consistently lower and stays well below the original version even when you take the full range of possible outcomes into account (this range is called the “confidence interval”; a sketch of how it can be estimated follows below). We typically recommend monitoring the results for at least a week before deciding a test is consistently underperforming. Otherwise, be patient: early data can be misleading.
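To make the “confidence interval” idea concrete, here is a minimal sketch of one common way to estimate that range for the difference in conversion rate between the original and the variant. It uses a simple normal approximation and illustrative figures; your testing tool may calculate this differently.

```python
import math

def conversion_ci(conversions_a, visitors_a, conversions_b, visitors_b, z=1.96):
    """95% confidence interval for the difference in conversion rate
    (variant minus original), using a normal approximation."""
    p_a = conversions_a / visitors_a   # original (control) conversion rate
    p_b = conversions_b / visitors_b   # variant conversion rate
    se = math.sqrt(p_a * (1 - p_a) / visitors_a + p_b * (1 - p_b) / visitors_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Illustrative figures only
low, high = conversion_ci(conversions_a=500, visitors_a=10_000,
                          conversions_b=430, visitors_b=10_000)
print(f"Difference in conversion rate: {low:.2%} to {high:.2%}")
# If even the upper end of this range stays clearly below zero after a week
# or more of data, the variant is consistently underperforming the original.
```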
6. Segment Thoughtfully (and Early)
If you plan to segment users (e.g., by traffic source, device, or new vs. returning), define this before launching the test. Last-minute segmentation can leave you with data that isn’t representative of what is really happening.
Note: If you want to apply an experiment to a specific segment, such as a particular device type or new versus returning customers, please reach out to us so we can assist with the setup.
7. Avoid Overlapping Experiments
Running multiple tests that affect the same user pool can interfere with results. Stagger tests or use non-overlapping user groups to avoid cross-contamination.
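If your testing platform does not manage this for you, one simple way to keep user pools separate is to assign each user to exactly one experiment deterministically, for example by hashing a stable user ID. A minimal sketch follows; the group names and user ID are placeholders, not part of any specific tool.

```python
import hashlib

def assign_group(user_id: str, groups=("experiment_a", "experiment_b", "holdout")):
    """Deterministically assign each user to exactly one group so that
    concurrent experiments never share the same users."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(groups)
    return groups[bucket]

# The same user always lands in the same group across sessions, so the
# experiments draw from non-overlapping pools of users.
print(assign_group("customer-1042"))
print(assign_group("customer-1042"))  # identical result on every call
```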
8. Consider External Influences
Be aware of promotions, seasonal events (like Black Friday or Christmas), or changes in ad spend that may affect user behaviour and skew test results. Our models do not account for these seasons, so it is best to avoid running experiments during these periods.
9. Document Everything
Keep detailed records of each test: what you changed, why, when, for how long, and what the results were. This creates a valuable knowledge base for future testing. You can reuse the original test plan and simply update it as the test progresses, so everything stays in one place.
10. Think Beyond the Immediate Win
A test might improve short-term metrics but damage long-term trust or user experience. For example, a pop-up might increase newsletter sign-ups but frustrate users — weigh both short and long-term effects.
11. Keep Testing and Iterating
Use insights from each test to inform future ones. A/B testing is an ongoing process, not a one-and-done tactic. It’s normal to run multiple waves of testing on a particular change to keep optimising your site.