What is statistical significance in A/B testing?
Statistical significance tells you whether the difference between variants A and B is likely real or just random chance. Testing at 95% confidence means that if there were truly no difference between the variants, you would see a result this extreme only 5% of the time (the false-positive rate). Most marketers require at least 95% confidence before declaring a winner.
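As a rough sketch of how the check works (the function name is illustrative, not from any particular tool), a two-proportion z-test compares the open or click counts of the two variants:

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's rate significantly
    different from variant A's? Returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# 20% vs 25% open rate on 1,000 recipients each
z, p = ab_significance(200, 1000, 250, 1000)
print(f"z = {z:.2f}, p = {p:.4f}, significant at 95%: {p < 0.05}")
```

A p-value below 0.05 corresponds to crossing the 95% confidence bar; below 0.01 corresponds to 99%.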
How many emails do I need for a valid A/B test?
You need at least 1,000 recipients per variant (2,000 total). Smaller samples produce unreliable results even if they look significant. For low open or click rates, you may need 5,000+ per variant for reliable significance.
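The required sample size can be estimated with the standard power calculation for comparing two proportions; a minimal sketch (function name and defaults of 95% confidence / 80% power are illustrative assumptions):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(base_rate, lift, alpha=0.05, power=0.80):
    """Approximate recipients needed per variant to detect a relative
    `lift` over `base_rate` at significance `alpha` with the given power."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)   # two-sided critical value (~1.96 at 95%)
    z_b = nd.inv_cdf(power)           # power quantile (~0.84 at 80% power)
    p_bar = (p1 + p2) / 2
    n = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

print(sample_size_per_variant(0.20, 0.10))  # 10% relative lift on a 20% open rate
```

Detecting a 10% relative lift on a 20% open rate works out to roughly 6,500 recipients per variant, which is why the 1,000-per-variant floor only catches fairly large differences.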
What should I A/B test in my emails?
Highest-impact tests: subject line (biggest effect on open rate), from name, send time, CTA button text/color, and email length. Test only one element at a time so you know what caused the difference.
What confidence level should I use?
Use 95% as your minimum. For high-stakes decisions (like changing your entire email template), wait for 99% confidence. At 90% you accept too much risk: roughly 1 in 10 tests where the variants actually perform identically would still hand you a "winner" by pure chance.
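You can see the false-positive rates directly by simulating A/A tests, where both "variants" are identical, and counting how often a winner is wrongly declared at each threshold. A minimal sketch (function name and simulation parameters are illustrative):

```python
import random
from statistics import NormalDist

random.seed(1)
nd = NormalDist()

def aa_false_positive_rate(threshold, trials=1000, n=1000, rate=0.20):
    """Simulate A/A tests (no real difference) and return the share of
    trials where a 'winner' clears the given p-value threshold."""
    hits = 0
    for _ in range(trials):
        # both arms drawn from the SAME 20% open rate
        a = sum(random.random() < rate for _ in range(n))
        b = sum(random.random() < rate for _ in range(n))
        p_pool = (a + b) / (2 * n)
        se = (p_pool * (1 - p_pool) * (2 / n)) ** 0.5
        z = abs(a - b) / n / se if se else 0.0
        p = 2 * (1 - nd.cdf(z))
        hits += p < threshold
    return hits / trials

print(aa_false_positive_rate(0.10))  # ~10% false winners at 90% confidence
print(aa_false_positive_rate(0.05))  # ~5% false winners at 95% confidence
```

The simulated false-winner rate tracks the threshold: relaxing from 95% to 90% roughly doubles how often random noise gets crowned a winner.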