A/B testing compares conversion rates between two or more variants. Since conversion is binary (converted/didn't convert) and you're comparing groups (variant A vs B), this is exactly a chi-square test of independence on a 2×2 contingency table. CrossTabs.com provides instant significance testing with effect sizes so you can determine both whether your result is significant and how large the difference is.
Structure your data as a 2×2 table:
| | Converted | Not Converted | Total |
|---|---|---|---|
| Variant A | 120 | 880 | 1000 |
| Variant B | 150 | 850 | 1000 |
| Total | 270 | 1730 | 2000 |
Results: χ²(1) = 3.85, p = 0.050. Conversion rate A = 12.0%, B = 15.0%. The difference is borderline significant, sitting right at the 5% threshold.
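The statistic above can be reproduced by hand. For a 2×2 table [[a, b], [c, d]], Pearson's chi-square has the closed form n(ad − bc)² / ((a+b)(c+d)(a+c)(b+d)), and with one degree of freedom the p-value is erfc(√(χ²/2)). A minimal pure-Python sketch:

```python
from math import erfc, sqrt

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For df = 1 the chi-square survival function is erfc(sqrt(x / 2))
    p = erfc(sqrt(chi2 / 2))
    return chi2, p

chi2, p = chi_square_2x2(120, 880, 150, 850)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # chi2 = 3.85, p = 0.050
```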
Statistical significance alone doesn't tell you whether a result is practically meaningful. Alongside the p-value, CrossTabs.com reports effect sizes so you can judge the size of the difference, not just its reliability.
Before running an A/B test, use power analysis to determine the required sample size. CrossTabs.com's power analysis calculator tells you how many users each variant needs to detect a given effect size with 80% power at α = 0.05. This prevents running underpowered tests that waste time and resources.
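As an illustration of the arithmetic behind such a calculator (not CrossTabs.com's actual implementation), the standard closed-form sample size for comparing two proportions p₁ and p₂ uses the pooled rate p̄ = (p₁ + p₂)/2 and the normal quantiles for α and power. A sketch, using the example rates above:

```python
from math import ceil, sqrt

def n_per_variant(p1, p2, z_alpha=1.959964, z_beta=0.841621):
    """Required users per variant for a two-proportion test.
    Defaults: z_alpha for two-sided alpha = 0.05, z_beta for 80% power."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Detecting a lift from 12% to 15% conversion
print(n_per_variant(0.12, 0.15))  # 2036 users per variant
```

Note that the worked example above used only 1000 users per variant, which is why a real 3-point lift came out borderline: the test was underpowered for an effect of that size.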
For 2×2 tables, the chi-square test and the two-proportion z-test give identical p-values (χ² = z²). Use whichever your team is more familiar with. CrossTabs.com uses chi-square because it generalizes to tests with more than two variants.
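The equivalence is easy to check numerically. Below, a pooled two-proportion z statistic is computed for the example table and squared; it matches the Pearson chi-square (this sketch assumes no continuity correction on either side):

```python
from math import sqrt

# Example data: 120/1000 conversions for A, 150/1000 for B
conv_a, n_a, conv_b, n_b = 120, 1000, 150, 1000

# Pooled two-proportion z-test
p1, p2 = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
z = (p2 - p1) / sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

# Pearson chi-square on the same 2x2 table
a, b, c, d = conv_a, n_a - conv_a, conv_b, n_b - conv_b
n = n_a + n_b
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

print(f"z^2 = {z * z:.4f}, chi2 = {chi2:.4f}")  # the two values agree
```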
Run until you reach the pre-calculated required sample size (from power analysis). Do not peek at results and stop early when significant — this inflates your false positive rate. Use sequential testing methods if you need to monitor results continuously.
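To see why peeking inflates false positives, here is a small A/A simulation (both variants share the same true conversion rate, assumed 10% for illustration): checking a chi-square p-value every 200 users and declaring victory at the first "significant" look fires far more often than 5%, while a single test at the fixed horizon stays near the nominal rate.

```python
import random
from math import erfc, sqrt

random.seed(42)

def p_value_2x2(a, b, c, d):
    """Two-sided p-value from a Pearson chi-square on [[a, b], [c, d]]."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        return 1.0
    chi2 = n * (a * d - b * c) ** 2 / denom
    return erfc(sqrt(chi2 / 2))

def run_aa_trial(rate=0.1, n_per_arm=2000, peek_every=200):
    """Simulate one A/A test (identical true rates) and report two decisions:
    peeking (stop at the first significant look) vs. a single fixed-horizon test."""
    conv_a = conv_b = 0
    peeked_significant = False
    for i in range(1, n_per_arm + 1):
        conv_a += random.random() < rate
        conv_b += random.random() < rate
        if i % peek_every == 0 and not peeked_significant:
            p = p_value_2x2(conv_a, i - conv_a, conv_b, i - conv_b)
            peeked_significant = p < 0.05
    final_p = p_value_2x2(conv_a, n_per_arm - conv_a, conv_b, n_per_arm - conv_b)
    return peeked_significant, final_p < 0.05

trials = 1000
peek_fp = fixed_fp = 0
for _ in range(trials):
    peeked, fixed = run_aa_trial()
    peek_fp += peeked
    fixed_fp += fixed

print(f"false-positive rate with peeking: {peek_fp / trials:.3f}")
print(f"false-positive rate at fixed horizon: {fixed_fp / trials:.3f}")
```

Since there is no true difference in an A/A test, every "significant" result here is a false positive; the peeking strategy produces several times more of them.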
Yes, A/B tests extend to more than two variants. For A/B/C or A/B/C/D tests, use a larger contingency table; CrossTabs.com handles any number of variants. If the overall chi-square is significant, follow up with pairwise comparisons under a Bonferroni correction to identify which pairs differ.
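A sketch of that workflow in pure Python, reusing the A and B counts from the example above plus a made-up variant C (180/1000 conversions, invented for illustration): run the overall test first, then pairwise 2×2 tests against a Bonferroni-adjusted threshold.

```python
from itertools import combinations
from math import erfc, exp, sqrt

def chi2_stat(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    return sum((obs - rt * ct / n) ** 2 / (rt * ct / n)
               for row, rt in zip(table, row_tot)
               for obs, ct in zip(row, col_tot))

# Each row is [converted, not converted]; C's numbers are invented
variants = {"A": [120, 880], "B": [150, 850], "C": [180, 820]}

# Overall test: df = (rows - 1)(cols - 1) = 2, and P(chi2_2 > x) = exp(-x / 2)
overall = chi2_stat(list(variants.values()))
print(f"overall chi2(2) = {overall:.2f}, p = {exp(-overall / 2):.4f}")

# Pairwise 2x2 follow-ups at a Bonferroni-adjusted threshold
pairs = list(combinations(variants, 2))
alpha = 0.05 / len(pairs)
for v1, v2 in pairs:
    chi2 = chi2_stat([variants[v1], variants[v2]])
    p = erfc(sqrt(chi2 / 2))  # survival function for df = 1
    verdict = "differs" if p < alpha else "no clear difference"
    print(f"{v1} vs {v2}: chi2 = {chi2:.2f}, p = {p:.4f} -> {verdict}")
```

With these numbers, the overall test is significant, but after the Bonferroni adjustment only A vs C clears the corrected threshold, which is exactly the nuance the correction exists to catch.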