
"A/B testing is the gold standard of experimentation. It is meant to help companies make faster, better, data-driven decisions. But too often, it does the opposite. The meeting starts with optimism: a new pricing idea, ad layout, or signup screen goes into an A/B test. After waiting for weeks, analysts come back with p-values, 95% confidence thresholds, and a familiar conclusion: "We should wait for more data. We don't have enough evidence yet, and it's not statistically significant.""
"Tomomichi Amano is an assistant professor of business administration at Harvard Business School. Joonhwi Joo is Assistant Professor of Marketing at The University of Texas at Dallas. He is an award-winning researcher and expert on data-driven decision-making. He has developed rigorous methods to uncover drivers of consumer behavior, and is the author of the leading textbook used by researchers of quantitative marketing."
A/B testing aims to enable faster, better, data-driven decisions but frequently produces inconclusive results that delay action. Teams implement experiments across pricing, ad layouts, and signup flows with optimism, only to receive p-values and 95% confidence thresholds that fail to reach statistical significance. The common response is to wait for more data, which prolongs decision cycles and undermines the purpose of experimentation. Dependence on strict significance thresholds can turn experimentation into a bottleneck, preventing timely learning and implementation of beneficial changes.
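The significance problem described above can be made concrete with a small sketch. The following is a standard two-sided two-proportion z-test, with hypothetical conversion numbers (the sample sizes and rates are illustrative, not from the article): even a real 1.5-point lift can fail to reach the 5% significance threshold when each arm has only 1,000 visitors, which is exactly the "wait for more data" trap.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p
    return z, p_value

# Hypothetical test: control converts at 10.0%, variant at 11.5%.
z, p = two_proportion_z_test(100, 1000, 115, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p lands well above 0.05
```

With these numbers the test is "not statistically significant," so a strict 95% rule would tell the team to keep waiting, even though the variant may well be the better choice to ship.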
Read at Harvard Business Review