Final answer:
The confidence threshold commonly required to declare a winner in an A/B test is 90%. Higher confidence levels require larger sample sizes (for the same margin of error), and confidence intervals widen as the confidence level increases. A two-sided 95% confidence interval places 2.5% probability in each tail of the distribution.
Step-by-step explanation:
Most tools require a confidence threshold of 90% in order to declare a winner in an A/B test. This threshold limits the type I error rate to 10%: the chance of rejecting a true null hypothesis, i.e. declaring a significant difference when none actually exists.
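As an illustration, the test behind that threshold can be sketched as a two-proportion z-test judged at alpha = 0.10 (a 90% confidence threshold). The conversion counts below are made-up numbers, not from the original question:

```python
# Sketch of a two-proportion z-test for an A/B test at a 90% confidence
# threshold (alpha = 0.10). Counts are illustrative, not real data.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: the two proportions are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided tail area
    return z, p_value

z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
alpha = 0.10                                              # 90% confidence threshold
print(f"z = {z:.3f}, p = {p:.4f}, significant at 90%: {p < alpha}")
```

With these particular numbers the result is significant at the 90% level but not at 95%, which is exactly why the choice of threshold matters.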
When designing a study to estimate a population proportion with a certain level of confidence, the minimum number of survey participants depends on that desired confidence level. A 90% confidence interval generally needs fewer participants than a higher confidence level would. If it were important to be more than 90 percent confident and a new survey were commissioned, the required sample size would increase: a higher confidence level means we want to be more certain about the estimate, and that extra certainty requires a larger sample.
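A minimal sketch of that sample-size relationship, using the standard formula n = z² · p(1−p) / E² with the conservative worst case p = 0.5 and an assumed margin of error E = 0.03 (both values chosen for illustration):

```python
# Minimum sample size for estimating a proportion within margin of error E:
# n = z^2 * p * (1 - p) / E^2, rounded up. p = 0.5 is the worst case.
from math import ceil

Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}   # two-sided critical values

def min_sample_size(confidence, margin, p=0.5):
    z = Z[confidence]
    return ceil(z**2 * p * (1 - p) / margin**2)

for conf in (0.90, 0.95, 0.99):
    print(f"{conf:.0%} confidence -> n = {min_sample_size(conf, margin=0.03)}")
```

The required n grows with the confidence level, matching the reasoning above.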
Comparing confidence intervals, one can expect a 99 percent confidence interval to be wider than a 95 percent confidence interval, because it must cover a larger range of values to achieve the higher degree of certainty. If the confidence level is decreased from 99 percent to 90 percent, the confidence interval becomes narrower, because less certainty is demanded and a smaller range of values suffices.
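That width ordering can be checked directly with the normal-approximation interval for a proportion, whose full width is 2 · z · sqrt(p(1−p)/n). The sample proportion and sample size below are made-up values for illustration:

```python
# Full width of a normal-approximation CI for a proportion at three
# confidence levels. p_hat and n are illustrative, not real survey data.
from math import sqrt

Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}
p_hat, n = 0.40, 1000
widths = {c: 2 * z * sqrt(p_hat * (1 - p_hat) / n) for c, z in Z.items()}

for conf, w in widths.items():
    print(f"{conf:.0%} CI width: {w:.4f}")
```

The 99% interval comes out widest and the 90% interval narrowest, as the paragraph above predicts.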
Lastly, a two-sided 95 percent confidence interval places 2.5% probability in each tail of the distribution. A related point is worth clarifying, since it is a common misinterpretation: a 90 percent confidence interval is a procedure that captures the true population parameter in 90 percent of repeated samples; it does not necessarily contain 90 percent of the data.
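Both points can be demonstrated in a short sketch: the 2.5% tail area recovers the familiar critical value z ≈ 1.96, and a simulation of repeated sampling (with made-up population parameters) shows roughly 95% of the resulting intervals covering the true mean:

```python
# 1) Recover the two-sided 95% critical value from the 2.5% tail area.
# 2) Simulate repeated sampling: ~95% of intervals should cover the truth.
import random
from statistics import NormalDist, mean

z = NormalDist().inv_cdf(0.975)          # 97.5th percentile -> 2.5% in each tail

random.seed(0)
mu, sigma, n, trials = 10.0, 2.0, 50, 2000   # illustrative population values
covered = 0
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    half = z * sigma / n ** 0.5          # known-sigma CI half-width
    m = mean(xs)
    covered += (m - half <= mu <= m + half)

print(f"z = {z:.3f}, empirical coverage = {covered / trials:.3f}")
```

The empirical coverage hovers near 0.95, which is the repeated-sampling interpretation described above.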