Final answer:
Conducting t tests on all possible pairs of means when analyzing data from more than two groups inflates the probability of a Type I error, not a Type II error. For comparisons among multiple groups, ANOVA is recommended because it tests all group means in a single procedure, keeping the overall Type I error rate at the chosen significance level and avoiding the inflation that comes with multiple t tests.
Step-by-step explanation:
When analyzing data from an experiment that involves more than two groups, running t tests on all possible pairs of means increases the probability of making a Type I error, that is, incorrectly rejecting a true null hypothesis. Each comparison carries its own chance of a false rejection, so the familywise error rate accumulates across comparisons: with m independent tests each conducted at significance level α, the probability of at least one Type I error is 1 - (1 - α)^m, which quickly exceeds α as m grows. While a t test is appropriate for comparing two means, when several groups are involved, Analysis of Variance (ANOVA) is recommended: it compares the means of all groups simultaneously with a single omnibus F test, keeping the overall Type I error rate at the desired level.
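As a quick illustration (a minimal sketch in Python, assuming a per-test significance level of α = 0.05 and made-up group data), the familywise Type I error rate grows rapidly with the number of pairwise t tests, while a single one-way ANOVA, here via scipy.stats.f_oneway, tests all group means at once:

```python
import numpy as np
from scipy import stats

alpha = 0.05  # assumed per-test significance level

# Familywise Type I error rate when running m independent pairwise t tests:
# P(at least one false rejection) = 1 - (1 - alpha)^m
for k in (3, 4, 5):                      # number of groups
    m = k * (k - 1) // 2                 # number of pairwise comparisons
    familywise = 1 - (1 - alpha) ** m
    print(f"{k} groups -> {m} t tests -> familywise error ~ {familywise:.3f}")

# A single one-way ANOVA compares all group means simultaneously,
# keeping the overall Type I error rate at alpha.
rng = np.random.default_rng(0)
g1, g2, g3 = (rng.normal(loc, 1.0, size=30) for loc in (0.0, 0.0, 0.5))
f_stat, p_value = stats.f_oneway(g1, g2, g3)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```

With 5 groups there are already 10 pairwise comparisons, and the chance of at least one false rejection is roughly 0.40 rather than 0.05, which is exactly the problem the single ANOVA test avoids.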
It is also important to note that failing to reject a false null hypothesis results in a Type II error. The power of a statistical test, defined as 1 - β where β is the probability of a Type II error, depends on the sample size, the effect size, and the variance of the measure used. To perform a hypothesis test validly and keep both Type I and Type II errors in check, certain distributional requirements must be met; for example, the Student's t test is appropriate when the population standard deviation is unknown and the populations are approximately normally distributed or the sample sizes are large.
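To make the power relationship concrete, here is a minimal sketch, assuming statsmodels is available and using a hypothetical two-group design with an assumed standardized effect size (Cohen's d) of 0.5; power rises as the per-group sample size increases:

```python
from statsmodels.stats.power import TTestIndPower

# Power (1 - beta) of a two-sample t test for an assumed standardized
# effect size of d = 0.5 at alpha = 0.05, for several sample sizes.
analysis = TTestIndPower()
for n in (20, 50, 100):          # per-group sample size (illustrative values)
    power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"n = {n} per group -> power ~ {power:.2f}")
```

Larger samples, larger effects, or smaller variance all raise power, i.e., they lower the probability of a Type II error for a fixed significance level.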