Final answer:
The probability that the scores of two randomly selected test takers on an exam (normally distributed with mean 41 and standard deviation 2) are within 10 points of each other is approximately 1, i.e., essentially 100%. This follows from the normal distribution: the difference in scores, D, is also normally distributed, with mean 0 and standard deviation 2√2.
Step-by-step explanation:
The question involves the normal distribution in statistics and probability. We are given that exam scores are normally distributed with mean μ = 41 and standard deviation σ = 2. To find the probability that the scores of two randomly selected test takers differ by less than 10 points, we define the random variable D as the difference between their scores. Because each score is normally distributed and the two test takers are selected independently, D is also normally distributed.
Since both scores come from the same distribution, their expected difference is 0, so D has mean 0. To find the standard deviation of D, we use the fact that when independent random variables are subtracted, their variances add:
σ_D² = σ₁² + σ₂²
σ_D = √(σ² + σ²) = √2 · σ = √2 · 2 = 2√2 ≈ 2.83
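As a quick numerical sanity check, here is a minimal Python sketch (using NumPy; the variable names, seed, and simulation size are our own illustrative choices, not part of the problem):

```python
import numpy as np

mu, sigma = 41.0, 2.0  # given: exam mean and standard deviation

# Analytic standard deviation of D = X1 - X2 for independent scores:
# Var(D) = sigma^2 + sigma^2, so sd(D) = sqrt(2) * sigma = 2*sqrt(2)
sd_D = np.sqrt(sigma**2 + sigma**2)
print(sd_D)  # 2.8284...

# Monte Carlo check: simulate 100,000 pairs of test takers
rng = np.random.default_rng(0)
d = rng.normal(mu, sigma, 100_000) - rng.normal(mu, sigma, 100_000)
print(d.std())  # close to 2.83
```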
To find P(|D| < 10), we convert D to a z-score and use the standard normal distribution. The z-score for D = 10 is:
z = D/σ_D = 10/(2√2) = 10/2.828 ≈ 3.54
Looking up a z-score of 3.54 in the standard normal table (typically given in statistics textbooks or available online) gives Φ(3.54) ≈ 0.9998, so the two-sided probability is P(|D| < 10) = 2Φ(3.54) − 1 ≈ 0.9996. In other words, the probability that the two scores are within 10 points of each other is very high, approaching 100%.
Therefore, P(|D| < 10) ≈ 1, i.e., essentially 100%.
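For reference, here is a short SciPy sketch that computes the two-sided probability exactly from the standard normal CDF (the script and its variable names are our own illustration, not part of the original solution):

```python
from math import sqrt
from scipy.stats import norm

sigma = 2.0
sd_D = sqrt(2) * sigma   # 2*sqrt(2) ≈ 2.828

z = 10 / sd_D            # ≈ 3.5355
p = 2 * norm.cdf(z) - 1  # P(|D| < 10) = Φ(z) − Φ(−z) = 2Φ(z) − 1
print(z, p)              # ≈ 3.5355, ≈ 0.99959
```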