Final answer:
The data set from Test 2 had the smaller standard deviation (about 5.15 versus about 6.93 for Test 1), which indicates that the scores for Test 2 were more tightly grouped around the mean than those for Test 1.
Step-by-step explanation:
The question asks which data set, Test 1 or Test 2, had the smaller standard deviation. To find the (population) standard deviation of each set of test scores, we calculate the mean of the scores, find each score's difference from the mean, square those differences, average the squared differences to get the variance, and finally take the square root of that variance.
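These steps translate directly into code. Below is a minimal Python sketch of the same procedure; the function name population_std_dev is just an illustrative choice:

```python
def population_std_dev(scores):
    """Population standard deviation, following the steps above:
    mean -> deviations -> squared deviations -> average -> square root."""
    n = len(scores)
    mean = sum(scores) / n                              # average of the scores
    squared_diffs = [(x - mean) ** 2 for x in scores]   # squared deviations from the mean
    variance = sum(squared_diffs) / n                   # average of the squared deviations
    return variance ** 0.5                              # square root of the variance
```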
For Test 1: {75, 75, 85, 80, 65, 70, 65}, the scores sum to 515, so the mean is 515/7, approximately 73.57. Squaring each score's difference from the mean and summing gives about 335.71; dividing by the 7 scores yields a variance of about 47.96. The standard deviation is the square root of the variance, approximately 6.93.
For Test 2: {95, 85, 85, 90, 90, 95, 100}, the scores sum to 640, so the mean is 640/7, approximately 91.43. Squaring each score's difference from the mean and summing gives about 185.71; dividing by 7 yields a variance of about 26.53. The standard deviation is the square root of the variance, approximately 5.15.
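As a quick check, Python's standard library reproduces these figures; statistics.pstdev computes exactly this population standard deviation:

```python
import statistics

test1 = [75, 75, 85, 80, 65, 70, 65]
test2 = [95, 85, 85, 90, 90, 95, 100]

print(round(statistics.pstdev(test1), 2))  # 6.93
print(round(statistics.pstdev(test2), 2))  # 5.15
```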
The data set from Test 2 has the smaller standard deviation (about 5.15 compared with about 6.93 for Test 1), indicating that the test scores for Test 2 are more closely clustered around the mean than those for Test 1.