The admissions officer at a small college compares the scores on the Scholastic Aptitude Test (SAT) for the school's in-state and out-of-state applicants. A random sample of 17 in-state applicants results in a SAT scoring mean of 1046 with a standard deviation of 37. A random sample of 10 out-of-state applicants results in a SAT scoring mean of 1118 with a standard deviation of 50. Using this data, find the 90% confidence interval for the true mean difference between the scoring mean for in-state applicants and out-of-state applicants. Assume that the population variances are not equal and that the two populations are normally distributed. Step 2 of 3 : Find the margin of error to be used in constructing the confidence interval. Round your answer to six decimal places.

asked by Shouvik

1 Answer


Answer: margin of error ≈ 32.015804

Explanation:

The formula for a confidence interval for the difference of two population means (unequal variances) is

Confidence interval = (x1 - x2) ± t√(s1²/n1 + s2²/n2)

Where

x1 = sample mean score of in-state applicants

x2 = sample mean score of out-of-state applicants

s1 = sample standard deviation for in-state applicants

s2 = sample standard deviation for out-of-state applicants

n1 = number of in-state applicants

n2 = number of out-of-state applicants

For a 90% confidence interval, the critical value comes from the t distribution table because the sample sizes are small. Since the population variances are not assumed equal, the degrees of freedom come from the Welch–Satterthwaite approximation rather than from (n1 - 1) + (n2 - 1), which applies only to the pooled (equal-variance) procedure.

Degrees of freedom (Welch–Satterthwaite) =

(s1²/n1 + s2²/n2)² / [(s1²/n1)²/(n1 - 1) + (s2²/n2)²/(n2 - 1)]

= (37²/17 + 50²/10)² / [(37²/17)²/16 + (50²/10)²/9]

= 330.529412² / (405.311635 + 6944.444444) ≈ 14.86, rounded down to 14

t = 1.761

x1 - x2 = 1046 - 1118 = -72

Margin of error = t√(s1²/n1 + s2²/n2) = 1.761 × √(37²/17 + 50²/10) = 1.761 × √330.529412 = 1.761 × 18.180468 ≈ 32.015804

The confidence interval is -72 ± 32.015804, that is, (-104.015804, -39.984196).
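As a sanity check, here is a minimal Python sketch of the same computation using SciPy. The sample statistics come straight from the problem statement; note that stats.t.ppf returns the exact critical value (about 1.7613) rather than the three-decimal table value 1.761 used above, so the printed margin differs from the hand computation in the later decimal places.

```python
from scipy import stats

# Sample statistics taken from the problem statement
n1, s1, xbar1 = 17, 37, 1046    # in-state applicants
n2, s2, xbar2 = 10, 50, 1118    # out-of-state applicants

# Unpooled standard error of the difference in sample means
se = (s1**2 / n1 + s2**2 / n2) ** 0.5

# Welch-Satterthwaite degrees of freedom (variances not assumed equal)
df = se**4 / ((s1**2 / n1) ** 2 / (n1 - 1) + (s2**2 / n2) ** 2 / (n2 - 1))

# Critical t value for a 90% two-sided interval; int(df) mimics the
# round-down convention used with printed t tables
t_crit = stats.t.ppf(0.95, int(df))

margin = t_crit * se
diff = xbar1 - xbar2
print(f"df = {df:.4f}, t = {t_crit:.4f}, margin of error = {margin:.6f}")
print(f"90% CI: ({diff - margin:.6f}, {diff + margin:.6f})")
```

Conventions vary slightly between textbooks: some use the conservative choice df = min(n1, n2) - 1 = 9 instead of the Welch–Satterthwaite value, which gives t = 1.833 and a somewhat wider interval.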

answered by Fnst