To assess the accuracy of a laboratory scale, a standard weight that is known to weigh 1 gram is repeatedly weighed a total of n times and the mean, x-bar, of the results is computed. Suppose the scale readings are Normally distributed with unknown mean µ and a standard deviation of σ = 0.01 grams. How large should n be so that a 95% confidence interval for µ has a margin of error of ±0.0001?

1 Answer

Final answer:

To achieve a margin of error of ±0.0001 grams for a 95% confidence interval when σ = 0.01 grams, the required sample size is n = 38,416.

Step-by-step explanation:

To determine the sample size n required so that a 95% confidence interval for the mean μ has a margin of error of ±0.0001 when the population standard deviation σ is 0.01, we use the formula for the margin of error E in a normal distribution:

E = z*(σ/√n)

For a 95% confidence interval, the critical value z that captures the middle 95% of the standard Normal distribution (leaving 2.5% in each tail) is 1.96, from standard Normal tables. Substituting the desired margin of error and solving for n gives:

0.0001 = 1.96*(0.01/√n)

Isolating √n first:

√n = (1.96 * 0.01) / 0.0001 = 1.96 / 0.01 = 196

Squaring both sides:

n = 196² = 38,416

Therefore, the required sample size n is 38,416.
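As a quick numerical check, here is a minimal Python sketch of the same computation (the variable names are illustrative, and z = 1.96 is the rounded table value used above):

```python
sigma = 0.01   # scale reading standard deviation (grams)
E = 0.0001     # target margin of error (grams)
z = 1.96       # critical value for a 95% confidence interval

# From E = z * sigma / sqrt(n), solve for n: n = (z * sigma / E)^2
n = (z * sigma / E) ** 2
print(round(n))  # 38416
```

Note that in general n should be rounded up to the next whole number, since a fractional number of weighings is impossible; with the rounded value z = 1.96 the formula here lands exactly on 38,416.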
