To assess the accuracy of a laboratory scale, a standard weight known to weigh 1 gram is weighed repeatedly, a total of n times. How large should n be so that a 95% confidence interval for µ has a margin of error of ±0.0001?

1 Answer

Answer:


n = \left(\frac{1.960(1)}{0.0001}\right)^2 = 384160000

So the answer for this case is n = 384160000 (a fractional result would be rounded up to the nearest integer).

Explanation:

We know the following information:


ME = 0.0001 represents the desired margin of error


\sigma = 1: we assume the population standard deviation takes this value

The margin of error is given by this formula:


ME = z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}} \quad (a)

In this case ME = 0.0001, and we want to find the value of n. Solving equation (a) for n gives:


n = \left(\frac{z_{\alpha/2}\,\sigma}{ME}\right)^2 \quad (b)

The critical value for a 95% confidence interval can be found using the standard normal distribution (from a table or Excel): z_{\alpha/2} = 1.960. Replacing into formula (b) we get:


n = \left(\frac{1.960(1)}{0.0001}\right)^2 = 384160000

So the answer for this case is n = 384160000 (a fractional result would be rounded up to the nearest integer).
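
As a quick check, here is a minimal Python sketch of the same calculation, assuming \sigma = 1 and ME = 0.0001 as stated above (scipy is used only to look up the critical value; the variable names are illustrative):

```python
import math
from scipy.stats import norm

confidence = 0.95
sigma = 1.0   # assumed population standard deviation (grams)
me = 0.0001   # desired margin of error (grams)

alpha = 1 - confidence
z = round(norm.ppf(1 - alpha / 2), 3)  # two-sided critical value, 1.960

# Solve ME = z * sigma / sqrt(n) for n and round up to an integer
n = math.ceil((z * sigma / me) ** 2)
print(z, n)  # 1.96 384160000
```

Note that the answer above uses z_{\alpha/2} rounded to 1.960; keeping the unrounded critical value (about 1.95996) would give a slightly smaller required n, roughly 3.8415 × 10^8.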
