If a method produces a random error of 4% for each measurement, but a percent error of 1% or less is required for this value in later analysis, what is the minimum number of measurements that must be collected and averaged? You will need to solve equation 1 for the value of n that meets the criterion of a 1% error in the average.


1 Answer


Answer:

We need to take a sample of size n=16 to meet the criterion of a 1% error in the average.

Explanation:

We have to calculate the sample size that reduces the standard deviation of the measurement by a factor of 4 (from 4% down to 1%).

We can express this as a relation between the standard deviation of the sampling distribution and the population standard deviation: the former has to be 4 times smaller than the latter:


\sigma_s = \frac{\sigma}{4}

We know that the standard deviation of the sampling distribution of the mean, for a sample of size n, is:


\sigma_s = \frac{\sigma}{\sqrt{n}}

Then, relating both equations, we have:


\frac{\sigma}{\sqrt{n}} = \frac{\sigma}{4} \implies \sqrt{n} = 4 \implies n = 4^2 = 16

We need to take a sample of size n=16 to meet the criterion of a 1% error in the average.
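This result can be checked numerically. Below is a minimal Monte Carlo sketch (assuming the 4% random error is Gaussian, and using an arbitrary true value of 100 for illustration): it simulates many samples of n = 16 measurements and confirms that the standard deviation of the sample averages is about 1% of the true value.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 100.0            # arbitrary illustrative true value
sigma = 0.04 * true_value     # 4% random error per single measurement
n = 16                        # sample size from the derivation above

# Simulate many samples of n measurements and average each one
trials = 200_000
means = rng.normal(true_value, sigma, size=(trials, n)).mean(axis=1)

# The spread of the averages should be sigma / sqrt(n) = 1% of true_value
empirical_sd = means.std()
print(f"SD of the average: {empirical_sd:.3f} "
      f"(theory: {sigma / np.sqrt(n):.3f})")
```

The empirical standard deviation of the averages comes out close to 1.0, i.e. 1% of the true value, matching \sigma/\sqrt{16} = 4\%/4 = 1\%.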
