Final answer:
Variability in a sampling distribution is captured by the standard error: the standard deviation of the sampling distribution, computed from the population standard deviation and the sample size.
Step-by-step explanation:
Variability in a sampling distribution is measured with the standard error. The standard error is the standard deviation of the sampling distribution, and it reflects how much a statistic, such as the mean, would vary if you repeatedly drew samples from the same population. For the sample mean it is given by SE = σ/√n, where σ is the population standard deviation and n is the sample size. The range and variance are other measures of spread in a dataset, but they are not used to describe the variability of a sampling distribution. The standard deviation itself measures spread within a sample or a population; when discussing sampling distributions, the standard error is the correct term for the variability among sample means (or other statistics).
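The formula above can be checked with a short simulation. The sketch below (the population parameters σ = 10 and n = 25 are made-up example values) computes SE = σ/√n directly, then draws many samples and compares the spread of their means to the analytic value.

```python
import math
import random

def standard_error(sigma, n):
    """Standard error of the mean: sigma / sqrt(n)."""
    return sigma / math.sqrt(n)

# Hypothetical example: population with sigma = 10, samples of size n = 25.
sigma, n = 10.0, 25
se = standard_error(sigma, n)
print(se)  # 10 / sqrt(25) = 2.0

# Simulation check: draw many samples from a normal population and
# compare the standard deviation of the sample means to SE.
random.seed(42)
sample_means = [
    sum(random.gauss(0, sigma) for _ in range(n)) / n
    for _ in range(5000)
]
grand_mean = sum(sample_means) / len(sample_means)
empirical_se = math.sqrt(
    sum((m - grand_mean) ** 2 for m in sample_means) / len(sample_means)
)
print(empirical_se)  # close to 2.0
```

The simulated spread of the 5,000 sample means lands near the analytic value of 2.0, illustrating why the standard error, not the range or variance of the raw data, is the right measure of a sampling distribution's variability.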