Final answer:
Z-scores represent how many standard deviations an observation is from the mean of a data set. They standardize different data sets for comparison and are calculated using the formula z = (x - μ) / σ.
Step-by-step explanation:
Z-scores measure an observation's standing relative to the mean of its data set: a z-score is a standardized value that tells you how many standard deviations an observation x lies from the mean (μ).
For instance, if a student's z-score on an exam is 2.0, the student scored two standard deviations above the class mean. Because z-scores remove the original units, they provide a common scale for comparing observations from data sets with different means and standard deviations, such as weight gains in two groups that follow different normal distributions. To calculate a z-score, use the formula z = (x - μ) / σ, where x is the observation, μ is the mean of the data set, and σ is its standard deviation.
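The formula above can be sketched in a few lines of Python. The exam numbers here are hypothetical, chosen only to reproduce the z = 2.0 example from the text:

```python
# Z-score: how many standard deviations an observation x lies from the mean.
mu = 70.0      # hypothetical class mean
sigma = 5.0    # hypothetical class standard deviation
x = 80.0       # a student's score

z = (x - mu) / sigma
print(z)  # 2.0 -> two standard deviations above the mean
```

Any other choice of mean and standard deviation works the same way; only the units change, while z stays dimensionless.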
Importantly, positive z-scores indicate values above the mean, negative z-scores indicate values below the mean, and a z-score of zero means the value equals the mean. Moreover, knowing the z-score lets you use the Standard Normal Distribution table to find probabilities associated with observations in a normally distributed data set.
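The table lookup described above can also be done in code. As a sketch, Python's standard-library `statistics.NormalDist` plays the role of the Standard Normal Distribution table:

```python
from statistics import NormalDist

# Standard normal distribution (mean 0, standard deviation 1).
std_normal = NormalDist(mu=0, sigma=1)

z = 2.0
p_below = std_normal.cdf(z)  # probability of an observation falling below z
print(round(p_below, 4))     # 0.9772, matching the z-table entry for z = 2.00
```

So a z-score of 2.0 corresponds to roughly the 97.7th percentile: about 97.7% of observations in a normal distribution fall below a value two standard deviations above the mean.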