Final answer:
The Mean Square Error (MSE) is calculated as the sum of squared errors (SSE) divided by the sample size, and it represents the average squared deviation per data point. The correct option is the one stating "sum of squared errors divided by sample size."
Step-by-step explanation:
The formula for the Mean Square Error (MSE) is the sum of squared errors divided by the sample size, i.e. MSE = SSE / n. MSE is used in statistics, particularly in regression analysis, to measure the average of the squares of the errors. The squared deviations of the predicted values from the actual values are summed to get the Sum of Squared Errors (SSE), which is then divided by the number of data points.
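The calculation above can be sketched in a few lines of Python; the actual and predicted values here are made up purely for illustration:

```python
# MSE = SSE / n: sum the squared prediction errors, then average them.
actual    = [3.0, 5.0, 7.0, 9.0]   # hypothetical observed values
predicted = [2.5, 5.5, 6.0, 9.5]   # hypothetical model predictions

n = len(actual)
sse = sum((a - p) ** 2 for a, p in zip(actual, predicted))
mse = sse / n  # average squared deviation per data point

print(sse, mse)  # → 1.75 0.4375
```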
There are different contexts where variations of the sum of squares are used. For instance, SSbetween represents the variation among different samples, while SSwithin is the variation within samples that is due to chance. The mean squares, such as MSbetween and MSwithin, are computed by dividing each sum of squares by its respective degrees of freedom.
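As a sketch of those ANOVA mean squares (with made-up sample data), the degrees of freedom are k − 1 between groups and N − k within groups:

```python
# One-way ANOVA mean squares, on three hypothetical samples of size 3.
groups = [[4.0, 5.0, 6.0], [7.0, 8.0, 9.0], [1.0, 2.0, 3.0]]

k = len(groups)                       # number of groups
N = sum(len(g) for g in groups)       # total number of observations
grand_mean = sum(x for g in groups for x in g) / N

# SSbetween: variation of group means around the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# SSwithin: variation of observations around their own group mean
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)     # df = k - 1
ms_within = ss_within / (N - k)       # df = N - k
print(ms_between, ms_within)          # → 27.0 1.0
```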
For the mean square of the residuals, which are the differences between the observed and predicted values, the sum of squared errors (SSE) is calculated first. It is then divided by the degrees of freedom, typically the number of data points minus two (n − 2) in simple linear regression, to estimate the standard deviation of y about the regression line.
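A minimal sketch of that regression case, assuming made-up (x, y) data: fit the least-squares line, sum the squared residuals, and divide by n − 2:

```python
import math

# Hypothetical data for a simple linear regression.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Least-squares slope and intercept.
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x

# SSE from the residuals, then divide by n - 2 degrees of freedom.
sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
mse_resid = sse / (n - 2)
s_est = math.sqrt(mse_resid)  # estimated std. deviation of y about the line
```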
The correct answer to the student's question is therefore the option "sum of squared errors divided by sample size," which reflects the average squared deviation per data point and gauges the model's prediction error.