Contrast the use of mad and mse in evaluating forecasts.

1 Answer


Final answer:

MAD and MSE are two commonly used error measures for evaluating forecasts. MAD is the average absolute difference between forecast and actual values, while MSE is the average of the squared differences, so it penalizes large errors more heavily.

Step-by-step explanation:

When evaluating forecasts, two commonly used error measures are MAD (Mean Absolute Deviation) and MSE (Mean Squared Error).

MAD measures the average amount by which each forecast value differs from the actual value, without regard to the direction of the difference. It is calculated by taking the absolute difference between each forecast value and the corresponding actual value, summing these differences, and then dividing by the number of observations. Because the differences are not squared, MAD is expressed in the same units as the data and is straightforward to interpret.
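Written as a formula, with $A_t$ the actual value, $F_t$ the forecast for period $t$, and $n$ the number of observations (notation assumed here for illustration, not part of the original answer):

$$\text{MAD} = \frac{1}{n}\sum_{t=1}^{n} \lvert A_t - F_t \rvert$$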

MSE, on the other hand, measures the average of the squared differences between each forecast value and the corresponding actual value. Because the differences are squared, the sign of each error is lost, so MSE, like MAD, does not indicate the direction of the error; what squaring does is weight large errors much more heavily, which makes MSE more sensitive to outliers and occasional big misses. MSE is calculated by squaring the difference between each forecast value and the corresponding actual value, summing these squared differences, and then dividing by the number of observations. Note that MSE is expressed in squared units of the data, whereas MAD stays in the original units.
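Using the same notation, $\text{MSE} = \frac{1}{n}\sum_{t=1}^{n} (A_t - F_t)^2$. The following Python sketch (with made-up numbers, purely for illustration) shows how a single large miss inflates MSE far more than MAD:

```python
# Minimal sketch comparing MAD and MSE on hypothetical data.
# All values below are invented for illustration only.

def mad(actual, forecast):
    """Mean Absolute Deviation: average of |actual - forecast|."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    """Mean Squared Error: average of (actual - forecast)^2."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

actual   = [100, 110, 120, 130]
forecast = [105, 108, 115, 150]   # the last forecast misses by 20

print(mad(actual, forecast))   # (5 + 2 + 5 + 20) / 4 = 8.0
print(mse(actual, forecast))   # (25 + 4 + 25 + 400) / 4 = 113.5
```

The one large error contributes 20 of the 32 absolute-error total in MAD but 400 of the 454 squared-error total in MSE, which is why MSE is often preferred when large forecast errors are especially costly.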

by Joel Sullivan (8.2k points)