Final answer:
The forecast error metric commonly used for comparability of forecast errors across different product lines is the Mean Absolute Percentage Error (MAPE). Because it expresses each error as a percentage of the actual value, it is scale-independent and can be compared directly across product lines of very different sales volumes.
Step-by-step explanation:
A closely related metric is MAD (Mean Absolute Deviation), the average absolute difference between forecasted and actual values: sum the absolute errors and divide by the number of observations. MAD is easy to calculate and interpret, which makes it popular for tracking the accuracy of a single forecast over time.
However, MAD is expressed in the units of the data. A MAD of 50 units means something very different for a product that sells 100 units a week than for one that sells 10,000, so MAD alone does not allow a fair comparison across product lines with different scales.
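The MAD calculation described above can be sketched as a short Python function (the sales figures are made-up illustrative data):

```python
def mad(actual, forecast):
    """Mean Absolute Deviation: average of |actual - forecast|."""
    errors = [abs(a - f) for a, f in zip(actual, forecast)]
    return sum(errors) / len(errors)

# Hypothetical weekly sales (actual vs. forecast) for one product line.
actual = [100, 110, 120, 130]
forecast = [98, 115, 118, 135]
print(mad(actual, forecast))  # absolute errors 2, 5, 2, 5 -> 3.5
```

Note that the result, 3.5, is in units sold, so it is only meaningful relative to this product's sales volume.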
For comparing forecast accuracy across different product lines, the standard choice is therefore the Mean Absolute Percentage Error (MAPE).
MAPE is favored for its simplicity and because it expresses errors as percentages, making it straightforward to compare performance across diverse product lines and scales of operation.
MAPE is calculated by taking the average of the absolute values of the individual percentage errors (each absolute error divided by the actual value, usually multiplied by 100). One caveat: MAPE is undefined when an actual value is zero and can be inflated by very small actual values, so it works best when demand is consistently well above zero.