Final answer:
AIC is a statistical measure used to compare candidate models by how well they balance goodness of fit against complexity. Grounded in information theory, it estimates the relative information lost when a model is used to approximate the unknown true process, and so embodies the principle of parsimony. BIC is an alternative criterion whose penalty on complexity grows with sample size, so it tends to favor simpler models as samples get larger.
Step-by-step explanation:
The Akaike Information Criterion (AIC) is a method used within the field of statistics to evaluate and compare the relative quality of statistical models for a given set of data. It is based on information theory, providing a measure of the relative distance between the estimated model and the unknown truth. More specifically, it weighs the complexity of the model (its number of parameters) against its goodness of fit to the data. AIC is calculated using the formula AIC = 2k - 2ln(L), where k is the number of estimated parameters and L is the maximized value of the model's likelihood function.
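To make the formula concrete, here is a minimal sketch in plain Python: a Gaussian model is fitted by maximum likelihood to a small toy sample, and AIC is computed from the resulting log-likelihood. The data values and the helper name `aic` are illustrative, not from the original answer.

```python
import math

def aic(log_likelihood, k):
    """Akaike Information Criterion: AIC = 2k - 2 ln(L)."""
    return 2 * k - 2 * log_likelihood

# Toy sample (illustrative numbers only).
data = [2.1, 2.5, 1.9, 2.3, 2.8, 2.0]
n = len(data)

# MLEs for a Gaussian: sample mean and (biased) sample variance.
mu = sum(data) / n
var = sum((x - mu) ** 2 for x in data) / n

# Maximized log-likelihood ln(L) of the fitted Gaussian.
log_l = sum(
    -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
    for x in data
)

# k = 2 estimated parameters (mean and variance).
print(aic(log_l, k=2))
```

Note that the formula uses ln(L), the natural log of the maximized likelihood, which is what MLE fitting routines typically report directly.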
One important property of AIC is its reflection of the principle of parsimony, favoring models that offer a good balance between simplicity and fit. When the sample size is small relative to the number of parameters, the corrected variant AICc adds an extra penalty term; for overdispersed data, the quasi-likelihood variant QAIC is used instead. The Bayesian Information Criterion (BIC) is another model comparison criterion; because its complexity penalty grows with the logarithm of the sample size, it increasingly favors the simplest model that adequately explains the data as samples grow.
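The small-sample correction and the BIC penalty can be sketched as follows. The log-likelihood values for the two candidate models below are made-up illustrative numbers, not fits to real data; the helper names are assumptions.

```python
import math

def aic(log_l, k):
    """AIC = 2k - 2 ln(L)."""
    return 2 * k - 2 * log_l

def aicc(log_l, k, n):
    """Small-sample corrected AIC: AIC + 2k(k+1) / (n - k - 1)."""
    return aic(log_l, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(log_l, k, n):
    """BIC = k ln(n) - 2 ln(L); penalty grows with sample size n."""
    return k * math.log(n) - 2 * log_l

# Two hypothetical candidate models fitted to the same n = 20 observations.
n = 20
candidates = [
    ("simple",  -34.2, 2),   # (name, ln(L), number of parameters)
    ("complex", -31.8, 5),
]

for name, log_l, k in candidates:
    print(name, round(aicc(log_l, k, n), 2), round(bic(log_l, k, n), 2))
```

With either criterion, the model with the lowest value is preferred; the complex model must improve ln(L) by enough to overcome its larger parameter penalty.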