Final answer:
When the Shapley value is used to show the contribution of each feature to a prediction, it supports Local interpretability. This approach explains the rationale behind individual predictions, which is important for accountability in sensitive domains.
Step-by-step explanation:
When a Shapley value shows how much each feature contributed to predicting the label, it supports Local interpretability. Local interpretability refers to explaining an individual prediction made by a machine learning model. The Shapley value is a concept from cooperative game theory that allocates the total payoff among players according to each player's contribution. Translated to machine learning, the features of an instance play the role of the players, and the method attributes the model's prediction for that instance to its features, offering insight into the reasons behind that specific prediction.
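To make the game-theoretic idea concrete, here is a minimal sketch that computes exact Shapley values for a single instance by enumerating all feature coalitions. The model, instance, and baseline values below are illustrative assumptions (practical libraries such as SHAP approximate this computation efficiently rather than enumerating every subset):

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance: features absent from a
    coalition are replaced by baseline values (a common convention for
    'missing' features)."""
    n = len(x)

    def coalition_pred(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return predict(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight for a coalition of this size
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                # Marginal contribution of feature i to coalition S
                phi[i] += w * (coalition_pred(set(S) | {i})
                               - coalition_pred(set(S)))
    return phi

# Hypothetical linear model: prediction = 2*x0 + 3*x1 + 1
model = lambda z: 2 * z[0] + 3 * z[1] + 1
x = [4.0, 2.0]          # instance whose prediction we explain
baseline = [0.0, 0.0]   # reference ("average") input
phi = shapley_values(model, x, baseline)
# For a linear model each feature's Shapley value is its weight times
# its deviation from the baseline: phi == [8.0, 6.0]
```

Note that the attributions sum to the difference between the prediction for this instance (15) and the baseline prediction (1), which is the "efficiency" property that makes Shapley values a faithful local explanation.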
In contrast, Global interpretability provides an overall understanding of the model's behavior, rather than explanations for individual predictions. By explaining individual predictions, the Shapley value helps diagnose and justify particular decisions, which is crucial in domains that require accountability, such as finance or healthcare.