Final answer:
Explainable AI (XAI) is the practice of making AI decision-making understandable to humans. It focuses on creating transparent AI systems and involves various techniques to make black box models interpretable. Therefore the correct answer is Option 3.
Step-by-step explanation:
The effort to turn black box AI models into models whose decisions are easier to understand is known as Explainable AI (XAI). XAI is an emerging field in computer science that emphasizes designing AI systems so that humans can understand the decisions those systems make, by making the decision-making process transparent and explainable.
XAI may involve building models with inherently interpretable structures, or designing post-hoc processes that elucidate the internal decision-making of more complex models such as deep neural networks. For example, a decision tree is inherently more interpretable than a black box neural network, because its prediction can be read off as a sequence of explicit if-then rules.
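A minimal sketch of the decision-tree point, using scikit-learn's `DecisionTreeClassifier` and `export_text` on the Iris dataset (the dataset and depth limit are illustrative choices, not from the original answer). The exported rules show how the model's decisions can be inspected directly, which is not possible with a black box network:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Load a small, well-known dataset for illustration.
iris = load_iris()
X, y = iris.data, iris.target

# A shallow tree keeps the learned rules short and human-readable.
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the tree as explicit if-then threshold rules,
# e.g. "|--- petal width (cm) <= 0.80".
rules = export_text(clf, feature_names=iris.feature_names)
print(rules)
```

Every prediction the tree makes corresponds to one root-to-leaf path in the printed rules, so a human can verify exactly why a given flower was classified the way it was.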
Complementary efforts, such as integrating a human in the loop or establishing ethics certification programs for AI designers, can help create AI systems whose workings can be scrutinized and understood by humans, adhering to both transparency and ethical standards.