Our team, led by Gary Klein, is exploring the opportunities and challenges presented by the ever-increasing presence of artificial intelligence (AI) systems in everyday work. Gary recently wrote about these efforts in his Psychology Today blog.
Confusion can arise when intelligent systems cannot explain or justify their recommendations in ways that are easily understandable. Our goal with this effort is to develop a suite of tools that different stakeholders (such as developers, end users, trainers, or policymakers) can use to increase understanding between these complex systems and the humans who interact with them. Here are some of the tools we are interested in exploring:
- Cognitive Tutorial. An up-front tutorial that gives users a better mental model of the AI system they are working with.
- Explainability Scales. We have developed and validated a number of measurement scales that can be used to evaluate the explainability of an AI system.
- Self-Explaining Scorecard. This scorecard is an ordinal scale for gauging the power and sophistication of the techniques an AI system uses to explain itself.
We look forward to sharing future innovations in the emerging field of explainable AI.