ShadowBox Newsletter: Summer 2019
ShadowBox has been staying busy! Read on to see what we’ve been up to this summer!
The 14th International Naturalistic Decision Making Conference
In June, ShadowBox hosted the 14th International Naturalistic Decision Making Conference (NDM14) in San Francisco, CA, with the help of the wonderful NDM advisory committee! The event included a successful Doctoral Consortium, four riveting pre-conference workshops, and a variety of engaging talks, posters, and panels! If you missed this exciting NDM meeting, we hope to see you in 2021 for NDM15! See our inspiring keynote speakers in Figure 1.
Figure 1: Pictured to the right are our three keynote speakers from NDM14. From left to right: Ben Shneiderman, Mica Endsley, and Wendy Jephson. This photo was taken in June 2019 at the Marines' Memorial Club & Hotel in San Francisco, CA, USA. Photo Credit: Ben Shneiderman.
ShadowBox Project Updates: Endings and Beginnings!
Annie E. Casey Foundation Project Moves Forward
We are excited to partner with the Annie E. Casey Foundation for a fifth year. Deliverables for our new grant will include a ShadowBox implementation guide for child welfare agencies as well as six additional ShadowBox scenarios.
We have just finalized a new software tool that allows small groups to complete scenarios in real-time, facilitated sessions. Features include real-time polling, graphic displays of decision responses, and the ability to keep the group together by having the facilitator "unlock" the next screen.
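The facilitator-controlled "unlock" idea can be illustrated with a minimal sketch. This is not ShadowBox's actual implementation; the class and method names below are hypothetical, and the sketch only shows the gating logic that keeps every participant on or behind the screen the facilitator has opened.

```python
class FacilitatedSession:
    """Hypothetical sketch of facilitator-gated progress through scenario screens."""

    def __init__(self, num_screens):
        self.num_screens = num_screens
        self.unlocked = 1    # only screen 1 is open when the session starts
        self.positions = {}  # participant id -> current screen number

    def join(self, participant):
        """Add a participant at the first screen."""
        self.positions[participant] = 1

    def unlock_next(self):
        """Facilitator opens the next screen for the whole group."""
        self.unlocked = min(self.unlocked + 1, self.num_screens)

    def advance(self, participant):
        """A participant may move forward only onto an unlocked screen."""
        current = self.positions[participant]
        if current < self.unlocked:
            self.positions[participant] = current + 1
            return True
        return False  # blocked until the facilitator unlocks the next screen
```

Because `advance` checks against the facilitator's `unlocked` counter, no one can race ahead of the group, which is the behavior described above.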
Cognitive After-Action Review Guide for Observers (CAARGO) Updates
ShadowBox worked in collaboration with the Center for Operator Performance to design an innovative cognitive reference for trainers: the Cognitive After-Action Review Guide for Observers (CAARGO). The goal was to design modules that target different dimensions of training and can be used by trainers with different responsibilities and skill levels. CAARGO (see Figure 2 for a wallet-sized version) was designed to have a minimal footprint and to transform ordinary activities into training opportunities. This guide consists of four parts:
The first part describes different kinds of mindsets trainers possess and includes a pre-test so trainers can see where they stand.
The second part presents cognitive traps that trainees face, largely confusions that may interfere with an operator's troubleshooting. It includes examples of questions that trainers can use to spot when trainees are struggling, help them realize mistakes, and correct and strengthen mental models.
The third part provides ways to dig deeper: to diagnose the reasons for confusions, to offer more effective hints, and to notice the strengths and skills that operators are demonstrating.
The fourth part offers tips for analyzing incidents and near misses. This part also describes how to conduct fast and efficient interviews, which focus not just on what happened but why.
Introducing these concepts can be challenging. Therefore, we are continuing this effort with the COP to develop a way to onboard CAARGO within a company and sustain these concepts over time. We will do this primarily by developing a training video illustrating a trainer employing CAARGO methods. The video can be used to 1) introduce CAARGO and onboard new trainers and 2) provide exercises for trainers to test their knowledge of CAARGO concepts and practice using these skills.
Figure 2. The wallet-sized version of CAARGO.
Other Exciting ShadowBox News: Methodology, Diagnosis Errors, and Explainable AI
ShadowBox “Branches” Out
In our effort to improve and expand our training methodology, we at ShadowBox continue to explore new scenario structures. One such structure is the “branching” scenario, which is designed to allow trainees to explore different paths through a situation.
"I got the idea from the Choose-Your-Own-Adventure books I used to read as a kid," explains John Schmitt, who is heading up the effort.
In a branching scenario, when a trainee gets to a decision point, he or she picks an option, as always, but in this case each decision leads to a different outcome and a new decision point. Trainees learn not only from expert feedback, as in the standard ShadowBox scenario, but also from the consequences of their decisions. They will have the opportunity to follow multiple paths through a situation, reaching different final results, some good and some not so good. The expectation is that this opportunity for multiple excursions through a single problem space will deepen learning.
“Learning is better when it is engaging,” Schmitt says, “and I think the opportunity to see how well you do will make the experience more engaging.”
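The branching structure described above amounts to a tree of decision points, where each option leads to a consequence and a new decision point or a final outcome. The following is only an illustrative sketch with hypothetical names and a made-up toy scenario, not ShadowBox's scenario format:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DecisionPoint:
    """One node in a branching scenario: a situation plus the options it offers."""
    situation: str
    options: dict = field(default_factory=dict)  # choice label -> next DecisionPoint
    outcome: Optional[str] = None                # set only on terminal nodes

def walk(node, choices):
    """Follow a sequence of choices from the root; return the situations seen and the outcome."""
    path = [node.situation]
    for choice in choices:
        node = node.options[choice]
        path.append(node.situation)
    return path, node.outcome

# Toy two-branch scenario: one decision point, two different consequences.
contained = DecisionPoint("The fire is contained", outcome="good")
spreads = DecisionPoint("The fire spreads to deck 3", outcome="not so good")
root = DecisionPoint("Smoke reported on deck 2",
                     {"investigate now": contained, "finish the watch first": spreads})
```

Because each option points to its own subtree, a trainee can replay the scenario along different paths and compare outcomes, which is the "multiple excursions through a single problem space" idea.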
Diagnosis Errors
Anytime people have to size up a situation, there is potential for diagnosis errors. Healthcare is a prime example: researchers have found that diagnosis errors contribute to approximately 10% of patient deaths in hospitals. One factor behind diagnosis errors is the tendency to hold on to an incorrect initial diagnosis and to dismiss contrary evidence. Gary Klein posted two essays on this topic on his Psychology Today blog. The first essay, "The curious case of confirmation bias" (May 5, 2019), explained why the common explanation, confirmation bias, was not tenable and should be discarded. The second essay, "Escaping from fixation" (June 11, 2019), showed why fixation was a better account than confirmation bias and offered several suggestions for reducing diagnosis errors.
Explainable Artificial Intelligence (XAI)
DARPA (the Defense Advanced Research Projects Agency) initiated a program on XAI in 2017 to address a growing problem: new forms of AI involving machine learning are so inscrutable that users are reluctant to take advantage of the technology. We are part of a team led by Robert Hoffman at the Institute for Human and Machine Cognition, and including Shane Mueller at Michigan Technological University, formulating naturalistic models of the process of explaining in general, and of explaining the workings of technical systems in particular. We have generated models of local explaining (why a system acted in a certain way) and global explaining (how a system works), and we are exploring what has to happen for self-explaining, in which users rely on indicators to arrive at their own local and global understanding.
The Dayton Crew:
ShadowBox's Dayton office is accepting applications for a new research assistant; interested applicants should email a CV/resume and cover letter to firstname.lastname@example.org. Summer has been a busy season for ShadowBox. Stay tuned to see what Fall has in store for us!