Explainable artificial intelligence in air traffic control: effects of expertise on workload, acceptance, and usage intentions
Cartocci, Giulia;
2026-01-01
Abstract
Explainability is crucial for establishing user trust in Artificial Intelligence (AI), particularly within safety-critical domains such as Air Traffic Management (ATM) and Air Traffic Control (ATC). This study empirically investigates the effects of Explainable AI (XAI), specifically HeatMap-based visual explanations, on cognitive workload, user acceptance, and intention to use AI-driven decision-support systems among Air Traffic Control Officers (ATCOs). Despite significant theoretical advancements in the broader XAI domain, empirical evidence addressing the specific impact of visual explanations on human-AI interactions in safety-critical environments like ATC remains limited. To address these critical gaps, an experimental comparison was conducted between explainable (HeatMap) and non-explainable (BlackBox) AI conditions, involving two user groups: expert and student ATCOs. Both objective neurophysiological measures (Electroencephalography) and subjective questionnaires were employed to capture comprehensive user responses. Key findings revealed that the presence of visual explanations significantly reduced cognitive workload and enhanced users’ willingness to adopt the AI system, regardless of participants’ level of expertise. However, explicit perceptions of AI’s impact on work performance were predominantly influenced by expertise, with less experienced controllers reporting a greater perceived impact than their expert counterparts. By combining objective neurometrics with subjective user assessments, this research advances methodological rigor in evaluating human-AI interactions and highlights the importance of tailored, user-centric explanations. These findings directly contribute to practical guidelines for designing cognitively compatible and trustworthy AI tools in ATC, providing nuanced insights for targeted training and deployment strategies based on user expertise.
| File | Size | Format |
|---|---|---|
| Cartocci et al Brain Informatics 2026.pdf (open access; pre-print; Creative Commons license) | 6.22 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

