Examining decision-making in air traffic control: Enhancing transparency and decision support through machine learning, explanation, and visualization. A case study

Cartocci, Giulia
2024-01-01

Abstract

Artificial Intelligence (AI) has recently made significant advancements and is now pervasive across various application domains. This holds true for Air Transportation as well, where AI is increasingly involved in decision-making processes. While these algorithms are designed to assist users in their daily tasks, they still face challenges related to acceptance and trustworthiness. Users often harbor doubts about the decisions proposed by AI, and in some cases, they may even oppose them. This is primarily because AI-generated decisions are often opaque, non-intuitive, and incompatible with human reasoning. Moreover, when AI is deployed in safety-critical contexts like Air Traffic Management (ATM), the individual decisions generated by AI models must be highly reliable for human operators. Understanding the behavior of the model and providing explanations for its results are essential requirements in every life-critical domain. In this scope, this project aimed to enhance transparency and explainability in AI algorithms within the Air Traffic Management domain. This article presents the results of the project's validation, conducted for a Conflict Detection and Resolution task involving 21 air traffic controllers (10 experts and 11 students) in the En-Route position (i.e., high-altitude flight management). Through a controlled study incorporating three levels of explanation, we offer initial insights into the impact of providing additional explanations alongside a conflict resolution algorithm to improve decision-making. At a high level, our findings indicate that providing explanations is not always necessary, and our project sheds light on potential research directions for education and training purposes.
Keywords: air traffic control

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14245/15974