We are pleased to announce that JProf. Mario Nadj's research group has published an innovative article in the journal ACM Transactions on Interactive Intelligent Systems, entitled "Does this Explanation Help? Designing Local Model-agnostic Explanation Representations and an Experimental Evaluation using Eye-tracking Technology".
In research on Explainable Artificial Intelligence (XAI), various methods have been proposed to explain predictions to users and to increase the transparency of the underlying artificial intelligence (AI) systems. However, the user perspective has received less attention in XAI research, resulting in (1) insufficient user involvement in the design process and (2) limited understanding of how users visually perceive such explanations. With this in mind, we further developed representations of local explanations from four established model-agnostic XAI methods in an iterative design process with users. We then evaluated these representations in a laboratory experiment using eye-tracking technology as well as self-reports and interviews. Our results show, among other things, that users do not necessarily prefer simple explanations and that their individual characteristics, such as gender and previous experience with AI systems, greatly influence their preferences. In addition, users find that some explanations are only useful in certain scenarios, making the selection of an appropriate explanation highly dependent on context. Our work contributes to ongoing AI research on improving transparency.