European Journal of Emerging Artificial Intelligence (ejeai)

Open Access Journal
Publication Frequency: 2 issues per year.

  • Peer Reviewed & International Journal

ARTICLE

LEVERAGING ANALOGIES FOR AI EXPLAINABILITY: ENHANCING LAYPERSON UNDERSTANDING IN AI-ASSISTED DECISION MAKING

1 School of Engineering and Applied Sciences, Harvard University, USA
2 Department of Computer Science, University of California, Berkeley, USA
3 Department of Informatics, Technical University of Munich, Germany


Abstract

The integration of Artificial Intelligence (AI) into critical decision-making processes necessitates transparent and understandable explanations, especially for non-expert users (laypeople). While traditional Explainable AI (XAI) methods often present technical details that remain inscrutable to a lay audience, this article investigates the potential of analogy-based explanations to bridge this knowledge gap. We present a two-part empirical study. Study I focuses on the generation and qualitative assessment of analogy-based explanations using non-expert crowd workers, establishing a systematic framework for evaluating their quality across dimensions such as structural correspondence, relational similarity, and familiarity. Our findings highlight the subjective nature of analogy quality and the potential for leveraging crowdsourcing to generate diverse explanations. Study II evaluates the practical effectiveness of these analogy-based explanations in a high-stakes medical diagnosis task (skin cancer detection). Surprisingly, quantitative results did not show a significant improvement in understanding or appropriate reliance with analogy-based explanations compared to detailed concept-level explanations. However, qualitative feedback revealed that users found analogies helpful when they perceived a strong connection to a familiar source domain and when presented on demand. While explanations, including analogies, increased perceived cognitive load and decision-making time, our comprehensive analysis points to the crucial roles of human intuition and perceived plausibility in shaping user behavior. This research contributes actionable insights for designing human-centered XAI, emphasizing the need for personalized and carefully crafted analogies to truly enhance layperson understanding and foster appropriate reliance in AI-assisted decision-making.


Keywords

Explainable AI (XAI), Analogical Reasoning, Human-AI Collaboration, Layperson Understanding


How to Cite

LEVERAGING ANALOGIES FOR AI EXPLAINABILITY: ENHANCING LAYPERSON UNDERSTANDING IN AI-ASSISTED DECISION MAKING. (2024). European Journal of Emerging Artificial Intelligence, 1(01), 37-53. https://parthenonfrontiers.com/index.php/ejeai/article/view/47
