(Re)Conceptualizing trustworthy AI: A foundation for change

Christopher D. Wirz, Julie L. Demuth, Ann Bostrom, Mariana G. Cains, Imme Ebert-Uphoff, David John Gagne, Andrea Schumacher, Amy McGovern, Deianna Madlambayan

Research output: Contribution to journal › Review article › peer-review

2 Scopus citations

Abstract

Developers and academics have grown increasingly interested in developing “trustworthy” artificial intelligence (AI). However, this aim is difficult to achieve in practice, especially given that trust and trustworthiness are complex, multifaceted concepts that can be neither completely guaranteed nor built entirely into an AI system. We have drawn on the breadth of trust-related literature across multiple disciplines and fields to synthesize knowledge pertaining to interpersonal trust, trust in automation, and risk and trust. Based on this review, we have (re)conceptualized trustworthiness in practice as being both (a) perceptual, meaning that a user assesses whether, when, and to what extent AI model output is trustworthy, even if it has been developed in adherence to AI trustworthiness standards, and (b) context-dependent, meaning that a user's perceived trustworthiness and use of an AI model can vary based on the specifics of their situation (e.g., time pressures for decision-making, high-stakes decisions). We offer our reconceptualization to add nuance to how trustworthiness is thought about, studied, and evaluated by the AI community, in ways that are more aligned with past theoretical research.

Original language: English
Article number: 104309
Journal: Artificial Intelligence
Volume: 342
DOIs
State: Published - May 2025

Keywords

  • Artificial intelligence
  • Interdisciplinary
  • Machine learning (ML)
  • Trust
  • Trustworthy AI
