TY - JOUR
T1 - (Re)Conceptualizing trustworthy AI
T2 - A foundation for change
AU - Wirz, Christopher D.
AU - Demuth, Julie L.
AU - Bostrom, Ann
AU - Cains, Mariana G.
AU - Ebert-Uphoff, Imme
AU - Gagne, David John
AU - Schumacher, Andrea
AU - McGovern, Amy
AU - Madlambayan, Deianna
N1 - Publisher Copyright:
© 2025 The Author(s)
PY - 2025/5
Y1 - 2025/5
N2 - Developers and academics have grown increasingly interested in developing “trustworthy” artificial intelligence (AI). However, this aim is difficult to achieve in practice, especially given trust and trustworthiness are complex, multifaceted concepts that cannot be completely guaranteed nor built entirely into an AI system. We have drawn on the breadth of trust-related literature across multiple disciplines and fields to synthesize knowledge pertaining to interpersonal trust, trust in automation, and risk and trust. Based on this review we have (re)conceptualized trustworthiness in practice as being both (a) perceptual, meaning that a user assesses whether, when, and to what extent AI model output is trustworthy, even if it has been developed in adherence to AI trustworthiness standards, and (b) context-dependent, meaning that a user's perceived trustworthiness and use of an AI model can vary based on the specifics of their situation (e.g., time-pressures for decision-making, high-stakes decisions). We provide our reconceptualization to nuance how trustworthiness is thought about, studied, and evaluated by the AI community in ways that are more aligned with past theoretical research.
KW - Artificial intelligence
KW - Interdisciplinary
KW - Machine learning (ML)
KW - Trust
KW - Trustworthy AI
UR - https://www.scopus.com/pages/publications/85218908358
U2 - 10.1016/j.artint.2025.104309
DO - 10.1016/j.artint.2025.104309
M3 - Review article
AN - SCOPUS:85218908358
SN - 0004-3702
VL - 342
JO - Artificial Intelligence
JF - Artificial Intelligence
M1 - 104309
ER -