Whose AI? How different publics think about AI and its social impacts

  • Luye Bao
  • Nicole M. Krause
  • Mikhaila N. Calice
  • Dietram A. Scheufele
  • Christopher D. Wirz
  • Dominique Brossard
  • Todd P. Newman
  • Michael A. Xenos

Research output: Contribution to journal › Article › peer-review

91 Scopus citations

Abstract

Effective public engagement with complex technologies requires a nuanced understanding of how different audiences make sense of and communicate about disruptive technologies with immense social implications. Using latent class analysis (LCA) on nationally representative survey data (N = 2,700), we examine public attitudes toward different aspects of AI and segment the U.S. population based on AI-related risk and benefit perceptions. Our analysis reveals five segments: the negative, who perceive risks as outweighing benefits; the ambivalent, who see both high risks and high benefits; the tepid, who perceive slightly more benefits than risks; the ambiguous, who perceive moderate risks and benefits; and the indifferent, who perceive low risks and benefits. For societal debates surrounding a deeply disruptive issue like AI, our findings suggest opportunities for engagement by soliciting input from individuals in segments with varying levels of support for AI, thereby widening the representation of voices and supporting responsible innovation of AI.

Original language: English
Article number: 107182
Journal: Computers in Human Behavior
Volume: 130
DOIs
State: Published - May 2022
Externally published: Yes

Keywords

  • Artificial intelligence
  • Benefit perceptions
  • Public opinion
  • Risk perceptions
  • Segmentation analysis
