Abstract
People tend to be hesitant toward algorithmic tools, and this aversion can hinder the effective implementation of innovations in artificial intelligence (AI). Explanatory mechanisms for aversion are based on individual or structural issues but often lack reflection on real-world contexts. Our study addresses this gap through a mixed-method approach, analyzing seven cases of AI deployment and their public reception on social media and in news articles. Using the Contextual Integrity framework, we argue that it is most often not the AI technology itself that is perceived as problematic; rather, processes related to transparency, consent, and individuals' lack of influence raise aversion. Future research into aversion that aims to understand public perceptions of AI innovation should acknowledge that technologies cannot be extricated from their contexts.
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 609-633 |
| Number of pages | 25 |
| Journal | International Journal of Communication |
| Volume | 18 |
| Publication status | Published - 2024 |
Bibliographical note
Publisher Copyright: © 2024 (Tessa Oomen, João Gonçalves, and Anouk Mols). Licensed under the Creative Commons Attribution Non-commercial No Derivatives (by-nc-nd). Available at http://ijoc.org. All Rights Reserved.
Research programs
- ESHCC M&C
Erasmus Sectorplan
- Sector plan SSH-Breed