Abstract
Algorithmic moderation is often presented as the only solution to the scale of harmful and hateful content disseminated online. This chapter questions that assertion by contextualizing the need for, and scope of, algorithmic moderation in normative considerations. It then outlines a set of critical issues for the study of algorithmic moderation, such as the systemic biases that underlie technical approaches to algorithmic performance and the asymmetries in access to AI data and knowledge. We illustrate these issues with the specific case of hate speech and finally consider some of the differences in how human and algorithmic moderators are perceived. The chapter is structured as a starting point for scholars who wish to venture into the study of algorithmic moderation, its antecedents, and its consequences.
Original language | English |
---|---|
Title of host publication | Handbook of Critical Studies of Artificial Intelligence |
Editors | Simon Lindgren |
Publisher | Edward Elgar Publishing |
Chapter | 46 |
Pages | 528-537 |
Number of pages | 10 |
ISBN (Electronic) | 9781803928562 |
ISBN (Print) | 9781803928555 |
DOIs | |
Publication status | Published - 14 Nov 2023 |
Research programs
- ESHCC M&C

Erasmus Sectorplan
- Sector plan SSH-Breed