Algorithmic moderation: Contexts, perceptions, and misconceptions

João Gonçalves*, Ina Weber

*Corresponding author for this work

Research output: Chapter in Book/Conference proceeding › Chapter › Academic


Abstract

Algorithmic moderation is often presented as the only solution to the scale of harmful and hateful content disseminated online. This chapter questions this assertion by contextualizing the need for and scope of algorithmic moderation based on normative considerations, followed by a set of critical issues to be addressed in the study of algorithmic moderation, such as the systemic biases that underlie technical approaches to algorithmic performance and asymmetries in access to AI data and knowledge. We then illustrate these issues with the specific case of hate speech and finally consider some of the differences in how human and algorithmic moderators are perceived. This chapter is structured as a starting point for scholars who wish to venture into the study of algorithmic moderation, its antecedents, and its consequences.
Original language: English
Title of host publication: Handbook of Critical Studies of Artificial Intelligence
Editors: Simon Lindgren
Publisher: Edward Elgar Publishing
Chapter: 46
Pages: 528-537
Number of pages: 10
ISBN (Electronic): 9781803928562
ISBN (Print): 9781803928555
DOIs
Publication status: Published - 14 Nov 2023

Research programs

  • ESHCC M&C

Erasmus Sectorplan

  • Sector plan SSH-Breed

