Algorithmic moderation: Contexts, perceptions, and misconceptions

João Gonçalves*, Ina Weber

*Corresponding author for this work

Research output: Chapter in Book/Conference proceeding › Chapter › Academic



Algorithmic moderation is often presented as the only viable response to the scale of harmful and hateful content disseminated online. This chapter questions that assertion by contextualizing the need for and scope of algorithmic moderation on normative grounds. It then sets out a series of critical issues for the study of algorithmic moderation, such as the systemic biases underlying technical approaches to algorithmic performance and asymmetries in access to AI data and knowledge. We illustrate these issues with the specific case of hate speech and, finally, consider differences in how human and algorithmic moderators are perceived. The chapter is structured as a starting point for scholars who wish to venture into the study of algorithmic moderation, its antecedents, and its consequences.
Original language: English
Title of host publication: Handbook of Critical Studies of Artificial Intelligence
Editors: Simon Lindgren
Publisher: Edward Elgar Publishing
Number of pages: 10
ISBN (Electronic): 9781803928562
ISBN (Print): 9781803928555
Publication status: Published - 14 Nov 2023
