TY - JOUR
T1 - Common sense or censorship: How algorithmic moderators and message type influence perceptions of online content deletion
AU - Gonçalves, João
AU - Weber, Ina
AU - Masullo, Gina M.
AU - Torres da Silva, Marisa
AU - Hofhuis, Joep
N1 - Funding Information:
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The authors received financial support from Facebook for this project.
Publisher Copyright:
© The Author(s) 2021.
PY - 2021/7/28
Y1 - 2021/7/28
AB - Hateful content online is a concern for social media platforms, policymakers, and the public. This has led high-profile content platforms, such as Facebook, to adopt algorithmic content-moderation systems; however, the impact of algorithmic moderation on user perceptions is unclear. We experimentally test the extent to which the type of content being removed (profanity vs hate speech) and the explanation given for its removal (no explanation vs link to community guidelines vs specific explanation) influence user perceptions of human and algorithmic moderators. Our preregistered study encompasses representative samples (N = 2870) from the United States, the Netherlands, and Portugal. Contrary to expectations, our findings suggest that algorithmic moderation is perceived as more transparent than human moderation, especially when no explanation is given for content removal. In addition, sending users to community guidelines for further information on content deletion has negative effects on perceived outcome fairness and trust.
UR - http://www.scopus.com/inward/record.url?scp=85111534260&partnerID=8YFLogxK
DO - 10.1177/14614448211032310
M3 - Article
AN - SCOPUS:85111534260
SP - 1
EP - 23
JO - New Media & Society
JF - New Media & Society
SN - 1461-4448
ER -