Title:
Platform Governance with Algorithm-Based Content Moderation: An Empirical Study on Reddit.
Authors:
He, Qinglai1 (AUTHOR) qinglai.he@wisc.edu, Hong, Yili2 (AUTHOR) khong@miami.edu, Raghu, T. S.3 (AUTHOR) Raghu.Santanam@asu.edu
Source:
Information Systems Research (INFORMS). Jun2025, Vol. 36 Issue 2, p1078-1095. 18p.
Database:
Business Source Elite

Further Information

Practice- and Policy-Oriented Abstract:
Volunteer (human) moderators have been the essential workforce for content moderation, combating the growing volume of inappropriate online content. Because volunteer-based content moderation faces challenges in achieving scalable, desirable, and sustainable moderation, many online platforms have started to adopt algorithm-based content moderation tools (bots). However, it is unclear how volunteer moderators react to bot adoption in terms of their community-policing and community-nurturing efforts. Our research collected public moderation records by bots and volunteer moderators from Reddit. Our analysis suggests that bots can augment volunteer moderators. Augmentation leads volunteers to shift their efforts from simple policing work to a broader set of moderation activities, including policing subjective rule violations and meeting the increased need for community-nurturing activities that follow policing actions. This paper has implications for online platform managers looking to scale online activities and explains how volunteers can achieve more effective and sustainable content moderation with the assistance of bots.

Abstract:
With increasing volumes of participation in social media and online communities, content moderation has become an integral component of platform governance. Volunteer (human) moderators have thus far been the essential workforce for content moderation. Because volunteer-based content moderation faces challenges in achieving scalable, desirable, and sustainable moderation, many online platforms have recently started to adopt algorithm-based content moderation tools (bots). When bots are introduced into platform governance, it is unclear how volunteer moderators react in terms of their community-policing and community-nurturing efforts. To understand the impacts of these increasingly popular bot moderators, we conduct an empirical study with data collected from 156 communities (subreddits) on Reddit.
Based on a series of econometric analyses, we find that bots augment volunteer moderators by stimulating them to moderate a larger quantity of posts, and such effects are pronounced in larger communities. Specifically, volunteer moderators perform 20.9% more community policing, particularly over subjective rules. Moreover, in larger communities, volunteers also exert increased effort in offering more explanations and suggestions after their community adopts bots. Notably, these increases in activity are primarily driven by the increased need for nurturing efforts to accompany growth in subjective policing. Finally, introducing bots into content moderation also improves the retention of volunteer moderators. Overall, we show that introducing algorithm-based content moderation into platform governance is beneficial for sustaining digital communities. [ABSTRACT FROM AUTHOR]

Copyright of Information Systems Research (INFORMS) is the property of INFORMS: Institute for Operations Research & the Management Sciences and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)

Full text is not available via guest access.