U-M study explores how political bias in content moderation on social media feeds echo chambers

Study: Politically biased moderation drives echo chamber formation: An analysis of content removals on Reddit using natural language processing

Although public attention to social media algorithms has led to corporate and public policy changes, algorithms and their creators might not be the only drivers of political polarization on social media.

In a new study, Justin Huang, assistant professor of marketing at the University of Michigan Ross School of Business, explores user-driven content moderation, a ubiquitous but overlooked aspect of this issue.

Huang and his collaborators, Ross School Ph.D. graduate Jangwon Choi and U-M graduate Yuqin Wan, study the popular social media site Reddit, exploring how subreddit moderators' biases in content removal decisions across more than a hundred independent communities help create echo chambers.

With a looming presidential election and ethical questions surrounding censorship on social media, the study raises important considerations for industry leaders and policymakers. Huang shares his insights.

What are some of the negative implications of politically biased content removal?

Our research documents political bias in user-driven content moderation: comments whose political orientation is opposite to that of the moderators are more likely to be removed. This bias creates echo chambers, online spaces characterized by homogeneity of opinion and insulation from opposing viewpoints.

A key negative implication of echo chambers is that they distort perceptions of political norms. We look to our peers to help form and shape our political beliefs, and being in an echo chamber can lead to a distorted view of what’s normal.

In some cases, this can radicalize individuals and allow misinformation to go unchallenged. It can also lead to dismay at electoral outcomes and reduced trust in them: how could Candidate A have won when everyone I spoke to supported Candidate B?

Ultimately, this undermines the deliberative discourse and common understanding key to the proper functioning of our democracy.

In identifying the political views of moderators and commenters, did you find that any particular political view was more likely to delete comments than others?

For regular Reddit users, it should come as no surprise that the site is, on average, left-leaning. The largest political subreddit on the website, /r/politics, is a bastion of Democratic support. This is also borne out in our data and modeling of political opinion among users and moderators of the local subreddits we study.
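As an illustration of the kind of NLP modeling involved, and only as a minimal sketch rather than the study's actual pipeline, the political lean of comments can be estimated with a simple supervised text classifier. The labeled examples, TF-IDF features, and logistic regression model below are all hypothetical choices made for illustration.

```python
# Illustrative sketch only: a simple text classifier for political lean.
# The training labels, TF-IDF features, and logistic regression model are
# assumptions for this example, not the study's actual NLP method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labeled comments used to train the classifier.
train_comments = [
    "We need universal healthcare now",
    "Cut taxes and shrink the federal government",
    "Expand voting rights and protect unions",
    "Secure the border and support the second amendment",
]
train_labels = ["left", "right", "left", "right"]

# TF-IDF features feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_comments, train_labels)

# Score new comments; averaging predicted lean over a user's comment history
# would give a rough estimate of that user's (or moderator's) orientation.
new_comments = ["Raise the minimum wage", "Deregulate small businesses"]
print(clf.predict(new_comments))
```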

Suffice it to say that biased content moderation is not limited to any one side.

Could there be similar effects of user content moderation on other social media platforms?

The type of user-driven content moderation we study is present on all of the major social media platforms, including Facebook, TikTok, Instagram, YouTube and X (formerly Twitter). These platforms give users ownership and moderation control over online spaces such as groups or the comment sections of content they create, and there are practically no platform guidelines or oversight on how a user moderates.

To draw a parallel to the commercial setting of brand management: social media managers often recommend engaging in viewpoint-related censorship (removing comments from the "haters") to create an echo chamber of positive brand opinion.

What can social media companies do to foster more open discourse?

User-driven content moderation plays a key role in combating toxicity and establishing community norms in online spaces. The challenge for platform managers is to preserve these beneficial aspects while reducing the potential for abuse and echo chamber formation. Here are a few things platforms could consider:

  • Provide clear guidelines on what constitutes appropriate versus inappropriate reasons for content removal. Further, educating moderators on the potential for, and harms created by, biased removals could lead them to be more judicious in their decisions.
  • Increase the transparency of content removals by notifying users when their content is removed. Additionally, providing public-facing data on the volume of removals could help rein in abuses through public scrutiny and community pressure on moderators.
  • Implement analytics and oversight to monitor the extent to which moderators exhibit political bias in their content moderation. In combination with the guidelines above, analytics could allow platforms to automatically flag and follow up with moderators who may be abusing the system; a minimal sketch of such a check appears after this list.
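As a hedged sketch of what such analytics could look like (not a feature of any existing platform, and the counts and threshold below are hypothetical), a platform could compare a moderator's removal rates for comments that align with versus oppose the moderator's estimated orientation and flag statistically significant gaps.

```python
# Illustrative sketch of a bias-flagging check; the moderation-log counts
# and significance threshold are hypothetical, not real platform data.
from scipy.stats import chi2_contingency

def flag_biased_moderator(removed_opposing, kept_opposing,
                          removed_aligned, kept_aligned,
                          alpha=0.01):
    """Flag a moderator if opposing-orientation comments are removed at a
    significantly higher rate than aligned ones (chi-square test)."""
    table = [[removed_opposing, kept_opposing],
             [removed_aligned, kept_aligned]]
    chi2, p_value, _, _ = chi2_contingency(table)
    rate_opposing = removed_opposing / (removed_opposing + kept_opposing)
    rate_aligned = removed_aligned / (removed_aligned + kept_aligned)
    return p_value < alpha and rate_opposing > rate_aligned

# Hypothetical moderation log: 120 of 1,000 opposing comments removed
# versus 40 of 1,000 aligned comments removed.
print(flag_biased_moderator(120, 880, 40, 960))  # True -> worth a follow-up
```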