AI Intervention Reduces Political Polarization on Social Media
A new study demonstrates that algorithmic manipulation of social media feeds can measurably decrease political polarization, even without platform cooperation. Researchers at the University of Washington and Northeastern University developed a browser extension that uses large language models (LLMs) to subtly reorder posts in users' feeds, demoting divisive content for one group of participants and amplifying it for a comparison group. The results, published in Science, show a clear effect on users' attitudes toward opposing political groups.
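The study's actual prompts and ranking logic live in the team's published code, but the mechanism reduces to a simple idea: score each post for divisiveness, then demote (or, in the amplified arm, promote) high-scoring posts while otherwise preserving the platform's ordering. A minimal TypeScript sketch, with illustrative names, thresholds, and penalties rather than the study's real parameters:

```typescript
// A sketch of the reranking step. In the real system the score would come
// from an LLM classifier; here it is assumed to be precomputed. All names,
// thresholds, and penalties below are illustrative, not the study's values.

interface Post {
  id: string;
  text: string;
  originalRank: number;      // position assigned by the platform's own algorithm
  polarizationScore: number; // 0 (neutral) to 1 (highly divisive), per the LLM
}

// Demote divisive posts by adding a rank penalty when the score crosses a
// threshold; since sort() is stable, posts on the same side of the threshold
// keep the platform's relative ordering.
function demotePolarizing(feed: Post[], threshold = 0.7, penalty = 100): Post[] {
  const effectiveRank = (p: Post) =>
    p.originalRank + (p.polarizationScore > threshold ? penalty : 0);
  return [...feed].sort((a, b) => effectiveRank(a) - effectiveRank(b));
}

// The study's comparison arm is the mirror image: promote divisive posts.
function amplifyPolarizing(feed: Post[], threshold = 0.7, boost = 100): Post[] {
  const effectiveRank = (p: Post) =>
    p.originalRank - (p.polarizationScore > threshold ? boost : 0);
  return [...feed].sort((a, b) => effectiveRank(a) - effectiveRank(b));
}
```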

The Experiment and Its Findings

The core of the study involved more than 1,200 participants who used X (formerly Twitter) with modified feeds in the lead-up to the 2024 U.S. election. One group saw polarizing content de-emphasized, reducing its visibility; another saw it amplified. The key finding: participants whose feeds de-emphasized divisive posts reported warmer feelings toward opposing political groups. The shift was measured on a "feeling thermometer," a standard survey scale on which respondents rate a group from 0 (very cold) to 100 (very warm). The change averaged two to three degrees, a meaningful effect given that average U.S. partisan sentiment has historically shifted by roughly three degrees over the course of three years.

Conversely, participants who saw boosted polarizing content reported colder feelings toward opposing groups, further demonstrating the algorithm's influence. The intervention also affected emotional state: participants whose feeds de-emphasized divisive content reported less sadness and anger.

Bypassing Platform Control

This research is groundbreaking because it sidesteps the traditional barrier to studying algorithmic influence: platform access. Instead of relying on cooperation from social media companies—which rarely grant full transparency—the researchers created a tool that operates independently within users’ browsers. As Martin Saveski, a co-author from the University of Washington, explains, “Only the platforms have had the power to shape and understand these algorithms. This tool gives that power to independent researchers.”

Because the extension runs entirely on the client side, the method requires no platform approval, enabling real-world experiments without depending on tech companies' willingness to share data or access.
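How such an extension can work without platform access is worth sketching. A content script can watch the page for newly rendered posts using the standard MutationObserver browser API and reorder them in the DOM before the user scrolls to them. The selectors below are hypothetical placeholders (X's markup changes frequently), and the LLM scoring call is stubbed out; the researchers' published code is the authoritative reference:

```typescript
// Content-script sketch: observe the timeline as the platform renders posts
// and move divisive ones to the bottom of the feed, entirely client-side.
// Both selectors are hypothetical placeholders, not X's actual markup.

const TIMELINE_SELECTOR = 'div[aria-label="Timeline"]'; // hypothetical
const POST_SELECTOR = 'article';                        // hypothetical

// Placeholder for the LLM call; the study's real prompt and model are not
// reproduced here. Returns a divisiveness score in [0, 1].
async function scorePost(text: string): Promise<number> {
  return 0; // stub
}

async function handleNewPost(post: HTMLElement): Promise<void> {
  const score = await scorePost(post.innerText);
  if (score > 0.7 && post.parentElement) {
    post.parentElement.appendChild(post); // demote: move to end of the feed
  }
}

const timeline = document.querySelector(TIMELINE_SELECTOR);
if (timeline) {
  new MutationObserver((mutations) => {
    for (const m of mutations) {
      for (const node of m.addedNodes) {
        if (node instanceof HTMLElement && node.matches(POST_SELECTOR)) {
          void handleNewPost(node);
        }
      }
    }
  }).observe(timeline, { childList: true, subtree: true });
}
```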

Implications and Future Research

The long-term effects of such interventions remain unclear. Victoria Oldemburgo de Mello, a psychologist at the University of Toronto, notes that the observed effects may either dissipate or compound over time, highlighting a crucial area for future research. The researchers have made their code publicly available to encourage further investigation and replication.

The framework also has potential beyond political polarization. The team plans to explore interventions related to well-being and mental health, using LLMs to analyze and modify social media feeds for broader benefit. The current tool works only where a platform can be accessed through a desktop browser; adapting it to mobile applications poses technical challenges but remains a key goal.

The study demonstrates that algorithmic changes to social media feeds can measurably shift user attitudes, even without platform cooperation. This finding challenges the narrative that polarization is driven solely by user behavior and underscores the role of algorithmic design in shaping public discourse.