Researchers at the University of Amsterdam found that polarization, echo chambers, and the amplification of extreme voices can emerge on social media even without recommendation algorithms or advertising. These dynamics arose naturally among 500 AI chatbots, modeled on U.S. demographic and political diversity, interacting on a minimalist platform.
The experiment placed 500 chatbots, each built from a distinct persona drawn from the American National Election Studies dataset, on a stripped-down social network that excluded ads, recommendation systems, and engagement-boosting algorithms. The platform allowed only basic interactions, yet the bots quickly formed patterns that mirror real-world online behavior. The setup was designed to test whether social dynamics alone could produce polarization and attention imbalances.
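The study itself runs LLM-driven chatbots, which is not reproduced here. As a rough sketch of what such a minimal platform might look like, the Python below uses hypothetical persona fields (loosely in the spirit of ANES-style survey variables) and a random placeholder decision rule standing in for the language model; only the basic actions of posting, reposting, and following are modeled.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Persona:
    """Hypothetical persona fields, loosely inspired by ANES-style survey variables."""
    age: int
    party_id: float                      # -1 (strongly left) .. +1 (strongly right)

@dataclass
class Bot:
    id: int
    persona: Persona
    following: set = field(default_factory=set)

@dataclass
class Post:
    author: int
    stance: float
    reposts: int = 0

def act(bot, feed, timeline, rng):
    """Placeholder for the LLM: read a small feed, maybe repost/follow, then write a post."""
    for post in feed:
        if rng.random() < 0.2:           # stand-in decision rule, not the study's chatbot
            post.reposts += 1
            bot.following.add(post.author)
    timeline.append(Post(author=bot.id, stance=bot.persona.party_id))

def run(n_bots=500, rounds=10, seed=0):
    rng = random.Random(seed)
    bots = [Bot(i, Persona(age=rng.randint(18, 80), party_id=rng.uniform(-1, 1)))
            for i in range(n_bots)]
    timeline = []
    for _ in range(rounds):
        for bot in bots:
            # No ads, no recommender: each bot just sees a random slice of existing posts.
            feed = rng.sample(timeline, k=min(10, len(timeline)))
            act(bot, feed, timeline, rng)
    return bots, timeline

bots, timeline = run()
print(len(timeline), "posts generated")
```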
One clear outcome was the spontaneous formation of echo chambers. The chatbots tended to cluster with like-minded counterparts, a pattern driven by homophily — the tendency to connect with similar others. These clusters reinforced ideological segregation without any algorithmic nudging, demonstrating that interaction preferences alone can generate insulated information environments.
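The article does not spell out the exact follow rule the bots used, but the homophily mechanism can be illustrated with a toy simulation in which the probability of following another agent scales with ideological similarity; the same-side share of the resulting follow edges is one simple way to see segregation emerge without any ranking algorithm.

```python
import random

def similarity(a, b):
    """Ideological similarity on a 0..1 scale for stances in [-1, 1]."""
    return 1 - abs(a - b) / 2

def homophilous_follows(stances, rounds=50, base_rate=0.05, seed=0):
    """Each round every agent considers one random other agent and follows with
    probability proportional to ideological similarity (an illustrative rule only)."""
    rng = random.Random(seed)
    n = len(stances)
    edges = set()
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)
            if j != i and rng.random() < base_rate * similarity(stances[i], stances[j]):
                edges.add((i, j))
    return edges

rng = random.Random(1)
stances = [rng.uniform(-1, 1) for _ in range(500)]
edges = homophilous_follows(stances)

# Segregation check: how many follow edges connect agents on the same side of the spectrum?
same_side = sum(1 for i, j in edges if (stances[i] >= 0) == (stances[j] >= 0))
print(f"{same_side / len(edges):.0%} of follow edges link same-side agents")
```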
Another prominent finding was that more partisan or extreme content attracted disproportionate attention. Bots posting extreme or highly partisan messages accumulated more followers and were reposted at higher rates, allowing a small subset of accounts to dominate discourse. This concentration of influence created attention inequality similar to that seen on real social platforms, where a few polarizing voices often monopolize conversations.
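The article does not say which inequality measure the researchers used; a Gini coefficient over per-account repost (or follower) counts is one common way such attention concentration is quantified, sketched here on toy data.

```python
def gini(counts):
    """Gini coefficient of non-negative counts: 0 means attention is spread evenly,
    values near 1 mean a few accounts capture almost all of it."""
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Toy example: a handful of accounts hoard most of the reposts.
evenly_shared = [10] * 100
concentrated = [1] * 95 + [200] * 5
print(f"even: {gini(evenly_shared):.2f}, concentrated: {gini(concentrated):.2f}")
```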
Crucially, the study showed that polarization and ideological divides flourished even in the absence of recommendation engines or ad-driven incentives. This challenges the common assumption that algorithms are the sole cause of online toxicity and fragmentation, suggesting instead that the basic architecture of social interaction can produce these outcomes.
Researchers also tested six interventions intended to reduce polarization and inequality, including chronological feeds, downranking viral content, hiding follower counts, hiding bios, and amplifying opposing views. Some measures produced modest improvements—such as slightly reducing attention inequality or increasing exposure to different opinions—but none solved the underlying problem. In some cases, interventions had unintended effects; for example, chronological feeds sometimes increased the visibility of extreme content.
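None of the six interventions is reimplemented here; as an illustration of how feed-ranking rules can be compared on the same pool of posts, the sketch below contrasts an engagement-ranked feed with a chronological one and measures the average extremity of what each surfaces. The toy numbers say nothing about the study's actual results.

```python
import random
from dataclasses import dataclass

@dataclass
class Post:
    author: int
    stance: float      # -1 .. 1; |stance| near 1 means more extreme
    reposts: int
    timestamp: int

def engagement_feed(posts, k=10):
    """Rank by reposts: the engagement-based ordering the minimal platform avoided."""
    return sorted(posts, key=lambda p: p.reposts, reverse=True)[:k]

def chronological_feed(posts, k=10):
    """Newest first: one of the interventions the researchers tested."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)[:k]

def mean_extremity(feed):
    return sum(abs(p.stance) for p in feed) / len(feed)

# Toy post pool in which more extreme posts tend to attract more reposts.
rng = random.Random(0)
posts = []
for t in range(200):
    stance = rng.uniform(-1, 1)
    reposts = rng.randint(0, 5) + int(20 * abs(stance) * rng.random())
    posts.append(Post(author=t % 50, stance=stance, reposts=reposts, timestamp=t))

print(f"engagement-ranked feed extremity: {mean_extremity(engagement_feed(posts)):.2f}")
print(f"chronological feed extremity:     {mean_extremity(chronological_feed(posts)):.2f}")
```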
The authors conclude that incremental algorithmic tweaks are unlikely to eliminate online polarization and echo chambers. Creating healthier digital spaces may require rethinking how social networks are structured and how people interact within them. By demonstrating that these dynamics can arise from social behavior itself, the research highlights the need for more fundamental changes to digital communication systems.