Tuesday Seminar – 15 June

Pernicious Personalization: An Audit of the Ideological Bias of Twitter's Recommender System

Fabio Votta (University of Amsterdam)
Benjamin Guinaudeau (University of Konstanz)
Simon Roth (University of Konstanz)

Although social media emerged only recently, the accumulation of evidence undermining the ‘echo chamber’ hypothesis is striking. While self-selective exposure to congruent content – the echo chamber – is less salient than expected, the ideological bias induced primarily by algorithmic selection – the filter bubble – has received less scrutiny in the literature. In this study, we propose a new experimental research design to investigate recommender systems. To avoid behavioral confounders, we rely on automated agents, which ‘treat’ the algorithm with ideological and behavioral cues. For each agent, we compare the ideological slant of the recommended timeline with that of an artificially reconstructed chronological timeline and thereby isolate the ideological bias of the recommender system. This allows us to investigate two main questions: (1) How much bias is induced by the recommender system? (2) What role do implicit and explicit cues play in triggering ideological recommendations?

The experiment has been pre-registered (https://osf.io/5kwpr) and features 170 automated agents, which were active for three weeks before and three weeks after the 2020 American presidential election. We find that, after three weeks of delivering ideological cues (following accounts and liking content), the average algorithmic bias is about 5%. In other words, the timeline as structured by the algorithm contains 5% less cross-cutting content than the same timeline ordered chronologically. While the algorithm relies on both implicit and explicit cues to formulate recommendations, the effect of implicit cues is significantly stronger.
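The bias measure described in the abstract – the gap in the share of cross-cutting content between the algorithmically ranked timeline and its chronological reconstruction – can be sketched as follows. This is a minimal illustration only; the function and field names are hypothetical, not the authors' actual code or metric.

```python
# Illustrative sketch of the bias measure described in the abstract:
# the difference in the share of cross-cutting content between the
# algorithmically ranked timeline and a chronological reconstruction.
# All names here are hypothetical, not the authors' implementation.

def cross_cutting_share(timeline, agent_ideology):
    """Share of tweets whose slant differs from the agent's ideology."""
    if not timeline:
        return 0.0
    cross = sum(1 for tweet in timeline if tweet["slant"] != agent_ideology)
    return cross / len(timeline)

def algorithmic_bias(recommended, chronological, agent_ideology):
    """Positive value = the algorithm surfaces less cross-cutting content."""
    return (cross_cutting_share(chronological, agent_ideology)
            - cross_cutting_share(recommended, agent_ideology))

# Toy example for a right-leaning agent: 3/10 cross-cutting tweets in the
# recommended feed vs. 5/10 in the chronological one.
recommended = [{"slant": "right"}] * 7 + [{"slant": "left"}] * 3
chronological = [{"slant": "right"}] * 5 + [{"slant": "left"}] * 5
print(algorithmic_bias(recommended, chronological, "right"))  # -> 0.2
```

In this toy setup the bias is 0.2, i.e. 20 percentage points; the study's reported average of about 5% corresponds to a much smaller gap of this same kind.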

Contact Semih Çakır if you would like to participate in the seminar. 

This content was last updated on 10 June 2021 at 20:35.