Information bubbles: are they a myth, or a part of our daily digital lives?
The internet, and social media in particular, is often seen as an important element of modern democracy, not only because it makes one-to-one and one-to-many communication easier, but because it can expose us to viewpoints different from our own. Being exposed to challenging viewpoints matters, because informed voters make better decisions. According to this view, social media allow anyone to make their voice heard, something that was not possible in traditional media due to space limitations and editorial filters. Skeptics, however, claim that social media come with filters of their own. To prevent information overload, users either follow only like-minded users, or algorithms personalize incoming information and show us what we already like and agree with. Due to this “filter bubble” effect, a liberal user, for instance, reads more liberal news, deliberates mostly with liberal users, and never has his or her worldview challenged. Viewpoint diversity online would thus diminish due to these self-created or imposed bubbles.
While the skeptics’ claim has received attention in the media, few scientific studies have actually examined viewpoint diversity. Facebook was one of the first to study information diversity, defining diversity as “novelty”. In a study of 250 million users, Facebook scientists modified the news feeds of some users by removing certain incoming information. They concluded that thanks to weak ties (users who are in our networks but with whom we are not closely connected), bubbles are burst: we receive novel information on Facebook that we would not get elsewhere. However, this study is rather thin on theory, as “novelty” is not the only metric for measuring diversity. Thanks to Facebook we may encounter “novel” websites, but they do not necessarily contain challenging viewpoints.
Another study focused on Twitter and source diversity, checking whether users’ incoming tweets contained items from all ends of the political spectrum. The authors concluded that, thanks to Twitter’s “retweet” feature, bubbles did not occur. However, according to media and communication studies, where the concept of information diversity has been studied extensively for many years, diversity should not be measured only by the number of available sources. This is because (1) a high number of sources does not imply diversity of information or viewpoints; (2) even when there are many sources, minorities and marginalized groups can have a hard time reaching a larger audience due to power imbalances; and (3) even if a user’s incoming feed is diverse, that does not mean he or she will actually consume this information.
In a comparative study of Dutch and Turkish Twitter users, we used metrics from existing social media analytics studies, but also added new ones drawing on theory from media and communication studies. We first crawled the tweets of popular (seed) Twitter users who mainly tweeted about political matters. This list included politicians, political parties, newspapers, bloggers, journalists, and so on. We then crawled the regular users who retweeted these seed users, along with the retweets they made. Finally, we labelled seed users belonging to a small political party as a “minority”; this made, for instance, the Kurdish party in Turkey and the Greens in the Netherlands “minority” users. Minorities produced 15% of all tweets created by the popular users in both countries. According to our research, on a scale of 0 to 1, source diversity is around 0.6 for both countries, so we cannot observe bubbles using this metric. However, when we look at “minority access”, i.e. whether minorities could reach a larger audience, we do observe bubbles for Turkish users: the minorities cannot reach more than half of the studied Turkish users. Another interesting finding is that, while the incoming feeds of users in both countries are reasonably diverse, their outgoing feeds (the items they choose to retweet or reply to) are much less so. So, while users receive diverse information, they are still biased in what they share.
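To give a rough sense of how such metrics can be computed, the sketch below shows two simple measures. This is an illustration only, not the exact formulas from our paper: it assumes source diversity is measured as the normalized Shannon entropy of the sources appearing in a user’s feed (0 = everything from one source, 1 = evenly spread over all observed sources), and minority access as the fraction of users whose feed contains at least one minority source. The function names and toy data are invented for this example.

```python
import math
from collections import Counter

def source_diversity(feed_sources):
    """Normalized Shannon entropy of the sources in one user's feed.

    Returns 0.0 when all items come from a single source, and 1.0 when
    items are spread evenly over all sources observed in the feed.
    """
    counts = Counter(feed_sources)
    total = sum(counts.values())
    if len(counts) < 2:
        return 0.0
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return entropy / math.log(len(counts))  # divide by max possible entropy

def minority_access(user_feeds, minority_sources):
    """Fraction of users whose feed contains at least one minority source."""
    reached = sum(1 for feed in user_feeds
                  if any(s in minority_sources for s in feed))
    return reached / len(user_feeds)

# Toy data: each inner list is the sources of one user's incoming feed.
feeds = [
    ["NOS", "GroenLinks", "NOS", "Telegraaf"],
    ["Telegraaf", "Telegraaf", "Telegraaf"],
    ["NOS", "Volkskrant"],
]
print([round(source_diversity(f), 2) for f in feeds])
print(minority_access(feeds, {"GroenLinks"}))
```

A real analysis would of course aggregate these scores over large crawled datasets and distinguish incoming from outgoing (retweeted) feeds, as described above.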
The results of our study raise various questions. How can minorities reach a larger public? Can this be achieved by design? While identifying minorities and their valuable tweets is no easy task, showing these items to “challenge-averse” users is a real challenge. So far, “pushing” challenging information to users or giving users feedback about their biased news consumption does not seem to have a significant effect. More research is needed to understand how users’ reading behavior changes and to determine the conditions that would allow such a change. Further, normative questions arise when making design decisions. When designing diverse recommendation systems, it is difficult to determine which views are “valid”. For instance, should a recommendation system show all viewpoints in the climate change debate, even if some are not empirically validated or are simply regarded as false by the majority of experts? Should a viewpoint get equal attention even if it provides no information or contains only fallacious arguments? Should information intermediaries be required to build diversity into their designs? Such questions would need to be addressed by a thorough ethical analysis before design decisions for such systems are made.
The paper can be downloaded at http://www.sciencedirect.com/science/article/pii/S0747563214003069
1. Facebook is now in the spotlight for a more recent experiment conducted without notifying users. That experiment raises major ethical concerns as well, but they are out of scope for this post.
- Bakshy, E., Rosenn, I., Marlow, C., & Adamic, L. (2012). The role of social networks in information diffusion. In Proceedings of the 21st International Conference on World Wide Web (pp. 519–528). URL: http://dl.acm.org/citation.cfm?id=2187907.
- An, J., Cha, M., Gummadi, K., & Crowcroft, J. (2011). Media landscape in Twitter: A world of new conventions and political diversity. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media (pp. 18–25).
- Munson, S. A., Lee, S. Y., & Resnick, P. (2013). Encouraging reading of diverse political viewpoints with a browser widget. In International Conference on Weblogs and Social Media (ICWSM). Boston.
The author of this post is Engin Bozdag, research fellow of the Alexander von Humboldt Institute for Internet and Society. The post does not necessarily represent the view of the Institute itself. For more information about the topics of these articles and associated research projects, please contact email@example.com.