
Part 3. Do You Follow?
Exposing how technology can exacerbate information disorder
This essay is part of “Digitized Divides”, a multi-part series about technology and crisis. This part was written by Safa, and co-developed through discussions, research, framing, and editing by Safa, Louise Hisayasu, Dominika Knoblochová, Christy Lange, Mo R., Helderyse Rendall, and Marek Tuszynski. Image by Liz Carrigan and Safa, with visual elements from Alessandro Cripsta.
Apps, websites, and online media can be essential for accessing news, services, and information. But amid all that content, it can be challenging to find reliable sources while navigating through the distractions, advertisements, and false information – especially shocking headlines and altered photos or videos that can convince you of a completely different reality. What we see online is not always what it seems.
Social media has been a key tool of information and connection for people from traditionally marginalized communities. Young people use it to access important communities they may not be able to reach in real life, such as LGBTQ+ friendly spaces153. In the words of one teen: “Throughout my entire life, I have been bullied relentlessly. However, when I’m online, I find that it is easier to make friends… [...] Without it, I wouldn’t be here today.”154 But experts say social media has been “both the best thing [...] and it’s also the worst” to happen to the trans community, with hate speech and verbal abuse resulting in tragic real-life consequences.155 “Research to date suggests that social media experiences may be a double-edged sword for LGBTQ+ youth that can protect against or increase mental health and substance use risk.”156
In January 2025, Mark Zuckerberg announced that Meta (including Facebook and Instagram) would end its third-party fact-checking program in favor of a ‘community notes’ model like that of X (formerly Twitter).157 Meta’s decision includes ending policies that protect LGBTQ+ users.158 Misinformation is an ongoing issue across social media platforms, reinforced by app designs that reward whatever attracts the most clicks and likes – whether those rewards come as attention or money. Research found that “the 15% most habitual Facebook users were responsible for 37% of the false headlines shared in the study, suggesting that a relatively small number of people can have an outsized impact on the information ecosystem.”159
Meta’s pledge to remove its third-party fact-checking program has raised alarm bells among journalists, human rights organizations, and researchers. The UN’s High Commissioner for Human Rights, Volker Türk, said in response: “Allowing hate speech and harmful content online has real world consequences.”160 Meta has been implicated in or accused of supercharging the genocide of the Rohingya in Myanmar,161 as well as fueling ethnic violence in Kenya,162 Ethiopia,163 and Nigeria,164 at least in part due to the rampant misinformation on its platform. “We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook… are affecting societies around the world,” said one leaked internal Facebook report from 2019; “We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.”165 The International Fact-Checking Network responded to the end of the nine-year fact-checking program in an open letter shortly after Zuckerberg’s 2025 announcement, stating: “the decision to end Meta’s third-party fact-checking program is a step backward for those who want to see an internet that prioritizes accurate and trustworthy information.”166
The algorithms behind social media platforms control which information is prioritized, repeated, and recommended to people in their feeds and in search results. But despite numerous reports, studies, and shifting user behaviors, the companies themselves have done little to adapt their user interface designs to modern ways of interacting with information or to facilitate meaningful user fact-checking.
Because social media feeds, such as Instagram’s feed and “For You” page, mix content of different ages, it becomes difficult for readers to distinguish whether a post, photo, or video is one minute, one day, or one week old; older stories that have received more likes, shares, and comments are sometimes prioritized ahead of the most recent information (see the sketch below). This might be fine for surfacing the most popular cat memes, but it is a counterintuitive way to organize the news and does not help people document and track crises. Instagram has an option to temporarily change the feed between “Favorites” and “Following,” where the latter shows the latest posts first.167 However, the naming and design are not intuitive and work against the consistent, long-term need for chronology that journalists and people in war or crisis situations may have.
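To illustrate why engagement-based ordering works against chronology, here is a minimal sketch in Python. The scoring formula, post names, and numbers are invented for illustration only; real platform ranking systems are proprietary and far more complex.

```python
from datetime import datetime, timedelta

# Toy posts with ages and engagement counts (all values invented).
now = datetime(2025, 1, 15, 12, 0)
posts = [
    {"id": "breaking-news", "posted": now - timedelta(minutes=5), "likes": 12, "shares": 2},
    {"id": "cat-meme", "posted": now - timedelta(days=6), "likes": 90_000, "shares": 4_100},
    {"id": "crisis-update", "posted": now - timedelta(hours=3), "likes": 350, "shares": 80},
]

def engagement_score(post):
    """Rank purely by accumulated likes and shares; recency is ignored."""
    return post["likes"] + 5 * post["shares"]

chronological = sorted(posts, key=lambda p: p["posted"], reverse=True)
by_engagement = sorted(posts, key=engagement_score, reverse=True)

print([p["id"] for p in chronological])  # ['breaking-news', 'crisis-update', 'cat-meme']
print([p["id"] for p in by_engagement])  # ['cat-meme', 'crisis-update', 'breaking-news']
```

Under the engagement-weighted ordering, the week-old meme outranks a five-minute-old update – exactly the inversion that frustrates anyone relying on a feed for breaking news.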
Additionally, on certain platforms such as Instagram, it is difficult to fact-check outside of the platform because the app does not allow hyperlinks in post and reel captions. If someone adds a URL to a post’s or reel’s caption, the reader would need to copy and paste it into a browser, but copying caption text is not possible on the mobile app. Without a special type of business account, typical users cannot add hyperlinks at all. To bypass this, most news agencies, influencers, and individuals include a link in their bio that tries to organize the information they have recently been posting about. These extra steps and obstacles to looking into and verifying information can be a barrier, and are easily missed by users, especially those with less digital and media literacy training.
Given that social media platforms do not include design patterns that encourage verification, fact-checking, citations, and sourcing of information, it is no wonder that mis- and disinformation spread rapidly168. Even when media outlets publish corrections to false information and unsubstantiated claims they have perpetuated, it is not enough to reverse the damage. As described by First Draft News: “it is very, very difficult to dislodge [misinformation] from your brain.”169 When false information is published online or in the news and begins circulating, even if it is removed within minutes or hours, the damage is done, so to speak. Corrections and clarifying statements rarely get as much attention as the original piece of false information, and even when they are seen, they may not be internalized.
Algorithms are so prevalent that at first glance they may seem trivial, but they are deeply significant. Well-known cases like the father who found out his daughter was pregnant through what was essentially an algorithm170, and another father whose Facebook “Year in Review” ‘celebrated’ the death of his daughter171, illustrate why the creators, developers, and designers of algorithmically curated content should consider worst-case scenarios. Edge cases, although rare, are significant and warrant inspection and mitigation.
Pushing audiences further down the rabbit hole, a multitude of reports and studies have found that recommendation algorithms across social media can radicalize audiences through the content they prioritize and serve. “Moral outrage, specifically, is probably the most powerful form of content online.”172 A 2021 study found that TikTok’s algorithm led viewers from transphobic videos to violent far-right content, including racist, misogynistic, and ableist messaging. “Our research suggests that transphobia can be a gateway prejudice, leading to further far-right radicalization.”173 YouTube was also once dubbed the “radicalization engine”174, and still seems to be struggling with its recommendation algorithms, as in the more recent report of YouTube Kids sending young viewers down eating disorder rabbit holes175. Ahead of the German elections in 2025, researchers found that social media feeds across platforms, especially TikTok, skewed right-wing176. A simplified sketch of how engagement optimization can produce this drift follows below.
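The dynamic these studies describe can be reduced to a toy model: if engagement is highest for content slightly more extreme than what a viewer already watches, then a recommender that greedily maximizes engagement will ratchet the viewer toward the extreme. Everything below – the extremity scale, the engagement curve, the drift rule – is a hypothetical simplification for illustration, not any platform’s actual system.

```python
# Toy model of engagement-driven recommendation drift (all values invented).
videos = [{"extremity": e / 20} for e in range(21)]  # 0.0 (mainstream) .. 1.0 (extreme)

def expected_engagement(video, viewer_position):
    # Hypothetical assumption: viewers engage most with content slightly
    # MORE extreme than their current position (an "outrage premium").
    gap = video["extremity"] - viewer_position
    return 1.0 - abs(gap - 0.1)

viewer_position = 0.0  # start with mainstream content
for step in range(8):
    # A greedy recommender serves whatever maximizes expected engagement...
    pick = max(videos, key=lambda v: expected_engagement(v, viewer_position))
    # ...and the viewer's habits drift toward what they are served.
    viewer_position = 0.5 * viewer_position + 0.5 * pick["extremity"]
    print(f"step {step}: recommended extremity {pick['extremity']:.2f}")
```

Each iteration recommends slightly more extreme content than the last; no single step looks dramatic, but the cumulative drift is steady and one-directional.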
People are increasingly looking for information in different ways, beyond traditional news media outlets. A 2019 report found that teens were getting most of their news from social media.177 A 2022 article explained how many teens are using TikTok more than Google to find information.178 That same year, a study explored how adults under 30 trust information from social media almost as much as information from national news outlets.179 A 2023 multi-country report found that less than half (40%) of total respondents “trust news most of the time.”180 Researchers warned that the trajectory of information disorder could result in governments steadily taking more control of information, adding that “access to highly concentrated tech stacks will become an even more critical component of soft power for major powers to cement their influence.”181
Indonesia’s 2024 elections saw AI-generated digital avatars take center stage, especially in capturing the attention of young voters. Then-candidate, now President, Prabowo Subianto used a cute digital avatar created by generative AI across social media platforms like TikTok and was able to completely rebrand his public image and win the presidency – distracting from accusations that he committed major human rights abuses.182 Generative AI, including chatbots like ChatGPT183, is also a key player in information disorder because of how realistic and convincing its texts and images can be. Even seemingly harmless content on spam pages like ‘Shrimp Jesus’ can have real-world consequences, such as eroding trust, luring people into scams, and leaking their data to brokers who feed that information back into systems fueling digital influence.184 Furthermore, the outputs of generative AI may be highly controlled. “Automated systems have enabled governments to conduct more precise and subtle forms of online censorship,” according to a 2023 Freedom House report. “Purveyors of disinformation are employing AI-generated images, audio, and text, making the truth easier to distort and harder to discern.”185
One tool used in digital influencing is psychometric profiling186, which sometimes relies on the so-called OCEAN model (which claims to measure and assess a person’s openness, conscientiousness, extraversion, agreeableness, and neuroticism) in order to help predict what kinds of messages they will be most likely to respond to. An individual user can be profiled and categorized into larger groups of thousands or millions of users, based on what similarly provokes and motivates them, as sketched below. It then becomes much simpler to serve up specific advertisements, and to prioritize certain content in people’s feeds and search results, that get them to click and engage. Psychometric profiling was one of the methods used by Cambridge Analytica for years before being exposed in 2017.187 But Cambridge Analytica was far from the only business working in this way: over 500 companies in the so-called “influence industry” have been documented using personal data for political profiling.188 The proliferation of AI tools has made the influence industry less transparent and more difficult to regulate than it was in the pre-internet era, and these mass models of profiling have become easier and faster than ever for anyone with the resources to run them.
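In principle, the mechanic is simple: score a person on the five traits, find the segment of users with the nearest trait profile, and serve the message crafted for that segment. The sketch below illustrates this with invented profiles, segments, and messages; real influence-industry systems are proprietary and operate at vastly larger scale.

```python
import math

# Hypothetical segments keyed by an OCEAN centroid: (openness,
# conscientiousness, extraversion, agreeableness, neuroticism), each 0-1.
# All profiles, segments, and messages are invented for illustration.
segments = {
    "reassurance": {"centroid": (0.3, 0.7, 0.4, 0.6, 0.8),
                    "message": "Keep your family safe."},
    "novelty":     {"centroid": (0.9, 0.4, 0.7, 0.5, 0.2),
                    "message": "Be the first to try it."},
}

def target_message(profile):
    """Assign a user to the nearest segment and return its tailored message."""
    best = min(segments.values(),
               key=lambda s: math.dist(profile, s["centroid"]))
    return best["message"]

# A user profiled as conscientious and anxious gets the fear-framed pitch:
print(target_message((0.2, 0.8, 0.3, 0.6, 0.9)))  # -> "Keep your family safe."
```

The same nearest-centroid logic scales from two segments to millions of users; the power lies not in the arithmetic but in the personal data that feeds it.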
As has been echoed time and again throughout this series, technology is neither good nor bad – it depends on the purpose for which it is used. “Technology inherits the politics of its authors, but almost all technology can be harnessed in ways that transcend these frameworks.”189 Such use cases and comparisons can be useful when discussing specific tools and methods, but only at a superficial level – take, for instance, the digital avatars mentioned earlier in this piece. One key example comes from Venezuela, where the media landscape is rife with AI-generated pro-government messages190 and people working in journalism face threats of imprisonment. In response, journalists have harnessed digital avatars to help protect their identities and maintain their privacy.191 This is indeed a story of resilience, but it sits within a larger and more nefarious context of power and punishment. While any individual tool can reveal both benefits and drawbacks in its use cases, zooming out to the bigger picture reveals power systems and structures that put people at risk; the trade-offs of technology are simply not symmetrical. Two truths can exist at the same time, and the fact that technology is used both for harnessing strength and for harming and oppressing people is significant.
Notice: This work is licensed under a Creative Commons Attribution 4.0 International License.
Endnotes
154 Price, Dr. Myeshia; et al. ““Without It, I Wouldn’t Be Here Today”: LGBTQ+ Young People’s Experiences in Online Spaces.” HopeLab, 2025.
155 Compton, Julie. “'Frightening' online transphobia has real-life consequences, advocates say.” NBC News, 2019.
156 Fisher, Celia B.; et al. “Social media: A double-edged sword for LGBTQ+ youth.” ScienceDirect, 2024.
157 Parry, Hannah. “Mark Zuckerberg Ends Meta Fact-Checking as Donald Trump Takes Office.” Newsweek, 2025.
158 Conger, Kate. “Meta Drops Rules Protecting LGBTQ Community as Part of Content Moderation Overhaul.” The New York Times, 2025.
160 UN News. “It’s not censorship to stop hateful online content, insists UN rights chief.” United Nations, 2025.
161 Amnesty International. “Myanmar: Facebook’s systems promoted violence against Rohingya; Meta owes reparations – new report.” 2022.
162 Amnesty International. “META Sued in Kenya For Fueling Ethiopian Ethnic Violence.” Genocide Watch, 2022.
163 Mackintosh, Eliza. “Facebook knew it was being used to incite violence in Ethiopia. It did little to stop the spread, documents show.” CNN Business, 2021.
164 Adegoke, Yemisi; et al. “Like. Share. Kill. Nigerian police say false information on Facebook is killing people.” BBC, 2018.
165 Cranz, Alex; et al. “Facebook encourages hate speech for profit, says whistleblower.” The Verge, 2021.
166 The International Fact-Checking Network. “An open letter to Mark Zuckerberg from the world’s fact-checkers, nine years later.” Poynter Institute, 2025.
167 Mosseri, Adam. “Control your Instagram Feed with Favorites and Following.” Instagram Blog, 2022.
168 Oxford Internet Institute. “Social media manipulation by political actors an industrial scale problem - Oxford report.” 2021.
169 Shane, Tommy. “The psychology of misinformation: Why it’s so hard to correct.” First Draft, 2020.
170 Duhigg, Charles. “How Companies Learn Your Secrets.” The New York Times, 2012.
171 Meyer, Eric A. “My Year Was Tragic. Facebook Ambushed Me With a Painful Reminder.” Slate, 2014.
172 Shapiro, Ari; et al. “How the polarizing effect of social media is speeding up.” NPR, 2022.
173 Little, Olivia; et al. “TikTok's algorithm leads users from transphobic videos to far-right rabbit holes.” Media Matters for America, 2021.
174 Ingram, Mathew. “The YouTube ‘radicalization engine’ debate continues.” Columbia Journalism Review, 2020.
175 Germain, Thomas. “YouTube Wants to Stop Sending Kids Down the Rabbit Hole of Eating Disorder Videos.” Gizmodo, 2023.
176 Global Witness. “X and TikTok algorithms push pro-AfD content to non-partisan German users - new analysis.” 2025.
177 Common Sense Media. “New Survey Reveals Teens Get Their News from Social Media and YouTube.” 2019.
178 Huang, Kalley. “For Gen Z, TikTok Is the New Search Engine.” The New York Times, 2022.
179 Eddy, Kirsten. “Republicans, young adults now nearly as likely to trust info from social media as from national news outlets.” Pew Research Center, 2024.
180 Newman, Nic; et al. “Reuters Institute Digital News Report 2023.” (page 10) Reuters Institute for the Study of Journalism, 2023.
181 Cavaciuti-Wishart, Ellissa; et al. “Global Risks Report 2024.” World Economic Forum, 2024.
182 Burgess, Annika; et al. “Dancing, cats and Hunger Games. Indonesia's presidential candidates take social media campaigning to a 'whole new level'.” ABC News, 2023.
183 Hsu, Tiffany. “Disinformation Researchers Raise Alarms About A.I. Chatbots.” The New York Times, 2023.
184 DiResta, Renee; et al. “How Spammers, Scammers and Creators Leverage AI-Generated Images on Facebook for Audience Growth.” Stanford Cyber Policy Center, 2024.
185 Funk, Allie; et al. “The Repressive Power of Artificial Intelligence.” Freedom House, 2023.
186 Bashyakarla, Varoon; et al. “Personal Data: Political Persuasion - The Guidebook and Visual Gallery.” Data and Politics, Tactical Tech, 2019.
187 Chang, Alvin. “The Facebook and Cambridge Analytica scandal, explained with a simple diagram.” Vox, 2018.
188 Macintyre, Amber; et al. “The Influence Industry Explorer.” The Influence Industry Project, Tactical Tech. Accessed February 6, 2025.
189 Cade; et al. “On Weaponised Design.” Our Data Our Selves, Tactical Tech, 2018.
190 Ryan-Mosley, Tate. “How generative AI is boosting the spread of disinformation and propaganda.” MIT Technology Review, 2023.
191 Rueda, Manuel. “Venezuelan journalists use AI to avoid government scrutiny.” All Things Considered on NPR, 2024.
Read Digitized Divides:
- Part 0. Executive Summary
- Part 1. Digital Information Floods and Dams: Exploring how technology can be used as both a gateway and a barrier to accessing information
- Part 2. ‘Smart’ (or Machiavellian?) Surveillance: Tracking how technology is used to supercharge monitoring and control
- Part 3. Do You Follow?: Exposing how technology can exacerbate information disorder
- Part 4. Systematized Supremacy: Witnessing how tech is used to conquer and destroy
- Part 5. Tactile Tech: Uncovering the materiality of internet infrastructures
- Part 6. The Green Transition’s Barren Footprint: Reckoning with the reality of rare-earth mining
- Part 7. ‘Artisanal’ Mining and ‘Natural’ Technology: Revealing the costs of cobalt’s commodified extractivism
- Part 8. The Illusion of AI: Spotlighting tech laborers in factories, warehouses, and gig and click workers
- Part 9. The Humanity Behind Our Tools: Recognizing the harsh conditions that mining and e-waste workers face
- Part 10. It’s All Downhill From Here: What is technology actually facilitating?