Social Media Misinformation and the Prevention of Political Instability and Mass Atrocities

Atrocity prevention stakeholders face profound challenges from the quantity, speed, and increasing sophistication of online misinformation

By Kristina Hook • Ernesto Verdeja

Executive Summary

This policy paper examines how social media misinformation (SMM) can worsen political instability and legitimize mass atrocities. In particular, this paper is designed to encourage key stakeholders within fields linked to countering SMM to consider the specific challenges arising from the willful or accidental spread of misinformation in contexts at risk for mass atrocities. While knowledge of contextual nuance is required, general dynamics characterize the nexus between SMM and atrocity prevention. This paper argues that SMM may be particularly resonant in atrocity-risk contexts and therefore play a powerful role in encouraging a permissive societal bandwagon effect that fosters violence against the targeted group(s). To equip the diverse range of relevant stakeholders we address in this paper — including social media corporations, established (legacy) media, non-governmental civil society actors, researchers and civil society, and governments and multilateral organizations — we provide an overview of the challenges emerging from SMM with customized recommendations for each stakeholder group to support atrocity prevention. We envision this policy brief as providing the foundation to ensure continued conversations across these stakeholder groups on a complex, contested subject and, in so doing, broadening the atrocity prevention community in this emerging area. 

Rather than focus on a specific case, the paper provides an overview of SMM and its effects and proposes several recommendations for the instability and atrocity prevention community. The paper is divided into several sections. Following a discussion of key terms, we turn to the main socio-political, psychological, and social media factors that increase the impact of social media misinformation. The paper next outlines the main functions of SMM, and then explores several challenges for those whose work intersects with atrocity prevention, including social media corporations, established (legacy) media, non-governmental civil society actors, researchers and civil society, and governments and multilateral organizations. The paper concludes with policy recommendations for these various stakeholder groups. The paper draws on interviews, scholarly and practitioner research, and the authors’ work in this area.

Introduction

Social media’s enormous impact is unmistakable. Facebook sees approximately 300 million new photos uploaded daily, while six thousand Tweets are sent every second.1Natalie Jomini Stroud, Emily Thorson, and Dannagal Young, “Making Sense of Information and Judging Its Credibility” (Understanding and Addressing the Disinformation Ecosystem, Annenberg School for Communications, 2017), 45–50, https://firstdraftnews.org/wp-content/uploads/2018/03/The-Disinformation-Ecosystem-20180207-v3.pdf. The most popular YouTube channels receive over 14 billion views weekly,2Patrick Van Kessel, Skye Toor, and Aaron Smith, “A Week in the Life of Popular YouTube Channels,” Pew Research Center: Internet, Science & Tech (blog), July 25, 2019, https://www.pewresearch.org/internet/2019/07/25/a-week-in-the-life-of-popular-youtube-channels/. while the messaging app Telegram boasts over 500 million users.3Stan Schroeder, “Telegram Hits 500 Million Active Users amid WhatsApp Backlash,” Mashable, January 13, 2021, https://mashable.com/article/telegram-500-million. Social media platforms connect people across societies, facilitating information sharing in ways unimaginable only two decades ago. The manipulation of social media platforms has also spread widely, and such platforms have been used to promote instability, spread political conflict, and call for violence. Researchers believe that organized social media misinformation campaigns have operated in at least 81 countries, a trend that continues to grow yearly, with a sizable number of manipulation efforts by states and private corporations.4Samantha Bradshaw, Hannah Bailey, and Philip N. Howard, “Industrialized Disinformation: 2020 Global Inventory of Organised Social Media Manipulation,” Working Paper (Oxford, UK: Project on Computational Propaganda, 2021), https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/127/2021/02/CyberTroop-Report20-Draft9.pdf.   

We argue that a wide range of actors connected to the instability and atrocity prevention community must incorporate emerging issues linked to social media misinformation (SMM), and we provide recommendations for doing so. A simple but disturbing truth confronts diverse professions whose work pertains to atrocity prevention: misinformation can rapidly shapeshift across topics, but only a few narratives need to take hold to erode trust in facts and evidentiary standards. Taking advantage of the dense, extensive social interconnections across social media platforms, actors can launch numerous falsehoods, accusations, and conspiracies and observe which narratives take hold. In this growing form of contemporary asymmetric conflict, malicious actors — whether foreign or domestic state actors, parastatal groups, or non-state actors — determine when, where, and how often to attack. Defenders, which include targeted governments, civil society organizations, tech corporations, media outlets, and others, must prioritize where to focus and how to respond. The nature of this asymmetry means that defenders find themselves in a reactive crouch. The quantity, speed, and increasing sophistication of misinformation pose profound challenges for instability and atrocity prevention stakeholders.

The paper is divided into several sections. Following a discussion of key terms, we turn to the main socio-political, psychological, and social media factors that increase the impact of social media misinformation. The paper next outlines the main functions of SMM, and then explores several challenges for those whose work intersects with atrocity prevention, including social media corporations, established (legacy) media, non-governmental civil society actors, researchers and civil society, and governments and multilateral organizations. The paper concludes with policy recommendations for these various stakeholder groups. 

This policy paper draws on various sources to survey contemporary social media misinformation patterns and provide recommendations for multiple stakeholders. Our research uses three primary sources. The first involves current scholarly and practitioner research. Second, we conducted interviews from October 2021 to May 2022 with 40 experts in governments and intergovernmental organizations, 13 in human rights organizations, 10 in technology corporations and tech oversight organizations, eight in media outlets, six in misinformation fact-checking and monitoring groups, 16 in research centers, and 11 in computer science communities.5Many interviewees asked for anonymity to speak candidly. We refer to them by area of expertise in these footnotes. These involved semi-structured interviews with groups and individuals of an hour or more in length. The questions fell into several categories: specification of the misinformation and disinformation problems and their political, legal, and social consequences, particularly concerning atrocities and instability; interviewees’ team or organizational responses; assessment of the broader technical, legal, and policy initiatives currently in place, including their strengths and limitations; discussion of additional steps needed from the wider practitioner community; and, identification of the major challenges for the future.6The questions varied slightly by interviewee type. For instance, discussions with computer scientists went into more detail on technical issues of identifying misinformation and disinformation provenance, monitoring large volumes of social media data in real time and at scale, and related computational issues. Lastly, we draw on our own project with colleagues, Artificial Intelligence, Social Media, and Political Violence Prevention, which uses novel artificial intelligence (AI) tools to identify types and patterns of social media visual misinformation in unstable political situations, particularly in the Russia-Ukraine context.7Additional information on our own project is here: https://kroc.nd.edu/research/artificial-intelligence-social-media-and-political-violence-prevention/.   

Definitions

In this paper, we adopt inclusive formulations of various definitions of key terms relevant to policy development.8Claire Wardle, “Information Disorder: The Essential Glossary” (Shorenstein Center on Media, Politics, and Public Policy, July 2018); Caroline Jack, “Lexicon of Lies: Terms for Problematic Information” (Data & Society Research Institute, August 2017). Social media refers to computer-based platforms that enable people to communicate and share information across virtual networks in real-time. This includes community apps such as Facebook and Twitter, media-sharing channels like YouTube and Instagram, messaging apps such as WhatsApp, and hybrid platforms like Telegram. Scholars typically distinguish between social media misinformation — involving false or misleading information shared on these platforms — and social media disinformation (SMD), the subset of intentionally shared misinformation.9“Malinformation” is sometimes used for true information that is shared with the aim of causing harm, such as private information to damage a person’s reputation. However, we follow the general trend to subsume malinformation within the category of disinformation, since both concern the intentional spread of information to harm, although they differ on the degree of truth content. In practice, the boundaries are porous. Many purveyors of misinformation believe what they are sharing and are not intentionally spreading false information.

Additionally, not all harmful social media posts are wholly false. They may contain some truth, be largely true but misleading or out of context, or may advocate claims that are problematic but do not necessarily qualify as imminently dangerous. Furthermore, proving the origin of specific pieces of misinformation can be exceedingly difficult.10Interviews, computer scientists (February 2022; April 2022).   

Social media disinformation may also be part of influence operations: sustained campaigns usually organized by states, parastatal entities, or organized non-state actors. In other cases, SMD is largely ad hoc and decentralized. In keeping with much of the policy community, we use social media misinformation as an umbrella term for all these forms of problematic social media use and refer to SMD when there is a plausible indication of an intent to deceive, such as with influence operations. We nevertheless address relevant distinctions throughout the paper.

Factors Increasing Misinformation Resonance

SMM can elevate the risk of atrocities across various political conditions,11James Waller, Becoming Evil: How Ordinary People Commit Genocide and Mass Killing, 2nd ed. (Oxford; New York: Oxford University Press, 2007). ranging from repressive/authoritarian (China, Myanmar, Venezuela, Russia, etc.) to semi-democratic (Philippines, India, Indonesia, etc.).12Bradshaw, Bailey, and Howard, “Industrialized Disinformation.” SMM has also taken hold in countries that have historically had strong democratic institutions, including the United States and the United Kingdom, partly due to a lack of trust in institutions and domestic political influences. Contextual differences play a significant role, as these differences correlate with other mitigating institutional and societal factors that help lower the salience of violent SMM as civil liberties increase. This section discusses three key clusters of factors that affect SMM resonance: 1) socio-political cleavages, 2) individual and group psychological dynamics, and 3) the social media ecosystem.13The research on general conditions is extensive and broadly focuses on these three clusters. For an overview see: Lisa Schirch, ed., Social Media Impacts on Conflict and Democracy: The Techtonic Shift, Routledge Advances in International Relations and Global Politics (New York: Routledge, 2021).  

Socio-Political Cleavages

Socio-political cleavages are key to elevating the likelihood of domestic political instability, including atrocities. These include significant social and political polarization, anti-democratic or weakened democratic regimes, and severe governance or security crises.

Severe social and political polarization refers to the hardening of in-group/out-group differences and the weakening of socialization processes that could otherwise moderate tensions. It occurs through the spread of dehumanizing discourses and formal and informal policies and practices. It also reinforces perceived normative differences between groups: out-groups are seen as threats to the interests, goals, security, or survival of the in-group.14David Livingstone Smith, Making Monsters: The Uncanny Power of Dehumanization (Cambridge, Massachusetts: Harvard University Press, 2021). At its most extreme, such polarization may increasingly manifest through violent behavior, including attacks against opponents. Misinformation both draws from and intensifies polarization, underscoring the reinforcing nature of radicalization dynamics.15Kristina Hook, “Hybrid Warfare Is Here to Stay. Now What?,” Political Violence at a Glance (blog), December 12, 2018, https://politicalviolenceataglance.org/2018/12/12/hybrid-warfare-is-here-to-stay-now-what/.

Regime type also matters. Authoritarian and semi-authoritarian governments are much more likely to use disinformation to target opponents, silence dissent, and shape public discourse. However, SMD and SMM have also been effective in diverse contexts of “democratic backsliding”: democracies where the rule of law is unevenly applied, the free press is attacked or marginalized, and populist leaders are increasingly unrestrained by constitutional or legal checks (such as in Hungary, Turkey, and the United States). Although misinformation may originate from numerous sources, including civil society, the critical point is that with weakened legal and institutional constraints on executive authority, social media may become a powerful platform for misinformation and disinformation that is specifically perpetrated by state authorities or their proxies. 

Profound governance or security crises are especially hospitable environments for SMM. These crises can include the likelihood or onset of armed conflict or collective violence, highly contested power transfers (e.g., extremely divisive elections, coups), constitutional crises, or the imposition of emergency rule. Crises heighten the political stakes, making social media “another battlefront in the narrative war.”16Interview, government official, (April 2022).    

In these contexts of generalized misinformation, sustained disinformation campaigns by states and their proxies can create further instability. State-sponsored SMD is often internally targeted, but SMD is increasingly becoming part of foreign policy destabilization and pressure campaigns, as seen with Russian disinformation in contexts ranging from Ukraine to the United States. In short, foreign involvement intensifies the instability factors noted above.17Jithesh Arayankalam and Satish Krishnan, “Relating Foreign Disinformation through Social Media, Domestic Online Media Fractionalization, Government’s Control over Cyberspace, and Social Media-Induced Offline Violence: Insights from the Agenda-Building Theoretical Perspective,” Technological Forecasting and Social Change 166 (May 2021): 1–14, https://doi.org/10.1016/j.techfore.2021.120661.   

These socio-political factors contribute to distrust between citizens and official sources as well as among citizens, increasing the potential influence of social media misinformation on those who feel socially, politically, or economically alienated.

Psychological Dynamics

Three broad categories of psychological dynamics increase group and individual susceptibility to social media misinformation: 1) belonging, 2) intelligibility, and 3) confirmation bias.18Christina Nemr and William Gangware, Weapons of Mass Distraction: Foreign State-Sponsored Disinformation in the Digital Age (Park Advisors, 2019), https://www.state.gov/wp-content/uploads/2019/05/Weapons-of-Mass-Distraction-Foreign-State-Sponsored-Disinformation-in-the-Digital-Age.pdf.    

The first category concerns a natural need for social belonging and security through membership in a group. Research shows that people have a powerful psychological need to connect with others, finding self-worth through community.19Gregory M. Walton and Geoffrey L. Cohen, “A Question of Belonging: Race, Social Fit, and Achievement,” Journal of Personality and Social Psychology 92, no. 1 (2007): 82–96, https://doi.org/10.1037/0022-3514.92.1.82. Social media participation feeds into this need directly: it can satisfy, at least partly, the need for belonging by connecting people with like-minded others and strengthening psychological well-being. Accordingly, people heavily involved in a particular online community may find it difficult to critique dominant narratives, especially if information comes from a trusted or known source. Challenging dominant positions may generate criticism, humiliation, or even expulsion.

To make a complex, seemingly dangerous world intelligible, people often infer clear-cut causal connections, motivations, and relationships where they do not exist. While this is a common psychological heuristic, conditions of heightened instability make this particularly dangerous. Misinformation can replace complex or confusing political phenomena with reductive stories of good-versus-evil and us-versus-them. These epistemological shortcuts, which discount complex analyses and often foreclose critical scrutiny of one’s assumptions and preferences, are amplified in social media echo chambers that reinforce our world views.20Xiaoyan Qiu et al., “Limited Individual Attention and Online Virality of Low-Quality Information,” Nature Human Behaviour 1, no. 7 (July 2017), https://doi.org/10.1038/s41562-017-0132.    

A final psychological factor is confirmation bias, the tendency to point to information that confirms already-held beliefs while rejecting contradictory information. Social media worsens this bias;21Ana Lucía Schmidt et al., “Anatomy of News Consumption on Facebook,” Proceedings of the National Academy of Sciences 114, no. 12 (March 21, 2017): 3035–39, https://doi.org/10.1073/pnas.1617052114. research finds that people online gravitate toward those news sources that affirm their views and move away from challenging sources.22Amy Mitchell et al., “Americans Who Mainly Get Their News on Social Media Are Less Engaged, Less Knowledgeable,” Pew Research Center’s Journalism Project (blog), July 30, 2020, https://www.pewresearch.org/journalism/2020/07/30/americans-who-mainly-get-their-news-on-social-media-are-less-engaged-less-knowledgeable/. Furthermore, effective disinformation campaigns intensify these biases to create politically divisive in-group/out-group distinctions. Research shows that social media users are much more likely to share unverified stories than to post a correction when stories have been shown to be false or manipulated.23Arkaitz Zubiaga et al., “Analysing How People Orient to and Spread Rumours in Social Media by Looking at Conversational Threads,” ed. Naoki Masuda, PLOS ONE 11, no. 3 (March 4, 2016): e0150989, https://doi.org/10.1371/journal.pone.0150989.    

These psychological dynamics become especially important in atrocity-risk contexts, where an ongoing moral reorientation is already recasting out-group targeting from something passively permitted to something actively beneficial.24Waller, Becoming Evil. As we will discuss further, SMM can create the perception of widespread societal buy-in to this moral recasting, facilitating a bandwagon effect.

Social Media Factors

Several context-specific social media factors also contribute to SMM impacts and should be carefully assessed in atrocity-risk contexts. These factors include: a robust social media ecosystem with a developed “attention economy,” and an SMM focus on topics of high political valence (highly salient and divisive, such as an election) where the political outcome is uncertain and political success requires some degree of public support. 

A robust social media ecosystem refers to a context in which a substantial proportion of the public is regularly on social media. This activity sparks large, dense networks facilitating the rapid spread of information. These platforms vary in format, regulation, online culture, and popularity. Given their role in spreading information and SMM, we also include non-mainstream or regionally popular formats like Telegram and Gab and messaging apps like WhatsApp and Viber in this category. Some social media platforms, such as VK (Russian Federation) or TikTok (China), are also believed to share user data with authoritarian governments.25Reuters, “Russia’s VK Internet Group Sold to Company Linked to Putin Ally,” Reuters, December 3, 2021, sec. Europe, https://www.reuters.com/world/europe/russias-usm-holding-sells-stake-vk-sogaz-insurer-2021-12-02/; Salvador Rodriguez, “TikTok Insiders Say Social Media Company Is Tightly Controlled by Chinese Parent ByteDance,” CNBC, June 25, 2021, sec. Technology, https://www.cnbc.com/2021/06/25/tiktok-insiders-say-chinese-parent-bytedance-in-control.html. Because of these differences, distinctions such as foreign influence operations, domestic influence operations, and ill-informed information sharing must be teased out in each context through case studies beyond this paper’s scope.

Despite these differences, the “attention economy” is a crucial aspect of social media.26The attention economy refers to the ways in which a person’s attention is limited, which in turn limits the consumption of information. Attention thus becomes a valuable commodity for corporations and states to influence. Thomas H. Davenport and John C. Beck, The Attention Economy: Understanding the New Currency of Business (Boston, Mass: Harvard Business School Press, 2001). Researchers have noted that a person’s attention is a scarce resource, and social media tech companies rely on keeping users’ attention in order to drive revenue.27Vikram R. Bhargava and Manuel Velasquez, “Ethics of the Attention Economy: The Problem of Social Media Addiction,” Business Ethics Quarterly 31, no. 3 (July 2021): 321–59, https://doi.org/10.1017/beq.2020.32. Tech companies have a range of techniques, such as using “friends,” “likes,” “subscriptions,” and other quantifiable measures to secure users’ attention, which in turn strengthen the social media ecosystem and create a focused, engaged audience for advertisers. The public valuation of tech companies is tied to their number of users, and without sustained public outcry, companies are often slow to crack down on misinformation that may risk driving away users. Some companies have directly addressed troll factories, shell accounts, bots, or other amplifiers of misinformation by establishing special response protocols (discussed in Section V), but these actions are influenced by revenue concerns generated from the attention economy.28Interviews, tech company employees, (April 2022).   
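
To make the attention-economy dynamic concrete, the toy sketch below shows how a feed ranker that optimizes purely for predicted engagement can surface high-engagement misinformation unless flagged content is explicitly demoted. It is an illustration only: the field names, weights, and demotion multiplier are our own hypothetical choices, not any platform’s actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int
    shares: int
    comments: int
    flagged_as_misinformation: bool = False  # e.g., a label from fact-checking partners

def engagement_score(post: Post) -> float:
    # Comments and shares are weighted above likes because they keep users engaged
    # longer; maximizing this kind of score is the core attention-economy logic.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post], demote_flagged: bool = True) -> list[Post]:
    def score(p: Post) -> float:
        s = engagement_score(p)
        # A demotion multiplier is one lever platforms can pull; without it,
        # high-engagement misinformation rises to the top of the feed.
        if demote_flagged and p.flagged_as_misinformation:
            s *= 0.1
        return s
    return sorted(posts, key=score, reverse=True)
```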

High political valence topics will vary by context, but the point is that they elicit strong public reactions and are framed to resonate along cleavages like political ideology or party, ethnicity, race, religion, or other recognized fault lines.29William J. Brady et al., “Emotion Shapes the Diffusion of Moralized Content in Social Networks,” Proceedings of the National Academy of Sciences 114, no. 28 (July 11, 2017): 7313–18, https://doi.org/10.1073/pnas.1618923114; Thomas Zeitzoff, “How Social Media Is Changing Conflict,” Journal of Conflict Resolution 61, no. 9 (October 2017): 1970–91, https://doi.org/10.1177/0022002717721392. Uncertain political outcomes that depend on public support are especially susceptible to various forms of information disorder, including misinformation and disinformation.30Information disorder is used by some experts to refer to the full range of “propaganda, lies, conspiracies, rumors, hoaxes, hyper-partisan content, falsehood or manipulated media,” including misinformation, disinformation, and malinformation. Claire Wardle, “Understanding Information Disorder,” First Draft, September 22, 2020, https://firstdraftnews.org:443/long-form-article/understanding-information-disorder/. In particular, disinformation campaigns are often used to secure backing by discrediting the opposition. The lead-up to presidential elections in Brazil (2018)31Raquel Recuero, Felipe Bonow Soares, and Anatoliy Gruzd, “Hyperpartisanship, Disinformation and Political Conversations on Twitter: The Brazilian Presidential Election of 2018,” Proceedings of the International AAAI Conference on Web and Social Media 14, no. 1 (May 26, 2020): 569–78, https://ojs.aaai.org/index.php/ICWSM/article/view/7324. , Colombia (2022)32ColombiaCheck, “Elecciones 2022,” accessed June 28, 2022, https://colombiacheck.com/elecciones-2022. , Indonesia (2019)33Saiful Mujani and Nicholas Kuipers, “Who Believed Misinformation during the 2019 Indonesian Election?,” Asian Survey 60, no. 6 (December 3, 2020): 1029–43, https://doi.org/10.1525/as.2020.60.6.1029. , and the United States (2020)34Amy Mitchell et al., “Misinformation and Competing Views of Reality Abounded throughout 2020,” Pew Research Center’s Journalism Project (blog), February 22, 2021, https://www.pewresearch.org/journalism/2021/02/22/misinformation-and-competing-views-of-reality-abounded-throughout-2020/. illustrate these dynamics. These campaigns are often sustained, coordinated, and sophisticated efforts to manipulate public sentiment and views, well beyond uncoordinated or sporadic misinformation posts.

Nevertheless, the context-specific nature of disinformation content, provenance, and dissemination patterns poses significant challenges for systematic theorizing and forecasting how disinformation will interact with high political valence issues. “Disinformation is not a wholly portable subject,” one expert told us. “Knowing about the subject of disinformation is not sufficient. In an individual context, you must know about the nuances of a politicized topic before combining this knowledge with technical, computational, and other social science approaches to disinformation. Disinformation is a derivative of specific real-life events.”35Interview, social scientist (March 2022).    

Functions of Social Media Misinformation

Existing research suggests that SMM is not a direct cause of severe instability or mass atrocities but it can be an enabler,36Matthew L Williams et al., “Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Aggravated Crime,” The British Journal of Criminology, July 23, 2019, azz049, https://doi.org/10.1093/bjc/azz049. legitimizing and accelerating violence in a variety of ways. Below, we frame these within three broad categories concerning the primary functions of misinformation. SMM contributes to:

Out-group Vulnerability by:

  • Fraying social relations between individuals and groups. Research has found that misinformation campaigns weaken social ties and separate people into increasingly self-contained online political communities, with fewer opportunities to encounter counter-narratives or other sources of information. This fragmentation also opens potential avenues of radicalization.37Gregory Asmolov, “The Disconnective Power of Disinformation Campaigns,” Journal of International Affairs 71 (September 18, 2018): 69–76, https://jia.sipa.columbia.edu/disconnective-power-disinformation-campaigns.
  • Spreading dehumanizing or polarizing discourse that normalizes perceptions of political opponents as untrustworthy adversaries or even existential threats. Sustained exposure to such severe dehumanizing discourse legitimizes marginalization, the denial of rights, and sometimes violence.38Victoire Rio, “Myanmar: The Role of Social Media in Fomenting Violence,” in Social Media Impacts on Conflict and Democracy (Routledge, 2021), 143–60.
  • Targeting opposition groups or leaders with specific false accusations such as corruption or disloyalty, or smear campaigns to undermine their credibility. This is especially common prior to pivotal transitions, such as upcoming elections, that often correlate with increased atrocity risks.
  • Attacking opposition figures to dissuade public deliberation and curtail speech. Sustained and relentless harassment of opponents discourages alternative views from emerging and signals to potential critics that they, too, may be targeted. It may also lower the perceived costs of violence against opposition figures.

In-group Cohesion by:

  • Presenting one’s group as the authentic defenders of important values and collective survival. This is a group-affirming function as it repositions one’s group as engaged in a righteous struggle with enormous stakes, which may lower the perceived prohibitions against violence. Groups’ differences are framed in heavily normative terms with little space for critique or compromise. 
  • Building a collective identity around perceptions of persecution, situating fear and grievances at the center of identity, a noted dynamic often present in atrocity contexts.39Rio, “Myanmar: The Role of Social Media in Fomenting Violence.”   
  • Advocating collective self-defense or preemptive attack against perceived enemies, another feature noted in atrocity-risk contexts.40Ibid. Social media is an effective platform for spreading calls for direct assaults on opponents and also for the practical work of organizing collective action off-line, including the creation of extremist advocacy organizations, secret cells, and militia groups.41Mattias Wahlström and Anton Törnberg, “Social Media Mechanisms for Right-Wing Political Violence in the 21st Century: Discursive Opportunities, Group Dynamics, and Co-Ordination,” Terrorism and Political Violence 33, no. 4 (2021): 766–87.    

Erosion of Civil Trust and the Spread of Epistemic Insecurity by:

  • Introducing unrelated narratives or counterattacks. Misinformation can be part of a larger diversionary effort to draw attention from damaging criticisms or counterarguments.42Hook, “Hybrid Warfare Is Here to Stay. Now What?”    
  • Spreading doubt on a specific issue, such as electoral fairness, candidate integrity, or opposing party intentions.
  • Creating what we term epistemic insecurity, in which charges of bias, dissimulation, and conspiracy undermine truth claims, evaluative standards, and evidence, which are then replaced by alternative, unsubstantiated narratives. Sustained misinformation campaigns can have a cumulative effect of sabotaging factual reporting and narratives while lowering social and epistemological barriers to far-fetched conspiratorial thinking.43Daniel Allington, “Conspiracy Theories, Radicalisation and Digital Media” (London: Global Network on Extremism and Technology, 2021). Several experts in disinformation response units of tech companies noted that foreign-supported SMD campaigns often are directed at spreading epistemic insecurity, or what they termed “perception hacking.”44Interviews, tech company experts (April 2022).

The points above underscore misinformation’s varied and self-reinforcing effects in fragile political contexts. These effects raise profound challenges for the atrocity prevention community, as discussed in the following section.

SMM Challenges for Instability and Atrocity Prevention

Social media misinformation presents various practical challenges to atrocity-focused work, affecting the full spectrum of efforts from early warning, deterrence and response, to prosecution. Any instability and atrocity prevention program across these categories needs to consider the following challenges to craft viable, effective policies and strategies:

Speed

The rapid pace of social media and the technology culture associated with its rise accelerate the spread of misinformation, pushing it faster and further. One technical and policy expert said a “maximum engagement strategy” pushes information into the public sphere with little regard for critical inquiry or healthy skepticism.45Interview, expert from monitoring group (February 2022). Research suggests that social media platforms are also changing norms, expectations, and practices in journalism — shaping the professional cultures across all digital, print, television, and radio industries — as journalists report implicit or explicit pressure to “publish content online quickly at the expense of accuracy” for profit reasons.46Angela M. Lee, “Social Media and Speed-Driven Journalism: Expectations and Practices,” International Journal on Media Management 17, no. 4 (2015): 217–39. Our interviews revealed a similar pattern in social media; interviewees noted the pressure policymakers (and others) feel to respond to crises within hours. Speed also contributes to crowding out unpopular opinions with little thought given to dissenting views, an effect seen in the amplification of COVID-19 disinformation and misinformation worldwide.47“Meeting COVID-19 Misinformation and Disinformation Head-On,” Johns Hopkins Bloomberg School of Public Health, accessed May 12, 2022, https://publichealth.jhu.edu/meeting-covid-19-misinformation-and-disinformation-head-on. The speed of SMM concentrates, strengthens, and amplifies confirmation biases.

Chaos

Social media contributes to an overabundance of data, challenging instability and atrocity prevention policy experts to identify and prioritize what is most important for decision-making and filter out misinformation, all in contexts that strain resources. The rise of SMM has corresponded to a modern transition to the information age, where information itself has become a productive force.48J. D. Bernal, The Social Function of Science (London: Routledge & Sons Ltd, 1939), updated by experts like Raghu Krishnapuram, “Global Trends in Information Technology and Their Implication” (2013 1st International Conference on Emerging Trends and Applications in Computer Science (ICETACS), Shillong, India: IEEE, 2013), https://doi.org/10.1109/ICETACS.2013.6691382. Mainly due to social media activity, data are now created at an unprecedented speed, with an estimated 44 trillion gigabytes of data generated daily.49Frank Oestergaard, Susanne Beck Kimman, and Søren Ravn Pedersen, “Control Your Data or Drown Trying,” IBM Nordic Blog, September 11, 2019, https://www.ibm.com/blogs/nordic-msp/control-your-data-or-drown-trying/. Yet far from meeting policymakers’ needs, these trends have created an informational overload: “Without new capabilities, the paradox of having too much data and too little insight would continue.”50Nicholas Drury and Sandipan Sarkar, “How Cognitive Computing Will Revolutionize Banks and Financial Markets,” IBM THINK Blog, November 17, 2015, https://www.ibm.com/blogs/think/2015/11/cognitive-banking-2/. In public remarks, a former U.S. Department of Defense advisor said, “If you could apply human judgements, it’s wonderful, otherwise substituting human judgment and talent with computing analytics does not work.”51Tara Kangarlou, “State Department Hopes It Can Find Peace among Data,” CNN, May 1, 2013, https://www.cnn.com/2013/04/30/politics/state-data-analysis. The turmoil of atrocity-risk contexts presents decision-makers with some of the world’s most complex challenges for actionable pattern identification. One former government analyst noted that SMM is “gasoline to the fog of war.”52Interview (December 2021).
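
As one illustration of how analysts might cope with this volume, the sketch below triages a stream of posts by filtering on analyst-defined risk terms and ranking by how quickly each post is spreading, so scarce human judgment goes first to the fastest-moving, highest-risk content. The keywords, thresholds, and data structure are hypothetical placeholders rather than a validated monitoring design.

```python
from datetime import datetime, timedelta

# Illustrative dehumanizing terms; a real deployment would use curated,
# context-specific lexicons maintained with local experts.
RISK_KEYWORDS = {"vermin", "traitors", "cleanse"}

def spread_velocity(share_times: list[datetime], window_hours: int = 6) -> float:
    """Shares per hour within the most recent window."""
    if not share_times:
        return 0.0
    cutoff = max(share_times) - timedelta(hours=window_hours)
    recent = [t for t in share_times if t >= cutoff]
    return len(recent) / window_hours

def triage(posts: list[dict], top_n: int = 20) -> list[dict]:
    """posts: [{'text': str, 'share_times': [datetime, ...]}, ...]"""
    matches = [p for p in posts
               if any(k in p["text"].lower() for k in RISK_KEYWORDS)]
    return sorted(matches,
                  key=lambda p: spread_velocity(p["share_times"]),
                  reverse=True)[:top_n]
```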

Curation and Control

Social media invites curation, providing abundant opportunities for fostering polarization and SMD influence operations by external actors. The experts interviewed across professional sectors identified several themes related to the unique curation powers of social media and their implications.53Multiple interviews, experts in government, tech corporations, and misinformation fact-checking organizations (October 2021-May 2022). One analyst framed every social media interaction as having invisible third parties that shape the interaction, from software engineers who make various behavioral assumptions about users in developing social media algorithms to political actors employing dissimulation to advance their goals. Another policymaker discussed how viral SMM content, like false accusations of imminent violence by opponents, can dominate political deliberation by saturating the information sphere. Research confirms that perception biases are linked to how imminent a given risk is perceived to be rather than to the actual severity of the danger — a dynamic that atrocity perpetrators can exploit to dehumanize targeted groups.54Michael W. Slimak and Thomas Dietz, “Personal Values, Beliefs, and Ecological Risk Perception,” Risk Analysis 26, no. 6 (December 2006): 1689–1705, https://doi.org/10.1111/j.1539-6924.2006.00832.x.

Experts also cited greater credulity toward SMM when it is passed along by a trusted source such as a relative, friend, or colleague. One researcher argued that social media has extended traditional marketing practices of segmentation (dividing one’s market into targeted groups) into micro-segmentation, allowing SMD promulgators to test various disinformation narratives through promoted ads and adapt the content in real time based on audience feedback. The overlap of competing narratives, conspiracy, and chaos creates a strategic messaging advantage for those able to leverage these data, including SMM and SMD actors.55Jakob Rigi, “The War in Chechnya: The Chaotic Mode of Domination, Violence and Bare Life in the Post-Soviet Context,” Critique of Anthropology 27, no. 1 (March 2007): 37–62, https://doi.org/10.1177/0308275X07073818. These dynamics highlight the importance of legacy media institutions and investigative journalism in countering atrocity risks associated with SMM, although existing challenges limit their effectiveness in this area, as we address in the policy recommendation section.
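
The micro-segmentation dynamic described above rests on a simple adaptive testing loop, familiar from ordinary marketing A/B tests: show several message variants, measure engagement, and shift spending toward whichever variant performs best. The sketch below illustrates that loop in principle with an epsilon-greedy selection rule; the variant names and parameters are hypothetical and not drawn from any observed campaign.

```python
import random

class MessageTester:
    """Epsilon-greedy selection over message variants (generic A/B-testing logic)."""

    def __init__(self, variants: list[str], epsilon: float = 0.1):
        self.variants = variants
        self.epsilon = epsilon
        self.shown = {v: 0 for v in variants}
        self.engaged = {v: 0 for v in variants}

    def pick_variant(self) -> str:
        # Mostly exploit the best-performing variant so far; occasionally explore.
        if random.random() < self.epsilon or not any(self.shown.values()):
            return random.choice(self.variants)
        return max(self.variants,
                   key=lambda v: self.engaged[v] / self.shown[v] if self.shown[v] else 0.0)

    def record(self, variant: str, engaged: bool) -> None:
        self.shown[variant] += 1
        if engaged:
            self.engaged[variant] += 1

# Hypothetical usage: each impression picks a variant and records whether it drew engagement.
tester = MessageTester(["narrative_a", "narrative_b", "narrative_c"])
```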

Capacity and Anonymity

New technologies are increasingly affordable and simple to use. While social media platforms may bring a democratizing aspect to the information space, this ease of access and anonymity also give disinformation creators, amplifiers, and funders more opportunity to act with impunity. Growing capacity and anonymity increase the likelihood of SMM, though our interviewees also suggested that these features may likewise enable human rights defenders to develop or extend decentralized grassroots campaigns in novel ways (e.g., Ushahidi and Una Hakika in Kenya).56Frederick Ogenga, “Kenya: Social Media Literacy, Ethnicity, and Peacebuilding,” in Social Media Impacts on Conflict and Democracy (Routledge, 2021), 131–42. Although cases differ, our interviews reveal a general divide among experts over how to respond to the double-edged consequences of greater capacity and anonymity. Some experts see a net positive that allows for new means of localized peacebuilding, information sharing, and anonymity for otherwise vulnerable sources, while others remain skeptical of the ability of peacebuilding groups to use social media effectively in the face of strong SMM and disinformation campaigns by states or other armed actors.57Multiple interviews with experts from government and non-governmental organizations, January 2022-May 2022. We note that this division appears to map closely onto interviewees’ general perspectives about the relative benefits of localized versus national or international peacebuilding approaches.     

Speech Rights: Balancing Civil Liberties and Civilian Protection

Long-standing debates and global differences surrounding the balance between national security, civil liberties, and civilian protection have sharpened in the face of SMM. Some states like New Zealand58Classification Office (New Zealand), “The Edge of the Infodemic: Challenging Misinformation in Aotearoa,” 2021, https://apo.org.au/node/312981. and intergovernmental organizations like the EU59“The Digital Services Act Package: Shaping Europe’s Digital Future,” European Commission, n.d., https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package.  have strengthened oversight, standards, and penalties for misinformation and disinformation content on social media. Other states, such as Myanmar, China, or Russia60Bradshaw, Bailey, and Howard, “Industrialized Disinformation.” , use national security as a pretext for limiting free speech and civil liberties. Different global contexts have diverse norms, ethical standards, and values around the blend of civil liberties, national security, and civilian protection.61Cheryl Rivers and Anne Louise Lytle, “Lying, Cheating Foreigners!! Negotiation Ethics across Cultures,” International Negotiation 12, no. 1 (2007): 1–28.   

Additionally, major tech companies like Meta, Google, and Twitter, among others, are often hesitant to employ policies that may be seen as limiting free speech, noting the practical and logistical difficulties of enforcing consistent standards across many legal contexts.62Interviews, tech company content moderation experts and oversight organizations (March 2022; April 2022). This is further complicated by conflicting national and regional laws, providing unclear direction and accountability for these companies whose reach is nearly global (the 2022 EU Digital Services and Digital Markets Acts, for instance, are more demanding than current U.S. legislation).

Regardless, corporate content moderation efforts, even from companies such as Meta that have invested significant resources in this area, are often ineffective, slow, or inconsistently applied, either because of insufficient resources used to combat misinformation, unclear legal obligations and expectations that change by jurisdiction, or a commitment to profits over truth.63Petros Iosifidis and Nicholas Nicoli, “The Battle to End Fake News: A Qualitative Content Analysis of Facebook Announcements on How It Combats Disinformation,” International Communication Gazette 82, no. 1 (February 2020): 60–81, https://doi.org/10.1177/1748048519880729; Adam L. Alter, Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked (New York: Penguin Press, 2017); Interview with fact-checking and monitoring groups, (December 2021, February 2022). Other social media platforms, such as Gab and Telegram, have weaker moderation rules and are less willing to regulate potentially dangerous speech.64Iria Puyosa and Esteban Ponce de Leon, “Understanding Telegram’s Ecosystem of Far-Right Channels in the US,” DFRLab (blog), March 23, 2022, https://medium.com/dfrlab/understanding-telegrams-ecosystem-of-far-right-channels-in-the-us-22e963c09234. Without clear, robust, and implementable international standards, the balance between civil liberties and civilian protection will continue to vary across situations, including any national legal frameworks that might exist in an atrocity-risk context.65Ted van Baarda et al., eds., The Moral Dimension of Asymmetrical Warfare: Counter-Terrorism, Democratic Values and Military Ethics (Leiden ; Boston: Martinus Nijhoff, 2009).    

Censorship and Accountability Efforts

Conflicting actions between stakeholders, such as removing content flagged by social media moderators as violating community standards, may lead to a lack of information access for prosecutors, investigators, and prevention policymakers, especially in international contexts not subject to legal warrants. This challenge has been noted by social media companies surveyed for this paper, yet at the time of this writing potential solutions remained under review by some of those companies. Relatedly, when asked what the major challenge to their efforts was, governmental policymakers cited data access, an issue made more complicated by security classifications. As expressed by policymakers interviewed, except in criminal cases, technology companies’ disclosure of internal data, including highly profitable micro-segmentation metrics, remains a voluntary step. Uneven data-sharing policies by technology companies operating in contexts with highly variable civil liberties, such as China and the United States, complicate matters. These dynamics often offer less opportunity for democratic states that sponsor atrocity prevention efforts to use these data in their work. However, even in democracies, government demands for information on account owners and connections raise profound, contested privacy and surveillance issues. Additionally, social media monitoring efforts to remove dangerous content may lead to a lack of data and information access for prosecutors and investigators pursuing atrocity perpetrators.66“‘Video Unavailable’: Social Media Platforms Remove Evidence of War Crimes” (Human Rights Watch, September 10, 2020), https://www.hrw.org/report/2020/09/10/video-unavailable/social-media-platforms-remove-evidence-war-crimes.

Policy Recommendations

Effective atrocity prevention requires a coherent, integrated vision across the socio-political, psychological, and technical domains. These approaches respectively address the societal dynamics that enable atrocities, psychological dimensions that link atrocities and SMM (including the attention economy), and other technical components (e.g., identifying, monitoring, classifying, and countering SMM). Multiple interviewees underscored the context-specific nature of addressing SMM but also highlighted general prevention strategies that can be adopted. Countering SMM is one component of atrocity prevention, but an increasingly important one. This last section includes recommendations tailored to different groups of stakeholders, although we envision some of these recommendations holding for multiple actors. These stakeholders include social media corporations, established (legacy) media, non-governmental civil society actors, researchers and civil society, and governments and multilateral organizations. Each of the actors plays a significant role in countering SMM. We include specific guidance for each group and broader recommendations to strengthen strategic partnerships among them for atrocity prevention.

Social Media Corporations

This category includes private corporations like Meta (Facebook/Instagram), Twitter, and Google (including YouTube), among others. Although many large corporations have offices and strategies designed to tackle SMM, the following steps are needed:

  • Recognize that the platforms play a complex, central role in any effort to curtail disinformation. Remind shareholders that being a field leader includes addressing SMM; the consumer base expects this.
  • Adjust algorithms amplifying SMM, especially to reduce the reach of SMM and conspiracy accounts by demoting and de-prioritizing this content from users’ feeds. De-platforming accounts that call for violence works.67Although deplatforming may lead some users simply to migrate to other less regulated platforms, several experts told us that rebuilding these social networks can be difficult and time consuming, and consequently the new networks are often smaller. Interviews, fact-checking and violence monitoring experts, (March 2022; May 2022). https://www.niemanlab.org/2021/06/deplatforming-works-this-new-data-on-trump-tweets-shows/.   
  • Close fake and bot accounts regularly and proactively (a simplified, illustrative detection heuristic follows this list). Ensure a field-wide standard that provides regular announcements with statistical information and external evaluations (as large companies like Meta currently do68Chris Sonderby, “Transparency Report, Second Half 2021,” Meta (blog), May 17, 2022, https://about.fb.com/news/2022/05/transparency-report-h2-2021/. ) on these efforts to show the companies are responsive to public demand.
  • Continue to clarify and consistently enforce policies and procedures for content monitoring, flagging, and removal. 
  • Institute robust oversight mechanisms and empower staff to exercise decisional authority on flagging and removal. Ensure staff are protected from corporate retaliation or contract non-renewals for third-party vendors. 
  • Prioritize SMM cases continually, not just when under public scrutiny. Blend efforts to include a clear focus on upstream, preventive monitoring rather than reactive response efforts. Doing so will better equip companies to work with other stakeholders in identifying upstream efforts, thereby preventing (and not just responding to) SMM, especially disinformation. This involves more robust efforts to integrate “lessons learned” from previous cases.
  • Commission and implement annual polarization and conflict awareness education among staff, senior decision-makers, and content monitoring specialists. The latter must have regular refresher training and extra insight into context-specific trends. Similarly, work with instability and atrocity prevention experts to deepen such knowledge in internal crisis monitoring teams. This allows companies to better anticipate future atrocity and instability episodes and prepare accordingly.
  • Invest more in local partnerships in the Global South for content moderation. Local experts are an essential asset but often need greater support and resources.69Alexandra Stevenson, “Soldiers in Facebook’s War on Fake News Are Feeling Overrun,” The New York Times, October 9, 2018, sec. Business, https://www.nytimes.com/2018/10/09/business/facebook-philippines-rappler-fake-news.html. Several experts interviewed noted that many tech company experts have little experiential and other knowledge of conflict-affected societies awash with violent SMM. For their part, tech company stakeholders admitted that significant gaps in their contextual knowledge exist despite hiring some local experts.70Interviews, tech company experts, monitoring organization experts (April 2022; May 2022).
  • Protect content analyst and moderator employees in violent or repressive locations, including by greater anonymization of company sources where possible. 
  • Strengthen and formalize rapid response linkages to the instability and atrocity prevention community. Some of this has occurred, but experts note it remains relatively weak.71Interviews, government officials, human rights organization experts (May 2022). Invest in building these networks before crises. 
  • Resist self-censorship demands from recognized authoritarian regimes, including the sharing of surveillance information on human rights defenders working to prevent violence in their local contexts. 
  • Maximize the effectiveness of corporate giving by sponsoring and participating in digital media literacy programs, journalism grants that allow media paywalls to be removed during instability and atrocity crises, and academic research grant programs for poorly understood aspects of SMM (such as the specific reinforcing processes connecting online extremism and in-world violence).
  • Work with advertisers to limit or stop advertising on known SMD sites which monetize disinformation.72Elise Thomas, “Conspiracy Clickbait: The Art of the (Affiliate Marketing) Deal,” Case Study (London: Institute for Strategic Dialogue, 2022), https://www.isdglobal.org/wp-content/uploads/2022/01/Conspiracy-Clickbait-Study-3.pdf.    
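
As referenced in the bullet on fake and bot accounts above, the sketch below shows the kind of simple behavioral signals — posting rate, account age, and repetition of identical text — that can feed an initial bot-likeness score. This is a deliberately simplified illustration with placeholder thresholds; production systems combine many more signals (network structure, device and infrastructure fingerprints, machine-learned classifiers) and require human review before any enforcement.

```python
from collections import Counter

def bot_likeness(posts_per_day: float, account_age_days: int,
                 post_texts: list[str]) -> float:
    """Return a rough 0-1 score; higher means more bot-like. Thresholds are placeholders."""
    score = 0.0
    if posts_per_day > 100:        # implausibly high posting rate
        score += 0.4
    if account_age_days < 30:      # very new account
        score += 0.2
    if post_texts:
        # Heavy repetition of identical text suggests automated or coordinated amplification.
        most_common_count = Counter(post_texts).most_common(1)[0][1]
        if most_common_count / len(post_texts) > 0.8:
            score += 0.4
    return min(score, 1.0)

def flag_for_review(accounts: dict[str, dict], threshold: float = 0.7) -> list[str]:
    """accounts: {account_id: {'posts_per_day': ..., 'age_days': ..., 'texts': [...]}}"""
    return [acct for acct, f in accounts.items()
            if bot_likeness(f["posts_per_day"], f["age_days"], f["texts"]) >= threshold]
```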

Established (Legacy) Media

  • Recognize the explicit overlap between robust journalism, atrocity prevention, and SMM. Years of strong investigative journalism may later be necessary to counter dehumanizing calls for violence.73Rachel Treisman, “Putin’s Claim of Fighting against Ukraine ‘neo-Nazis’ Distorts History, Scholars Say,” NPR, March 1, 2022, sec. Europe, https://www.npr.org/2022/03/01/1083677765/putin-denazify-ukraine-russia-history.   
  • Prioritize fact-checking and analysis over merely reporting extreme discourse to build a shared basis of facts to counter atrocity-related SMM.
  • Invest resources in long-term coverage of key issues, creating a foundation of public record facts that can counter later disinformation campaigns that elevate atrocity risks.
  • Work with civil society groups to lead digital media literacy programs, an essential component of reducing bandwagon effects that dehumanize or enable atrocity risks through viral SMM. 
  • Publicize, protect, and adhere to journalistic norms, explaining to the public what these are and why they are essential. Build trust as a facts-based arbiter by acknowledging mistakes or failures around these norms. This trust will be essential during the heated moments of accumulating SMM-linked atrocity risks.
  • Speak openly about the distinction between news aggregators as sources of information and primary or investigative reporting, and about the latter’s diminishing presence.
  • Remove paywall protection for critical news coverage.
  • Participate in, support, and foster career advancement opportunities associated with non-major city reporting opportunities (e.g., the USA Today Network for rural community reporting), using these opportunities to widen public trust and digital literacy. Consider whether these models can be supported in contexts with long-term, structural atrocity risks.
  • Host public events with academics and public intellectuals who can discuss human tendencies toward epistemic insecurity.
  • Normalize discussions of bias and cognitive dissonance, including among journalists, and be open with the public about the steps taken to minimize these in reporting.

Non-Governmental Civil Society Actors

  • Host tabletop exercises and SMM-linked atrocity prevention simulations for stakeholders across all categories, reducing siloed efforts and fostering relationships before crises strike.
  • Working with other civil society organizations, create an action plan to strengthen coordination around monitoring and moderation on social media platforms to avoid piecemeal strategies that work at cross-purposes. Solicit input and buy-in from social media companies.
  • Build public advocacy for social media accountability. Maintain relationships with social media stakeholders, and when possible, work in such a way that builds political capital for reformers within the system. Additionally, maintain public pressure on tech companies that “whitewash” superficial SMM efforts. 
  • Pressure tech companies to establish common binding norms and policies on SMM (little incentive exists for them to do so individually). The Global Internet Forum to Counter Terrorism is a start but should be expanded substantially through greater transparency, increased membership, and metrics that move beyond “low-hanging fruit.”74Global Internet Forum to Counter Terrorism: https://gifct.org. The GIFCT was founded by Facebook, Microsoft, Twitter and YouTube in 2017 to stop extremists from exploiting social media platforms. Dunston Allison-Hope, Lindsey Anderson, and Susan Morgan, “Human Rights Impact Assessment: Global Internet Forum to Counter Terrorism,” BSR, July 20, 2021, https://www.bsr.org/en/our-insights/report-view/human-rights-impact-assessment-global-internet-forum-to-counter-terrorism.     
  • Encourage financial support that enables news media to drop paywalls on critical news coverage.
  • Be realistic about organizational strengths and resources, planning a division-of-labor strategy with other civil society organizations.
  • Contribute detailed conflict mapping of instability contexts and actors, laying the groundwork for tailored and sustained SMM responses. Knowledge of these contexts can help tech companies fulfill the recommendation for more upstream, preventive efforts.
  • Practice reflexivity and prioritize platforms for Global South actors to contribute their expertise.

Researchers and Civil Society

  • Participate in tabletop exercises with other stakeholders, building networks with practitioners in the field. Senior academics should advocate internally for such activities to count toward early-career researchers’ performance metrics (e.g., tenure), building a bench of scholars with SMM and atrocity prevention expertise at all career levels and ensuring long-term sustainability.
  • Expand fact-checking networks to monitor, flag, and publicize disinformation. This may include combining expertise across high-political-valence issue areas (elections, public health, security, etc.).
  • Remember that atrocity prevention and SMM response operate on shorter timelines than research, and consider how research can provide empirically grounded frameworks for organizing information in real time.
  • Regularly speak with other stakeholders, asking what types of research are needed for practitioners’ atrocity prevention/SMM toolkit.
  • With civil society actors, lead trainings in SMM monitoring and digital media literacy for the public.
  • Partner with psychologists and other experts on education programs about SMM-related biases.
  • Prioritize the integration of SMM analysis into violence and instability early warning modeling.75
  • Support the role of local experts, especially those from traditionally marginalized communities, including in the Global South, in knowledge production around SMM. Provide support and anonymity when they work in repressive or violent contexts.
  • Practice media skills using available resources (e.g., university media offices, conversations with local journalists, and organizations like the Alan Alda Center for Communicating Science) and write these into grants. Employ these skills to “translate” research to SMM stakeholders and the public.
  • Develop more precise and operable frameworks of “harm” that recognize the speed and scale of social media diffusion;76 work with instability and atrocity prevention experts on these efforts.

Governments and Multilateral Organizations

  • Develop and strengthen internal SMM analytical capacity, including hiring information officers with experience in the linkage between instability, atrocity prevention, and SMM.
  • Work with academic experts and integrate SMM into early warning and accountability policy toolkits. Fund case study-specific research to determine what approaches, tools, and lessons learned may be portable (or not) across contexts.
  • Increase policy analysts’ and policymakers’ knowledge of how social media operates across different platforms, including those with little regulation, such as Telegram and Gab.
  • Be mindful of too much direct government involvement, as this can reinforce mistrust of government. Ask internally whether civil society or other stakeholders can better lead public messaging on a contentious subject.
  • Raise these topics internally and with government and multilateral counterparts, asking specifically about possible mismatches in SMM working definitions. Tailored questions can reveal whether competing mandates or mismatched working definitions reduce policy effectiveness.
  • Strengthen legislation and international agreements to ensure that tech companies that remove SMM can share material relevant to atrocities and instability with human rights researchers and prosecutors. Tech companies often remove material quickly for violating their Terms of Service and retain it only for a limited period, yet this material can be valuable for analysis and accountability. Current legal avenues, such as the Stored Communications Act, the CLOUD Act, and various Mutual Legal Assistance Treaties (MLATs), are cumbersome and slow.77 The Berkeley Protocol on Digital Open Source Investigations provides practical suggestions for improvements.78
  • Clarify institutional roles and policies in countering SMM internally and publicly.79 Avoid actions that may reduce the effectiveness of civil society efforts to counter SMM.
  • Guard against the tendency to prioritize process over outcomes.
  • Democratic governments and multilateral organizations should use their influence and lobbying platforms with other governments: despite the perceived power of social media companies, their local content moderators can still face governmental harassment in non-free contexts.
  • Utilize global and regional organizations and platforms (e.g., UN, EU, AU, OAS) to integrate SMM analysis with instability and atrocity prevention networks, policies, and doctrine. Currently, this work is ad hoc, with little concrete sharing of best practices.80 Intergovernmental agencies can be crucial in coordinating and sharing knowledge on standards, policies, and practical tools for combatting SMM.
  • To complement non-governmental organization and academic-led monitoring efforts, legislators and parliamentarians should consider legislation that establishes accountability and oversight mechanisms for violence and incitement that can be directly connected to platforms.81

Acknowledgments

We would like to thank our interviewees, many of whom requested anonymity. Additionally, we’d like to extend our gratitude to Isabelle Arnson, Amir Bagherpour, Rachel Barkley, Jeremy Blackburn, Cathy Buerger, Kate Ferguson, Tibi Galis, Andrea Gittleman, Derrick Gyamfi, Maygane Janin, Ashleigh Landau, Khya Morton, Savita Pawnday, Max Pensky, Iria Puyosa, Sandra Ristovska, Walter Scheirer, Lisa Schirch, Karen Smith, Fabienne Tarrant, Tim Weninger, Kerry Whigham, Rachel Wolbers, Lawrence Woocher, Oksana Yanchuk, and Michael Yankoski for discussing these issues with us. Finally, we thank Ilhan Dahir, James Finkel, Shakiba Mashayekhi, and Lisa Sharland from the Stimson Center.

About the Authors 

Kristina Hook is Assistant Professor of Conflict Management at Kennesaw State University’s School of Conflict Management, Peacebuilding, and Development. A specialist in Ukraine and Russia, her expertise includes genocide and mass atrocity prevention, emerging technologies and disinformation, post-conflict reconstruction, and war-related environmental degradation. She regularly consults with government, multilateral, and human rights organizations on these issues. Prior to her time in academia, she served as a U.S. Department of State policy advisor for conflict stabilization. 

Ernesto Verdeja is Associate Professor of Political Science and Peace Studies at the Kroc Institute for International Peace Studies, Keough School of Global Affairs, University of Notre Dame. He is also the Executive Director of the non-profit organization Institute for the Study of Genocide. His research focuses on the causes and prevention of genocide and mass atrocities, social media disinformation and mass violence, and transitional justice. He regularly consults with governments and human rights organizations on these topics.

Notes

  • 1
    Natalie Jomini Stroud, Emily Thorson, and Dannagal Young, “Making Sense of Information and Judging Its Credibility” (Understanding and Addressing the Disinformation Ecosystem, Annenberg School for Communications, 2017), 45–50, https://firstdraftnews.org/wp-content/uploads/2018/03/The-Disinformation-Ecosystem-20180207-v3.pdf.
  • 2
    Patrick Van Kessel, Skye Toor, and Aaron Smith, “A Week in the Life of Popular YouTube Channels,” Pew Research Center: Internet, Science & Tech (blog), July 25, 2019, https://www.pewresearch.org/internet/2019/07/25/a-week-in-the-life-of-popular-youtube-channels/.
  • 3
    Stan Schroeder, “Telegram Hits 500 Million Active Users amid WhatsApp Backlash,” Mashable, January 13, 2021, https://mashable.com/article/telegram-500-million.
  • 4
    Samantha Bradshaw, Hannah Bailey, and Philip N. Howard, “Industrialized Disinformation: 2020 Global Inventory of Organised Social Media Manipulation,” Working Paper (Oxford, UK: Project on Computational Propaganda, 2021), https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/127/2021/02/CyberTroop-Report20-Draft9.pdf.
  • 5
    Many interviewees asked for anonymity to speak candidly. We refer to them by area of expertise in these footnotes.
  • 6
    The questions varied slightly by interviewee type. For instance, discussions with computer scientists went into more detail on technical issues of identifying misinformation and disinformation provenance, monitoring large volumes of social media data in real time and at scale, and related computational issues.
  • 7
    Additional information on our own project is here: https://kroc.nd.edu/research/artificial-intelligence-social-media-and-political-violence-prevention/.
  • 8
    Claire Wardle, “Information Disorder: The Essential Glossary” (Shorenstein Center on Media, Politics, and Public Policy, July 2018); Caroline Jack, “Lexicon of Lies: Terms for Problematic Information” (Data & Society Research Institute, August 2017).
  • 9
    “Malinformation” is sometimes used for true information that is shared with the aim of causing harm, such as private information to damage a person’s reputation. However, we follow the general trend to subsume malinformation within the category of disinformation, since both concern the intentional spread of information to harm, although they differ on the degree of truth content.
  • 10
    Interviews, computer scientists (February 2022; April 2022).
  • 11
    James Waller, Becoming Evil: How Ordinary People Commit Genocide and Mass Killing, 2nd ed. (Oxford; New York: Oxford University Press, 2007).
  • 12
    Bradshaw, Bailey, and Howard, “Industrialized Disinformation.”
  • 13
    The research on general conditions is extensive and broadly focuses on these three clusters. For an overview see: Lisa Schirch, ed., Social Media Impacts on Conflict and Democracy: The Techtonic Shift, Routledge Advances in International Relations and Global Politics (New York: Routledge, 2021). 
  • 14
    David Livingstone Smith, Making Monsters: The Uncanny Power of Dehumanization (Cambridge, Massachusetts: Harvard University Press, 2021).
  • 15
    Kristina Hook, “Hybrid Warfare Is Here to Stay. Now What?,” Political Violence at a Glance (blog), December 12, 2018, https://politicalviolenceataglance.org/2018/12/12/hybrid-warfare-is-here-to-stay-now-what/.
  • 16
    Interview, government official (April 2022).
  • 17
    Jithesh Arayankalam and Satish Krishnan, “Relating Foreign Disinformation through Social Media, Domestic Online Media Fractionalization, Government’s Control over Cyberspace, and Social Media-Induced Offline Violence: Insights from the Agenda-Building Theoretical Perspective,” Technological Forecasting and Social Change 166 (May 2021): 1–14, https://doi.org/10.1016/j.techfore.2021.120661.
  • 18
    Christina Nemr and William Gangware, Weapons of Mass Distraction: Foreign State-Sponsored Disinformation in the Digital Age (Park Advisors, 2019), https://www.state.gov/wp-content/uploads/2019/05/Weapons-of-Mass-Distraction-Foreign-State-Sponsored-Disinformation-in-the-Digital-Age.pdf.
  • 19
    Gregory M. Walton and Geoffrey L. Cohen, “A Question of Belonging: Race, Social Fit, and Achievement,” Journal of Personality and Social Psychology 92, no. 1 (2007): 82–96, https://doi.org/10.1037/0022-3514.92.1.82.
  • 20
    Xiaoyan Qiu et al., “Limited Individual Attention and Online Virality of Low-Quality Information,” Nature Human Behaviour 1, no. 7 (July 2017), https://doi.org/10.1038/s41562-017-0132.
  • 21
    Ana Lucía Schmidt et al., “Anatomy of News Consumption on Facebook,” Proceedings of the National Academy of Sciences 114, no. 12 (March 21, 2017): 3035–39, https://doi.org/10.1073/pnas.1617052114.
  • 22
    Amy Mitchell et al., “Americans Who Mainly Get Their News on Social Media Are Less Engaged, Less Knowledgeable,” Pew Research Center’s Journalism Project (blog), July 30, 2020, https://www.pewresearch.org/journalism/2020/07/30/americans-who-mainly-get-their-news-on-social-media-are-less-engaged-less-knowledgeable/.
  • 23
    Arkaitz Zubiaga et al., “Analysing How People Orient to and Spread Rumours in Social Media by Looking at Conversational Threads,” ed. Naoki Masuda, PLOS ONE 11, no. 3 (March 4, 2016): e0150989, https://doi.org/10.1371/journal.pone.0150989.
  • 24
    Waller, Becoming Evil.
  • 25
    Reuters, “Russia’s VK Internet Group Sold to Company Linked to Putin Ally,” Reuters, December 3, 2021, sec. Europe, https://www.reuters.com/world/europe/russias-usm-holding-sells-stake-vk-sogaz-insurer-2021-12-02/; Salvador Rodriguez, “TikTok Insiders Say Social Media Company Is Tightly Controlled by Chinese Parent ByteDance,” CNBC, June 25, 2021, sec. Technology, https://www.cnbc.com/2021/06/25/tiktok-insiders-say-chinese-parent-bytedance-in-control.html.
  • 26
    The attention economy refers to the ways in which a person’s attention is limited, which in turn limits the consumption of information. Attention thus becomes a valuable commodity for corporations and states to influence. Thomas H. Davenport and John C. Beck, The Attention Economy: Understanding the New Currency of Business (Boston, Mass: Harvard Business School Press, 2001).
  • 27
    Vikram R. Bhargava and Manuel Velasquez, “Ethics of the Attention Economy: The Problem of Social Media Addiction,” Business Ethics Quarterly 31, no. 3 (July 2021): 321–59, https://doi.org/10.1017/beq.2020.32.
  • 28
    Interviews, tech company employees (April 2022).
  • 29
    William J. Brady et al., “Emotion Shapes the Diffusion of Moralized Content in Social Networks,” Proceedings of the National Academy of Sciences 114, no. 28 (July 11, 2017): 7313–18, https://doi.org/10.1073/pnas.1618923114; Thomas Zeitzoff, “How Social Media Is Changing Conflict,” Journal of Conflict Resolution 61, no. 9 (October 2017): 1970–91, https://doi.org/10.1177/0022002717721392.
  • 30
    Information disorder is used by some experts to refer to the full range of “propaganda, lies, conspiracies, rumors, hoaxes, hyper-partisan content, falsehood or manipulated media,” including misinformation, disinformation, and malinformation. Claire Wardle, “Understanding Information Disorder,” First Draft, September 22, 2020, https://firstdraftnews.org:443/long-form-article/understanding-information-disorder/.
  • 31
    Raquel Recuero, Felipe Bonow Soares, and Anatoliy Gruzd, “Hyperpartisanship, Disinformation and Political Conversations on Twitter: The Brazilian Presidential Election of 2018,” Proceedings of the International AAAI Conference on Web and Social Media 14, no. 1 (May 26, 2020): 569–78, https://ojs.aaai.org/index.php/ICWSM/article/view/7324.
  • 32
    ColombiaCheck, “Elecciones 2022,” accessed June 28, 2022, https://colombiacheck.com/elecciones-2022.
  • 33
    Saiful Mujani and Nicholas Kuipers, “Who Believed Misinformation during the 2019 Indonesian Election?,” Asian Survey 60, no. 6 (December 3, 2020): 1029–43, https://doi.org/10.1525/as.2020.60.6.1029.
  • 34
    Amy Mitchell et al., “Misinformation and Competing Views of Reality Abounded throughout 2020,” Pew Research Center’s Journalism Project (blog), February 22, 2021, https://www.pewresearch.org/journalism/2021/02/22/misinformation-and-competing-views-of-reality-abounded-throughout-2020/.
  • 35
    Interview, social scientist (March 2022).
  • 36
    Matthew L Williams et al., “Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Aggravated Crime,” The British Journal of Criminology, July 23, 2019, azz049, https://doi.org/10.1093/bjc/azz049.
  • 37
    Gregory Asmolov, “The Disconnective Power of Disinformation Campaigns,” Journal of International Affairs 71 (September 18, 2018): 69–76, https://jia.sipa.columbia.edu/disconnective-power-disinformation-campaigns.
  • 38
    Victoire Rio, “Myanmar: The Role of Social Media in Fomenting Violence,” in Social Media Impacts on Conflict and Democracy (Routledge, 2021), 143–60.
  • 39
    Rio, “Myanmar: The Role of Social Media in Fomenting Violence.”
  • 40
    Ibid.
  • 41
    Mattias Wahlström and Anton Törnberg, “Social Media Mechanisms for Right-Wing Political Violence in the 21st Century: Discursive Opportunities, Group Dynamics, and Co-Ordination,” Terrorism and Political Violence 33, no. 4 (2021): 766–87.
  • 42
    Hook, “Hybrid Warfare Is Here to Stay. Now What?”
  • 43
    Daniel Allington, “Conspiracy Theories, Radicalisation and Digital Media” (London: Global Network on Extremism and Technology, 2021).
  • 44
    Interviews, tech company experts (April 2022).
  • 45
    Interview, expert from monitoring group (February 2022).
  • 46
    Angela M Lee, “Social Media and Speed-Driven Journalism: Expectations and Practices,” International Journal on Media Management 17, no. 4 (2015): 217–39.
  • 47
    “Meeting COVID-19 Misinformation and Disinformation Head-On,” Johns Hopkins Bloomberg School of Public Health, accessed May 12, 2022, https://publichealth.jhu.edu/meeting-covid-19-misinformation-and-disinformation-head-on.
  • 48
    J. D. Bernal, The Social Function of Science (London: Routledge & Sons Ltd, 1939), updated by experts like Raghu Krishnapuram, “Global Trends in Information Technology and Their Implication” (2013 1st International Conference on Emerging Trends and Applications in Computer Science (ICETACS), Shillong, India: IEEE, 2013), https://doi.org/10.1109/ICETACS.2013.6691382.
  • 49
    Frank Oestergaard, Susanne Beck Kimman, and Søren Ravn Pedersen, “Control Your Data or Drown Trying,” IBM Nordic Blog, September 11, 2019, https://www.ibm.com/blogs/nordic-msp/control-your-data-or-drown-trying/.
  • 50
    Nicholas Drury and Sandipan Sarkar, “How Cognitive Computing Will Revolutionize Banks and Financial Markets,” IBM THINK Blog, November 17, 2015, https://www.ibm.com/blogs/think/2015/11/cognitive-banking-2/.
  • 51
    Tara Kangarlou, “State Department Hopes It Can Find Peace among Data,” CNN, May 1, 2013, https://www.cnn.com/2013/04/30/politics/state-data-analysis.
  • 52
    Interview (December 2021).
  • 53
    Multiple interviews, experts in government, tech corporations, and misinformation fact-checking organizations (October 2021-May 2022).
  • 54
    Michael W. Slimak and Thomas Dietz, “Personal Values, Beliefs, and Ecological Risk Perception,” Risk Analysis 26, no. 6 (December 2006): 1689–1705, https://doi.org/10.1111/j.1539-6924.2006.00832.x.
  • 55
    Jakob Rigi, “The War in Chechnya: The Chaotic Mode of Domination, Violence and Bare Life in the Post-Soviet Context,” Critique of Anthropology 27, no. 1 (March 2007): 37–62, https://doi.org/10.1177/0308275X07073818.
  • 56
    Frederick Ogenga, “Kenya: Social Media Literacy, Ethnicity, and Peacebuilding,” in Social Media Impacts on Conflict and Democracy (Routledge, 2021), 131–42.
  • 57
    Multiple interviews with experts from government and non-governmental organizations, January 2022-May 2022. We note that this division appears to map closely onto interviewees’ general perspectives about the relative benefits of localized versus national or international peacebuilding approaches.
  • 58
    Classification Office (New Zealand), “The Edge of the Infodemic: Challenging Misinformation in Aotearoa,” 2021, https://apo.org.au/node/312981.
  • 59
    “The Digital Services Act Package: Shaping Europe’s Digital Future,” European Commission, n.d., https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package. 
  • 60
    Bradshaw, Bailey, and Howard, “Industrialized Disinformation.”
  • 61
    Cheryl Rivers and Anne Louise Lytle, “Lying, Cheating Foreigners!! Negotiation Ethics across Cultures,” International Negotiation 12, no. 1 (2007): 1–28.
  • 62
    Interviews, tech company content moderation experts and oversight organizations (March 2022; April 2022).
  • 63
    Petros Iosifidis and Nicholas Nicoli, “The Battle to End Fake News: A Qualitative Content Analysis of Facebook Announcements on How It Combats Disinformation,” International Communication Gazette 82, no. 1 (February 2020): 60–81, https://doi.org/10.1177/1748048519880729; Adam L. Alter, Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked (New York: Penguin Press, 2017); Interview with fact-checking and monitoring groups (December 2021, February 2022).
  • 64
    Iria Puyosa and Esteban Ponce de Leon, “Understanding Telegram’s Ecosystem of Far-Right Channels in the US,” DFRLab (blog), March 23, 2022, https://medium.com/dfrlab/understanding-telegrams-ecosystem-of-far-right-channels-in-the-us-22e963c09234.
  • 65
    Ted van Baarda et al., eds., The Moral Dimension of Asymmetrical Warfare: Counter-Terrorism, Democratic Values and Military Ethics (Leiden ; Boston: Martinus Nijhoff, 2009).
  • 66
    “‘Video Unavailable’: Social Media Platforms Remove Evidence of War Crimes” (Human Rights Watch, September 10, 2020), https://www.hrw.org/report/2020/09/10/video-unavailable/social-media-platforms-remove-evidence-war-crimes.
  • 67
    Although deplatforming may lead some users simply to migrate to other less regulated platforms, several experts told us that rebuilding these social networks can be difficult and time-consuming, and consequently the new networks are often smaller. Interviews, fact-checking and violence monitoring experts (March 2022; May 2022). See also https://www.niemanlab.org/2021/06/deplatforming-works-this-new-data-on-trump-tweets-shows/.
  • 68
    Chris Sonderby, “Transparency Report, Second Half 2021,” Meta (blog), May 17, 2022, https://about.fb.com/news/2022/05/transparency-report-h2-2021/.
  • 69
    Alexandra Stevenson, “Soldiers in Facebook’s War on Fake News Are Feeling Overrun,” The New York Times, October 9, 2018, sec. Business, https://www.nytimes.com/2018/10/09/business/facebook-philippines-rappler-fake-news.html.
  • 70
    Interviews, tech company experts, monitoring organization experts (April 2022; May 2022).
  • 71
    Interviews, government officials, human rights organization experts (May 2022).
  • 72
    Elise Thomas, “Conspiracy Clickbait: The Art of the (Affiliate Marketing) Deal,” Case Study (London: Institute for Strategic Dialogue, 2022), https://www.isdglobal.org/wp-content/uploads/2022/01/Conspiracy-Clickbait-Study-3.pdf.
  • 73
    Rachel Treisman, “Putin’s Claim of Fighting against Ukraine ‘neo-Nazis’ Distorts History, Scholars Say,” NPR, March 1, 2022, sec. Europe, https://www.npr.org/2022/03/01/1083677765/putin-denazify-ukraine-russia-history.
  • 74
    Global Internet Forum to Counter Terrorism: https://gifct.org. The GIFCT was founded by Facebook, Microsoft, Twitter and YouTube in 2017 to stop extremists from exploiting social media platforms. Dunston Allison-Hope, Lindsey Anderson, and Susan Morgan, “Human Rights Impact Assessment: Global Internet Forum to Counter Terrorism,” BSR, July 20, 2021, https://www.bsr.org/en/our-insights/report-view/human-rights-impact-assessment-global-internet-forum-to-counter-terrorism.
  • 75
    Michael Yankoski et al., “Artificial Intelligence for Peace: An Early Warning System for Mass Violence,” in Towards an International Political Economy of Artificial Intelligence, ed. Tugrul Keskin and Ryan David Kiggins (Cham: Springer International Publishing, 2021), 147–75, https://doi.org/10.1007/978-3-030-74420-5_7; Ernesto Verdeja, “Predicting Genocide and Mass Atrocities,” Genocide Studies and Prevention 9, no. 3 (February 2016): 13–32, https://doi.org/10.5038/1911-9933.9.3.1314.
  • 76
    Interviewees noted the interpretive challenges to applying general concepts of harm (e.g., incitement to violence) to specific contexts in rapidly changing political circumstances. Several interviewees encouraged greater collaboration with instability and atrocity prevention experts to understand these connections (interviews, tech company experts, [March and April 2022]).
  • 77
    David J. Simon and Joshua Lam, “To Support Accountability for Atrocities, Fix U.S. Law on the Sharing of Digital Evidence,” Just Security, April 20, 2022, https://www.justsecurity.org/81182/to-support-accountability-for-atrocities-fix-u-s-law-on-the-sharing-of-digitial-evidence/.
  • 78
    “Berkeley Protocol on Digital Open Source Investigations: A Practical Guide on the Effective Use of Digital Open Source Information in Investigating Violations of International Criminal, Human Rights and Humanitarian Law” (Human Rights Center UC Berkeley School of Law and United Nations Office of the High Commissioner for Human Rights, 2022), https://www.ohchr.org/sites/default/files/2022-04/OHCHR_BerkeleyProtocol.pdf.
  • 79
    “The Tech Against Terrorism Guidelines: Government Transparency Reporting on Online Counterterrorism Efforts” (Tech Against Terrorism, 2022), https://static1.squarespace.com/static/609d273957ee294d03d8dadf/t/60fe84736b5d5b2618fcdb95/1627292788720/TAT+Guidelines+-+Government+transparency+reporting+on+online+counterterrorism+efforts.pdf.
  • 80
    Interviews, tech company experts (December 2021; April 2022). A possible avenue is leveraging existing atrocity prevention “focal point” networks, which already serve as prevention expertise nodes in governments in parts of the Global South. “Global Network of R2P Focal Points,” Global Centre for the Responsibility to Protect, n.d., https://www.globalr2p.org/the-global-network-of-r2p-focal-points/. 
  • 81
    For instance, the EU’s 2022 Digital Services Act requires tech companies to employ stronger content moderation policies. https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package. One expert from a tech oversight organization noted, “Corporate self-regulation has shown its limits” (May 2022).
