Below is a list of common terms related to misinformation, disinformation, and media manipulation, as well as specific terms the Casebook uses to code cases.
As the practice of media manipulation evolves in response to changes in policies, regulations, and politics, so do the tactics. Below are some of the ways campaign operators take advantage of our networked media ecosystem.
A website that hosts message boards on a variety of topics from music to politics to anime. The site is broken down into distinct message boards where mostly anonymous users post and reply to threads. Threads that garner new replies are "bumped" to the top of the respective message board. 4chan is a Casebook value under the "Network Terrain" variable in the code book.
A recently defunct image board owned by Jim Watkins that has been linked to the propagation of white supremacy, neo-Nazism, and the dissemination of manifestos by multiple mass shooters. In 2019, the website was taken offline after its Domain Name System (DNS) and domain registration were terminated. The site has since reformed and rebranded itself as "8kun," composed primarily of QAnon-related image boards. 8chan is a Casebook value under the "Network Terrain" variable in the code book.
Accounts may be suspended if they violate the terms of service—also referred to as community guidelines—of a social media platform. Violations are often flagged through both automated processes and human review. Users can often appeal a suspension if they believe their account was suspended by mistake. Accounts may be permanently removed if there have been repeated violations of the terms of service. Account suspension is a Casebook value under the "Mitigation" variable in the code book.
A period of time when the normal state of affairs is interrupted by unforeseen events that are troubling and potentially dangerous. Active crises trigger confusion and require urgent action and immediate attention. Due to the increased media attention and importance of any decisions made during this time, active crises are vulnerable to being exploited by media manipulation. Active crisis is a Casebook value under the "Vulnerabilities" variable in the code book.
Individuals or groups that campaign for social, political, or legal change. They may be formally organized (e.g., registered non-governmental organization) or loosely affiliated (e.g., advocacy networks). Activists are a Casebook value under the "Targets" variable in the code book.
Concealing the operators and origins of a campaign in order to create the false perception of a grassroots movement or organic support for a specific cause. Astroturfing is a Casebook value under the "Strategy" variable in the code book.
Launched in 2017, Bitchute is a video-hosting platform that positions itself as a “free speech” alternative to YouTube. The site hosts videos promoting violent far-right ideology, medical disinformation, and conspiracy theories. Bitchute links are blocked on Twitter as of August 2020. Bitchute is a Casebook value under the "Network Terrain" variable in the code book.
Blocking is something one user (of a platform or website) does to another. When one user does not want to see what another user posts or how that user engages on the site, they can opt to "block" that user. After blocking, the blocked user's posts and activity will no longer be visible to the blocker. Blocking is a Casebook value under the "Mitigation" variable in the code book.
Self-published websites or web pages, with no editorial oversight, that are usually run by an individual or small group and are regularly updated with new content. Blogs are a Casebook value under the "Network Terrain" variable in the code book.
Within the study of disinformation and media manipulation, bots typically refer to social media accounts that are automated and deployed for deceptive purposes, such as to artificially amplify a message, game a trending or recommendation algorithm, or inflate an account's engagement metrics. These accounts are typically centrally controlled or coordinated with one another. Bots are a Casebook value under the "Tactics" variable in the code book.
Periods of heightened attention to current events of local, national, or international importance in mass media and on social media. During these moments of mass attention, legitimate information and misinformation may be indistinguishable until facts are established and vetted by official bodies. This period of uncertainty creates opportunities to sow confusion, target individuals, or shape certain narratives. Breaking news event is a Casebook value under the "Vulnerabilities" variable in the code book.
Butterfly attacks occur when imposters mimic the patterns of behavior of a social group (usually a group that has to fight for representation). Imposters pretend to be part of the group in order to insert divisive rhetoric and disinformation into popular online conversation or within the information networks used by these groups. Distinct from astroturfing, which tries to falsify grassroots support for an issue, butterfly attacks are designed to infiltrate existing communities, media campaigns, or hashtags to disrupt a social group's operations and discredit the group by sowing divisive, inflammatory, or confusing information. These mimicry campaigns exploit social wedges and prejudice, and encourage targeted harassment.
Coined by Patrick Ryan to describe a series of manipulation campaigns he claims to have orchestrated in 2013, the term butterfly attack is inspired by the mimicry behavior of certain species of butterflies, which imitate the fluttering patterns of other species to confuse predators.1
Butterfly attack is a Casebook value under the "Strategy" variable in the code book.
- 1. Patrick Ryan, “The Butterfly War,” October 13, 2017, https://cultstate.com/2017/10/13/The-Butterfly-War/.
A cheap fake is altered media that has been changed through conventional and affordable technology. Examples of cheap fake techniques on social media include photoshopping (including face swapping), the use of lookalikes, and the speeding or slowing of video. A cheap fake is easier to produce than a deep fake, which requires advanced technology and machine learning.1
NPR reported that the effectiveness of cheap fakes largely kept deep fakes out of the 2020 presidential election.2 Who needs to spend money on AI when basic photoshopping will do? Unlike deep fakes, cheap fakes are common. In August 2020, a video of Nancy Pelosi was slowed down to make it appear as if she were intoxicated; a video making the same claim had gone viral in May 2019. The 2020 post originated on TikTok and was reposted to YouTube, Facebook, and Twitter. Facebook did not remove the post, but the other three platforms did.3
- 1. Britt Paris and Joan Donovan, “Deepfakes and Cheap Fakes” (Data and Society Research Institute, September 18, 2019), https://datasociety.net/library/deepfakes-and-cheap-fakes/.
- 2. Tim Mak and Dina Temple-Raston, “Where Are The Deepfakes In This Presidential Election?” NPR, October 1, 2020, https://www.npr.org/2020/10/01/918223033/where-are-the-deepfakes-in-this-presidential-election.
- 3. Hannah Denham, “Another Fake Video of Pelosi Goes Viral on Facebook,” Washington Post, August 3, 2020, https://www.washingtonpost.com/technology/2020/08/03/nancy-pelosi-fake-video-facebook/.
Groups or organizations engaged in advocating for certain issues, educating the wider public, holding the government accountable, or promoting civil and human rights. They may be formally organized or loosely coordinated and include non-governmental organizations (NGOs), community groups, labor unions, educational organizations, faith-based organizations, professional associations, nonprofit think tanks, and foundations.
Civil society response refers to actions taken by members or groups of civil society in an attempt to mitigate a campaign's harms or spread. Civil society response is a Casebook value under the "Mitigation" variable in the code book.
A legal proceeding by a private party or parties against another in a civil court of law that seeks remedy for a wrongdoing or harm. Civil/private lawsuit is a Casebook value under the "Mitigation" variable in the code book.
The use of scientific jargon and community norms to cloak or hide a political, ideological, or financial agenda within the appearance of legitimate scientific research. This may include the use of technical language, difficult-to-understand graphs and charts, or seemingly scientific data presented as empirical evidence to lend credibility to the claims being made. Cloaked science may be seeded onto public preprint servers, in data repositories, journals, or publications with lax review standards, through press releases, or by baiting journalists who may not be able to scrutinize the claims thoroughly.
This definition builds upon Jessie Daniels's research on cloaked websites, which she describes as "sites published by individuals or groups who conceal authorship in order to disguise deliberately a hidden political agenda,"1 and is inspired by Sarah Richardson's description of transphobic politics being "cloaked in science."2 Science scholar Timothy Caulfield uses the term “scienceploitation”3 to describe a similar phenomenon—the use of scientific language to mask otherwise unscientific motives (e.g., financial gain).
Note that cloaked science as a tactic is the deliberate use of information masquerading as science and should not be confused with “junk science,” a term that has been used to discredit scientific findings, claims, and data as fraudulent or misleading, much as “fake news” can be used to dismiss critical news coverage.4
Cloaked science is a Casebook value under the "Tactics" variable in the code book.
- 1. Jessie Daniels, “Cloaked Websites: Propaganda, Cyber-Racism and Epistemology in the Digital Era,” New Media & Society, July 21, 2009, https://journals.sagepub.com/doi/10.1177/1461444809105345.
- 2. Sarah Richardson, “Transphobia, Cloaked in Science,” Blog//Los Angeles Review of Books (blog), November 8, 2018, https://blog.lareviewofbooks.org/essays/transphobia-cloaked-science/.
- 3. Timothy Caulfield, “Pseudoscience and COVID-19 — We’ve Had Enough Already,” Nature, April 27, 2020, https://doi.org/10.1038/d41586-020-01266-z.
- 4. Jonathan M. Samet and Thomas A. Burke, “Turning Science Into Junk: The Tobacco Industry and Passive Smoking,” American Journal of Public Health 91, no. 11 (November 1, 2001): 1742–44, https://doi.org/10.2105/AJPH.91.11.1742.
The large-scale screening by automated systems and humans of content uploaded to social-media sites to remove material that violates the law or individual platforms' terms of service.
When groups or individuals who are targets of manipulation campaigns expose impersonation attempts or false claims made about them. These community-driven debunkings play out visibly on social media, and do not always receive press attention or acknowledgement.
Community moderation is the "cleaning up" of a social media site by users. It involves flagging, closing, commenting, editing, and sometimes deleting posts that violate terms of service and community standards.
Individuals or groups that actively propagate unfounded or unverified narratives and frames. Conspiracy techniques often include speculation, unsubstantiated claims, and explanations predicated on secretive and powerful actors scheming with malicious intent. Conspiracists are a Casebook value under the "Attribution" variable in the code book.
Content removal is the act of platforms taking down specific pieces of content, like videos, tweets, posts, etc. The platform's terms of service are often a guideline for what can be removed, though these are rarely enforced uniformly or consistently. Content removal is a Casebook value under the "Mitigation" variable in the code book.
A term coined by Facebook to describe the use of multiple Facebook or Instagram assets working in concert to misrepresent themselves, artificially boost the popularity of content, or engage in behaviors designed to enable other violations under the platform's community standards, and where the use of fake accounts is central to the operation.
A portmanteau of “copy,” “paste,” and “pasta,” copypasta refers to any block of text that is repeatedly reposted, often on social media, messaging apps, online discussion forums, and comments sections. Copypasta is a Casebook value under the "Tactics" variable in the code book.
A tactic for countering hate speech and misinformation by advancing alternative narratives and challenging false or harmful claims. Counterspeech is a Casebook value under the "Mitigation" variable in the code book.
All activities involved in the process of investigating and prosecuting a crime, including collecting evidence or information pertaining to a crime, apprehending a suspect, and any subsequent related proceedings such as a trial or sentencing.
Critical press refers to press coverage that is critical of a manipulation campaign. Articles may debunk false claims or investigate the origins and motivations of a campaign. Critical press is a Casebook value under the "Mitigation" variable in the code book.
A person or account who disseminates political propaganda on the internet, particularly on social media platforms. They may be paid or unpaid, working independently or in tandem with a group or campaign, and may be automated or manual.
Coined and theorized by Michael Golebiewski and danah boyd,1 this refers to unique topics or terms that result in minimal, low-quality, or manipulative information from search engine queries. Data voids are social or technical security risks depending on the subject matter of the query. Data void is a Casebook value under the "Vulnerabilities" variable in the code book.
- 1. Michael Golebiewski and danah boyd, "Data Voids: Where Missing Data Can Easily Be Exploited," Data & Society, October 29, 2019, https://datasociety.net/library/data-voids/.
Removing a link or other content from search results. The content or website in question is still available but will not be included in a search engine's, website's, or platform's results. De-indexing is a Casebook value under the "Mitigation" variable in the code book.
The removal of individuals or groups from a platform, preventing them from using the platform’s services even if they try to create new accounts. De-platforming is often enforced when a user has violated the terms of service, and may also include removing any existing content previously created by the user. De-platforming is a Casebook value under the "Mitigation" variable in the code book.
Exposing false or misleading claims, such as sham miracle drugs or exaggerated advertising slogans. Debunking includes fact-checking efforts, research and investigation, exposés, and other critical writing that attempts to correct the false claims. Debunking is a Casebook value under the "Mitigation" variable in the code book.
The use of “deep” or machine learning to hybridize or generate human bodies and faces. The word is a portmanteau of “deep learning” and “fake.”
Removing the ability for an account, channel, or individual to earn revenue from their content on a platform.
Digital blackface is the use of visual or linguistic signifiers of Black technoculture by non-Black people for the purpose of deception, manipulation, or exploitation.
Discord is an instant messaging software that launched in 2015. Users can send text, video, images, and audio through its channels. Discord is a Casebook value under the "Network Terrain" variable in the code book.
Discussion forums are online places where people can ask questions and post text, images, video, or other content. As the name implies, these platforms act as forums for discussion, allowing replies to posts or other forms of engagement.
Information that is deliberately false or misleading, often spread for political gain or profit, or to discredit a target individual, group, movement, or political party.
A call to participants to rapidly and widely spread campaign materials, including propaganda or disinformation. Distributed amplification relies on many campaign participants to individually share sensitive or banned content on their personal social media accounts in an effort to evade platform mitigation efforts or dominate the information ecosystem with repetitive content. Distributed amplification is a Casebook value under the "Tactics" variable in the code book.
The act of publishing on the internet private or identifying information about a specific individual, usually with malicious intent (e.g., retaliation, punishment). This type of activity is often banned on platforms and forums. Dox is a Casebook value under the "Observable outcomes" variable in the code book.
Refers to the time leading up to an election when candidates have begun campaigning. Depending on the country, there may be legal limits on what constitutes a campaign period. Election period is a Casebook value under the "Vulnerabilities" variable in the code book.
An evidence collage is a collection of screenshots and text assembled into a shareable document and presented as evidence. No professional computer applications are required to make an evidence collage; they can be made with basic image editing applications. Evidence collages are often timed to coincide with breaking news events, and are meant to influence both the general public and journalists and to preempt authoritative reporting. Evidence collages are a call for civilian research, which itself can lead to the dissemination and propagation of unverified information.
Modeled after infographics or official documents, and using graphic design to introduce associated keywords and URLs (links to sites with more disinformation), evidence collages direct viewers where to investigate and invite them to support the associated campaign's sites, resources, hashtags, and communities. Evidence collages often include both verified and unverified information, which can make them more difficult to debunk. A Casebook example that used evidence collages can be found in “Evidence Collage: Unite the Right Rally,” where participants interpreted these images as clues and used them to incorrectly identify the driver in a fatal attack. This misidentification led to targeted harassment.
Evidence collages are a Casebook value under the "Tactics" variable in the code book.
Groups or individuals that espouse right-leaning radical or violent positions, often associated with organized white supremacy or other prejudice-driven ideologies. Extremists (right wing) is a Casebook value under the "Attribution" variable in the code book.
A social networking website that allows registered users to create unique profiles, have public and private conversations, join groups, create events, upload photos, etc. The company's source of revenue is selling ads on social media websites and mobile applications. Facebook is a Casebook value under the "Network Terrain" variable in the code book.
The act of investigating information presented as fact in order to determine its veracity and correctness. In some cases, a fact check will result in some kind of public refutation if the information investigated was found to be erroneous. Fact checking is not always nonpartisan or universally accepted, and may be contested by interested parties.
A sociotechnical mechanism for reporting harmful or offensive content to an online social media platform or company. Content can be flagged by an algorithm, a content moderator, or another user. If the content is flagged by another user, an employee of the company often reviews it and, if it violates the terms of service, the user is notified and the content is removed. Flagging is a Casebook value under the "Mitigation" variable in the code book.
The creation and distribution of a fake document with intent to deceive. Forgery is a Casebook value under the "Tactics" variable in the code book.
A social networking service launched publicly in May 2017. It is known for its right-leaning user base, and as a safe haven for far-right extremists. Gab is a Casebook value under the "Network Terrain" variable in the code book.
Attempting to manipulate an algorithm in order to gain attention. This may include tactics that elevate content into a platform’s trending list, being recommended to other users, or placing in the top ten of a search engine’s results. Gaming an algorithm is a Casebook value under the "Strategy" variable in the code book.
Founded in 1998 and headquartered in Mountain View, CA, the multinational company provides internet-related services and products, such as a search engine, cloud computing, online advertising technologies, and software and hardware. Google is a Casebook value under the "Network Terrain" variable in the code book and specifically refers to Google’s flagship search engine product and not to the company’s subsidiaries or other products.
Targeted and repeated behavior toward an individual or group of people that causes mental, physical, or emotional distress. Harassment includes but is not limited to unwanted threats, insults, touching, and offensive language. Harassment is a Casebook value under the "Observable outcomes" variable in the code book.
The unauthorized use of an individual’s account, typically accessed through stolen credentials or hacking. Hijacked accounts can refer to email accounts, social media profiles, messaging apps, or any other account associated with a digital service or product. Hijacked accounts are a Casebook value under the "Tactics" variable in the code book.
Press that is sharply divided along binary partisan positions, reflecting fierce disagreement with the opposing side.
Pretending to be another person or member of a social identity group, either by mimicking their behavior or creating a falsified online presence. Impersonation is a Casebook value under the "Tactics" variable in the code book.
A single person. Individual is a Casebook value under the "Targets" variable in the code book.
Visible pundits, journalists, or public figures who drive conversation around particular topics in broadcast media and online networks. Influencers are a Casebook value under the "Attribution" variable in the code book.
Information operations is a broad category of activity but generally refers to the strategic use of technological, operational, and psychological resources to disrupt an adversary's informational capacities. For more information, see Caroline Jack's Lexicon of Lies (Data and Society, 2017) and Martin Libicki's "What is Information Warfare" (in Toward a Revolution in Military Affairs?, edited by Thierry Gongora and Harald von Riekhoff, 2000).
Acquired by Facebook in 2012, Instagram is a social network platform that enables users to edit, upload, and comment on photos and short videos, broadcast live videos, and have private conversations using the chat feature. Instagram is a Casebook value under the "Network Terrain" variable in the code book.
Keyword squatting is the tactic of creating online content, including social media accounts, around a specific search-engine-optimized term in order to shape the search results for that term. Since future searches for that term will return the squatted accounts and content, manipulators are able to direct online traffic to their pages and to influence (with varying success) the narrative around the term.1
The term keyword squatting is adapted from “domain squatting,” which is the process of anticipating which domain names will become popular, buying them, and selling them for a profit when they become sought-after URLs. Both keyword and domain squatting can be methods of online impersonation: by acquiring “misleading account names, URLs, or keywords,” manipulators can appear online as their opponents or as the people/groups that they target.
Through search engine optimization, manipulators can make their hashtags and content tags appear before authentic accounts in search results. This tactic can be particularly useful in cases of data voids and hidden virality, where authoritative information is lacking but interest in a term is growing.2 Keyword squatting allows manipulators to appropriate keywords around breaking news events, social movements, celebrities, and wedge issues. From there, they can use the accounts they created to flood conversations on the given topic with inaccurate or malicious information. Keyword squatting is a tactic manipulators use to instigate media coverage and shape trending conversations on social media.
A Casebook example of keyword squatting can be found in “Targeted Harassment: The Ukraine Whistleblower,” where manipulators encouraged participants to share the name and photos of an alleged whistleblower before platforms intervened. In this case, a media blackout and asymmetry within the media ecosystem created the perfect conditions for successful keyword squatting by motivated manipulators.
Keyword squatting is a Casebook value under the "Tactics" variable in the code book.
- 1. Joan Donovan and Brian Friedberg, “Source Hacking: Media Manipulation in Practice” (Data and Society Research Institute, September 4, 2019), https://datasociety.net/library/source-hacking-media-manipulation-in-practice/.
- 2. Brian Friedberg, “The Dark Virality of a Hollywood Blood-Harvesting Conspiracy,” Wired, July 31, 2020, https://www.wired.com/story/opinion-the-dark-virality-of-a-hollywood-blood-harvesting-conspiracy/.
The use of coded language to discuss topics that are often automatically or manually flagged as breaking a site's terms of service, community standards, or user policies, in an attempt to circumvent censorship, account bans, or other forms of information control.
Labelling refers to the application of informational labels to social media posts, accounts, channels, or other content by the host platform in an effort to give viewers additional context. This may include labelling content that could be potentially sensitive or graphic, affiliated with a nation state, containing false or misleading claims, or at risk of inciting violence. Labelling is a Casebook value under the "Mitigation" variable in the code book.
A lax security practice is anything that makes the user more vulnerable to security attacks or scams, like phishing. An example of a lax security practice is having a password that can be guessed easily or is repeated across multiple accounts. Lax security practice is a Casebook value under the "Vulnerabilities" variable in the code book.
The unauthorized release of sensitive materials or documents.
Self-imposed or state-mandated censorship of a certain news topic. Media blackout is a Casebook value under the "Mitigation" variable in the code book.
Media ecosystems are complex combinations of print, broadcast, digital, and social media that work together to create a self-referential information environment.
Coverage and reporting by journalists in popular or mainstream media. Media exposure is a Casebook value under the "Observable outcomes" variable in the code book.
We define media manipulation as the sociotechnical process whereby motivated actors leverage specific conditions or features within an information ecosystem in an attempt to generate public attention and influence public discourse through deceptive, creative, or unfair means. Campaigns or operations that engage in media manipulation may use several tactics, such as memes, viral videos, forged documents, or leaked information.
Media manipulation is not exclusive to any actor or group, nor is it inherently good or bad. Activists, constrained by heavy censorship in traditional media, for example, may rely on media manipulation in the digital space to circumvent such information controls. However, extremists may likewise use the same platforms and tactics to mainstream hateful and dangerous speech. Furthermore, media manipulation is a broad term in that it can be used to define a variety of other terms, such as disinformation, information operations, or influence operations.
Note that media manipulation is distinct from media control, which occurs at the top level by the state and private sector. Media control would instead refer to activity like ISP-level content blocking, government censorship agencies, media ownership, content filtering, or distribution and licensing regimes.
News and entertainment publishers that provide news and feature stories to the public and are not owned or controlled by the state. They may be distributed over broadcast (TV and radio), online, or print media. This variable includes independent and alternative media, mainstream corporate press, and publicly funded media that are free from state interference (e.g., BBC and NPR). Media outlets are a Casebook value under the "Network Terrain" variable in the code book.
Medical misinformation refers to incorrect or unverified information about the form and function of the human body, and/or misconceptions about health practitioners and medical science.
The intentional propagation of political memes on social media for the purpose of political persuasion or community building, or to strategically spread narratives and other messaging crucial to a media manipulation campaign. Meme war is a Casebook value under the "Strategy" variable in the code book.
Memes, a term coined by Richard Dawkins (1976), are “units of culture” that spread through the diffusion of ideas. Memes are particularly salient online because the internet crystallizes them as artifacts of communication and accelerates their distribution through subcultures. Memes are a Casebook value under the "Tactics" variable in the code book.
Erroneously identifying an individual as someone else, intentionally or accidentally. Misidentification is a Casebook value under the "Observable outcomes" variable in the code book.
Misinfographics are infographics containing false or misleading information. In some cases, they may also be classified as forgeries when they borrow an existing organization's brand aesthetics and logo in order to make it seem as if the content came from that organization.
The #SaveTheChildren hashtag within QAnon uses misinfographics to publicize human trafficking statistics. Vox notes that moms on Instagram sharing “aesthetically pleasing” posts have been “critical” to its spread.1
A Casebook example of a misinfographic can be found in “Misinfographic: The Spread of 'JihadChart' in India,” where a misinfographic titled “Jihad: The Holy war to spread Islam” appeared on Facebook, Reddit, and Twitter to capitalize on and further promote anti-Muslim bias.
Misinfographics are a Casebook value under the "Tactics" variable in the code book.
- 1. Rebecca Jennings, “We’re in the Middle of Another Moral Panic. What Can We Learn from the Past?,” Vox, September 25, 2020, https://www.vox.com/the-goods/2020/9/25/21453036/save-the-children-qanon-human-trafficking-satantic-panic.
Inaccurate information that is spread unknowingly, without intent to deceive.
Attempts, measures, and other actions taken by the private sector, government, media organizations, and civil society in an attempt to contain or prevent the continuation of a campaign, its effects, or messaging. Mitigation is both a stage in the life cycle, and one of the variables in the code book under which Casebook codes fall.
The distribution of information with the intent to create confusion during unresolved events, preceding or obscuring verified information and consensus. In doing so, discussion of the target subject becomes more confused as credible or authoritative sources are forced to compete with speculation, unfounded claims, or outright false information. Muddy the waters is a Casebook value under the "Strategy" variable in the code book.
Tacit coalitions or groups of people who share some, but not all, political positions, primarily congregate online (though not exclusively), and often come together as a swarm to act in unison as a political force. Networked factions maintain these coalitions using shared phrases, hashtags, memes, or similar media. These factions can form and dissolve according to the political context. Networked faction is a Casebook value under the "Attribution" variable in the code book.
Open collaboration tools are open-access, easy-to-use services, such as Pastebin or Google Docs, for hosting, crowdsourcing, and sharing information. The openness of these tools presents an opportunity for campaign planners and participants, giving them a frictionless, easy-to-share repository for collaboration, coordination, and information distribution. Depending on the intentions of the campaign, they can be used for advocacy, resource-sharing, and activism, as well as for more malicious ends, such as housing false or misleading information, unauthorized leaks of personal and private information, or harassment campaign instructions. Other examples of open collaboration tools include Dropbox, Jira, Asana, Trello, etc. Open collaboration tools are a Casebook value under the "Network Terrain" variable in the code book.
Platforms that have both an editorial arm and a self-publishing arm that allows users to publish and post their own articles and other content. Examples include Medium and Buzzfeed Community. Open editorial platforms are a Casebook value under the "Network Terrain" variable in the code book.
Open science is an approach to scientific inquiry that advocates for collaboration, accessibility, and transparency in an effort to increase the dissemination of scientific knowledge and participation of individuals from diverse backgrounds. Common practices include making research data public, campaigning for open access, and communication strategies that are inclusive of a wide audience.1
Although open science as a movement confers multiple benefits,2 its openness and inclusivity can be exploited by motivated actors intent on seeding false or misleading content. Digital data repositories and preprint servers, for example, are an outcome of the movement for open science, but because of their lack of peer review they can be misused or abused to spread poor quality research or disinformation masked as science.3 Publicly available data, even if credible and from authoritative sources, can also be manipulated to mislead or undermine scientific consensus.4
Open science is a Casebook value under the "Vulnerabilities" variable in the code book.
- 1. “Open Science Movement | United Nations Educational, Scientific and Cultural Organization,” accessed January 3, 2021, http://www.unesco.org/new/en/communication-and-information/portals-and-platforms/goap/open-science-movement/; “What Is Open Science? Introduction,” Foster Open Science, accessed January 3, 2021, https://www.fosteropenscience.eu/content/what-open-science-introduction.
- 2. Christopher Allen and David M. A. Mehler, “Open Science Challenges, Benefits and Tips in Early Career and Beyond,” PLOS Biology 17, no. 5 (May 1, 2019): e3000246, https://doi.org/10.1371/journal.pbio.3000246; Martin Lakomý, Renata Hlavová, and Hana Machackova, “Open Science and the Science-Society Relationship,” Society 56, no. 3 (June 1, 2019): 246–55, https://doi.org/10.1007/s12115-019-00361-w.
- 3. Amy Koerber, “Is It Fake News or Is It Open Science? Science Communication in the COVID-19 Pandemic,” Journal of Business and Technical Communication 35, no. 1 (January 1, 2021): 22–27, https://doi.org/10.1177/1050651920958506; Joan Donovan, Irene Pasquetto, and Jennifer Pierre, “Cracking Open the Black Box of Genetic Ancestry Testing,” Proceedings of the 51st Hawaii International Conference on System Sciences, https://doi.org/10.24251/HICSS.2018.218; Aaron Panofsky and Joan Donovan, “Genetic Ancestry Testing among White Nationalists: From Identity Repair to Citizen Science,” Social Studies of Science 49, no. 5 (October 1, 2019): 653–81, https://doi.org/10.1177/0306312719861434.
- 4. Crystal Lee, Tanya Yang, Gabrielle Inchoco, Graham M. Jones, and Arvind Satyanarayan, “Viral Visualizations: How Coronavirus Skeptics Use Orthodox Data Practices to Promote Unorthodox Science Online,” CHI ’21, May 8–13, 2021, Yokohama, Japan, https://arxiv.org/pdf/2101.07993.pdf.
The part of the web that is not protected by passwords, and thus accessible to anyone.
Parler was marketed as the free speech alternative to popular social media. The social media site was launched in 2018 with an interface similar to Twitter’s, and gained users rapidly after the 2020 presidential election. After the January 6, 2021, attack on the U.S. Capitol, it was removed from the Apple and Google app stores, and dropped by its server host, Amazon, based on its alleged use by people involved in the insurrection. Parler is a Casebook value under the "Network Terrain" variable in the code book.
A strong supporter or committed member of a party, cause, or person. Partisans are a Casebook value under the "Attribution" variable in the code book.
Fraudulently posing as a trustworthy entity in a malicious attempt to access confidential information such as usernames, passwords, and credit card details, usually by the means of email. Phishing is a Casebook value under the "Tactics" variable in the code book.
Web-based technology that allows users of that platform to generate content and engage in peer-to-peer conversations and other forms of engagement (e.g., likes, follows, retweets).
When a political party or politician adopts or co-opts a phrase, term, or idea for politically-motivated purposes. Political adoption is a Casebook value under the "Observable outcomes" variable in the code book.
A group of people sharing similar ideology or political positions who participate in elections by fielding candidates that will then carry out their goals and policies. Political party is a Casebook value under the "Targets" variable in the code book.
A politician is a person engaged in party politics or occupying public office. Because of their visibility, policies, or affiliations, a politician can be the target of disinformation campaigns. Politician is a Casebook value under the "Targets" variable in the code book.
Individuals who engage in activity designed to elicit a reaction from a target purely for fun or mischief. Pranksters are a Casebook value under the "Attribution" variable in the code book.
A bias that can result in an injury or detriment to another individual's legal rights or claims, wellbeing, or participation in society. Such preconceived judgements are not informed by facts and often target an individual or group based on race, religion, sexual orientation, age, class, or other demographic identifier. Prejudice is a Casebook value under the "Vulnerabilities" variable in the code book.
A preprint server is a data repository that hosts scholarly articles before they have been formally peer-reviewed and published. Typically, the author(s) of a paper will upload a draft version to the preprint server, which generates a publicly accessible URL where the paper and any other relevant data may be viewed, downloaded, or shared. Because the peer review process can often take months or even years, preprints are a quick way to circulate research outputs and data. Authors of preprints may still seek formal publication after uploading to a preprint server. Preprint servers can be discipline-specific (e.g., bioRxiv), regional (e.g., AfricArXiv), multidisciplinary (e.g., arXiv, SocArXiv), or general purpose (e.g., Zenodo). Preprint server is a Casebook value under the "Network Terrain" variable in the code book.
Technology that allows for peer-to-peer interactions that are private by default and require users to be invited and give consent to participate. Information sent via private messaging may or may not be encrypted.
The deliberate spread of information or ideas to influence a person, group, institution, or nation in support of—or in opposition to—a particular cause. Propaganda is often coded as "white," "grey," or "black": "white" refers to overt propaganda whose source is clear; "grey" to propaganda with muddy or unclear origins; and "black" to propaganda that disguises its origins, often portraying itself as the very target it is trying to discredit.
Publicly available information pertaining to individuals, organizations, companies, or any other entity that has been aggregated into an accessible, searchable, and organized format. Public directory is a Casebook value under the "Vulnerabilities" variable in the code book.
A private company that engages in public relations, branding, advertising and sales, or any other type of activity related to marketing, typically in service to a client. Marketing firms and public relations companies have been used in media manipulation campaigns to game engagement metrics, create a false sense of grassroots support (i.e., astroturfing), and amplify specific narratives or pieces of content for their clients.1 Where possible, the Casebook identifies the clients who have contracted or hired the marketing company. Public relations or marketing firm is a Casebook value under the "Attribution" variable in the code book.
- 1. Jonathan Ong and Jason Vincent Cabañes, “Architects of Networked Disinformation: Behind the Scenes of Troll Accounts and Fake News Production in the Philippines,” 2018, https://doi.org/10.7275/2cq4-5396; Craig Silverman, Jane Lytvynenko, and William Kung, “Disinformation For Hire: How A New Breed Of PR Firms Is Selling Lies Online,” BuzzFeed News, January 6, 2020, https://www.buzzfeednews.com/article/craigsilverman/disinformation-for-hire-black-pr-firms; Samantha Bradshaw, Hannah Bailey, and Philip N. Howard, “Industrialized Disinformation: 2020 Global Inventory of Organized Social Media Manipulation,” Oxford Internet Institute, 2021, https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/127/2021/01/CyberTroop-Report-2020-v.2.pdf.
Racialized disinformation campaigns employ the strategic use of falsified racial or ethnic identities and/or focus on race as a wedge issue.
Recognition by target is when a target of a media manipulation or disinformation campaign acknowledges and responds to the campaign's activities or operators. Recognition by target is a Casebook value under the "Observable outcomes" variable in the code book.
Recontextualized media is any image, video, or audio clip that has been taken out of its original context and reframed to serve an entirely different purpose or narrative. While cheap fakes, more broadly, alter the media itself, recontextualized media uses unaltered images, video, or audio but presents them in a new or false context according to the manipulators’ agenda.
During the early protests against the murder of George Floyd in June 2020, many recontextualized images spread on social media. One showed an image from the TV show Designated Survivor but claimed it was from a Black Lives Matter protest; another photo of a McDonald’s burning in 2016 was reframed as though it was from a current protest.1
A Casebook example of recontextualized media can be found in the case “Targeted Harassment: The spread of #Coronajihad,” where videos were re-captioned to exploit bias against Muslims and to blame Muslims for the spread of coronavirus in India.
Recontextualized media is a Casebook value under the "Tactics" variable in the code book.
- 1. Jane Lytvynenko and Craig Silverman, “We’re Keeping A Running List Of Hoaxes And Misleading Posts About The Nationwide Police Brutality Protests,” BuzzFeed News, June 5, 2020, https://www.buzzfeednews.com/article/janelytvynenko/hoax-misleading-claims-george-floyd-protests.
Activities with the goal of enlisting or drawing new followers or members to a political party, social movement, extremist organization, ideology, or other distinct movement, group, or idea.
Reddit is a website where users can post information and prompts. These posts and prompts get responses and up or down votes from other users, which ranks the display of the content. The website is divided into user-created categories called "subreddits." The San Francisco-based site was founded in 2006. Reddit is a Casebook value under the "Network Terrain" variable in the code book.
Individual or coordinated group efforts to establish the origins and impact of a manipulation campaign. Research and investigation is a Casebook value under the "Mitigation" variable in the code book.
Individuals or groups involved in scientific research, medicine, or healthcare. This may include scientists, researchers, research labs, scientific organizations, health authorities, doctors, nurses, and other healthcare professionals. Scientific and medical community is a Casebook value under the "Targets" variable in the code book.
Manipulation with the aim of getting people to give up confidential information through trickery and deception rather than technical exploits. These attacks often take advantage of emotions, trust, or habit in order to convince individuals to take actions, such as clicking a fraudulent link, visiting a malicious website, or giving up login credentials. (Adapted from Forcepoint.)
Groups defined by some social, physical, or mental characteristics. Examples include race, ethnicity, gender, social class, sexual orientation, or religious beliefs. Social identity group is a Casebook value under the "Targets" variable in the code book.
Groupings of individuals or organizations that focus on political or social issues.
A network of websites and software applications that link internet users together, often with the intention of fostering social interaction and promoting the exchange of goods and services.
Used as an adjective, the word sociotechnical typically describes something that exists due to a mix of both social and technical conditions. Within the study of media manipulation, for example, sociotechnical is often used to describe specific campaigns or other media phenomena because they emerge only from the combination of social conditions and technical features.
A false online identity typically created by a person or group in order to promote a specific narrative or opinion, sow division, or circumvent a previous account ban.
A versatile set of techniques for feeding false information to journalists, investigators, and the general public during breaking news events or across highly polarized wedge issues. Specifically, source hacking exploits situations and technologies to obscure the authorship of false claims. For more, read "Source Hacking" (Data and Society, 2019) by Joan Donovan and Brian Friedberg.
An entity that is a part of, or which operates licitly or semi-licitly on behalf or in service of, a government agency. Within media manipulation, this may refer to state-run media outlets, operatives working for or with an intelligence or security agency (or other government agency), or other parties that are deliberately working to advance a given state’s objectives with the support, encouragement, or compulsion of a state actor. State actor is a Casebook value under the "Attribution" variable in the code book.
Media outlets that are under editorial control or influence by a country’s government. The articles and stories produced by these state media outlets may be distributed over broadcast (TV and radio), online, or print media. State-controlled media is a designation that applies when editorial freedom has been taken away by government influence, pressure, or money. These outlets can be used to push government propaganda. The label does not necessarily apply to all media that receives funding from the public. Media organizations that receive public funds but maintain their editorial freedom, such as the British Broadcasting Corporation (BBC) or Canadian Broadcasting Corporation (CBC), are not designated as state media.
State-controlled media is a Casebook value under the "Network Terrain" variable in the code book.
Best practices for ensuring responsibility and accountability when producing news content and the algorithmic systems that help spread it. For more, read "Stop the Presses? Moving From Strategic Silence to Strategic Amplification in a Networked Media Ecosystem" by Joan Donovan and danah boyd (American Behavioral Scientist, September 2019, doi:10.1177/0002764219878229).
The use of editorial discretion for the public good—for example, journalistic or editorial standards against reporting on suicide. For more, read "Stop the Presses? Moving From Strategic Silence to Strategic Amplification in a Networked Media Ecosystem" by Joan Donovan and danah boyd (American Behavioral Scientist, September 2019, doi:10.1177/0002764219878229).
When loosely organized online groups come together for specific objectives or campaigns, such as spamming a comment section, engaging in harassment, or obfuscating a hashtag. Swarming is a Casebook value under the "Tactics" variable in the code book.
The continuation of a media manipulation or disinformation campaign with adjustments to the tactics. Tactical adjustment is a Casebook value under the "Campaign adaptation" variable in the code book.
The redeployment of a media manipulation or disinformation campaign's tactics. Tactical redeployment is a Casebook value under the "Campaign adaptation" variable in the code book.
Coordinated and organized online harassment of an individual or groups of individuals to threaten, censor, or upset them or to disrupt their operations or behavior. Targeted harassment is a Casebook value under the "Strategy" variable in the code book.
The practice of engaging in active care and maintenance of digital places as both a defense mechanism against manipulation and disinformation and in service of preserving the health of intra-cultural expression online.
Gaining exposure by placing information or disinformation artifacts in locations that will be taken up and amplified by other systems, individuals, or publications. Typically, information may be posted on smaller blogs or social media before being reported by mainstream media outlets or politicians and other influential individuals. Trading up the chain is a Casebook value under the "Strategy" variable in the code book.
Engaging in inflammatory, divisive, or distracting behavior in an online community with the goal of provoking readers or viewers into an emotional, often negative, response (e.g., anger, outrage, offense). Trolling is a Casebook value under the "Tactics" variable in the code book.
Twitter is an app and website where logged-in users can post up-to-280-character messages—called "tweets"—and up-to-140-second video/audio messages. These user accounts can like, comment on, and share other users' messages. Some user accounts are "verified" by the company, which bestows special privileges on these accounts, such as more moderation options. Users can choose if they want their profile to be public or private. Anyone without an account can access public tweets but cannot engage with them. The San Francisco-based site was founded in 2006. Twitter is a Casebook value under the "Network Terrain" variable in the code book.
The intentional registration of a domain name that incorporates typographical variants of the target domain name in order to deceive visitors. This may involve misspelling a domain or using a different top-level domain. Typosquatting is a form of cybersquatting, or an attempt to mislead users by fraudulently posing under someone else's brand or copyright. Typosquatting is a Casebook value under the "Tactics" variable in the code book.
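The permutation classes named above—misspellings and alternate top-level domains—can be enumerated programmatically, which is how defenders monitor for typosquats of their own domains. Below is a minimal, illustrative Python sketch; it covers only a few variant classes, whereas dedicated tools also generate homoglyph and keyboard-adjacency permutations.

```python
def typo_variants(domain: str) -> set:
    """Generate a few classes of typographical variants of a domain name,
    as a defender might when monitoring for typosquats. Illustrative only:
    real tools cover many more permutation classes (homoglyphs, keyboard
    adjacency, hyphenation, etc.)."""
    name, _, tld = domain.rpartition(".")
    variants = set()

    # Character omission: "example" -> "xample"
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)

    # Adjacent-character transposition: "example" -> "exmaple"
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        variants.add(swapped + "." + tld)

    # Character doubling: "example" -> "eexample"
    for i in range(len(name)):
        variants.add(name[:i + 1] + name[i] + name[i + 1:] + "." + tld)

    # Alternate top-level domains: "example.com" -> "example.co"
    for alt_tld in ("net", "org", "co", "info"):
        if alt_tld != tld:
            variants.add(name + "." + alt_tld)

    variants.discard(domain)  # never report the legitimate domain itself
    return variants
```

A defender could check which of these candidate domains are actually registered and flag them for review; a manipulator runs the same enumeration in reverse, picking a variant likely to be mistyped or misread.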
Cases where there is insufficient evidence to definitively identify campaign operators or participants. Unclear attribution is a Casebook value under the "Attribution" variable in the code book.
Cases where there is insufficient evidence to suggest adaptation, redeployment, or any other tactical change by campaign operators or participants. Unclear or no observable adaptation is a Casebook value under the "Campaign adaptation" variable in the code book.
There is no discernible strategy based on the available evidence. Unclear strategy is a Casebook value under the "Strategy" variable in the code book.
There is no discernible or apparent individual, group, or issue that the campaign is attempting to discredit, disrupt, criticize, or frame in a negative light based on the available evidence. Unclear target is a Casebook value under the "Targets" variable in the code book.
Vimeo is a video-sharing platform launched in 2004. The site does not run ads, but users pay for subscriptions to the site. Vimeo is a Casebook value under the "Network Terrain" variable in the code book.
Viral sloganeering is the tactic of creating short, catchy phrases intended to deliver persuasive, disruptive messaging. Viral slogans may highlight social wedges, and sow additional divisions along political or cultural lines by capturing social media attention, provoking media coverage, and sometimes garnering institutional responses. These often divisive phrases are used on and offline, and spread virally through memes, hashtags, posters, and videos.
To succeed, viral slogans must expand past their community of origin and creators into the public sphere. With this scale and distance, authorship and origin are often concealed, enabling mainstream media coverage and further amplification without attribution.1 Successful viral slogans often capitalize on breaking news events, and can themselves be the catalyst for news coverage. As such, the outcome of viral sloganeering often is the popularization of previously underused words or phrases—effective tools in keyword squatting or the filling of data voids, which are terms and search queries about which there isn’t much content.2
Current examples of viral sloganeering include “Lock Her Up” (aimed at Hillary Clinton), “Send Her Back” (aimed at Ilhan Omar),3 and “Quarantine is when you restrict movement of sick people. Tyranny is when you restrict the movement of healthy people.”4 Casebook examples of viral sloganeering can be found in “Jobs Not Mobs” and “It’s OK To Be White,” both of which mainstreamed xenophobic and racist talking points.
Viral sloganeering is a Casebook value under the "Tactics" variable in the code book.
- 1. Joan Donovan and Brian Friedberg, “Source Hacking: Media Manipulation In Practice” (Data & Society Research Institute), 2019, https://datasociety.net/wp-content/uploads/2019/09/Source-Hacking_Hi-res.pdf.
- 2. Michael Golebiewski and danah boyd, “Data Voids: Where Missing Data Can Easily Be Exploited” (Data & Society Research Institute), 2018, https://datasociety.net/output/data-voids-where-missing-data-can-easily-be-exploited/.
- 3. Technology and Social Change Project, “Lock Her Up? The Long Tail of Viral Slogans,” Meme War Weekly, April 28, 2020, https://medium.com/memewarweekly/lock-her-up-the-long-tail-of-viral-slogans-562d799b0d07.
- 4. Jazmine Ulloa, “How memes, text chains, and online conspiracies have fueled coronavirus protesters and discord,” The Boston Globe, May 6, 2020, https://www.bostonglobe.com/2020/05/06/nation/how-memes-text-chains-online-conspiracies-haves-fueled-coronavirus-protesters-discord/.
Artificially boosting or decreasing ratings on websites that feature crowd voting by coordinating large groups of people to submit (often false or misleading) reviews or votes with the aim of affecting the final scores.
Political or social issues that are divisive in nature. They typically split along partisan lines, and are often presented as binary positions—for or against. Politicians, political influencers, and those running for office will often amplify these wedges in popular discourse, in mainstream press, and on social media. Wedge issue is a Casebook value under the "Vulnerabilities" variable in the code book.