Below is a list of common terms related to misinformation, disinformation, and media manipulation, as well as specific terms the Casebook uses to code cases.
As the practice of media manipulation evolves in response to changes in policies, regulations, and politics, so do the tactics. Below are some of the ways campaign operators take advantage of our networked media ecosystem.
A website that hosts messageboards on a variety of topics from music to politics to anime. The site is broken down into distinct messageboards where mostly anonymous users post and reply to threads. Threads that garner new replies are "bumped" to the top of the respective messageboard. 4chan is a Casebook value under the "Network Terrain" variable in the code book.
A recently defunct imageboard owned by Jim Watkins that has been linked to the propagation of white supremacy, neo-Nazism, and the manifestos of multiple mass shooters. In 2019, the website was taken offline after the termination of its DNS and domain registration. The site has since re-formed and rebranded itself as "8kun," which primarily comprises QAnon-related imageboards. 8chan is a Casebook value under the "Network Terrain" variable in the code book.
Accounts may be suspended if they violate a social media platform's terms of service, also referred to as community guidelines. Suspensions are often triggered by both automated processes and human review. Users can often appeal a suspension if they believe their account was suspended by mistake. Accounts may be permanently removed if there have been repeated violations of the terms of service. Account suspension is a Casebook value under the "Mitigation" variable in the code book.
A period of time when the normal state of affairs is interrupted by unforeseen events that are troubling and potentially dangerous. Active crises trigger confusion and require urgent action and immediate attention. Due to the increased media attention and importance of any decisions made during this time, active crises are vulnerable to being exploited by media manipulation. Active crisis is a Casebook value under the "Vulnerabilities" variable in the code book.
Individuals or groups that campaign for social, political, or legal change. They may be formally organized (ex. registered non-governmental organization) or loosely affiliated (ex. advocacy networks). Activist group is a Casebook value under the "Targets" variable in the code book.
Concealing the operators and origins of a campaign in order to create the false perception of a grassroots movement or organic support for a specific cause. Astroturfing is a Casebook value under the "Strategy" variable in the code book.
Blocking is an action one user (of a platform or website) takes against another. When one user does not want to see what another user posts or how that user engages on the site, they can opt to "block" that user. After blocking, the blocked user's activity will no longer appear in the blocker's account. Blocking is a Casebook value under the "Mitigation" variable in the code book.
Self-published websites or web pages, with no editorial oversight, that are usually run by an individual or small group and are regularly updated with new content. Blogs are a Casebook value under the "Network Terrain" variable in the code book.
Within the study of disinformation and media manipulation, bots typically refer to social media accounts that are automated and deployed for deceptive purposes, such as to artificially amplify a message, game a trending or recommendation algorithm, or inflate an account's engagement metrics. These accounts are typically centrally controlled or in coordination with each other. Bots are a Casebook value under the "Tactics" variable in the code book.
Periods of heightened attention to current events of local, national, or international importance in mass media and on social media. During these moments of mass attention, legitimate information and misinformation may be indistinguishable until facts are established and vetted by official bodies. This period of confusion creates opportunities to sow confusion, target individuals, or shape certain narratives. Breaking news event is a Casebook value under the "Vulnerabilities" variable in the code book.
Butterfly attacks occur when imposters mimic the patterns of behavior of a social group (usually a group that has to fight for representation). Imposters pretend to be part of the group in order to insert divisive rhetoric and disinformation into popular online conversation or within the information networks used by these groups. Distinct from astroturfing, which tries to falsify grassroots support for an issue, butterfly attacks are designed to infiltrate existing communities, media campaigns, or hashtags to disrupt their operations and discredit the group by sowing divisive, inflammatory, or confusing information. These mimicry campaigns exploit social wedges, prejudice, and encourage targeted harassment.
Coined by Patrick Ryan to describe a series of manipulation campaigns he claims to have orchestrated in 2013, the term butterfly attack is inspired by the mimicry behavior of certain species of butterflies, who impersonate the fluttering patterns of other species to confuse predators.1
Butterfly attack is a Casebook value under the "Strategy" variable in the code book.
- 1. Patrick Ryan, “The Butterfly War,” October 13, 2017, https://cultstate.com/2017/10/13/The-Butterfly-War/.
A cheap fake is altered media that has been changed through conventional and affordable technology. Social media examples of cheap fake techniques include photoshopping (including face swapping), lookalikes, and speeding or slowing video. A cheap fake is easier to produce than a deep fake, which requires advanced technology and machine learning. 1
NPR reported that the effectiveness of cheap fakes has, for the most part, kept deep fakes out of the 2020 presidential election. 2 Who needs to spend money on AI when basic photoshopping will do? Unlike deep fakes, cheap fakes are common. In August 2020, a video of Nancy Pelosi was slowed down to make it appear as if she were intoxicated; a video making the same claim had gone viral in May 2019. The August 2020 post originated on TikTok and spread to YouTube, Facebook, and Twitter. Facebook did not remove the post, but the other three platforms did. 3
- 1. Britt Paris and Joan Donovan, “Deepfakes and Cheap Fakes” (Data and Society Research Institute, September 18, 2019), https://datasociety.net/library/deepfakes-and-cheap-fakes/.
- 2. Tim Mak and Dina Temple-Raston, “Where Are The Deepfakes In This Presidential Election?” NPR, October 1, 2020, https://www.npr.org/2020/10/01/918223033/where-are-the-deepfakes-in-this-presidential-election.
- 3. Hannah Denham, “Another Fake Video of Pelosi Goes Viral on Facebook,” Washington Post, August 3, 2020, https://www.washingtonpost.com/technology/2020/08/03/nancy-pelosi-fake-video-facebook/.
Groups or organizations engaged in advocating for certain issues, educating the wider public, holding the government accountable, or promoting civil and human rights. They may be formally organized or loosely coordinated and include non-governmental organizations (NGOs), community groups, labor unions, educational organizations, faith-based organizations, professional associations, non-profit think tanks, and foundations.
Civil society response refers to actions taken by members or groups of civil society in an attempt to mitigate a campaign's harms or spread. Civil society response is a Casebook value under the "Mitigation" variable in the code book.
A legal proceeding by a private party or parties against another in a civil court of law that seeks remedy for a wrongdoing or harm. Civil/Private Lawsuit is a Casebook value under the "Mitigation" variable in the code book.
The large-scale screening by automated systems and humans of content uploaded to social media sites to remove material that violates the law or individual platforms' Terms of Service.
The exposure of false claims or impersonation attempts by the groups or individuals who are targets of manipulation campaigns. These community-driven debunkings play out visibly on social media, but do not always receive press attention or acknowledgement.
Community moderation is the "cleaning up" of the site by users. It involves flagging, closing, commenting, editing, and sometimes deleting posts that violate Terms of Service and community standards.
Individuals or groups that actively propagate unfounded or unverified narratives and frames. This often includes speculation, unsubstantiated claims, and explanations predicated on secretive and powerful actors scheming with malicious intent. Conspiracist is a Casebook value under the "Attribution" variable in the code book.
Content removal is the act of platforms taking down specific pieces of content, like videos, tweets, posts, etc. The platform's terms of service are often a guideline for what can be removed, though these are rarely enforced uniformly or consistently. Content removal is a Casebook value under the "Mitigation" variable in the code book.
A term coined by Facebook to describe the use of multiple Facebook or Instagram assets, working in concert to misrepresent themselves, artificially boost the popularity of content, or engage in behaviors designed to enable other violations under their Community Standards, and where the use of fake accounts is central to the operation.
A tactic used for countering hate speech and misinformation by advancing alternative narratives and challenging information. Counterspeech is a Casebook value under the "Mitigation" variable in the code book.
All activities involved in the process of investigating and prosecuting a crime including collecting evidence or information pertaining to a crime, apprehending a suspect, and any subsequent related proceedings such as a trial or sentencing.
Critical press refers to press coverage that is critical of a manipulation campaign. Articles may debunk false claims or investigate the origins and motivations of a campaign. Critical press is a Casebook value under the "Mitigation" variable in the code book.
A person or account who disseminates political propaganda on the internet, particularly on social media platforms. They may be paid or unpaid, working independently or in tandem with a group or campaign, and may be automated or manual.
Coined and theorized by Michael Golebiewski and danah boyd,1 this refers to unique topics or terms that result in minimal, low-quality, or manipulative information from search engine queries. Data voids are social or technical security risks depending on the subject matter of the query. Data void is a Casebook value under the "Vulnerabilities" variable in the code book.
- 1. Michael Golebiewski and danah boyd, "Data Voids: Where Missing Data Can Easily Be Exploited," Data & Society, October 29, 2019, https://datasociety.net/library/data-voids/.
Removing a link or other content from search results. The content or website in question is still available but will not be included in a search engine's, website's, or platform's results. De-indexing is a Casebook value under the "Mitigation" variable in the code book.
The removal of individuals or groups from a platform, preventing them from using the platform’s services even if they try to create new accounts. De-platforming is often enforced when a user has violated the terms of service, and may also include removing any existing content previously created by the user. Deplatforming is a Casebook value under the "Mitigation" variable in the code book.
Exposing false or misleading claims, such as sham miracle drugs or exaggerated advertising slogans. Debunking includes fact-checking efforts, research and investigation, exposés, and other critical writing that attempts to correct the false claims. Debunking is a Casebook value under the "Mitigation" variable in the code book.
The use of "deep" or machine learning to hybridize or generate human bodies and faces. The word is a portmanteau of “deep learning” and “fake."
Removing the ability for an account, channel, or individual to earn revenue from their content on the platform.
Digital blackface is the use of visual or linguistic signifiers of Black technoculture by non-black people for the purpose of deception, manipulation, or exploitation.
Discord is an instant messaging software that launched in 2015. Users can send text, video, images, and audio through its channels. Discord is a Casebook value under the "Network Terrain" variable in the code book.
Discussion forums are online places where people can ask questions and post text, images, video, or other content. As the name implies, these platforms act as forums for discussion, allowing replies to posts or other forms of engagement.
Information that is deliberately false or misleading, often spread for political gain, profit, or to discredit a target individual, group, movement, or political party.
A call to participants to rapidly and widely spread campaign materials, including propaganda or disinformation. Distributed amplification relies on many campaign participants to individually share sensitive or banned content on their personal social media accounts in an effort to evade platform mitigation efforts or dominate the information ecosystem with repetitive content. Distributed amplification is a Casebook value under the "Tactics" variable in the code book.
The act of publishing on the internet private or identifying information about a specific individual, usually with malicious intent (e.g., retaliation, punishment). This type of activity is often banned on platforms and forums. Dox is a Casebook value under the "Observable outcomes" variable in the code book.
Refers to the time leading up to an election when candidates have begun campaigning. Depending on the country, there may be legal definitions of what constitutes a campaign period. Election period is a Casebook value under the "Observable outcomes" variable in the code book.
An evidence collage is a collection of screenshots and text that is assembled into a shareable document and presented as evidence. No professional computer applications are required to make an evidence collage; they can be made with basic image editing applications. Evidence collages are often timed to coincide with breaking news events, and are meant to influence both the general public and journalists and to preempt authoritative reporting. Evidence collages are also a call for civilian research, which itself can lead to the dissemination and propagation of unverified information.
By modeling evidence collages after infographics or official documents and by using graphic design to introduce associated keywords and URLs (links to sites with more disinformation), they direct viewers where to investigate and invite them to support the campaign’s sites, resources, hashtags, and communities. Often, evidence collages include both verified and unverified information. This can make them more difficult to debunk. A Casebook example that used evidence collages can be found in “Misidentification: Unite the Right Rally,” where participants interpreted these images as clues and used them to incorrectly identify the driver in a fatal attack. This misidentification led to targeted harassment.
Evidence collages are a Casebook value under the "Tactics" variable in the code book.
Groups or individuals that espouse right-leaning radical or violent positions, often associated with organized white supremacy or other prejudice-driven ideologies. Extremists (right wing) is a Casebook value under the "Attribution" variable in the code book.
A social networking website that allows registered users to create unique profiles, have public and private conversations, join groups, create events, upload photos, etc. The company's source of revenue is selling ads on social media websites and mobile applications. Facebook is a Casebook value under the "Network Terrain" variable in the code book.
The act of investigating information presented as fact in order to determine its veracity. In some cases, a fact check will result in a public refutation if the information investigated was found to be erroneous. Fact-checking, however, is not always perceived as nonpartisan or universally accepted, and may be contested by interested parties.
A sociotechnical mechanism for reporting harmful or offensive content to an online social media platform or company. Content can be flagged by an algorithm, content moderator, or another user. If the content is flagged by another user, an employee from the company often reviews the content and if it violates the terms of service, the user is notified and the content is removed. Flagging is a Casebook value under the "Mitigation" variable in the code book.
The creation and distribution of a fake document with the intent to deceive. Forgery is a Casebook value under the "Tactics" variable in the code book.
A social networking service launched publicly in May 2017. It is known for its right-leaning user base, and as a safe haven for far right extremists. Gab is a Casebook value under the "Network Terrain" variable in the code book.
Attempting to manipulate an algorithm in order to gain attention. This may include tactics that elevate content into a platform’s trending list, being recommended to other users, or placing in the top ten of a search engine’s results. Gaming an algorithm is a Casebook value under the "Strategy" variable in the code book.
Founded in 1998 and headquartered in Mountain View, CA, the multinational company provides internet-related services and products, such as a search engine, cloud computing, online advertising technologies, and software and hardware. Google is a Casebook value under the "Network Terrain" variable in the code book.
Targeted and repeated behavior towards an individual or group of people that causes mental, physical or emotional distress. Harassment includes but is not limited to unwanted threats, insults, touching or offensive language. Harassment is a Casebook value under the "Observable outcomes" variable in the code book.
Press that is sharply divided along binary partisan positions, reflecting fierce disagreement with the opposing side.
Pretending to be another person or member of a social identity group, either by mimicking their behavior or creating a falsified online presence. Impersonation is a Casebook value under the "Tactics" variable in the code book.
A single person. Individual is a Casebook value under the "Targets" variable in the code book.
Visible pundits, journalists, or public figures who drive conversation around particular topics in broadcast media and online networks. Influencers are a Casebook value under the "Attribution" variable in the code book.
Information operations is a broad category of activity but generally refers to the strategic use of technological, operational, and psychological resources to disrupt an adversary's informational capacities. For more information, see Caroline Jack's Lexicon of Lies (Data and Society, 2017) and Martin Libicki's "What Is Information Warfare?" (in Toward a Revolution in Military Affairs?, edited by Thierry Gongora and Harald von Riekhoff, 2000).
Acquired by Facebook in 2012, Instagram is a social network platform that enables users to edit, upload, and comment on photos and short videos, broadcast live videos, and have private conversations using the chat feature. Instagram is a Casebook value under the "Network Terrain" variable in the code book.
Keyword squatting is the tactic of creating online content — including social media accounts — around a specific search-engine-optimized term so as to dominate the search results for that term. Since future searches around that term will return the squatted accounts and content, manipulators are able to direct online traffic to their pages and to influence (to varying success) the narrative around the term.1
The term keyword squatting is adapted from “domain squatting,” which is the process of anticipating which domain names will become popular, buying them, and selling them for a profit when they become sought-after URLs. Both keyword and domain squatting can be methods of online impersonation: by acquiring “misleading account names, URLs, or keywords,” manipulators can appear online as their opponents or as the people/groups that they target.
Through search engine optimization, manipulators can make it so that their hashtags and content tags appear before authentic accounts in search results lists. This tactic can be particularly useful in cases of data voids and hidden virality, where authoritative information on a keyword is missing but interest in the term is growing outside the mainstream view. 2 Keyword squatting allows manipulators to appropriate keywords around breaking news events, social movements, celebrities, and wedge issues. From there, they can use the accounts they created to flood conversations on the given topic with inaccurate or malicious information. Keyword squatting is a tactic manipulators use to instigate media coverage and shape trending conversations on social media.
A Casebook example of keyword squatting can be found in "Targeted Harassment: The Ukraine Whistleblower," where manipulators encouraged participants to share the name and photos of an alleged whistleblower before platforms intervened. In this case, a media blackout and asymmetry within the media ecosystem created the perfect conditions for successful keyword squatting by motivated manipulators.
Keyword Squatting is a Casebook value under the "Tactics" variable in the code book.
- 1. Joan Donovan and Brian Friedberg, “Source Hacking: Media Manipulation in Practice” (Data and Society Research Institute, September 4, 2019), https://datasociety.net/library/source-hacking-media-manipulation-in-practice/.
- 2. Brian Friedberg, “The Dark Virality of a Hollywood Blood-Harvesting Conspiracy,” Wired, July 31, 2020, https://www.wired.com/story/opinion-the-dark-virality-of-a-hollywood-blood-harvesting-conspiracy/.
The use of coded language to discuss topics that are often automatically or manually flagged as breaking a site's terms of service, community standards, or user policies, in an attempt to circumvent censorship, account bans, or other forms of information control.
A lax security practice is anything that makes the user more vulnerable to security attacks or scams, like phishing. An example of a lax security practice is having a password that can be guessed easily or is repeated across multiple accounts. Lax security practice is a Casebook value under the "Vulnerabilities" variable in the code book.
The unauthorized release of sensitive materials or documents.
Self-imposed or state-mandated censorship of a certain news topic. Media blackout is a Casebook value under the "Mitigation" variable in the code book.
Media ecosystems are complex combinations of print, broadcast, digital, and social media that work together to create a self-referential information environment.
Coverage and reporting by journalists in popular or mainstream media. Media exposure is a Casebook value under the "Observable outcomes" variable in the code book.
We define media manipulation as the sociotechnical process whereby motivated actors leverage specific conditions or features within an information ecosystem in an attempt to generate public attention and influence public discourse through deceptive, creative, or unfair means. Campaigns or operations that engage in media manipulation may use several tactics, such as memes, viral videos, forged documents, or leaked information.
They are not exclusive to any actor or group, nor are they inherently good or bad. Activists constrained by heavy censorship in traditional media, for example, may rely on media manipulation in the digital space to circumvent such information controls. However, extremists may likewise use the same platforms and tactics to mainstream hateful and dangerous speech. Media manipulation is also a broad term in that it encompasses a variety of related practices, such as disinformation, information operations, and influence operations.
Note that media manipulation is distinct from media control, which is exercised at the top level by the state and the private sector. Media control instead refers to activity like ISP-level content blocking, government censorship agencies, media ownership, content filtering, and distribution and licensing regimes.
A media outlet is any independent, mainstream, or state-run news organization. Media outlets' formats include newspaper, radio, magazines, television, websites, and social media. Media outlets are a Casebook value under the "Network Terrain" variable in the code book.
The intentional propagation of political memes on social media for the purpose of political persuasion, community building, or to strategically spread narratives and other messaging crucial to a media manipulation campaign. Meme war is a Casebook value under the "Strategy" variable in the code book.
Memes, a term coined by Richard Dawkins (1976), are “units of culture” that spread through the diffusion of ideas. Memes are particularly salient online because the internet crystallizes them as artifacts of communication and accelerates their distribution through subcultures. Memes are a Casebook value under the "Tactics" variable in the code book.
Erroneously identifying an individual as someone else, intentionally or accidentally. Misidentification is a Casebook value under the "Observable outcomes" variable in the code book.
Misinfographics are infographics with false or misleading information. In some cases, they may also be classified as a forgery when they borrow an existing organization's brand aesthetics and logo in order to make it seem as if the content was coming from the organization.
The #SaveTheChildren hashtag within QAnon uses misinfographics to publicize human trafficking statistics. Vox notes that moms on Instagram sharing “aesthetically pleasing” posts have been “critical” to its spread. 1
A Casebook example of a misinfographic can be found in “Misinfographic: The Spread of JihadChart in India,” where a misinfographic titled “Jihad: The Holy war to spread Islam” appeared on Facebook, Reddit, and Twitter to capitalize on and further promote anti-Muslim bias.
Misinfographics are a Casebook value under the "Tactics" variable in the code book.
- 1. Rebecca Jennings, “We’re in the Middle of Another Moral Panic. What Can We Learn from the Past?,” Vox, September 25, 2020, https://www.vox.com/the-goods/2020/9/25/21453036/save-the-children-qanon-human-trafficking-satantic-panic.
Inaccurate information that is spread unknowingly, without the intent to deceive.
Attempts, measures, and other actions taken by the private sector, government, media organizations, and civil society in an attempt to contain or prevent the continuation of a campaign, its effects, or messaging. Mitigation is both a stage in the life cycle, and one of the variables in the code book under which Casebook codes fall.
Distribution of conflicting information in an attempt to cloud public perception of an individual, group, or topic, making the target subject more complex or confusing. Muddy the waters is a Casebook value under the "Strategy" variable in the code book.
Tacit coalitions or groups of people who share some, but not all, political positions, primarily congregate online (though not exclusively), and often come together as a swarm to act in unison as a political force. Networked factions maintain these coalitions using shared phrases, hashtags, memes, or similar media. These factions can form and dissolve according to the political context. Networked faction is a Casebook value under the "Attribution" variable in the code book.
Platforms that have both an editorial arm and a self-publishing arm that allows users to publish and post their own articles and other content. Examples include Medium and Buzzfeed Community. Open editorial platforms are a Casebook value under the "Network Terrain" variable in the code book.
The part of the web that is not protected by passwords, and thus accessible to anyone.
A strong supporter or committed member of a party, cause, or person. Partisans are a Casebook value under the "Attribution" variable in the code book.
Fraudulently posing as a trustworthy entity in a malicious attempt to access confidential information such as usernames, passwords, and credit card details, usually by means of email. Phishing is a Casebook value under the "Tactics" variable in the code book.
Web-based technology that allows users of that platform to generate content and engage in peer-to-peer conversations and other forms of engagement (ex. likes, follows, retweets).
When a political party or politician adopts or co-opts a phrase, term, or idea for politically-motivated purposes. Political adoption is a Casebook value under the "Observable outcomes" variable in the code book.
A group of people sharing similar ideology or political positions who participate in elections by fielding candidates that will then carry out their goals and policies. Political party is a Casebook value under the "Targets" variable in the code book.
A politician is a person engaged in party politics or occupying public office. Because of their visibility, policies, or affiliations, a politician can be the target of disinformation campaigns. Politician is a Casebook value under the "Targets" variable in the code book.
Individuals who engage in activity designed to elicit a reaction from a target purely for fun or mischief. Pranksters are a Casebook value under the "Attribution" variable in the code book.
A bias that can result in an injury or detriment to another individual's legal rights or claims, wellbeing, or participation in society. Such preconceived judgements are not informed by facts and often target an individual or group based on race, religion, sexual orientation, age, class, or other demographic identifier. Prejudice is a Casebook value under the "Vulnerabilities" variable in the code book.
Technology that allows for peer-to-peer interactions, which are private by default and require users to be invited and give consent to participate. Information sent via private messaging may or may not be encrypted.
The deliberate spread of information or ideas to influence a person, group, institution, or nation in support of — or in opposition to — a particular cause. Propaganda is often coded as "white," "grey," or "black": white propaganda is overt, with a clear source; grey propaganda has muddy or unclear origins; and black propaganda disguises its origins, often portraying itself as the target it is trying to discredit.
Publicly available information pertaining to individuals, organizations, companies, or any other entity that has been aggregated into an accessible, searchable, and organized format. Public directory is a Casebook value under the "Vulnerabilities" variable in the code book.
Racialized disinformation campaigns employ the strategic use of falsified racial or ethnic identities and/or focus on race as a wedge issue.
Recognition by target is when a target of a media manipulation or disinformation campaign acknowledges and responds to the campaign's activities or the operators. Recognition by target is a Casebook value under the "Observable outcomes" variable in the code book.
Recontextualized media is any image, video, or audio clip that has been taken out of its original context and reframed for an entirely different purpose or narrative frame. While cheap fakes, more broadly, alter the media, recontextualized media uses unaltered images, video, or audio but presents them in a new or false context according to the manipulators’ agenda.
During the early protests against the murder of George Floyd in June 2020, many recontextualized images spread on social media. One showed an image from the TV show Designated Survivor but claimed it was from a Black Lives Matter protest; another photo of a McDonald’s burning in 2016 was reframed as though it was from a current protest.1 A Casebook example of recontextualized media can be found in the case “Targeted Harassment: The spread of #Coronajihad,” where videos were re-captioned to exploit bias against Muslims and to blame Muslims for the spread of coronavirus in India.
Recontextualized media is a Casebook value under the "Tactics" variable in the code book.
- 1. Jane Lytvynenko and Craig Silverman, “We’re Keeping A Running List Of Hoaxes And Misleading Posts About The Nationwide Police Brutality Protests,” BuzzFeed News, June 5, 2020, https://www.buzzfeednews.com/article/janelytvynenko/hoax-misleading-claims-george-floyd-protests.
Activities with the goal of enlisting or drawing new followers or members to a political party, social movement, extremist organization, ideology, or other distinct movement, group, or idea.
Reddit is a website where users can post information and prompts. These posts and prompts get responses and up or down votes from other users, which rank the display of the content. The website is divided into user-created categories called "subreddits." The San Francisco-based site was founded in 2005. Reddit is a Casebook value under the "Network Terrain" variable in the code book.
Individual or coordinated group efforts to establish the origins and impact of a manipulation campaign. Research and investigation is a Casebook value under the "Mitigation" variable in the code book.
Individuals or groups involved in scientific research, medicine, or healthcare. This may include scientists, researchers, research labs, scientific organizations, health authorities, doctors, nurses, and other healthcare professionals. Scientific and medical community is a Casebook value under the "Targets" variable in the code book.
Manipulation with the aim of getting people to give up confidential information through trickery and deception rather than technical exploits. These attacks often take advantage of emotions, trust, or habit in order to convince individuals to take actions, such as clicking a fraudulent link, visiting a malicious website, or giving up login credentials. (Adapted from Forcepoint).
Groups defined by some social, physical, or mental characteristics. Examples include race, ethnicity, gender, social class, sexual orientation, or religious beliefs. Social identity group is a Casebook value under the "Targets" variable in the code book.
Groupings of individuals or organizations which focus on political or social issues.
A network of websites and software that links internet users together, often with the intention of fostering social interaction and promoting the exchange of goods and services.
Used as an adjective, the word sociotechnical typically describes something that exists due to a mix of both social and technical conditions. Within the study of media manipulation, for example, sociotechnical is often used to describe specific campaigns or other media phenomena because they emerge only from the combination of social conditions and technical features.
A false online identity, typically created by a person or group in order to promote a specific narrative or opinion, to sow division, or to circumvent a previous account ban.
A versatile set of techniques for feeding false information to journalists, investigators, and the general public during breaking news events or across highly polarized wedge issues. Specifically, source hacking exploits situations and technologies to obscure the authorship of false claims. For more, read Source Hacking (Data and Society, 2019) by Joan Donovan and Brian Friedberg.
Best practices for ensuring responsibility and accountability when producing news content and the algorithmic systems that help spread it. For more, read "Stop the Presses? Moving From Strategic Silence to Strategic Amplification in a Networked Media Ecosystem" by Joan Donovan and danah boyd (American Behavioral Scientist, September 2019, doi:10.1177/0002764219878229).
The use of editorial discretion for the public good — for example, journalistic or editorial standards that discourage reporting on suicide. For more, read "Stop the Presses? Moving From Strategic Silence to Strategic Amplification in a Networked Media Ecosystem" by Joan Donovan and danah boyd (American Behavioral Scientist, September 2019, doi:10.1177/0002764219878229).
When loosely organized online groups come together for specific objectives or campaigns, such as spamming a comment section, engaging in harassment, or obfuscating a hashtag. Swarming is a Casebook value under the "Tactics" variable in the code book.
The continuation of a media manipulation or disinformation campaign with adjustments to the tactics. Tactical adjustment is a Casebook value under the "Campaign adaptation" variable in the code book.
The redeployment of a media manipulation or disinformation campaign's tactics. Tactical redeployment is a Casebook value under the "Campaign adaptation" variable in the code book.
Coordinated and organized online harassment of an individual or groups of individuals to threaten, censor, or upset them or to disrupt their operations or behavior. Targeted harassment is a Casebook value under the "Strategy" variable in the code book.
The practice of engaging in active care and maintenance of digital places as both a defense mechanism against manipulation and disinformation and in service of preserving the health of intra-cultural expression online.
Gaining exposure by placing information or disinformation artifacts in locations where they will be taken up and amplified by other systems, individuals, or publications. Typically, information is first posted on smaller blogs or social media before being reported by mainstream media outlets, politicians, or other influential individuals. Trading up the chain is a Casebook value under the "Strategy" variable in the code book.
Engaging in inflammatory, divisive, or distracting behavior in an online community with the goal of provoking readers or viewers into an emotional, often negative, response (ex. anger, outrage, offense). Trolling is a Casebook value under the "Tactics" variable in the code book.
Twitter is an app and website where logged-in users can post up-to-280-character messages — called "tweets" — and up-to-140-second video/audio messages. These user accounts can like, comment on, and share other users' messages. Some user accounts are "verified" by the company, which bestows on the accounts special privileges, such as more moderation options. Users can choose if they want their profile to be public or private. Anyone without an account can access public tweets but cannot engage with them. The San Francisco-based site was founded in 2006. Twitter is a Casebook value under the "Network Terrain" variable in the code book.
The intentional registration of a domain name that incorporates typographical variants of the target domain name in order to deceive visitors. This may involve misspelling a domain or using a different top-level domain. Typosquatting is a form of cybersquatting, or an attempt to mislead users by fraudulently posing under someone else's brand or copyright. Typosquatting is a Casebook value under the "Tactics" variable in the code book.
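As an illustration of how such typographical variants arise — a minimal sketch, not drawn from the Casebook, in which the domain "example.com" and the specific variant rules (character omission, transposition, doubling, and alternate top-level domains) are chosen for demonstration only:

```python
# Sketch: generate common typosquatting variants of a domain.
# The variant rules below are illustrative examples, not an exhaustive taxonomy.

def typo_variants(domain: str) -> set[str]:
    name, _, tld = domain.rpartition(".")
    variants = set()
    # Character omission, e.g. "exmple.com"
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)
    # Adjacent-character transposition, e.g. "examlpe.com"
    for i in range(len(name) - 1):
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:] + "." + tld)
    # Character doubling, e.g. "exaample.com"
    for i in range(len(name)):
        variants.add(name[: i + 1] + name[i] + name[i + 1:] + "." + tld)
    # Different top-level domain, e.g. "example.net"
    for alt_tld in ("net", "org", "co"):
        if alt_tld != tld:
            variants.add(name + "." + alt_tld)
    variants.discard(domain)
    return variants

print(sorted(typo_variants("example.com"))[:5])
```

Defenders and brand owners use similar enumeration to preemptively register or monitor lookalike domains before manipulators can.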
Cases where there is insufficient evidence to definitively identify campaign operators or participants. Unclear attribution is a Casebook value under the "Attribution" variable in the code book.
Cases where there is insufficient evidence to suggest adaptation, redeployment, or any other tactical change by campaign operators or participants. Unclear or no observable adaptation is a Casebook value under the "Campaign adaptation" variable in the code book.
Cases where there is no discernible strategy based on the available evidence. Unclear strategy is a Casebook value under the "Strategy" variable in the code book.
Vimeo is a video-sharing platform launched in 2004. The site does not run ads, but users pay for subscriptions to the site. Vimeo is a Casebook value under the "Network Terrain" variable in the code book.
Viral sloganeering is the tactic of creating short, catchy phrases intended to deliver persuasive, disruptive messaging. Viral slogans may highlight social wedges, and sow additional divisions along political or cultural lines by capturing social media attention, provoking media coverage, and sometimes garnering institutional responses. These often divisive phrases are used on and offline, spread virally through memes, hashtags, posters, and videos.
To succeed, viral slogans must expand past their community of origin and creators into the public sphere. With this scale and distance, authorship and origin are often concealed, enabling mainstream media coverage and further amplification without attribution.1 Successful viral slogans often capitalize on breaking news events, and can themselves be the catalyst for news coverage. As such, the outcome of viral sloganeering is often the popularization of previously underused words or phrases — effective tools in keyword squatting or the filling of data voids, which are terms and search queries about which there isn’t much content.2
Current examples of viral sloganeering include “Lock Her Up” (aimed at Hillary Clinton), “Send Her Back” (aimed at Ilhan Omar),3 and “Quarantine is when you restrict movement of sick people. Tyranny is when you restrict the movement of healthy people.”4 Casebook examples of viral sloganeering can be found in “Jobs Not Mobs” and “It’s OK To Be White,” both of which mainstreamed xenophobic and racist talking points.
Viral sloganeering is a Casebook value under the "Tactics" variable in the code book.
- 1. Joan Donovan and Brian Friedberg, “Source Hacking: Media Manipulation In Practice” (Data & Society Research Institute), 2019, https://datasociety.net/wp-content/uploads/2019/09/Source-Hacking_Hi-res.pdf.
- 2. Michael Golebiewski and danah boyd, “Data Voids: Where Missing Data Can Easily Be Exploited” (Data & Society Research Institute), 2018, https://datasociety.net/output/data-voids-where-missing-data-can-easily-be-exploited/.
- 3. Technology and Social Change Project, “Lock Her Up? The Long Tail of Viral Slogans,” Meme War Weekly, April 28, 2020, https://medium.com/memewarweekly/lock-her-up-the-long-tail-of-viral-slogans-562d799b0d07.
- 4. Jazmine Ulloa, “How memes, text chains, and online conspiracies have fueled coronavirus protesters and discord,” The Boston Globe, May 6, 2020, https://www.bostonglobe.com/2020/05/06/nation/how-memes-text-chains-online-conspiracies-haves-fueled-coronavirus-protesters-discord/.
Artificially boosting or decreasing ratings on websites that feature crowd voting by coordinating large groups of people to submit often false or misleading reviews or votes with the aim of affecting scores.
Political or social issues that are divisive in nature. They typically split along partisan lines, and are often presented as binary positions – for or against. Politicians, political influencers, and those running for office will often amplify these wedges in popular discourse, in mainstream press, and on social media. Wedge issues are a Casebook value under the "Vulnerabilities" variable in the code book.