
Disinformation


Wikipedia


Disinformation is false or misleading information deliberately spread to deceive people,[1][2][3][4][5] or to secure economic or political gain and which may cause public harm.[6] Disinformation is an orchestrated adversarial activity in which actors employ strategic deceptions and media manipulation tactics to advance political, military, or commercial goals.[7] Disinformation is implemented through coordinated campaigns[8] that "weaponize multiple rhetorical strategies and forms of knowing—including not only falsehoods but also truths, half-truths, and value judgements—to exploit and amplify culture wars and other identity-driven controversies."[9]

In contrast, misinformation refers to inaccuracies that stem from inadvertent error.[10] Misinformation can be used to create disinformation when known misinformation is purposefully and intentionally disseminated.[11] "Fake news" has sometimes been categorized as a type of disinformation, but scholars have advised not using these two terms interchangeably or using "fake news" altogether in academic writing since politicians have weaponized it to describe any unfavorable news coverage or information.[12]

Etymology

The Etymology of Disinformation by H. Newman, as published in The Journal of Information Warfare in 2021.[13][14] Elements of the word disinformation have their origins in the Proto-Indo-European language family. The Latin 'dis' and 'in' can both be considered to have Proto-Indo-European roots, while 'forma' is considerably more obscure. The green box in the figure highlights that the origin of 'forma' is uncertain; it may have its roots in the Aristotelian concept of μορφή (morphe), in which something becomes a 'thing' when it has 'form' or substance.

The English word disinformation comes from the application of the Latin prefix dis- to information making the meaning "reversal or removal of information". The rarely used word had appeared with this usage in print at least as far back as 1887.[15][16][17][18]

Some consider it a loan translation of the Russian дезинформация, transliterated as dezinformatsiya,[19][1][2] apparently derived from the title of a KGB black propaganda department.[20][1][21][19] Soviet planners in the 1950s defined disinformation as "dissemination (in the press, on the radio, etc.) of false reports intended to mislead public opinion."[22]

Disinformation first made an appearance in dictionaries in 1985, specifically Webster's New College Dictionary and the American Heritage Dictionary.[23] In 1986, the term disinformation was not defined in Webster's New World Thesaurus or New Encyclopædia Britannica.[19] After the Soviet term became widely known in the 1980s, native speakers of English broadened the term to mean "any government communication (either overt or covert) containing intentionally false and misleading material, often combined selectively with true information, which seeks to mislead and manipulate either elites or a mass audience."[2]

By 1990, use of the term disinformation had fully established itself in the English language within the lexicon of politics.[24] By 2001, the term disinformation had come to be known as simply a more civil phrase for saying someone was lying.[25] Stanley B. Cunningham wrote in his 2002 book The Idea of Propaganda that disinformation had become pervasively used as a synonym for propaganda.[26]

Operationalization


The Shorenstein Center at Harvard University defines disinformation research as an academic field that studies "the spread and impacts of misinformation, disinformation, and media manipulation", including "how it spreads through online and offline channels, and why people are susceptible to believing bad information, and successful strategies for mitigating its impact".[27] According to a 2023 research article published in New Media & Society,[7] disinformation circulates on social media through deception campaigns implemented in multiple ways including: astroturfing, conspiracy theories, clickbait, culture wars, echo chambers, hoaxes, fake news, propaganda, pseudoscience, and rumors.

Activities that operationalize disinformation campaigns online[7]

Astroturfing: A centrally coordinated campaign that mimics grassroots activism by making participants pretend to be ordinary citizens
Clickbait: The deliberate use of misleading headlines and thumbnails to increase online traffic for profit or popularity
Conspiracy theories: Rebuttals of official accounts that propose alternative explanations in which individuals or groups act in secret
Culture wars: A phenomenon in which multiple groups of people, who hold entrenched values, attempt to steer public policy contentiously
Doxxing: A form of online harassment that breaches privacy boundaries by releasing information intended to cause physical and online harm to a target
Echo chamber: An epistemic environment in which participants encounter beliefs and opinions that coincide with their own
Fake news: As a genre, the deliberate creation of pseudo-journalism; as a label, the instrumentalization of the term to delegitimize news media
Greenwashing: Deceptive communication that makes people believe a company is environmentally responsible when it is not
Hoax: News in which false facts are presented as legitimate
Propaganda: Organized mass communication, with a hidden agenda and a mission to conform belief and action by circumventing individual reasoning
Pseudoscience: Accounts that claim the explanatory power of science and borrow its language and legitimacy but diverge substantially from its quality criteria
Rumors: Unsubstantiated news stories that circulate while not corroborated or validated
Trolling: Networked groups of digital influencers that operate 'click armies' designed to mobilize public sentiment
Urban legends: Moral tales featuring durable stories of intruders incurring boundary transgressions and their dire consequences

To distinguish between similar terms, including misinformation and malinformation, scholars collectively agree on the following definitions: (1) disinformation is the strategic dissemination of false information with the intention to cause public harm;[28] (2) misinformation is the unintentional spread of false information; and (3) malinformation is factual information disseminated with the intention to cause harm.[29][30] Together these terms are abbreviated 'DMMI'.[31]
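The DMMI distinction turns on two properties of a piece of content: whether it is false and whether it is spread with intent to harm or deceive. A minimal sketch of that decision logic, using hypothetical field names rather than any published coding scheme, might look like this:

```python
# Illustrative sketch only: a toy decision rule for the DMMI distinctions
# described above. The field names are hypothetical and not drawn from any
# cited framework's implementation.

from dataclasses import dataclass

@dataclass
class InformationItem:
    is_false: bool        # Is the content verifiably false?
    harmful_intent: bool  # Was it spread deliberately to deceive or cause harm?

def classify_dmmi(item: InformationItem) -> str:
    """Map an item onto the disinformation/misinformation/malinformation categories."""
    if item.is_false and item.harmful_intent:
        return "disinformation"   # false content, strategically spread to cause public harm
    if item.is_false:
        return "misinformation"   # false content, spread without harmful intent
    if item.harmful_intent:
        return "malinformation"   # factual content, disseminated with intent to cause harm
    return "information"          # factual and benign

# Example usage
print(classify_dmmi(InformationItem(is_false=True, harmful_intent=True)))   # disinformation
print(classify_dmmi(InformationItem(is_false=True, harmful_intent=False)))  # misinformation
print(classify_dmmi(InformationItem(is_false=False, harmful_intent=True)))  # malinformation
```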

In 2019, Camille François devised the "ABC" framework for understanding different modalities of online disinformation, distinguishing between manipulative Actors, deceptive Behaviors, and harmful Content.[32]

In 2020, the Brookings Institution proposed amending this framework to include Distribution, defined by the "technical protocols that enable, constrain, and shape user behavior in a virtual space".[33] Similarly, the Carnegie Endowment for International Peace proposed adding Degree ("distribution of the content ... and the audiences it reaches") and Effect ("how much of a threat a given case poses").[34]
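As an illustration of how the amended framework might be applied, the sketch below records one dimension per field. The field names simply restate the framework vocabulary from the sources above; the example case and its values are hypothetical.

```python
# Hypothetical sketch: the ABC framework plus the proposed Distribution, Degree,
# and Effect extensions, expressed as a simple record for analysts' case notes.
# This is not an official schema from any of the cited organizations.

from dataclasses import dataclass

@dataclass
class DisinformationCase:
    actors: str        # A: the manipulative actors behind the campaign
    behavior: str      # B: the deceptive techniques employed (bots, fake accounts, etc.)
    content: str       # C: the harmful or false narrative being pushed
    distribution: str  # Brookings addition: protocols and platform affordances shaping spread
    degree: str        # Carnegie addition: how far the content travelled and which audiences it reached
    effect: str        # Carnegie addition: how much of a threat the case poses

example_case = DisinformationCase(
    actors="unattributed network of coordinated accounts",
    behavior="astroturfing amplified by automated reposting",
    content="fabricated claim about a public-health measure",
    distribution="recommendation algorithms and cross-platform resharing",
    degree="small, ideologically aligned audience",
    effect="low measurable impact on attitudes",
)
print(example_case)
```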

Comparisons with propaganda


Whether and to what degree disinformation and propaganda overlap is subject to debate. Some (like the U.S. Department of State) define propaganda as the use of non-rational arguments to either advance or undermine a political ideal, and use disinformation as an alternative name for undermining propaganda,[35] while others consider them to be separate concepts altogether.[36] One popular distinction holds that disinformation also describes politically motivated messaging designed explicitly to engender public cynicism, uncertainty, apathy, distrust, and paranoia, all of which disincentivize citizen engagement and mobilization for social or political change.[22]

Practice


Disinformation is the label often given to foreign information manipulation and interference (FIMI).[37][38] Studies on disinformation are often concerned with the content of activity whereas the broader concept of FIMI is more concerned with the "behaviour of an actor" that is described through the military doctrine concept of tactics, techniques, and procedures (TTPs).[37]

Disinformation is primarily carried out by government intelligence agencies, but has also been used by non-governmental organizations and businesses.[39] Front groups are a form of disinformation, as they mislead the public about their true objectives and who their controllers are.[40] Most recently, disinformation has been deliberately spread through social media in the form of "fake news", disinformation masked as legitimate news articles and meant to mislead readers or viewers.[41] Disinformation may include distribution of forged documents, manuscripts, and photographs, or spreading dangerous rumours and fabricated intelligence. Use of these tactics can lead to blowback, however, causing unintended consequences such as defamation lawsuits or damage to the dis-informer's reputation.[40]

Worldwide


Disinformation is widely seen as a pressing challenge for democracies worldwide,[42][43] spreading information that can undermine democracy and inspire violent or dangerous actions.

Soviet disinformation

Former Romanian secret police senior official Ion Mihai Pacepa exposed the history of disinformation in his book Disinformation (2013).[44]
Use of disinformation as a Soviet tactical weapon started in 1923,[45] when it became a tactic of Soviet political warfare known as active measures.[46]

Russian disinformation


Russian disinformation campaigns have occurred in many countries.[47][48][49][50] For example, disinformation campaigns led by Yevgeny V. Prigozhin, a Russian oligarch, have been reported in several African countries.[51][52] Russia, however, denies that it uses disinformation to influence public opinion.[53]

Russian campaigns often aim to disrupt domestic politics within Europe and the United States in an attempt to weaken the West, reflecting Russia's long-standing commitment to push back against "Western imperialism" and to shift the balance of world power toward Russia and its allies. According to the Voice of America, Russia seeks to promote American isolationism, border security concerns and racial tensions within the United States through its disinformation campaigns.[54][55][56]

Chinese disinformation

A low-quality video produced by Spamouflage, featuring an AI-generated newscaster with a stiff, robotic voice, alleging that the US and India are secretly selling weapons to Myanmar.[57]
Spamouflage, Dragonbridge, Spamouflage Dragon, Storm 1376, or Taizi Flood is an online propaganda and disinformation operation that has been using a network of social media accounts to make posts in favor of the Chinese Communist Party and government of the People's Republic of China and harass dissidents and journalists overseas since 2017.[58][59][60] Beginning in the early 2020s, Spamouflage accounts also began making posts about American and Taiwanese politics.[61][62] It is widely believed that the Chinese government, particularly the Ministry of Public Security, is behind the network.[63][59][64][65][62] Spamouflage has increasingly used generative artificial intelligence for influence operations.[57] The campaign has largely failed to receive views from real users,[61] although it has attracted some organic engagement using new tactics.[65][66]: 2 

American disinformation

How Disinformation Can Be Spread, explanation by U.S. Defense Department (2001)

The United States Intelligence Community appropriated use of the term disinformation in the 1950s from the Russian dezinformatsiya, and began to use similar strategies[67][68] during the Cold War and in conflict with other nations.[21] The New York Times reported in 2000 that during the CIA's effort to substitute Mohammed Reza Pahlavi for then-Prime Minister of Iran Mohammad Mossadegh, the CIA placed fictitious stories in the local newspaper.[21] Reuters documented how, subsequent to the 1979 Soviet Union invasion of Afghanistan during the Soviet–Afghan War, the CIA put false articles in newspapers of Islamic-majority countries, inaccurately stating that Soviet embassies had "invasion day celebrations".[21] Reuters noted a former U.S. intelligence officer said they would attempt to gain the confidence of reporters and use them as secret agents, to affect a nation's politics by way of their local media.[21]

In October 1986, the term gained increased currency in the U.S. when it was revealed that two months previously, the Reagan Administration had engaged in a disinformation campaign against then-leader of Libya, Muammar Gaddafi.[69] White House representative Larry Speakes said reports of a planned attack on Libya as first broken by The Wall Street Journal on August 25, 1986, were "authoritative", and other newspapers including The Washington Post then wrote articles saying this was factual.[69] U.S. State Department representative Bernard Kalb resigned from his position in protest over the disinformation campaign, and said: "Faith in the word of America is the pulse beat of our democracy."[69]

The executive branch of the Reagan administration kept watch on disinformation campaigns through three yearly publications by the Department of State: Active Measures: A Report on the Substance and Process of Anti-U.S. Disinformation and Propaganda Campaigns (1986); Report on Active Measures and Propaganda, 1986–87 (1987); and Report on Active Measures and Propaganda, 1987–88 (1989).[67]

According to a report by Reuters, the United States ran a propaganda campaign to spread disinformation about the Sinovac Chinese COVID-19 vaccine, including using fake social media accounts to spread the disinformation that the Sinovac vaccine contained pork-derived ingredients and was therefore haram under Islamic law.[70] Reuters said the ChinaAngVirus disinformation campaign was designed to "counter what it perceived as China's growing influence in the Philippines" and was prompted by the "[fear] that China's COVID diplomacy and propaganda could draw other Southeast Asian countries, such as Cambodia and Malaysia, closer to Beijing".[70] The campaign was also described as "payback for Beijing's efforts to blame Washington for the pandemic".[71] The campaign primarily targeted people in the Philippines and used a social media hashtag for "China is the virus" in Tagalog.[70] The campaign ran from 2020 to mid-2021.[70] The primary contractor for the U.S. military on the project was General Dynamics IT, which received $493 million for its role.[70]

Since 2023, Republican members of the US Congress have attacked researchers who study disinformation, characterizing their work as an assault on freedom of speech and as a euphemism for government censorship.[72][73] On April 18, 2025, citing an Executive Order signed by Trump,[74][75] the US National Science Foundation released a statement cancelling funding for disinformation research,[76] stating that such awards were not aligned with NSF priorities, "including but not limited to those on diversity, equity, and inclusion (DEI) and misinformation/disinformation."[77]

Response


Responses from cultural leaders


Pope Francis condemned disinformation in a 2016 interview, after being made the subject of a fake news website during the 2016 U.S. election cycle which falsely claimed that he supported Donald Trump.[78][79][80] He stated that the worst thing the news media could do was spread disinformation and said the act was a sin,[81][82] comparing those who spread disinformation to individuals who engage in coprophilia.[83][84]

Ethics in warfare


In a contribution to the 2014 book Military Ethics and Emerging Technologies, writers David Danks and Joseph H. Danks discuss the ethical implications of using disinformation as a tactic during information warfare.[85] They note there has been a significant degree of philosophical debate over the issue as it relates to the ethics of war and the use of the technique.[85] The writers describe a position whereby the use of disinformation is occasionally allowed, but not in all situations.[85] Typically the ethical test to consider is whether the disinformation was performed out of a motivation of good faith and is acceptable according to the rules of war.[85] By this test, the World War II tactic of putting fake inflatable tanks in visible locations on the Pacific Islands in order to create the false impression that larger military forces were present would be considered ethically permissible.[85] Conversely, disguising a munitions plant as a healthcare facility in order to avoid attack would be outside the bounds of acceptable use of disinformation during war.[85]

Research

Disinformation spreads through controversies.[9]

Research related to disinformation studies is increasing as an applied area of inquiry.[86][87] Advocates have called for disinformation to be formally classified as a cybersecurity threat, citing its increase on social networking sites.[88] Although disinformation has proliferated across social media websites, Facebook and Twitter showed the most activity in terms of active disinformation campaigns. Techniques reported on included the use of bots to amplify hate speech, the illegal harvesting of data, and paid trolls to harass and threaten journalists.[89]

Whereas disinformation research focuses primarily on how actors orchestrate deceptions on social media, primarily via fake news, new research investigates how people take what started as deceptions and circulate them as their personal views.[9] As a result, research shows that disinformation can be conceptualized as a program that encourages engagement in oppositional fantasies (i.e., culture wars), through which disinformation circulates as rhetorical ammunition for never-ending arguments.[9] As disinformation entangles with culture wars, identity-driven controversies constitute a vehicle through which disinformation disseminates on social media. This means that disinformation thrives, not despite raucous grudges but because of them. The reason is that controversies provide fertile ground for never-ending debates that solidify points of view.[9]

Scholars have pointed out that disinformation is not only a foreign threat, as domestic purveyors of disinformation are also leveraging traditional media outlets such as newspapers, radio stations, and television news media to disseminate false information.[90] Current research suggests right-wing online political activists in the United States may be more likely to use disinformation as a strategy and tactic.[91] Governments have responded with a wide range of policies to address concerns about the potential threats that disinformation poses to democracy; however, there is little agreement in elite policy discourse or academic literature as to what it means for disinformation to threaten democracy, and how different policies might help to counter its negative implications.[92]

Consequences of exposure to disinformation online


There is a broad consensus amongst scholars that there is a high degree of disinformation, misinformation, and propaganda online; however, it is unclear what effect such disinformation has on political attitudes in the public and, therefore, on political outcomes.[93] This conventional wisdom has come mostly from investigative journalists, with a particular rise during the 2016 U.S. election: some of the earliest work came from Craig Silverman at Buzzfeed News.[94] Cass Sunstein supported this in #Republic, arguing that the internet would become rife with echo chambers and informational cascades of misinformation, leading to a highly polarized and ill-informed society.[95]

Research after the 2016 election found that: (1) for 14 percent of Americans, social media was their "most important" source of election news; (2) known false news stories "favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared 8 million times"; (3) the average American adult saw fake news stories, "with just over half of those who recalled seeing them believing them"; and (4) people are more likely to "believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks."[96] Correspondingly, whilst there is wide agreement that the digital spread and uptake of disinformation during the 2016 election was massive and very likely facilitated by foreign agents, there is an ongoing debate on whether all this had any actual effect on the election. For example, a double-blind randomized controlled experiment by researchers from the London School of Economics (LSE) found that exposure to online fake news about either Trump or Clinton had no significant effect on intentions to vote for those candidates. Researchers who examined the influence of Russian disinformation on Twitter during the 2016 US presidential campaign found that exposure to disinformation was (1) concentrated among a tiny group of users, (2) primarily among Republicans, and (3) eclipsed by exposure to legitimate political news media and politicians. Finally, they found "no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior."[97] As such, despite its mass dissemination during the 2016 presidential election, online fake news or disinformation probably did not cost Hillary Clinton the votes needed to secure the presidency.[98]

Research on this topic remains inconclusive; for example, misinformation appears not to significantly change the political knowledge of those exposed to it.[99] There seems to be a higher level of diversity of news sources that users are exposed to on Facebook and Twitter than conventional wisdom would dictate, as well as a higher frequency of cross-spectrum discussion.[100][101] Other evidence has found that disinformation campaigns rarely succeed in altering the foreign policies of the targeted states.[102]

Research is also challenging because disinformation is meant to be difficult to detect and some social media companies have discouraged outside research efforts.[103] For example, researchers found disinformation made "existing detection algorithms from traditional news media ineffective or not applicable...[because disinformation] is intentionally written to mislead readers...[and] users' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy."[103] Facebook, the largest social media company, has been criticized by analytical journalists and scholars for preventing outside research of disinformation.[104][105][106][107]

Alternative perspectives and critiques


Researchers have criticized the framing of disinformation as being limited to technology platforms, removed from its wider political context and inaccurately implying that the media landscape was otherwise well-functioning.[108] According to one such critique, "The field possesses a simplistic understanding of the effects of media technologies; overemphasizes platforms and underemphasizes politics; focuses too much on the United States and Anglocentric analysis; has a shallow understanding of political culture and culture in general; lacks analysis of race, class, gender, and sexuality as well as status, inequality, social structure, and power; has a thin understanding of journalistic processes; and, has progressed more through the exigencies of grant funding than the development of theory and empirical findings."[109]

Alternative perspectives have been proposed:

  1. Moving beyond fact-checking and media literacy to study a pervasive phenomenon as something that involves more than news consumption.
  2. Moving beyond technical solutions, including AI-enhanced fact-checking, to understand the systemic basis of disinformation.
  3. Developing theory that goes beyond Americentrism toward a global perspective, understanding cultural imperialism and Third World dependency on Western news,[110] and understanding disinformation in the Global South.[111]
  4. Developing market-oriented disinformation research that examines the financial incentives and business models that nudge content creators and digital platforms to circulate disinformation online.[7][112]
  5. Including a multidisciplinary approach involving history, political economy, ethnic studies, feminist studies, and science and technology studies.
  6. Developing understandings of gender-based disinformation (GBD), defined as "the dissemination of false or misleading information attacking women (especially political leaders, journalists and public figures), basing the attack on their identity as women."[113][114]

Strategies for spreading disinformation


Disinformation attack


The research literature on how disinformation spreads is growing.[93] Studies show that the spread of disinformation on social media can be classified into two broad stages: seeding and echoing.[9] "Seeding" is when malicious actors strategically insert deceptions, such as fake news, into a social media ecosystem, and "echoing" is when the audience disseminates disinformation argumentatively as their own opinions, often by incorporating it into a confrontational fantasy.

Internet manipulation


Internet manipulation is the use of online digital technologies, including algorithms, social bots, and automated scripts, for commercial, social, military, or political purposes.[115][116] Internet and social media manipulation are the prime vehicles for spreading disinformation due to the importance of digital platforms for media consumption and everyday communication.[117] When employed for political purposes, internet manipulation may be used to steer public opinion,[118] polarise citizens,[119] circulate conspiracy theories,[120] and silence political dissidents. Internet manipulation can also be done for profit, for instance, to harm corporate or political adversaries and improve brand reputation.[121] Internet manipulation is sometimes also used to describe the selective enforcement of Internet censorship[122][123] or selective violations of net neutrality.[124]

Internet manipulation for propaganda purposes with the help of data analysis and internet bots in social media is called computational propaganda.

Studies show four main methods of seeding disinformation online:[93]

  1. Selective censorship
  2. Manipulation of search rankings
  3. Hacking and releasing
  4. Directly sharing disinformation

Exploiting online advertising technologies


Disinformation is amplified online due to malpractice concerning online advertising, especially the machine-to-machine interactions of real-time bidding systems.[125] Online advertising technologies have been used to amplify disinformation due to the financial incentives and monetization of user-generated content and fake news.[112] Lax oversight of the online advertising market can also be exploited to amplify disinformation, including through the use of dark money for political advertising.[126]


Grokipedia

Disinformation refers to false or misleading information deliberately created and disseminated with the intent to deceive, manipulate perceptions, or obscure reality. Unlike misinformation, which involves the unintentional spread of inaccuracies, disinformation requires purposeful agency and is often deployed in coordinated efforts to achieve strategic objectives, such as undermining adversaries or shaping public opinion.[1][2]

The concept traces its roots to the Russian term dezinformatsiya, formalized in the Soviet Union during the early 20th century as a component of intelligence operations known as "active measures." A dedicated disinformation office was established in 1923, and the practice was defined in the Great Soviet Encyclopedia (1952) as the deliberate provision of false information to misinform the enemy or the public.[3] This tactic was extensively utilized by the KGB throughout the Cold War to fabricate narratives, plant rumors, and erode trust in Western institutions through covert channels.[3]

In the modern era, disinformation has adapted to digital platforms, enabling rapid amplification via social media and algorithmic echo chambers, though empirical assessments reveal that claims of widespread disinformation often emanate from sources with institutional biases, such as academia and legacy media, which systematically undervalue dissenting empirical data in favor of narrative conformity. Defining characteristics include plausible deniability, repetition for normalization, and exploitation of cognitive vulnerabilities like confirmation bias, rendering detection reliant on rigorous causal analysis over authoritative pronouncements.[4] Controversies arise from its dual-edged nature: while genuine disinformation poses risks to informed decision-making, the term's invocation frequently serves as a tool for censorship, as evidenced by state and corporate efforts to label politically inconvenient facts as such, thereby inverting truth-seeking priorities.[5]

Definition and Conceptual Foundations

Etymology and Historical Terminology

The term "disinformation" originates from the Russian word dezinformatsiya (дезинформация), which was coined in 1923 by Soviet intelligence services under Joseph Stalin's regime.[3][6] This neologism combined the prefix dez- (indicating reversal or removal) with informatsiya (information), denoting the deliberate creation and dissemination of false information to deceive adversaries.[3] In that year, the Bolshevik Party Politburo established a specialized Disinformation Bureau (Dezinformburo) within the state security apparatus to systematize such tactics, marking the term's formal institutionalization as a tool of counterintelligence and political control.[4] The concept emerged in the context of early Soviet operations, such as Operation Trust (1921–1926), a counterintelligence effort by the GPU (predecessor to the KGB) that fabricated a phony anti-Bolshevik underground organization to lure and neutralize exiled opponents through planted misinformation.[7] By the 1952 Great Soviet Encyclopedia, dezinformatsiya was defined as "false information with the intention to deceive public opinion," distinguishing it from mere propaganda by emphasizing strategic deception over overt persuasion.[3] This usage reflected a shift from ad hoc wartime ruses—evident in ancient texts like Sun Tzu's The Art of War (c. 5th century BCE), which advised "all warfare is based on deception"—to structured, state-orchestrated campaigns aimed at undermining enemies through fabricated narratives rather than battlefield feints alone.[3] Post-World War II, dezinformatsiya expanded in Soviet doctrine to encompass comprehensive information warfare, integrating media manipulation and agent provocateurs to erode Western alliances and ideologies.[6] The English term "disinformation" first appeared in Western intelligence analyses in the 1950s, but gained prominence in the 1980s through accounts by high-ranking defectors, including Romanian intelligence chief Ion Mihai Pacepa, who in 1978 detailed KGB methodologies that popularized the loanword in public discourse.[6] Pacepa's revelations underscored the term's connotation of intentional falsehoods, contrasting with "misinformation," which lacks deliberate deceit, thereby embedding a precise, pejorative framing rooted in Cold War adversarial tactics.[6] Disinformation refers to information that is verifiably false and deliberately disseminated with the intent to deceive or mislead a target audience.[8] This distinguishes it from misinformation, which involves false or inaccurate information shared without deliberate intent to harm or deceive, often arising from errors, misunderstandings, or unintentional sharing.[8] For instance, empirical analyses of information disorders categorize misinformation as lacking the causal mechanism of purposeful fabrication, relying instead on cognitive biases or hasty dissemination.[8] In contrast, malinformation entails genuine facts or data repurposed out of context to inflict harm, such as selective quoting or timing to manipulate perceptions, without altering the underlying truth but exploiting it for deceptive ends.[9] These boundaries hinge on intent and verifiability: disinformation requires evidence of fabrication and targeted deceit, often traceable through perpetrator admissions or declassified operational records, whereas misinformation and malinformation may stem from negligence or opportunistic misuse absent coordinated malice.[8] Propaganda, while overlapping in persuasive aims, diverges from disinformation through its broader 
use of partial truths, selective omissions, or ideological framing to advance a specific agenda, rather than relying exclusively on outright falsehoods.[10] Studies of historical and modern campaigns highlight propaganda's causal emphasis on long-term attitude shaping via repeated messaging, which may incorporate verifiable elements to build credibility, unlike disinformation's direct fabrication for immediate erosion of trust or confusion.[11] For example, state actors have employed propaganda to glorify national narratives using factual events, but disinformation campaigns fabricate events entirely, as seen in declassified Soviet "active measures" documents revealing invented atrocity stories.[12] Loose conflation of the two risks diluting analytical precision, particularly when dissenting hypotheses—such as the COVID-19 lab-leak origin theory, initially branded as disinformation by media and officials in 2020 despite lacking evidence of fabrication—are retroactively validated by intelligence assessments.[13] The FBI, with moderate confidence, concluded in 2023 that a laboratory-associated incident in Wuhan was the most likely origin, underscoring how premature "disinformation" labels can obscure causal inquiry when intent to deceive is unproven.[14] Psychological operations (psyops), typically military or strategic efforts to influence adversary emotions, motives, and behaviors, encompass disinformation as one tool but extend to non-deceptive tactics like morale-building or deterrence messaging.[15] Empirical reviews of psyops doctrine emphasize their structured application within conflict zones, often verifiable via operational logs, differing from civilian disinformation's diffuse, non-military propagation through media or networks.[16] While psyops may deploy fabricated narratives for tactical deception, their causal mechanisms prioritize behavioral outcomes over sustained public deception, as in wartime leaflet drops combining truth and lies; disinformation, by contrast, operates independently of formal command structures, complicating attribution without admissions or forensic traces.[15] This distinction prevents overgeneralization, where non-state actors' viral falsehoods are misclassified as psyops absent evidence of organized influence campaigns.[17]

Historical Development

Pre-Modern and Early Modern Examples

In ancient Rome, Octavian employed disinformation tactics against Mark Antony during their power struggle in the 30s BCE, disseminating claims that Antony had become subservient to Cleopatra, portraying him as effeminate and under Eastern influence through speeches, written pamphlets, and coinage imagery depicting Antony in subservient poses.[18] This campaign, leveraging Antony's alliance with Cleopatra to evoke fears of foreign domination, eroded his support in Italy and contributed to his defeat at the Battle of Actium on September 2, 31 BCE, after which Octavian consolidated power as Augustus.[19]

During the Black Death pandemic of 1347–1351, false accusations spread across Europe that Jews were poisoning water wells to spread the plague, a rumor originating in Savoy in 1348 and rapidly disseminating via oral networks and local authorities despite lack of evidence.[20] These claims incited pogroms, resulting in the deaths of an estimated 200–900 Jews in Strasbourg alone on February 14, 1349, and similar massacres in hundreds of communities, even as Pope Clement VI issued bulls on July 6, 1348, and September 26, 1348, exonerating Jews and attributing the plague to natural causes.[21]

In early modern England, the Popish Plot of 1678 involved fabricated testimonies by Titus Oates alleging a Jesuit conspiracy to assassinate King Charles II and install his Catholic brother James, claims that, though unsupported by physical evidence, fueled anti-Catholic hysteria through parliamentary inquiries and sermons.[22] This disinformation led to the execution of 35 individuals, primarily Catholic priests, between 1679 and 1681, before Oates's perjury was exposed in 1685, highlighting how elite-orchestrated falsehoods could exploit religious tensions in a society with literacy rates below 30% among the populace.[23]

The Great Moon Hoax of 1835 exemplifies print media's emerging capacity for deception, as the New York Sun published six articles from August 25 to September 1 claiming British astronomer John Herschel had discovered lunar life forms, including bat-winged humanoids, via a fictional advanced telescope.[24] Authored pseudonymously by Richard Adams Locke to boost circulation—which rose from 2,300 to nearly 20,000 daily copies—the hoax deceived readers for weeks until Herschel's denial, demonstrating sensationalism's appeal in an era of expanding newspapers amid U.S. literacy rates around 70–80% in urban areas.[25] These instances reveal disinformation's reliance on personal networks, elite endorsement, and pre-industrial communication limits, constraining propagation to regional scales unlike later mass dissemination, with effects amplified by prevailing low verification mechanisms and societal vulnerabilities such as religious prejudice or scientific credulity.[26]

20th Century Origins and State-Sponsored Operations

During World War II, state-sponsored deception operations proliferated as integral to military strategy, with both Allied and Axis powers deploying disinformation to mislead adversaries. The British executed Operation Mincemeat in April 1943, equipping a corpse disguised as a fictional Royal Marines officer with fabricated documents suggesting an Allied invasion of Greece and Sardinia rather than Sicily; these papers, planted off the coast of Spain, convinced German high command to redirect forces, contributing to the successful Allied landing in Sicily on July 10, 1943.[27] Similarly, Nazi Germany under Joseph Goebbels' Propaganda Ministry disseminated false reports and staged deceptions, such as fabricated radio broadcasts and dummy airfields to divert Allied reconnaissance and bombing efforts, exemplifying how totalitarian regimes integrated disinformation into total war tactics without regard for post-hoc ethical constraints.[28]

The Cold War marked the systematic institutionalization of disinformation through Soviet "active measures," a KGB doctrine originating in the 1920s but peaking in the mid-20th century as a covert toolkit encompassing forgery, agent recruitment, and media manipulation to undermine Western societies. Declassified KGB directives reveal that Service A (the disinformation unit) orchestrated thousands of operations, including planting over 5,500 fabricated stories in foreign media in 1975 alone to sow distrust in capitalist institutions.[29] A prominent example was Operation INFEKTION (also known as Denver), initiated by the KGB around 1983, which falsely claimed the United States engineered HIV/AIDS at Fort Detrick as a biological weapon; the narrative originated in a March 1983 article in the Indian newspaper Patriot and proliferated globally via proxies, eroding U.S. credibility during the Reagan era.[30] Evidence from the Mitrokhin Archive—KGB files smuggled out by archivist Vasili Mitrokhin, who defected in 1992—documents the KGB's orchestration of such campaigns, including coordination with Eastern Bloc allies to amplify reach, underscoring the state's prioritization of ideological subversion over factual accuracy.[31]

In response to Soviet efforts, the United States engaged in counter-disinformation while developing its own media influence programs, as revealed in congressional probes. The CIA's Project Mockingbird, active from the early 1950s through the 1970s, involved recruiting journalists and establishing relationships with over 400 American media figures to shape domestic and international reporting against communist narratives, though declassified records indicate it focused more on anti-Soviet placement than wholesale fabrication.[32] The 1975 Church Committee investigation, formally the U.S. Senate Select Committee to Study Governmental Operations with Respect to Intelligence Activities, exposed these ties in its interim report on CIA intelligence collection, criticizing overreach but noting that claims of pervasive control often exceeded documented evidence, with most journalists acting as informal assets rather than directed propagandists.[33] This bipartisan use of state media operations during the 20th century highlighted disinformation's evolution from wartime expediency to peacetime ideological warfare, driven by mutual perceptions of existential threats.

Post-Cold War and Digital Era Evolution

Following the end of the Cold War in 1991, disinformation shifted from predominantly state-orchestrated broadcasts to decentralized networks enabled by the internet's expansion, allowing non-state actors and smaller groups to propagate false narratives via forums and early websites. In Russia, precursors to organized online influence operations emerged during the Chechen wars (1994–1996 and 1999–2009), where hacker collectives like the Siberian Network Brigade coordinated cyberattacks and disinformation to shape perceptions of the conflicts, marking an early adaptation of digital tools for narrative control beyond traditional media.[34] This era saw disinformation evolve from top-down propaganda to participatory online ecosystems, though empirical evidence of its scale remains limited by the nascent state of digital tracking at the time.

The rise of social media in the late 2000s accelerated this decentralization, with platforms facilitating rapid spread during the Arab Spring uprisings of 2010–2011, where initial mobilization via Twitter and Facebook coexisted with unverified rumors and regime-sponsored counter-narratives that sowed confusion and amplified divisions.[35][36] By enabling viral dissemination without gatekeepers, these tools democratized disinformation production, shifting emphasis from elite control to crowd-sourced manipulation, though studies indicate the platforms' role in outcomes was overstated relative to offline factors like economic grievances.

In the 2020s, artificial intelligence exacerbated propagation speeds, as seen in AI-generated deepfakes during the 2024 U.S. presidential election, where fabricated videos circulated but post-election analyses found their direct influence on voter behavior marginal amid broader information overload.[37] Climate-related disinformation similarly surged around extreme weather events, with Statista data showing elevated shares of misleading social media claims—peaking during hurricanes and wildfires—between April 2023 and April 2025, often questioning anthropogenic causes despite consensus scientific attribution.[38] A 2025 Pew Research Center survey across 25 nations revealed that a median of 72% of respondents viewed online false information as a major national threat, reflecting heightened global awareness amid regulatory pushes like Singapore's 2019 fake news law.[39][40] Yet causal connections to tangible harms, such as election interference, remain empirically contested, with research highlighting weak correlations between exposure and behavioral shifts due to confounding variables like preexisting biases and low discernment rates.[41] This underscores challenges in isolating disinformation's effects from organic misinformation in decentralized digital environments.

Methods and Techniques

Traditional Psychological and Media Manipulation

Traditional psychological manipulation in disinformation encompasses tactics such as false flag operations and controlled opposition, designed to exploit cognitive vulnerabilities like trust in apparent allies and misattribution of aggression. False flag operations involve staging events to appear as enemy actions, thereby justifying retaliation or policy shifts. A prominent pre-World War II example is the Gleiwitz incident on August 31, 1939, where Nazi SS operatives, disguised as Poles, attacked a German radio station near the Polish border, broadcasting anti-German messages to simulate Polish aggression; this deception, part of Operation Himmler, provided a pretext for Germany's invasion of Poland the next day.[42] Similarly, the Soviet Union's Shelling of Mainila on November 26, 1939, entailed artillery fire on its own border village, falsely blamed on Finland, enabling the USSR to launch the Winter War three days later despite international skepticism.[42] These operations succeeded in mobilizing domestic support and international acquiescence initially, though later revelations from declassified records underscored their fabricated nature.[43]

Controlled opposition techniques create illusory resistance movements to infiltrate, monitor, and neutralize genuine adversaries. The Soviet Cheka's Operation Trust (1921–1926) exemplifies this, fabricating the Monarchist Organization of Central Russia as an underground anti-Bolshevik network to lure White Russian émigrés and monarchists back to the USSR. Posing as high-level officials like a supposed deputy to White leader Boris Savinkov, agents deceived figures including British spy Sidney Reilly, leading to their arrest and execution; the operation eliminated over 100 targets and demoralized émigré groups without direct confrontation.[44] Declassified assessments confirm its efficacy in sustaining deception for five years through forged documents and staged communications, disrupting counter-revolutionary efforts until a defector exposed it in 1927.[45]

Media manipulation integrates disinformation by embedding false narratives into established outlets, leveraging authority and repetition to foster belief. Yellow journalism in the 1890s, pioneered by publishers William Randolph Hearst and Joseph Pulitzer, amplified unverified atrocity stories against Spanish rule in Cuba, such as exaggerated accounts of reconcentration camp horrors, boosting circulation and public outrage. Following the USS Maine explosion in Havana harbor on February 15, 1898—later attributed to an internal coal bunker fire rather than Spanish mines—headlines like Hearst's "DESTRUCTION OF THE WAR SHIP MAINE WAS THE WORK OF AN ENEMY" demanded war, contributing to U.S. intervention in April 1898 despite lacking evidence of sabotage.[46] Hearst's New York Journal saw daily sales rise from 400,000 to nearly 1 million amid the frenzy, illustrating how sensationalism exploits emotional priming over factual scrutiny.[47]

These tactics draw on psychological principles including conformity to perceived consensus and deference to authority figures. Edward Bernays, in his 1928 book Propaganda, outlined engineering consent through media by appealing to unconscious desires, using third-party endorsements from leaders to simulate grassroots support, as in his orchestration of women's smoking campaigns via staged "Torches of Freedom" events.[48] Repetition reinforces familiarity, akin to Nazi propagandist Joseph Goebbels' emphasis on simple, unrelenting messaging to embed falsehoods, where sustained exposure overrides initial doubt through cognitive ease rather than evidence.[49] Historical analogs to Solomon Asch's 1951 conformity experiments—where 75% of participants yielded to incorrect group judgments at least once—demonstrate how media-amplified narratives mimic social pressure, eroding independent verification in favor of apparent majority views. Empirical outcomes from declassified operations reveal these methods' potency when unopposed, though source tracing and defections eventually unveiled fabrications, highlighting the causal role of unchecked repetition in sustaining deception.

Digital Propagation and Social Media Exploitation

Social media platforms facilitate the rapid digital propagation of disinformation through algorithmic recommendations that prioritize content maximizing user engagement, often amplifying novel or emotionally charged falsehoods over verified information. A 2018 study analyzing over 126,000 Twitter cascades from 2006 to 2017 found that false news diffused significantly farther, faster, deeper, and more broadly than true news, reaching 1,500 people six times quicker on average, primarily due to human sharing rather than bots.[50][51] This dynamic exploits platform vulnerabilities, where engagement metrics—likes, shares, and retweets—drive visibility, creating feedback loops that reward sensationalism irrespective of accuracy.[52]

Bot networks and astroturfing operations further exploit these mechanisms by simulating grassroots support to seed and amplify disinformation at scale. During the 2016 U.S. presidential election, Russia's Internet Research Agency (IRA) operated troll farms employing hundreds of personnel to manage fake accounts across platforms like Facebook and Twitter, generating content that reached an estimated 126 million users on Facebook alone through organic shares and paid promotion.[53] These efforts involved astroturfing tactics, such as coordinating inauthentic accounts to mimic organic movements (e.g., posing as Black Lives Matter activists or pro-Trump rallies), which boosted visibility via algorithmic promotion of high-interaction posts.[54] Empirical analysis of IRA activity showed coordinated posting patterns that evaded detection, with bots amplifying human-shared content to achieve virality metrics comparable to legitimate campaigns.[55]

Echo chambers exacerbate propagation by leveraging confirmation bias, where users preferentially engage with and share content aligning with preexisting beliefs, fostering isolated networks that reinforce disinformation. Algorithms compound this by curating feeds to retain users through personalized content, reducing exposure to countervailing views and enabling rapid intra-group spread. In the 2024 U.S. election, disinformation narratives—such as unsubstantiated claims about voter fraud or candidate misconduct—gained traction within partisan communities, shaping perceptions despite limited crossover to broader audiences, as evidenced by platform data showing concentrated shares among ideologically aligned users.[56] However, research indicates that while virality within echo chambers is pronounced, overall electoral impacts from such dynamics remain empirically modest, with studies finding no causal link to vote shifts amid abundant competing information.[57] Confirmation-driven sharing thus sustains disinformation persistence but is constrained by network boundaries and user fatigue.[58]

Online advertising systems enable exploitation through microtargeting, where advertisers use granular user data to deliver tailored disinformation ads to susceptible demographics, often bypassing content moderation. Foreign actors, including state-linked operations, have leveraged this for influence campaigns; for instance, IRA-purchased Facebook ads in 2016 targeted swing-state voters with divisive messaging, amassing over 3,500 ads viewed millions of times before removal.[59] By 2024-2025, European Union assessments identified ongoing foreign information manipulation via targeted ads, prompting measures like the Transparency and Targeting of Political Advertising (TTPA) regulation, which bans microtargeting based on sensitive data to curb such operations amid rising threats from actors like Russia and China.[60][61] These exploits rely on platforms' ad auction algorithms, which optimize for click-through rates, inadvertently prioritizing provocative falsehoods that yield higher returns for bad-faith advertisers.[62]
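The feedback loop described above can be illustrated with a toy ranking function: a ranker tuned purely for predicted engagement surfaces emotionally charged or novel posts regardless of their accuracy. The weights, fields, and posts below are entirely hypothetical and do not represent any platform's actual algorithm.

```python
# Toy model of an engagement-optimizing feed ranker. All weights and fields are
# hypothetical; no real platform's ranking system is implied.

from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    text: str
    predicted_engagement: float  # expected likes/shares per impression (0..1)
    novelty: float               # how unfamiliar the claim is to the audience (0..1)
    accuracy: float              # independent fact-check score (0..1), ignored by the ranker

def engagement_score(post: Post) -> float:
    # The ranker rewards expected engagement and novelty and never consults accuracy,
    # which is the feedback loop described in the text.
    return 0.7 * post.predicted_engagement + 0.3 * post.novelty

def rank_feed(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Verified report, modest engagement", 0.2, 0.1, 0.95),
    Post("Fabricated shock claim, high engagement", 0.9, 0.9, 0.05),
])
for post in feed:
    print(f"{engagement_score(post):.2f}  {post.text}")
# The fabricated post ranks first because accuracy never enters the score.
```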

Advanced Technologies Including AI and Deepfakes

Advanced technologies, particularly artificial intelligence (AI) systems capable of generating synthetic media, have expanded the toolkit for disinformation creators since 2020 by enabling rapid production of realistic text, images, audio, and video. Deepfakes—AI-manipulated videos or audio that superimpose one person's likeness onto another's—emerged as a prominent concern, with tools like DeepFaceLab and Faceswap democratizing creation for non-experts. In 2024, during global elections affecting over half the world's population, instances included a fabricated audio deepfake of U.S. President Joe Biden urging New Hampshire voters to skip primaries, which platforms like X (formerly Twitter) labeled but did not remove, and a video purporting to show Ukrainian President Volodymyr Zelenskyy surrendering, debunked by fact-checkers via inconsistencies in audio waveforms and metadata timestamps.[63][64] Despite proliferation, controlled experiments indicate deepfakes exert no greater persuasive influence on political perceptions than traditional fake news videos, with viewers' skepticism often matching that toward non-AI fabrications.[65][66]

AI-driven text generation has similarly lowered barriers to scalable disinformation, allowing actors to produce convincing articles or social media posts en masse. Large language models like GPT variants can generate coherent narratives mimicking journalistic style, as seen in post-2020 campaigns where AI-synthesized content flooded platforms with geopolitical falsehoods, such as amplified Russian narratives on Ukraine. At the Black Hat Middle East and Africa conference in October 2025, experts forecasted that by year's end, AI would enable hyper-personalized disinformation at unprecedented volumes, potentially evading moderation through iterative refinement.[67][68] However, these tools require human intent and curation to deploy effectively; absent deliberate malice, they do not autonomously originate falsehoods, underscoring that technology amplifies agency rather than supplants it.[69]

Detection capabilities have advanced in tandem, mitigating some risks. Watermarking protocols, such as Google's SynthID and the Coalition for Content Provenance and Authenticity (C2PA) standards, embed imperceptible digital signatures in AI outputs, enabling forensic verification with reported accuracies exceeding 90% for compliant models in 2024 tests.[70][71] Empirical reviews of deepfake exposure reveal harms often overstated, with pilot studies showing minimal shifts in belief or behavior compared to baseline misinformation, and speculation on societal disruption outpacing verifiable causal evidence.[72][73] Calls for stringent regulation risk curtailing beneficial AI innovations, such as in scientific modeling, without proportionally addressing root human incentives for deception.[37]

By 2026, disinformation campaigns had increasingly leveraged advanced AI for synthetic media creation, including deepfakes and generated content, to influence elections, public opinion, and geopolitical narratives. Key examples from recent years include an AI-generated deepfake video during Ireland's 2025 presidential election that falsely depicted candidate Catherine Connolly announcing her withdrawal from the race, which went viral before being removed by platforms such as Meta. In the Netherlands, synthetic images were used to target political opponents amid election disruptions. Broader trends involved narrative inflation, the recycling of decontextualized footage, and coordinated inauthentic behavior on social media platforms. State actors, including Iran and Israel in the context of 2026 regional conflicts involving the U.S., deployed direct fabrications such as false claims of U.S. casualties or destroyed military assets. Global analyses highlighted AI's role in amplifying cyber fraud, propaganda, and emotional manipulation, exacerbating challenges to democratic processes through enhanced plausible deniability and difficulties in distinguishing authentic from fabricated content.

Notable Examples and Campaigns

Authoritarian State Efforts (Soviet/Russia, China)

The Soviet Union's KGB employed "active measures" as a core component of its intelligence operations, involving the dissemination of fabricated information to foreign media through forgeries, leaks, and rumors aimed at undermining Western societies and promoting Soviet interests.[74] Lt. Gen. Ion Mihai Pacepa, the highest-ranking Soviet bloc defector in 1978, detailed in his 2013 book Disinformation how the KGB orchestrated campaigns such as framing the U.S. for AIDS origins and infiltrating religious movements to erode anti-communist sentiments, drawing from internal directives he oversaw.[75] These efforts prioritized long-term narrative planting over immediate effects, with verifiable examples including the 1959 forgery alleging U.S. biological weapons tests on POWs, which circulated in global press for decades.[76]

Post-Soviet Russia adapted these tactics through state media and cyber units, notably during the 2014 Crimea annexation, where GRU-linked actors and outlets like RT propagated claims of a spontaneous local uprising against a "fascist" Kyiv government, masking Russian troop involvement and referendum irregularities.[77] In the 2022 Ukraine invasion, Russian officials amplified false narratives of U.S.-funded biolabs developing ethnic-specific bioweapons, echoing Soviet-era accusations to justify preemptive action, despite UN and WHO rejections of these claims as lacking evidence.[78][79] The Mueller Report documented GRU hacking and social media influence operations targeting the 2016 U.S. election, including 126 million Facebook interactions, but concluded there was insufficient evidence that these altered vote outcomes, attributing limited causal impact amid domestic factors.[80][81] Amid the 2026 escalations in the Middle East involving Iran, Israel, and the United States, disinformation intensified with state-affiliated efforts spreading fabricated claims—often AI-enhanced—regarding military outcomes, including exaggerated successes, false reports of U.S. casualties, and destroyed assets to manipulate perceptions and geopolitical narratives.

China's United Front Work Department coordinates influence operations blending propaganda and disinformation to shape overseas perceptions, often through state-linked actors rather than overt military channels.[82] During the 2020-2021 COVID-19 outbreak, Chinese diplomats and media deflected scrutiny from Wuhan lab origins by promoting unsubstantiated theories of U.S. Army importation of the virus to Wuhan or frozen food transmission, amplified via official channels to counter lab-leak hypotheses deemed credible by some U.S. intelligence assessments.[83][84] In Taiwan, ByteDance's TikTok algorithm has been observed pushing pro-Beijing content, with 2025 studies finding frequent users 20-30% more likely to endorse unification narratives or distrust local institutions, correlating with a 60% rise in detected Chinese disinformation since 2022 per Taiwan's National Security Bureau.[85][86] PLA-linked operations, as analyzed in declassified reports, emphasize cognitive warfare via social media amplification, though empirical evidence of decisive electoral or policy shifts remains sparse, often overshadowed by algorithmic self-selection.[87] Leaks like the 2023 Vulkan files reveal Russian cyber tools supporting similar hybrid efforts, but quantifiable budgets for disinformation subunits evade public verification, with operations relying on deniable proxies over direct funding traces.[88]

Western and Democratic Contexts (US, Europe)

In the United States, the Central Intelligence Agency's MKUltra program, initiated on April 13, 1953, and running until 1973, involved covert experiments on unwitting subjects using LSD, hypnosis, and other techniques aimed at developing mind-control and interrogation methods.[89] These operations, which included dosing individuals without consent in settings such as prisons and universities, constituted a form of state-sponsored psychological manipulation that deceived participants and the public about the program's existence and ethical violations.[90] The program's abuses were exposed in 1975 by the Church Committee, a Senate select committee investigating intelligence-agency overreach, which revealed how the CIA had destroyed many records to conceal the extent of non-consensual testing and its lack of scientific rigor.[91][92]

Another notable U.S. example occurred in the lead-up to the 2003 Iraq invasion, when the Bush administration publicly asserted that intelligence confirmed Saddam Hussein's possession of active weapons-of-mass-destruction programs, including chemical and biological stockpiles, as justification for military action.[93] Post-invasion searches yielded no such stockpiles, prompting declassified reviews that identified systemic intelligence failures, such as overreliance on unverified sources like the defector code-named Curveball and a failure to challenge assumptions about Iraq's capabilities despite dissenting analyses.[94] The 2005 report of the Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction concluded that these errors stemmed from analytical flaws and groupthink within agencies, though critics debate whether political pressure amplified misleading assessments to build public support for war.[93] The Church Committee's earlier framework had highlighted a pattern of intelligence agencies presenting selective or exaggerated information to policymakers and the public, underscoring risks in democratic oversight of covert operations.[91]

In Europe, allegations of disinformation surfaced during the United Kingdom's Brexit referendum on June 23, 2016, in which pro-Leave campaigns emphasized sovereignty and immigration concerns, prompting claims of Russian state-backed efforts via social media bots and funding to amplify division.[95] Investigations, including by the UK Parliament's Intelligence and Security Committee, found evidence of attempted Russian influence operations targeting the vote but assessed their reach as limited compared to organic public debate driven by verifiable economic data and policy frustrations.[95] Critiques from security analysts argue that emphasizing foreign disinformation overstated its causal role, potentially to undermine the 52% Leave victory by attributing it to manipulation rather than voter priorities such as EU migration policy, which empirical polling data consistently ranked high before the referendum.[96] In the U.S. context, veteran disillusionment with leadership and media narratives has persisted, with many drawing parallels between perceived institutional corruption in overseas conflicts and domestic failures, further eroding trust in official information sources and amplifying skepticism toward mainstream accounts.

More recent examples in Europe demonstrate the continued evolution of these tactics in democratic settings. During Ireland's 2025 presidential election, a high-quality AI deepfake video falsely showed candidate Catherine Connolly withdrawing her candidacy, sparking concerns over electoral integrity, official complaints, and platform interventions. The Netherlands' elections similarly encountered interference through AI-generated synthetic images attacking political figures, contributing to broader discussion of technological vulnerabilities during voting periods.

Regarding recent U.S. elections, the 2020 presidential contest saw widespread claims from then-President Trump and allies of systemic fraud, particularly in mail-in voting expanded due to COVID-19, citing alleged irregularities such as late-night ballot dumps, unsigned affidavits, and chain-of-custody issues in states such as Pennsylvania and Georgia. Over 60 lawsuits challenging the results were largely dismissed on procedural grounds or for lack of standing, with federal and state officials, including CISA, concluding there was no evidence of fraud on a scale that would affect outcomes, though databases document hundreds of proven fraud cases nationwide from that period.[97][98] These assertions fueled the "Stop the Steal" narrative, but statistical analyses found no systematic patterns sufficient to overturn results, while acknowledging that procedural changes introduced untested vulnerabilities in verification processes.[99] In democratic contexts, such disputes highlight tensions between official validations and public perceptions of transparency, with empirical reviews emphasizing isolated incidents over coordinated disinformation campaigns.[97]

Non-State and Ideological Campaigns

Non-state actors, including nongovernmental organizations (NGOs), advocacy groups, partisan media outlets, and technology companies, have pursued disinformation campaigns driven by ideological motivations rather than state directives. These efforts often involve amplifying selective data, suppressing counter-evidence, or framing narratives to mobilize support for policy agendas, complicating empirical verification due to the absence of the centralized records or declassified materials typical of state operations. Studies highlight methodological hurdles in detecting such campaigns, including difficulty distinguishing deliberate deception from genuine interpretive disagreements and overreliance on anecdotal rather than aggregate data to assess intent and impact.[100][101]

In environmental advocacy, left-leaning NGOs and aligned media have propagated exaggerated climate alarmism since the 1980s, such as predictions of rapid Arctic ice disappearance rendering the region ice-free by 2013 or widespread famine from warming by the 2000s, which satellite observations and agricultural yield data contradict. These narratives, disseminated through outlets such as the United Nations Environment Programme, aimed to catalyze global policy shifts but eroded credibility when timelines failed, as global crop production rose 20% per decade amid modest warming. Conversely, fossil fuel industry groups, including ExxonMobil and the American Petroleum Institute, funded denialist campaigns from the 1990s onward, internally acknowledging CO2-driven warming risks as early as 1977 while publicly sponsoring think tanks to question anthropogenic causes, per subpoenaed documents revealing coordinated messaging to delay emissions regulations.[102][103]

Partisan disinformation in U.S. politics exemplifies ideological entrenchment. In 2020, left-leaning public health influencers and officials dismissed the COVID-19 lab-leak theory as a "conspiracy" via coordinated emails and media statements, despite private admissions of its plausibility, as shown in Anthony Fauci's correspondence, which recorded early concerns over viral features suggestive of engineering before he pivoted to advocating a natural origin. This suppression delayed scrutiny of Wuhan Institute of Virology funding ties until declassifications in 2021. On the right, post-2020 election claims of systemic fraud, including allegations of 3-5 million illegal votes, persisted despite audits in battleground states finding discrepancies under 0.01% and dismissals in 61 federal and state lawsuits for lack of evidence, fueling prolonged distrust without causal proof of outcome-altering irregularities.[104][99]

Technology platforms have enabled ideological biases in non-state moderation, as exposed by the Twitter Files releases from December 2022 to early 2023, which documented internal deliberations suppressing the New York Post's October 2020 Hunter Biden laptop reporting, deemed potential Russian disinformation without verification, while amplifying narratives aligned with Democratic priorities, including the throttling of COVID policy dissent. These files, comprising over 10,000 internal documents, revealed algorithmic tweaks and executive overrides favoring left-leaning viewpoints, such as shadowbanning critics of lockdowns, amid consultations with government entities, though defenders argued the decisions reflected anti-spam protocols rather than partisan censorship. Empirical analysis of moderation logs showed disproportionate visibility reductions for conservative accounts, correlating with 2020 election-period traffic drops exceeding 50% in some cases.[105][106]

Empirical Impacts and Research Findings

Documented Societal and Psychological Effects

Empirical studies indicate that exposure to disinformation can contribute to increased affective polarization, in which individuals develop more negative views toward out-groups, though causal effects are typically modest and context-dependent. A 2023 analysis of experimental data found that political disinformation and hate speech on social media platforms heightened partisan animus, with effect sizes around Cohen's d = 0.2-0.4 in controlled settings, suggesting limited but measurable shifts in intergroup attitudes rather than wholesale belief change.[107] Similarly, a comprehensive review of psychological drivers highlighted how continued belief in misinformation persists due to factors such as motivated reasoning and source credibility, fostering resistance to corrective information and exacerbating echo chambers, yet lab-based interventions show only small reductions in susceptibility (e.g., inoculation effects of d ≈ 0.36).[108][109]

At the societal level, claims of disinformation decisively swaying elections, such as the 2016 U.S. presidential contest, have been tempered by meta-analyses revealing minimal causal impact. Research estimating fake news exposure during that election found it reached about 8% of voters on average, with persuasive effects translating to at most a 0.8 percentage point shift in vote shares, far below thresholds for outcome determination, and no robust evidence of broader turnout alterations (see the illustrative sketch at the end of this subsection).[110] Links to violence remain rare and indirect; while social media amplification preceded events like the January 6, 2021, Capitol riot, empirical models attribute participation more to network dynamics and pre-existing mobilization than to disinformation alone, with predictive correlations (e.g., r ≈ 0.3 for exposure to inflammatory content) failing causal tests in panel data.[111]

Recent 2025 surveys document rapid dissemination of misinformation on topics such as weather events and geopolitical incidents, often peaking within hours on social platforms before decaying due to algorithmic deprioritization and fact-checks, yet contributing to short-term spikes in public anxiety.[112] Concurrently, the Reuters Institute Digital News Report 2025 reports eroding trust in news media across 47 markets, with overall trust falling to 40% globally, down from 44% in 2024, attributed partly to perceived disinformation overload, though causal attribution is confounded by broader institutional skepticism rather than isolated exposures.[113] These patterns underscore that while disinformation correlates with societal fragmentation, rigorous causal inference, including instrumental variable approaches in field experiments, reveals effects dwarfed by baseline partisan divides and media consumption habits.[114]
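As a rough back-of-envelope sketch of why the 2016 figures above imply a small ceiling on electoral effects, the following example combines the quoted 8% exposure share with a hypothetical persuasion rate; the 10% persuasion figure is an assumption chosen for illustration, not a value from the cited research:

exposed_share = 0.08      # ~8% of voters encountered fake news, per the study cited above
persuasion_rate = 0.10    # hypothetical: fraction of exposed voters whose vote choice could change
max_shift_points = exposed_share * persuasion_rate * 100
print(f"Upper-bound vote-share shift: {max_shift_points:.1f} percentage points")  # prints 0.8

Even under this generous persuasion assumption, the implied shift of about 0.8 percentage points sits below the margins that typically decide statewide contests, consistent with the meta-analytic conclusions cited above.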

Critiques of Overstated Harms and Methodological Flaws

Critics argue that claims of disinformation's societal harms are frequently exaggerated, with media coverage amplifying rare instances into widespread panics, as seen in the 2016 U.S. election "fake news" episode, where initial reports of pervasive influence later gave way to evidence of limited actual consumption and impact among the general population.[115] Empirical studies indicate that exposure to false news often fails to substantially alter beliefs or behaviors beyond reinforcing pre-existing partisan views, with one analysis concluding that effects on political participation are minimal and frequently overstated.[116] Public resilience manifests through widespread skepticism and avoidance; for instance, surveys show that nearly half of individuals exposed to fake news on social media choose to ignore it rather than engage or believe, suggesting innate coping mechanisms mitigate potential harms.[117]

Methodological shortcomings in disinformation research contribute to these inflated perceptions, including reliance on surveys that omit "I don't know" options, which artificially inflate reported belief rates; over 90% of 180 reviewed studies suffered from this flaw, leading to estimates of misinformation endorsement up to 20 times higher than reality (illustrated in the sketch at the end of this subsection).[115] Experimental designs often use contrived, high-exposure scenarios unrepresentative of everyday digital environments, failing to account for real-world factors such as algorithmic filtering or user selectivity, as highlighted in scoping reviews of post-2016 studies that found most do not test ecologically valid conditions.[118] Confirmation bias pervades labeling practices, with analyses revealing disproportionate flagging of conservative-leaning content by fact-checking organizations dominated by left-leaning personnel and funding, undermining claims of neutral threat assessment.[101]

From a causal perspective, disinformation often serves as a symptom of deeper elite distrust rather than its primary driver; eroded faith in institutions prompts individuals to seek alternative narratives, including erroneous ones, as evidenced by patterns in which policy failures, such as inconsistent public health messaging during crises, precede and fuel misinformation uptake more than deliberate campaigns do.[119][120] This dynamic is compounded by institutional biases in academia and media, where left-leaning orientations systematically underemphasize elite-generated errors while hyper-focusing on populist alternatives, distorting research priorities and empirical baselines.[115] Such critiques underscore the need for rigorous, unbiased methodologies to distinguish genuine causal harms from perceptual or secondary effects.
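The survey-design point above can be made concrete with a small simulation; the shares of genuine believers and unsure respondents below are hypothetical values chosen for illustration, not figures drawn from the reviewed studies:

import random

random.seed(1)
n = 100_000
true_believers = 0.02   # hypothetical share who genuinely endorse the false claim
unsure = 0.40           # hypothetical share with no real opinion

forced_yes = 0
for _ in range(n):
    r = random.random()
    if r < true_believers:
        forced_yes += 1                          # genuine endorsement
    elif r < true_believers + unsure:
        forced_yes += random.random() < 0.5      # unsure respondent forced to guess
    # remaining respondents reject the claim

print(f"Genuine endorsement: {true_believers:.0%}; measured without a 'don't know' option: {forced_yes / n:.0%}")
# Under these assumptions the measured rate (~22%) overstates genuine belief roughly tenfold.

Offering a "don't know" option removes the guessing term and, within this toy model, brings the measured rate back toward the genuine 2%.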

Responses and Counterstrategies

Institutional and Technological Measures

Fact-checking organizations such as Snopes and PolitiFact aim to verify claims and correct misinformation through detailed analyses, with empirical studies indicating they can reduce false beliefs about specific claims by an average of 0.59 standard deviations across global contexts, with effects persisting for two weeks or more. However, critiques highlight asymmetrical scrutiny, in which false statements mentioning political elites, often conservative figures, are fact-checked 20% more frequently than true ones, potentially reflecting institutional biases in selection criteria.[121] Independent assessments, including AllSides' bias ratings, classify PolitiFact as left-leaning, raising concerns that selective application may undermine perceived neutrality and efficacy among skeptical audiences.[122][123]

Social media platforms have implemented prebunking and debunking interventions, particularly during the 2024 U.S. election cycle, where prebunking, which provides inoculating information before exposure to falsehoods, proved more effective than post-hoc debunking in boosting trust in elections and reducing belief in voter fraud myths, as shown in experiments across the U.S. and Brazil.[124] The Carnegie Endowment's 2024 evidence-based guide notes mixed results overall, with prebunking showing promise in proactive scenarios but limited scalability due to the difficulty of anticipating narratives, while debunking via credible sources corrects specific misperceptions without broad spillover to unrelated claims.[125] Unintended consequences include rare backfire effects, in which corrections reinforce misinformation among strongly aligned individuals, though replication studies find no consistent evidence for standalone backfire and attribute persistence to familiarity rather than worldview reinforcement.[126][127]

Technological measures, including AI-driven deepfake detectors, have advanced by 2025, with on-device tools achieving up to 98% accuracy in identifying manipulated videos through analysis of facial inconsistencies, lip-sync artifacts, and behavioral anomalies.[128][129] Despite these improvements, limitations persist in open-source verification, as detectors struggle with evolving generation techniques and real-world variability, rendering them unreliable for standalone use without human oversight, per journalistic evaluations (see the illustrative sketch at the end of this subsection).[130] Multi-layered approaches incorporating explainable AI and real-time scanning offer partial mitigation but highlight the arms-race dynamic, in which detection efficacy lags behind the proliferation of synthetic media.[131]

The European Union's Digital Services Act (DSA), adopted in 2022, requires online platforms to conduct risk assessments for systemic threats including disinformation and imposes fines of up to 6% of a company's global annual turnover for noncompliance.[132][133] Singapore's Protection from Online Falsehoods and Manipulation Act (POFMA), enacted in 2019, empowers authorities to issue correction directions requiring publishers to append fact-checks to disputed content without mandating removal, with 114 such directions issued across 66 cases by mid-2024.[134][135] These measures reflect a broader global proliferation of anti-disinformation regulation, with over 100 countries enacting laws targeting misinformation between 2011 and 2022, often prioritizing state-directed corrections or content moderation over outright bans.[136]

In the United States, debates over reforming Section 230 of the Communications Decency Act of 1996 intensified in the 2020s, with proposals seeking to condition platform liability protections on proactive measures against disinformation while preserving incentives for hosting user-generated content.[137][138] Following the 2020 presidential election, legislative efforts included bills such as the Educating Against Misinformation and Disinformation Act, aimed at funding awareness programs, and subsequent proposals such as a 2025 bipartisan measure to counter AI-generated election disinformation through enhanced disclosures and penalties.[139][140] Critics argue these approaches risk overreach, citing the 2020 suppression of the New York Post's reporting on Hunter Biden's laptop by platforms such as Twitter and Facebook, which limited sharing based on FBI warnings of potential Russian involvement, warnings later contradicted by forensic verification of the laptop's authenticity and admissions of error by former executives.[141][142][143]

Empirical assessments indicate limited deterrent effects from such regulations, as state-sponsored disinformation campaigns, including Russia's persistent operations undermining Ukraine since 2014, have adapted and expanded despite sanctions and legal pressures.[144][145] A 2025 Pew Research Center survey found declining public support in the U.S. for government- or tech-led restrictions on false information, with only 48% favoring them amid concerns over freedom of information.[146] Moreover, these frameworks carry risks of governmental abuse, as seen in authoritarian contexts where "fake news" laws have been wielded against opposition voices and independent media, chilling dissent under vague definitions of falsehood.[147][148] Such outcomes underscore tensions between curbing verifiable harms and eroding speech protections, with evidence suggesting that regulatory interventions often amplify state influence without proportionally reducing deceptive narratives.[149]
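To see why the headline detector accuracy discussed above does not remove the need for human oversight, the following sketch applies Bayes' rule with a hypothetical prevalence of manipulated videos; the 98% figures echo the accuracy claim above, while the 1-in-1,000 prevalence and the specificity are assumptions chosen only for illustration:

sensitivity = 0.98   # detector flags 98% of genuine deepfakes (from the accuracy claim above)
specificity = 0.98   # assumed: detector correctly clears 98% of authentic videos
prevalence = 0.001   # assumed: 1 in 1,000 screened videos is actually manipulated

p_flag = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_deepfake_given_flag = sensitivity * prevalence / p_flag
print(f"Share of flagged videos that are real deepfakes: {p_deepfake_given_flag:.1%}")
# Under these assumptions only about 4.7% of flagged videos are true positives,
# so most flags would be false alarms without human review.

The same arithmetic reverses when manipulated content is common, which is why a detector's practical usefulness depends heavily on the screening context rather than on headline accuracy alone.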

Individual and Educational Interventions

Individual interventions against disinformation emphasize cultivating personal skepticism and critical evaluation skills, drawing on psychological research that prioritizes long-term cognitive resilience over reliance on external fact-checking. Experiments demonstrate that techniques fostering active reasoning, such as pausing to assess motives and evidence, enhance discernment without suppressing information flow.[108] This approach aligns with causal analyses indicating that motivated skepticism reduces belief in false claims by addressing underlying psychological drivers such as confirmation bias, rather than attempting to control content exposure.[101]

Inoculation theory, which pre-exposes individuals to weakened forms of misleading arguments to build resistance, has shown empirical efficacy in reducing susceptibility to disinformation. Peer-reviewed trials in the 2020s, including gamified interventions such as the "Bad News" game, conferred psychological resistance against common manipulation tactics, with participants demonstrating improved accuracy in identifying fake news by up to 20-30% in controlled settings.[150] A 2025 cross-cultural study extended these findings, confirming that prebunking videos based on inoculation principles boosted confidence in spotting misinformation and lowered acceptance rates across diverse populations.[151] Such programs operate on first-exposure principles, akin to vaccination, enabling individuals to recognize rhetorical patterns in disinformation without needing real-time corrections.

Educational media literacy initiatives further support individual resilience by teaching systematic source evaluation and bias detection. Digital media literacy training has been found to increase discernment between mainstream and false news headlines, with participants in randomized experiments showing sustained improvements in evaluation accuracy after the intervention (see the scoring sketch at the end of this subsection).[152] Practical tools, such as the SIFT method (Stop, Investigate the source, Find better coverage, Trace claims), provide actionable checklists for rapid assessment, helping users verify origins and cross-reference without overload.[153] Complementing these, strategies leveraging social proof, which highlight that disinformation lacks broad endorsement, counter viral falsehoods by altering perceptions of normative support, as outlined in organizational analyses applicable to personal use.[154]

While effective in experimental contexts, these interventions face scalability challenges due to varying engagement levels and the need for repeated practice to maintain skills. Nonetheless, causal studies underscore their superiority to censorship, as top-down restrictions can erode trust and motivate defensive entrenchment, whereas skill-building fosters intrinsic motivation and adaptability to novel threats.[125] Longitudinal evidence suggests that skepticism-oriented training yields durable effects, prioritizing individual agency in navigating information environments over paternalistic controls.[155]
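As a minimal sketch of one common way such discernment gains can be scored, the example below compares average perceived-accuracy ratings for true versus false headlines before and after training; the ratings and the scoring convention are illustrative assumptions, not data or methods from the cited trials:

# Hypothetical 1-7 perceived-accuracy ratings for true and false headlines, before and after training.
pre_true,  pre_false  = [5.0, 4.5, 5.5, 4.8], [3.9, 4.1, 3.6, 4.0]
post_true, post_false = [5.2, 4.9, 5.6, 5.1], [2.8, 3.0, 2.5, 2.9]

def discernment(true_ratings, false_ratings):
    # Truth discernment: mean rating of true items minus mean rating of false items.
    return sum(true_ratings) / len(true_ratings) - sum(false_ratings) / len(false_ratings)

print(f"Pre-training discernment:  {discernment(pre_true, pre_false):.2f}")    # 1.05
print(f"Post-training discernment: {discernment(post_true, post_false):.2f}")  # 2.40

A wider gap after training indicates that participants became better at separating true from false content, rather than simply rating everything as less accurate.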

Controversies and Debates

Weaponization of the Term for Censorship

In early 2020, social media platforms such as Facebook and Twitter suppressed discussion of the COVID-19 lab-leak hypothesis, labeling it false information or disinformation under policies influenced by government communications.[156][157] Internal documents revealed that the Biden White House pressured Meta to censor lab-leak posts despite the platform's regrets over its prior handling, as detailed in a 2024 House Judiciary Committee report on the "censorship-industrial complex."[158] By February 2023, FBI Director Christopher Wray stated that the bureau assessed with moderate confidence that the pandemic likely resulted from a lab incident in Wuhan, validating elements of the previously suppressed theory.[159] This timeline illustrates how initial labeling facilitated content removal, with outcomes shifting as evidence emerged and without corresponding platform accountability.

Following the 2020 U.S. presidential election, allegations of voter fraud and irregularities raised by then-President Trump and allies were designated disinformation by platforms, prompting widespread deplatforming. Twitter suspended Trump's account on January 8, 2021, after the Capitol riot, a move that correlated with a 73% drop in the volume of online election fraud misinformation.[160] Similar actions across platforms reduced the reach of such claims, as analyzed in studies of post-January 6 deplatforming effects.[161] By 2024, accusations of disinformation had become reciprocal, with both major parties leveling claims against opponents' election narratives, Democrats highlighting foreign interference and Republicans questioning mail-in processes, yet without equivalent preemptive deplatforming, reflecting a shift in enforcement patterns.[56]

Empirical tracking shows that the term "disinformation" surged in U.S. newspaper usage after 2016, paralleling declines in public trust in media and institutions rather than proportional rises in verified false-information threats.[162] A Nature analysis of media archives documented spikes in "disinformation" alongside the terms "fake news" and "misinformation," coinciding with events such as the 2016 election, while trust metrics such as Gallup polls showed media trust falling from 72% in 1976 to 32% by 2023. This pattern suggests the label's expanded application served narrative control amid eroding credibility, as exposure to contested claims has been linked to further trust erosion without evidence that the scale of disinformation escalated enough to justify the intensity of the response.[163]

Balancing Free Speech with Harm Prevention

The tension between free speech protections and preventing harms from disinformation arises from philosophical frameworks that limit restrictions to speech directly endangering others. John Stuart Mill's harm principle, articulated in his 1859 work On Liberty, holds that the sole justification for restricting individual liberty, including expression, is to avert harm to non-consenting parties, excluding paternalistic or moralistic interventions.[164] This contrasts with free speech absolutism, which contends that any governmental restriction risks broader erosion of discourse, as even potentially harmful ideas contribute to truth discovery through open debate.[165]

In legal precedent, the United States Supreme Court has applied variants of the harm principle to delineate speech boundaries under the First Amendment. In Schenck v. United States (1919), the Court established the "clear and present danger" test, upholding restrictions on speech posing an immediate threat, akin to falsely shouting "fire" in a crowded theater, such as anti-draft advocacy during wartime deemed likely to obstruct military recruitment.[166] This was refined in Brandenburg v. Ohio (1969), which protects advocacy of illegal action unless it is directed at inciting imminent lawless behavior and likely to produce it, thereby narrowing interventions to verifiable, proximate causation rather than speculative risks.[167]

Contemporary regulatory efforts reflect these trade-offs, with bodies such as the World Economic Forum's Global Risks Report 2025 identifying misinformation and disinformation as the top short-term global risk for the second consecutive year, prompting calls for enhanced oversight amid events such as election interference and public health disruptions.[168] However, empirical analyses reveal chilling effects from such measures, where anticipated penalties lead to widespread self-censorship and reduced expression of dissenting views even absent direct enforcement, as documented in studies of online platforms and surveillance regimes.[169]

Prioritizing verifiable harms underscores the empirical rarity of disinformation directly causing mass-scale societal damage without intervening factors such as pre-existing vulnerabilities or amplification by state actors. Behavioral science reviews indicate low population-level exposure to false content and weak causal links to outcomes such as violence or policy failures, challenging precautionary restrictions that may suppress legitimate inquiry.[101] This supports a high evidentiary threshold for interventions, ensuring protections against overreach while addressing only demonstrated imminent threats.

Ideological and Institutional Biases in Labeling

Studies indicate that ideological biases influence the designation of information as disinformation, with institutions often applying the label more readily to conservative-leaning claims while exhibiting leniency toward aligned narratives. For example, concerns raised about election procedures in the 2020 U.S. presidential election, such as ballot-handling irregularities, were frequently categorized as disinformation by fact-checking organizations and media outlets, despite subsequent admissions of administrative errors in states such as Georgia and Pennsylvania. In contrast, erroneous official assertions, such as the 2003 U.S. government claims of Iraqi weapons of mass destruction justifying invasion, received minimal retrospective labeling as disinformation, even after intelligence assessments confirmed their inaccuracy and their role in misleading public opinion toward war.[56][170]

Institutional alignments exacerbate these asymmetries, as seen in the October 19, 2020, public letter signed by 51 former intelligence officials stating that reporting on Hunter Biden's laptop "has all the classic earmarks of a Russian information operation," which suppressed discussion despite the laptop's contents later being authenticated by federal investigations. This episode, involving coordination with political campaigns and the inclusion of active CIA contractors among the signatories, illustrates how government-adjacent entities can leverage authority to preemptively label inconvenient information as disinformation, often in alignment with prevailing institutional narratives. Critiques hold that such actions stem from systemic biases in media and intelligence communities, where left-leaning ideological capture leads to selective scrutiny and the underlabeling of errors in endorsed positions, such as certain public health mandates or foreign policy rationales.[171][172]

Empirical analyses reveal that partisan perceptions of source credibility mediate these labeling decisions, with individuals and institutions judging ideologically congruent information as more veridical, fostering inconsistent application across the spectrum. While research documents higher vulnerability to certain misinformation types among conservatives, the institutional machinery, oriented predominantly toward progressive priors, amplifies the mislabeling of dissenting views, such as climate model critiques or integrity questions, while downplaying comparable inaccuracies in official outputs. This selective framing erodes public trust, as configurational factors such as prior beliefs account for substantial variance in what qualifies as disinformation, perpetuating cycles of perceived hypocrisy.[173]

References
