History of pseudoscience
from Wikipedia

The Alchemist in Search of the Philosopher's Stone, by Joseph Wright, 1771

The history of pseudoscience is the study of pseudoscientific theories over time. A pseudoscience is a set of ideas that presents itself as science, while it does not meet the criteria to properly be called such.[1][2]

Distinguishing between proper science and pseudoscience is sometimes difficult. One popular proposal for demarcation between the two is the falsification criterion, most notably associated with the philosopher Karl Popper. In the history of pseudoscience it can be especially hard to separate the two, because some sciences developed from pseudosciences. An example is the science of chemistry, which traces its origins to the protoscience of alchemy.

The vast diversity in pseudosciences further complicates the history of pseudoscience. Some pseudosciences originated in the pre-scientific era, such as astrology and acupuncture. Others developed as part of an ideology, such as Lysenkoism, or as a response to perceived threats to an ideology. An example of this is creationism, which was developed as a response to the scientific theory of evolution.

Despite failing to meet proper scientific standards, many pseudosciences survive. This is usually due to a persistent core of devotees who refuse to accept scientific criticism of their beliefs, or due to popular misconceptions. Sheer popularity is also a factor, as attested by astrology, which remains popular despite being rejected by a large majority of scientists.[3][4][5][6]

19th century

A phrenology chart from 1883. During the first half of the 19th century, phrenology was a popular study and considered scientific. By the second half of the century, the theory was largely abandoned.

Among the most notable developments in the history of pseudoscience in the 19th century are the rise of Spiritualism (traced in America to 1848), homeopathy (first formulated in 1796), and phrenology (developed around 1800). Another popular pseudoscientific belief that arose during the 19th century was the idea that there were canals visible on Mars. A relatively mild Christian fundamentalist backlash against the scientific theory of evolution foreshadowed subsequent events in the 20th century.

The study of bumps and fissures in people's skulls to determine their character, phrenology, was originally considered a science. It influenced psychiatry and early studies into neuroscience.[7] As science advanced, phrenology was increasingly viewed as a pseudoscience. Halfway through the 19th century, the scientific community had largely abandoned it,[8] although it was not comprehensively tested until much later.[9]

Halfway through the century, iridology was invented by the Hungarian physician Ignaz von Peczely.[10] The theory would remain popular throughout the 20th century as well.[11]

The astrological signs of the zodiac.

Spiritualism (sometimes referred to as "Modern Spiritualism" or "Spiritism")[12] or "Modern American Spiritualism"[13] grew phenomenally during the period. The American version of this movement has been traced to the Fox sisters, who in 1848 began claiming the ability to communicate with the dead.[14] The religious movement would remain popular until the 1920s, when the renowned magician Harry Houdini began exposing famous mediums and other performers as frauds. While the religious beliefs of Spiritualism are not presented as science, and thus are not properly considered pseudoscientific, the movement did spawn numerous pseudoscientific phenomena such as ectoplasm and spirit photography.

The principles of homeopathy were first formulated in 1796 by the German physician Samuel Hahnemann. At the time, mainstream medicine was a primitive affair and still made use of techniques such as bloodletting. Homeopathic medicine by contrast consisted of extremely diluted substances, which meant that patients effectively received only water. Compared to the damage often caused by conventional medicine, this was an improvement.[15] During the 1830s homeopathic institutions and schools spread across the US and Europe.[16] Despite these early successes, homeopathy was not without its critics.[17] Its popularity was on the decline before the end of the 19th century, though it was revived in the 20th century.

The supposed Martian canals were first reported in 1877, by the Italian astronomer Giovanni Schiaparelli. The belief in them peaked in the late 19th century, but was widely discredited in the beginning of the 20th century.

The publication of Atlantis: The Antediluvian World by politician and author Ignatius L. Donnelly in 1882 renewed interest in the ancient idea of Atlantis. This highly advanced society supposedly existed several millennia before the rise of civilizations like Ancient Egypt. It was first mentioned by Plato, as a literary device in two of his dialogues. Other stories of lost continents, such as Mu and Lemuria, also arose during the late 19th century.

In 1881 the Dutch Vereniging tegen de Kwakzalverij (English: Society against Quackery) was formed to oppose pseudoscientific trends in medicine. It is still active.

20th century


Among the most notable developments in pseudoscience in the 20th century are the rise of Creationism, the demise of Spiritualism, and the first formulation of ancient astronaut theories.

Reflexology, the idea that an undetectable life force connects various parts of the body to the feet and sometimes the hands and ears, was introduced in the US in 1913 as 'zone therapy'.[18][19]

Creationism arose during the 20th century as a result of various other historical developments. When the modern evolutionary synthesis overcame the eclipse of Darwinism in the first half of the 20th century, American fundamentalist Christians began opposing the teaching of the theory of evolution in public schools. They introduced numerous laws to this effect, one of which was notoriously upheld in the Scopes Trial. In the second half of the century the Space Race caused a renewed interest in science and worry that the USA was falling behind the Soviet Union. Stricter science standards were adopted and led to the re-introduction of the theory of evolution into the curriculum. The laws against teaching evolution were now ruled unconstitutional, because they violated the separation of church and state. Attempting to evade this ruling, the Christian fundamentalists produced a supposedly secular alternative to evolution, Creationism. Perhaps the most influential publication of this new pseudoscience was The Genesis Flood by young Earth creationists John C. Whitcomb and Henry M. Morris.

The dawn of the space age also inspired various versions of ancient astronaut theories. While differences between the specific theories exist, they share the idea that intelligent extraterrestrials visited Earth in the distant past and made contact with then-living humans. Popular authors, such as Erich von Däniken and Zecharia Sitchin, began publishing in the 1960s. Among the most notable publications in the genre is Chariots of the Gods?, which appeared in 1968.

Late in the 20th century several prominent skeptical foundations were formed to counter the growth of pseudosciences. In the US, the most notable of these are, in chronological order, the Center for Inquiry (1991), The Skeptics Society (1992), the James Randi Educational Foundation (1996), and the New England Skeptical Society (1996). The Committee for Skeptical Inquiry, which has similar goals, had already been founded in 1976; it became part of the Center for Inquiry upon the latter's founding in 1991. In the Netherlands, Stichting Skepsis was founded in 1987.

21st century


At the beginning of the 21st century, a variety of pseudoscientific theories remain popular and new ones continue to crop up.

Flat Earth belief, the idea that the Earth is flat rather than spherical, is often assumed to have persisted for thousands of years, but studies show that the modern movement is relatively new, beginning in the 1990s, when the rise of the internet allowed such ideas to spread much more quickly.

Creationism, in the form of Intelligent Design, suffered a major legal defeat in the 2005 Kitzmiller v. Dover Area School District trial. Judge John E. Jones III ruled that Intelligent Design is inseparable from Creationism, and that its teaching in public schools violates the Establishment Clause of the First Amendment. The trial sparked much interest, and was the subject of several documentaries including the award-winning NOVA production Judgment Day: Intelligent Design on Trial (2007).

The pseudoscientific idea that vaccines cause autism originated in the 1990s, but became prominent in the media during the first decade of the 21st century. Despite a broad scientific consensus against the idea that there is a link between vaccination and autism,[20][21][22][23][24] several celebrities have joined the debate. Most notable of these is Jenny McCarthy, whose son has autism. In February 2009, the surgeon Andrew Wakefield, who published the original research supposedly indicating a link between vaccines and autism, was reported by The Sunday Times to have "fixed" the data.[25] A hearing by the General Medical Council, examining charges of professional misconduct, began in 2007. On 24 May 2010, he was struck off the United Kingdom medical register, effectively banning him from practicing medicine in Britain.

The most notable development in the ancient astronauts genre was the opening of Erich von Däniken's Mystery Park in 2003. While the park had a good first year, the number of visitors was much lower than the expected 500,000 a year. This caused financial difficulties, which led to the closure of the park in 2006.[26]

from Grokipedia
Pseudoscience encompasses doctrines, methodologies, and claims presented as scientific inquiry but deficient in empirical validation, falsifiability, and adherence to reproducible testing standards. The term "pseudoscience" first appeared in English in 1796, applied by historian James Pettit Andrews to alchemy, marking an early demarcation from emerging professional chemistry. Its historical trajectory reveals persistent patterns where such ideas, from ancient astrology and alchemy to 19th-century phrenology and mesmerism, gained adherents through appeals to authority, tradition, or ideological utility rather than rigorous experimentation. Notable controversies include the demarcation problem—distinguishing genuine science from pseudoscience—as articulated by philosophers like Karl Popper, who emphasized falsifiability as a hallmark of scientific legitimacy, a criterion pseudosciences systematically evade. In the 20th century, state-endorsed pseudosciences like Lysenkoism in the Soviet Union suppressed valid research for political conformity, illustrating how pseudoscience can wield causal influence on societal outcomes, often at the expense of empirical truth. Despite repeated refutations, pseudoscientific practices endure due to cognitive biases, social reinforcement, and resistance to evidential disconfirmation, underscoring ongoing challenges in scientific literacy and institutional vigilance against non-empirical claims.

Ancient and Pre-Scientific Origins

Astrology and Early Divinatory Practices

Astrology originated in ancient Mesopotamia, particularly among the Babylonians around the second millennium BCE, where priests conducted systematic observations of celestial bodies such as the Moon and Venus, correlating their positions and movements with earthly events like agricultural cycles, political upheavals, and the fortunes of rulers. These practices relied on omen texts, such as compilations linking lunar eclipses to royal fates, but lacked controlled comparisons or falsification mechanisms to distinguish causal influences from coincidental patterns. In parallel, ancient Egyptian astronomers tracked heliacal risings of Sirius and the decans to predict Nile inundations and ritual timings, attributing divine significance to these observations without empirical validation of predictive accuracy beyond seasonal regularities. By the Hellenistic era, Greek scholars incorporated Babylonian techniques, refining them into a more structured system that emphasized natal horoscopes and zodiacal divisions. Claudius Ptolemy's Tetrabiblos, composed in the second century CE, systematized these ideas by arguing that celestial configurations impart qualitative influences—such as heat from Mars or moisture from the Moon—on human temperaments and events, drawing on earlier Mesopotamian and Egyptian data but without rigorous testing against null outcomes or alternative causal factors like genetics or environment. Ptolemy's framework mimicked philosophical reasoning by invoking sympathy between macrocosm and microcosm, yet it permitted vague, retrofittable interpretations that evaded disproof, establishing a template for non-empirical prediction. Early critiques of these divinatory methods surfaced among Hellenistic skeptics, who highlighted logical inconsistencies undermining astrological causality.
Marcus Tullius Cicero, in his De Divinatione of 44 BCE, rejected Chaldean astrology by citing examples like twins born under identical stars yet leading dissimilar lives, or populations under the same celestial signs experiencing divergent fortunes, arguments that exposed the unfalsifiable nature of claims positing stellar determinism over observable variability in human affairs. Such objections, rooted in probabilistic reasoning rather than systematic experimentation, represented nascent recognitions of pseudoscientific flaws—prioritizing unverified correlations over evidence-based causation—but gained limited traction amid widespread cultural acceptance of celestial divination.

Alchemy and Proto-Chemical Pursuits

Alchemy emerged in ancient China during the third century BCE, where practitioners sought elixirs of immortality through processes involving the refinement of minerals and metals, guided by Taoist concepts of harmonizing cosmic forces rather than isolated mechanistic reactions. These efforts combined rudimentary experimental techniques, such as heating cinnabar to produce mercury, with symbolic interpretations linking material transformation to spiritual longevity, evading empirical disconfirmation by attributing failures to ritual impurities or unseen energies. Independently, Greco-Egyptian alchemy developed by the third century CE, exemplified by Zosimos of Panopolis, who integrated metallurgical practices with Gnostic mysticism, viewing transmutation as a divine revelation accessible through visionary dreams and allegorical texts rather than reproducible causal sequences. During the Islamic Golden Age, figures like Jabir ibn Hayyan (c. 721–815 CE) advanced proto-chemical methods, including distillation and sublimation for purifying substances, yet framed these within a spiritual paradigm where material operations paralleled the soul's purification, rendering alchemical claims resilient to falsification through layered esoteric meanings. The Jabirian corpus emphasized systematic experimentation alongside numerological and theosophical principles, such as balancing the four elements with astrological influences, which prioritized holistic correspondences over testable predictions. This synthesis yielded practical innovations like improved acids but subordinated them to unfalsifiable goals of universal transmutation, distinguishing alchemy from emerging empirical sciences. In Europe, alchemical pursuits persisted into the seventeenth century, with Paracelsus (1493–1541) advocating medical applications through his tria prima—salt, sulfur, and mercury—as fundamental principles replacing the classical four humors, claiming therapeutic efficacy from alchemical preparations without verifiable causal mechanisms.
Paracelsus promoted empirical observation in dosing minerals like mercury for treatment, yet relied on untested assumptions of archetypal signatures linking substances to diseases via the doctrine of signatures, blending trial-and-error with occult correspondences that resisted systematic refutation. Such traditions exemplified proto-pseudoscience by achieving incidental chemical knowledge through persistent experimentation while anchoring interpretations in non-empirical metaphysics, paving the way for chemistry's later demarcation via falsifiable hypotheses and mechanistic explanations.

Medieval and Early Modern Foundations

Integration with Scholasticism and Theology

In the 12th century, the translation of Arabic texts into Latin, particularly through centers like Toledo, introduced astrological and alchemical knowledge into European universities, blending it with emerging scholastic curricula. These works, including treatises on celestial influences and proto-chemical processes derived from Hellenistic and Islamic sources, were incorporated as extensions of natural philosophy, often justified as revealing God's ordered creation rather than empirical mechanisms. Scholastics like Albertus Magnus (c. 1193–1280) exemplified this fusion in texts such as De mineralibus and Speculum astronomiae, where he defended judicial astrology as a legitimate tool for understanding divine intentions in natural events, while cautioning against demonic superstitions. This integration framed natural phenomena teleologically, positing that celestial bodies and material transmutations served purposeful ends ordained by God, sidelining verifiable causation in favor of authoritative interpretation aligned with Christian theology. During crises like the Black Death of 1347–1351, which killed an estimated 30–60% of Europe's population, physicians prescribed remedies based on zodiacal positions, such as bloodletting under favorable lunar aspects or herbal concoctions timed to planetary conjunctions, without testing efficacy against observed outcomes. The University of Paris medical faculty's 1348 consultation, for instance, attributed the plague to a Saturn-Jupiter-Mars conjunction in Aquarius, recommending astrological timing for evacuations and diets over isolating bacterial vectors. Nominalist critiques, notably from William of Ockham (c. 1287–1347), exposed limitations in this approach by rejecting inherent universals and teleological necessities in nature, arguing that God's absolute power rendered causal chains contingent rather than divinely mandated essences.
Ockham's razor—preferring simpler explanations without multiplied entities—implicitly undermined elaborate astrological-alchemical hierarchies lacking direct evidence, though it did not fully displace scholastic reliance on non-empirical divine purpose until later critiques of Aristotelianism gained traction. This period thus perpetuated pseudoscientific elements by subordinating observation to theological harmony, delaying mechanistic scrutiny.

Renaissance Natural Magic and Hermetic Traditions

The Renaissance witnessed a revival of ancient esoteric traditions under humanist scholarship, which reinterpreted classical and purportedly primordial texts as sources of profound natural knowledge. Central to this was the rediscovery of the Corpus Hermeticum, a collection of Greek treatises attributed to Hermes Trismegistus, portraying the cosmos as a living, interconnected entity animated by divine sympathies and occult influences rather than purely mechanical causes. Marsilio Ficino, at the behest of Cosimo de' Medici, completed a Latin translation of the Corpus Hermeticum in 1463, with the first printed edition appearing in 1471; this work elevated Hermeticism as prisca theologia—an ancient theology predating Plato and Moses—emphasizing a hierarchical universe where celestial bodies exerted harmonious influences on terrestrial matter through analogical correspondences, enabling "natural magic" to manipulate these sympathies for practical ends like healing or divination. Such views bridged medieval alchemy and astrology with emerging empirical inquiries, yet prioritized unfalsifiable claims of invisible forces over verifiable causal mechanisms. Heinrich Cornelius Agrippa von Nettesheim advanced this framework in his De occulta philosophia libri tres, first circulated in manuscript around 1510 and published in Cologne in 1533, which systematically classified magic into natural (elemental virtues), celestial (astrological influences), and divine (kabbalistic invocations) categories. Agrippa's treatise integrated Neoplatonic emanations, cabala, and Hermeticism to argue that magicians could harness cosmic sympathies—such as planetary rays imprinting sigils on talismans—to produce effects like protection or amplification of virtues in herbs and stones, presented as a rational extension of natural philosophy rather than demonic sorcery.
However, these operations eschewed controlled replication or disconfirmation, relying instead on qualitative observations and traditional authorities, thus embodying a proto-pseudoscientific approach where empirical observation served preconceived correspondences without rigorous testing against alternatives. While natural magic yielded incidental empirical gains, such as refined pharmacopeias derived from sympathetic associations (e.g., planetary rulerships guiding plant collection timings), these were eclipsed by overarching commitments to animistic and hierarchical influences unverifiable by repeatable means. Proponents like Ficino and Agrippa framed their pursuits as superior to vulgar sorcery, yet the absence of falsifiability—evident in talismanic rituals or astrological elections presumed effective via hidden virtues—distinguished them from emerging mechanistic paradigms, prefiguring later demarcations between testable science and speculation. This Hermetic synthesis influenced subsequent fringe traditions, perpetuating a worldview of animated nature amenable to manipulative arts over strictly observational inquiry.

Enlightenment Critiques and Initial Demarcation

18th-Century Skeptical Inquiries

David Hume's essay "Of Miracles," published in 1748 as part of An Enquiry Concerning Human Understanding, advanced a probabilistic critique of miracle claims, asserting that beliefs must be proportioned to the evidence and that violations of natural law—such as miracles—demand testimony stronger than the collective human record against them. This framework implicitly targeted divinatory practices like astrology, which posited causal influences from celestial bodies defying observable regularities, by elevating empirical uniformity over anecdotal or testimonial support for extraordinary causal assertions. Hume's emphasis on inductive inference from repeated observations thus eroded confidence in non-progressive traditions reliant on unverifiable mechanisms, fostering a demarcation favoring testable natural explanations. Antoine Lavoisier's quantitative experiments from the early 1770s onward dismantled the phlogiston theory of combustion, which held that a weightless substance escaped during burning; instead, Lavoisier demonstrated mass conservation and the role of oxygen (termed "dephlogisticated air") through sealed-vessel trials showing weight gains in combustion and respiration. By exposing phlogiston's inconsistencies—such as negative-weight implications resolved only via ad hoc hypotheses—Lavoisier indirectly critiqued alchemical legacies, where transmutative claims evaded falsification through qualitative rather than replicable metrics. This chemical revolution prioritized precise measurement and hypothesis-testing, rendering residual alchemical pursuits non-empirical and stagnant. The Royal Society of London, operational since its 1660 charter and influential in the Enlightenment, institutionalized empiricism by prioritizing peer-reviewed, reproducible demonstrations over speculative pursuits, effectively sidelining astrology and alchemy through exclusion of unsubstantiated claims from its Philosophical Transactions. By the early 1700s, elite consensus viewed these fields as disreputable due to their resistance to verification, as the Society's evidentiary standards promoted mechanistic natural philosophy aligned with Newtonian principles.
This corporate policy accelerated the shift toward empirical demarcation, diminishing the cultural authority of non-falsifiable doctrines amid broader Enlightenment rationalism.
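Hume's rule that belief be proportioned to the evidence can be glossed, anachronistically, in Bayesian terms: testimony should convince us of a miracle only if the falsehood of the testimony would be more improbable than the miracle itself. The sketch below illustrates this with odds arithmetic; all numbers are purely illustrative and not drawn from Hume.

```python
from fractions import Fraction

def posterior_odds(prior_odds, reliability):
    """Posterior odds of an event after one witness's testimony.

    prior_odds: P(event) / P(no event), before hearing the testimony
    reliability: likelihood ratio of the testimony,
                 P(testify | event) / P(testify | no event)
    """
    return prior_odds * reliability

# Illustrative numbers: a "miracle" with prior odds of 1 in 10 million,
# reported by a witness whose testimony is 1000x more likely if the
# event really happened than if it did not.
odds = posterior_odds(Fraction(1, 10_000_000), Fraction(1000, 1))
print(odds)       # 1/10000 -> odds remain overwhelmingly against
print(odds < 1)   # True: the testimony falls short of the prior improbability
```

On these numbers the testimony, though strong, cannot overcome the prior improbability of the event, which is the quantitative core of Hume's argument that extraordinary claims require testimony whose falsehood would be even more extraordinary.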

Emergence of the Term "Pseudoscience"

The term "pseudoscience" first appeared in print in 1796, coined by the English historian James Pettit Andrews (1737–1797) in the first volume of his history of Great Britain. Andrews applied it to alchemy, describing it as a "pretended" or false science that mimicked empirical methods but lacked genuine rigor and verifiable outcomes, contrasting it with the emerging standards of experimental validation in chemistry. This usage reflected broader late-Enlightenment efforts to demarcate legitimate scientific inquiry from speculative pursuits, emphasizing observation, repeatability, and testable prediction over tradition or authority. Preceding Andrews by over a decade, Immanuel Kant's philosophical critiques in works like the Critique of Pure Reason (1781) laid conceptual groundwork by cautioning against "dogmatic metaphysics"—systems that presumed scientific certainty for unprovable claims about supersensible realities, such as the soul or God, without grounding in experience. Kant argued that such approaches illicitly extended pure reason beyond its limits, producing illusory knowledge akin to pseudoscience, and advocated privileging Newtonian mechanics as the model for true science through critical self-examination of cognitive faculties. He distinguished this from empirical sciences, warning that uncritical rationalism masquerading as metaphysics undermined genuine knowledge by evading empirical testing. In the volatile context of the French Revolution during the 1790s, the term's emergence aligned with skepticism toward ideologically motivated "sciences" that served political agendas over evidence, such as early craniological speculations, precursors to phrenology, which were derided for blending anatomy with unsubstantiated claims of innate character tied to revolutionary egalitarianism.
Revolutionaries' suppression of the Académie des Sciences in 1793, ostensibly to democratize knowledge, inadvertently highlighted tensions between politicized speculation and rigorous method, fostering critiques of fields that prioritized ideological utility—like mesmerism's vital forces—over controlled experimentation, as exemplified by the 1784 royal commission's debunking of animal magnetism led by Lavoisier and Franklin. This period intensified calls for demarcation, positioning "pseudoscience" as a label for pursuits that appropriated scientific authority without adhering to evidentiary standards amid the era's fervor for rational reform.

19th-Century Proliferation Amid Industrialization

Mesmerism, Spiritualism, and Vitalism

In the 19th century, mesmerism, spiritualism, and vitalism emerged as pseudoscientific doctrines positing undetectable forces or essences to account for healing, communication with the dead, and biological organization, often as a Romantic counter to the era's ascendant mechanistic and materialist paradigms in physics and chemistry. These movements gained traction amid unease toward reductionist materialism, appealing to those seeking holistic explanations for subjective experiences like trance states or organismal development that eluded empirical quantification. Despite lacking falsifiable predictions or replicable protocols, they influenced popular culture and fringe therapeutics, persisting through anecdotal endorsements even after controlled inquiries revealed reliance on suggestion, deception, or interpretive overreach. Franz Anton Mesmer, a German physician, developed the theory of "animal magnetism" in the 1770s, claiming an invisible universal fluid permeated bodies and could be manipulated via passes of the hands or magnetized objects to cure ailments by restoring fluid balance. By 1778, after relocating to Paris, Mesmer's salons induced convulsive "crises" in patients—interpreted as fluid releases—drawing crowds and royal patronage, with reported successes attributable to the dramatic ritual rather than any fluid. A 1784 French Royal Commission, including Benjamin Franklin and Antoine Lavoisier, conducted blinded trials where subjects experienced effects from inert "magnetized" objects indistinguishable from active ones, concluding no magnetic influence existed and effects stemmed from imagination or expectation—early precursors to placebo-controlled studies. Mesmerism's core claims faltered under scrutiny, yet its techniques evolved into hypnotism and informed vitalist notions of innate healing forces, evading full discrediting through rebranding as psychological phenomena.
Spiritualism crystallized in 1848 with the Fox sisters, Margaret and Kate, in Hydesville, New York, who claimed poltergeist "rappings" communicated messages from spirits via codified knocks, sparking a transatlantic movement of mediums, séances, and purported ectoplasmic manifestations. Adherents, numbering in the millions by mid-century, invoked spirit agencies to explain the sounds and table-tippings, often without isolating variables like participant expectation or mechanical aids, leading to widespread but uncontrolled testimonies of contact. Skeptical probes, including Margaret Fox's 1888 public confession and demonstration of joint-cracking to mimic raps, exposed fraud in key cases, with ectoplasm later shown to be cheesecloth or regurgitated material in the case of mediums such as Mina "Margery" Crandon. Absent standardized controls or independent replication, spiritualism's evidential base rested on subjective validations prone to confirmation bias, sustaining belief amid grief-driven demand despite systemic fraud revelations. Vitalism posited a non-physical vis vitalis or directive principle animating life, contrasting 19th-century mechanistic biology's chemical reductions, with renewed vigor in Hans Driesch's 1891–1892 sea urchin embryo experiments where separated blastomeres regulatively formed complete larvae, defying mosaic development models and prompting his inference of "entelechy"—a holistic, non-spatial agency guiding morphogenesis beyond physicochemical laws. Driesch argued this force resolved regulative phenomena unaccounted for by Darwinian selection or material causation alone, influencing neo-vitalists against emergentist alternatives. However, advances like Friedrich Wöhler's 1828 urea synthesis from inorganic precursors eroded barriers between organic and inorganic chemistry, while subsequent enzymatic isolations and genetic mechanisms provided mechanistic accounts of development, rendering vitalism's immaterial postulates superfluous and unfalsifiable.
By century's end, empirical biology's predictive successes marginalized vitalism, though its emphasis on organismal wholeness echoed in later holistic critiques without restoring non-material causal claims.

Phrenology, Physiognomy, and Early Eugenics

Phrenology, developed by Franz Joseph Gall in the late 18th century, posited that the brain consists of discrete organs corresponding to specific mental faculties, with their development manifesting as bumps on the skull measurable for character assessment. Gall began formulating these ideas around 1796, drawing from observations of schoolchildren and criminals, and collaborated with Johann Gaspar Spurzheim to popularize the system across Europe after 1800. Proponents claimed empirical support from correlations between skull shapes and behaviors, but lacked evidence for causal mechanisms linking external cranial features directly to internal functions. Modern studies, such as a 2018 analysis using MRI scans on over 6,000 participants, found no significant associations between scalp morphology and personality traits, confirming phrenology's invalidation due to methodological flaws and absence of localization specificity. Physiognomy, the practice of judging character from facial features, gained renewed prominence through Johann Kaspar Lavater's Physiognomische Fragmente published between 1775 and 1778, which argued that external appearance reflects inner moral qualities through divine design. Lavater's work, illustrated with engravings and profiles, influenced European intellectuals by blending theological and observational claims, leading to widespread application in social judgments and portraiture. However, critiques highlighted its reliance on subjective interpretations prone to confirmation bias, where preconceived notions of character shaped feature assessments rather than objective traits predicting disposition. In the 19th century, physiognomy revived alongside phrenology for practical uses like employment screening, yet empirical validation failed as studies showed judgments were culturally conditioned and lacked predictive reliability beyond stereotypes.
Early eugenics emerged from Francis Galton's 1883 coinage of the term in Inquiries into Human Faculty and Its Development, advocating selective breeding to enhance desirable hereditary traits using statistical methods like regression and correlation applied to human populations. Galton, influenced by his cousin Charles Darwin's theory of evolution by natural selection, proposed "positive" eugenics to encourage reproduction among the "fit" and "negative" measures to restrict the "unfit," based on assumptions of high heritability for traits such as intelligence without accounting for environmental interactions. These ideas extrapolated bivariate correlations to policy recommendations, such as marriage counseling for genetic quality, but overlooked longitudinal data needs and genetic complexities later revealed by 20th-century genetics. Phrenology and physiognomy fed into eugenics by providing anthropometric tools for classifying innate worth, influencing early 20th-century social reforms like immigration quotas in Britain and the U.S., though foundational claims rested on unverified causal determinism rather than controlled heritability estimates.

20th-Century Ideological and State-Sponsored Pseudosciences

Lysenkoism and Soviet Agricultural Policies

Trofim Lysenko, a Soviet agronomist, gained influence in the late 1920s and early 1930s through claims of transformative agricultural techniques, such as vernalization—exposing seeds to cold to supposedly accelerate maturation and boost yields—rooted in a rejection of Mendelian genetics. Lysenko denied the existence of genes and hereditary particles, instead advocating a neo-Lamarckian framework where environmental modifications to plants could be directly inherited by offspring, aligning with dialectical materialism's emphasis on rapid, environmentally driven change over gradual genetic selection. This ideology appealed to Soviet leaders seeking quick fixes for collectivization's disruptions, but experiments failed to replicate promised results under controlled conditions, as verified by independent geneticists like Nikolai Vavilov, whose arrest and death in 1943 exemplified the purge of dissenters. Stalin personally endorsed Lysenko, appointing him head of the Lenin All-Union Academy of Agricultural Sciences by the mid-1930s and culminating in the 1948 VASKhNIL decree that banned genetics as "bourgeois pseudoscience," enforcing Lysenko's methods nationwide. Policies prioritized short-term yield hacks, like dense cluster planting without hybrid breeding or regard for regional specificity, over evidence-based practices, leading to widespread failures such as crops sown in summer fields that germinated but did not mature, depleting soils via unbalanced rotations. These approaches contributed to agricultural collapses, including exacerbations of the 1932–1933 famine, in which failed vernalization and improper seeding reduced outputs amid grain requisitions, with estimates of 5–7 million deaths partly attributable to such pseudoscientific mismanagement atop collectivization errors. Later implementations under Khrushchev, like "storming" harvests through overplanting, yielded illusory gains followed by multi-year declines, as unproven claims prevented sustainable breeding programs.
The persistence of Lysenkoism stemmed from state coercion—scientists faced imprisonment or execution for opposing it—overriding empirical disproofs, such as field trials showing no transgenerational environmental inheritance beyond basic adaptation. This prioritization of ideology over evidence suppressed causal realism in biology, delaying Soviet crop genetics by decades and costing an estimated 30 million lives across famines linked to policy failures from the 1930s to the 1950s. Lysenko's ouster began in 1964, when accumulating harvest shortfalls, international genetic advances, and Khrushchev's removal prompted a reversal, with the doctrine fully debunked by the late 1960s as genetics revived, underscoring how politicization, not evidence, sustained it.

Nazi Racial Hygiene and Aryan Science

The pseudoscientific foundations of Nazi racial hygiene traced back to French diplomat Arthur de Gobineau's Essai sur l'inégalité des races humaines (1853–1855), which asserted the innate superiority of the white Aryan race over others, attributing civilizational achievements solely to Aryan bloodlines while decrying racial mixing as degenerative. Gobineau's deterministic racial hierarchy, devoid of empirical genetic evidence and reliant on historical speculation, influenced later völkisch thinkers and was selectively appropriated by Nazi ideologues to frame Aryans as a master race destined for supremacy. This evolved into Alfred Rosenberg's Der Mythus des 20. Jahrhunderts (1930), a foundational Nazi text positing Aryan racial and spiritual superiority through pseudohistorical narratives and anthropometric claims of physical distinctiveness, such as skull measurements purportedly linking modern Germans to ancient Indo-European conquerors, without controlled validation or falsifiability. Nazi racial hygiene policies operationalized these ideas through state mechanisms, culminating in the Nuremberg Laws of September 15, 1935, which legally defined Jews as racially alien via genealogical criteria rather than religious practice, prohibiting intermarriage and extramarital relations to preserve "German blood" purity. These laws, justified by eugenic pseudoscience claiming hereditary racial traits determined societal fitness, extended prior sterilization programs—over 400,000 Germans deemed "hereditarily ill" were sterilized by 1945 under the 1933 Law for the Prevention of Hereditarily Diseased Offspring—to explicit anti-Semitic measures, ignoring environmental factors in human variation. 
The SS-affiliated Ahnenerbe, established in 1935 under Heinrich Himmler and active until 1945, institutionalized pseudoscience through ideologically driven expeditions, including archaeological digs in occupied territories to fabricate evidence of Nordic origins for ancient civilizations, often employing non-peer-reviewed methods and suppressing contradictory findings. These efforts, budgeted at millions of Reichsmarks, prioritized mythic narratives over stratigraphic analysis or peer review, producing reports that conflated mythology with empirical archaeology. Post-war scrutiny during the Doctors' Trial (December 1946–August 1947) at Nuremberg revealed the methodological fraud in Nazi racial experiments, such as Josef Mengele's twin studies at Auschwitz (1943–1945), which involved injecting chemicals, amputations, and organ removals on over 1,000 sets of twins to ostensibly prove hereditary traits, yet yielded no replicable data due to uncontrolled variables, high mortality (over 90% of subjects), and absence of ethical controls or statistical rigor. Prosecutors highlighted how such work subordinated science to ideology, with defendants admitting that experimental designs were shaped to align with racial dogma, confirming the enterprise's pseudoscientific nature through lack of hypothesis-testing and peer scrutiny.

Post-War Fringe Sciences and Cold War Parapsychology

In the aftermath of World War II, parapsychology persisted as a fringe pursuit within academic settings, notably through J.B. Rhine's Parapsychology Laboratory at Duke University, which operated from 1930 but extended its ESP experiments into the post-war decades until the 1960s. Rhine utilized Zener cards—decks featuring five symbols (circle, cross, waves, square, star)—to assess telepathy and clairvoyance, yielding hit rates above chance in early trials, such as 32% versus the expected 20% in controlled guessing. However, replication attempts highlighted vulnerabilities to sensory leakage, where inadvertent cues from experimenters' nonverbal signals or card imperfections allowed subconscious information transfer, undermining claims of genuine psi effects; independent reviews, including those by Rhine's own associates, confirmed these flaws and the absence of robust, repeatable evidence under double-blind protocols.

Cold War anxieties over alleged Soviet advances in mind-control research prompted U.S. intelligence agencies to fund speculative research blending psychology and parapsychology. The CIA's MKUltra program, initiated in 1953 under Director of Central Intelligence Allen Dulles, encompassed over 149 subprojects exploring mind control via LSD dosing on unwitting subjects, electroshock, hypnosis, and sensory isolation, with expenditures reaching millions annually by the 1960s. Though focused on behavioral modification for interrogation, it incorporated parapsychological elements like altered states for potential ESP enhancement, driven by fears of communist "brainwashing" techniques; the program ended in 1973 amid internal reviews questioning efficacy, with Director Richard Helms ordering record destruction, though surviving documents revealed ethical violations and negligible practical yields. This exploratory ethos extended to the Defense Intelligence Agency's and CIA's remote-viewing initiatives, formalized as the Stargate Project from 1978 to 1995, which trained remote "viewers" to mentally visualize distant or hidden targets for espionage, costing about $20 million over its lifespan.
Sessions involved viewers sketching impressions from geographic coordinates, but a 1995 CIA-commissioned evaluation by the American Institutes for Research deemed results inconsistent, prone to subjective interpretation, and devoid of falsifiable predictions or operational intelligence, prompting termination as the methods invoked unverified psychic mechanisms over prosaic explanations. Concurrently, UFOlogy surged as a pseudoscientific domain post-1947, ignited by the Roswell Army Air Field incident, where rancher William Brazel's recovery of metallic debris was briefly announced as a "flying disc" before clarification as remnants from Project Mogul—a classified balloon array for detecting Soviet nuclear tests. The U.S. Air Force's Project Blue Book, active from March 1952 to December 1969 and advised by astronomers like J. Allen Hynek, cataloged 12,618 sightings, resolving 94% as misidentified aircraft, balloons, astronomical bodies, or hoaxes through prosaic analysis, with the remainder unexplained due to insufficient data rather than extraterrestrial hypotheses; its final report affirmed no security threat or technological breakthrough, applying causal parsimony to dismiss anomalous claims lacking empirical corroboration.
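The statistical pull of results like Rhine's reported 32% hit rate (versus the 20% expected by chance on five-symbol Zener decks) can be illustrated with an exact binomial tail. This is a sketch only: the trial count of 1,000 guesses is a hypothetical illustration, not a figure from Rhine's records.

```python
from math import comb

def binomial_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), summed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Zener decks have 5 symbols, so the chance hit rate is 1/5 = 20%.
n_guesses = 1000                 # hypothetical trial count
hits = int(0.32 * n_guesses)     # the 32% hit rate cited for early trials

p_value = binomial_tail(n_guesses, hits, 0.2)
print(f"P(>= {hits} hits in {n_guesses} guesses by chance) = {p_value:.2e}")
```

Such a deviation is astronomically unlikely under chance, which is precisely why critics looked for, and found, prosaic explanations like sensory leakage rather than accepting psi.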

21st-Century Resurgence in the Digital Era

Alternative Medicine and Anti-Vaccination Movements

Alternative medicine encompasses a range of practices promoted as therapeutic alternatives to conventional medicine, often lacking empirical support and relying on mechanisms incompatible with established pharmacological principles. In the 21st century, these practices have proliferated globally, exploiting uncertainties in complex biological systems and amplified by online dissemination, despite rigorous testing revealing negligible efficacy beyond placebo effects. Anti-vaccination movements, intertwined with alternative medicine advocacy, exemplify this by rejecting vaccination's proven causal role in preventing infectious diseases, favoring anecdotal claims over population-level data from randomized controlled trials (RCTs) and epidemiological studies.

A pivotal event fueling modern anti-vaccination sentiment was the 1998 publication by Andrew Wakefield and colleagues in The Lancet, which described 12 children with developmental disorders allegedly linked to the measles, mumps, and rubella (MMR) vaccine, proposing a novel syndrome involving gut inflammation and autism onset. The study was retracted in 2010 after investigations revealed ethical violations, undisclosed financial conflicts, and data manipulation, with Wakefield stripped of his medical license. Subsequent meta-analyses of millions of children, including cohort studies tracking vaccination status and autism incidence, have consistently found no causal association, attributing perceived links to temporal coincidence and diagnostic changes rather than vaccine-induced pathology. Despite this, the paper's narrative persists in anti-vaccination circles, contributing to measles resurgence, as seen in outbreaks exceeding 1,200 U.S. cases in 2019, predominantly among unvaccinated communities.

Homeopathy, originating from Samuel Hahnemann's 1796 formulation of "like cures like" and extreme dilutions purportedly imprinting water with therapeutic memory, represents a longstanding pseudoscientific holdover into the 21st century.
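The arithmetic behind the dilution critique is straightforward: a common "30C" remedy is diluted by a factor of 100 thirty times, so the expected number of molecules of active substance can be computed directly. A minimal sketch, assuming one mole of substance at the start:

```python
AVOGADRO = 6.022e23  # molecules per mole

def expected_molecules(moles_start, c_level):
    """Expected molecules of the original substance after a homeopathic
    'C'-scale dilution (each C step dilutes by a factor of 100)."""
    dilution_factor = 100.0 ** c_level
    return moles_start * AVOGADRO / dilution_factor

remaining = expected_molecules(1.0, 30)  # 30C dilution of one mole
print(f"Expected molecules remaining: {remaining:.1e}")
```

The result is on the order of 10^-37 molecules, i.e. effectively none, which is why such remedies are chemically indistinguishable from the solvent.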
Modern dilutions often exceed Avogadro's number, yielding remedies statistically indistinguishable from pure solvent, yet claims persist without adherence to the dose-response relationships central to pharmacology. A 2005 meta-analysis by Shang et al., reviewing 110 homeopathy trials and 110 matched conventional trials, concluded that any apparent effects were attributable to bias or placebo responses, with rigorous subsets showing no superiority over inert controls. Regulatory bodies, including the UK's NHS in 2017 and Australia's National Health and Medical Research Council in 2015, have deemed homeopathy ineffective for any condition based on systematic reviews, yet global sales reached $5.4 billion annually by 2018, driven by consumer distrust of pharmaceutical rigor rather than empirical validation.

The COVID-19 pandemic from 2020 highlighted alternative medicine's vulnerability to hasty causal inferences amid empirical gaps, with hydroxychloroquine (HCQ) hyped as a prophylactic or treatment based on in vitro antiviral data and small observational studies. Early promotion by figures in alternative health communities ignored pharmacokinetic realities, such as inadequate lung tissue concentrations at safe doses. The RECOVERY trial, a large-scale RCT involving over 11,000 hospitalized patients, reported on June 5, 2020, that HCQ conferred no mortality benefit (27% death rate versus 25% in controls) and potentially increased risks like prolonged QT intervals, prompting the WHO to discontinue the HCQ arm of its global Solidarity trial. Multiple RCTs, including those in The Lancet and NEJM, corroborated these null findings, underscoring how pseudoscientific endorsement bypasses falsification through controlled experimentation.

Conspiracy-Driven Pseudosciences and UFOlogy

In the digital era, conspiracy-driven pseudosciences have proliferated through online platforms, repackaging discredited ideas like geocentrism into modern narratives that prioritize anecdotal testimonies and selective data over empirical verification. These movements often challenge established science by alleging institutional cover-ups, yet they consistently fail to produce testable artifacts or falsifiable predictions, relying instead on viral videos and unverified claims. The flat Earth theory, dormant since antiquity, experienced a notable revival after 2010, fueled by recommendation algorithms that amplified conspiracy content and persuaded viewers to question the globe model. A 2019 study by researchers at Texas Tech University analyzed YouTube recommendations, finding that exposure to initial Flat Earth videos led to algorithmic funnels promoting more extreme content, contributing to a surge in believers who dismissed contrary evidence. Proponents ignore satellite imagery from programs like NASA's Earth Observing System, which has provided over 20 years of continuous global data confirming Earth's curvature and rotation since 1999, as well as ground-based demonstrations such as the Foucault pendulum, first exhibited in 1851 and replicated worldwide to show rotational precession varying predictably with latitude. This rejection favors interpretive anecdotes, such as horizon observations, over reproducible physics, exemplifying a causal disconnect where prosaic explanations rooted in mechanics are supplanted by narratives of elite deception.

UFOlogy has similarly evolved into a conspiracy framework, with government disclosures like the 2017 release of three U.S. Navy videos—captured by pilots in 2004 and 2015—sparking renewed claims of extraterrestrial visitation despite analyses attributing them to camera artifacts or distant aircraft. These videos emerged from the Advanced Aerospace Threat Identification Program (AATIP), a Pentagon initiative running from 2007 to 2012 that allocated $22 million to investigate unidentified aerial phenomena, yet produced no peer-reviewed physical evidence such as craft debris or biological samples for independent scrutiny.
Advocates interpret ambiguous footage as proof of extraterrestrial visitation, preferring grand theories of government concealment over mundane alternatives like optical illusions or classified drones, a pattern evident in the absence of verifiable artifacts across decades of sightings reported since the 1940s. This anecdotal emphasis undermines causal realism, as no UFO claim has yielded artifacts enabling material analysis, contrasting with verifiable aerospace recoveries like Project Mogul balloon debris. Such pseudosciences thrive by leveraging echo chambers to aggregate unfiltered eyewitness accounts, sidelining institutional science's reliance on peer review and replication, though mainstream sources documenting these trends—often from academic studies—remain more credible than proponent blogs due to methodological rigor. The result is a challenge to evidence-based consensus, where emotional appeal to hidden truths overrides data-driven prosaic resolutions.
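The latitude dependence of Foucault-pendulum precession noted above is directly computable: the swing plane completes a full rotation in one sidereal day divided by the sine of the latitude. A sketch using the latitude of the 1851 Paris demonstration as an example:

```python
from math import sin, radians

SIDEREAL_DAY_HOURS = 23.934  # Earth's rotation period relative to the stars

def precession_period_hours(latitude_deg):
    """Hours for a Foucault pendulum's swing plane to rotate 360 degrees."""
    return SIDEREAL_DAY_HOURS / sin(radians(latitude_deg))

# At the poles the plane rotates once per sidereal day; at the equator, never.
paris = precession_period_hours(48.85)  # approximate latitude of Paris
print(f"Precession period in Paris: {paris:.1f} hours")
```

The roughly 32-hour Paris period, versus about 24 hours at a pole, is exactly the predictable latitude variation that replications worldwide confirm and that flat-Earth models cannot reproduce.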

Algorithmic and AI-Revived Pseudoscientific Claims

In the 2010s and 2020s, advances in machine learning have facilitated the resurgence of physiognomy, the discredited practice of inferring character traits or behavioral tendencies from facial features, through applications in facial recognition and inference models. Early examples include a 2017 study by Stanford researchers claiming high accuracy in classifying sexual orientation from facial images, which critics likened to modern physiognomy due to its reliance on superficial correlations without establishing causation or controlling for confounders like grooming or photo context. By 2018, AI systems purporting to predict criminality from facial structure prompted warnings from experts about reviving pseudoscientific determinism, as these models often overfit to biased training datasets reflecting societal prejudices rather than innate traits. Deep learning models have extended this trend to other domains, echoing phrenology's flawed localization of mental faculties by imposing modular interpretations on complex data without robust causal frameworks. For instance, genomic analyses using AI have been critiqued for generating spurious associations between genetic markers and traits, driven by overfitting to noise in large datasets rather than validated mechanisms, as highlighted in 2024 reviews of AI-assisted studies that produced erroneous imputations and predictions. Such approaches prioritize predictive accuracy over interpretability, leading to claims of trait prediction that mimic historical pseudosciences but fail under scrutiny for lacking experimental validation or replication outside training distributions. Empirical evaluations reveal systemic vulnerabilities, including hallucinated correlations amplified by unregularized models, as documented in 2024 analyses warning that machine learning's "black box" nature conceals statistical artifacts akin to p-hacking or cherry-picked fits in pseudoscientific literature.
A November 2024 preprint argues that deep learning's data-driven paradigm has resurrected pseudoscientific methodologies by sidelining statistical principles like the distinction between correlation and causation, evidenced by failures in out-of-sample generalization for facial inference tasks. These issues persist despite methodological safeguards, underscoring how algorithmic innovation often outpaces rigorous validation, resulting in applications deployed in hiring, policing, and diagnostics that perpetuate unverified deterministic claims.
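The "spurious associations" failure mode described above is easy to reproduce: screen enough random features against a random target and some will correlate "significantly" by chance alone. A minimal sketch with pure noise; all sizes and names here are illustrative, not drawn from any cited study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 50, 200

X = rng.standard_normal((n_samples, n_features))  # pure-noise "markers"
y = rng.standard_normal(n_samples)                # pure-noise "trait"

# Rough 95% significance cutoff for a sample correlation with n samples.
cutoff = 2.0 / np.sqrt(n_samples)

corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
false_hits = int(np.sum(np.abs(corrs) > cutoff))
print(f"{false_hits} of {n_features} noise features pass the cutoff")
```

Roughly 5% of the noise features clear the threshold, which is why predictive hits in large feature spaces mean little without out-of-sample replication and causal validation.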

Philosophical Debates and Demarcation Challenges

Historical Evolution of Demarcation Criteria

The demarcation problem in philosophy of science seeks criteria to distinguish empirical inquiry from non-empirical or systematically flawed doctrines purporting scientific status. Early 20th-century efforts emphasized logical and methodological tests, evolving toward assessments of evidential responsiveness and theoretical productivity. Karl Popper introduced falsifiability as a demarcation criterion in his 1934 Logik der Forschung (English: The Logic of Scientific Discovery, 1959), positing that genuine scientific theories risk refutation through empirical tests, whereas pseudosciences like psychoanalysis and historical materialism evade disproof via ad hoc modifications or immunizing strategies. Popper argued this criterion resolves the induction problem by prioritizing bold conjectures open to severe testing over verification, rejecting doctrines that explain all outcomes without predictive risk.

Thomas Kuhn's 1962 The Structure of Scientific Revolutions challenged Popper's strict falsificationism by describing science as operating within paradigms—shared frameworks guiding "normal science"—where anomalies accumulate without immediate rejection, leading to crises and revolutionary shifts only when alternatives emerge. This paradigm model complicates demarcation, as pseudosciences mimic paradigmatic resistance to counterevidence but lack the problem-solving success and eventual anomaly resolution that characterize scientific revolutions, highlighting communal and historical dynamics over isolated logical tests. Imre Lakatos refined these ideas in his 1970s methodology of scientific research programmes, distinguishing "hard core" axioms protected by auxiliary hypotheses: progressive programmes extend explanatory power by predicting novel facts, while degenerative ones merely retrofit anomalies post hoc, marking the latter as pseudoscientific.
Paul Thagard operationalized similar distinctions in 1978, deeming a theory pseudoscientific if it persists despite being less progressive than rivals over extended periods and if its adherents systematically ignore unsolved anomalies; he elaborated this computationally in his 1988 Computational Philosophy of Science, applying it to cases like astrology. Contemporary Bayesian frameworks address demarcation through probabilistic updating, where scientific claims revise credences via likelihood ratios and priors responsive to evidence, contrasting pseudosciences' failure to diminish support for core tenets amid disconfirming data. This approach integrates confirmation theory, emphasizing cumulative evidential warrant over binary demarcation, though it requires assessing community practices to detect dogmatic non-updating.
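The Bayesian contrast can be made concrete with a toy update rule: each piece of disconfirming evidence multiplies the odds of a hypothesis by a likelihood ratio below one. The prior values and likelihood ratio below are illustrative assumptions, not drawn from any cited framework:

```python
def update(prior, likelihood_ratio, n_observations):
    """Posterior probability after n independent observations, each with
    the given likelihood ratio P(E | H) / P(E | not H)."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio ** n_observations
    return odds / (1 + odds)

LR = 0.1  # each observation is 10x more likely if the hypothesis is false

moderate = update(0.5, LR, 2)    # a responsive prior collapses quickly
dogmatic = update(0.999, LR, 2)  # a near-certain prior barely moves

print(f"moderate prior -> {moderate:.4f}, dogmatic prior -> {dogmatic:.4f}")
```

The same two disconfirming observations drive a moderate prior below 1% while leaving the dogmatic prior above 90%, mirroring the diagnostic that pseudoscientific communities fail to diminish support for core tenets amid disconfirming data.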

Criticisms of Overbroad Pseudoscience Labeling

Critics of rigid demarcation criteria argue that attempts to sharply distinguish science from pseudoscience often fail, leading to the mislabeling of legitimate but minority or preliminary hypotheses as pseudoscientific, thereby stifling empirical investigation. Philosopher Larry Laudan, in his 1983 analysis, contended that the demarcation problem is a "pseudoproblem" because historical efforts to define science via criteria like verifiability or falsifiability have proven either too vague or inapplicable, resulting in arbitrary exclusions rather than objective boundaries. This overbroad application risks prioritizing consensus over evidence, pathologizing dissenting views that later contribute to scientific progress.

Historical precedents illustrate how ideas initially dismissed as pseudoscience were eventually vindicated through accumulating evidence. In the 1790s, Ernst Chladni's hypothesis that meteorites originated from extraterrestrial sources was ridiculed by the scientific establishment as superstitious, yet the 1803 fall of meteorites in L'Aigle, France, confirmed their cosmic origin via chemical analysis. Similarly, Alfred Wegener's 1912 theory of continental drift was derided as fringe speculation lacking a plausible mechanism, with geologists like Rollin T. Chamberlin dismissing it outright; evidence in the 1960s, including seafloor-spreading and paleomagnetic data, substantiated the core idea. These cases demonstrate how institutional resistance, often under the guise of pseudoscience labeling, delayed acceptance despite empirical anomalies.

In contemporary contexts, overbroad labeling has been applied to research on the heritability of intelligence, where twin studies consistently estimate genetic contributions at 50-80% in adulthood, yet findings like those in Herrnstein and Murray's 1994 The Bell Curve faced accusations of pseudoscience from outlets prioritizing environmental explanations despite data from adoption and twin designs. Such critiques often stem from ideological commitments in academia, where systemic biases favor nurture-over-nature priors, suppressing causal inquiries into genetic factors even when supported by large-scale datasets like the Minnesota Study of Twins Reared Apart.
This pattern echoes broader concerns that equating dissent with pseudoscience undermines causal realism, as preliminary evidence challenging dominant paradigms—such as early doubts about climate model sensitivities—is preemptively marginalized, potentially retarding adaptive policy and inquiry.
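The twin-study heritability estimates mentioned above typically rest on Falconer's formula, which doubles the gap between identical- and fraternal-twin correlations. A sketch with illustrative correlation values in the range reported for adult IQ; the specific numbers are assumptions for demonstration:

```python
def falconer_h2(r_mz, r_dz):
    """Broad heritability estimate from twin correlations:
    h^2 = 2 * (r_MZ - r_DZ), per Falconer's formula."""
    return 2.0 * (r_mz - r_dz)

# Illustrative correlations: identical twins ~0.86, fraternal twins ~0.60.
h2 = falconer_h2(r_mz=0.86, r_dz=0.60)
print(f"Estimated heritability: {h2:.2f}")
```

The estimate of 0.52 falls inside the 50-80% range cited above; the formula's simplicity is also why critics stress its assumptions (equal environments, additivity) when debating such findings.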

References
