Verisimilitude
from Wikipedia

In philosophy, verisimilitude (or truthlikeness) is the notion that some propositions are closer to being true than other propositions. The problem of verisimilitude is the problem of articulating what it takes for one false theory to be closer to the truth than another false theory.[1]

This problem was central to the philosophy of Karl Popper, largely because Popper was among the first to affirm that truth is the aim of scientific inquiry while acknowledging that most of the greatest scientific theories in the history of science are, strictly speaking, false. If this long string of purportedly false theories is to constitute progress with respect to the goal of truth, then it must be at least possible for one false theory to be closer to the truth than others.

Karl Popper


Popper's formal definition of verisimilitude was challenged from 1974 onward by Pavel Tichý,[2][3] John Henry Harris,[4] and David Miller,[5] who argued that Popper's definition has an unintended consequence: that no false theory can be closer to the truth than another. Popper himself stated: "I accepted the criticism of my definition within minutes of its presentation, wondering why I had not seen the mistake before."[6] This result gave rise to a search for an account of verisimilitude that did not deem progress towards the truth an impossibility.

Post-Popperian theories


Theories proposed by David Miller and by Theo Kuipers build on Popper's approach, guided by the notion that truthlikeness is a function of a truth factor and a content factor. Others (e.g. those advanced by Gerhard Schurz in collaboration with Paul Weingartner, by Mortensen, and by Ken Gemes) are also inspired by Popper's approach but locate what they believe to be the error of Popper's proposal in his overly generous notion of content, or consequence, proposing instead that the consequences that contribute to closeness to truth must be, in a technical sense, "relevant". A different approach (already proposed by Tichý and Risto Hilpinen and developed especially by Ilkka Niiniluoto and Graham Oddie) takes the "likeness" in truthlikeness literally, holding that a proposition's likeness to the truth is a function of the overall likeness to the actual world of the possible worlds in which the proposition would be true. An attempt to use the notion of point-free metric space is proposed by Giangiacomo Gerla.[7] There is currently a debate about whether or to what extent these different approaches to the concept are compatible.[8][9][10]

from Grokipedia
Verisimilitude is the quality of appearing true or real, derived from the Latin verisimilitudo, meaning "likelihood of truth," combining verus ("true") and similis ("similar"). In literature and drama, it denotes the believability of fictional elements—such as characters, settings, dialogue, and events—that mimic reality to engage audiences and suspend disbelief, a concept rooted in classical poetics and emphasized by Aristotle in his Poetics as essential for effective drama. Detailed sensory descriptions and the small imperfections of everyday settings enhance verisimilitude by grounding improbable narratives in plausible details, as seen in Jay McInerney's Bright Lights, Big City.

In the philosophy of science, verisimilitude, or "truthlikeness," refers to the degree to which a scientific theory approximates the truth, even if false, allowing comparisons between competing hypotheses to assess progress toward greater accuracy. Introduced by Karl Popper in his 1963 work Conjectures and Refutations, it addresses the problem of evaluating false theories, positing that a theory gains verisimilitude by increasing its true content while minimizing false content, thus supporting falsificationism's optimistic view of scientific advancement. However, Popper's initial qualitative and probabilistic definitions faced criticism for paradoxical results, most notably the consequence that no false theory could come out closer to the truth than any other, leading to subsequent refinements in the literature.

The term entered English in the early 1600s, initially in translations of classical texts, and has since influenced literary movements such as 19th-century naturalism, whose authors prioritized empirical detail to achieve lifelike authenticity. Beyond literature and philosophy, verisimilitude extends to theater, film, and other representational arts, where it balances artistic invention with historical plausibility to maintain audience immersion without claiming absolute truth.

Origins and Definition

Historical Context

The concept of verisimilitude, or likeness to truth, traces its philosophical roots to ancient Greek thought, where Plato and Aristotle explored notions of approximation to reality in knowledge and representation. In Plato's works, such as the Republic, theories and imitations in the arts or sciences were evaluated by their degree of resemblance to ideal forms or truth, though Plato prioritized exact knowledge over mere probability. Aristotle, building on this, emphasized verisimilitude in rhetoric and poetics as a means to persuasive arguments that align with probable truths, distinguishing it from outright fiction while acknowledging the fallibility of human understanding. These ancient ideas laid a foundational concern with how incomplete or erroneous beliefs could still advance toward greater accuracy, influencing later epistemological debates.

The modern revival of verisimilitude occurred within the philosophy of science in the mid-20th century, amid a shift toward fallibilism that rejected indubitable knowledge in favor of provisional theories subject to revision. This period was marked by rapid advances in physics and biology, such as quantum mechanics, relativity, and refinements of evolutionary theory, which highlighted how successive scientific theories, though often falsified, represented incremental progress despite their errors. Fallibilism, prominently advanced by thinkers such as Charles Sanders Peirce and Karl Popper, underscored the need for a metric of theoretical improvement even when absolute truth remained elusive, responding to the instability of scientific paradigms in an era of accelerating discovery.

Karl Popper first publicly proposed verisimilitude at the 1960 International Congress for Logic, Methodology and Philosophy of Science at Stanford, as a way to formalize progress in a fallible enterprise. He elaborated this motivation in his 1963 book Conjectures and Refutations: The Growth of Scientific Knowledge, arguing that scientific advancement arises from bold conjectures tested through refutations, where theories gain verisimilitude by approximating truth more closely than their predecessors, even if ultimately false. Popper's qualitative definition framed verisimilitude as a comparative notion, enabling the evaluation of how refuted theories contribute to the growth of knowledge.

Popper's Qualitative Definition

Karl Popper introduced the qualitative notion of verisimilitude in his 1963 work Conjectures and Refutations to address the idea of degrees of truth in scientific theories, particularly emphasizing that even false theories can approximate truth to varying extents. He conceptualized verisimilitude as a comparative measure in which a theory's "truthlikeness" depends on the balance between its truth-content—the set of true propositions logically implied by the theory—and its falsity-content—the set of false propositions it implies. A theory possesses verisimilitude insofar as its truth-content exceeds its falsity-content, rendering it more "truthlike" despite being false overall.

Popper's qualitative comparison of verisimilitude between two theories, say A and B, hinges on subclass relations among their contents. Specifically, A has greater verisimilitude than B if the truth-content of A includes the truth-content of B (or is equivalent to it) while the falsity-content of A is properly included in that of B, or if the truth-content of A properly includes that of B while the falsity-content of A is included in or equivalent to B's. Intuitively, this means A implies more true statements and fewer false ones than B, allowing for ordinal rankings without numerical assignment. This approach avoids probabilistic interpretations, focusing instead on logical consequences to evaluate theoretical progress.

Central to this framework is Popper's principle of the value of content, which posits that bolder, more contentful (and thus more falsifiable) theories are preferable when their increased truth-content outweighs any added falsity-content. Such theories risk more but offer greater potential insight, aligning with the idea that scientific advancement favors hypotheses with substantial informative power over ad hoc or empty ones.

This qualitative definition integrates with Popper's falsificationism, enabling optimism in scientific methodology by permitting the replacement of a falsified theory with a successor that, while still potentially false, possesses greater verisimilitude through expanded truth-content and reduced relative falsity. Thus, progress occurs not through confirmation but via successive approximations to truth, where each refutation guides the formulation of more truthlike conjectures.
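As a rough illustration of the subclass criterion, the following Python sketch (an assumption-laden toy, not Popper's own formalism) encodes a three-atom propositional language, identifies each proposition with the set of valuations at which it is true, and checks the truth-content and falsity-content inclusions described above. The atoms, helper names, and example theories are all invented for the demonstration.

```python
from itertools import product, combinations

# Toy propositional language with atoms p, q, r.  A "world" is a valuation;
# a proposition is identified (up to logical equivalence) with the set of
# worlds at which it is true, and a theory entails a proposition exactly
# when every model of the theory is a model of that proposition.
ATOMS = ("p", "q", "r")
WORLDS = [dict(zip(ATOMS, bits)) for bits in product((True, False), repeat=len(ATOMS))]

def models(pred):
    """Indices of the worlds satisfying a Python predicate over a valuation."""
    return frozenset(i for i, w in enumerate(WORLDS) if pred(w))

def consequences(theory):
    """Every proposition (world-set) entailed by the theory, i.e. its supersets."""
    idx = range(len(WORLDS))
    return {frozenset(s) for r in range(len(WORLDS) + 1)
            for s in combinations(idx, r) if theory <= frozenset(s)}

def truth_content(theory, actual):
    return {c for c in consequences(theory) if actual in c}

def falsity_content(theory, actual):
    return {c for c in consequences(theory) if actual not in c}

def closer_to_truth(a, b, actual):
    """Popper's subclass criterion: theory a is closer to the truth than theory b."""
    ta, fa = truth_content(a, actual), falsity_content(a, actual)
    tb, fb = truth_content(b, actual), falsity_content(b, actual)
    return (tb <= ta and fa < fb) or (tb < ta and fa <= fb)

# Sanity check with two TRUE theories: the actual world makes p, q and r all true.
actual = WORLDS.index({"p": True, "q": True, "r": True})
stronger = models(lambda w: w["p"] and w["q"])    # true and logically stronger
weaker = models(lambda w: w["p"])                 # true but weaker
print(closer_to_truth(stronger, weaker, actual))  # True: more true consequences, no false ones
```

On this toy encoding the logically stronger true theory comes out closer to the truth, matching the principle of the value of content described above.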

Formal Developments and Challenges

Quantitative Formalization Attempts

In the 1970s, Popper shifted toward quantitative measures of verisimilitude, building on his earlier qualitative ideas to provide a more precise assessment of a theory's closeness to the truth. In Objective Knowledge: An Evolutionary Approach (1972), he proposed defining the verisimilitude of a statement $b$, denoted $Vs(b)$, as the difference between its truth content $Ct(b)$ and its falsity content $Cf(b)$, so that $Vs(b) = Ct(b) - Cf(b)$, where the truth content represents the class of true propositions logically entailed by $b$, and the falsity content the class of false propositions it entails. This aimed to capture how a theory could increase in truthlikeness by expanding its true consequences while minimizing false ones, allowing numerical comparison even for false theories that approximate truth to varying degrees.

A major challenge for this quantitative approach arose from the infinite nature of logical consequences in standard languages, which makes it impossible to assign meaningful weights or measures to $Ct(b)$ and $Cf(b)$ without arbitrary restrictions. Popper recognized this issue, noting that the totality of consequences forms an infinite set, which complicates direct subtraction and requires some form of weighting based on logical improbability or content strength. Early proposals to address this included restricting the domain to finite models, where the contents could be enumerated and compared via cardinalities, or limiting consideration to observational statements with finite possible outcomes, thereby enabling computable measures of truth and falsity contents. For instance, in languages with finite models, verisimilitude could be quantified by comparing the sizes of the sets of true and false propositions entailed by competing theories, such that a theory $b_1$ has greater verisimilitude than $b_2$ if the cardinality of its true propositions exceeds that of $b_2$'s while the cardinality of its false propositions does not.

David Miller, a key collaborator in developing Popper's ideas, contributed initial quantitative refinements during this period, emphasizing comparative measures over absolute ones to sidestep some infinitary problems; in his 1974 analysis, he explored how relative increases in truth content could establish verisimilitude orderings without full numerical evaluation. These efforts laid groundwork for later formalizations, though they highlighted the need for careful scoping to observational or finite contexts.
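A minimal sketch of the finite-model counting idea, reusing the same toy three-atom language as above; the theories compared and the bare counting comparison are illustrative assumptions, not a formula from Popper's text.

```python
from itertools import product, combinations

ATOMS = ("p", "q", "r")
WORLDS = [dict(zip(ATOMS, bits)) for bits in product((True, False), repeat=len(ATOMS))]

def models(pred):
    return frozenset(i for i, w in enumerate(WORLDS) if pred(w))

def content_sizes(theory, actual):
    """Cardinalities (|Ct|, |Cf|) of the theory's true and false consequences."""
    idx = range(len(WORLDS))
    cons = [frozenset(s) for r in range(len(WORLDS) + 1)
            for s in combinations(idx, r) if theory <= frozenset(s)]
    ct = sum(1 for c in cons if actual in c)
    return ct, len(cons) - ct

actual = WORLDS.index({"p": True, "q": True, "r": True})
for name, pred in [("p", lambda w: w["p"]),
                   ("p and q", lambda w: w["p"] and w["q"]),
                   ("p and q and r", lambda w: w["p"] and w["q"] and w["r"])]:
    ct, cf = content_sizes(models(pred), actual)
    print(f"{name:14s} |Ct| = {ct:3d}   |Cf| = {cf}")
# Stronger true theories entail more true propositions and no false ones,
# so a counting comparison of content sizes ranks them as more verisimilar.
```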

Key Objections and Theorems

The Tichý-Miller theorem, proved independently by Pavel Tichý and David Miller in 1974, demonstrates a fundamental flaw in Popper's proposed definition of verisimilitude, which measures the truthlikeness of a theory by the excess of its truth content over its falsity content. Specifically, the theorem establishes that under Popper's definition, no false theory can possess greater verisimilitude than any other false theory, rendering comparative assessments of truth approximation impossible among falsified scientific hypotheses.

Tichý illustrated this failure through a counterexample involving two theories about the weather, formulated in a simple propositional language with atomic sentences for rain (p), wind (q), and warmth (r), where the actual conditions are p, q, and r. One theory asserts not-p, not-q, and not-r (wrong on all counts), while the other asserts p, q, and not-r (correct on two counts, wrong on one). Intuitively, the second theory is closer to the truth, yet Popper's definition assigns them equal standing: their truth-contents and falsity-contents are of equal size, and neither dominates the other under the subclass criterion. The counterexample thus shows how Popper's approach counterintuitively equates the truthlikeness of theories with vastly different accuracies.

Miller extended this critique with his relative worthlessness result, proving that Popper's definition implies all false theories are equally distant from the truth, irrespective of how closely they approximate known facts. For instance, a highly detailed but false scientific model cannot be deemed more truthlike than a trivially false statement under this measure, because any increase in true consequences is offset by added false ones. This result, derived from the same logical framework as the Tichý-Miller theorem, underscores the measure's inability to rank false theories hierarchically.

These objections collectively undermine the core rationale behind verisimilitude as a metric for scientific progress, particularly for false theories, which constitute the majority in empirical science. By showing that verisimilitude cannot increase among falsified hypotheses, the theorems challenge the notion that science progresses through successively better approximations to truth, forcing a reevaluation of how to formalize degrees of truthlikeness.
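The weather counterexample can be replayed mechanically with the same toy encoding used in the earlier sketches (repeated here so the snippet runs on its own; it illustrates the theorem's content rather than reproducing Tichý's or Miller's proofs). Neither false theory comes out closer under the subclass criterion, and their content sizes coincide, even though one theory is intuitively far more accurate.

```python
from itertools import product, combinations

ATOMS = ("p", "q", "r")            # rain, wind, warmth
WORLDS = [dict(zip(ATOMS, bits)) for bits in product((True, False), repeat=len(ATOMS))]
ACTUAL = WORLDS.index({"p": True, "q": True, "r": True})   # it is rainy, windy and warm

def models(pred):
    return frozenset(i for i, w in enumerate(WORLDS) if pred(w))

def contents(theory):
    """Split the theory's consequences into those true and those false at ACTUAL."""
    idx = range(len(WORLDS))
    cons = [frozenset(s) for r in range(len(WORLDS) + 1)
            for s in combinations(idx, r) if theory <= frozenset(s)]
    return ({c for c in cons if ACTUAL in c}, {c for c in cons if ACTUAL not in c})

def closer_to_truth(a, b):
    """Popper's subclass criterion, as in the earlier sketch."""
    ta, fa = contents(a)
    tb, fb = contents(b)
    return (tb <= ta and fa < fb) or (tb < ta and fa <= fb)

all_wrong = models(lambda w: not w["p"] and not w["q"] and not w["r"])   # wrong on every count
nearly_right = models(lambda w: w["p"] and w["q"] and not w["r"])        # wrong only about r

print(closer_to_truth(nearly_right, all_wrong))   # False: the criterion cannot rank them
print(closer_to_truth(all_wrong, nearly_right))   # False
t1, f1 = contents(nearly_right)
t2, f2 = contents(all_wrong)
print(len(t1), len(f1), len(t2), len(f2))         # 64 64 64 64: the content sizes coincide
```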

Alternative Theories

Likeness Approach

The likeness approach to verisimilitude conceptualizes truthlikeness as a measure of similarity or distance between a theory's models and the actual truth within a logical space of possible worlds. This framework represents theories as sets of possible worlds, where verisimilitude is inversely related to distance from the true world, allowing a graded ordering of false theories based on how closely they approximate reality. Unlike earlier content-based attempts, this method emphasizes structural resemblance in a metric or similarity space, enabling the resolution of the paradoxes under which no false theory could exceed another's truthlikeness.

Pavel Tichý introduced the core idea in 1974 by proposing similarity relations among possible worlds to define verisimilitude, suggesting at the conclusion of his critique of Popper that spheres of similarity around propositions could order theories by their proximity to truth. Building on this, Risto Hilpinen formalized the approach in 1976 using metric spaces on possible worlds, where the degree of truthlikeness of a theory $b$ is determined by the minimum distance from its models to the true world, such that closer approximations yield higher verisimilitude. This metric formulation allows verisimilitude to be treated as a primitive notion of likeness, avoiding reliance on logical entailment alone.

Ilkka Niiniluoto advanced the likeness approach in 1978 with his min-sum-average measure, which combines the minimum distance from a theory's models to the truth with the average distance across those models:

$$\text{Lik}(b) = \frac{1}{2}\left[\min_{w \in b} d(w, \text{truth}) + \frac{1}{|b|} \sum_{w \in b} d(w, \text{truth})\right]$$

Here, $d$ denotes the distance function on the logical space, and $|b|$ is the number of models of $b$. This measure balances the best-case approximation (the minimum) with overall performance (the average), providing a robust ordering even for complex theories. For instance, consider two rival false theories: one positing that all swans are white (false because of black swans) and another that all swans are black (also false). The likeness approach can rank the former higher if its models are, on average, closer to the true world in which swans vary in color, based on distances in the space of possible avian distributions.

The advantages of the likeness approach include its ability to order false theories progressively—for example, ranking Newtonian mechanics as more truthlike than Aristotelian physics relative to relativity—and its alignment with intuitive notions of scientific improvement, where refinements reduce distances to truth without requiring full accuracy. Developed to address the Tichý-Miller paradoxes that undermined Popper's qualitative definition, this method restores the possibility of comparative verisimilitude for incomplete or erroneous hypotheses.
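A minimal sketch of the likeness idea, assuming Hamming distance between valuations of the three weather atoms as the metric $d$ (Niiniluoto's own framework uses richer constituent-based distances, so this is only an illustration). The min-plus-average quantity below is a distance, so smaller values indicate greater truthlikeness.

```python
from itertools import product

ATOMS = ("p", "q", "r")
WORLDS = [dict(zip(ATOMS, bits)) for bits in product((True, False), repeat=len(ATOMS))]
TRUTH = {"p": True, "q": True, "r": True}        # the actual weather

def hamming(w, v):
    """Normalized Hamming distance between two valuations."""
    return sum(w[a] != v[a] for a in ATOMS) / len(ATOMS)

def min_sum_average(theory_pred):
    """Average of the minimum and the mean distance from a theory's models to the truth."""
    distances = [hamming(w, TRUTH) for w in WORLDS if theory_pred(w)]
    return 0.5 * (min(distances) + sum(distances) / len(distances))

nearly_right = lambda w: w["p"] and w["q"] and not w["r"]          # wrong only about warmth
all_wrong = lambda w: not w["p"] and not w["q"] and not w["r"]     # wrong on every count

print(min_sum_average(nearly_right))   # 0.333...: one atom away from the truth
print(min_sum_average(all_wrong))      # 1.0: maximally distant
# Unlike the content comparison, the likeness measure ranks the nearly-right
# false theory strictly closer to the truth than the completely wrong one.
```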

Content Approach

The content approach to verisimilitude seeks to refine Karl Popper's original qualitative notion by addressing its limitations, particularly the problem of infinite sets in truth-content and falsity-content measures, through the use of partial or estimated contents that focus on probabilistic or approximative alignments with truth. This refinement maintains the emphasis on a theory's informational content—its capacity to entail true propositions—while mitigating the formal challenges that rendered Popper's definition inadequate for comparing false theories. By incorporating estimative functions, these measures evaluate how closely a theory approximates the full truth without requiring exhaustive enumeration of all its logical consequences.

A key development in this approach came from Ilkka Niiniluoto, who defined verisimilitude in terms of the similarity between a theory's estimated truth-content and the complete truth, employing probabilistic or estimative functions to quantify degrees of approximation. Niiniluoto's framework introduces a truth-estimate function $\text{tr}$, which assigns values between 0 and 1 to propositions based on their estimated alignment with reality given the available evidence. The verisimilitude of a hypothesis $h$ relative to the truth $t$ is then given by

$$\text{ver}(h, t) = \frac{\text{tr}(h \land t) + \bigl(1 - \text{tr}(h \land \lnot t)\bigr)}{2},$$

where $\text{tr}(h \land t)$ represents the estimated truth-content (the portion of $h$ that aligns with $t$), and $1 - \text{tr}(h \land \lnot t)$ captures the avoidance of falsity-content. The formula averages the normalized truth-likeness of the conjunctive part with the complement of the falsity-likeness, so that verisimilitude increases as $h$ better captures the true aspects of $t$ while avoiding the false ones; for instance, if $h$ fully entails $t$, then $\text{ver}(h, t) = 1$, while complete contradiction yields 0. This estimative method allows practical comparisons even for complex theories, since the truth-estimates can be derived from empirical evidence or inductive probabilities without invoking infinite sets.

David Miller's post-1974 contributions further advanced comparative content measures within this approach, emphasizing relevant consequences—those propositions logically implied by a theory that are pertinent to its domain—over total logical content, so as to enable meaningful rankings of theories. By focusing on such restricted sets of consequences, Miller's framework avoids the trivialization issues of Popper's original measure, allowing one theory to count as having greater verisimilitude than another if it entails more true relevant consequences while minimizing false ones. For example, in comparing astronomical theories, Kepler's laws exhibit higher content-verisimilitude than Ptolemy's geocentric model because they better align with the true elliptical orbits of the planets through more accurate relevant predictions of positional data, despite both being approximations to Newtonian truth.

More recent extensions of the content approach include the partial consequence account proposed by Gustavo Cevolani and Roberto Festa in 2020, which adapts the measure for non-monotonic logics by considering partial entailments—subsets of consequences that hold defeasibly—thus accommodating theories with exceptions or revisions common in scientific practice. This account preserves the core idea of content-based approximation while extending its applicability to dynamic systems, ensuring that verisimilitude reflects incremental improvements in partial alignments with truth.
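As a purely arithmetical illustration of the estimative formula, the sketch below plugs in made-up $\text{tr}$ values; the numbers are placeholders, not outputs of any actual inductive method.

```python
def ver(tr_h_and_t, tr_h_and_not_t):
    """Estimated verisimilitude: the average of the truth-content estimate
    and the complement of the falsity-content estimate."""
    return (tr_h_and_t + (1 - tr_h_and_not_t)) / 2

print(ver(0.9, 0.1))   # 0.9  -- aligns well with the truth, barely overlaps its negation
print(ver(1.0, 0.0))   # 1.0  -- full entailment of the truth
print(ver(0.0, 1.0))   # 0.0  -- complete contradiction of the truth
```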

Applications and Implications

Epistemological Role

In the context of fallible inquiry, verisimilitude addresses a central epistemological challenge: determining whether one theory is closer to the truth than another without direct access to the ultimate truth itself. The problem arises from the fallibilist assumption that all scientific theories are potentially false, even though science aims to approximate truth progressively. Without knowledge of the complete truth, direct comparison of theories' truthlikeness seems impossible, leading to a dilemma in which verisimilitude either becomes redundant (if truth is fully ascertainable) or inaccessible (if it is not).

One proposed solution uses empirical confirmation measures as proxies for verisimilitude, positing that theories with higher truthlikeness are more confirmable by evidence. For instance, expected truthlikeness can be estimated via Bayesian formulas such as

$$\mathbb{E}TL(A \mid e) = \sum_i TL(A \mid C_i) \times P(C_i \mid e),$$

where $TL$ denotes truthlikeness, $A$ is the theory, $e$ is the evidence, and $P$ is the probability distribution over the possible complete truths $C_i$. This approach allows epistemic agents to infer relative truthlikeness indirectly through observable confirmational success, thereby guiding theory choice in practice.

Graham Oddie has argued that verisimilitude serves as a key guide for scientific inquiry, underpinning epistemic optimism—the view that successful inquiry tends to increase truthlikeness over time, even amid falsifications. Verisimilitude complements Bayesian confirmation theory by emphasizing qualitative dimensions of progress, such as structural similarity to truth, which probabilistic measures alone may overlook. While Bayesianism excels at quantifying degrees of belief and evidential support, verisimilitude captures how successive theories can represent deeper aspects of reality, fostering a notion of non-probabilistic advancement.

A related challenge is the swamping problem in epistemic utility theory, where the value of outright truth overshadows finer gradations of truthlikeness, rendering intermediate approximations epistemically negligible. This is addressed through weighted utility functions that assign independent value to verisimilitude, ensuring that partial closeness to truth contributes meaningfully to epistemic worth beyond mere truth attainment. This epistemological framework ties back to Karl Popper's original optimism, which held that science progresses in verisimilitude despite the falsity of its theories, as bolder conjectures that survive testing approximate truth more closely than their predecessors.
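The expected-truthlikeness estimate is a probability-weighted average, so it can be computed directly once its ingredients are supplied. In the sketch below, the three candidate complete truths, their truthlikeness values for a theory A, and the posterior probabilities given the evidence are all invented numbers used only to show the arithmetic.

```python
# Hypothetical truthlikeness of theory A if each candidate C_i were the complete truth.
tl_given_c = {"C1": 0.8, "C2": 0.5, "C3": 0.2}
# Hypothetical posterior probabilities P(C_i | e) after observing evidence e.
posterior = {"C1": 0.6, "C2": 0.3, "C3": 0.1}

expected_tl = sum(tl_given_c[c] * posterior[c] for c in tl_given_c)
print(expected_tl)   # 0.65: the evidence-based estimate of A's truthlikeness
```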

Significance in Scientific Progress

Verisimilitude serves as a key metric for evaluating theory change in the philosophy of science, positing that scientific progress occurs when successor theories exhibit higher degrees of truthlikeness than their predecessors. This approach fits a fallibilist perspective, allowing for progress even among false theories by measuring their proximity to the complete truth through retained true content and minimized false assertions. In this framework, the advance from one theory to another is not merely empirical confirmation but an increase in overall similarity to reality, providing a normative standard for methodological decisions in scientific inquiry.

A representative example of this metric in action is the transition from Newtonian physics to Einsteinian relativity, where the latter achieves greater verisimilitude by preserving the accurate predictions of Newton's laws in low-velocity, weak-gravity regimes while correcting inaccuracies, such as the precession of planetary perihelia and the gravitational deflection of light. Einstein himself framed Newtonian mechanics as an approximation valid under special circumstances, thereby enhancing truthlikeness by incorporating additional true consequences without discarding the core empirical successes of the predecessor theory. This shift illustrates how verisimilitude captures cumulative progress, where new theories build upon and refine rather than wholly replace prior ones.

The axiological problem arises in balancing truth against verisimilitude when ranking theories, since false theories with high truthlikeness may possess greater cognitive value than simplistic true ones owing to their informational richness and proximity to the truth. For instance, a highly verisimilar false theory could outperform a minimally informative true theory in guiding further inquiry, raising the question of whether truth alone suffices as an epistemic goal or whether truthlikeness better reflects the instrumental aims of science. This tension highlights the need for measures that weigh both accuracy and content relevance in theory assessment.

Recent discussions, such as those by Darrell Rowbottom, explore verisimilitude's role in accounts of scientific progress, arguing that progress can occur without guaranteed increases in truthlikeness, thereby challenging strict verisimilitudinarian accounts while linking the concept to broader debates about the aims of science. Rowbottom contends that predictive enhancements and shifts in understanding may drive advancement independently of verisimilitude gains, yet he still engages with verisimilitude-style criteria to evaluate theoretical merit. More recent work extends verisimilitude to naturalistic metaphysics, arguing that metaphysical theories can achieve progress by increasing their truthlikeness and their alignment with scientific findings.

As for implications for realism, verisimilitude bolsters structural realism by permitting approximate truth in otherwise false theories, allowing realists to affirm continuity in scientific structures across theory changes without requiring literal truth in every claim. This compatibility addresses the pessimistic induction by emphasizing preserved relational truths over discarded entities. Critics contend that verisimilitude remains untestable in practice, since determining a theory's actual proximity to truth requires unattainable knowledge of the full truth, rendering it more a guiding ideal than a rigorous evaluative tool. Without reliable epistemic access, comparisons risk subjectivity, limiting its utility beyond motivational rhetoric in scientific practice.
