Law of thought
Introduction
The laws of thought, as described by logicians, are the basic rules that guide each individual's thinking and all rational discussion. These rules have been clarified and formulated by philosophers over a long period of time. In recent times, however, these classical laws have been questioned and, in some quarters, rejected.
According to the 1999 Cambridge Dictionary of Philosophy,[1] laws of thought are laws by which valid thought proceeds, or that justify valid inference, or to which all valid deduction is reducible. These laws of thought are to be applied without exception to any subject matter. The term, rarely used in exactly the same sense by different authors, has long been associated with three equally ambiguous expressions: the law of identity (ID), the law of contradiction (or non-contradiction; NC), and the law of excluded middle (EM). Sometimes, these three expressions are taken as propositions of formal ontology having the widest possible subject matter.
Beginning in the middle to late 1800s, these expressions have been used to denote propositions of Boolean algebra about classes: (ID) every class includes itself; (NC) every class is such that its intersection ("product") with its own complement is the null class; (EM) every class is such that its union ("sum") with its own complement is the universal class. More recently, the last two of the three expressions have been used in connection with the classical propositional logic and with quantified propositional logic. In both cases the law of non-contradiction involves the negation of the conjunction of something with its own negation, ¬(A∧¬A), and the law of excluded middle involves the disjunction of something with its own negation, A∨¬A. The expressions "law of non-contradiction" and "law of excluded middle" are also used for semantic principles of model theory concerning sentences and interpretations: (NC) under no interpretation is a given sentence both true and false, (EM) under any interpretation, a given sentence is either true or false.
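The three class-theoretic formulations above can be illustrated with a short Python sketch using sets; the ten-element universe and the particular class chosen here are arbitrary, not part of the original formulation:

```python
# Illustrating (ID), (NC), and (EM) as propositions of Boolean algebra
# about classes, modelled with Python sets. U is an arbitrary universe.
U = set(range(10))      # the universal class (universe of discourse)
A = {1, 2, 3}           # an arbitrary class
complement_A = U - A    # the complement of A within U

# (ID) every class includes itself
assert A.issubset(A)

# (NC) the intersection ("product") of a class with its own
# complement is the null class
assert A & complement_A == set()

# (EM) the union ("sum") of a class with its own complement
# is the universal class
assert A | complement_A == U
```

The same checks succeed for any choice of `U` and any subset `A` of it, which is what the "laws" assert.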
The expression "laws of thought" gained added prominence through its use by Boole (1815–64) to denote theorems of his "algebra of logic." In fact, Boole named his second logic book An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities (1854). Modern logicians, in almost unanimous disagreement with Boole, consider this expression a misnomer: none of the above propositions classed under "laws of thought" are explicitly about thought per se, a mental phenomenon studied by psychology, nor do they involve explicit reference to a thinker or knower as would be the case in pragmatics or in epistemology. The distinction between psychology (as a study of mental phenomena) and logic (as a study of valid inference) is widely accepted.
Three traditional laws
History
Hamilton offers a history of the three traditional laws that begins with Plato, proceeds through Aristotle, and ends with the schoolmen of the Middle Ages; in addition he offers a fourth law (see entry below, under Hamilton):
- The Principles of Contradiction: The principles of Contradiction and of Excluded Middle can both be traced back to Plato, by whom they were enounced and frequently applied; though it was not till long after, that either of them obtained a distinctive appellation. To take the principle of Contradiction first. This law Plato frequently employs, but the most remarkable passages are found in the Phœdo, in the Sophista, and in the fourth and seventh books of the Republic. [Hamilton LECT. V. LOGIC. 62]
- Law of Excluded Middle: The law of Excluded Middle between two contradictories remounts, as I have said, also to Plato, though the Second Alcibiades, the dialogue in which it is most clearly expressed, must be admitted to be spurious. It is also in the fragments of Pseudo-Archytas, to be found in Stobæus. [Hamilton LECT. V. LOGIC. 65]
- Hamilton further observes that "It is explicitly and emphatically enounced by Aristotle in many passages both of his Metaphysics (l. iii. (iv.) c.7.) and of his Analytics, both Prior (l. i. c. 2) and Posterior (l. i. c. 4). In the first of these, he says: "It is impossible that there should exist any medium between contradictory opposites, but it is necessary either to affirm or to deny everything of everything." [Hamilton LECT. V. LOGIC. 65]
- Law of Identity: The law of Identity was not explicated as a coordinate principle till a comparatively recent period. The earliest author in whom I have found this done, is Antonius Andreas, a scholar of Scotus, who flourished at the end of the thirteenth and beginning of the fourteenth century. The schoolman, in the fourth book of his Commentary of Aristotle's Metaphysics – a commentary which is full of the most ingenious and original views – not only asserts to the law of Identity a coordinate dignity with the law of Contradiction, but against Aristotle, he maintains that the principle of Identity is the one absolutely first. The formula in which Andreas expressed it was Ens est ens. Subsequently to this author, the question concerning the relative priority of the two laws of Identity and of Contradiction became one much agitated in the schools; though there were also found some who asserted to the law of Excluded Middle this supreme rank." [From Hamilton LECT. V. LOGIC. 65–66]
Law of identity
The law of identity: 'Whatever is, is.'[2]
For all a: a = a.
Regarding this law, Aristotle wrote:
First then this at least is obviously true, that the word "be" or "not be" has a definite meaning, so that not everything will be "so and not so". Again, if "man" has one meaning, let this be "two-footed animal"; by having one meaning I understand this:—if "man" means "X", then if A is a man "X" will be what "being a man" means for him. (It makes no difference even if one were to say a word has several meanings, if only they are limited in number; for to each definition there might be assigned a different word. For instance, we might say that "man" has not one meaning but several, one of which would have one definition, viz. "two-footed animal", while there might be also several other definitions if only they were limited in number; for a peculiar name might be assigned to each of the definitions. If, however, they were not limited but one were to say that the word has an infinite number of meanings, obviously reasoning would be impossible; for not to have one meaning is to have no meaning, and if words have no meaning our reasoning with one another, and indeed with ourselves, has been annihilated; for it is impossible to think of anything if we do not think of one thing; but if this is possible, one name might be assigned to this thing.)
— Aristotle, Metaphysics, Book IV, Part 4 (translated by W.D. Ross)[3]
Law of non-contradiction
The law of non-contradiction (alternately the 'law of contradiction'[2]): 'Nothing can both be and not be.'[2]
In other words: "two or more contradictory statements cannot both be true in the same sense at the same time": ¬(A∧¬A).
In the words of Aristotle, that "one cannot say of something that it is and that it is not in the same respect and at the same time". As an illustration of this law, he wrote:
It is impossible, then, that "being a man" should mean precisely not being a man, if "man" not only signifies something about one subject but also has one significance ... And it will not be possible to be and not to be the same thing, except in virtue of ambiguity, just as if one whom we call "man", and others were to call "not-man"; but the point in question is not this, whether the same thing can at the same time be and not be a man in name, but whether it can be in fact.
— Aristotle, Metaphysics, Book IV, Part 4 (translated by W.D. Ross)[3]
Law of excluded middle
The law of excluded middle: 'Everything must either be or not be.'[2]
In accordance with the law of excluded middle or excluded third, for every proposition, either its positive or negative form is true: A∨¬A.
Regarding the law of excluded middle, Aristotle wrote:
But on the other hand there cannot be an intermediate between contradictories, but of one subject we must either affirm or deny any one predicate. This is clear, in the first place, if we define what the true and the false are. To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true; so that he who says of anything that it is, or that it is not, will say either what is true or what is false.
— Aristotle, Metaphysics, Book IV, Part 7 (translated by W.D. Ross)[3]
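The propositional forms of the three traditional laws given above can be checked mechanically by exhaustion over truth values; the following is a minimal Python sketch, not drawn from any of the quoted authors:

```python
# Truth-table check that the propositional renderings of the three
# traditional laws hold under every assignment of truth values.
def identity(a):
    return a == a                 # A = A

def non_contradiction(a):
    return not (a and not a)      # ¬(A ∧ ¬A)

def excluded_middle(a):
    return a or not a             # A ∨ ¬A

for a in (True, False):
    assert identity(a)
    assert non_contradiction(a)
    assert excluded_middle(a)
```

Each function returns True for both possible values of `a`, i.e. each formula is a tautology of classical two-valued logic.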
Rationale
As the quotations from Hamilton above indicate, in particular the "law of identity" entry, the rationale for and expression of the "laws of thought" have been fertile ground for philosophical debate since Plato. Today the debate—about how we "come to know" the world of things and our thoughts—continues; for examples of rationales see the entries below.
Philosophers
Plato
In one of Plato's Socratic dialogues, Socrates described three principles derived from introspection:
First, that nothing can become greater or less, either in number or magnitude, while remaining equal to itself ... Secondly, that without addition or subtraction there is no increase or diminution of anything, but only equality ... Thirdly, that what was not before cannot be afterwards, without becoming and having become.
— Plato, Theaetetus, 155[4]
Indian logic
The law of non-contradiction is found in ancient Indian logic as a meta-rule in the Shrauta Sutras, the grammar of Pāṇini,[5] and the Brahma Sutras attributed to Vyasa. It was later elaborated on by medieval commentators such as Madhvacharya.[6]
Locke
John Locke claimed that the principles of identity and contradiction (i.e. the law of identity and the law of non-contradiction) were general ideas and only occurred to people after considerable abstract, philosophical thought. He characterized the principle of identity as "Whatsoever is, is." He stated the principle of contradiction as "It is impossible for the same thing to be and not to be." To Locke, these were not innate or a priori principles.[7]
Leibniz
Gottfried Leibniz formulated two additional principles, either or both of which may sometimes be counted as a law of thought: the principle of sufficient reason and the identity of indiscernibles.
In Leibniz's thought, as well as generally in the approach of rationalism, the latter two principles are regarded as clear and incontestable axioms. They were widely recognized in European thought of the 17th, 18th, and 19th centuries, although they were subject to greater debate in the 19th century. As turned out to be the case with the law of continuity, these two laws involve matters which, in contemporary terms, are subject to much debate and analysis (respectively on determinism and extensionality). Leibniz's principles were particularly influential in German thought. In France, the Port-Royal Logic was less swayed by them. Hegel quarrelled with the identity of indiscernibles in his Science of Logic (1812–1816).
Schopenhauer
Four laws
"The primary laws of thought, or the conditions of the thinkable, are four: – 1. The law of identity [A is A]. 2. The law of contradiction. 3. The law of exclusion; or excluded middle. 4. The law of sufficient reason." (Thomas Hughes, The Ideal Theory of Berkeley and the Real World, Part II, Section XV, Footnote, p. 38)
Arthur Schopenhauer discussed the laws of thought and tried to demonstrate that they are the basis of reason. He listed them in the following way in his On the Fourfold Root of the Principle of Sufficient Reason, §33:
- A subject is equal to the sum of its predicates, or a = a.
- No predicate can be simultaneously attributed and denied to a subject, or a ≠ ~a.
- Of every two contradictorily opposite predicates one must belong to every subject.
- Truth is the reference of a judgment to something outside it as its sufficient reason or ground.
Also:
The laws of thought can be most intelligibly expressed thus:
- Everything that is, exists.
- Nothing can simultaneously be and not be.
- Each and every thing either is or is not.
- Of everything that is, it can be found why it is.
There would then have to be added only the fact that once for all in logic the question is about what is thought and hence about concepts and not about real things.
— Schopenhauer, Manuscript Remains, Vol. 4, "Pandectae II", §163
To show that they are the foundation of reason, he gave the following explanation:
Through a reflection, which I might call a self-examination of the faculty of reason, we know that these judgments are the expression of the conditions of all thought and therefore have these as their ground. Thus by making vain attempts to think in opposition to these laws, the faculty of reason recognizes them as the conditions of the possibility of all thought. We then find that it is just as impossible to think in opposition to them as it is to move our limbs in a direction contrary to their joints. If the subject could know itself, we should know those laws immediately, and not first through experiments on objects, that is, representations (mental images).
— Schopenhauer, On the Fourfold Root of the Principle of Sufficient Reason, §33
Schopenhauer's four laws can be schematically presented in the following manner:
- A is A.
- A is not not-A.
- X is either A or not-A.
- If A then B (A implies B).
Two laws
Later, in 1844, Schopenhauer claimed that the four laws of thought could be reduced to two. In the ninth chapter of the second volume of The World as Will and Representation, he wrote:
It seems to me that the doctrine of the laws of thought could be simplified if we were to set up only two, the law of excluded middle and that of sufficient reason. The former thus: "Every predicate can be either confirmed or denied of every subject." Here it is already contained in the "either, or" that both cannot occur simultaneously, and consequently just what is expressed by the laws of identity and contradiction. Thus these would be added as corollaries of that principle which really says that every two concept-spheres must be thought either as united or as separated, but never as both at once; and therefore, even although words are joined together which express the latter, these words assert a process of thought which cannot be carried out. The consciousness of this infeasibility is the feeling of contradiction. The second law of thought, the principle of sufficient reason, would affirm that the above attributing or refuting must be determined by something different from the judgment itself, which may be a (pure or empirical) perception, or merely another judgment. This other and different thing is then called the ground or reason of the judgment. So far as a judgment satisfies the first law of thought, it is thinkable; so far as it satisfies the second, it is true, or at least in the case in which the ground of a judgment is only another judgment it is logically or formally true.[8]
Boole
The title of George Boole's 1854 treatise on logic, An Investigation of the Laws of Thought, indicates an alternate path. The laws are now incorporated into an algebraic representation of his "laws of the mind", honed over the years into modern Boolean algebra.
Rationale: How the "laws of the mind" are to be distinguished
Boole begins his chapter I "Nature and design of this Work" with a discussion of what characteristic distinguishes, generally, "laws of the mind" from "laws of nature":
- "The general laws of Nature are not, for the most part, immediate objects of perception. They are either inductive inferences from a large body of facts, the common truth in which they express, or, in their origin at least, physical hypotheses of a causal nature. ... They are in all cases, and in the strictest sense of the term, probable conclusions, approaching, indeed, ever and ever nearer to certainty, as they receive more and more of the confirmation of experience. ..."
Contrasted with this are what he calls "laws of the mind": Boole asserts these are known in their first instance, without need of repetition:
- "On the other hand, the knowledge of the laws of the mind does not require as its basis any extensive collection of observations. The general truth is seen in the particular instance, and it is not confirmed by the repetition of instances. ... we not only see in the particular example the general truth, but we see it also as a certain truth – a truth, our confidence in which will not continue to increase with increasing experience of its practical verification." (Boole 1854:4)
Boole's signs and their laws
Boole begins with the notion of "signs" representing "classes", "operations" and "identity":
- "All the operations of Language, as an instrument of reasoning, may be conducted by a system of signs composed of the following elements
- "1st Literal symbols as x, y, etc representing things as subjects of our conceptions,
- "2nd Signs of operation, as +, −, × standing for those operations of the mind by which conceptions of things are combined or resolved so as to form new conceptions involving the same elements,
- "3rd The sign of identity, =.
- And these symbols of Logic are in their use subject to definite laws, partly agreeing with and partly differing from the laws of the corresponding symbols in the science of Algebra. (Boole 1854:27)
Boole then clarifies what a "literal symbol" e.g. x, y, z,... represents—a name applied to a collection of instances into "classes". For example, "bird" represents the entire class of feathered, winged, warm-blooded creatures. For his purposes he extends the notion of class to represent membership of "one", or "nothing", or "the universe" i.e. totality of all individuals:
- "Let us then agree to represent the class of individuals to which a particular name or description is applicable, by a single letter, as z. ... By a class is usually meant a collection of individuals, to each of which a particular name or description may be applied; but in this work the meaning of the term will be extended so as to include the case in which but a single individual exists, answering to the required name or description, as well as the cases denoted by the terms "nothing" and "universe," which as "classes" should be understood to comprise respectively 'no beings,' 'all beings.'" (Boole 1854:28)
He then defines what the string of symbols e.g. xy means [modern logical &, conjunction]:
- "Let it further be agreed, that by the combination xy shall be represented that class of things to which the names or descriptions represented by x and y are simultaneously applicable. Thus, if x alone stands for "white things," and y for "sheep," let xy stand for 'white Sheep;'" (Boole 1854:28)
Given these definitions he now lists his laws with their justification plus examples (derived from Boole):
- (1) xy = yx [commutative law]
- "If x represents 'estuaries,' and y 'rivers,' the expressions xy and yx will indifferently represent 'rivers that are estuaries,' or 'estuaries that are rivers.'"
- (2) xx = x, alternately x² = x [absolute identity of meaning, Boole's "fundamental law of thought" cf page 49]
- "Thus 'good, good' men, is equivalent to 'good' men".
Logical OR: Boole defines the "collecting of parts into a whole, or separat[ing] a whole into its parts" (Boole 1854:32). Here the connective "and" is used disjunctively, as is "or"; he presents a commutative law (3) and a distributive law (4) for the notion of "collecting". The notion of separating a part from the whole he symbolizes with the "−" operation; he defines a commutative (5) and distributive law (6) for this notion:
- (3) y + x = x + y [commutative law]
- "Thus the expression 'men and women' is ... equivalent with the expression 'women and men.' Let x represent 'men,' y, 'women' and let + stand for 'and' and 'or' ..."
- (4) z(x + y) = zx + zy [distributive law]
- z = "European" (x = "men", y = "women"): European men and women = European men and European women
- (5) x − y = −y + x [commutation law: separating a part from the whole]
- "All men (x) except Asiatics (y)" is represented by x − y. "All states (x) except monarchical states (y)" is represented by x − y
- (6) z(x − y) = zx − zy [distributive law]
Lastly there is the notion of "identity", symbolized by "=". This allows for two axioms: (axiom 1) equals added to equals results in equals; (axiom 2) equals subtracted from equals results in equals.
- (7) Identity ("is", "are") e.g. x = y + z, "stars" = "suns" and "the planets"
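Since Boole interprets his symbols arithmetically, laws (1) through (6) can be verified over the two values 0 and 1 that satisfy his fundamental law xx = x. The sketch below does exactly this, reading xy as ordinary multiplication and + and − as ordinary arithmetic; it is an illustration, not Boole's own notation:

```python
from itertools import product

# Check Boole's laws (1)-(6) over {0, 1}, with xy read as ordinary
# multiplication and +/− as ordinary arithmetic.
for x, y, z in product((0, 1), repeat=3):
    assert x * y == y * x                  # (1) commutative law
    assert x * x == x                      # (2) fundamental law, xx = x
    assert y + x == x + y                  # (3) commutative law for "+"
    assert z * (x + y) == z * x + z * y    # (4) distributive law
    assert x - y == -y + x                 # (5) commutation for "−"
    assert z * (x - y) == z * x - z * y    # (6) distributive law for "−"
```

All six assertions hold for every combination of 0s and 1s, as they do for ordinary numbers generally; what is special about 0 and 1 is only law (2).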
Nothing "0" and Universe "1": He observes that the only two numbers that satisfy xx = x are 0 and 1. He then observes that 0 represents "Nothing" while "1" represents the "Universe" (of discourse).
The logical NOT: Boole defines the contrary (logical NOT) as follows (his Proposition III):
- "If x represent any class of objects, then will 1 − x represent the contrary or supplementary class of objects, i.e. the class including all objects which are not comprehended in the class x" (Boole 1854:48)
- If x = "men" then "1 − x" represents the "universe" less "men", i.e. "not-men".
The notion of a particular as opposed to a universal: To represent the notion of "some men", Boole writes the small letter "v" before the predicate symbol: "vx" represents "some men".
Exclusive- and inclusive-OR: Boole does not use these modern names, but he defines them as x(1 − y) + y(1 − x) and x + y(1 − x), respectively; these agree with the formulas derived by means of modern Boolean algebra.[9]
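That Boole's two expressions coincide with the modern operations can be confirmed by evaluating them over 0 and 1; the following sketch compares them against Python's bitwise XOR and OR:

```python
from itertools import product

# Boole's expressions for exclusive and inclusive "or", evaluated
# over {0, 1} with xy read as multiplication, compared with the
# modern Boolean operations.
for x, y in product((0, 1), repeat=2):
    exclusive = x * (1 - y) + y * (1 - x)
    inclusive = x + y * (1 - x)
    assert exclusive == (x ^ y)   # modern exclusive-OR
    assert inclusive == (x | y)   # modern inclusive-OR
```

Note that the correction terms (1 − y) and (1 − x) are what keep the sums from exceeding 1 when both operands are 1.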
Boole derives the law of contradiction
Armed with his "system" he derives the "principle of [non]contradiction" starting with his law of identity: x² = x. He subtracts x from both sides (his axiom 2), yielding x² − x = 0. He then factors out the x: x(x − 1) = 0. For example, if x = "men" then 1 − x represents NOT-men. So we have an example of the "Law of Contradiction":
- "Hence: x(1 − x) will represent the class whose members are at once "men" and "not-men," and the equation [x(1 − x) = 0] thus expresses the principle, that a class whose members are at the same time men and not men does not exist. In other words, that it is impossible for the same individual to be at the same time a man and not a man. ... this is identically that "principle of contradiction" which Aristotle has described as the fundamental axiom of all philosophy. ... what has been commonly regarded as the fundamental axiom of metaphysics is but the consequence of a law of thought, mathematical in its form." (more on how this "dichotomy" comes about at Boole 1854:49ff)
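Boole's derivation can be retraced numerically: over the two values 0 and 1 that satisfy x·x = x, each algebraic step holds, and the conclusion x(1 − x) = 0 says the class of things that are both x and not-x is empty. A minimal sketch:

```python
# Retracing Boole's derivation of the law of (non)contradiction
# over the two values satisfying his fundamental law x*x == x.
for x in (0, 1):
    assert x * x == x          # starting point: x² = x
    assert x * x - x == 0      # subtract x from both sides (axiom 2)
    assert x * (x - 1) == 0    # factor out x
    assert x * (1 - x) == 0    # equivalently: x AND not-x is empty
```

The last line is the algebraic form of ¬(A∧¬A): with 1 − x as the complement of x, the "product" of a class with its complement is the null class 0.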
Boole defines the notion "domain (universe) of discourse"
This notion is found throughout Boole's "Laws of Thought" e.g. 1854:28, where the symbol "1" (the integer 1) is used to represent "Universe" and "0" to represent "Nothing", and in far more detail later (pages 42ff):
- " Now, whatever may be the extent of the field within which all the objects of our discourse are found, that field may properly be termed the universe of discourse. ... Furthermore, this universe of discourse is in the strictest sense the ultimate subject of the discourse."
In his chapter "The Predicate Calculus" Kleene observes that the specification of the "domain" of discourse is "not a trivial assumption, since it is not always clearly satisfied in ordinary discourse ... in mathematics likewise, logic can become pretty slippery when no D [domain] has been specified explicitly or implicitly, or the specification of a D [domain] is too vague" (Kleene 1967:84).
Hamilton
Hamilton specifies four laws—the three traditional plus the fourth "Law of Reason and Consequent"—as follows:
- "XIII. The Fundamental Laws of Thought, or the conditions of the thinkable, as commonly received, are four: – 1. The Law of Identity; 2. The Law of Contradiction; 3. The Law of Exclusion or of Excluded Middle; and, 4. The Law of Reason and Consequent, or of Sufficient Reason."[10]
Rationale: "Logic is the science of the Laws of Thought as Thought"
Hamilton opines that thought comes in two forms: "necessary" and "contingent" (Hamilton 1860:17). With regard to the "necessary" form he defines its study as "logic": "Logic is the science of the necessary forms of thought" (Hamilton 1860:17). To define "necessary" he asserts that it implies the following four "qualities":[11]
- (1) "determined or necessitated by the nature of the thinking subject itself ... it is subjectively, not objectively, determined;
- (2) "original and not acquired;
- (3) "universal; that is, it cannot be that it necessitates on some occasions, and does not necessitate on others.
- (4) "it must be a law; for a law is that which applies to all cases without exception, and from which a deviation is ever, and everywhere, impossible, or, at least, unallowed. ... This last condition, likewise, enables us to give the most explicit enunciation of the object-matter of Logic, in saying that Logic is the science of the Laws of Thought as Thought, or the science of the Formal Laws of Thought, or the science of the Laws of the Form of Thought; for all these are merely various expressions of the same thing."
Hamilton's 4th law: "Infer nothing without ground or reason"
Here is Hamilton's fourth law from his LECT. V. LOGIC. 60–61:
- "Par. XVII. Law of Sufficient Reason, or of Reason and Consequent:
- "XVII. The thinking of an object, as actually characterized by positive or by negative attributes, is not left to the caprice of Understanding – the faculty of thought; but that faculty must be necessitated to this or that determinate act of thinking by a knowledge of something different from, and independent of, the process of thinking itself. This condition of our understanding is expressed by the law, as it is called, of Sufficient Reason (principium Rationis Sufficientis); but it is more properly denominated the law of Reason and Consequent (principium Rationis et Consecutionis). That knowledge by which the mind is necessitated to affirm or posit something else, is called the logical reason, ground, or antecedent; that something else which the mind is necessitated to affirm or posit, is called the logical consequent; and the relation between the reason and consequent, is called the logical connection or consequence. This law is expressed in the formula – Infer nothing without a ground or reason.1
- Relations between Reason and Consequent: The relations between Reason and Consequent, when comprehended in a pure thought, are the following:
- 1. When a reason is explicitly or implicitly given, then there must exist a consequent; and, vice versa, when a consequent is given, there must also exist a reason.
- 2. Where there is no reason there can be no consequent; and, vice versa, where there is no consequent (either implicitly or explicitly) there can be no reason. That is, the concepts of reason and of consequent, as reciprocally relative, involve and suppose each other.
- 1 See Schulze, Logik, §19, and Krug, Logik, §20, – ED.
- The logical significance of this law: The logical significance of the law of Reason and Consequent lies in this, – That in virtue of it, thought is constituted into a series of acts all indissolubly connected; each necessarily inferring the other. Thus it is that the distinction and opposition of possible, actual and necessary matter, which has been introduced into Logic, is a doctrine wholly extraneous to this science.
Welton
In the 19th century, the Aristotelian laws of thought, as well as sometimes the Leibnizian laws of thought, were standard material in logic textbooks, and J. Welton described them in this way:
The Laws of Thought, Regulative Principles of Thought, or Postulates of Knowledge, are those fundamental, necessary, formal and a priori mental laws in agreement with which all valid thought must be carried on. They are a priori, that is, they result directly from the processes of reason exercised upon the facts of the real world. They are formal; for as the necessary laws of all thinking, they cannot, at the same time, ascertain the definite properties of any particular class of things, for it is optional whether we think of that class of things or not. They are necessary, for no one ever does, or can, conceive them reversed, or really violate them, because no one ever accepts a contradiction which presents itself to his mind as such.
— Welton, A Manual of Logic, 1891, Vol. I, p. 30.
Russell
The sequel to Bertrand Russell's 1903 "The Principles of Mathematics" became the three-volume work named Principia Mathematica (hereafter PM), written jointly with Alfred North Whitehead. Immediately after he and Whitehead published PM he wrote his 1912 "The Problems of Philosophy". His "Problems" reflects "the central ideas of Russell's logic".[12]
The Principles of Mathematics (1903)
In his 1903 "Principles" Russell defines Symbolic or Formal Logic (he uses the terms synonymously) as "the study of the various general types of deduction" (Russell 1903:11). He asserts that "Symbolic Logic is essentially concerned with inference in general" (Russell 1903:12) and with a footnote indicates that he does not distinguish between inference and deduction; moreover he considers induction "to be either disguised deduction or a mere method of making plausible guesses" (Russell 1903:11). This opinion will change by 1912, when he deems his "principle of induction" to be on a par with the various "logical principles" that include the "Laws of Thought".
In his Part I "The Indefinables of Mathematics" Chapter II "Symbolic Logic" Part A "The Propositional Calculus" Russell reduces deduction ("propositional calculus") to 2 "indefinables" and 10 axioms:
- "17. We require, then, in the propositional calculus, no indefinable except the two kinds of implication [simple aka "material"[13] and "formal"]-- remembering, however, that formal implication is a complex notion, whose analysis remains to be undertaken. As regards our two indefinables, we require certain indemonstrable propositions, which hitherto I have not succeeded in reducing to less than ten" (Russell 1903:15).
From these he claims to be able to derive the law of excluded middle and the law of contradiction but does not exhibit his derivations (Russell 1903:17). Subsequently, he and Whitehead honed these "primitive principles" and axioms into the nine found in PM, and here Russell actually exhibits these two derivations at ❋1.71 and ❋3.24, respectively.
The Problems of Philosophy (1912)
By 1912 Russell in his "Problems" pays close attention to "induction" (inductive reasoning) as well as "deduction" (inference), both of which represent just two examples of "self-evident logical principles" that include the "Laws of Thought."[2]
Induction principle: Russell devotes a chapter to his "induction principle". He describes it as coming in two parts: firstly, as a repeated collection of evidence (with no failures of association known) and therefore increasing probability that whenever A happens B follows; secondly, in a fresh instance when indeed A happens, B will indeed follow: i.e. "a sufficient number of cases of association will make the probability of a fresh association nearly a certainty, and will make it approach certainty without limit."[14]
He then collects all the cases (instances) of the induction principle (e.g. case 1: A1 = "the rising sun", B1 = "the eastern sky"; case 2: A2 = "the setting sun", B2 = "the western sky"; case 3: etc.) into a "general" law of induction which he expresses as follows:
- "(a) The greater the number of cases in which a thing of the sort A has been found associated with a thing of the sort B, the more probable it is (if no cases of failure of association are known) that A is always associated with B;
- "(b) Under the same circumstances, a sufficient number of cases of the association of A with B will make it nearly certain that A is always associated with B, and will make this general law approach certainty without limit."[15]
He makes an argument that this induction principle can neither be disproved nor proved by experience,[16] the failure of disproof occurring because the law deals with probability of success rather than certainty; the failure of proof occurring because of unexamined cases that are yet to be experienced, i.e. they will occur (or not) in the future. "Thus we must either accept the inductive principle on the ground of its intrinsic evidence, or forgo all justification of our expectations about the future".[17]
Inference principle: Russell then offers an example that he calls a "logical" principle. Twice previously he has asserted this principle, first as the 4th axiom in his 1903[18] and then as his first "primitive proposition" of PM: "❋1.1 Anything implied by a true elementary proposition is true".[19] Now he repeats it in his 1912 in a refined form: "Thus our principle states that if this implies that, and this is true, then that is true. In other words, 'anything implied by a true proposition is true', or 'whatever follows from a true proposition is true'."[20] He places great stress upon this principle, stating that "this principle is really involved – at least, concrete instances of it are involved – in all demonstrations".[2]
He does not call his inference principle modus ponens, but his formal, symbolic expression of it in PM (2nd edition 1927) is that of modus ponens; modern logic calls this a "rule" as opposed to a "law".[21] In the quotation that follows, the symbol "⊦" is the "assertion-sign" (cf PM:92); "⊦" means "it is true that", therefore "⊦p" where "p" is "the sun is rising" means "it is true that the sun is rising", alternately "The statement 'The sun is rising' is true". The "implication" symbol "⊃" is commonly read "if p then q", or "p implies q" (cf PM:7). Embedded in this notion of "implication" are two "primitive ideas", "the Contradictory Function" (symbolized by NOT, "~") and "the Logical Sum or Disjunction" (symbolized by OR, "⋁"); these appear as "primitive propositions" ❋1.7 and ❋1.71 in PM (PM:97). With these two "primitive propositions" Russell defines "p ⊃ q" to have the formal logical equivalence "NOT-p OR q" symbolized by "~p ⋁ q":
- "Inference. The process of inference is as follows: a proposition "p" is asserted, and a proposition "p implies q" is asserted, and then as a sequel the proposition "q" is asserted. The trust in inference is the belief that if the two former assertions are not in error, the final assertion is not in error. Accordingly, whenever, in symbols, where p and q have of course special determination
- " "⊦p" and "⊦(p ⊃ q)"
- " have occurred, then "⊦q" will occur if it is desired to put it on record. The process of the inference cannot be reduced to symbols. Its sole record is the occurrence of "⊦q". ... An inference is the dropping of a true premiss; it is the dissolution of an implication".[22]
In other words, in a long "string" of inferences, after each inference we can detach the "consequent" "⊦q" from the symbol string "⊦p, ⊦(p⊃q)" and not carry these symbols forward in an ever-lengthening string of symbols.
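The detachment step can be sketched in modern code (an illustration only, not PM's notation), using PM's definition of "p ⊃ q" as "~p ⋁ q": whenever "⊦p" and "⊦(p ⊃ q)" both hold, the detached "⊦q" holds as well.

```python
# A sketch (not from PM) of Russell's reading of "p implies q" as
# "NOT-p OR q", and of the detachment ("modus ponens") step built on it.

def implies(p: bool, q: bool) -> bool:
    """PM's definition: (p ⊃ q) =def (NOT-p OR q)."""
    return (not p) or q

# Detachment: from asserted "p" and "p ⊃ q" we may assert "q".
# Check over all truth-values that the rule never asserts a falsehood.
for p in (True, False):
    for q in (True, False):
        if p and implies(p, q):   # both premisses asserted (true)
            assert q              # the detached conclusion "⊦q" is true
```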
The three traditional "laws" (principles) of thought: Russell goes on to assert other principles, of which the above logical principle is "only one". He asserts that "some of these must be granted before any argument or proof becomes possible. When some of them have been granted, others can be proved." Of these various "laws" he asserts that "for no very good reason, three of these principles have been singled out by tradition under the name of 'Laws of Thought'."[2] And these he lists as follows:
- "(1) The law of identity: 'Whatever is, is.'
- (2) The law of contradiction: 'Nothing can both be and not be.'
- (3) The law of excluded middle: 'Everything must either be or not be.'"[2]
Rationale: Russell opines that "the name 'laws of thought' is ... misleading, for what is important is not the fact that we think in accordance with these laws, but the fact that things behave in accordance with them; in other words, the fact that when we think in accordance with them we think truly."[23] But he rates this a "large question" and expands it in two following chapters where he begins with an investigation of the notion of "a priori" (innate, built-in) knowledge, and ultimately arrives at his acceptance of the Platonic "world of universals". In his investigation he comes back now and then to the three traditional laws of thought, singling out the law of contradiction in particular: "The conclusion that the law of contradiction is a law of thought is nevertheless erroneous ... [rather], the law of contradiction is about things, and not merely about thoughts ... a fact concerning the things in the world."[24]
His argument begins with the statement that the three traditional laws of thought are "samples of self-evident principles". For Russell the matter of "self-evident"[25] merely introduces the larger question of how we derive our knowledge of the world. He cites the "historic controversy ... between the two schools called respectively 'empiricists' [Locke, Berkeley, and Hume] and 'rationalists' [Descartes and Leibniz]" (these philosophers are his examples).[26] Russell asserts that the rationalists "maintained that, in addition to what we know by experience, there are certain 'innate ideas' and 'innate principles', which we know independently of experience";[26] to eliminate the possibility of babies having innate knowledge of the "laws of thought", Russell renames this sort of knowledge a priori. And while Russell agrees with the empiricists that "Nothing can be known to exist except by the help of experience",[27] he also agrees with the rationalists that some knowledge is a priori, specifically "the propositions of logic and pure mathematics, as well as the fundamental propositions of ethics".[28]
This question of how such a priori knowledge can exist directs Russell to an investigation into the philosophy of Immanuel Kant, which after careful consideration he rejects as follows:
- "... there is one main objection which seems fatal to any attempt to deal with the problem of a priori knowledge by his method. The thing to be accounted for is our certainty that the facts must always conform to logic and arithmetic. ... Thus Kant's solution unduly limits the scope of a priori propositions, in addition to failing in the attempt at explaining their certainty".[29]
His objections to Kant then lead Russell to accept the 'theory of ideas' of Plato, "in my opinion ... one of the most successful attempts hitherto made";[30] he asserts that "... we must examine our knowledge of universals ... where we shall find that [this consideration] solves the problem of a priori knowledge."[30]
Principia Mathematica (Part I: 1910 first edition, 1927 2nd edition)
[edit]Russell's "Problems" does not offer an example of a "minimum set" of principles that would apply to human reasoning, both inductive and deductive. PM does provide an example set (but not the minimum; see Post below) that is sufficient for deductive reasoning by means of the propositional calculus (as opposed to reasoning by means of the more-complicated predicate calculus)—a total of 8 principles at the start of "Part I: Mathematical Logic". Each of the formulas ❋1.2 to ❋1.6 is a tautology (true no matter what the truth-value of p, q, r ... is). What is missing in PM's treatment is a formal rule of substitution;[31] in his 1921 PhD thesis Emil Post fixes this deficiency (see Post below). In what follows the formulas are written in a more modern format than that used in PM; the names are as given in PM.
- ❋1.1 Anything implied by a true elementary proposition is true.
- ❋1.2 Principle of Tautology: (p ⋁ p) ⊃ p
- ❋1.3 Principle of [logical] Addition: q ⊃ (p ⋁ q)
- ❋1.4 Principle of Permutation: (p ⋁ q) ⊃ (q ⋁ p)
- ❋1.5 Associative Principle: p ⋁ (q ⋁ r) ⊃ q ⋁ (p ⋁ r) [redundant]
- ❋1.6 Principle of [logical] Summation: (q ⊃ r) ⊃ ((p ⋁ q) ⊃ (p ⋁ r))
- ❋1.7 [logical NOT]: If p is an elementary proposition, ~p is an elementary proposition.
- ❋1.71 [logical inclusive OR]: If p and q are elementary propositions, (p ⋁ q) is an elementary proposition.
Russell sums up these principles with "This completes the list of primitive propositions required for the theory of deduction as applied to elementary propositions" (PM:97).
Starting from these eight principles and a tacit use of the "rule" of substitution, PM then derives over a hundred different formulas, among which are the Law of Excluded Middle ❋2.1 and the Law of Contradiction ❋3.24 (this latter requiring a definition of logical AND, symbolized by the modern ⋀: (p ⋀ q) =def ~(~p ⋁ ~q); PM uses the "dot" symbol ▪ for logical AND).
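That each of ❋1.2–❋1.6 is a tautology, and that the two derived laws are too, can be checked mechanically by exhausting truth-values, as in this sketch (the formula names follow PM; everything else is illustrative):

```python
from itertools import product

# A sketch verifying that PM's formulas ❋1.2–❋1.6 are tautologies, using
# PM's definitions p ⊃ q =def ~p ⋁ q and p ⋀ q =def ~(~p ⋁ ~q), and that
# the derived laws of excluded middle and contradiction are tautologies too.

def IMP(p, q): return (not p) or q                 # material implication
def AND(p, q): return not ((not p) or (not q))     # PM's definition of ⋀

formulas = {
    "❋1.2 Tautology":    lambda p, q, r: IMP(p or p, p),
    "❋1.3 Addition":     lambda p, q, r: IMP(q, p or q),
    "❋1.4 Permutation":  lambda p, q, r: IMP(p or q, q or p),
    "❋1.5 Association":  lambda p, q, r: IMP(p or (q or r), q or (p or r)),
    "❋1.6 Summation":    lambda p, q, r: IMP(IMP(q, r), IMP(p or q, p or r)),
    "Excluded middle":   lambda p, q, r: (not p) or p,        # ~p ⋁ p
    "Non-contradiction": lambda p, q, r: not AND(p, not p),   # ~(p ⋀ ~p)
}

for name, f in formulas.items():
    # true no matter what the truth-values of p, q, r are
    assert all(f(p, q, r) for p, q, r in product([True, False], repeat=3)), name
```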
Ladd-Franklin
[edit]At about the same time (1912) that Russell and Whitehead were finishing the last volume of their Principia Mathematica and Russell was publishing his "The Problems of Philosophy", at least two logicians (Louis Couturat, Christine Ladd-Franklin) were asserting that the two "laws" (principles) of "contradiction" and "excluded middle" are necessary to specify "contradictories"; Ladd-Franklin renamed these the principles of exclusion and exhaustion. The following appears as a footnote on page 23 of Couturat 1914:
- "As Mrs. LADD-FRANKLIN has truly remarked (BALDWIN, Dictionary of Philosophy and Psychology, article "Laws of Thought"), the principle of contradiction is not sufficient to define contradictories; the principle of excluded middle must be added which equally deserves the name of principle of contradiction. This is why Mrs. LADD-FRANKLIN proposes to call them respectively the principle of exclusion and the principle of exhaustion, inasmuch as, according to the first, two contradictory terms are exclusive (the one of the other); and, according to the second, they are exhaustive (of the universe of discourse)."
In other words, the creation of "contradictories" represents a dichotomy, i.e. the "splitting" of a universe of discourse into two classes (collections) that have the following two properties: they are (i) mutually exclusive and (ii) (collectively) exhaustive.[32] This means, no one thing (drawn from the universe of discourse) can simultaneously be a member of both classes (law of non-contradiction), but [and] every single thing (in the universe of discourse) must be a member of one class or the other (law of excluded middle).
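Ladd-Franklin's two principles can be illustrated with a toy universe of discourse (the class names here are invented for the example):

```python
# A sketch of a dichotomy of a universe of discourse into a class and its
# contradictory; the universe and classes are illustrative only.

universe = {"sparrow", "bat", "beetle", "flying squirrel"}
mammals = {"bat", "flying squirrel"}
non_mammals = universe - mammals        # the contradictory class (complement)

# Principle of exclusion (non-contradiction): nothing is in both classes.
assert mammals & non_mammals == set()
# Principle of exhaustion (excluded middle): everything is in one or the other.
assert mammals | non_mammals == universe
```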
Post
[edit]As part of his PhD thesis "Introduction to a general theory of elementary propositions" Emil Post proved "the system of elementary propositions of Principia [PM]" i.e. its "propositional calculus"[33] described by PM's first 8 "primitive propositions" to be consistent. The definition of "consistent" is this: that by means of the deductive "system" at hand (its stated axioms, laws, rules) it is impossible to derive (display) both a formula S and its contradictory ~S (i.e. its logical negation) (Nagel and Newman 1958:50). To demonstrate this formally, Post had to add a primitive proposition to the 8 primitive propositions of PM, a "rule" that specified the notion of "substitution" that was missing in the original PM of 1910.[34]
Given PM's tiny set of "primitive propositions" and the proof of their consistency, Post then proves that this system ("propositional calculus" of PM) is complete, meaning every possible truth table can be generated in the "system":
- "...every truth system has a representation in the system of Principia while every complete system, that is one having all possible truth tables, is equivalent to it. ... We thus see that complete systems are equivalent to the system of Principia not only in the truth table development but also postulationally. As other systems are in a sense degenerate forms of complete systems we can conclude that no new logical systems are introduced."[35]
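The core idea behind this completeness claim – that every possible truth table is representable – can be sketched by building, for any binary truth table, a matching formula out of ~ and ⋁ alone (a disjunctive-normal-form construction; this is an illustration of the idea, not Post's own proof):

```python
from itertools import product

# A sketch of the idea behind Post's completeness result: every truth table
# can be realized by a formula built from ~ and ⋁ alone (here via a
# disjunctive normal form, with AND expressed as ~(~a ⋁ ~b)).

def AND(a, b): return not ((not a) or (not b))

def realize(table):
    """Build a formula (as a function) matching a given binary truth table;
    table maps each (p, q) pair to the desired truth-value."""
    def formula(p, q):
        out = False
        for (vp, vq), val in table.items():
            if val:  # one disjunct per true row: (p agrees with vp) AND (q with vq)
                out = out or AND(p if vp else not p, q if vq else not q)
        return out
    return formula

rows = list(product([True, False], repeat=2))
# Check that all 16 possible binary truth tables are generated in the "system".
for values in product([True, False], repeat=4):
    table = dict(zip(rows, values))
    f = realize(table)
    assert all(f(p, q) == table[(p, q)] for p, q in rows)
```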
A minimum set of axioms? The matter of their independence
[edit]Then there is the matter of "independence" of the axioms. In his commentary before Post 1921, van Heijenoort states that Paul Bernays solved the matter in 1918 (but published in 1926) – the formula ❋1.5 (Associative Principle: p ⋁ (q ⋁ r) ⊃ q ⋁ (p ⋁ r)) can be proved from the other four. As to what system of "primitive propositions" is the minimum, van Heijenoort states that the matter was "investigated by Zylinski (1925), Post himself (1941), and Wernick (1942)", but van Heijenoort does not answer the question.[36]
Model theory versus proof theory: Post's proof
[edit]Kleene (1967:33) observes that "logic" can be "founded" in two ways, first as a "model theory", or second by a formal "proof" or "axiomatic theory"; "the two formulations, that of model theory and that of proof theory, give equivalent results" (Kleene 1967:33). This foundational choice, and the equivalence of the two formulations, also applies to predicate logic (Kleene 1967:318).
In his introduction to Post 1921, van Heijenoort observes that both the "truth-table and the axiomatic approaches are clearly presented".[37] This matter of a proof of consistency both ways (by a model theory, by axiomatic proof theory) comes up in the more-congenial version of Post's consistency proof that can be found in Nagel and Newman 1958 in their chapter V, "An Example of a Successful Absolute Proof of Consistency". In the main body of the text they use a model to achieve their consistency proof (they also state that the system is complete but do not offer a proof) (Nagel & Newman 1958:45–56). But their text promises the reader a proof that is axiomatic rather than relying on a model, and in the Appendix they deliver this proof, based on the notion of a division of formulas into two classes K1 and K2 that are mutually exclusive and exhaustive (Nagel & Newman 1958:109–113).
Gödel
[edit]The (restricted) "first-order predicate calculus" is the "system of logic" that adds to the propositional logic (cf Post, above) the notion of "subject-predicate" i.e. the subject x is drawn from a domain (universe) of discourse and the predicate is a logical function f(x): x as subject and f(x) as predicate (Kleene 1967:74). Although Gödel's proof involves the same notion of "completeness" as does the proof of Post, Gödel's proof is far more difficult; what follows is a discussion of the axiom set.
Completeness
[edit]Kurt Gödel in his 1930 doctoral dissertation "The completeness of the axioms of the functional calculus of logic" proved that in this "calculus" (i.e. restricted predicate logic with or without equality) every valid formula is "either refutable or satisfiable"[38] or, what amounts to the same thing, every valid formula is provable and therefore the logic is complete. Here is Gödel's definition of whether or not the "restricted functional calculus" is "complete":
- "... whether it actually suffices for the derivation of every logico-mathematical proposition, or where, perhaps, it is conceivable that there are true propositions (which may be provable by means of other principles) that cannot be derived in the system under consideration."[39]
The first-order predicate calculus
[edit]This particular predicate calculus is "restricted to the first order". To the propositional calculus it adds two special symbols that symbolize the generalizations "for all" and "there exists (at least one)", which extend over the domain of discourse. The calculus requires only the first notion "for all", but typically includes both: (1) the notion "for all x" or "for every x" is variously symbolized in the literature as (x), ∀x, Πx, etc., and (2) the notion "there exists (at least one x)" is variously symbolized as Ex, ∃x.
The restriction is that the generalization "for all" applies only to the variables (objects x, y, z etc. drawn from the domain of discourse) and not to functions, in other words the calculus will permit ∀xf(x) ("for all creatures x, x is a bird") but not ∀f∀x(f(x)) [but if "equality" is added to the calculus it will permit ∀f:f(x); see below under Tarski]. Example:
- Let the predicate "function" f(x) be "x is a mammal", and the subject-domain (or universe of discourse) (cf Kleene 1967:84) be the category "bats":
- The formula ∀xf(x) yields the truth value "truth" (read: "For all instances x of objects 'bats', 'x is a mammal'" is a truth, i.e. "All bats are mammals");
- But if the instances of x are drawn from a domain "winged creatures" then ∀xf(x) yields the truth value "false" (i.e. "For all instances x of 'winged creatures', 'x is a mammal'" has a truth value of "falsity"; "Flying insects are mammals" is false);
- However, over the broad domain of discourse "all winged creatures" (e.g. "birds" + "flying insects" + "flying squirrels" + "bats") we can assert ∃xf(x) (read: "There exists at least one winged creature that is a mammal"); it yields a truth value of "truth", because the objects x can come from the category "bats" and perhaps "flying squirrels" (depending on how we define "winged"). But the formula yields "falsity" when the domain of discourse is restricted to "flying insects" or "birds" or both "insects" and "birds".
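The example can be sketched with the quantifiers read over explicit finite domains (the domains and predicate follow the bats/mammals illustration above; the code itself is not from any of the cited texts):

```python
# A sketch of the bats/mammals example: ∀x f(x) and ∃x f(x) read as
# Python's all() and any() over an explicit domain of discourse.

def is_mammal(x):                      # the predicate-function f(x)
    return x in {"bat", "flying squirrel"}

bats = ["bat"]
winged = ["bird", "flying insect", "flying squirrel", "bat"]

assert all(is_mammal(x) for x in bats)        # "All bats are mammals": truth
assert not all(is_mammal(x) for x in winged)  # over "winged creatures": falsity
assert any(is_mammal(x) for x in winged)      # some winged creature is a mammal
assert not any(is_mammal(x) for x in ["bird", "flying insect"])  # restricted domain
```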
Kleene remarks that "the predicate calculus (without or with equality) fully accomplishes (for first order theories) what has been conceived to be the role of logic" (Kleene 1967:322).
A new axiom: Aristotle's dictum – "the maxim of all and none"
[edit]The first half of this axiom – "the maxim of all" – will appear as the first of two additional axioms in Gödel's axiom set. The "dictum of Aristotle" (dictum de omni et nullo) is sometimes called "the maxim of all and none" but is really two "maxims" that assert: "What is true of all (members of the domain) is true of some (members of the domain)", and "What is not true of all (members of the domain) is true of none (of the members of the domain)".
The "dictum" appears in Boole 1854 in a couple of places:
- "It may be a question whether that formula of reasoning, which is called the dictum of Aristotle, de Omni et nullo, expresses a primary law of human reasoning or not; but it is no question that it expresses a general truth in Logic" (1854:4)
But later he seems to argue against it:[40]
- "[Some principles of] general principle of an axiomatic nature, such as the "dictum of Aristotle:" Whatsoever is affirmed or denied of the genus may in the same sense be affirmed or denied of any species included under that genus. ... either state directly, but in an abstract form, the argument which they are supposed to elucidate, and, so stating that argument, affirm its validity; or involve in their expression technical terms which, after definition, conduct us again to the same point, viz. the abstract statement of the supposed allowable forms of inference."
But the first half of this "dictum" (dictum de omni) is taken up by Russell and Whitehead in PM, and by Hilbert in his version (1927) of the "first order predicate logic"; his system includes a principle that Hilbert calls "Aristotle's dictum":[41]
- (x)f(x) → f(y)
This axiom also appears in the modern axiom set offered by Kleene (Kleene 1967:387), as his "∀-schema", one of two axioms (he calls them "postulates") required for the predicate calculus; the other being the "∃-schema" f(y) ⊃ ∃xf(x), which reasons from the particular f(y) to the existence of at least one subject x that satisfies the predicate f(x); both of these require adherence to a defined domain (universe) of discourse.
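Both schemata can be checked over a small finite domain by trying every predicate and every choice of y (a sketch only; Kleene's schemata are of course asserted for arbitrary domains):

```python
from itertools import product

# A sketch checking Kleene's two schemata over a finite domain:
# ∀-schema ("Aristotle's dictum"): ∀x f(x) ⊃ f(y), and
# ∃-schema: f(y) ⊃ ∃x f(x), for every predicate f and every y in the domain.

domain = [0, 1, 2]

def implies(p, q): return (not p) or q

# Every predicate on a 3-element domain corresponds to a subset of it.
for bits in product([True, False], repeat=len(domain)):
    f = dict(zip(domain, bits)).__getitem__   # the predicate-function f(x)
    forall = all(f(x) for x in domain)        # ∀x f(x)
    exists = any(f(x) for x in domain)        # ∃x f(x)
    for y in domain:
        assert implies(forall, f(y))          # what is true of all is true of any one
        assert implies(f(y), exists)          # from a particular to "there exists"
```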
Gödel's restricted predicate calculus
[edit]To supplement the four (down from five; see Post) axioms of the propositional calculus, Gödel 1930 adds the dictum de omni as the first of two additional axioms. Both this "dictum" and the second axiom, he claims in a footnote, derive from Principia Mathematica. Indeed, PM includes both as
- ❋10.1 ⊦ ∀xf(x) ⊃ f(y) ["I.e. what is true in all cases is true in any one case"[42] ("Aristotle's dictum", rewritten in more-modern symbols)]
- ❋10.2 ⊦∀x(p ⋁ f(x)) ⊃ (p ⋁ ∀xf(x)) [rewritten in more-modern symbols]
The latter asserts that the logical sum (i.e. ⋁, OR) of a simple proposition p and a predicate ∀xf(x) implies the logical sum of each separately. But PM derives both of these from six primitive propositions of ❋9, which in the second edition of PM is discarded and replaced with four new "Pp" (primitive principles) of ❋8 (see in particular ❋8.2), and Hilbert derives the first from his "logical ε-axiom" in his 1927 and does not mention the second. How Hilbert and Gödel came to adopt these two as axioms is unclear.
Also required are two more "rules" of detachment ("modus ponens") applicable to predicates.
Tarski
[edit]Alfred Tarski in his 1946 (2nd edition) "Introduction to Logic and to the Methodology of the Deductive Sciences" cites a number of what he deems "universal laws" of the sentential calculus, three "rules" of inference, and one fundamental law of identity (from which he derives four more laws). The traditional "laws of thought" are included in his long listing of "laws" and "rules". His treatment is, as the title of his book suggests, limited to the "Methodology of the Deductive Sciences".
Rationale: In his introduction (2nd edition) he observes that what began with an application of logic to mathematics has been widened to "the whole of human knowledge":
- "[I want to present] a clear idea of that powerful trend of contemporary thought which is concentrated about modern logic. This trend arose originally from the somewhat limited task of stabilizing the foundations of mathematics. In its present phase, however, it has much wider aims. For it seeks to create a unified conceptual apparatus which would supply a common basis for the whole of human knowledge.".[43]
Law of identity (Leibniz's law, equality)
[edit]To add the notion of "equality" to the "propositional calculus" (this new notion not to be confused with logical equivalence, symbolized by ↔, ⇄, "if and only if (iff)", "biconditional", etc.) Tarski (cf pp. 54–57) symbolizes what he calls "Leibniz's law" with the symbol "=". This extends the domain (universe) of discourse and the types of functions to numbers and mathematical formulas (Kleene 1967:148ff, Tarski 1946:54ff).
In a nutshell: given that "x has every property that y has", we can write "x = y", and this formula will have a truth value of "truth" or "falsity". Tarski states Leibniz's law as follows:
- I. Leibniz' Law: x = y, if, and only if, x has every property which y has, and y has every property which x has.
He then derives some other "laws" from this law:
- II. Law of Reflexivity: Everything is equal to itself: x = x. [Proven at PM ❋13.15]
- III. Law of Symmetry: If x = y, then y = x. [Proven at PM ❋13.16]
- IV. Law of Transitivity: If x = y and y = z, then x = z. [Proven at PM ❋13.17]
- V. If x = z and y = z, then x = y. [Proven at PM ❋13.172]
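Leibniz's law and the derived laws II–IV can be sketched over a toy universe with a finite stock of properties (the objects and properties here are invented for the example; with so few properties, distinct objects may come out "equal", i.e. indiscernible):

```python
# A sketch of Tarski's Leibniz law over a toy universe: x = y iff
# x and y satisfy exactly the same properties. Objects and properties
# are illustrative assumptions, not Tarski's own examples.

objects = ["a", "b", "c"]
properties = [
    lambda x: x in ("a", "b"),   # an arbitrary property
    lambda x: x == "c",          # another arbitrary property
]

def eq(x, y):
    """Leibniz's law: x = y iff every property of x is a property of y,
    and every property of y is a property of x."""
    return all(P(x) == P(y) for P in properties)

# The derived laws hold for every choice of x, y, z:
for x in objects:
    assert eq(x, x)                              # II. reflexivity: x = x
    for y in objects:
        if eq(x, y):
            assert eq(y, x)                      # III. symmetry
        for z in objects:
            if eq(x, y) and eq(y, z):
                assert eq(x, z)                  # IV. transitivity
```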
Principia Mathematica defines the notion of equality as follows (in modern symbols); note that the generalization "for all" extends over predicate-functions f( ):
- ❋13.01. x = y =def ∀f:(f(x) → f(y)) ("This definition states that x and y are to be called identical when every predicate function satisfied by x is satisfied by y"[44])
Hilbert 1927:467 adds only two axioms of equality: the first is x = x, the second is (x = y) → (f(x) → f(y)); the "for all f" is missing (or implied). Gödel 1930 defines equality similarly to PM ❋13.01. Kleene 1967 adopts the two from Hilbert 1927 plus two more (Kleene 1967:387).
George Spencer-Brown
[edit]George Spencer-Brown in his 1969 "Laws of Form" (LoF) begins by taking as given that "we cannot make an indication without drawing a distinction". This therefore presupposes the law of excluded middle. He then goes on to define two axioms, which describe how distinctions (a "boundary") and indications (a "call") work:
- Axiom 1. The law of calling: The value of a call made again is the value of the call.
- Axiom 2. The law of crossing: The value of a (boundary) crossing made again is not the value of the crossing.
These axioms bear a resemblance to the "law of identity" and the "law of non-contradiction" respectively. However, the law of identity is proven as a theorem (Theorem 4.5 in "Laws of Form") within the framework of LoF. In general, LoF can be reinterpreted as First-order logic, propositional logic, and second-order logic by assigning specific interpretations to the symbols and values of LoF.
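Under one conventional Boolean reading of LoF (the marked state as "true", calling as juxtaposition, crossing as negation – an interpretive assumption for illustration, not Spencer-Brown's own notation), the two axioms can be sketched as:

```python
# A sketch (one conventional Boolean reading) of LoF's two axioms, with
# the marked state as True and the unmarked state as False.

MARKED, UNMARKED = True, False

def call(a, b): return a or b    # "calling": juxtaposition of two indications
def cross(a):   return not a     # "crossing": passing the boundary

# Axiom 1, law of calling: the value of a call made again is the value of the call.
assert call(MARKED, MARKED) == MARKED
# Axiom 2, law of crossing: a crossing made again is NOT the value of the
# crossing (two crossings cancel out).
assert cross(cross(UNMARKED)) == UNMARKED
assert cross(cross(MARKED)) == MARKED
```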
Contemporary developments
[edit]All of the above "systems of logic" are considered to be "classical", meaning propositions and predicate expressions are two-valued, with either the truth value "truth" or "falsity" but not both (Kleene 1967:8 and 83). While intuitionistic logic falls into the "classical" category, it objects to extending the "for all" operator to the Law of Excluded Middle; it allows instances of the "Law", but not its generalization to an infinite domain of discourse.
Intuitionistic logic
[edit]'Intuitionistic logic', sometimes more generally called constructive logic, refers to systems of symbolic logic that differ from the systems used for classical logic by more closely mirroring the notion of constructive proof. In particular, systems of intuitionistic logic do not assume the law of the excluded middle and double negation elimination, which are fundamental inference rules in classical logic.
Paraconsistent logic
[edit]'Paraconsistent logic' refers to so-called contradiction-tolerant logical systems in which a contradiction does not necessarily result in trivialism. In other words, the principle of explosion is not valid in such logics. Some (namely the dialetheists) argue that the law of non-contradiction is denied by dialetheic logic. They are motivated by certain paradoxes which seem to imply a limit of the law of non-contradiction, namely the liar paradox. In order to avoid a trivial logical system and still allow certain contradictions to be true, dialetheists will employ a paraconsistent logic of some kind.
Three-valued logic
[edit]TBD; cf Three-valued logic and "A Ternary Arithmetic and Logic".[45]
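As a sketch of what such a logic looks like, here are Kleene's strong three-valued connectives, under which the classical laws of excluded middle and non-contradiction are no longer tautologies (an illustration only; other three-valued systems differ):

```python
# A sketch of Kleene's strong three-valued truth tables, with a third
# value "U" (undefined) beside "T" and "F", ordered F < U < T.

ORDER = {"F": 0, "U": 1, "T": 2}

def NOT(a):    return {"T": "F", "U": "U", "F": "T"}[a]
def OR(a, b):  return max(a, b, key=ORDER.get)   # the stronger truth wins
def AND(a, b): return min(a, b, key=ORDER.get)   # the stronger falsity wins

# The classical "laws of thought" are no longer tautologies here:
assert OR("U", NOT("U")) == "U"         # excluded middle, p ⋁ ~p, fails for U
assert NOT(AND("U", NOT("U"))) == "U"   # non-contradiction, ~(p ⋀ ~p), also
assert OR("T", NOT("T")) == "T"         # but both still hold for T and F
```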
Modal propositional calculi
[edit](cf Kleene 1967:49): These "calculi" include the symbols ⎕A, meaning "A is necessary" and ◊A meaning "A is possible". Kleene states that:
- "These notions enter in domains of thinking where there are understood to be two different kinds of "truth", one more universal or compelling than the other ... A zoologist might declare that it is impossible that salamanders or any other living creatures can survive fire; but possible (though untrue) that unicorns exist, and possible (though improbable) that abominable snowmen exist."
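Kleene's two kinds of "truth" can be sketched with the standard possible-worlds reading (the worlds and propositions here are invented for the example): ⎕A holds when A is true in every world, ◊A when A is true in some world.

```python
# A sketch of ⎕ ("necessary") and ◊ ("possible") read over a set of
# possible worlds; the worlds and propositions are illustrative only.

worlds = ["w1", "w2", "w3"]
holds = {                                   # worlds in which each proposition is true
    "salamanders survive fire": set(),      # true in no world: impossible
    "unicorns exist": {"w2"},               # true in some world: possible, untrue here
}

def necessary(A): return all(w in holds[A] for w in worlds)   # ⎕A
def possible(A):  return any(w in holds[A] for w in worlds)   # ◊A

assert not possible("salamanders survive fire")
assert possible("unicorns exist") and not necessary("unicorns exist")
```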
Fuzzy logic
[edit]'Fuzzy logic' is a form of many-valued logic; it deals with reasoning that is approximate rather than fixed and exact.
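A sketch using the standard (Zadeh) connectives – min for AND, max for OR, 1 − a for NOT; the proposition and its degree are invented for the example – shows how the classical laws hold only to a degree:

```python
# A sketch of the standard (Zadeh) fuzzy connectives: truth-values range
# over [0, 1] rather than {0, 1}.

def NOT(a):    return 1.0 - a
def OR(a, b):  return max(a, b)
def AND(a, b): return min(a, b)

warm = 0.75                          # "the room is warm", to degree 0.75
assert OR(warm, NOT(warm)) == 0.75   # A ⋁ ~A falls short of full truth (1.0)
assert AND(warm, NOT(warm)) == 0.25  # A ⋀ ~A exceeds full falsity (0.0)
```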
See also
[edit]References
[edit]- ^ "Laws of thought". The Cambridge Dictionary of Philosophy. Robert Audi, Editor, Cambridge: Cambridge UP. p. 489.
- ^ a b c d e f g h Russell 1912:72, 1997 edition.
- ^ a b c "Aristotle – Metaphysics – Book 4".
- ^ "Theaetetus, by Plato". The University of Adelaide Library. November 10, 2012. Archived from the original on 16 January 2014. Retrieved 14 January 2014.
- ^ Frits Staal (1988), Universals: Studies in Indian Logic and Linguistics, Chicago, pp. 109–28 (cf. Bull, Malcolm (1999), Seeing Things Hidden, Verso, p. 53, ISBN 1-85984-263-1)
- ^ Dasgupta, Surendranath (1991), A History of Indian Philosophy, Motilal Banarsidass, p. 110, ISBN 81-208-0415-5
- ^ "An Essay concerning Human Understanding". Retrieved January 14, 2014.
- ^ "The Project Gutenberg EBook of The World As Will And Idea (Vol. 2 of 3) by Arthur Schopenhauer". Project Gutenberg. June 27, 2012. Retrieved January 14, 2014.
- ^ cf Boole 1842:55–57. The modern definition of logical OR(x, y) in terms of logical AND &, and logical NOT ~ is: ~(~x & ~y). In Boolean algebra this is represented by: 1-((1-x)*(1-y)) = 1 – (1 – 1*x – y*1 + x*y) = x + y – x*y = x + y*(1-x), which is Boole's expression. The exclusive-OR can be checked in a similar manner.
- ^ William Hamilton, (Henry L. Mansel and John Veitch, ed.), 1860 Lectures on Metaphysics and Logic, in Two Volumes. Vol. II. Logic, Boston: Gould and Lincoln. Hamilton died in 1856, so this is an effort of his editors Mansel and Veitch. Most of the footnotes are additions and emendations by Mansel and Veitch – see the preface for background information.
- ^ Lecture II, "Logic – I. Its Definition – Historical Notices of Opinions Regarding its Object and Domain – II. Its Utility", Hamilton 1860:17–18
- ^ Commentary by John Perry in Russell 1912, 1997 edition page ix
- ^ The "simple" type of implication, aka material implication, is the logical connective commonly symbolized by → or ⊃, e.g. p ⊃ q. As a connective it yields the truth value of "falsity" only when the truth value of statement p is "truth" when the truth value of statement q is "falsity"; in 1903 Russell is claiming that "A definition of implication is quite impossible" (Russell 1903:14). He will overcome this problem in PM with the simple definition of (p ⊃ q) =def (NOT-p OR q).
- ^ Russell 1912:66, 1997 edition
- ^ Russell 1912:67, 1997 edition
- ^ Russell 1912:70, 1997 edition
- ^ Russell 1912:69, 1997 edition
- ^ "(4) A true hypothesis in an implication may be dropped, and the consequent asserted. This is a principle incapable of formal symbolic statement ..." (Russell 1903:16)
- ^ Principia Mathematica 1962 edition:94
- ^ Russell 1912:71, 1997 edition
- ^ For example, Alfred Tarski (Tarski 1946:47) distinguishes modus ponens as one of three "rules of inference" or "rules of proof", and he asserts that these "must not be mistaken for logical laws". The two other such "rules" are that of "definition" and "substitution"; see the entry under Tarski.
- ^ Principia Mathematica 2nd edition (1927), pages 8 and 9.
- ^ Russell 1997:73 reprint of Russell 1912
- ^ Russell 1997:88–89 reprint of Russell 1912
- ^ Russell asserts they are "self-evident" a couple times, at Russell 1912, 1967:72
- ^ a b Russell 1912, 1967:73
- ^ "That is to say, if we wish to prove that something of which we have no direct experience exists, we must have among our premises the existence of one or more things of which we have direct experience"; Russell 1912, 1967:75
- ^ Russell 1912, 1967:80–81
- ^ Russell 1912, 1967:87,88
- ^ a b Russell 1912, 1967:93
- ^ In his 1944 Russell's mathematical logic, Gödel observes that "What is missing, above all, is a precise statement of the syntax of the formalism. Syntactical considerations are omitted even in cases where they are necessary for the cogency of the proofs ... The matter is especially doubtful for the rule of substitution and of replacing defined symbols by their definiens ... it is chiefly the rule of substitution which would have to be proved" (Gödel 1944:124)
- ^ Cf Nagel and Newman 1958:110; in their treatment they apply this dichotomy to the collection of "sentences" (formulas) generated by a logical system such as that used by Kurt Gödel in his paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems". They call the two classes K1 and K2 and define logical contradiction ~S as follows: "A formula having the form ~S is placed in [class] K2, if S is in K1; otherwise, it is placed in K1".
- ^ In the introductory comments to Post 1921 written by van Heijenoort page 264, van H observes that "The propositional calculus, carved out of the system of Principia Mathematica, is systematically studied in itself, as a well-defined fragment of logic".
- ^ In a footnote he stated: "This operation is not explicitly stated in Principia but is pointed out to be necessary by Russell (1919, p. 151)." Indeed, Russell 1919:151 reads: "The legitimacy of substitutions of this kind has to be insured by means of a non-formal principle of inference."¹ Its footnote 1 states: "No such principle is enunciated in Principia Mathematica or in M. Nicod's article mentioned above. But this would seem to be an omission." (cf Russell 1919:151, referenced by Post 1921 in van Heijenoort 1967:267)
- ^ Post 1921 in van Heijenoort 1967:267
- ^ van Heijenoort's commentary before Post 1921 in van Heijenoort:264–265
- ^ van Heijenoort:264
- ^ cf introduction to Gödel 1930 by van Heijenoort 1967:582
- ^ Gödel 1930 in van Heijenoort 1967:582
- ^ cf Boole 1854:226, Chapter XV, "The Aristotelian Logic and Its Modern Extensions, Examined by the Method of this Treatise"
- ^ He derives this and a "principle of the excluded middle" ~((x)f(x))→(Ex)~f(x) from his "ε-axiom" cf Hilbert 1927 "The Foundations of Mathematics", cf van Heijenoort 1967:466
- ^ 1962 edition of PM 2nd edition 1927:139
- ^ Tarski 1946:ix, 1995 edition
- ^ cf PM ❋13 IDENTITY, "Summary of ❋13" PM 1927 edition 1962:168
- ^ "A Ternary Arithmetic and Logic" (PDF). Archived from the original (PDF) on 2011-03-04.
Further reading
[edit]- Aristotle, "The Categories", Harold P. Cooke (trans.), pp. 1–109 in Aristotle, Vol. 1, Loeb Classical Library, William Heinemann, London, UK, 1938.
- Aristotle, "On Interpretation", Harold P. Cooke (trans.), pp. 111–179 in Aristotle, Vol. 1, Loeb Classical Library, William Heinemann, London, UK, 1938.
- Aristotle, "Prior Analytics", Hugh Tredennick (trans.), pp. 181–531 in Aristotle, Vol. 1, Loeb Classical Library, William Heinemann, London, UK, 1938.
- Boole, George, An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities, Macmillan, 1854. Reprinted with corrections, Dover Publications, New York, NY, 1958.
- Louis Couturat, translated by Lydia Gillingham Robinson, 1914, The Algebra of Logic, The Open Court Publishing Company, Chicago and London. Downloaded via googlebooks.
- Gödel 1944 Russell's mathematical logic in Kurt Gödel: Collected Works Volume II, Oxford University Press, New York, NY, ISBN 978-0-19-514721-6
- Sir William Hamilton, 9th Baronet, (Henry L. Mansel and John Veitch, ed.), 1860 Lectures on Metaphysics and Logic, in Two Volumes. Vol. II. Logic, Boston: Gould and Lincoln. Downloaded via googlebooks.
- Stephen Cole Kleene, 1967, Mathematical Logic reprint 2002, Dover Publications, Inc., Mineola, NY, ISBN 0-486-42533-9 (pbk.)
- Ernest Nagel, James R. Newman, 1958, Gödel's Proof, New York University Press, LCCCN: 58-5610.
- Bertrand Russell, The Problems of Philosophy (1912), Oxford University Press, New York, 1997, ISBN 0-19-511552-X.
- Arthur Schopenhauer, The World as Will and Representation, Volume 2, Dover Publications, Mineola, New York, 1966, ISBN 0-486-21762-0
- Alfred Tarski, 1946 (second edition), republished 1995, Introduction to Logic and to the Methodology of Deductive Sciences translated by Olaf Helmer, Dover Publications, Inc., New York, ISBN 0-486-28462-X (pbk.)
- Jean van Heijenoort, 1967, From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, Harvard University Press, Cambridge, MA, ISBN 978-0-674-32449-7 (pbk)
- Emil Post, 1921, Introduction to a general theory of elementary propositions with commentary by van Heijenoort, pages 264ff
- David Hilbert, 1927, The foundations of mathematics with commentary by van Heijenoort, pages 464ff
- Kurt Gödel, 1930a, The completeness of the axioms of the functional calculus of logic with commentary by van Heijenoort, pages 592ff.
- Alfred North Whitehead, Bertrand Russell. Principia Mathematica, 3 vols, Cambridge University Press, 1910, 1912, and 1913. Second edition, 1925 (Vol. 1), 1927 (Vols 2, 3). Abridged as Principia Mathematica to *56 (2nd edition), Cambridge University Press, 1962, no LCCCN or ISBN
External links
[edit]- James Danaher, "The Laws of Thought", The Philosopher, Volume LXXXXII No. 1
- Peter Suber, "Non-Contradiction and Excluded Middle Archived 2011-08-04 at the Wayback Machine", Earlham College
Fundamental Laws
Law of Identity
The law of identity states that every entity is identical to itself, expressed verbally as "A is A" or, in more formal terms, that for any entity x, x = x.[5] This principle asserts the self-sameness of objects or propositions, serving as a foundational axiom in classical logic where it manifests symbolically as ∀x (x = x) in first-order logic or as P → P for propositions.[1] It establishes that no entity can differ from itself, forming the basis for consistent predication and reference in reasoning.[6] This law is traditionally associated with Aristotle, who presupposed it in his discussions of substance and predication, noting in Metaphysics (Book VII, chapter 17) that "a thing is itself" as a basic aspect of being (1041a16-18).[1] Although not stated explicitly as a standalone axiom in his texts, it emerges as essential to his ontology, where being is what it is independently of contradictory alterations.[6] In logical arguments, the law of identity plays a crucial role in preventing equivocation by ensuring that terms maintain fixed meanings throughout discourse, allowing for reliable substitution and avoidance of ambiguous shifts in reference.[7] For instance, the statement "a horse is a horse" illustrates how the principle affirms the inherent sameness of an entity, rejecting any notion that it could simultaneously be something else in the same respect.[1] Similarly, no entity, such as a particular tree, can be non-identical to itself, as this would undermine the stability required for any assertion or inference.[6] This self-referential consistency complements the law of non-contradiction by grounding the possibility of uniform truth values in propositions.[8] An early precursor to more nuanced developments of identity appears in Gottfried Wilhelm Leibniz's principle of the identity of indiscernibles, which posits that if two entities share all properties, they are identical, extending the basic law toward qualitative equivalence.[9]
Law of Non-Contradiction
The law of non-contradiction asserts that contradictory statements cannot both be true simultaneously, specifically that it is impossible for the same attribute to belong and not to belong at the same time to the same thing in the same respect.[10] In modern logical notation, this principle is formalized as ¬(P ∧ ¬P), where a proposition P and its negation ¬P cannot coexist in truth.[8] Aristotle provided the foundational formulation and defense of this law in Book IV (Gamma) of his Metaphysics, declaring it the most certain principle of all inquiry, self-evident and known per se without need for demonstration.[10] He argued that it serves as the bedrock for all scientific knowledge, as any attempt to refute it presupposes its validity—since denying it requires asserting something definite, which relies on distinguishing affirmation from negation.[8] Without this principle, Aristotle contended, no coherent thought or discourse could occur, as opposites would collapse into indistinguishability.[10] The implications of the law extend to the structure of rational argumentation, where its rejection would trivialize all claims by allowing any assertion and its opposite to hold equally, rendering meaningful debate impossible.[8] Aristotle illustrated this through his refutation of the Heraclitean doctrine of flux, which suggested that all things are in perpetual motion and thus both exist and do not exist simultaneously; he countered that such a view erodes the foundations of truth and stable reality, making knowledge unattainable.[10] In the development of symbolic logic, George Boole represented the law algebraically in his 1854 work An Investigation of the Laws of Thought as x(1 − x) = 0, capturing the idea that a class or proposition cannot both include and exclude the same element. This formulation underscores the law's role in excluding inconsistency from mental operations.
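In propositional form, these classical laws can be checked mechanically by exhausting truth values. The following sketch in Python (the `is_tautology` helper is our own illustration, not drawn from any of the cited works) confirms that each law holds under every two-valued assignment:

```python
from itertools import product

def is_tautology(formula, n_vars=1):
    """Return True if `formula` evaluates to True under every
    assignment of True/False to its variables."""
    return all(formula(*vals) for vals in product([True, False], repeat=n_vars))

law_of_identity   = lambda p: (not p) or p        # P -> P
non_contradiction = lambda p: not (p and not p)   # ¬(P ∧ ¬P)
excluded_middle   = lambda p: p or (not p)        # P ∨ ¬P

print(is_tautology(law_of_identity))    # True
print(is_tautology(non_contradiction))  # True
print(is_tautology(excluded_middle))    # True
```

A non-law, by contrast, fails the same test: `is_tautology(lambda p: p)` is False, since the bare proposition P is falsified when P is False.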
The law of non-contradiction complements the law of excluded middle by prohibiting the joint truth of opposites while the latter ensures no third option exists between a statement and its denial.[8]
Law of Excluded Middle
The law of excluded middle, also known as the principle of excluded middle, states that for any proposition P, either P is true or its negation ¬P is true, expressed formally as P ∨ ¬P.[11] This axiom asserts the exhaustiveness of truth values in classical logic, leaving no third possibility for propositions to be neither true nor false.[12] A representative example is the proposition "It is raining," where either it is raining or it is not raining, with no intermediate state in the classical framework.[13] Aristotle articulated this principle in Metaphysics Gamma (Book IV), specifically in chapter 7, as "Of any one subject, one thing must be either asserted or denied" (1011b24).[11] He presented it as following from the law of non-contradiction, arguing that since a proposition and its negation cannot both be true simultaneously, one must hold to avoid indeterminacy in predication.[8] This formulation underpins binary decision-making in Aristotelian logic, relying on the clarity of propositional terms established by the law of identity for meaningful assertion or denial.[8] The law plays a central role in classical proof techniques, such as proof by cases, where a conclusion is derived by considering the exhaustive alternatives P or ¬P and showing the result holds in either scenario.[12] It also supports reductio ad absurdum, or proof by contradiction, by enabling the inference that if assuming ¬P leads to an absurdity (contradicting non-contradiction), then P must be true, as the alternatives are exhaustive.[13]
Ancient Origins
Aristotelian Formulation
Aristotle first systematically articulated the fundamental laws of thought in his Metaphysics, particularly in Book IV (Gamma), where he presents them as indispensable axioms for investigating being qua being.[14] These principles—the law of identity, the law of non-contradiction, and the law of excluded middle—serve as the foundational conditions for rational discourse and ontological inquiry, asserting that reality and thought must conform to non-negotiable structures.[15] In the Prior Analytics, Aristotle presupposes these laws as the bedrock of his syllogistic system, enabling valid deductive inferences about substances and their attributes without explicitly restating them.[1] Central to Book Gamma is the law of non-contradiction, which Aristotle declares the most certain of all principles: "It is impossible for the same attribute to belong and not to belong at the same time to the same thing and in the same relation."[14] He treats these laws not as derived theorems but as self-evident axioms of being, known immediately to anyone capable of thought and impossible to deny without self-contradiction, as any attempt to refute them presupposes their validity.[15] The law of identity is implied in the stability of substances, where a thing's essence remains what it is, while the law of excluded middle ensures that between two contradictories, there lies no middle ground—one must hold or the other.[1] Aristotle situates these laws within a metaphysical defense against pre-Socratic paradoxes, particularly the Eleatic denial of plurality and change (as in Parmenides' monism) and the Heraclitean doctrine of perpetual flux, which he argues undermine coherent thought by allowing contradictions to coexist.[16] By refuting such views—showing, for instance, that claiming everything is in flux leads to absurdities like simultaneous affirmation and denial—he establishes the laws as safeguards for a stable ontology centered on substances.[15] These principles thus resolve 
paradoxes by affirming that being is determinate and knowable. The Aristotelian formulation profoundly shaped medieval scholasticism, where thinkers like Thomas Aquinas integrated them into Christian theology through extensive commentaries, viewing them as divine imprints on rational creation and essential for reconciling faith with reason.[17] Scholastics adopted these axioms in dialectical methods, ensuring logical rigor in disputations and metaphysical treatises throughout the period.[18]
Platonic Foundations
Plato's theory of Forms, as articulated in dialogues such as the Republic, implicitly embodies the law of identity by positing eternal, unchanging entities that are self-identical and serve as the paradigmatic essences of all particulars. The Form of the Good, for instance, is described as inherently good and unified, possessing its own essence without variation or division, such that it cannot be other than itself.[19] This self-predication—where the Form exemplifies its own attribute—ensures that true reality maintains a stable identity, contrasting with the flux of the sensible world and laying groundwork for logical principles that demand consistency in predication.[19] In the Sophist, Plato advances a conception of non-contradiction through his refutation of Parmenides' monistic doctrine that "all is one," introducing the possibility of "not-being" as a distinct category interwoven with being, rather than its outright denial. By distinguishing between different kinds of not-being—such as otherness or difference—Plato argues that a thing can be not-F without implying contradiction, provided it does not simultaneously assert and deny the same predicate in the same respect.[20] This analysis preserves logical coherence by rejecting absolute unity that would preclude multiplicity or negation, thereby prefiguring the principle that contradictory statements cannot both be true.[21] Plato's Theaetetus further anticipates the law of excluded middle in its exploration of knowledge, where definitions are tested exhaustively without allowance for intermediate states between true and false belief. 
Theaetetus proposes that knowledge is true belief, but Socrates demonstrates its inadequacy by showing that false beliefs exist as alternatives, with no third option possible: a belief is either true or false, and only the former, when justified, constitutes knowledge.[22] This binary framework underscores that for any proposition, it must be wholly affirmed or denied, excluding a middle ground of partial truth.[22] Central to Plato's prefiguration of these laws is his dialectical method, a rigorous process of question-and-answer that enforces logical consistency by systematically examining hypotheses and eliminating inconsistencies. In works like the Republic and Philebus, dialectic ascends from sensory opinions to the unchanging Forms through division and collection, ensuring that arguments cohere without self-contradiction or ambiguity.[23] This method prioritizes the ideal realm of Forms over empirical observation, distinguishing Plato's approach from later empirical emphases, as his logic derives from rational intuition of eternal truths rather than inductive generalization from particulars.[24] Plato's implicit logical foundations directly influenced Aristotle's more explicit formulations of the laws in the Metaphysics.[24]
Indian Logic Traditions
In ancient Indian philosophical traditions, particularly within the Nyāya school, the foundational text known as the Nyāya-sūtras, attributed to Akṣapāda Gautama and dated to approximately the 2nd century CE, systematizes principles of inference and epistemology that parallel aspects of the laws of thought without explicitly formulating them as such.[25] The concept of anupalabdhi, or non-apprehension, serves as a key pramāṇa (means of valid knowledge) in Nyāya logic, where the absence of perception implies the non-existence of an object under appropriate conditions, thereby upholding a functional equivalent to the law of non-contradiction by ensuring that contradictory apprehensions (such as perceiving and not perceiving the same entity simultaneously) are resolved through evidential absence.[26] This principle avoids logical inconsistency in knowledge validation, as Gautama's formulations emphasize that valid cognition must be free from internal contradiction, with anupalabdhi providing the inferential basis for negations that prevent paradoxical claims.[25] The Nyāya tradition integrates these ideas into its theory of pramāṇa, where valid knowledge presupposes coherence and identity in cognitions, though not as a rigid absolute. In parallel, Jainism's doctrine of syādvāda, articulated in texts like the Tattvārtha-sūtra (c. 2nd–5th century CE), introduces a nuanced approach to identity by positing that reality is multifaceted (anekāntavāda), allowing statements of identity to be conditionally true from specific viewpoints without violating overall logical consistency.[27] Under syādvāda, an entity can be described as "is" (asti), "is not" (nāsti), "may be" (syāt), or indescribable (avaktavya) in different contexts, thus refining the law of identity to accommodate perspectival relativity while maintaining non-contradictory discourse across viewpoints.[28] Buddhist Madhyamaka philosophy, as developed by Nāgārjuna in his Mūlamadhyamakakārikā (c. 
2nd century CE), critiques binary logics akin to the law of excluded middle through the tetralemma (catuṣkoṭi), which exhausts and rejects four possibilities: a proposition is true, false, both, or neither.[29] This framework challenges the assumption that every statement must be either affirmed or denied without remainder, arguing that such extremes lead to conceptual proliferation (prapañca) and fail to capture the empty (śūnya) nature of phenomena, thereby offering a deconstructive alternative to strict bivalence.[30] While Indian logics do not enumerate an explicit triad of laws mirroring Western formulations, functional equivalents emerge in Nyāya's inferential structure, particularly through hetu (reason or middle term) and vyāpti (pervasion or invariable concomitance), which ensure that arguments are non-contradictory and exhaustive in their coverage of possibilities.[31] Hetu provides the logical ground linking subject to predicate, while vyāpti guarantees universal relation without exceptions, effectively embodying principles of identity (in consistent relational terms), non-contradiction (via absence of counterexamples), and excluded middle (through comprehensive pervasion). These elements in Gautama's Nyāya-sūtras facilitate debate (vāda) and refutation while preserving epistemological rigor.[25] Such developments in Indian traditions exhibit structural parallels to Aristotelian laws, though without evidence of direct historical influence.[26]
Early Modern Developments
Lockean Interpretation
In An Essay Concerning Human Understanding (1689), Book IV, John Locke addresses the laws of thought as "maxims" that are self-evident propositions perceived immediately by the mind without requiring proof, forming the basis of intuitive knowledge.[32] He identifies key examples such as the law of identity ("whatsoever is, is") and the law of non-contradiction ("it is impossible for the same thing to be and not to be"), which the mind grasps through direct perception of agreement or disagreement among ideas.[32] These maxims, Locke argues, apply universally to all distinct ideas, enabling the mind to recognize sameness in itself and difference between ideas without mediation.[32] Building briefly on Aristotelian roots, Locke reframes these principles within an empiricist lens, emphasizing their discovery through mental reflection on experience rather than abstract deduction. Locke's treatment of the law of identity particularly stresses the role of clear and distinct ideas in preventing conceptual confusion. In Book II, Chapter XXVII ("Of Identity and Diversity"), he explains that identity consists in the continued existence of a thing's substance or organization, such as the same continued life in an animal, and that perceiving this requires distinguishing ideas precisely to avoid errors in application.[33] For instance, he notes that "the mind... distinguishes them into different ideas," ensuring that vague or mixed notions do not lead to mistaken identifications, like confusing personal identity (tied to consciousness) with mere bodily substance.[33] This clarity, Locke maintains, is essential for accurate reasoning, as "if this had been more carefully attended to, it might have prevented a great deal of that confusion" in philosophical disputes.[33] The laws of non-contradiction and excluded middle serve as foundational supports for demonstrative knowledge, where certainty arises from chains of intuitive steps. 
In Book IV, Chapter II ("Of the Degrees of our Knowledge"), Locke describes demonstration as relying on intermediate ideas, with each link verified intuitively, implicitly drawing on these laws to ensure logical consistency and exclude contradictory possibilities.[34] He affirms that "certainty depends so wholly on this intuition," positioning non-contradiction as prohibiting the same thing from being and not being simultaneously, while the excluded middle ensures that for any proposition, it must be true or false.[34] These principles underpin all valid proofs, as without them, no chain of reasoning could yield assured knowledge.[34] Despite critiquing the notion of innate ideas in Book I, Locke affirms the universality of these maxims, arguing they emerge from experience and reflection rather than being imprinted at birth.[35] He rejects innatism by observing that children and those with limited faculties grasp particular truths (e.g., recognizing a stranger is not their mother) before general maxims, yet he upholds the laws' self-evidence as applicable to all rational minds once ideas are formed.[32] This empiricist reconciliation influenced British empiricism profoundly, shaping successors like George Berkeley and David Hume in prioritizing experiential origins for logical principles while retaining their necessity for knowledge.[36]
Leibnizian Contributions
Gottfried Wilhelm Leibniz significantly refined the law of identity by articulating the principle of the identity of indiscernibles, which posits that no two distinct substances can share all their properties exactly. In his Monadology (1714), Leibniz argued that if two things possess identical attributes, they must be identical, as any difference would require a distinguishing feature; otherwise, they would violate the rational order of the universe.[37] This principle extends the classical law of identity beyond mere self-sameness (A = A) to a metaphysical criterion for distinguishing entities, implying that true diversity arises from unique qualitative differences.[38] Leibniz further developed this through the principle of sufficient reason, which he presented as an extension of the law of non-contradiction, stating that nothing occurs without a reason why it is so and not otherwise. He maintained that truths of reason, including identities, are grounded in the principle of contradiction—concepts that imply contradictions are impossible—while the sufficient reason ensures that contingent identities in the actual world align with divine rationality.[39] This linkage underscores Leibniz's view that the laws of thought govern not only logical consistency but also the explanatory structure of reality, where indiscernible identities would lack justification for their distinction.[38] In the context of possible worlds, Leibniz applied the contradiction principle to identity by defining impossible concepts as those containing inherent contradictions, such as a square circle, which cannot exist in any coherent world. 
He contended that identities across possible worlds must preserve essential properties without contradiction, ensuring that what is possible is compossible with the world's overall harmony; thus, indiscernibles cannot coexist in distinct forms without violating this modal consistency.[40] This framework prefigures later formalizations of identity in logic, emphasizing its role in demarcating possible from impossible configurations.[41] Leibniz's visionary work laid the groundwork for symbolic logic through his concept of the calculus ratiocinator, a universal method for mechanical reasoning using symbols to represent concepts and operations, akin to algebraic manipulation of identities. He envisioned this as a tool to resolve disputes by reducing logical identities to computable forms, where the law of identity would function as a foundational axiom in a characteristica universalis, enabling precise deductions without ambiguity.[42] Unlike John Locke's empiricist approach, which derived ideas of identity from sensory experience, Leibniz viewed the laws of thought, including identity, as innate dispositions activated through reason rather than purely discovered via observation. In his New Essays on Human Understanding (written 1704, published 1765), he critiqued Locke's tabula rasa by arguing that principles like identity are predisposed in the mind, unfolding through rational reflection to grasp necessary truths independent of empirical input.[43]
19th Century Expansions
Schopenhauer's Laws
Arthur Schopenhauer articulated a philosophical understanding of the laws of thought in his dissertation On the Fourfold Root of the Principle of Sufficient Reason (1813, revised 1847), proposing a fourfold formulation that includes the principles of identity, contradiction, excluded middle, and sufficient reason of knowledge. These form the bedrock of logical validity; apart from them, Schopenhauer held, there is "absolutely no other perfectly pure rational knowledge." The law of identity asserts that "everything is what it is," or more formally, that a subject equals the sum of its predicates (A = A), ensuring the consistency of concepts in thought.[44] The law of contradiction prohibits the simultaneous affirmation and negation of the same predicate in the same respect (A ≠ –A), preventing logical incoherence. The excluded middle states that every subject must possess one of two contradictory predicates (either A or not-A), while the principle of sufficient reason demands that every judgment requires a ground for its truth, linking it to the forms of the principle of sufficient reason—becoming (causality), knowing (logical inference), being (space and time), and acting (motivation). He classified judgments accordingly into a fourfold division: affirmative (corresponding to identity, where predicates are positively ascribed), negative (aligned with contradiction, denying predicates), exhaustive (tied to excluded middle, covering all possibilities without remainder), and exclusive (related to sufficient reason, ensuring no overlap in disjunctive grounds).[44] Schopenhauer integrated these ideas into his broader idealistic framework in The World as Will and Representation (1818), viewing the laws as core metalogical truths governing abstract reasoning, independent of empirical content.[45] Later, in Parerga and Paralipomena (1851), he suggested simplifying the doctrine by reducing the laws to two: the law of excluded middle and the principle of sufficient reason of knowledge.
Schopenhauer critiqued Aristotle's presentation of the laws, arguing that the ancient philosopher treated them as empirical generalizations derived from observation rather than as a priori forms inherent to the principle of sufficient reason. Aristotle's approach, in Schopenhauer's view, inadequately distinguished the "what" of logical structure from the "why" of its necessity, particularly in mathematics and metaphysics, where intuitive insight into synthetic a priori truths is essential. By reframing the laws as expressions of reason's subjective forms, Schopenhauer elevated them within his idealistic framework, where thought serves the will rather than constituting reality itself.[46] This perspective influenced later idealist thinkers, such as those exploring the subjective foundations of logic in post-Kantian philosophy, by emphasizing the primacy of sufficient reason over mere formal consistency.[47]
Boole's Algebraic Derivation
In his seminal 1854 work An Investigation of the Laws of Thought, George Boole formulated an algebraic framework for symbolic logic by deriving the fundamental laws governing logical operations from the observed principles of human reasoning, or "laws of the mind." These laws include properties such as commutativity (the order of combining classes does not affect the result) and associativity (grouping of operations is irrelevant), which Boole adapted from arithmetic to the context of logical classes. By treating logical symbols as representatives of classes of objects, he established a system where contradictions arise naturally from these principles, culminating in the equation x(1 − x) = 0, which encapsulates the impossibility of an object both belonging and not belonging to the same class. This derivation positions the laws of thought as formal conditions for consistent mental operations rather than empirical observations of external reality.[48] Boole defined the core operations in his system explicitly: the symbol + denotes the union of classes (logical "or," where the result includes all objects in either class), × (or juxtaposition) denotes the intersection (logical "and," where the result includes only objects common to both classes), and 1 − x denotes the complement of class x (objects in the universe not in x). These operations are interpreted within a "universe of discourse," a delimited aggregate of objects under consideration, which restricts the scope of variables to avoid meaningless generalizations and ensures the system's applicability to specific problems in reasoning. For instance, in analyzing a proposition about "men," the universe might be confined to human beings, preventing extraneous interpretations. This framework allows algebraic manipulation of logical expressions while preserving the intuitive meaning of class relations.
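Boole's class operations can be mirrored directly with finite sets. The following sketch in Python uses an illustrative universe of discourse and sample class of our own choosing (they are not from Boole's text) to exhibit his identities as set-theoretic facts:

```python
# Illustrative universe of discourse ("1" in Boole's notation) and a class x.
U = set(range(10))
x = {0, 2, 4, 6, 8}

not_x = U - x            # the complement, Boole's 1 - x
union = x | not_x        # logical "or": Boole's x + (1 - x)
intersection = x & not_x # logical "and": Boole's x(1 - x)

print(intersection == set())  # True: non-contradiction, x(1 - x) = 0
print(union == U)             # True: excluded middle, x + (1 - x) = 1
print((x & x) == x)           # True: Boole's "law of duality", x·x = x
```

Any class within any universe would satisfy the same three identities, which is the point of Boole's derivation: they follow from the form of the operations, not from the particular objects classified.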
A key aspect of Boole's derivation involves obtaining the law of non-contradiction from the idempotence law (x + x = x, reflecting that a class united with itself remains unchanged) and the absorption law (x + xy = x, indicating that adding a subclass to a class yields the original class). Starting from idempotence, which aligns with the mental operation of non-redundant classification, and absorption, which mirrors the irrelevance of subsets in inclusive operations, Boole demonstrates that any violation of non-contradiction would lead to inconsistent class formations incompatible with these principles. Thus, x(1 − x) = 0 emerges as a necessary consequence, enforcing that no class can overlap with its complement. This algebraic reduction highlights how the classical laws of thought can be expressed as identities in a binary system, bridging mental processes with mathematical rigor.[49] Central to Boole's approach is the distinction between the laws of thought, which prescribe the valid forms of reasoning independent of content, and the laws of things, which describe objective properties of the external world. The former are universal conditions for the mind's operations, derived from introspection and formal analysis, while the latter depend on empirical investigation and may not conform to logical ideals. Boole emphasized that his symbolic method investigates the former to enable precise treatment of probabilities and deductive chains, without presuming to dictate the latter. This separation ensures the system's neutrality, applicable to diverse domains from philosophy to science.[50]
Hamilton's Fourth Law
Sir William Hamilton, in his Lectures on Logic delivered during the 1837–1838 academic session and published posthumously in 1860, proposed a fourth fundamental law of thought to supplement the traditional triad of identity, non-contradiction, and excluded middle. This law, known as the principle of the consequent or the law of reason and consequent, is articulated as: "Infer nothing without ground or reason." Hamilton positioned logic as the science of the laws governing thought, arguing that the fourth law establishes the necessary connection between a determining reason (the ground or premise) and its determined consequent (the conclusion), thereby forming the basis for all valid inference.[51] Hamilton critiqued the traditional three laws as insufficient to account for the validity of syllogistic reasoning, which requires not only the avoidance of contradiction but also a grounded transition from premises to conclusions. Without this fourth law, he contended, the structure of thought remains disconnected, unable to sustain the chain of deductive or inductive processes essential to logical discourse. For instance, in deductive syllogisms, the law ensures that attributes of the whole necessarily apply to its parts when grounded in reason, while in inductive ones, it warrants generalizing from particulars to universals based on evidential support.[51] This principle thus extends the scope of non-contradiction by demanding sufficient reason for every inference, preventing arbitrary leaps in thought. In integrating the fourth law with his innovative quantitative logic, Hamilton employed the metaphor of "spheres of thought" to represent concepts by their extension (quantity) rather than merely their intension (quality), as in traditional Aristotelian logic. 
The spheres visualize concepts as overlapping or contained volumes, where the law of reason and consequent governs valid inferences by ensuring that any overlap or inclusion is justified by a rational ground, such as empirical analogy or formal necessity. This approach allowed Hamilton to quantify logical relations more precisely, for example, distinguishing between universal affirmative and particular negative judgments through spatial containment rules.[51] Historically, Hamilton's formulation emerged as a reaction to the emerging algebraization of logic, particularly George Boole's symbolic methods, which Hamilton viewed as overly mathematical and detached from the philosophical nature of thought. By emphasizing grounded inference as a law of thought itself, Hamilton sought to preserve logic's roots in metaphysics and human cognition, countering what he saw as a reductive formalization that neglected the qualitative depth of reasoning.[52]
20th Century Formalizations
Russell's Principia Approach
In The Principles of Mathematics (1903), Bertrand Russell conceptualized the traditional laws of thought—identity, non-contradiction, and excluded middle—as tautologies within the framework of propositional logic, where they emerge as formal implications that hold universally due to their structure rather than empirical content.[53] He treated identity as the tautological assertion that every term is identical with itself, expressible as symmetrical and transitive relations in the calculus of classes.[53] The law of non-contradiction appears as the principle that a proposition implies its double negation, ensuring no proposition can both hold and fail to hold, while the excluded middle requires that for any proposition p, either p or its negation must obtain, foundational to the binary truth-values in propositional functions.[53] These laws, for Russell, were not psychological regulations of thought but logical necessities derivable from the primitive ideas of implication and variables, building on George Boole's algebraic treatment of logic.[53] In The Problems of Philosophy (1912), Russell further emphasized the law of non-contradiction as a self-evident principle safeguarding reasoning against error, identifying it alongside identity and excluded middle as archetypal laws of thought that underpin a priori knowledge and coherent belief systems.[54] He argued that this law prevents contradictions in propositions, such as affirming and denying the same property of an object simultaneously, thereby ensuring logical consistency in philosophical inquiry and critiquing idealist views that tolerated apparent incoherences.[54] By rejecting the notion that contradictions could be resolved dialectically, as in absolute idealism, Russell positioned non-contradiction as an indispensable bulwark against erroneous metaphysics, where violating it leads to absurdities like the coherence of mutually exclusive beliefs.[54] The Principia Mathematica (1910–1913, with revised
editions in 1925–1927), co-authored with Alfred North Whitehead, formalized these laws through a rigorous axiomatization of logicism, treating identity as a primitive idea and deriving non-contradiction via rules of negation and implication.[55] In this system, propositional logic's axioms—built on negation (∼) and disjunction (∨)—yield the law of non-contradiction as the impossibility of a proposition and its negation (∼(p · ∼p)), while identity serves as the foundational relation for equating terms across types.[55] To circumvent paradoxes like Russell's own set-theoretic antinomies, the work introduced ramified type theory, which hierarchically restricts predicates to avoid self-reference and implicitly upholds the excluded middle by assuming classical bivalence in well-formed formulas.[55] This approach reinforced Russell's critique of absolute idealism, portraying its holistic internal relations and dialectical contradictions as violations of these axiomatic laws, incompatible with the precise, extensional logic required for mathematics.[55]
Post's Propositional Completeness
In 1921, Emil L. Post published his doctoral dissertation, "Introduction to a General Theory of Elementary Propositions," in which he proved that the propositional calculus is both consistent and complete. This work established that the formal system, when restricted to elementary propositions, allows derivation of every valid formula while avoiding contradictions.[56] Post examined the propositional fragment of the axiomatic system from Russell and Whitehead's Principia Mathematica, employing negation (∼) and disjunction (∨) as primitive connectives, alongside a minimal set of five primitive assertions (such as p ∨ p ⊃ p and q ⊃ p ∨ q) and rules of inference including modus ponens (detachment).[56] He demonstrated the independence of these axioms and rules, proving that each is essential and cannot be derived from the remaining ones, thereby confirming a parsimonious foundation for the calculus.[57] Central to Post's proof of completeness was his independent invention of truth tables, matrices assigning truth values (+ for true, − for false) to propositions across all possible combinations of atomic truth assignments.[58] Using these, he showed that any tautological formula—true in every possible valuation—can be systematically converted to an equivalent full disjunctive normal form and then derived from the axioms via the rules, ensuring every semantically valid proposition is syntactically provable.[56] For consistency, Post argued that since not every formula is a tautology (e.g., contradictions like p · ∼p are false in some valuations), the system does not derive all possible formulas.
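Post's tautology test is mechanical and lends itself to a short sketch. The following Python fragment (with `is_tautology` as an illustrative name, and modern booleans standing in for Post's + and − marks) enumerates every valuation, which is the truth-table check underlying his completeness and consistency arguments:

```python
from itertools import product

def is_tautology(formula, variables):
    """Truth-table test in the spirit of Post's method: a formula is a
    tautology iff it evaluates to true under every assignment of truth
    values to its atomic variables."""
    return all(formula(*values)
               for values in product([True, False], repeat=len(variables)))

# Excluded middle and non-contradiction come out as tautologies:
assert is_tautology(lambda p: p or not p, ["p"])
assert is_tautology(lambda p: not (p and not p), ["p"])

# A contradiction is not a tautology, which is Post's consistency point:
# since not every formula is valid, not every formula is derivable.
assert not is_tautology(lambda p: p and not p, ["p"])
```

Completeness then amounts to showing that everything this semantic check accepts is also derivable from the five axioms by detachment.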
Post's analysis underscored the distinction between proof theory (syntactic manipulation of symbols according to formal rules) and model theory (semantic evaluation through truth valuations), revealing their alignment in propositional logic: the axioms capture all semantic truths.[56] This completeness theorem affirms the laws of thought—identity, non-contradiction, and excluded middle—as embodied in the propositional axioms, providing a rigorous deductive foundation for all classical propositional inferences without omission or excess.
Gödel's Predicate Completeness
Kurt Gödel's completeness theorem for first-order logic, published in 1930, establishes that every semantically valid formula in the predicate calculus is syntactically provable within a suitable formal axiomatic system. In his paper "Die Vollständigkeit der Axiome des logischen Funktionenkalküls," Gödel demonstrated that if a first-order sentence is true in every model (i.e., semantically valid), then it can be derived from the axioms using the rules of inference.[59] This result builds upon earlier work in propositional logic, such as Emil Post's 1921 completeness proof for the propositional calculus, extending it to handle quantified statements involving predicates and variables.[59] Gödel's proof applies to first-order logic with a countable language, initially formulated without function symbols or the equality predicate, though these were later incorporated without altering the core result. The system relies on the axiomatization developed by David Hilbert and Wilhelm Ackermann, which includes propositional axioms, quantifier axioms (such as universal instantiation and existential generalization), and rules for modus ponens and generalization. By showing completeness, Gödel confirmed that the laws of thought—identity, non-contradiction, and excluded middle—operate equivalently in both semantic interpretations and syntactic derivations for predicate logic.[59] The proof technique combines elements now recognized as precursors to the compactness theorem and Henkin constructions. Gödel first showed that any consistent set of first-order sentences has a model, using an inductive construction over a well-ordering of the sentences and adding fresh constants (Skolem constants) to instantiate existentials. To ensure the resulting infinite structure satisfies all sentences, he applied König's lemma—a form of compactness for infinite trees—to extract a consistent infinite branch, yielding a countable model via the axiom of choice.
This non-constructive step relies on the law of excluded middle for infinite collections.[59] The implications of Gödel's theorem are profound for the foundations of logic and mathematics. It guarantees that first-order logic is semantically complete, meaning the syntactic rules capture all valid inferences, including those involving quantifiers that extend the classical laws of thought to relational and predicative reasoning. This equivalence between proof and truth in all models underpins model theory, automated theorem proving, and the study of formal systems, affirming that the core principles of logical validity are fully realizable in predicate extensions of propositional logic.[59]
Tarski's Identity Law
Alfred Tarski's semantic approach to truth provided a rigorous framework for understanding identity through the lens of substitution in sentences, building on earlier philosophical ideas. In his 1944 paper, Tarski invoked Leibniz's law to justify replacing identical expressions within true sentences, particularly in analyzing self-referential paradoxes like the liar paradox.[60] Specifically, if two expressions denote the same entity, substituting one for the other in a true sentence preserves its truth value, as illustrated when replacing a descriptive phrase with a symbolic equivalent to derive a contradiction.[61] This expanded in his 1956 collection of works, where the semantic conception underscores identity's role in maintaining equivalence under substitution. Formally, Tarski aligned identity with the condition that if x = y, then for any formula φ, φ(x) holds if and only if φ(y) holds, ensuring that identical objects satisfy the same predicates in the language.[60] This captures Leibniz's principle of the indiscernibility of identicals within a semantic setting, where truth is defined relative to models. In first-order structures, equality functions as a congruence relation, partitioning the domain such that equivalent elements behave identically under all operations and relations defined in the structure.[62] Tarski's truth definitions made this explicit by tying satisfaction of formulas to interpretations where equality enforces uniform behavior across atomic and complex formulas. Tarski distinguished this semantic notion of identity—grounded in logical substitutivity—from metaphysical indiscernibility, which concerns intrinsic properties beyond formal languages. The former is verifiable within a theory's models, while the latter involves philosophical commitments about reality.
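The substitutivity reading of identity can be illustrated with a toy model. Everything in this sketch (the dictionary layout and the names `denotes` and `satisfies`) is an illustrative construction, not Tarski's own notation:

```python
# Hypothetical miniature first-order model: two constant symbols
# denote the same domain element, so every predicate interpreted
# in the model must agree on them (indiscernibility of identicals).
model = {
    "domain": {0, 1, 2},
    "constants": {"a": 1, "b": 1},   # 'a' and 'b' denote the same object
    "predicates": {
        "even": lambda x: x % 2 == 0,
        "positive": lambda x: x > 0,
    },
}

def denotes(term):
    """Map a constant symbol to the domain element it names."""
    return model["constants"][term]

def satisfies(pred, term):
    """Semantic satisfaction of an atomic formula pred(term)."""
    return model["predicates"][pred](denotes(term))

# Leibniz's law in semantic form: since a = b holds in the model,
# substituting 'b' for 'a' preserves satisfaction of every predicate.
if denotes("a") == denotes("b"):
    for pred in model["predicates"]:
        assert satisfies(pred, "a") == satisfies(pred, "b")
```

The same congruence behavior is what the equality symbol enforces across all atomic and complex formulas in a first-order structure.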
Following Gödel's completeness theorem, Tarski's framework enabled key advances in model theory, such as constructing non-isomorphic models and studying elementary equivalence, where structures share the same first-order truths but may differ in equality interpretations.[62]
Alternative Formulations
Ladd-Franklin's Principles
Christine Ladd-Franklin, a pioneering figure in symbolic logic, proposed the principles of exclusion and exhaustion as refined formulations of the classical laws of non-contradiction and the excluded middle, respectively, in her entries for The Dictionary of Philosophy and Psychology edited by James Mark Baldwin.[63] These principles emphasize the mutual exclusivity and comprehensive coverage of contradictory propositions within a defined universe of discourse: exclusion asserts that no entity can simultaneously possess contradictory attributes (formalized as A ∩ Ā = ∅, where A and Ā are contradictories and their intersection is empty), while exhaustion states that every entity must possess at least one of the contradictory attributes (formalized as A ∪ Ā = V, encompassing the entire universe).[63] Ladd-Franklin viewed exhaustion as the exhaustive counterpart to the excluded middle, highlighting how the latter's binary structure ensures complete partitioning of possibilities without overlap, thereby providing a clearer algebraic basis for negation in logical systems.[63] Building on George Boole's foundational algebra, she integrated these principles into broader symbolic frameworks, demonstrating their utility through formal proofs involving Charles Sanders Peirce's existential graphs.[64] In her seminal 1883 dissertation "On the Algebra of Logic," Ladd-Franklin extended Boolean methods to algebras of relations, introducing the antilogism—a triadic inconsistent form that serves as a universal test for syllogistic validity by reducing all valid inferences to a single contradictory triad.[65] This relational extension allowed for the manipulation of multi-term connections, moving beyond monadic classes to handle dyadic and higher-order relations systematically.[66] As one of the earliest women to make substantial contributions to formal logic—despite facing significant gender-based barriers, including delayed recognition of her 1882 doctoral work until her PhD was awarded in
1926—Ladd-Franklin's innovations occurred amid a male-dominated field where women were often excluded from academic positions and publications.[67] Her relational algebras influenced subsequent developments in polyadic logics, providing algebraic tools for analyzing multi-place predicates and quantifiers that became central to modern predicate calculus.[68]
Welton's Critique
James Welton, in his A Manual of Logic (1896–1905), characterized the traditional laws of thought—identity, non-contradiction, and excluded middle—as descriptive accounts of the structure of rational judgment rather than prescriptive imperatives for thinking. He emphasized that these laws emerge from the actual processes of human cognition and language use, serving to clarify how consistent thought is possible without imposing absolute commands on the mind. This perspective positioned the laws as tools for analyzing judgments, where meaning is preserved through consistent predication, rather than abstract universals detached from concrete reasoning.[69] Welton's critique specifically targeted the law of identity as tautological, arguing that formulations like "A is A" hold primarily as a practical necessity for using terms with fixed meanings in discourse, not as a profound metaphysical truth. Similarly, he viewed the law of non-contradiction as assumptive, presupposing the impossibility of contradictory attributes in judgments without fully justifying why such opposition is inherently unthinkable beyond the conventions of logical form. He distinguished non-contradiction from mere contrariety, underscoring that the law limits what can be coherently affirmed in a single judgment but does not preclude all forms of opposition in broader inquiry. Throughout, Welton stressed the primacy of judgment over these abstract laws, insisting that logic's value lies in guiding the formation and evaluation of judgments in real argumentative contexts.[69] Published across the late 19th and early 20th centuries, Welton's examinations bridged 19th-century traditional logic with emerging 20th-century formalizations by integrating psychological insights into logical analysis. His views echoed Schopenhauer's earlier skepticism toward the laws' foundational status. 
While Welton's influence remained limited compared to contemporaries like Russell, his work was noted in analytic philosophy for questioning the primacy of these laws and advocating a judgment-centered approach.[70][71]
Spencer-Brown's Laws of Form
G. Spencer-Brown introduced a minimalist formal system in his 1969 book Laws of Form, reinterpreting the laws of thought through the primitive act of distinction, which he represents with a simple "mark" dividing the void into marked and unmarked states.[72] This primary arithmetic treats the mark as the fundamental operator, where the unmarked void symbolizes the absence of distinction, and the mark itself enacts a boundary that creates form from nothingness.[73] Spencer-Brown's approach echoes the algebraic structure of Boole's laws by deriving logical operations from these basic indications, though grounded in perceptual and observational primitives rather than set-theoretic assumptions.[73] The system's core consists of two axioms: the law of calling and the law of crossing. The law of calling states that a mark can be repeated or condensed without altering its value, such that two adjacent marks equate to a single mark, reflecting the identity principle in a boundary-crossing context.[72] The law of crossing posits that entering and exiting a boundary returns to the unmarked state, effectively implementing negation and leading to the non-contradiction of a form with its double negation.[73] From these, Spencer-Brown re-derives classical laws of thought—such as identity, non-contradiction, and excluded middle—starting from the void and building through nested boundaries, demonstrating their emergence as theorems within this calculus of indications.[73] This framework has significant implications for cybernetics, where the mark's re-entry into itself enables modeling of self-referential systems, such as feedback loops in organizational or computational processes.[74] In cybernetic theory, Spencer-Brown's distinctions facilitate analysis of observer-dependent realities, influencing second-order cybernetics by emphasizing how forms construct their own universes through recursive boundaries.[75] The system's handling of self-reference, via re-entrant forms, 
provides a tool for resolving paradoxes in information processing and systemic observation.[73] Critics have found the non-standard notation of marks and boundaries initially opaque, as it diverges from conventional symbolic logic, yet it lays foundational groundwork for constructivist philosophies by prioritizing the observer's act of distinction in generating logical structures.[76] Despite accessibility challenges, Laws of Form remains influential in formalizing how thought arises from primitive separations, bridging mathematics and epistemology.[73]
Contemporary Logics
Intuitionistic Logic
Intuitionistic logic emerged as a foundational alternative to classical logic through the work of L.E.J. Brouwer in the early 20th century, emphasizing mathematics as a product of mental constructions rather than objective truths. Brouwer developed these ideas primarily between 1907 and the 1920s, beginning with his 1907 doctoral dissertation Over de Grondslagen der Wiskunde, where he argued that mathematical reasoning originates in the intuitive grasp of time and continuity in human experience.[77] In this framework, logical laws hold only for propositions that are decidable through finite mental constructions; undecidable or infinite domains cannot be assumed to obey classical principles without constructive justification. Brouwer's intuitionism thus prioritizes constructive proofs, viewing non-constructive existence proofs as invalid for establishing mathematical reality. A core tenet of intuitionistic logic is the rejection of the law of excluded middle, P ∨ ¬P, for arbitrary propositions unless a construction can decide between P and ¬P. While the law of non-contradiction remains intact—asserting that no proposition can be both true and false—Brouwer contended that assuming bivalence for all statements leads to unverifiable claims about the infinite, undermining the constructive basis of mathematics. This rejection applies specifically to undecidable propositions, where neither affirmation nor denial can be constructed; for instance, statements about infinite sequences may lack a definitive truth value until explicitly built.[78] Brouwer's approach thus challenges the classical rationale by insisting on mental verifiability over abstract assumption.[79] Arend Heyting formalized intuitionistic logic in 1930, providing an axiomatic system that captures Brouwer's informal ideas without relying on classical metalogic.
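The failure of excluded middle can be made concrete with a standard three-element countermodel, a Heyting algebra on the chain 0 ≤ 1/2 ≤ 1. The sketch below is a modern illustration of that countermodel, not Brouwer's own formalism:

```python
# Three-element Heyting algebra on the chain 0 <= 1/2 <= 1.
# On a chain, the Heyting implication x -> y (the largest z with
# min(x, z) <= y) is 1 when x <= y and y otherwise.
from fractions import Fraction

HALF = Fraction(1, 2)

def implies(x, y):
    return Fraction(1) if x <= y else y

def neg(x):
    """Intuitionistic negation: x -> 0."""
    return implies(x, Fraction(0))

x = HALF
# Excluded middle fails: x v ~x evaluates to 1/2, not the top element 1.
assert max(x, neg(x)) == HALF
# Non-contradiction still holds: x ^ ~x is the bottom element 0.
assert min(x, neg(x)) == 0
```

The indeterminate value 1/2 plays the role of a proposition for which no deciding construction exists, so its disjunction with its negation never reaches "proved".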
In Heyting's Die formalen Regeln der intuitionistischen Logik, an implication A → B is interpreted as the existence of a construction that transforms any proof of A into a proof of B, effectively modeling logical relations as a partial order on proofs rather than truth values.[78] This system excludes the excluded middle axiom but includes rules for conjunction, disjunction, and negation that align with constructive requirements, such as requiring explicit evidence for disjunctive cases. In mathematical applications, intuitionistic logic supports Brouwer's concept of choice sequences, which represent real numbers as potentially infinite, lawless processes of successive choices rather than fixed sets. Introduced in the 1920s, choice sequences allow the intuitionistic continuum to be viewed as a dynamic entity generated by human intuition, avoiding classical assumptions of completed infinities.[79] For example, a choice sequence might define a real number through ongoing decisions at each decimal place, without presupposing the entire infinite expansion exists independently. Philosophically, intuitionism embodies an anti-realist stance toward the infinite, positing that mathematical objects and truths exist only insofar as they can be mentally constructed by the individual. Brouwer argued in his 1920s writings that classical realism treats infinities as pre-existing entities, leading to paradoxes resolvable only through constructivism; thus, laws of thought must be justified by introspective experience rather than external ontology. This basis underscores intuitionism's role in reorienting logic toward verifiable human activity.
Paraconsistent Logic
Paraconsistent logic emerged as a response to the limitations of classical logic in handling inconsistencies, with its foundational developments attributed to Stanisław Jaśkowski in 1948 and Newton C. A. da Costa in the 1960s. Jaśkowski introduced discussive logic, a propositional system where contradictions are tolerated by modeling reasoning as a discourse among multiple perspectives, preventing the derivation of arbitrary conclusions from inconsistent premises.[80] Independently, da Costa developed hierarchical systems known as C-logics starting in 1963, which systematically restrict inference rules to avoid the explosion of contradictions while maintaining deductive structure. Central to both approaches is the rejection of the principle ex falso quodlibet, the classical rule that allows any statement to be derived from a contradiction, thereby enabling reasoning with inconsistent information without triviality.[81] In paraconsistent logics, the law of non-contradiction is weakened such that a formula of the form A ∧ ¬A does not imply the explosion of the system into triviality.
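How a contradiction can fail to explode is easiest to see in a small many-valued semantics. The sketch below uses Priest's three-valued Logic of Paradox purely as an illustration (it is not Jaśkowski's discussive logic or da Costa's C-systems), with validity defined as preservation of designated values:

```python
# "Logic of Paradox"-style sketch: truth values 0 (false), 1/2 (both),
# 1 (true); the designated values are 1/2 and 1. An argument is valid
# iff every valuation that designates the premise designates the
# conclusion.
from fractions import Fraction
from itertools import product

HALF = Fraction(1, 2)
VALUES = [Fraction(0), HALF, Fraction(1)]
DESIGNATED = {HALF, Fraction(1)}

def entails(premise, conclusion):
    return all(conclusion(a, b) in DESIGNATED
               for a, b in product(VALUES, repeat=2)
               if premise(a, b) in DESIGNATED)

contradiction = lambda a, b: min(a, 1 - a)   # A and not-A
arbitrary = lambda a, b: b                   # an unrelated B

# Explosion fails: at a = 1/2 the contradiction is designated while
# an arbitrary B need not be, so ex falso quodlibet is blocked.
assert not entails(contradiction, arbitrary)
```

Classical logic corresponds to dropping the middle value, at which point the premise is never designated and the entailment holds vacuously.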
This modification permits the coexistence of contradictory statements without collapsing the entire logical framework, distinguishing paraconsistent systems from classical logic where any contradiction leads to universal derivability.[82] Such logics often retain the law of identity (A → A) and may preserve the law of excluded middle (A ∨ ¬A) in core formulations, selectively adhering to the traditional laws of thought while prioritizing tolerance for inconsistency.[6] Paraconsistent logic finds practical applications in managing inconsistent databases, where conflicting data entries can be queried without system failure; for instance, paraconsistent query languages like P-Datalog allow deductive inference over relational databases containing errors or updates.[83] It also addresses vagueness in natural language predicates, modeling borderline cases as overdetermined rather than indeterminate, thus avoiding the sorites paradox by permitting glutty truth values without explosion.[84] A philosophical extension of paraconsistent logic is dialetheism, championed by Graham Priest, which posits that certain contradictions are genuinely true, or dialetheia. Priest argues that semantic paradoxes, such as the liar paradox ("This sentence is false"), exemplify true contradictions that classical logic cannot resolve without ad hoc restrictions. This view builds on paraconsistent frameworks to defend a metaphysical acceptance of inconsistencies, selectively challenging the law of non-contradiction while engaging with the other laws of thought in limited contexts.[85]
Multi-Valued and Fuzzy Logics
Multi-valued logics extend classical binary logic by incorporating more than two truth values, allowing for intermediate states that challenge the strict bivalence of truth and falsity in the laws of thought. These systems, including three-valued logics and fuzzy logics, provide frameworks for handling uncertainty, vagueness, and partial information, where the law of excluded middle (every proposition is either true or false) and the law of non-contradiction (a proposition cannot be both true and false) do not hold in their classical forms.[86] One of the earliest multi-valued logics is Jan Łukasiewicz's three-valued system, introduced in 1920, which assigns truth values of true (1), false (0), and indeterminate (½) to propositions. This logic was motivated by philosophical concerns over future contingents, such as statements about undetermined events, where neither full truth nor falsity applies. In Łukasiewicz's framework, negation is defined as ¬u = 1 - u, and implication as u → v = min(1, 1 - u + v), enabling the indeterminate value to propagate through connectives without forcing bivalence.[86] Stephen Cole Kleene developed a strong three-valued logic in 1938, primarily to model partial recursive functions in computability theory, where the third value represents undefined or non-terminating computations. Kleene's system uses truth values {false (0), undefined (U), true (1)}, with connectives like conjunction (min(u, v)) and disjunction (max(u, v)) that preserve the undefined value when at least one operand is undefined. This approach differs from Łukasiewicz's by emphasizing computational indeterminacy over philosophical contingency, while similarly allowing intermediate values to model incomplete information.[86] Fuzzy logic, pioneered by Lotfi A. Zadeh in 1965, generalizes multi-valued logics to an infinite continuum of truth values in the interval [0, 1], where 0 denotes full falsity and 1 full truth. 
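The three-valued definitions quoted above translate directly into code. This sketch implements Łukasiewicz's connectives (¬u = 1 − u, u → v = min(1, 1 − u + v), with disjunction as max, as in his tables) and checks that excluded middle takes the indeterminate value ½ for an indeterminate proposition:

```python
# Łukasiewicz's 1920 three-valued connectives on {0, 1/2, 1}.
from fractions import Fraction

HALF = Fraction(1, 2)

def lneg(u):
    """Negation: ~u = 1 - u."""
    return 1 - u

def limp(u, v):
    """Implication: u -> v = min(1, 1 - u + v)."""
    return min(Fraction(1), 1 - u + v)

def lor(u, v):
    """Disjunction as max, matching the three-valued tables."""
    return max(u, v)

# For an indeterminate proposition (a future contingent), the law of
# excluded middle is itself indeterminate rather than true:
assert lor(HALF, lneg(HALF)) == HALF

# On the classical values 0 and 1 the connectives reduce to the
# familiar two-valued truth tables:
assert limp(Fraction(1), Fraction(0)) == 0
assert limp(Fraction(0), Fraction(1)) == 1
```

Kleene's strong tables use the same min/max conjunction and disjunction but read the middle value as "undefined" rather than "possible", and Zadeh's fuzzy logic extends the same arithmetic from three points to the whole interval [0, 1].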
Zadeh's framework treats truth as a degree of membership in fuzzy sets, addressing vagueness in natural language and real-world phenomena. Conjunction in fuzzy logic is often realized via t-norms, such as the Łukasiewicz t-norm defined as T(u, v) = max(0, u + v - 1), which ensures that the intersection of fuzzy sets reflects partial overlap without requiring crisp boundaries.[87][86] In these logics, the law of non-contradiction holds only weakly: a proposition and its negation can both have non-zero truth degrees, but their conjunction typically evaluates to zero under standard t-norms, avoiding explosion while permitting graded inconsistencies. Similarly, the law of identity adapts to a fuzzy form, where equality between objects is a matter of degree rather than absolute sameness, as in fuzzy similarity relations that measure resemblance on a [0,1] scale. These modifications enable reasoning under uncertainty without the rigidity of classical principles.[86][88][89] Multi-valued and fuzzy logics find broad applications in artificial intelligence for tasks like natural language processing and decision-making under imprecision, as well as in control systems for managing nonlinear dynamics in engineering, such as adaptive cruise control in vehicles. In the 2020s, extensions integrating fuzzy logic with quantum computing have emerged, leveraging quantum superposition to efficiently implement fuzzy inference engines on platforms like quantum annealers, potentially enhancing optimization in high-dimensional uncertain environments. Modal variants of these logics incorporate necessity and possibility operators to handle epistemic uncertainty in multi-valued settings.[90][91]
Philosophical Rationale
Classical Justifications
The classical justifications for the laws of thought—identity, non-contradiction, and excluded middle—root their necessity in the foundational structures of rational inquiry and human cognition, predating modern formal developments. Aristotle positioned these laws as indemonstrable first principles essential for any scientific knowledge, known per se through direct apprehension rather than empirical derivation. In his Metaphysics, he argues that the principle of non-contradiction, in particular, is the most certain of all principles, serving as the bedrock for asserting anything about reality, since denying it undermines all discourse and thought.[8] Similarly, the law of identity ensures that what is, is, while the excluded middle precludes indeterminate states in categorical judgments, forming the axioms from which all deductive reasoning proceeds.[1] Immanuel Kant further elevated these laws to transcendental status in his Critique of Pure Reason, viewing them as a priori conditions for the possibility of experience and objective knowledge. Kant distinguishes general logic, which abstracts from content to formal rules like non-contradiction—the highest principle of analytic judgments—from transcendental logic, which applies these rules to the conditions of cognition, ensuring the unity of apperception and the coherence of phenomena.[92] Without these laws, he contends, no synthetic judgments could connect intuitions to concepts, rendering experience impossible; they thus constitute the subjective framework that makes objective reality intelligible to the mind.[93] George Boole provided a psychological justification in his seminal work The Laws of Thought, deriving the principles from the observed operations of the human intellect rather than purely metaphysical grounds. 
Boole posits that the laws emerge from two fundamental mental acts: conception (forming ideas) and judgment (affirming or denying relations), where non-contradiction arises as the mind cannot simultaneously affirm and deny the same predicate of a subject. This approach frames logic as an empirical science of mental processes, with the laws expressing invariant psychological regularities that govern rational association and inference. Ontologically, these laws mirror the structure of being itself, distinguishing existence from non-existence in a way that precludes contradiction in reality. Aristotle's defense extends here, arguing in Metaphysics that a thing cannot both be and not be in the same respect, as this would collapse the distinction between being (on) and non-being, eroding the very possibility of substantive predication about the world.[8] This metaphysical commitment underscores the laws' universality: they do not merely regulate thought but reflect the non-contradictory nature of entities, ensuring that rational discourse aligns with the fabric of existence.
Modern Interpretations
In the 20th century, Alfred Tarski's semantic conception of truth provided a formal foundation for understanding the laws of thought as principles that preserve truth across interpretations in classical logic. Tarski defined truth in terms of satisfaction for sentences in formalized languages, emphasizing that logical consequence—central to the laws of identity, non-contradiction, and excluded middle—occurs when the truth of premises guarantees the truth of the conclusion under all possible models.[94] This semantic approach reframed the laws not as metaphysical absolutes but as structural features ensuring valid inference, influencing modern logic by prioritizing material adequacy and formal correctness in truth definitions.[95] Willard Van Orman Quine's 1951 essay "Two Dogmas of Empiricism" challenged the a priori necessity of the laws of thought by questioning the analytic-synthetic distinction, particularly targeting the law of identity as a paradigmatic analytic truth. Quine argued that no clear criterion exists to demarcate analytic statements—those true by meaning alone, like "A is A"—from synthetic ones, rendering the law's status as an unbreakable foundation untenable without empirical grounding.[96] This critique shifted interpretations toward a holistic view of knowledge, where logical laws are revisable components of scientific theories rather than immutable dogmas.[97] From a cognitive science perspective, the laws of thought are increasingly viewed as heuristics for deliberate reasoning rather than inviolable absolutes, with human cognition often deviating due to biases identified in Daniel Kahneman's dual-process theory. 
Kahneman and Amos Tversky's work on heuristics, such as representativeness and availability, demonstrates systematic violations of probabilistic logic in intuitive judgments under uncertainty, as seen in conjunction fallacy experiments where participants judge the probability of a conjunction as higher than one of its components.[98] In the 2020s, integrations of this framework with computational models portray the laws as tools of System 2 (slow, effortful thinking), approximated heuristically in System 1 but overridden by cognitive shortcuts, informing behavioral economics and decision theory.[99] Logical pluralism, as defended by J.C. Beall and Greg Restall in the 2000s, posits multiple valid consequence relations tailored to contexts, challenging the universality of the classical laws of thought. Their view holds that logics like classical, intuitionistic, and relevant each capture genuine validity for different inquiries—e.g., excluded middle holds in mathematical proofs but not in relevance-sensitive reasoning—without one set dominating all domains.[100] This pluralistic stance reevaluates the laws as context-dependent tools rather than singular truths.[101] Recent AI research since 2020 highlights cognitive and computational links by developing neural networks that approximate or embed the laws of thought through neurosymbolic approaches, bridging symbolic logic with machine learning. For instance, logical neural networks integrate differentiable logic gates to enforce truth preservation akin to the law of non-contradiction during training, enabling interpretable reasoning in tasks like theorem proving. Such advancements underscore the laws' pragmatic role in scalable, human-like cognition.[102]
References
