Logic

Logic studies valid forms of inference like modus ponens.

Logic is the study of correct reasoning. It includes both formal and informal logic. Formal logic is the study of deductively valid inferences or logical truths. It examines how conclusions follow from premises based on the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. Informal logic examines arguments expressed in natural language whereas formal logic uses formal language. When used as a countable noun, the term "a logic" refers to a specific logical formal system that articulates a proof system. Logic plays a central role in many fields, such as philosophy, mathematics, computer science, and linguistics.

Logic studies arguments, which consist of a set of premises that leads to a conclusion. An example is the argument from the premises "it's Sunday" and "if it's Sunday then I don't have to work" leading to the conclusion "I don't have to work."[1] Premises and conclusions express propositions or claims that can be true or false. An important feature of propositions is their internal structure. For example, complex propositions are made up of simpler propositions linked by logical vocabulary like ∧ (and) or → (if...then). Simple propositions also have parts, like "Sunday" or "work" in the example. The truth of a proposition usually depends on the meanings of all of its parts. However, this is not the case for logically true propositions. They are true only because of their logical structure independent of the specific meanings of the individual parts.

Arguments can be either correct or incorrect. An argument is correct if its premises support its conclusion. Deductive arguments have the strongest form of support: if their premises are true then their conclusion must also be true. This is not the case for ampliative arguments, which arrive at genuinely new information not found in the premises. Many arguments in everyday discourse and the sciences are ampliative arguments. They are divided into inductive and abductive arguments. Inductive arguments are statistical generalizations, such as inferring that all ravens are black based on many individual observations of black ravens.[2] Abductive arguments are inferences to the best explanation, for example, when a doctor concludes that a patient has a certain disease which explains the symptoms they suffer.[3] Arguments that fall short of the standards of correct reasoning often embody fallacies. Systems of logic are theoretical frameworks for assessing the correctness of arguments.

Logic has been studied since antiquity. Early approaches include Aristotelian logic, Stoic logic, Nyaya, and Mohism. Aristotelian logic focuses on reasoning in the form of syllogisms. It was considered the main system of logic in the Western world until it was replaced by modern formal logic, which has its roots in the work of late 19th-century mathematicians such as Gottlob Frege. Today, the most commonly used system is classical logic. It consists of propositional logic and first-order logic. Propositional logic only considers logical relations between full propositions. First-order logic also takes the internal parts of propositions into account, like predicates and quantifiers. Extended logics accept the basic intuitions behind classical logic and apply it to other fields, such as metaphysics, ethics, and epistemology. Deviant logics, on the other hand, reject certain classical intuitions and provide alternative explanations of the basic laws of logic.

Definition

The word "logic" originates from the Greek word logos, which has a variety of translations, such as reason, discourse, or language.[4] Logic is traditionally defined as the study of the laws of thought or correct reasoning,[5] and is usually understood in terms of inferences or arguments. Reasoning is the activity of drawing inferences. Arguments are the outward expression of inferences.[6] An argument is a set of premises together with a conclusion. Logic is interested in whether arguments are correct, i.e. whether their premises support the conclusion.[7] These general characterizations apply to logic in the widest sense, i.e., to both formal and informal logic since they are both concerned with assessing the correctness of arguments.[8] Formal logic is the traditionally dominant field, and some logicians restrict logic to formal logic.[9]

Formal logic

Formal logic (also known as symbolic logic) is widely used in mathematical logic. It uses a formal approach to study reasoning: it replaces concrete expressions with abstract symbols to examine the logical form of arguments independent of their concrete content. In this sense, it is topic-neutral since it is only concerned with the abstract structure of arguments and not with their concrete content.[10]

Formal logic is interested in deductively valid arguments, for which the truth of their premises ensures the truth of their conclusion. This means that it is impossible for the premises to be true and the conclusion to be false.[11] For valid arguments, the logical structure that leads from the premises to the conclusion follows a pattern called a rule of inference.[12] For example, modus ponens is a rule of inference according to which all arguments of the form "(1) p, (2) if p then q, (3) therefore q" are valid, independent of what the terms p and q stand for.[13] In this sense, formal logic can be defined as the science of valid inferences. An alternative definition sees logic as the study of logical truths.[14] A proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true in all possible worlds and under all interpretations of its non-logical terms, like the claim "either it is raining, or it is not".[15] These two definitions of formal logic are not identical, but they are closely related. For example, if the inference from p to q is deductively valid then the claim "if p then q" is a logical truth.[16]
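
The truth-table test for validity described here can be made concrete in code. The following Python sketch (an illustrative encoding, not part of the original article) enumerates every assignment of truth values to p and q and confirms that the modus ponens form has no counterexample:

```python
from itertools import product

def is_valid(premises, conclusion):
    """Valid iff no assignment makes all premises true and the conclusion false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # counterexample found
    return True

# Modus ponens: p; if p then q; therefore q. The material conditional
# "if p then q" is false only when p is true and q is false.
premises = [lambda p, q: p, lambda p, q: (not p) or q]
conclusion = lambda p, q: q
print(is_valid(premises, conclusion))  # True: modus ponens has no counterexample
```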

Formal logic needs to translate natural language arguments into a formal language, like first-order logic, to assess whether they are valid. In this example, the letter "c" represents Carmen while the letters "M" and "T" stand for "Mexican" and "teacher". The symbol "∧" has the meaning of "and".

Formal logic uses formal languages to express, analyze, and clarify arguments.[17] They normally have a very limited vocabulary and exact syntactic rules. These rules specify how their symbols can be combined to construct sentences, so-called well-formed formulas.[18] This simplicity and exactness of formal logic make it capable of formulating precise rules of inference. They determine whether a given argument is valid.[19] Because of the reliance on formal language, natural language arguments cannot be studied directly. Instead, they need to be translated into formal language before their validity can be assessed.[20]

The term "logic" can also be used in a slightly different sense as a countable noun. In this sense, a logic is a logical formal system. Distinct logics differ from each other concerning the rules of inference they accept as valid and the formal languages used to express them.[21] Starting in the late 19th century, many new formal systems have been proposed. There are disagreements about what makes a formal system a logic.[22] For example, it has been suggested that only logically complete systems, like first-order logic, qualify as logics. For such reasons, some theorists deny that higher-order logics are logics in the strict sense.[23]

Informal logic

When understood in a wide sense, logic encompasses both formal and informal logic.[24] Informal logic uses non-formal criteria and standards to analyze and assess the correctness of arguments. Its main focus is on everyday discourse.[25] Its development was prompted by difficulties in applying the insights of formal logic to natural language arguments.[26] In this regard, it considers problems that formal logic on its own is unable to address.[27] Both provide criteria for assessing the correctness of arguments and distinguishing them from fallacies.[28]

Many characterizations of informal logic have been suggested but there is no general agreement on its precise definition.[29] The most literal approach sees the terms "formal" and "informal" as applying to the language used to express arguments. On this view, informal logic studies arguments that are in informal or natural language.[30] Formal logic can only examine them indirectly by translating them first into a formal language while informal logic investigates them in their original form.[31] On this view, the argument "Birds fly. Tweety is a bird. Therefore, Tweety flies." belongs to natural language and is examined by informal logic. But the formal translation "(1) ∀x(Bird(x) → Flies(x)); (2) Bird(Tweety); (3) Flies(Tweety)" is studied by formal logic.[32] The study of natural language arguments comes with various difficulties. For example, natural language expressions are often ambiguous, vague, and context-dependent.[33] Another approach defines informal logic in a wide sense as the normative study of the standards, criteria, and procedures of argumentation. In this sense, it includes questions about the role of rationality, critical thinking, and the psychology of argumentation.[34]

Another characterization identifies informal logic with the study of non-deductive arguments. In this way, it contrasts with deductive reasoning examined by formal logic.[35] Non-deductive arguments make their conclusion probable but do not ensure that it is true. An example is the inductive argument from the empirical observation that "all ravens I have seen so far are black" to the conclusion "all ravens are black".[36]

A further approach is to define informal logic as the study of informal fallacies.[37] Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument.[38] A false dilemma, for example, involves an error of content by excluding viable options. This is the case in the fallacy "you are either with us or against us; you are not with us; therefore, you are against us".[39] Some theorists state that formal logic studies the general form of arguments while informal logic studies particular instances of arguments. Another approach is to hold that formal logic only considers the role of logical constants for correct inferences while informal logic also takes the meaning of substantive concepts into account. Further approaches focus on the discussion of logical topics with or without formal devices and on the role of epistemology for the assessment of arguments.[40]

Basic concepts

Premises, conclusions, and truth

Premises and conclusions

Premises and conclusions are the basic parts of inferences or arguments and therefore play a central role in logic. In the case of a valid inference or a correct argument, the conclusion follows from the premises, or in other words, the premises support the conclusion.[41] For instance, the premises "Mars is red" and "Mars is a planet" support the conclusion "Mars is a red planet". For most types of logic, it is accepted that premises and conclusions have to be truth-bearers.[41][a] This means that they have a truth value: they are either true or false. Contemporary philosophy generally sees them either as propositions or as sentences.[43] Propositions are the denotations of sentences and are usually seen as abstract objects.[44] For example, the English sentence "the tree is green" is different from the German sentence "der Baum ist grün" but both express the same proposition.[45]

Propositional theories of premises and conclusions are often criticized because they rely on abstract objects. For instance, philosophical naturalists usually reject the existence of abstract objects. Other arguments concern the challenges involved in specifying the identity criteria of propositions.[43] These objections are avoided by seeing premises and conclusions not as propositions but as sentences, i.e. as concrete linguistic objects like the symbols displayed on a page of a book. But this approach comes with new problems of its own: sentences are often context-dependent and ambiguous, meaning an argument's validity would not only depend on its parts but also on its context and on how it is interpreted.[46] Another approach is to understand premises and conclusions in psychological terms as thoughts or judgments. This position is known as psychologism. It was discussed at length around the turn of the 20th century but it is not widely accepted today.[47]

Internal structure

Premises and conclusions have an internal structure. As propositions or sentences, they can be either simple or complex.[48] A complex proposition has other propositions as its constituents, which are linked to each other through propositional connectives like "and" or "if...then". Simple propositions, on the other hand, do not have propositional parts. But they can also be conceived as having an internal structure: they are made up of subpropositional parts, like singular terms and predicates.[49][48] For example, the simple proposition "Mars is red" can be formed by applying the predicate "red" to the singular term "Mars". In contrast, the complex proposition "Mars is red and Venus is white" is made up of two simple propositions connected by the propositional connective "and".[49]

Whether a proposition is true depends, at least in part, on its constituents. For complex propositions formed using truth-functional propositional connectives, their truth only depends on the truth values of their parts.[49][50] But this relation is more complicated in the case of simple propositions and their subpropositional parts. These subpropositional parts have meanings of their own, like referring to objects or classes of objects.[51] Whether the simple proposition they form is true depends on their relation to reality, i.e. what the objects they refer to are like. This topic is studied by theories of reference.[52]

Logical truth

Some complex propositions are true independently of the substantive meanings of their parts.[53] In classical logic, for example, the complex proposition "either Mars is red or Mars is not red" is true independent of whether its parts, like the simple proposition "Mars is red", are true or false. In such cases, the truth is called a logical truth: a proposition is logically true if its truth depends only on the logical vocabulary used in it.[54] This means that it is true under all interpretations of its non-logical terms. In some modal logics, this means that the proposition is true in all possible worlds.[55] Some theorists define logic as the study of logical truths.[16]

Truth tables

Truth tables can be used to show how logical connectives work or how the truth values of complex propositions depend on their parts. They have a column for each input variable. Each row corresponds to one possible combination of the truth values these variables can take; for truth tables presented in the English literature, the symbols "T" and "F" or "1" and "0" are commonly used as abbreviations for the truth values "true" and "false".[56] The first columns present all the possible truth-value combinations for the input variables. Entries in the other columns present the truth values of the corresponding expressions as determined by the input values. For example, the expression "p ∧ q" uses the logical connective ∧ (and). It could be used to express a sentence like "yesterday was Sunday and the weather was good". It is only true if both of its input variables, p ("yesterday was Sunday") and q ("the weather was good"), are true. In all other cases, the expression as a whole is false. Other important logical connectives are ¬ (not), ∨ (or), → (if...then), and ↑ (Sheffer stroke).[57] Given the conditional proposition p → q, one can form truth tables of its converse (q → p), its inverse (¬p → ¬q), and its contrapositive (¬q → ¬p). Truth tables can also be defined for more complex expressions that use several propositional connectives.[58]

Truth table of various expressions

p   q   p ∧ q   p ∨ q   p → q   ¬p → ¬q   p ↑ q
T   T     T       T       T        T        F
T   F     F       T       F        T        T
F   T     F       T       T        F        T
F   F     F       F       T        T        T
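
The table above can be reproduced mechanically. The following Python sketch (an illustrative rendering, with the connectives defined as ordinary functions) computes each column from the input values:

```python
from itertools import product

def implies(a, b):   # material conditional: false only when a is true and b is false
    return (not a) or b

def sheffer(a, b):   # Sheffer stroke (NAND): true unless both inputs are true
    return not (a and b)

print("p  q  p∧q  p∨q  p→q  ¬p→¬q  p↑q")
for p, q in product([True, False], repeat=2):
    values = (p, q, p and q, p or q, implies(p, q),
              implies(not p, not q), sheffer(p, q))
    print("  ".join("T" if v else "F" for v in values))
```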

Arguments and inferences

Logic is commonly defined in terms of arguments or inferences as the study of their correctness.[59] An argument is a set of premises together with a conclusion.[60] An inference is the process of reasoning from these premises to the conclusion.[43] But these terms are often used interchangeably in logic. Arguments are correct or incorrect depending on whether their premises support their conclusion. Premises and conclusions, on the other hand, are true or false depending on whether they are in accord with reality. In formal logic, a sound argument is an argument that is both correct and has only true premises.[61] Sometimes a distinction is made between simple and complex arguments. A complex argument is made up of a chain of simple arguments. This means that the conclusion of one argument acts as a premise of later arguments. For a complex argument to be successful, each link of the chain has to be successful.[43]

Argument terminology used in logic

Arguments and inferences are either correct or incorrect. If they are correct then their premises support their conclusion. In the incorrect case, this support is missing. It can take different forms corresponding to the different types of reasoning.[62] The strongest form of support corresponds to deductive reasoning. But even arguments that are not deductively valid may still be good arguments because their premises offer non-deductive support to their conclusions. For such cases, the term ampliative or inductive reasoning is used.[63] Deductive arguments are associated with formal logic in contrast to the relation between ampliative arguments and informal logic.[64]

Deductive

A deductively valid argument is one whose premises guarantee the truth of its conclusion.[11] For instance, the argument "(1) all frogs are amphibians; (2) no cats are amphibians; (3) therefore no cats are frogs" is deductively valid. For deductive validity, it does not matter whether the premises or the conclusion are actually true. So the argument "(1) all frogs are mammals; (2) no cats are mammals; (3) therefore no cats are frogs" is also valid because the conclusion follows necessarily from the premises.[65]

According to an influential view by Alfred Tarski, deductive arguments have three essential features: (1) they are formal, i.e. they depend only on the form of the premises and the conclusion; (2) they are a priori, i.e. no sense experience is needed to determine whether they obtain; (3) they are modal, i.e. they hold by logical necessity for the given propositions, independent of any other circumstances.[66]

Because of the first feature, the focus on formality, deductive inference is usually identified with rules of inference.[67] Rules of inference specify the form of the premises and the conclusion: how they have to be structured for the inference to be valid. Arguments that do not follow any rule of inference are deductively invalid.[68] Modus ponens is a prominent rule of inference. It has the form "p; if p, then q; therefore q".[69] Knowing that it has just rained (p) and that after rain the streets are wet (p → q), one can use modus ponens to deduce that the streets are wet (q).[70]

The third feature can be expressed by stating that deductively valid inferences are truth-preserving: it is impossible for the premises to be true and the conclusion to be false.[71] Because of this feature, it is often asserted that deductive inferences are uninformative since the conclusion cannot arrive at new information not already present in the premises.[72] But this point is not always accepted since it would mean, for example, that most of mathematics is uninformative. A different characterization distinguishes between surface and depth information. The surface information of a sentence is the information it presents explicitly. Depth information is the totality of the information contained in the sentence, both explicitly and implicitly. According to this view, deductive inferences are uninformative on the depth level. But they can be highly informative on the surface level by making implicit information explicit. This happens, for example, in mathematical proofs.[73]

Ampliative

Ampliative arguments are arguments whose conclusions contain additional information not found in their premises. In this regard, they are more interesting since they contain information on the depth level and the thinker may learn something genuinely new. But this feature comes with a certain cost: the premises support the conclusion in the sense that they make its truth more likely but they do not ensure its truth.[74] This means that the conclusion of an ampliative argument may be false even though all its premises are true. This characteristic is closely related to non-monotonicity and defeasibility: it may be necessary to retract an earlier conclusion upon receiving new information or in light of new inferences drawn.[75] Ampliative reasoning plays a central role in many arguments found in everyday discourse and the sciences. Ampliative arguments are not automatically incorrect. Instead, they just follow different standards of correctness. The support they provide for their conclusion usually comes in degrees. This means that strong ampliative arguments make their conclusion very likely while weak ones are less certain. As a consequence, the line between correct and incorrect arguments is blurry in some cases, such as when the premises offer weak but non-negligible support. This contrasts with deductive arguments, which are either valid or invalid with nothing in-between.[76]

The terminology used to categorize ampliative arguments is inconsistent. Some authors, like James Hawthorne, use the term "induction" to cover all forms of non-deductive arguments.[77] But in a more narrow sense, induction is only one type of ampliative argument alongside abductive arguments.[78] Some philosophers, like Leo Groarke, also allow conductive arguments[b] as another type.[79] In this narrow sense, induction is often defined as a form of statistical generalization.[80] In this case, the premises of an inductive argument are many individual observations that all show a certain pattern. The conclusion then is a general law that this pattern always obtains.[81] In this sense, one may infer that "all elephants are gray" based on one's past observations of the color of elephants.[78] A closely related form of inductive inference has as its conclusion not a general law but one more specific instance, as when it is inferred that an elephant one has not seen yet is also gray.[81] Some theorists, like Igor Douven, stipulate that inductive inferences rest only on statistical considerations. This way, they can be distinguished from abductive inference.[78]

Abductive inference may or may not take statistical observations into consideration. In either case, the premises offer support for the conclusion because the conclusion is the best explanation of why the premises are true.[82] In this sense, abduction is also called the inference to the best explanation.[83] For example, given the premise that there is a plate with breadcrumbs in the kitchen in the early morning, one may infer the conclusion that one's house-mate had a midnight snack and was too tired to clean the table. This conclusion is justified because it is the best explanation of the current state of the kitchen.[78] For abduction, it is not sufficient that the conclusion explains the premises. For example, the conclusion that a burglar broke into the house last night, got hungry on the job, and had a midnight snack, would also explain the state of the kitchen. But this conclusion is not justified because it is not the best or most likely explanation.[82][83]

Fallacies

Not all arguments live up to the standards of correct reasoning. When they do not, they are usually referred to as fallacies. Their central aspect is not that their conclusion is false but that there is some flaw with the reasoning leading to this conclusion.[84] So the argument "it is sunny today; therefore spiders have eight legs" is fallacious even though the conclusion is true. Some theorists, like John Stuart Mill, give a more restrictive definition of fallacies by additionally requiring that they appear to be correct.[85] This way, genuine fallacies can be distinguished from mere mistakes of reasoning due to carelessness. This explains why people tend to commit fallacies: because they have an alluring element that seduces people into committing and accepting them.[86] However, this reference to appearances is controversial because it belongs to the field of psychology, not logic, and because appearances may be different for different people.[87]

Young America's dilemma: Shall I be wise and great, or rich and powerful? (poster from 1901). This is an example of a false dilemma: an informal fallacy using a disjunctive premise that excludes viable alternatives.

Fallacies are usually divided into formal and informal fallacies.[38] For formal fallacies, the source of the error is found in the form of the argument. For example, denying the antecedent is one type of formal fallacy, as in "if Othello is a bachelor, then he is male; Othello is not a bachelor; therefore Othello is not male".[88] But most fallacies fall into the category of informal fallacies, of which a great variety is discussed in the academic literature. The source of their error is usually found in the content or the context of the argument.[89] Informal fallacies are sometimes categorized as fallacies of ambiguity, fallacies of presumption, or fallacies of relevance. For fallacies of ambiguity, the ambiguity and vagueness of natural language are responsible for their flaw, as in "feathers are light; what is light cannot be dark; therefore feathers cannot be dark".[90] Fallacies of presumption have a wrong or unjustified premise but may be valid otherwise.[91] In the case of fallacies of relevance, the premises do not support the conclusion because they are not relevant to it.[92]

Definitory and strategic rules

The main focus of most logicians is to study the criteria according to which an argument is correct or incorrect. A fallacy is committed if these criteria are violated. In the case of formal logic, they are known as rules of inference.[93] They are definitory rules, which determine whether an inference is correct or which inferences are allowed. Definitory rules contrast with strategic rules. Strategic rules specify which inferential moves are necessary to reach a given conclusion based on a set of premises. This distinction does not just apply to logic but also to games. In chess, for example, the definitory rules dictate that bishops may only move diagonally. The strategic rules, on the other hand, describe how the allowed moves may be used to win a game, for instance, by controlling the center and by defending one's king.[94] It has been argued that logicians should give more emphasis to strategic rules since they are highly relevant for effective reasoning.[93]

Formal systems

A formal system of logic consists of a formal language together with a set of axioms and a proof system used to draw inferences from these axioms.[95] In logic, axioms are statements that are accepted without proof. They are used to justify other statements.[96] Some theorists also include a semantics that specifies how the expressions of the formal language relate to real objects.[97] Starting in the late 19th century, many new formal systems have been proposed.[98]

A formal language consists of an alphabet and syntactic rules. The alphabet is the set of basic symbols used in expressions. The syntactic rules determine how these symbols may be arranged to result in well-formed formulas.[99] For instance, the syntactic rules of propositional logic determine that "p ∧ q" is a well-formed formula but "∧q" is not since the conjunction ∧ requires terms on both sides.[100]
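
As an illustration of such syntactic rules, the following Python sketch checks well-formedness for a deliberately tiny language whose only atoms are p, q, and r and whose only connective is ∧; this grammar is an assumption made for the example, not a standard definition:

```python
def is_wff(s):
    """Recursively check membership in the toy language."""
    if s in ("p", "q", "r"):                  # atomic formulas
        return True
    if s.startswith("(") and s.endswith(")"):
        body, depth = s[1:-1], 0
        for i, ch in enumerate(body):         # find the top-level connective
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif ch == "∧" and depth == 0:
                return is_wff(body[:i]) and is_wff(body[i + 1:])
    return False

print(is_wff("(p∧q)"))      # True
print(is_wff("(∧q)"))       # False: the conjunction lacks a left-hand term
print(is_wff("((p∧q)∧r)"))  # True: formulas can be nested
```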

A proof system is a collection of rules to construct formal proofs. It is a tool to arrive at conclusions from a set of axioms. Rules in a proof system are defined in terms of the syntactic form of formulas independent of their specific content. For instance, the classical rule of conjunction introduction states that p ∧ q follows from the premises p and q. Such rules can be applied sequentially, giving a mechanical procedure for generating conclusions from premises. There are different types of proof systems including natural deduction and sequent calculi.[101]

A semantics is a system for mapping expressions of a formal language to their denotations. In many systems of logic, denotations are truth values. For instance, the semantics for classical propositional logic assigns the formula p ∧ q the denotation "true" whenever p and q are true. From the semantic point of view, a premise entails a conclusion if the conclusion is true whenever the premise is true.[102]
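
This semantic notion of entailment can be tested by brute force over all interpretations. A minimal Python sketch, assuming formulas are encoded as functions from assignments to truth values:

```python
from itertools import product

def entails(premise, conclusion, variables=("p", "q")):
    """True iff the conclusion holds under every interpretation making the premise true."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))   # one interpretation of the variables
        if premise(v) and not conclusion(v):
            return False
    return True

conj = lambda v: v["p"] and v["q"]         # the formula p ∧ q
print(entails(conj, lambda v: v["p"]))     # True: p ∧ q entails p
print(entails(lambda v: v["p"], conj))     # False: p alone does not entail p ∧ q
```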

A system of logic is sound when its proof system cannot derive a conclusion from a set of premises unless it is semantically entailed by them. In other words, its proof system cannot lead to false conclusions, as defined by the semantics. A system is complete when its proof system can derive every conclusion that is semantically entailed by its premises. In other words, its proof system can lead to any true conclusion, as defined by the semantics. Thus, soundness and completeness together describe a system whose notions of validity and entailment line up perfectly.[103]

Systems of logic

Systems of logic are theoretical frameworks for assessing the correctness of reasoning and arguments. For over two thousand years, Aristotelian logic was treated as the canon of logic in the Western world,[104] but modern developments in this field have led to a vast proliferation of logical systems.[105] One prominent categorization divides modern formal logical systems into classical logic, extended logics, and deviant logics.[106]

Aristotelian

Aristotelian logic encompasses a great variety of topics. They include metaphysical theses about ontological categories and problems of scientific explanation. But in a more narrow sense, it is identical to term logic or syllogistics. A syllogism is a form of argument involving three propositions: two premises and a conclusion. Each proposition has three essential parts: a subject, a predicate, and a copula connecting the subject to the predicate.[107] For example, the proposition "Socrates is wise" is made up of the subject "Socrates", the predicate "wise", and the copula "is".[108] The subject and the predicate are the terms of the proposition. Aristotelian logic does not contain complex propositions made up of simple propositions. It differs in this aspect from propositional logic, in which any two propositions can be linked using a logical connective like "and" to form a new complex proposition.[109]

The square of opposition is often used to visualize the relations between the four basic categorical propositions in Aristotelian logic. It shows, for example, that the propositions "All S are P" and "Some S are not P" are contradictory, meaning that one of them has to be true while the other is false.

In Aristotelian logic, the subject can be universal, particular, indefinite, or singular. For example, the term "all humans" is a universal subject in the proposition "all humans are mortal". A similar proposition could be formed by replacing it with the particular term "some humans", the indefinite term "a human", or the singular term "Socrates".[110]

Aristotelian logic only includes predicates for simple properties of entities. But it lacks predicates corresponding to relations between entities.[111] The predicate can be linked to the subject in two ways: either by affirming it or by denying it.[112] For example, the proposition "Socrates is not a cat" involves the denial of the predicate "cat" to the subject "Socrates". Using combinations of subjects and predicates, a great variety of propositions and syllogisms can be formed. Syllogisms are characterized by the fact that the premises are linked to each other and to the conclusion by sharing one term in each case.[113] Thus, these three propositions contain three terms, referred to as major term, minor term, and middle term.[114] The central aspect of Aristotelian logic involves classifying all possible syllogisms into valid and invalid arguments according to how the propositions are formed.[112][115] For example, the syllogism "all men are mortal; Socrates is a man; therefore Socrates is mortal" is valid. The syllogism "all cats are mortal; Socrates is mortal; therefore Socrates is a cat", on the other hand, is invalid.[116]
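
The validity of a syllogistic form can be pictured computationally by interpreting its terms as sets. The Python sketch below tests the form "all M are P; all S are M; therefore all S are P" over every interpretation in a two-element universe; finding no counterexample among these small models is illustrative here rather than a general proof:

```python
from itertools import combinations

universe = {0, 1}

def subsets(u):
    """All subsets of the universe, i.e. all interpretations of a term."""
    items = list(u)
    return [set(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

valid = all(
    S <= P                        # conclusion: all S are P
    for M in subsets(universe)
    for P in subsets(universe)
    for S in subsets(universe)
    if M <= P and S <= M          # premises: all M are P, all S are M
)
print(valid)  # True: no interpretation satisfies the premises but not the conclusion
```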

Classical

Classical logic is distinct from traditional or Aristotelian logic. It encompasses propositional logic and first-order logic. It is "classical" in the sense that it is based on basic logical intuitions shared by most logicians.[117] These intuitions include the law of excluded middle, the double negation elimination, the principle of explosion, and the bivalence of truth.[118] It was originally developed to analyze mathematical arguments and was only later applied to other fields as well. Because of this focus on mathematics, it does not include logical vocabulary relevant to many other topics of philosophical importance. Examples of concepts it overlooks are the contrast between necessity and possibility and the problem of ethical obligation and permission. Similarly, it does not address the relations between past, present, and future.[119] Such issues are addressed by extended logics. They build on the basic intuitions of classical logic and expand it by introducing new logical vocabulary. This way, the exact logical approach is applied to fields like ethics or epistemology that lie beyond the scope of mathematics.[120]

Propositional logic

Propositional logic comprises formal systems in which formulae are built from atomic propositions using logical connectives. For instance, propositional logic represents the conjunction of two atomic propositions p and q as the complex formula p ∧ q. Unlike predicate logic, where terms and predicates are the smallest units, propositional logic takes full propositions with truth values as its most basic component.[121] Thus, propositional logic can only represent logical relationships that arise from the way complex propositions are built from simpler ones. But it cannot represent inferences that result from the inner structure of a proposition.[122]

First-order logic

Gottlob Frege's Begriffsschrift introduced the notion of a quantifier in a graphical notation, which here represents the judgment that ∀x. F(x) is true.

First-order logic includes the same propositional connectives as propositional logic but differs from it because it articulates the internal structure of propositions. This happens through devices such as singular terms, which refer to particular objects, predicates, which refer to properties and relations, and quantifiers, which treat notions like "some" and "all".[123] For example, to express the proposition "this raven is black", one may use the predicate B for the property "black" and the singular term r referring to the raven to form the expression B(r). To express that some objects are black, the existential quantifier ∃ is combined with the variable x to form the proposition ∃x B(x). First-order logic contains various rules of inference that determine how expressions articulated this way can form valid arguments, for example, that one may infer ∃x B(x) from B(r).[124]
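
These clauses can be mirrored by evaluating formulas in a small finite model. In the Python sketch below, the domain, the extension of the predicate B, and the referent of the term r are illustrative assumptions:

```python
domain = {"raven1", "raven2", "snowball"}   # objects in the model
B = {"raven1", "raven2"}                    # extension of the predicate "black"
r = "raven1"                                # referent of the singular term

print(r in B)                       # B(r): "this raven is black" is true
print(any(x in B for x in domain))  # ∃x B(x): "some object is black" is true
# The inference from B(r) to ∃x B(x) (existential generalization) holds in
# every model: if the named object is in B, some object is in B.
```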

Extended

Extended logics are logical systems that accept the basic principles of classical logic. They introduce additional symbols and principles to apply it to fields like metaphysics, ethics, and epistemology.[125]

Modal logic

Modal logic is an extension of classical logic. In its original form, sometimes called "alethic modal logic", it introduces two new symbols: ◇ expresses that something is possible while □ expresses that something is necessary.[126] For example, if the formula B(s) stands for the sentence "Socrates is a banker" then the formula ◇B(s) articulates the sentence "It is possible that Socrates is a banker".[127] To include these symbols in the logical formalism, modal logic introduces new rules of inference that govern what role they play in inferences. One rule of inference states that, if something is necessary, then it is also possible. This means that ◇A follows from □A. Another principle states that if a proposition is necessary then its negation is impossible and vice versa. This means that □A is equivalent to ¬◇¬A.[128]
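
The standard semantics for these operators uses possible worlds connected by an accessibility relation: □A is true at a world if A holds at every accessible world, and ◇A if A holds at some accessible world. A minimal Python sketch over an invented two-world model:

```python
worlds = {"w1", "w2"}
access = {"w1": {"w1", "w2"}, "w2": {"w2"}}      # accessibility relation
truth = {("B", "w1"): False, ("B", "w2"): True}  # where the atom B holds

def possibly(atom, w):       # ◇: true at some accessible world
    return any(truth[(atom, v)] for v in access[w])

def necessarily(atom, w):    # □: true at every accessible world
    return all(truth[(atom, v)] for v in access[w])

print(possibly("B", "w1"))     # True: B holds at the accessible world w2
print(necessarily("B", "w1"))  # False: B fails at the accessible world w1
# In models where every world accesses at least one world (as here),
# necessity implies possibility: ◇A follows from □A.
```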

Other forms of modal logic introduce similar symbols but associate different meanings with them to apply modal logic to other fields. For example, deontic logic concerns the field of ethics and introduces symbols to express the ideas of obligation and permission, i.e. to describe whether an agent has to perform a certain action or is allowed to perform it.[129] The modal operators in temporal modal logic articulate temporal relations. They can be used to express, for example, that something happened at one time or that something is happening all the time.[129] In epistemology, epistemic modal logic is used to represent the ideas of knowing something in contrast to merely believing it to be the case.[130]

Higher-order logic

Higher-order logics extend classical logic not by using modal operators but by introducing new forms of quantification.[131] Quantifiers correspond to terms like "all" or "some". In classical first-order logic, quantifiers are only applied to individuals. The formula "∃x(Apple(x) ∧ Sweet(x))" (some apples are sweet) is an example of the existential quantifier "∃" applied to the individual variable "x". In higher-order logics, quantification is also allowed over predicates. This increases its expressive power. For example, to express the idea that Mary and John share some qualities, one could use the formula "∃Q(Q(mary) ∧ Q(john))". In this case, the existential quantifier is applied to the predicate variable "Q".[132] The added expressive power is especially useful for mathematics since it allows for more succinct formulations of mathematical theories.[43] But it has drawbacks in regard to its meta-logical properties and ontological implications, which is why first-order logic is still more commonly used.[133]
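
Over a finite domain, quantification into predicate position can be pictured by ranging over predicate extensions. A Python sketch, with an invented stock of predicates:

```python
predicates = {                       # each predicate modeled by its extension
    "kind":    {"mary", "john"},
    "tall":    {"mary"},
    "teacher": {"john"},
}

# ∃Q(Q(mary) ∧ Q(john)): is there some predicate whose extension contains both?
share_quality = any("mary" in ext and "john" in ext
                    for ext in predicates.values())
print(share_quality)  # True: the extension of "kind" contains both
```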

Deviant

Deviant logics are logical systems that reject some of the basic intuitions of classical logic. Because of this, they are usually seen not as its supplements but as its rivals. Deviant logical systems differ from each other either because they reject different classical intuitions or because they propose different alternatives to the same issue.[134]

Intuitionistic logic is a restricted version of classical logic.[135] It uses the same symbols but excludes some rules of inference. For example, according to the law of double negation elimination, if a sentence is not not true, then it is true. This means that A follows from ¬¬A. This is a valid rule of inference in classical logic but it is invalid in intuitionistic logic. Another classical principle not part of intuitionistic logic is the law of excluded middle. It states that for every sentence, either it or its negation is true. This means that every proposition of the form A ∨ ¬A is true.[135] These deviations from classical logic are based on the idea that truth is established by verification using a proof. Intuitionistic logic is especially prominent in the field of constructive mathematics, which emphasizes the need to find or construct a specific example to prove its existence.[136]

Multi-valued logics depart from classicality by rejecting the principle of bivalence, which requires all propositions to be either true or false. For instance, Jan Łukasiewicz and Stephen Cole Kleene both proposed ternary logics which have a third truth value representing that a statement's truth value is indeterminate.[137] These logics have been applied in the field of linguistics. Fuzzy logics are multivalued logics that have an infinite number of "degrees of truth", represented by a real number between 0 and 1.[138]
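
As an illustration, the widely used Zadeh operators interpret the connectives over degrees of truth in [0, 1]; the Python sketch below shows them with arbitrarily chosen degrees (other operator families exist):

```python
def f_and(a, b): return min(a, b)   # fuzzy conjunction
def f_or(a, b):  return max(a, b)   # fuzzy disjunction
def f_not(a):    return 1.0 - a     # fuzzy negation

tall, young = 0.7, 0.4              # arbitrary degrees of truth
print(f_and(tall, young))           # 0.4: "tall and young"
print(f_or(tall, f_not(young)))     # 0.7: "tall or not young"
print(f_or(tall, f_not(tall)))      # 0.7: excluded middle need not reach 1
```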

Paraconsistent logics are logical systems that can deal with contradictions. They are formulated to avoid the principle of explosion: for them, it is not the case that anything follows from a contradiction.[139] They are often motivated by dialetheism, the view that contradictions are real or that reality itself is contradictory. Graham Priest is an influential contemporary proponent of this position and similar views have been ascribed to Georg Wilhelm Friedrich Hegel.[140]

Informal

Informal logic is usually carried out in a less systematic way. It often focuses on more specific issues, like investigating a particular type of fallacy or studying a certain aspect of argumentation. Nonetheless, some frameworks of informal logic have also been presented that try to provide a systematic characterization of the correctness of arguments.[141]

The pragmatic or dialogical approach to informal logic sees arguments as speech acts and not merely as a set of premises together with a conclusion.[142] As speech acts, they occur in a certain context, like a dialogue, which affects the standards of right and wrong arguments.[143] A prominent version by Douglas N. Walton understands a dialogue as a game between two players. The initial position of each player is characterized by the propositions to which they are committed and the conclusion they intend to prove. Dialogues are games of persuasion: each player has the goal of convincing the opponent of their own conclusion.[144] This is achieved by making arguments: arguments are the moves of the game.[145] They affect to which propositions the players are committed. A winning move is a successful argument that takes the opponent's commitments as premises and shows how one's own conclusion follows from them. This is usually not possible straight away. For this reason, it is normally necessary to formulate a sequence of arguments as intermediary steps, each of which brings the opponent a little closer to one's intended conclusion. Besides these positive arguments leading one closer to victory, there are also negative arguments preventing the opponent's victory by denying their conclusion.[144] Whether an argument is correct depends on whether it promotes the progress of the dialogue. Fallacies, on the other hand, are violations of the standards of proper argumentative rules.[146] These standards also depend on the type of dialogue. For example, the standards governing the scientific discourse differ from the standards in business negotiations.[147]

The epistemic approach to informal logic, on the other hand, focuses on the epistemic role of arguments.[148] It is based on the idea that arguments aim to increase our knowledge. They achieve this by linking justified beliefs to beliefs that are not yet justified.[149] Correct arguments succeed at expanding knowledge while fallacies are epistemic failures: they do not justify the belief in their conclusion.[150] For example, the fallacy of begging the question is a fallacy because it fails to provide independent justification for its conclusion, even though it is deductively valid.[151] In this sense, logical normativity consists in epistemic success or rationality.[149] The Bayesian approach is one example of an epistemic approach.[152] Central to Bayesianism is not just whether the agent believes something but the degree to which they believe it, the so-called credence. Degrees of belief are seen as subjective probabilities in the believed proposition, i.e. how certain the agent is that the proposition is true.[153] On this view, reasoning can be interpreted as a process of changing one's credences, often in reaction to new incoming information.[154] Correct reasoning and the arguments it is based on follow the laws of probability, for example, the principle of conditionalization. Bad or irrational reasoning, on the other hand, violates these laws.[155]
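
Conditionalization can be illustrated with Bayes' theorem: on learning evidence E, the agent's new credence in a hypothesis H becomes the old conditional credence P(H | E). A Python sketch with invented numbers:

```python
prior_H = 0.3          # credence in hypothesis H before the evidence
p_E_given_H = 0.8      # probability of evidence E if H is true
p_E_given_not_H = 0.2  # probability of E if H is false

# Total probability of the evidence, then Bayes' theorem.
p_E = p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)
posterior_H = p_E_given_H * prior_H / p_E
print(round(posterior_H, 3))  # 0.632: the evidence raises the credence in H
```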

Areas of research

Logic is studied in various fields. In many cases, this is done by applying its formal method to specific topics outside its scope, like to ethics or computer science.[156] In other cases, logic itself is made the subject of research in another discipline. This can happen in diverse ways. For instance, it can involve investigating the philosophical assumptions linked to the basic concepts used by logicians. Other ways include interpreting and analyzing logic through mathematical structures as well as studying and comparing abstract properties of formal logical systems.[157]

Philosophy of logic and philosophical logic

Philosophy of logic is the philosophical discipline studying the scope and nature of logic.[59] It examines many presuppositions implicit in logic, like how to define its basic concepts or the metaphysical assumptions associated with them.[158] It is also concerned with how to classify logical systems and considers the ontological commitments they incur.[159] Philosophical logic is one of the areas within the philosophy of logic. It studies the application of logical methods to philosophical problems in fields like metaphysics, ethics, and epistemology.[160] This application usually happens in the form of extended or deviant logical systems.[161]

Metalogic

Metalogic is the field of inquiry studying the properties of formal logical systems. For example, when a new formal system is developed, metalogicians may study it to determine which formulas can be proven in it. They may also study whether an algorithm could be developed to find a proof for each formula and whether every provable formula in it is a tautology. Finally, they may compare it to other logical systems to understand its distinctive features. A key issue in metalogic concerns the relation between syntax and semantics. The syntactic rules of a formal system determine how to deduce conclusions from premises, i.e. how to formulate proofs. The semantics of a formal system governs which sentences are true and which ones are false. This determines the validity of arguments since, for valid arguments, it is impossible for the premises to be true and the conclusion to be false. The relation between syntax and semantics concerns issues like whether every valid argument is provable and whether every provable argument is valid. Metalogicians also study whether logical systems are complete, sound, and consistent. They are interested in whether the systems are decidable and what expressive power they have. Metalogicians usually rely heavily on abstract mathematical reasoning when examining and formulating metalogical proofs. This way, they aim to arrive at precise and general conclusions on these topics.[162]

Mathematical logic

Bertrand Russell made various contributions to mathematical logic.[163]

The term "mathematical logic" is sometimes used as a synonym of "formal logic". But in a more restricted sense, it refers to the study of logic within mathematics. Major subareas include model theory, proof theory, set theory, and computability theory.[164] Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic. However, it can also include attempts to use logic to analyze mathematical reasoning or to establish logic-based foundations of mathematics.[165] The latter was a major concern in early 20th-century mathematical logic, which pursued the program of logicism pioneered by philosopher-logicians such as Gottlob Frege, Alfred North Whitehead, and Bertrand Russell. Mathematical theories were supposed to be logical tautologies, and their program was to show this by means of a reduction of mathematics to logic. Many attempts to realize this program failed, from the crippling of Frege's project in his Grundgesetze by Russell's paradox, to the defeat of Hilbert's program by Gödel's incompleteness theorems.[166]

Set theory originated in the study of the infinite by Georg Cantor, and it has been the source of many of the most challenging and important issues in mathematical logic. They include Cantor's theorem, the status of the Axiom of Choice, the question of the independence of the continuum hypothesis, and the modern debate on large cardinal axioms.[167]

Computability theory is the branch of mathematical logic that studies effective procedures to solve calculation problems. One of its main goals is to understand whether it is possible to solve a given problem using an algorithm. For instance, given a certain claim about the positive integers, it examines whether an algorithm can be found to determine if this claim is true. Computability theory uses various theoretical tools and models, such as Turing machines, to explore this type of issue.[168]

Computational logic

Conjunction (AND) is one of the basic operations of Boolean logic. It can be electronically implemented in several ways, for example, by using two transistors.

Computational logic is the branch of logic and computer science that studies how to implement mathematical reasoning and logical formalisms using computers. This includes, for example, automatic theorem provers, which employ rules of inference to construct a proof step by step from a set of premises to the intended conclusion without human intervention.[169] Logic programming languages are designed specifically to express facts using logical formulas and to draw inferences from these facts. For example, Prolog is a logic programming language based on predicate logic.[170] Computer scientists also apply concepts from logic to problems in computing. The works of Claude Shannon were influential in this regard. He showed how Boolean logic can be used to understand and implement computer circuits.[171] This can be achieved using electronic logic gates, i.e. electronic circuits with one or more inputs and usually one output. The truth values of propositions are represented by voltage levels. In this way, logic functions can be simulated by applying the corresponding voltages to the inputs of the circuit and determining the value of the function by measuring the voltage of the output.[172]
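
The correspondence between Boolean logic and circuits can be imitated in software. The Python sketch below builds an AND gate from NAND gates, one standard construction (shown here as functions rather than hardware):

```python
def nand(a, b):           # the NAND gate, functionally complete on its own
    return not (a and b)

def and_gate(a, b):       # AND = NAND followed by an inverter
    x = nand(a, b)
    return nand(x, x)     # NAND of a signal with itself acts as NOT

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), int(and_gate(a, b)))  # prints the truth table of AND
```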

Formal semantics of natural language

Formal semantics is a subfield of logic, linguistics, and the philosophy of language. The discipline of semantics studies the meaning of language. Formal semantics uses formal tools from the fields of symbolic logic and mathematics to give precise theories of the meaning of natural language expressions. It understands meaning usually in relation to truth conditions, i.e. it examines in which situations a sentence would be true or false. One of its central methodological assumptions is the principle of compositionality. It states that the meaning of a complex expression is determined by the meanings of its parts and how they are combined. For example, the meaning of the verb phrase "walk and sing" depends on the meanings of the individual expressions "walk" and "sing". Many theories in formal semantics rely on model theory. This means that they employ set theory to construct a model and then interpret the meanings of expressions in relation to the elements in this model. For example, the term "walk" may be interpreted as the set of all individuals in the model that share the property of walking. Early influential theorists in this field were Richard Montague and Barbara Partee, who focused their analysis on the English language.[173]
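
A model-theoretic treatment of the "walk and sing" example can be sketched with sets: each predicate denotes a set of individuals, and compositionality interprets the conjunction as their intersection. The model below is an invented assumption:

```python
individuals = {"alice", "bob", "carol"}  # the model's domain
walk = {"alice", "bob"}                  # denotation of "walk"
sing = {"bob", "carol"}                  # denotation of "sing"

walk_and_sing = walk & sing              # compositional denotation of "walk and sing"
print("bob" in walk_and_sing)            # True: "Bob walks and sings" is true
print("alice" in walk_and_sing)          # False: Alice walks but does not sing
```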

Epistemology of logic

The epistemology of logic studies how one knows that an argument is valid or that a proposition is logically true.[174] This includes questions like how to justify that modus ponens is a valid rule of inference or that contradictions are false.[175] The traditionally dominant view is that this form of logical understanding belongs to knowledge a priori.[176] In this regard, it is often argued that the mind has a special faculty to examine relations between pure ideas and that this faculty is also responsible for apprehending logical truths.[177] A similar approach understands the rules of logic in terms of linguistic conventions. On this view, the laws of logic are trivial since they are true by definition: they just express the meanings of the logical vocabulary.[178]

Some theorists, like Hilary Putnam and Penelope Maddy, object to the view that logic is knowable a priori. They hold instead that logical truths depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world. According to this view, they may be explored by studying general patterns of the fundamental sciences. For example, it has been argued that certain insights of quantum mechanics refute the principle of distributivity in classical logic, which states that the formula (A ∧ (B ∨ C)) is equivalent to ((A ∧ B) ∨ (A ∧ C)). This claim can be used as an empirical argument for the thesis that quantum logic is the correct logical system and should replace classical logic.[179]

History

Top row: Aristotle, who established the canon of western philosophy;[108] and Avicenna, whose system replaced Aristotelian logic in Islamic discourse.[180] Bottom row: William of Ockham, a major figure of medieval scholarly thought;[181] and Gottlob Frege, one of the founders of modern symbolic logic.[182]

Logic was developed independently in several cultures during antiquity. One major early contributor was Aristotle, who developed term logic in his Organon and Prior Analytics.[183] He was responsible for the introduction of the hypothetical syllogism[184] and temporal modal logic.[185] Further innovations include inductive logic[186] as well as the discussion of new logical concepts such as terms, predicables, syllogisms, and propositions. Aristotelian logic was highly regarded in classical and medieval times, both in Europe and the Middle East. It remained in wide use in the West until the early 19th century.[187] It has now been superseded by later work, though many of its key insights are still present in modern systems of logic.[188]

Ibn Sina (Avicenna) was the founder of Avicennian logic, which replaced Aristotelian logic as the dominant system of logic in the Islamic world.[189] It influenced Western medieval writers such as Albertus Magnus and William of Ockham.[190] Ibn Sina wrote on the hypothetical syllogism[191] and on the propositional calculus.[192] He developed an original "temporally modalized" syllogistic theory, involving temporal logic and modal logic.[193] He also made use of inductive logic, such as his methods of agreement, difference, and concomitant variation, which are critical to the scientific method.[191] Fakhr al-Din al-Razi was another influential Muslim logician. He criticized Aristotelian syllogistics and formulated an early system of inductive logic, foreshadowing the system of inductive logic developed by John Stuart Mill.[194]

During the Middle Ages, many translations and interpretations of Aristotelian logic were made. The works of Boethius were particularly influential. Besides translating Aristotle's work into Latin, he also produced textbooks on logic.[195] Later, the works of Islamic philosophers such as Ibn Sina and Ibn Rushd (Averroes) were drawn on. This expanded the range of ancient works available to medieval Christian scholars since more Greek work was available to Muslim scholars than had been preserved in Latin commentaries. In 1323, William of Ockham's influential Summa Logicae was released. It is a comprehensive treatise on logic that discusses many basic concepts of logic and provides a systematic exposition of types of propositions and their truth conditions.[196]

In Chinese philosophy, the School of Names and Mohism were particularly influential. The School of Names focused on the use of language and on paradoxes. For example, Gongsun Long proposed the white horse paradox, which defends the thesis that a white horse is not a horse. The school of Mohism also acknowledged the importance of language for logic and tried to relate the ideas in these fields to the realm of ethics.[197]

In India, the study of logic was primarily pursued by the schools of Nyaya, Buddhism, and Jainism. It was not treated as a separate academic discipline and discussions of its topics usually happened in the context of epistemology and theories of dialogue or argumentation.[198] In Nyaya, inference is understood as a source of knowledge (pramāṇa). It follows the perception of an object and tries to arrive at conclusions, for example, about the cause of this object.[199] A similar emphasis on the relation to epistemology is also found in Buddhist and Jainist schools of logic, where inference is used to expand the knowledge gained through other sources.[200] Some of the later theories of Nyaya, belonging to the Navya-Nyāya school, resemble modern forms of logic, such as Gottlob Frege's distinction between sense and reference and his definition of number.[201]

The syllogistic logic developed by Aristotle predominated in the West until the mid-19th century, when interest in the foundations of mathematics stimulated the development of modern symbolic logic.[202] Many see Gottlob Frege's Begriffsschrift as the birthplace of modern logic. Gottfried Wilhelm Leibniz's idea of a universal formal language is often considered a forerunner. Other pioneers were George Boole, who invented Boolean algebra as a mathematical system of logic, and Charles Peirce, who developed the logic of relatives. Alfred North Whitehead and Bertrand Russell, in turn, condensed many of these insights in their work Principia Mathematica. Modern logic introduced novel concepts, such as functions, quantifiers, and relational predicates. A hallmark of modern symbolic logic is its use of formal language to precisely codify its insights. In this regard, it departs from earlier logicians, who relied mainly on natural language.[203] Of particular influence was the development of first-order logic, which is usually treated as the standard system of modern logic.[204] Its analytical generality allowed the formalization of mathematics and drove the investigation of set theory. It also made Alfred Tarski's approach to model theory possible and provided the foundation of modern mathematical logic.[205]

from Grokipedia
Logic is the systematic study of the principles of valid inference and correct reasoning, serving as a non-empirical science akin to mathematics that evaluates arguments through their structure rather than their content. Originating in ancient Greece, logic was pioneered by Aristotle in the 4th century BCE through his development of syllogistic reasoning, a deductive method analyzing categorical propositions in works collectively known as the Organon, which laid the foundation for evaluating the validity of arguments based on premises and conclusions. This Aristotelian framework dominated Western thought for over two millennia, influencing medieval scholasticism and Islamic philosophy until the 19th century, when advancements in symbolic notation transformed the field. Key figures like George Boole introduced algebraic approaches to logic, as in his 1847 work The Mathematical Analysis of Logic, while Gottlob Frege's 1879 Begriffsschrift established modern quantificational logic, enabling precise formalization of mathematical proofs and paving the way for Bertrand Russell and Alfred North Whitehead's Principia Mathematica (1910–1913), which sought to ground mathematics in logic. In the 20th century, Kurt Gödel's incompleteness theorems (1931) revealed fundamental limits to formal systems, profoundly impacting the foundations of mathematics. Contemporary logic encompasses diverse branches, including formal logic, which uses symbolic languages to assess deductive validity, and informal logic, which examines everyday argumentation and fallacies. Within formal logic, propositional logic deals with truth-functional connectives like conjunction and disjunction, while predicate logic (or first-order logic) incorporates quantifiers to handle relations and variables, forming the basis for much of mathematical logic. Specialized areas such as modal logic explore necessity and possibility, temporal logic addresses time-dependent statements, and intuitionistic logic rejects the law of excluded middle, aligning with constructive mathematics. Beyond theory, logic underpins critical disciplines: in philosophy, it clarifies concepts like truth and knowledge; in mathematics, it supports proofs and axiomatic systems; in computer science, it drives programming languages, automated reasoning, and verification; and in linguistics, it models semantics. These applications highlight logic's enduring role in advancing human understanding and technological innovation.

Definition

Formal Logic

Formal logic is a branch of logic that examines the validity of inferences based on their structural form rather than their specific content, employing symbolic languages to represent statements and formal rules to derive conclusions from premises. This approach abstracts away from the particular meanings of words or propositions, focusing instead on patterns of reasoning that guarantee truth preservation. By using symbols—such as variables for objects and predicates for properties—formal logic enables the precise analysis of arguments, ensuring that conclusions follow necessarily if the premises are true. Key characteristics of formal logic include its emphasis on precision, deductivity, and the elimination of ambiguity. Precision arises from a strictly defined syntax that specifies how symbols combine to form valid expressions, preventing misinterpretation. Deductivity refers to the use of inference rules, such as modus ponens, which allow step-by-step derivations within a proof system, ensuring that every conclusion is logically entailed by the premises. Ambiguity is avoided through the interplay of syntax and semantics: syntax governs the form of expressions, while semantics assigns interpretations to those forms, clarifying truth conditions across possible worlds or models. These features make formal logic a rigorous tool for evaluating argument validity independently of empirical content. In formal systems, syntax involves the recursive construction of well-formed formulas (wffs), starting from basic atomic formulas (e.g., a predicate applied to terms) and building compound expressions according to precise rules. Semantics, in turn, provides interpretations—mappings of symbols to domains and relations—and models, which are structures in which a formula holds true if it is satisfied under the interpretation for all relevant assignments. For instance, a classic syllogism can be symbolized to highlight its deductive structure: premises stating that all members of one category belong to a second category, and that all members of the second possess a certain property, lead formally to the conclusion that all members of the first possess that property, derivable via inference rules without regard to the categories' content. Formal logic thus contrasts with informal logic, its counterpart for analyzing everyday discourse, by prioritizing symbolic rigor over contextual nuances.
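To make the syntax/semantics split concrete, here is a minimal sketch in Python, not drawn from the source: formulas are recursive data (the wffs), and a separate evaluation function supplies the semantics relative to an interpretation. All names are illustrative.

```python
from dataclasses import dataclass

# Syntax: well-formed formulas built recursively from atoms and connectives.
@dataclass(frozen=True)
class Atom:
    name: str            # an atomic proposition such as "P"

@dataclass(frozen=True)
class Not:
    sub: object          # negation of a sub-formula

@dataclass(frozen=True)
class And:
    left: object
    right: object

# Semantics: recursively map a formula to True/False under a valuation.
def evaluate(formula, valuation):
    if isinstance(formula, Atom):
        return valuation[formula.name]
    if isinstance(formula, Not):
        return not evaluate(formula.sub, valuation)
    if isinstance(formula, And):
        return evaluate(formula.left, valuation) and evaluate(formula.right, valuation)
    raise TypeError("not a well-formed formula")

# "P and not Q" is true under the valuation P=true, Q=false
wff = And(Atom("P"), Not(Atom("Q")))
print(evaluate(wff, {"P": True, "Q": False}))  # True
```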

Informal Logic

Informal logic is the branch of logic whose task is to develop non-formal standards, criteria, and procedures for the analysis, interpretation, evaluation, criticism, and construction of argumentation in everyday discourse. It centers on the study of everyday reasoning in natural language, with a particular emphasis on detecting weaknesses in arguments related to relevance (whether the premises bear on the conclusion) and acceptability (whether the premises are plausible or justified). This approach addresses arguments as they appear in ordinary communication, including public debates, editorials, and casual discussions, rather than idealized or symbolic forms. Key techniques in informal logic include argument reconstruction, which entails clarifying the structure of an argument by identifying its explicit components and uncovering any unstated elements. A central part of this process is the identification of implicit premises—unstated propositions required to connect the stated premises to the conclusion, essential for fully understanding and critiquing the argument's logic. Evaluation relies on criteria such as acceptability, relevance, and sufficiency (whether the premises provide enough support for the conclusion), applied contextually to determine an argument's overall strength. Informal logic differs from rhetoric in its focus on truth-seeking through normative standards for rational argumentation, rather than on effective persuasion or audience influence. While rhetoric prioritizes communicative strategies to sway opinions, informal logic promotes critical scrutiny to advance understanding and resolve disputes on evidential grounds. Practical examples of informal analysis include diagramming arguments to map their components visually, such as James Freeman's model, which adapts Stephen Toulmin's layout of claims, data, warrants, and backings for evaluation. Another approach involves assessing dialectical exchanges, as in pragma-dialectics, where arguments are examined within structured discussions to ensure adherence to rules for orderly resolution of differences of opinion. For instance, in analyzing a debate on policy, one might reconstruct implicit assumptions about societal values and evaluate their sufficiency against counterarguments.

Basic Concepts

Propositions and Truth Values

In logic, a proposition is the abstract content or meaning expressed by a declarative sentence, which can be evaluated as either true or false but not both. This distinguishes propositions from sentences themselves, which are concrete linguistic forms varying by language or phrasing, whereas propositions capture the invariant semantic content that bears a truth value. For instance, the English sentence "The sky is blue" and its French equivalent "Le ciel est bleu" express the same proposition, which is true under conditions where the sky appears blue due to the atmospheric scattering of light. Central to classical logic is the principle of bivalence, which asserts that every meaningful proposition is exactly true or exactly false, excluding any third value, indeterminacy, or gap in truth assignment. This principle underpins the semantic framework of classical systems, ensuring that truth evaluations are exhaustive and mutually exclusive for all propositions. Truth values thus function as semantic assignments, reflecting whether the proposition corresponds to reality: an atomic proposition like "Paris is the capital of France" receives the value true because it accurately states a geographical fact, while "Paris is the capital of Germany" is false. Propositions serve as the foundational elements in arguments, where their truth values enable the construction of inferences by evaluating premises and conclusions.

Arguments and Inference

In logic, an argument is defined as a set of statements, known as premises, intended to provide reasons for accepting another statement, called the conclusion, through a process of inference. The premises are propositions that offer support or evidence, while the conclusion is the claim that follows from them. This structure allows for the evaluation of reasoning by examining whether the premises adequately justify the conclusion. Arguments can be explicit, where all premises and the conclusion are fully stated, or implicit, where some elements are omitted under the assumption that they are understood by the audience. A classic example of an explicit argument is the syllogism: "All men are mortal; Socrates is a man; therefore, Socrates is mortal," in which the premises explicitly lead to the conclusion. Implicit arguments often take the form of enthymemes, which are arguments with one or more suppressed premises that the audience is expected to supply based on shared knowledge. For instance, the enthymeme "Socrates is a man, therefore he is mortal" implicitly relies on the premise that all men are mortal. Inference refers to the reasoning process by which a conclusion is drawn from given premises, aiming to extend or apply the information provided. Within arguments, inferences connect premises to conclusions, often with the goal of preserving truth: if the premises are true, the conclusion should follow as true. This truth-preserving aspect underscores the reliability of the inference in logical evaluation. Propositions in arguments carry truth values—true or false—that influence the overall assessment of the inference's strength.
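A small toy sketch can make the premises-to-conclusion structure concrete; the encoding below (facts as strings, conditionals as antecedent/consequent pairs) is invented for illustration and is not a standard library.

```python
# Repeatedly apply modus ponens: from P and a conditional (P, Q), infer Q.
def derive(premises):
    facts = {p for p in premises if isinstance(p, str)}
    conditionals = {p for p in premises if isinstance(p, tuple)}
    derived = set(facts)
    changed = True
    while changed:                       # iterate until no new conclusions appear
        changed = False
        for antecedent, consequent in conditionals:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

# "Socrates is a man" plus "if Socrates is a man, Socrates is mortal"
premises = {"socrates_is_man", ("socrates_is_man", "socrates_is_mortal")}
print(derive(premises))  # contains 'socrates_is_mortal'
```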

Validity, Soundness, and Logical Truth

In logic, an argument is valid if, in every possible interpretation or scenario, the truth of all its premises guarantees the truth of its conclusion, irrespective of whether the premises themselves are actually true in the real world. This semantic notion of validity, formalized model-theoretically by Alfred Tarski, emphasizes preservation of truth across all models where the premises hold, ensuring no interpretation exists where the premises are true but the conclusion false. For instance, the argument "All humans are mortal; Socrates is human; therefore, Socrates is mortal" is valid because its structure ensures the conclusion follows necessarily from the premises, though the actual truth of the premises depends on empirical facts. Soundness builds upon validity by requiring not only that the argument's form preserves truth but also that all premises are factually true in the given context, thereby guaranteeing the conclusion's truth. In proof-theoretic terms, a deductive system is sound if every provable formula is semantically valid, meaning derivations from true premises yield true conclusions without error. Thus, the aforementioned argument is sound only if "All humans are mortal" and "Socrates is human" are indeed true, distinguishing soundness from mere validity by incorporating empirical verification of the premises. Logical truth pertains to statements that are necessarily true due to their logical form alone, holding in all possible interpretations or models, such as the tautology "If P, then P" or "Either it is raining or it is not raining." These are often called tautologies in propositional logic or theorems derivable without premises in formal systems, reflecting their a priori necessity as articulated by philosophers like Kant and Leibniz. Unlike factual truths, which are contingent and empirically verifiable (e.g., "water boils at 100°C at standard atmospheric pressure"), logical truths depend solely on syntactic structure and semantic rules, independent of worldly content or observation. This distinction underscores that logical truths are formal necessities, not discoverable through experience but through analysis of form. These concepts—validity, soundness, and logical truth—form the foundation for evaluating arguments in formal logic, ensuring reliable inference from premises to conclusions.
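The model-theoretic test for validity is directly executable in propositional cases. A minimal sketch, assuming formulas are encoded as Python predicates over valuations: an argument is valid exactly when no valuation is a countermodel.

```python
from itertools import product

def is_valid(atoms, premises, conclusion):
    """Valid iff no valuation makes all premises true and the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False               # found a countermodel
    return True

# Modus ponens (valid): from P -> Q and P, infer Q.
premises = [lambda v: (not v["P"]) or v["Q"],   # P -> Q as a truth function
            lambda v: v["P"]]
print(is_valid(["P", "Q"], premises, lambda v: v["Q"]))          # True

# Affirming the consequent (invalid): from P -> Q and Q, infer P.
premises = [lambda v: (not v["P"]) or v["Q"], lambda v: v["Q"]]
print(is_valid(["P", "Q"], premises, lambda v: v["P"]))          # False
```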

Types of Reasoning

Deductive Reasoning

Deductive reasoning is a form of inference in which the truth of the conclusion is guaranteed by the truth of its premises, meaning that if the premises are true, the conclusion must necessarily be true. This non-ampliative process ensures that the conclusion does not introduce new information beyond what is already entailed by the premises, distinguishing it from forms of reasoning that extend knowledge probabilistically. Key characteristics of deductive reasoning include its necessity, monotonicity, and analytic nature. Necessity arises because the logical form preserves truth: a valid deductive argument cannot lead from true premises to a false conclusion. Monotonicity refers to the property that adding further premises to a valid argument cannot invalidate the conclusion; the entailment remains intact or strengthens. The analytic nature means that the conclusion is logically contained within the premises, deriving its truth solely from their meanings and logical relations rather than empirical observation. Classic examples of deductive reasoning include categorical syllogisms and hypothetical reasoning. A categorical syllogism, such as "All A are B; all B are C; therefore, all A are C," demonstrates how universal premises lead to a necessary conclusion about categories. Hypothetical reasoning involves conditional statements, where premises establish a necessary connection, such as deriving an outcome from an antecedent and its condition, ensuring the conclusion follows inescapably. Deductive reasoning forms the foundation for proofs in formal logical systems, where arguments are constructed and verified to establish entailments rigorously. In contrast to ampliative reasoning, which allows for conclusions that go beyond the premises with some uncertainty, deductive methods provide conclusive certainty when premises hold.

Ampliative Reasoning

Ampliative reasoning refers to forms of inference in which the conclusion extends beyond the information strictly contained in the premises, introducing new content or generalizations that are not deductively entailed but are supported to varying degrees of probability or plausibility. Unlike deductive reasoning, which preserves truth from premises to conclusion with certainty, ampliative inference allows for the expansion of knowledge while acknowledging uncertainty, making it essential for scientific discovery, everyday decision-making, and hypothesis formation. This type of reasoning, often contrasted with explicative or analytic inference, amplifies the scope of beliefs by drawing conclusions that add substantive information not explicitly present in the initial data. Inductive reasoning, a primary form of ampliative inference, involves generalizing from specific observations to broader principles or predictions, where the conclusion goes beyond the observed instances but gains strength from the size and relevance of the sample. For example, repeatedly observing white swans in various locations might lead to the generalization that all swans are white, though this remains probabilistic and vulnerable to counterexamples like black swans discovered later. The justification for such inferences traces back to David Hume, who highlighted the "problem of induction" by questioning how past regularities can reliably project to unobserved cases without circular assumptions. In practice, the strength of an inductive argument depends on factors such as sample size, diversity of evidence, and absence of bias, enabling applications in fields like statistics and empirical science. Abductive reasoning, another key ampliative process, consists of inferring the most plausible hypothesis that explains given evidence, often termed "inference to the best explanation." Introduced by Charles Sanders Peirce in the late 19th century, it posits that when multiple hypotheses could account for data—such as unusual symptoms suggesting a specific disease—the one offering the simplest, most comprehensive explanation is preferred. A classic example is inferring that a kitchen mess results from a late-night snack rather than a burglary, based on contextual clues like open snack packages. In scientific contexts, abductive steps have driven discoveries, such as hypothesizing Neptune's existence to explain irregularities in Uranus's orbit. Unlike induction's focus on patterns, abduction emphasizes explanatory power, though it too involves uncertainty since alternative explanations may emerge. The evaluation of ampliative reasoning relies on measures of evidential support, such as probabilistic confirmation and Bayesian updating, which assess how evidence increases the likelihood of a hypothesis relative to alternatives. Confirmation theory, developed by philosophers like Rudolf Carnap, quantifies support through likelihood ratios, where evidence confirms a hypothesis if it is more probable under that hypothesis than under rivals. Bayesian approaches conceptualize this via prior beliefs updated by new evidence to yield posterior probabilities, as in Bayes' theorem, which formally balances initial plausibility with evidential fit without guaranteeing truth. These methods provide a framework for weighing inductive generalizations or abductive hypotheses, though challenges like the choice of priors persist. Fallacies, such as hasty generalization in induction or overlooking rival explanations in abduction, can undermine these inferences.
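The Bayesian updating mentioned above is easy to make concrete; the probabilities below are invented for illustration.

```python
def bayes_update(prior, likelihood, likelihood_alt):
    """Posterior P(H|E) from prior P(H), P(E|H), and P(E|not H) via Bayes' theorem."""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Hypothesis H: the patient has the disease (prior 1%).
# Evidence E: a symptom present in 90% of cases but only 5% of healthy people.
posterior = bayes_update(prior=0.01, likelihood=0.90, likelihood_alt=0.05)
print(round(posterior, 3))  # ~0.154: the evidence confirms H without proving it
```

Note how the evidence raises the hypothesis from 1% to roughly 15%: confirmation increases plausibility by a measurable degree but, unlike deduction, never guarantees truth.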

Fallacies and Errors

Fallacies and errors in logic refer to flawed patterns of reasoning that undermine the validity of deductive arguments or the strength of ampliative ones, leading to conclusions that do not logically follow from the premises. These errors are broadly classified into formal fallacies, which arise from structural defects in the argument's form regardless of content, and informal fallacies, which stem from issues in the argument's content, context, or language. Such flaws can occur across deductive and ampliative reasoning, compromising the reliability of inferences in both. Formal fallacies involve invalid logical structures that fail to preserve truth from premises to conclusion, detectable through analysis of the argument's form. A classic example is denying the antecedent, where one argues: "If P, then Q; not P; therefore, not Q." This is invalid because the absence of P does not preclude Q from occurring through other means. Other formal fallacies include affirming the consequent ("If P, then Q; Q; therefore, P"), which similarly overlooks alternative causes for Q. These errors highlight the importance of ensuring that the logical form guarantees the conclusion's truth when premises are true. Informal fallacies, by contrast, depend on the specific content or context of the argument rather than its form, often involving irrelevance, ambiguity, or insufficient evidence. The ad hominem fallacy occurs when an arguer attacks the character, motives, or circumstances of the opponent instead of addressing the argument itself, such as dismissing a proposal by claiming the proponent is untrustworthy due to personal flaws. Another common type is the slippery slope fallacy, where a minor action is claimed to inevitably lead to a chain of extreme, undesirable consequences without supporting evidence for the causal links, for instance, arguing that legalizing a substance will lead to societal collapse. Hasty generalization represents an inductive error by drawing a broad conclusion from an unrepresentative or insufficient sample, such as concluding that all members of a group share a trait based on one atypical example. Detecting and avoiding fallacies plays a central role in critical thinking and argumentation by promoting rigorous evaluation of arguments. For formal fallacies, one can scrutinize the argument's structure against valid forms, while informal fallacies require assessing relevance, evidence quality, and potential biases in the content. Avoidance involves constructing arguments with clear premises, sufficient support, and direct relevance to the conclusion, thereby enhancing the persuasiveness and integrity of discourse in philosophy, public debate, and everyday reasoning.

Core Formal Systems

Propositional Logic

Propositional logic, also known as sentential logic, is a branch of logic that deals with the structure of compound statements formed from simpler atomic statements using truth-functional connectives, focusing on their validity without regard to internal content. It provides the foundational framework for analyzing arguments based on how the truth values of components determine the truth value of the whole. Atomic propositions, denoted by uppercase letters such as P, Q, or R, represent basic declarative statements that are either true or false, without further analysis in this system. Compound propositions are constructed by applying connectives to atomic or other compound propositions. The standard connectives include negation (¬P), which reverses the truth value of P; conjunction (P ∧ Q), true only if both P and Q are true; disjunction (P ∨ Q), true if at least one of P or Q is true; implication (P → Q), false only if P is true and Q is false; and biconditional (P ↔ Q), true if P and Q have the same truth value. The semantics of these connectives are defined by truth tables, which enumerate all possible truth value assignments to the atomic propositions and compute the resulting truth value of the compound proposition. The following table presents the truth tables for the connectives, where T denotes true and F denotes false:

P | Q | ¬P | P ∧ Q | P ∨ Q | P → Q | P ↔ Q
T | T | F  | T     | T     | T     | T
T | F | F  | F     | T     | F     | F
F | T | T  | F     | T     | T     | F
F | F | T  | F     | F     | T     | T

Truth tables are used to identify tautologies, formulas that are true under every possible interpretation, such as the transitivity of implication, ((P → Q) ∧ (Q → R)) → (P → R). A key equivalence is that implication is logically equivalent to the disjunction of the negation of the antecedent and the consequent, i.e., (P → Q) ↔ (¬P ∨ Q), which can be verified by the following truth table:

P | Q | ¬P | ¬P ∨ Q | P → Q | (P → Q) ↔ (¬P ∨ Q)
T | T | F  | T      | T     | T
T | F | F  | F      | F     | T
F | T | T  | T      | T     | T
F | F | T  | T      | T     | T

This equivalence holds as a tautology, true in all rows. Semantically, an interpretation (or valuation) is a function that assigns T or F to each atomic proposition and is recursively extended to compound propositions using the truth tables. A formula is satisfiable if there exists at least one interpretation under which it evaluates to T; it is valid (a tautology) if it evaluates to T under every interpretation. Models are the interpretations that satisfy a given formula or set of formulas, providing the basis for concepts like logical consequence, where Γ ⊨ φ means every model of the premises in Γ is also a model of φ. Proof systems formalize valid inferences through rules that manipulate formulas to derive theorems. Natural deduction is a prominent system, featuring introduction and elimination rules for each connective, such as conjunction introduction (from P and Q, infer P ∧ Q) and conjunction elimination (from P ∧ Q, infer P), along with disjunction rules and others. A core rule is modus ponens for implication: from P → Q and P, infer Q. These rules ensure soundness, where every provable formula is semantically valid. The completeness theorem for propositional logic states that the natural deduction system (or equivalent Hilbert-style systems) is complete: if a formula is valid (true in all models), then it is provable within the system, and conversely, every provable formula is valid. This result, established in early foundational work, guarantees that truth-table semantics and syntactic proofs are coextensive for propositional logic.
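The equivalence just verified by truth table can also be checked mechanically. A minimal sketch, assuming formulas are encoded as Python functions from valuations to booleans:

```python
from itertools import product

def is_tautology(formula, atoms):
    """True iff the formula holds under every assignment of truth values."""
    return all(formula(dict(zip(atoms, vals)))
               for vals in product([True, False], repeat=len(atoms)))

def implies(p, q):
    """Truth table of P -> Q: false only when P is true and Q is false."""
    return (not p) or q

# (P -> Q) <-> (not P or Q) should hold in all four rows
equivalence = lambda v: implies(v["P"], v["Q"]) == ((not v["P"]) or v["Q"])
print(is_tautology(equivalence, ["P", "Q"]))  # True

# Transitivity of implication: ((P -> Q) and (Q -> R)) -> (P -> R)
transitivity = lambda v: implies(implies(v["P"], v["Q"]) and implies(v["Q"], v["R"]),
                                 implies(v["P"], v["R"]))
print(is_tautology(transitivity, ["P", "Q", "R"]))  # True
```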

First-Order Logic

First-order logic, also known as predicate logic, extends the expressive power of propositional logic by incorporating variables, predicates, functions, and quantifiers, enabling the formalization of statements about objects in a domain and their relations. This allows for reasoning over structures with quantifiable elements, such as "all elements satisfy a property" or "some element relates to another," building on propositional connectives like negation, conjunction, and implication to form complex formulas. Unlike propositional logic, which treats propositions as atomic, first-order logic introduces object-level structure to model mathematical and philosophical arguments more precisely. The syntax of first-order logic is defined over a vocabulary consisting of a set of variables (e.g., x, y, z), constant symbols (e.g., a, b), function symbols of various arities (e.g., unary f, binary g), and predicate symbols of various arities (e.g., unary P, binary R). Terms are built inductively: variables and constants are terms, and if f is an n-ary function symbol and t₁, …, tₙ are terms, then f(t₁, …, tₙ) is a term. Atomic formulas are formed by applying an n-ary predicate symbol to n terms, such as P(t) or R(t₁, t₂), or by equality between terms, t₁ = t₂. Well-formed formulas (wffs) are then constructed recursively using propositional connectives—¬φ, φ ∧ ψ, φ ∨ ψ, φ → ψ, φ ↔ ψ—and quantifiers: if φ is a wff and x a variable, then ∀x φ (universal quantification) and ∃x φ (existential quantification) are wffs, with quantifiers binding variables in their scope. Sentences are closed formulas with no free variables, forming the basis for logical assertions. Semantically, first-order logic is interpreted over structures, each comprising a non-empty domain D (the universe of discourse) and an interpretation function that assigns meanings to non-logical symbols. Constants are mapped to elements of D, n-ary functions to functions from Dⁿ to D, and n-ary predicates to relations on Dⁿ. A variable assignment s maps variables to elements of D, and satisfaction M, s ⊨ φ (where M is the structure) is defined recursively: for atomic P(t₁, …, tₙ), it holds if the denotations of the terms under s lie in the relation Pᴹ; for quantified formulas, M, s ⊨ ∀x φ if for every d ∈ D, M, s[d/x] ⊨ φ, where s[d/x] modifies s to assign d to x; and ∃x φ holds if there exists some d ∈ D such that M, s[d/x] ⊨ φ. For sentences (no free variables), M ⊨ φ if M, s ⊨ φ for all assignments s. A structure M is a model of a sentence φ if M ⊨ φ, and φ is valid if every structure models it. For example, ∀x P(x) is true in M if every element of the domain D satisfies the unary predicate P. Inference in first-order logic often involves transforming formulas into standard forms suited to automated reasoning. Prenex normal form conversion moves all quantifiers to the front of a formula while preserving logical equivalence, yielding a sequence of quantifiers followed by a quantifier-free matrix, achievable through equivalences like ∀x (φ ∧ ψ) ≡ (∀x φ) ∧ ψ (if x is not free in ψ) and pulling quantifiers outward.
Skolemization further eliminates existential quantifiers in prenex form by replacing existentially quantified variables with Skolem functions or constants dependent on the preceding universal variables; for instance, ∀x ∃y R(x, y) becomes ∀x R(x, f(x)), where f is a new function symbol, preserving satisfiability but not equivalence. These transformations facilitate resolution-based proof procedures. First-order logic proof systems are sound and complete: every provable formula is valid (soundness), and every valid formula is provable (completeness). Kurt Gödel proved completeness in 1930, showing that if a set of sentences Γ is consistent (no contradiction derivable), then it has a model; equivalently, every logically valid sentence is provable from no assumptions. This theorem links syntactic provability to semantic truth and is foundational for model theory, though standard proofs rely on non-constructive principles such as weak forms of the axiom of choice. Despite its completeness, first-order logic has limitations: the validity problem—determining whether a sentence is true in all models—is undecidable, meaning no algorithm exists to decide validity for arbitrary sentences. Alonzo Church demonstrated this in 1936 by reducing an undecidable problem of arithmetic to first-order validity, showing that if validity were decidable, it would solve undecidable problems in arithmetic. This undecidability arises from the logic's ability to encode computations and arithmetic, limiting fully automatic decision procedures to specific fragments.
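The Skolemization step admits a compact sketch. The encoding below (quantifier prefix as a list, matrix as a string with naive textual substitution) is an invented toy representation, not a standard library.

```python
from itertools import count

fresh = (f"f{i}" for i in count(1))   # supply of fresh Skolem function names

def skolemize(prefix, matrix):
    """prefix: list like [('forall','x'), ('exists','y')]; matrix: string.
    Each existential variable becomes a Skolem term over the universals so far."""
    universals, out_prefix = [], []
    for quant, var in prefix:
        if quant == "forall":
            universals.append(var)
            out_prefix.append((quant, var))
        else:  # 'exists': replace var by a Skolem function of current universals
            name = next(fresh)
            term = f"{name}({', '.join(universals)})" if universals else name
            # naive textual substitution, adequate for this toy example
            matrix = matrix.replace(var, term)
    return out_prefix, matrix

# forall x exists y R(x, y)  ==>  forall x R(x, f1(x))
print(skolemize([("forall", "x"), ("exists", "y")], "R(x, y)"))
```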

Formal Languages and Proof Systems

Formal languages provide the syntactic foundation for logical systems, consisting of a finite alphabet of symbols—such as variables, constants, and operation symbols—and a grammar that specifies rules for constructing valid expressions known as well-formed formulas (WFFs). The alphabet ensures a precise set of building blocks, while the grammar, often defined recursively, distinguishes meaningful strings from arbitrary ones; for instance, atomic formulas serve as base cases, with compound formulas built via specified operations. This structure abstracts away ambiguities, enabling rigorous analysis in systems like propositional and first-order logic. Proof systems mechanize the derivation of theorems from axioms within these formal languages, ensuring derivations follow explicit rules. Axiomatic systems, exemplified by Hilbert-style approaches, rely on a small set of inference rules—typically just modus ponens—and a comprehensive list of axiom schemas, such as P → (Q → P), which capture fundamental logical principles. Sequent calculus, developed by Gerhard Gentzen, represents proofs as trees of sequents (e.g., multisets of formulas on left and right sides separated by a turnstile), with structural rules for weakening, contraction, and exchange, alongside introduction and elimination rules for connectives that facilitate cut-elimination for normalization. Resolution, introduced by J. A. Robinson, operates on clausal forms and uses a single inference rule to resolve complementary literals, enabling efficient automated theorem proving through refutation by deriving the empty clause from unsatisfiable sets. Key metalogical properties evaluate the reliability of these proof systems relative to their formal languages. Consistency ensures that no contradiction, such as a formula and its negation, is provable, preventing the system from deriving everything trivially. Completeness guarantees that every semantically valid formula (true in all models) is provable syntactically, linking proof-theoretic and model-theoretic notions of truth. Decidability requires an effective algorithm to determine, for any formula, whether it is provable, a property that holds for less expressive systems but fails in more powerful ones due to computational complexity. The expressive power of certain formal languages and proof systems intersects with computation, where sufficiently rich systems—capable of encoding arithmetic and recursion—achieve Turing completeness, simulating any Turing machine and thus encompassing all effectively computable functions.
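The single resolution rule lends itself to a short sketch, assuming clauses are encoded as sets of string literals with a leading "~" marking negation (an illustrative convention, not a production prover).

```python
def resolve(clause_a, clause_b):
    """Return all resolvents obtained by cancelling one complementary literal pair."""
    resolvents = []
    for lit in clause_a:
        complement = lit[1:] if lit.startswith("~") else "~" + lit
        if complement in clause_b:
            resolvents.append((clause_a - {lit}) | (clause_b - {complement}))
    return resolvents

# From {P, Q} and {~P, Q}: resolving on P yields {Q}
print(resolve(frozenset({"P", "Q"}), frozenset({"~P", "Q"})))  # [frozenset({'Q'})]

# Refutation: {P} and {~P} resolve to the empty clause, signalling unsatisfiability
print(resolve(frozenset({"P"}), frozenset({"~P"})))            # [frozenset()]
```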

Extended and Specialized Logics

Modal logic extends classical propositional and first-order logics by incorporating modalities to reason about concepts such as necessity and possibility. It introduces two primary operators: the necessity operator □, which asserts that a proposition P is true in all possible worlds accessible from the current world, and the possibility operator ◇, defined by ◇P ≡ ¬□¬P, which asserts that P is true in at least one accessible possible world. These operators allow for the formalization of statements whose truth varies across different scenarios or "possible worlds," providing a framework for analyzing modal notions beyond strict truth or falsity in a single context. The semantics of modal logic is primarily provided by Kripke frames, introduced by Saul Kripke in his seminal work. A Kripke frame consists of a set of possible worlds W and a binary accessibility relation R ⊆ W × W, where w R w′ indicates that world w′ is accessible from w. A proposition □P is true at world w if P holds at every world w′ such that w R w′, while ◇P is true at w if there exists at least one such w′ where P holds. This relational structure enables the evaluation of modal formulas relative to frames, distinguishing modal logic from classical logics that lack such world-relativity. Kripke's approach demonstrated the soundness and completeness of various modal systems with respect to classes of frames defined by properties of R. Different axiomatic systems correspond to specific properties of the accessibility relation, establishing a duality between syntax and semantics. The basic system K includes the distribution axiom □(P → Q) → (□P → □Q) and the necessitation rule (if ⊢ P, then ⊢ □P), valid on arbitrary frames. System T adds the reflexivity axiom □P → P, corresponding to reflexive relations (w R w for all w). System S4 extends T with the transitivity axiom □P → □□P, matching transitive relations (w R w′ and w′ R w″ imply w R w″). System S5, often used for alethic modalities, incorporates the Euclidean axiom ◇P → □◇P alongside T, corresponding to accessibility relations that are reflexive, transitive, and symmetric, that is, equivalence relations. These correspondences ensure that each axiom schema characterizes a precise class of frames. Modal logic finds applications in several domains by interpreting the operators in context-specific ways. In alethic modal logic, □ represents metaphysical necessity and ◇ possibility, as in S5 for analyzing logical truths across all possible worlds. Epistemic logic employs S5-like systems where □P models an agent's knowledge of P, assuming knowledge is factive (true if known) and distributed across accessible worlds representing the agent's information states. Deontic logic uses systems like KD (K plus the seriality axiom □P → ◇P), where □P denotes an obligation to bring about P, with accessibility relations linking a current world to ideal or permissible ones, as pioneered in standard deontic frameworks. These applications demonstrate modal logic's versatility in formalizing normative and informational concepts. The completeness of modal logics relies on the correspondence between axioms and frame properties, a result generalized by Henrik Sahlqvist's theorem. For Sahlqvist formulas—a broad class including the axioms of K, T, S4, and S5—there is a first-order correspondence: each axiom is valid precisely on frames satisfying a corresponding first-order condition on R, such as reflexivity for T. This yields strong completeness theorems: a formula is provable in the axiomatic system if and only if it is valid on the corresponding class of frames. Kripke's original work established completeness for quantified modal logics, while Sahlqvist's 1975 result extended this to many normal modal logics, providing semantic characterizations useful for practical reasoning tasks.
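Kripke semantics as just described is directly executable on finite models. A minimal sketch, with the worlds, the relation R, and the valuation all invented for illustration:

```python
def box(p, world, R, valuation):
    """'Necessarily p' at w: p holds at every world accessible from w."""
    return all(valuation[w2][p] for (w1, w2) in R if w1 == world)

def diamond(p, world, R, valuation):
    """'Possibly p' at w: p holds at some world accessible from w."""
    return any(valuation[w2][p] for (w1, w2) in R if w1 == world)

# Three worlds; w1 and w2 are accessible from w0.
R = {("w0", "w1"), ("w0", "w2")}
valuation = {"w0": {"P": False}, "w1": {"P": True}, "w2": {"P": False}}

print(box("P", "w0", R, valuation))      # False: P fails at the accessible w2
print(diamond("P", "w0", R, valuation))  # True:  P holds at the accessible w1
```

Making R reflexive, transitive, and symmetric would turn this toy frame into an S5 model, illustrating the axiom/frame correspondences listed above.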

Higher-Order Logic

Higher-order logic (HOL) extends first-order logic by permitting quantification not only over individuals but also over predicates, functions, and higher-level entities, thereby enhancing expressive power to capture complex mathematical and conceptual structures. This is achieved through a type-theoretic framework, often based on simple type theory, where entities are assigned types corresponding to orders of complexity. The zeroth order consists of individuals (type ι), the first order includes predicates over individuals (type ι → o, where o denotes propositions), the second order predicates over first-order predicates (type (ι → o) → o), and so on, building recursively via function types α → β. Lambda abstraction (λ) allows the formation of functions, such as λx:ι. P(x), which denotes a function mapping individuals to propositions, enabling concise expression of higher-order operations. The syntax of HOL incorporates variables and quantifiers typed according to these orders. For instance, universal quantification over a first-order predicate P (of type ι → o) appears as ∀P φ, where φ is a formula potentially involving P, allowing statements like "for all subsets P of the domain, there exists an element not in P" to express properties that elude first-order axiomatization. A representative example is the second-order characterization of an infinite domain: the universe U is infinite if it is not finite, where finiteness is captured by the existence of a relation R that bijects U onto a finite initial segment, formalized as ∃n ∃R (R codes a bijection between U and {0, 1, …, n−1}). More precisely, second-order quantification can assert the absence of any such finite bijection for all possible n, distinguishing infinite structures in a way unattainable in first-order logic. Semantically, HOL admits two primary interpretations: standard (full higher-order) models and Henkin models. In standard semantics, quantifiers range over all possible subsets and functions on the domain (the full power set and function space), leading to interpretations where higher-order variables denote all mathematically conceivable extensions, as in simple type theory. This aligns with an extensional view where types are interpreted in the full type hierarchy over a base domain. Henkin models, introduced to restore desirable meta-logical properties, restrict quantification to a predefined collection of subsets and functions, ensuring that the logic satisfies the completeness theorem—every consistent set of formulas has a model—unlike the standard semantics, where completeness fails. Church's simple type theory formalizes this via a typed lambda calculus with primitive types ι and o, axioms for lambda conversion, and quantification via a typed universal quantifier over function types, providing a foundational system for HOL. HOL has notable limitations, such as the failure of the compactness theorem: a set of sentences may be finitely satisfiable yet have no model, as with the inconsistent set comprising "the domain is finite" alongside sentences forcing arbitrarily large finite sizes. Its expressiveness is nonetheless profound. It can formalize much of set theory, including axioms akin to ZFC, by quantifying over sets of sets, and it enables definitions of advanced concepts like continuity in analysis or the categoricity of the natural numbers via the second-order Peano axioms. This power comes at the cost of undecidability and non-compactness but underpins formal verification systems and theoretical computer science.
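On a finite domain, the difference between standard and Henkin-style semantics can be made concrete, since full second-order quantification can be enumerated explicitly. An illustrative sketch (the domain and the restricted collection are arbitrary choices, not a standard API):

```python
from itertools import chain, combinations

def all_subsets(domain):
    """Standard semantics on a finite domain: predicates range over ALL subsets."""
    return chain.from_iterable(combinations(domain, r) for r in range(len(domain) + 1))

domain = {0, 1, 2}

# Second-order claim: for every subset P of the domain, some element lies outside P.
# Under standard semantics this is false, since P = domain has no element outside it.
standard = all(any(x not in set(P) for x in domain) for P in all_subsets(domain))
print(standard)  # False

# A Henkin-style model may restrict quantification, e.g. to the singletons only,
# under which the same claim comes out true on this three-element domain.
singletons = [(x,) for x in domain]
print(all(any(x not in set(P) for x in domain) for P in singletons))  # True
```

The same sentence thus receives different verdicts depending on what the second-order quantifier ranges over, which is exactly why completeness holds for Henkin semantics but fails for standard semantics.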

Non-Classical Logics

Non-classical logics encompass a diverse family of formal systems that deviate from the principles of classical logic, particularly bivalence (every proposition is either true or false) and monotonicity (adding premises does not invalidate inferences). These logics address limitations in classical frameworks by accommodating phenomena such as vagueness, inconsistency, or constructivity requirements, often without preserving the law of explosion or strict truth-value dichotomies. Intuitionistic logic, developed as a foundation for constructive mathematics, rejects the law of excluded middle, P ∨ ¬P, and the double negation elimination principle, ¬¬P → P. This rejection stems from L.E.J. Brouwer's intuitionism, which emphasizes that mathematical truths must be constructively proven rather than merely assumed via non-constructive principles. Arend Heyting formalized the system in the 1930s, providing axioms and rules that align with constructive validity. The Brouwer-Heyting-Kolmogorov (BHK) interpretation assigns meaning to connectives in terms of proofs: a proof of A ∧ B consists of proofs of both A and B, while a proof of A ∨ B includes a proof of one disjunct together with an indication of which; for implications A → B, it requires a method to transform any proof of A into a proof of B; and negation ¬A is a proof that A leads to contradiction. This interpretation, independently proposed by Brouwer, Heyting, and Andrey Kolmogorov in the 1920s and 1930s, underpins the logic's semantics and distinguishes it from classical logic by requiring explicit constructions. Paraconsistent logic allows for the toleration of contradictions without the principle of explosion, under which every proposition follows from a contradiction. In classical logic, A ∧ ¬A implies any B via disjunction introduction and disjunctive syllogism, but paraconsistent systems block this derivation by weakening rules such as disjunctive syllogism or by restricting explosion directly. This approach is particularly useful in handling inconsistent information, such as in databases or theories with unavoidable contradictions. Dialetheism, a philosophical stance associated with paraconsistent logic, posits that some contradictions (dialetheia) are true, as argued by Graham Priest, who contends that paradoxes like the liar paradox reveal true contradictions without trivializing the system. Priest's work, building on earlier systems by Stanisław Jaśkowski and Newton da Costa in the 1940s–1970s, demonstrates that paraconsistent logic can maintain nontriviality while accommodating inconsistency. Relevant logic, also known as relevance logic, enforces a requirement that premises must be relevant to the conclusion, avoiding paradoxes of material implication such as P → (Q → P), where an unrelated antecedent implies any consequent. Developed by Alan Ross Anderson and Nuel D. Belnap in the 1950s–1970s, the logic rejects classical implications that permit irrelevant premises, instead demanding shared variables or content between antecedent and consequent in implications. Systems like B (the basic relevant logic) impose restrictions on rules such as contraction and distribution to ensure relevance, formalized through semantic models with Routley-Meyer frames built on a ternary accessibility relation. This addresses fallacies of relevance, such as deriving conclusions from premises that have no bearing on them, and finds applications where relevance-sensitive inference is intuitive. Fuzzy logic extends classical bivalence to a continuum of truth values, typically in the interval [0, 1], to model vagueness and gradual properties. Jan Łukasiewicz introduced infinite-valued logic in the 1920s, defining conjunction as minimum, disjunction as maximum, negation as ¬x = 1 − x, and implication via the Łukasiewicz function x → y = min(1, 1 − x + y), allowing degrees of truth for propositions like "tall" or "hot." Kurt Gödel's 1932 system used a similar [0, 1] scale but with the Gödel implication, x → y = 1 if x ≤ y and y otherwise, emphasizing residuated lattices for many-valued inference. Lotfi A. Zadeh's fuzzy set theory popularized the approach, applying it to control systems and approximate reasoning by treating truth as a membership degree rather than a binary value. These logics handle sorites paradoxes and imprecise predicates effectively, with mathematical fuzzy logics providing complete axiomatizations for t-norm-based semantics.
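The Łukasiewicz connectives defined above take only a few lines; the membership degrees below are invented for illustration.

```python
def f_not(x):        return 1 - x
def f_and(x, y):     return min(x, y)          # conjunction as minimum
def f_or(x, y):      return max(x, y)          # disjunction as maximum
def f_implies(x, y): return min(1, 1 - x + y)  # Lukasiewicz implication

tall = 0.7   # "Alice is tall" to degree 0.7 (an illustrative membership degree)
old  = 0.4   # "Alice is old" to degree 0.4

print(f_and(tall, old))      # 0.4
print(f_or(tall, old))       # 0.7
print(f_not(tall))           # ~0.3
print(f_implies(tall, old))  # min(1, 1 - 0.7 + 0.4) = ~0.7 (up to float rounding)
```

On the boundary values 0 and 1 these operators reproduce the classical truth tables, which is why fuzzy logic counts as a generalization rather than a wholesale replacement of bivalent semantics.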

Areas of Research

Philosophical Logic

Philosophical logic investigates foundational questions about the nature and status of logic itself, distinct from its formal applications. A central debate concerns whether logic functions primarily as a descriptive enterprise, capturing patterns in how reasoning actually occurs, or as a normative one, prescribing standards for correct reasoning. Traditional accounts, such as those of Kant and Frege, emphasize logic's normative character, viewing it as providing universal rules that govern rational thought without reliance on empirical observation. In this perspective, logical principles are not mere descriptions of psychological processes but imperatives for avoiding error in judgment. However, critics like Gilbert Harman argue that logic more accurately delineates relations among propositions or beliefs, offering descriptive insights into inferential structure rather than direct prescriptions for individual reasoning, which may instead be guided by broader pragmatic or evidential considerations. Willard Van Orman Quine's naturalism further reshapes this discussion by embedding logic within the scientific enterprise, rejecting any privileged a priori foundation. Quine contends that logic, like natural science, forms part of our empirical "web of belief," subject to holistic revision in light of experience rather than insulated as analytic or necessary truth. This naturalized approach dissolves the analytic-synthetic distinction, treating logical truths as empirically informed and revisable, thereby aligning with scientific methodology over traditional metaphysics. Related debates challenge logic's purported a priori status, with rationalists maintaining that justification for logical principles arises from conceptual grasp or rational insight independent of sensory input. Empiricists, however, including Quine, dispute this, positing that all knowledge, including logical knowledge, derives from experiential confirmation. Hilary Putnam extends this line by arguing that logic is empirical in a stronger sense, using quantum mechanics to illustrate how the classical distributive law (e.g., P ∧ (Q ∨ R) ≡ (P ∧ Q) ∨ (P ∧ R)) fails in contexts involving superposition, suggesting logical principles are theoretically revisable like those of physics. Logical pluralism emerges as another key contention, proposing that no single logic holds universal validity but that multiple consequence relations may be correct depending on interpretive frameworks or domains. Proponents J. C. Beall and Greg Restall defend this via a generalized Tarski thesis, where validity is relativized to "cases" (e.g., structures or situations), allowing classical, intuitionistic, and other logics to coexist without contradiction. Critics counter that such pluralism undermines logic's normative force or generality, potentially leading to incoherence in shared reasoning standards. The identification of logical constants—what qualifies as purely logical versus domain-specific—relies on frameworks like Alfred Tarski's Convention T, a material adequacy condition requiring that a definition of truth entail, for every sentence s of the object language, the corresponding instance of the schema "s is true if and only if p," where p translates s. This convention anchors semantics by fixing logical terms (e.g., negation, conjunction) across interpretations while permitting non-logical predicates to vary, thus clarifying logic's boundaries without semantic paradoxes. These inquiries intersect with metaphysics, particularly ontology, where logic shapes commitments to what exists and how reality is structured. For instance, classical logic's existential assumptions (e.g., non-empty domains) carry ontological import, while free logics accommodate possibilities like empty domains, influencing debates on why there is something rather than nothing. Modal logics extend this to possible worlds, modeling ontological alternatives where necessity and possibility reflect metaphysical structures rather than mere linguistic conventions, as in David Lewis's realism about concrete possible worlds. Such connections underscore logic's role in probing reality's modal profile, though they raise questions about whether logical form mirrors ontological categories or merely facilitates description. These foundational issues also bear on the epistemology of logic, informing how logical knowledge is acquired and warranted beyond formal proof.

Mathematical Logic

Mathematical logic is a branch of logic that studies the foundations of mathematics through formal systems, focusing on the relationships between mathematical structures, formal proofs, and the limits of provability. It emerged in the early 20th century as mathematicians sought rigorous foundations for arithmetic, analysis, and geometry, leading to key developments in model theory, proof theory, set theory, and metamathematical results like incompleteness. These areas reveal deep insights into the consistency and independence of mathematical axioms, showing that no single formal system can capture all mathematical truths. In model theory, the emphasis is on interpreting logical languages in mathematical structures, where a structure consists of a domain (universe of discourse) equipped with interpretations for the language's constants, functions, and relations. Two structures are elementarily equivalent if they satisfy exactly the same sentences of the language, meaning they agree on all properties expressible by its formulas. This notion underpins the Löwenheim-Skolem theorem, which states that if a theory in a countable language has an infinite model, then it also has a countable model satisfying the same sentences. The theorem, first proved by Leopold Löwenheim in 1915 and refined by Thoralf Skolem in 1920, implies that first-order logic cannot distinguish between models of different cardinalities in certain ways, highlighting limitations in expressing uncountability. Proof theory investigates the structure and complexity of formal proofs, providing tools to analyze the strength of axiomatic systems. A central result is the cut-elimination theorem, proved by Gerhard Gentzen in 1934, which asserts that any proof in classical or intuitionistic sequent calculus using the cut rule (a rule permitting the use of intermediate lemmas) can be transformed into an equivalent proof without cuts, normalizing proof structure. This theorem facilitates consistency proofs and ordinal analysis, where the proof-theoretic ordinal of a theory measures its strength by the largest ordinal for which transfinite induction is provable within the system. Ordinal analysis, developed from Gentzen's work on Peano arithmetic (yielding the ordinal ε₀), assigns well-founded ordinals to theories to establish their consistency relative to weaker systems. Set theory provides the foundational framework for mathematics via axiomatic systems like Zermelo-Fraenkel set theory with the axiom of choice (ZFC), formalized by Ernst Zermelo in 1908 and refined by Abraham Fraenkel in 1922. The axioms include extensionality, pairing, union, power set, infinity, foundation, replacement, separation, and choice, ensuring a cumulative hierarchy of sets that models most of mathematics. Kurt Gödel's constructible universe L, introduced in 1938, is the smallest inner model of ZFC, comprising the sets definable from ordinals via a hierarchy of definable levels; it satisfies the axiom of choice and the generalized continuum hypothesis (GCH). Independence results, such as those for the continuum hypothesis (CH)—which posits that there is no cardinality strictly between that of the natural numbers and that of the continuum—demonstrate that CH is neither provable nor disprovable in ZFC. Gödel showed in 1938 that CH is consistent with ZFC using L, while Paul Cohen proved in 1963 the consistency of its negation via forcing, establishing ZFC's inability to settle CH. Gödel's incompleteness theorems, published in 1931, mark a cornerstone of mathematical logic by revealing inherent limitations in formal systems. The first incompleteness theorem states that any consistent formal system capable of expressing basic arithmetic (like Peano arithmetic) is incomplete: there exists a sentence in its language that is true but neither provable nor disprovable within the system. The second theorem asserts that if such a system is consistent, its consistency cannot be proved within itself, implying that stronger systems are needed to affirm the consistency of weaker ones. These results, derived via arithmetization of syntax and self-referential sentences, underscore the undecidability intrinsic to sufficiently powerful axiomatizations.

Computational Logic

Computational logic encompasses the application of logical formalisms to computational problems in computer science, enabling automated reasoning, knowledge representation, and verification through algorithmic methods. It bridges abstract logical theories with practical software tools, facilitating tasks such as proving software correctness and solving satisfiability problems. Key techniques include inference rules adapted for efficient computation, often leveraging search strategies to explore proof spaces. Automated theorem proving relies on methods like resolution, introduced by Robinson in 1965 as a complete inference rule for first-order logic that generates new clauses from existing ones via unification, reducing the search space for refutations. For propositional logic, SAT solvers based on the DPLL algorithm, developed by Davis, Logemann, and Loveland in 1962, perform systematic search with unit propagation and pure literal elimination to determine satisfiability. These solvers form the backbone of modern automated provers, scaling to industrial applications through heuristics and clause learning. Logic programming paradigms, exemplified by Prolog, treat programs as sets of logical rules and facts, executing queries via declarative specifications rather than imperative instructions. Developed by Colmerauer and colleagues in the early 1970s at the University of Marseille, Prolog uses unification to match terms and backtracking to explore alternative derivations when a path fails. This approach supports non-deterministic computation, where the system automatically generates solutions by resolving goals against the knowledge base. In applications, computational logic underpins formal verification through model checking, which exhaustively verifies temporal properties of systems using logics like CTL, pioneered by Clarke and Emerson in 1981 for synthesizing synchronization skeletons. AI planning employs logical representations to generate action sequences achieving goals, often via STRIPS-style formalisms or planning domain definition languages. Knowledge representation utilizes ontologies in OWL, a W3C standard since 2004 for defining classes, properties, and axioms in Semantic Web applications, enabling reasoning over structured data. The complexity of logical decision problems is highlighted by Cook's 1971 theorem, proving that SAT is NP-complete, implying that if P = NP, then all NP problems, including many in automated reasoning, could be solved efficiently. Recent advances as of 2025 integrate neural methods into theorem proving; for instance, DeepSeek-Prover-V2 achieves state-of-the-art performance on formal proofs in Lean 4 by combining large language models with recursive subgoal decomposition and Monte Carlo tree search.
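A minimal satisfiability check in the spirit of the 1962 DPLL procedure can be sketched as follows: unit propagation plus case splitting, with pure literal elimination and clause learning omitted for brevity. Clauses are sets of integer literals, where a negative integer marks a negated variable; this encoding is a common convention, not a specific library's API.

```python
def dpll(clauses, assignment=None):
    """Return a satisfying assignment {var: bool} or None if unsatisfiable."""
    assignment = dict(assignment or {})
    changed = True
    while changed:                                # unit propagation to fixpoint
        changed = False
        remaining = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                          # clause already satisfied
            live = {l for l in clause if abs(l) not in assignment}
            if not live:
                return None                       # clause falsified: conflict
            if len(live) == 1:
                lit = next(iter(live))
                assignment[abs(lit)] = lit > 0    # forced unit assignment
                changed = True
            remaining.append(clause)
        clauses = remaining
    if not clauses:
        return assignment                         # every clause satisfied
    var = next(abs(l) for c in clauses for l in c if abs(l) not in assignment)
    for value in (True, False):                   # split on an unassigned variable
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([{1, 2}, {-1, 3}, {-2, -3}]))  # e.g. {1: True, 3: True, 2: False}
```

Industrial solvers layer conflict-driven clause learning, watched literals, and branching heuristics on top of this skeleton, which is what lets them scale despite SAT's NP-completeness.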

Historical Development

Ancient and Medieval Logic

The origins of formal logic trace back to in the 4th century BCE, where systematized deductive reasoning through his syllogistic framework. In works collectively known as the , including the Categories, , , , Topics, and Sophistical Refutations, outlined a method for valid inferences based on categorical propositions, such as "All men are mortal" and "Socrates is a man" yielding "Socrates is mortal." His syllogistic logic emphasized the structure of arguments using terms as subjects and predicates, distinguishing between necessary demonstrations and dialectical reasoning, while also addressing fallacies and categories of being to ensure precise predication. Parallel to Aristotle's term-based approach, the Stoics in the BCE developed an early form of propositional logic, focusing on connectives like conjunction, disjunction, and implication to analyze compound statements. Figures such as and constructed arguments from simple propositions, introducing truth-functional rules where the validity of inferences depended on the overall of sentences rather than individual terms, as in the example of "If it is day, then there is light; it is day; therefore, there is light." This innovation complemented Aristotelian syllogistics by handling hypothetical and disjunctive forms more effectively. In the Hellenistic period, the Megarian school, including Diodorus Cronus and Philo of Megara, advanced discussions on modalities and conditionals, debating concepts like possibility, necessity, and the truth conditions of implications. Their work on the "master argument" explored temporal modalities and the logic of future contingents, influencing Stoic developments by refining conditional statements, such as Philo's material implication where "if P then Q" holds unless P is true and Q false. Chrysippus further integrated these ideas into Stoic logic, emphasizing semantic paradoxes and the role of modalities in propositional inferences. During the early medieval period, the Roman philosopher (c. 480–524 CE) preserved and transmitted Aristotelian logic to the Latin West through his translations of the and Porphyry's Isagoge, along with original commentaries that clarified syllogistic rules and introduced topical arguments. These efforts formed the foundation of scholastic logic, enabling later thinkers to build upon categorical inferences. In the 12th century, advanced , analyzing how terms refer in context—personal, simple, or material supposition—to resolve ambiguities in syllogisms and resolve paradoxes like the "liar" sentence. Robert Kilwardby (c. 1215–1279) refined this theory in his commentaries on Aristotle's , distinguishing types of supposition to handle modal and relational propositions more rigorously, such as in arguments involving relative terms like "larger" and "smaller." By the 14th century, integrated with mental language theory, positing that universals exist only as concepts in the mind, not as real entities, and that logical terms primarily signify through natural mental propositions, simplifying while preserving syllogistic validity. In the , (Ibn Sina, 980–1037 CE) extended Aristotelian syllogistics into , developing a system for necessary, possible, and impossible premises in his (part of al-Shifa), where he introduced "dhati" (essential) modalities to validate mixed modal syllogisms, such as a necessary major premise with a possible minor yielding a possible conclusion. His framework resolved inconsistencies in Aristotle's modal rules by prioritizing temporal aspects of modality. 
Averroes (Ibn Rushd, 1126–1198 CE) provided extensive commentaries on the Organon, critiquing Avicenna's innovations while defending a stricter Aristotelian interpretation and emphasizing the unity of logic as an instrument for philosophy, notably in his Middle Commentaries on Aristotle's logical works, which influenced both the Islamic and Latin traditions. These ancient and medieval developments laid the groundwork for logic's later evolution, bridging Greek foundations with scholastic and Islamic refinements that anticipated humanist reevaluations of the classical texts.

Modern and Contemporary Logic

The modern era of logic began in the 19th century with efforts to formalize reasoning using algebraic methods, marking a shift from traditional syllogistic approaches to symbolic and mathematical representations. George Boole's The Laws of Thought (1854) introduced an algebraic system for propositional logic, treating logical operations as arithmetic manipulations of binary variables (0 for false, 1 for true), which laid the groundwork for Boolean algebra as a foundation of digital computation (a short sketch of this correspondence appears at the end of this section). Concurrently, Augustus De Morgan developed relational logic in works like Formal Logic (1847), extending Boole's framework to handle syllogisms involving relations between classes and introducing laws such as De Morgan's rules relating conjunction, disjunction, and complementation, which emphasized the symmetry of the logical connectives.

In the late 19th and early 20th centuries, logic advanced toward the predicate calculus and attempts to ground mathematics in pure logic. Gottlob Frege's Begriffsschrift (1879) pioneered modern predicate logic through a two-dimensional notation that captured quantification and inference rules, enabling the precise expression of mathematical statements and influencing subsequent formal systems. Building on this, Alfred North Whitehead and Bertrand Russell's Principia Mathematica (1910–1913) aimed to derive all of mathematics from logical axioms, using a theory of types to avoid paradoxes like Russell's, though its voluminous proofs highlighted the complexity of such reductions.

Mid-20th-century developments addressed the foundational crisis in mathematics. David Hilbert's program (outlined in the 1920s) proposed a finitary consistency proof for arithmetic to secure mathematical foundations via metamathematical methods. Kurt Gödel's incompleteness theorems (1931) shattered this optimism by proving that any consistent formal system capable of expressing basic arithmetic is incomplete, containing true statements unprovable within it, and that such a system cannot prove its own consistency. Alfred Tarski's work on truth (1933), particularly his semantic definition of truth, provided a rigorous model-theoretic foundation for logical languages, defining truth via satisfaction in structures and resolving antinomies through a hierarchy of languages.

Quantum logic, proposed by Garrett Birkhoff and John von Neumann (1936), adapts logic to quantum mechanics by replacing the distributive laws with orthomodular lattices to model non-classical propositions in Hilbert spaces, reflecting superposition and measurement effects.

Post-1950 innovations expanded logic into computational and specialized domains. The Curry–Howard isomorphism (formalized in the 1960s, with roots in Haskell Curry's work of 1934 and William Alvin Howard's correspondence, published in 1980) equates proofs in intuitionistic logic with programs in typed lambda calculi, bridging logic and computation to underpin typed programming languages (a sketch follows the Boolean example below). Jean-Yves Girard's linear logic (1987) refined this proof-theoretic perspective by treating propositions as consumable resources, introducing modalities for controlled reuse and influencing concurrency models in computing. In the 2020s, integrations of logic with artificial intelligence have emerged, particularly around large language models (LLMs), where symbolic reasoning modules augment probabilistic inference for tasks such as automated theorem proving, as surveyed in recent work on hybrid symbolic-connectionist systems.
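Boole's central move, treating logical operations as arithmetic over the values 0 and 1, is easy to make concrete. The following Python sketch (a modern illustration with invented helper names, not Boole's own notation) encodes conjunction as multiplication and exhaustively checks one of De Morgan's laws:

```python
# Boolean operations as arithmetic on the binary values 0 and 1.

def conj(x: int, y: int) -> int:
    return x * y            # AND behaves like multiplication

def disj(x: int, y: int) -> int:
    return x + y - x * y    # OR, adjusted to stay within {0, 1}

def neg(x: int) -> int:
    return 1 - x            # NOT as complementation with respect to 1

# De Morgan's law: not (x and y) == (not x) or (not y),
# verified over all four assignments.
assert all(
    neg(conj(x, y)) == disj(neg(x), neg(y))
    for x in (0, 1)
    for y in (0, 1)
)

# Boole's idempotence law x * x = x singles out 0 and 1 as the
# only admissible values of a logical variable.
assert all(conj(x, x) == x for x in (0, 1))
```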

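The Curry–Howard correspondence can likewise be glimpsed in any typed language. The sketch below (a loose illustration in Python's optional type system, with names of our own choosing; a faithful demonstration would use a proof assistant or a typed functional language) writes out a proof of the hypothetical-syllogism schema (A → B) → ((B → C) → (A → C)) as a program, namely function composition:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def hypothetical_syllogism(
    f: Callable[[A], B],
) -> Callable[[Callable[[B], C]], Callable[[A], C]]:
    # The body is essentially forced by the types: composing the two
    # given functions is the program that "proves" the implication.
    def given_g(g: Callable[[B], C]) -> Callable[[A], C]:
        return lambda a: g(f(a))  # apply f first, then g
    return given_g

# Example use: composing int -> str with str -> int yields int -> int.
to_text = hypothetical_syllogism(lambda n: str(n))
digit_count = to_text(lambda s: len(s))
assert digit_count(12345) == 5
```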