Causality

Causality is an influence by which one event, process, state, or object (a cause) contributes to the production of another event, process, state, or object (an effect) where the cause is at least partly responsible for the effect, and the effect is at least partly dependent on the cause.[1] The cause of something may also be described as the reason for the event or process.[2]

In general, a process can have multiple causes,[1] which are also said to be causal factors for it, and all lie in its past. An effect can in turn be a cause of, or causal factor for, many other effects, which all lie in its future. Thus, the distinction between cause and effect either follows from or else provides the distinction between past and future. While the former viewpoint is more prevalent in physics,[3] some writers have held that causality is metaphysically prior to notions of time and space.[4][5][6] Causality is an abstraction that indicates how the world progresses.[7] As such, it is a basic concept, and one might expect it to be more apt as an explanation of other concepts of progression than something to be explained by yet more fundamental ideas. The concept is like those of agency and efficacy. For this reason, a leap of intuition may be needed to grasp it.[8][9] Accordingly, causality is implicit in the structure of ordinary language,[10] as well as explicit in the language of scientific causal notation.

In English studies of Aristotelian philosophy, the word "cause" is used as a specialized technical term, the translation of Aristotle's term αἰτία, by which Aristotle meant "explanation" or "answer to a 'why' question". Aristotle categorized the four types of answers as material, formal, efficient, and final "causes". In this case, the "cause" is the explanans for the explanandum, and failure to recognize that different kinds of "cause" are being considered can lead to futile debate. Of Aristotle's four explanatory modes, the one nearest to the concerns of the present article is the "efficient" one.

David Hume, as part of his opposition to rationalism, argued that pure reason alone cannot prove the reality of efficient causality; instead, he appealed to custom and mental habit, observing that all human knowledge derives solely from experience.

The topic of causality remains a staple in contemporary philosophy.

Concept

Metaphysics

The nature of cause and effect is a concern of the subject known as metaphysics. Kant thought that time and space were notions prior to human understanding of the progress or evolution of the world, and he also recognized the priority of causality. But he did not have the understanding that came with knowledge of Minkowski geometry and the special theory of relativity, that the notion of causality can be used as a prior foundation from which to construct notions of time and space.[4][5][6]

Ontology

A general metaphysical question about cause and effect is: "what kind of entity can be a cause, and what kind of entity can be an effect?"

One viewpoint on this question is that cause and effect are of one and the same kind of entity, causality being an asymmetric relation between them. That is to say, it would make good sense grammatically to say either "A is the cause and B the effect" or "B is the cause and A the effect", though only one of those two can be actually true. In this view, one opinion, proposed as a metaphysical principle in process philosophy, is that every cause and every effect is respectively some process, event, becoming, or happening.[5] An example is 'his tripping over the step was the cause, and his breaking his ankle the effect'. Another view is that causes and effects are 'states of affairs', with the exact natures of those entities being more loosely defined than in process philosophy.[11]

Another viewpoint on this question is the more classical one, that a cause and its effect can be of different kinds of entity. In Aristotle's efficient causal explanation, for example, an action can be a cause while an enduring object is its effect: the generative actions of his parents can be regarded as the efficient cause, with Socrates being the effect, Socrates being regarded as an enduring object, in philosophical tradition called a 'substance', as distinct from an action.

Epistemology

Since causality is a subtle metaphysical notion, considerable intellectual effort, along with exhibition of evidence, is needed to establish knowledge of it in particular empirical circumstances. According to David Hume, the human mind is unable to perceive causal relations directly. On this ground, he distinguished between the regularity view of causality and the counterfactual notion.[12] According to the counterfactual view, X causes Y if and only if, without X, Y would not exist. Hume interpreted the latter as an ontological view, i.e., as a description of the nature of causality; but, given the limitations of the human mind, he advised using the former (stating, roughly, that X causes Y if and only if the two events are spatiotemporally conjoined and X precedes Y) as an epistemic definition of causality. We need an epistemic concept of causality in order to distinguish between causal and noncausal relations. The contemporary philosophical literature on causality can be divided into five major approaches: the regularity view mentioned above, together with the probabilistic, counterfactual, mechanistic, and manipulationist views. The five approaches can be shown to be reductive, i.e., they define causality in terms of relations of other types.[13] According to this reading, they define causality in terms of, respectively, empirical regularities (constant conjunctions of events), changes in conditional probabilities, counterfactual conditions, mechanisms underlying causal relations, and invariance under intervention.

Geometrical significance

Causality has the properties of antecedence and contiguity.[14][15] These are topological, and are ingredients for space-time geometry. As developed by Alfred Robb, these properties allow the derivation of the notions of time and space.[16] Max Jammer writes "the Einstein postulate ... opens the way to a straightforward construction of the causal topology ... of Minkowski space."[17] Causal efficacy propagates no faster than light.[18]

Thus, the notion of causality is metaphysically prior to the notions of time and space. In practical terms, this is because use of the relation of causality is necessary for the interpretation of empirical experiments. Interpretation of experiments is needed to establish the physical and geometrical notions of time and space.

Volition

The deterministic world-view holds that the history of the universe can be exhaustively represented as a progression of events following one after the other as cause and effect.[15] Incompatibilism holds that determinism is incompatible with free will, so if determinism is true, "free will" does not exist. Compatibilism, on the other hand, holds that determinism is compatible with, or even necessary for, free will.[19]

Necessary and sufficient causes

Causes may sometimes be distinguished into two types: necessary and sufficient.[20] A third type of causation, which requires neither necessity nor sufficiency, but which contributes to the effect, is called a "contributory cause".

Necessary causes
If x is a necessary cause of y, then the presence of y necessarily implies the prior occurrence of x. The presence of x, however, does not imply that y will occur.[21]
Sufficient causes
If x is a sufficient cause of y, then the presence of x necessarily implies the subsequent occurrence of y. However, another cause z may alternatively cause y. Thus the presence of y does not imply the prior occurrence of x.[21]
Contributory causes
For some specific effect, in a singular case, a factor that is a contributory cause is one among several co-occurrent causes. It is implicit that all of them are contributory. For the specific effect, in general, there is no implication that a contributory cause is necessary, though it may be so. In general, a factor that is a contributory cause is not sufficient, because it is by definition accompanied by other causes, which would not count as causes if it were sufficient. For the specific effect, a factor that is on some occasions a contributory cause might on some other occasions be sufficient, but on those other occasions it would not be merely contributory.[22]
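
The two definitions above can be made concrete with a small sketch. The following Python fragment (illustrative only; the scenario set and variable names are invented for the demonstration) checks necessity and sufficiency against an enumerated list of hypothetical cases:

# A minimal sketch of necessary vs. sufficient causes, checked over an
# enumerated set of hypothetical scenarios (all names here are illustrative).

scenarios = [
    {"x": True,  "y": True},
    {"x": True,  "y": False},   # x present but y absent: x is not sufficient
    {"x": False, "y": False},
]

def is_necessary(cause, effect, worlds):
    # x is necessary for y if y never occurs without x.
    return all(w[cause] for w in worlds if w[effect])

def is_sufficient(cause, effect, worlds):
    # x is sufficient for y if y always occurs when x does.
    return all(w[effect] for w in worlds if w[cause])

print(is_necessary("x", "y", scenarios))   # True: y occurs only with x
print(is_sufficient("x", "y", scenarios))  # False: x occurred once without y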

J. L. Mackie argues that usual talk of "cause" in fact refers to INUS conditions (insufficient but non-redundant parts of a condition which is itself unnecessary but sufficient for the occurrence of the effect).[23] An example is a short circuit as a cause for a house burning down. Consider the collection of events: the short circuit, the proximity of flammable material, and the absence of firefighters. Together these are unnecessary but sufficient to the house's burning down (since many other collections of events certainly could have led to the house burning down, for example shooting the house with a flamethrower in the presence of oxygen and so forth). Within this collection, the short circuit is an insufficient (since the short circuit by itself would not have caused the fire) but non-redundant (because the fire would not have happened without it, everything else being equal) part of a condition which is itself unnecessary but sufficient for the occurrence of the effect. So, the short circuit is an INUS condition for the occurrence of the house burning down.
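
Mackie's analysis can likewise be illustrated with a toy model. In the sketch below (a rough illustration, not a formal rendering of Mackie's account; the factor names and the list of jointly sufficient conditions are made up), the short circuit tests as an INUS condition for the fire:

# Illustrative check of Mackie's INUS analysis on the short-circuit example.
# Each set below is assumed jointly sufficient for the house burning down.

sufficient_sets = [
    {"short_circuit", "flammable_material", "no_firefighters"},
    {"flamethrower", "oxygen"},   # an alternative sufficient condition
]

def causes_fire(factors):
    # The fire occurs if the factors present include some sufficient set.
    return any(s <= factors for s in sufficient_sets)

def is_inus(factor, condition):
    insufficient = not causes_fire({factor})              # I: not enough alone
    nonredundant = not causes_fire(condition - {factor})  # N: needed in the set
    unnecessary = any(factor not in s for s in sufficient_sets)  # U
    sufficient = causes_fire(condition)                   # S
    return insufficient and nonredundant and unnecessary and sufficient

condition = {"short_circuit", "flammable_material", "no_firefighters"}
print(is_inus("short_circuit", condition))  # True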

However, Mackie's INUS account succumbs to the problem of joint effects of a common cause: it incorrectly identifies one effect of a common cause as an instantiated INUS condition for another effect of the same common cause, even though the two effects are not causally related.[24] Modern regularity theories aim to overcome this problem using so-called non-redundant regularities.[25][26]

Contrasted with conditionals

Conditional statements are not statements of causality. An important distinction is that statements of causality require the antecedent to precede or coincide with the consequent in time, whereas conditional statements do not require this temporal order. Confusion commonly arises since many different statements in English may be presented using "If ..., then ..." form (and, arguably, because this form is far more commonly used to make a statement of causality). The two types of statements are distinct, however.

For example, all of the following statements are true when interpreting "If ..., then ..." as the material conditional:

  1. If Barack Obama is president of the United States in 2011, then Germany is in Europe.
  2. If George Washington is president of the United States in 2011, then ⟨arbitrary statement⟩.

The first is true since both the antecedent and the consequent are true. The second is true in sentential logic and indeterminate in natural language, regardless of the consequent statement that follows, because the antecedent is false.
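
A short sketch in Python (the helper name material_conditional is our own) makes the truth-functional behavior explicit; the conditional is false only when the antecedent is true and the consequent false, which is why a false antecedent makes the whole statement true:

# Truth table for the material conditional "if p then q", which is false
# only when p is true and q is false.

def material_conditional(p, q):
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:5} q={q!s:5} ->  {material_conditional(p, q)}")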

The ordinary indicative conditional has somewhat more structure than the material conditional. For instance, although the first is the closest, neither of the preceding two statements seems true as an ordinary indicative reading. But the sentence:

  • If Shakespeare of Stratford-on-Avon did not write Macbeth, then someone else did.

intuitively seems to be true, even though there is no straightforward causal relation in this hypothetical situation between Shakespeare's not writing Macbeth and someone else's actually writing it.

Another sort of conditional, the counterfactual conditional, has a stronger connection with causality, yet even counterfactual statements are not all examples of causality. Consider the following two statements:

  1. If A were a triangle, then A would have three sides.
  2. If switch S were thrown, then bulb B would light.

In the first case, it would be incorrect to say that A's being a triangle caused it to have three sides, since the relationship between triangularity and three-sidedness is that of definition. The property of having three sides actually determines A's state as a triangle. Nonetheless, even when interpreted counterfactually, the first statement is true. An early version of Aristotle's "four cause" theory is described as recognizing "essential cause". In this version of the theory, that the closed polygon has three sides is said to be the "essential cause" of its being a triangle.[27] This use of the word 'cause' is of course now long obsolete. Nevertheless, it is within the scope of ordinary language to say that it is essential to a triangle that it has three sides.

A full grasp of the concept of conditionals is important to understanding the literature on causality. In everyday language, loose conditional statements are often made and need to be interpreted carefully.

Questionable cause

Fallacies of questionable cause, also known as causal fallacies, non-causa pro causa (Latin for "non-cause for cause"), or false cause, are informal fallacies where a cause is incorrectly identified.

Theories

Counterfactual theories

Counterfactual theories define causation in terms of a counterfactual relation, and can often be seen as "floating" their account of causality on top of an account of the logic of counterfactual conditionals. Counterfactual theories reduce facts about causation to facts about what would have been true under counterfactual circumstances.[28] The idea is that causal relations can be framed in the form of "Had C not occurred, E would not have occurred." This approach can be traced back to David Hume's definition of the causal relation as that "where, if the first object had not been, the second never had existed."[29] More full-fledged analysis of causation in terms of counterfactual conditionals only came in the 20th century after development of the possible world semantics for the evaluation of counterfactual conditionals. In his 1973 paper "Causation," David Lewis proposed the following definition of the notion of causal dependence:[30]

An event E causally depends on C if, and only if, (i) if C had occurred, then E would have occurred, and (ii) if C had not occurred, then E would not have occurred.

Causation is then analyzed in terms of counterfactual dependence. That is, C causes E if and only if there exists a sequence of events C, D1, D2, ... Dk, E such that each event in the sequence counterfactually depends on the previous. This chain of causal dependence may be called a mechanism.
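
Lewis's two conditions can be checked mechanically in a toy structural model. The sketch below (illustrative; the switch-to-bulb model and function names are invented) evaluates counterfactuals simply by recomputing the model with the cause toggled:

# A minimal sketch of Lewis-style causal dependence in a toy structural
# model (switch -> current -> bulb); counterfactuals are read off by
# recomputing the model under the alternative value of the cause.

def model(switch_thrown):
    current_flows = switch_thrown
    bulb_lights = current_flows
    return bulb_lights

def causally_depends(effect_fn, cause_value=True):
    # (i) with the cause, the effect occurs; (ii) without it, it does not.
    return effect_fn(cause_value) and not effect_fn(not cause_value)

print(causally_depends(model))  # True: the bulb's lighting depends on the switch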

Note that the analysis does not purport to explain how we make causal judgements or how we reason about causation, but rather to give a metaphysical account of what it is for there to be a causal relation between some pair of events. If correct, the analysis has the power to explain certain features of causation. Knowing that causation is a matter of counterfactual dependence, we may reflect on the nature of counterfactual dependence to account for the nature of causation. For example, in his paper "Counterfactual Dependence and Time's Arrow," Lewis sought to account for the time-directedness of counterfactual dependence in terms of the semantics of the counterfactual conditional.[31] If correct, this theory can serve to explain a fundamental part of our experience, which is that we can causally affect the future but not the past.

One challenge for the counterfactual account is overdetermination, whereby an effect has multiple causes. For instance, suppose Alice and Bob both throw bricks at a window and it breaks. If Alice hadn't thrown the brick, then it still would have broken, suggesting that Alice wasn't a cause; however, intuitively, Alice did cause the window to break. The Halpern-Pearl definitions of causality take account of examples like these.[32] The first and third Halpern-Pearl conditions are easiest to understand: AC1 requires that Alice threw the brick and the window broke in the actual world. AC3 requires that Alice throwing the brick is a minimal cause (cf. blowing a kiss and throwing a brick). Taking the "updated" version of AC2(a), the basic idea is that we have to find a set of variables and settings thereof such that preventing Alice from throwing a brick also stops the window from breaking. One way to do this is to stop Bob from throwing the brick. Finally, for AC2(b), we have to hold things as per AC2(a) and show that Alice throwing the brick breaks the window. (The full definition is a little more involved, involving checking all subsets of variables.)
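
A minimal sketch of the overdetermination problem, assuming a toy structural model of the window (this only gestures at the Halpern-Pearl conditions rather than implementing the full definition): the naive counterfactual test acquits Alice, but holding Bob fixed at not throwing restores her status as a cause:

# Overdetermination: the window breaks if either brick is thrown.

def window_breaks(alice_throws, bob_throws):
    return alice_throws or bob_throws

# Naive counterfactual test: would the window still break without Alice?
print(window_breaks(False, True))   # True -> Alice "not a cause" (wrong)

# Contingency test: freeze Bob at not throwing, then vary Alice.
print(window_breaks(True, False))   # True: with Alice, it breaks
print(window_breaks(False, False))  # False: without Alice, it does not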

Probabilistic causation

Interpreting causation as a deterministic relation means that if A causes B, then A must always be followed by B. In this sense, war does not cause deaths, nor does smoking cause cancer or emphysema. As a result, many turn to a notion of probabilistic causation. Informally, A ("The person is a smoker") probabilistically causes B ("The person has now or will have cancer at some time in the future"), if the information that A occurred increases the likelihood of B's occurrence. Formally, P{B|A} ≥ P{B} where P{B|A} is the conditional probability that B will occur given the information that A occurred, and P{B} is the probability that B will occur having no knowledge whether A did or did not occur. This intuitive condition is not adequate as a definition for probabilistic causation because of its being too general and thus not meeting our intuitive notion of cause and effect. For example, if A denotes the event "The person is a smoker," B denotes the event "The person now has or will have cancer at some time in the future" and C denotes the event "The person now has or will have emphysema some time in the future," then the following three relationships hold: P{B|A} ≥ P{B}, P{C|A} ≥ P{C} and P{B|C} ≥ P{B}. The last relationship states that knowing that the person has emphysema increases the likelihood that he will have cancer. The reason for this is that having the information that the person has emphysema increases the likelihood that the person is a smoker, thus indirectly increasing the likelihood that the person will have cancer. However, we would not want to conclude that having emphysema causes cancer. Thus, we need additional conditions such as temporal relationship of A to B and a rational explanation as to the mechanism of action. It is hard to quantify this last requirement and thus different authors prefer somewhat different definitions.[citation needed]
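
The smoking example can be checked numerically. The sketch below builds a made-up joint distribution in which smoking is a common cause of cancer and emphysema, with no direct link between the two diseases, and confirms that P{B|C} ≥ P{B} nevertheless holds:

import itertools

# P(smoker), and conditional risks given smoking status (illustrative numbers).
p_smoker = 0.3
p_cancer = {True: 0.20, False: 0.05}
p_emphysema = {True: 0.25, False: 0.02}

# Joint distribution with cancer (B) and emphysema (C) independent given
# smoking (A): a common cause, no direct causal link between B and C.
joint = {}
for a, b, c in itertools.product([True, False], repeat=3):
    pa = p_smoker if a else 1 - p_smoker
    pb = p_cancer[a] if b else 1 - p_cancer[a]
    pc = p_emphysema[a] if c else 1 - p_emphysema[a]
    joint[(a, b, c)] = pa * pb * pc

def prob(pred):
    return sum(p for k, p in joint.items() if pred(*k))

p_b = prob(lambda a, b, c: b)
p_b_given_c = prob(lambda a, b, c: b and c) / prob(lambda a, b, c: c)
print(p_b_given_c > p_b)  # True: emphysema raises P(cancer) without causing it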

Causal calculus

When experimental interventions are infeasible or illegal, the derivation of a cause-and-effect relationship from observational studies must rest on some qualitative theoretical assumptions, for example, that symptoms do not cause diseases, usually expressed in the form of missing arrows in causal graphs such as Bayesian networks or path diagrams. The theory underlying these derivations relies on the distinction between conditional probabilities, as in P(cancer | smoking), and interventional probabilities, as in P(cancer | do(smoking)). The former reads: "the probability of finding cancer in a person known to smoke, having started, unforced by the experimenter, to do so at an unspecified time in the past", while the latter reads: "the probability of finding cancer in a person forced by the experimenter to smoke at a specified time in the past". The former is a statistical notion that can be estimated by observation with negligible intervention by the experimenter, while the latter is a causal notion which is estimated in an experiment with an important controlled randomized intervention. It is specifically characteristic of quantal phenomena that observations defined by incompatible variables always involve important intervention by the experimenter, as described quantitatively by the observer effect.[vague] In classical thermodynamics, processes are initiated by interventions called thermodynamic operations. In other branches of science, for example astronomy, the experimenter can often observe with negligible intervention.

The theory of "causal calculus"[33] (also known as do-calculus, Judea Pearl's Causal Calculus, Calculus of Actions) permits one to infer interventional probabilities from conditional probabilities in causal Bayesian networks with unmeasured variables. One very practical result of this theory is the characterization of confounding variables, namely, a sufficient set of variables that, if adjusted for, would yield the correct causal effect between variables of interest. It can be shown that a sufficient set for estimating the causal effect of on is any set of non-descendants of that -separate from after removing all arrows emanating from . This criterion, called "backdoor", provides a mathematical definition of "confounding" and helps researchers identify accessible sets of variables worthy of measurement.

Structure learning

While derivations in causal calculus rely on the structure of the causal graph, parts of the causal structure can, under certain assumptions, be learned from statistical data. The basic idea goes back to Sewall Wright's 1921 work[34] on path analysis. A "recovery" algorithm was developed by Rebane and Pearl (1987)[35] which rests on Wright's distinction between the three possible types of causal substructures allowed in a directed acyclic graph (DAG):

  1. X → Y → Z
  2. X ← Y → Z
  3. X → Y ← Z

Type 1 and type 2 represent the same statistical dependencies (i.e., X and Z are independent given Y) and are, therefore, indistinguishable within purely cross-sectional data. Type 3, however, can be uniquely identified, since X and Z are marginally independent and all other pairs are dependent. Thus, while the skeletons (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when X and Z have common ancestors, except that one must first condition on those ancestors. Algorithms have been developed to systematically determine the skeleton of the underlying graph and, then, orient all arrows whose directionality is dictated by the conditional independencies observed.[33][36][37][38]
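
The distinguishing signature of type 3 can be seen in simulation. The following sketch (synthetic data; correlation is used as a rough proxy for dependence) shows the endpoints of a collider independent marginally but dependent once the collider is conditioned on, the reverse of the pattern in a chain or fork:

import random

random.seed(0)
n = 100_000

def corr(u, v):
    k = len(u)
    mu, mv = sum(u) / k, sum(v) / k
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / k
    su = (sum((a - mu) ** 2 for a in u) / k) ** 0.5
    sv = (sum((b - mv) ** 2 for b in v) / k) ** 0.5
    return cov / (su * sv)

# Collider: X -> Y <- Z, with X and Z independent fair coins.
x = [random.random() < 0.5 for _ in range(n)]
z = [random.random() < 0.5 for _ in range(n)]
y = [a or b for a, b in zip(x, z)]

print(round(corr(x, z), 2))  # ~0.0: marginally independent

# Conditioning on the collider (keep only cases where y is True)
# induces a negative dependence ("explaining away").
xs = [a for a, c in zip(x, y) if c]
zs = [b for b, c in zip(z, y) if c]
print(round(corr(xs, zs), 2))  # ~ -0.5: dependent given the collider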

Alternative methods of structure learning search through the many possible causal structures among the variables, and remove ones which are strongly incompatible with the observed correlations. In general this leaves a set of possible causal relations, which should then be tested by analyzing time series data or, preferably, designing appropriately controlled experiments. In contrast with Bayesian networks, path analysis (and its generalization, structural equation modeling) serves better to estimate a known causal effect or to test a causal model than to generate causal hypotheses.

For nonexperimental data, causal direction can often be inferred if information about time is available. This is because (according to many, though not all, theories) causes must precede their effects temporally. This can be determined by statistical time series models, for instance, or with a statistical test based on the idea of Granger causality, or by direct experimental manipulation. The use of temporal data can permit statistical tests of a pre-existing theory of causal direction. For instance, our degree of confidence in the direction and nature of causality is much greater when supported by cross-correlations, ARIMA models, or cross-spectral analysis using vector time series data than by cross-sectional data.
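
The underlying idea can be illustrated with synthetic data: when one series is constructed to follow another with a one-step lag, the cross-correlation is large in the leading direction and near zero in the other. The sketch below is a toy version of this reasoning, not a full Granger test:

import random

random.seed(1)
n = 5000

def corr(u, v):
    k = len(u)
    mu, mv = sum(u) / k, sum(v) / k
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / k
    su = (sum((a - mu) ** 2 for a in u) / k) ** 0.5
    sv = (sum((b - mv) ** 2 for b in v) / k) ** 0.5
    return cov / (su * sv)

# y is built to follow x by one time step, plus noise.
x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 0.5)]
for t in range(1, n):
    y.append(0.8 * x[t - 1] + random.gauss(0, 0.5))

print(round(corr(x[:-1], y[1:]), 2))  # x[t-1] vs y[t]: large (~0.85)
print(round(corr(y[:-1], x[1:]), 2))  # y[t-1] vs x[t]: near zero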

Derivation theories

Nobel laureate Herbert A. Simon and philosopher Nicholas Rescher[39] claim that the asymmetry of the causal relation is unrelated to the asymmetry of any mode of implication that contraposes. Rather, a causal relation is not a relation between values of variables, but a function of one variable (the cause) on to another (the effect). So, given a system of equations, and a set of variables appearing in these equations, we can introduce an asymmetric relation among individual equations and variables that corresponds perfectly to our commonsense notion of a causal ordering. The system of equations must have certain properties, most importantly, if some values are chosen arbitrarily, the remaining values will be determined uniquely through a path of serial discovery that is perfectly causal. They postulate that the inherent serialization of such a system of equations may correctly capture causation in all empirical fields, including physics and economics.

Manipulation theories

Some theorists have equated causality with manipulability.[40][41][42][43] Under these theories, x causes y only in the case that one can change x in order to change y. This coincides with commonsense notions of causation, since often we ask causal questions in order to change some feature of the world. For instance, we are interested in knowing the causes of crime so that we might find ways of reducing it.

These theories have been criticized on two primary grounds. First, theorists complain that these accounts are circular. Attempting to reduce causal claims to manipulation requires that manipulation is more basic than causal interaction. But describing manipulations in non-causal terms has provided a substantial difficulty.

The second criticism centers around concerns of anthropocentrism. It seems to many people that causality is some existing relationship in the world that we can harness for our desires. If causality is identified with our manipulation, then this intuition is lost. In this sense, it makes humans overly central to interactions in the world.

Some attempts to defend manipulability theories are recent accounts that do not claim to reduce causality to manipulation. These accounts use manipulation as a sign or feature in causation without claiming that manipulation is more fundamental than causation.[33][44]

Process theories

Some theorists are interested in distinguishing between causal processes and non-causal processes (Russell 1948; Salmon 1984).[45][46] These theorists often want to distinguish between a process and a pseudo-process. As an example, a ball moving through the air (a process) is contrasted with the motion of a shadow (a pseudo-process). The former is causal in nature while the latter is not.

Salmon (1984)[45] claims that causal processes can be identified by their ability to transmit an alteration over space and time. An alteration of the ball (a mark by a pen, perhaps) is carried with it as the ball goes through the air. On the other hand, an alteration of the shadow (insofar as it is possible) will not be transmitted by the shadow as it moves along.

These theorists claim that the important concept for understanding causality is not causal relationships or causal interactions, but rather identifying causal processes. The former notions can then be defined in terms of causal processes.

Why-because graph of the capsizing of the Herald of Free Enterprise.

A subgroup of the process theories is the mechanistic view on causality. It states that causal relations supervene on mechanisms. While the notion of mechanism is understood differently, the definition put forward by the group of philosophers referred to as the 'New Mechanists' dominates the literature.[47]

Fields

Science

For the scientific investigation of efficient causality, the cause and effect are each best conceived of as temporally transient processes.

Within the conceptual frame of the scientific method, an investigator sets up several distinct and contrasting temporally transient material processes that have the structure of experiments, and records candidate material responses, normally intending to determine causality in the physical world.[48] For instance, one may want to know whether a high intake of carrots causes humans to develop the bubonic plague. The quantity of carrot intake is a process that is varied from occasion to occasion. The occurrence or non-occurrence of subsequent bubonic plague is recorded. To establish causality, the experiment must fulfill certain criteria, only one example of which is mentioned here. For example, instances of the hypothesized cause must be set up to occur at a time when the hypothesized effect is relatively unlikely in the absence of the hypothesized cause; such unlikelihood is to be established by empirical evidence. A mere observation of a correlation is not nearly adequate to establish causality. In nearly all cases, establishment of causality relies on repetition of experiments and probabilistic reasoning. Hardly ever is causality established more firmly than as more or less probable. It is most convenient for establishment of causality if the contrasting material states of affairs are precisely matched, except for only one variable factor, perhaps measured by a real number.

Physics

One has to be careful in the use of the word cause in physics. Properly speaking, the hypothesized cause and the hypothesized effect are each temporally transient processes. For example, force is a useful concept for the explanation of acceleration, but force is not by itself a cause. More is needed. For example, a temporally transient process might be characterized by a definite change of force at a definite time. Such a process can be regarded as a cause. Causality is not inherently implied in equations of motion, but postulated as an additional constraint that needs to be satisfied (i.e. a cause always precedes its effect). This constraint has mathematical implications[49] such as the Kramers-Kronig relations.

Causality is one of the most fundamental and essential notions of physics.[50] Causal efficacy cannot 'propagate' faster than light. Otherwise, reference coordinate systems could be constructed (using the Lorentz transform of special relativity) in which an observer would see an effect precede its cause (i.e. the postulate of causality would be violated).

Causal notions appear in the context of the flow of mass-energy. Any actual process has causal efficacy that can propagate no faster than light. In contrast, an abstraction has no causal efficacy. Its mathematical expression does not propagate in the ordinary sense of the word, though it may refer to virtual or nominal 'velocities' with magnitudes greater than that of light. For example, wave packets are mathematical objects that have group velocity and phase velocity. The energy of a wave packet travels at the group velocity (under normal circumstances); since energy has causal efficacy, the group velocity cannot be faster than the speed of light. The phase of a wave packet travels at the phase velocity; since phase is not causal, the phase velocity of a wave packet can be faster than light.[51]
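
As a numerical illustration (for one specific dispersion relation, the relativistic energy-momentum relation for a matter wave; the chosen momentum value is arbitrary), the phase velocity E/p exceeds c while the group velocity dE/dp = pc²/E stays below it:

# Illustrative check: for E^2 = (pc)^2 + (mc^2)^2,
# v_phase = E/p > c while v_group = pc^2/E < c.

c = 299_792_458.0   # speed of light, m/s
m = 9.109e-31       # electron mass, kg
p = 1e-22           # momentum, kg*m/s (arbitrary illustrative value)

E = ((p * c) ** 2 + (m * c ** 2) ** 2) ** 0.5
v_phase = E / p
v_group = p * c ** 2 / E

print(v_phase > c, v_group < c)  # True True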

Causal notions are important in general relativity to the extent that the existence of an arrow of time demands that the universe's semi-Riemannian manifold be orientable, so that "future" and "past" are globally definable quantities.

Engineering

A causal system is a system whose output and internal states depend only on the current and previous input values. A system that has some dependence on input values from the future (in addition to possible past or current input values) is termed an acausal system, and a system that depends solely on future input values is an anticausal system. Acausal filters, for example, can only exist as postprocessing filters, because these filters can extract future values from a memory buffer or a file.
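
The distinction can be sketched with two moving-average filters (illustrative helper functions, not a standard library API): the causal version reads only present and past samples, while the centered, acausal version needs future samples and so can only be applied to recorded data:

def causal_ma(signal, width=3):
    # Output at n depends on samples n, n-1, ..., n-width+1 only.
    return [sum(signal[max(0, n - width + 1): n + 1]) /
            len(signal[max(0, n - width + 1): n + 1])
            for n in range(len(signal))]

def acausal_ma(signal, half=1):
    # Output at n depends on samples n-half ... n+half, including the future.
    return [sum(signal[max(0, n - half): n + half + 1]) /
            len(signal[max(0, n - half): n + half + 1])
            for n in range(len(signal))]

data = [0, 0, 1, 0, 0]
print(causal_ma(data))   # the impulse shows up at and after its sample
print(acausal_ma(data))  # the impulse is smeared backwards in time as well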

We have to be very careful with causality in physics and engineering. Cellier, Elmqvist, and Otter[52] describe causality forming the basis of physics as a misconception, because physics is essentially acausal. In their article they cite a simple example: "The relationship between voltage across and current through an electrical resistor can be described by Ohm's law: V = IR, yet, whether it is the current flowing through the resistor that causes a voltage drop, or whether it is the difference between the electrical potentials on the two wires that causes current to flow is, from a physical perspective, a meaningless question". In fact, if we explain cause-effect using the law, we need two explanations to describe an electrical resistor: as a voltage-drop-causer or as a current-flow-causer. There is no physical experiment in the world that can distinguish between action and reaction.

Biology, medicine and epidemiology

Whereas a mediator is a factor in the causal chain (top), a confounder is a spurious factor incorrectly suggesting causation (bottom).

Austin Bradford Hill built upon the work of Hume and Popper and suggested in his paper "The Environment and Disease: Association or Causation?" that aspects of an association such as strength, consistency, specificity, and temporality be considered in attempting to distinguish causal from noncausal associations in the epidemiological situation. (See Bradford Hill criteria.) He did not note, however, that temporality is the only necessary criterion among those aspects. Directed acyclic graphs (DAGs) are increasingly used in epidemiology to help enlighten causal thinking.[53]

Psychology

Psychologists take an empirical approach to causality, investigating how people and non-human animals detect or infer causation from sensory information, prior experience and innate knowledge.

Attribution: Attribution theory is the theory concerning how people explain individual occurrences of causation. Attribution can be external (assigning causality to an outside agent or force—claiming that some outside thing motivated the event) or internal (assigning causality to factors within the person—taking personal responsibility or accountability for one's actions and claiming that the person was directly responsible for the event). Taking causation one step further, the type of attribution a person provides influences their future behavior.

The intention behind the cause or the effect can be covered by the subject of action. See also accident; blame; intent; and responsibility.

Causal powers

Whereas David Hume argued that causes are inferred from non-causal observations, Immanuel Kant claimed that people have innate assumptions about causes. Within psychology, Patricia Cheng[9] attempted to reconcile the Humean and Kantian views. According to her power PC theory, people filter observations of events through an intuition that causes have the power to generate (or prevent) their effects, thereby inferring specific cause-effect relations.

Causation and salience

Our view of causation depends on what we consider to be the relevant events. Another way to view the statement, "Lightning causes thunder" is to see both lightning and thunder as two perceptions of the same event, viz., an electric discharge that we perceive first visually and then aurally.

Naming and causality

David Sobel and Alison Gopnik from the Psychology Department of UC Berkeley designed a device known as the blicket detector which would turn on when an object was placed on it. Their research suggests that "even young children will easily and swiftly learn about a new causal power of an object and spontaneously use that information in classifying and naming the object."[54]

Perception of launching events

Some researchers such as Anjan Chatterjee at the University of Pennsylvania and Jonathan Fugelsang at the University of Waterloo are using neuroscience techniques to investigate the neural and psychological underpinnings of causal launching events in which one object causes another object to move. Both temporal and spatial factors can be manipulated.[55]

See Causal Reasoning (Psychology) for more information.

Statistics and economics

Statistics and economics usually employ pre-existing data or experimental data to infer causality by regression methods. The body of statistical techniques involves substantial use of regression analysis. Typically a linear relationship such as

  y_i = b_0 + b_1 x_{1,i} + ... + b_k x_{k,i} + e_i

is postulated, in which y_i is the ith observation of the dependent variable (hypothesized to be the caused variable), x_{j,i} for j = 1, ..., k is the ith observation on the jth independent variable (hypothesized to be a causative variable), and e_i is the error term for the ith observation (containing the combined effects of all other causative variables, which must be uncorrelated with the included independent variables). If there is reason to believe that none of the x_j's is caused by y, then estimates of the coefficients b_j are obtained. If the null hypothesis that b_j = 0 is rejected, then the alternative hypothesis that b_j ≠ 0 and equivalently that x_j causes y cannot be rejected. On the other hand, if the null hypothesis that b_j = 0 cannot be rejected, then equivalently the hypothesis of no causal effect of x_j on y cannot be rejected. Here the notion of causality is one of contributory causality as discussed above: If the true value b_j ≠ 0, then a change in x_j will result in a change in y unless some other causative variable(s), either included in the regression or implicit in the error term, change in such a way as to exactly offset its effect; thus a change in x_j is not sufficient to change y. Likewise, a change in x_j is not necessary to change y, because a change in y could be caused by something implicit in the error term (or by some other causative explanatory variable included in the model).
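
A minimal sketch of this testing logic for a single regressor (synthetic data; in practice one would use a statistics package and the multivariate form above): fit y = b_0 + b_1 x + e by least squares and form a t-statistic for the null hypothesis b_1 = 0:

import random

random.seed(2)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [1.0 + 0.5 * xi + random.gauss(0, 1) for xi in x]   # true b_1 = 0.5

# Ordinary least squares for the slope and intercept.
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
b1 = sxy / sxx
b0 = my - b1 * mx

# t-statistic for H0: b_1 = 0, using the residual variance estimate.
resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
s2 = sum(r * r for r in resid) / (n - 2)
t_stat = b1 / (s2 / sxx) ** 0.5

print(round(b1, 2), round(t_stat, 1))  # reject b_1 = 0 if |t| is large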

The above way of testing for causality requires belief that there is no reverse causation, in which y would cause x_j. This belief can be established in one of several ways. First, the variable x_j may be a non-economic variable: for example, if rainfall amount x_j is hypothesized to affect the futures price y of some agricultural commodity, it is impossible that in fact the futures price affects rainfall amount (provided that cloud seeding is never attempted). Second, the instrumental variables technique may be employed to remove any reverse causation by introducing a role for other variables (instruments) that are known to be unaffected by the dependent variable. Third, the principle that effects cannot precede causes can be invoked, by including on the right side of the regression only variables that precede in time the dependent variable; this principle is invoked, for example, in testing for Granger causality and in its multivariate analog, vector autoregression, both of which control for lagged values of the dependent variable while testing for causal effects of lagged independent variables.

Regression analysis controls for other relevant variables by including them as regressors (explanatory variables). This helps to avoid false inferences of causality due to the presence of a third, underlying, variable that influences both the potentially causative variable and the potentially caused variable: its effect on the potentially caused variable is captured by directly including it in the regression, so that effect will not be picked up as an indirect effect through the potentially causative variable of interest. Given the above procedures, coincidental (as opposed to causal) correlation can be probabilistically rejected if data samples are large and if regression results pass cross-validation tests showing that the correlations hold even for data that were not used in the regression. Asserting with certitude that a common-cause is absent and the regression represents the true causal structure is in principle impossible.[56]

The problem of omitted variable bias, however, has to be balanced against the risk of inserting causal colliders, in which the addition of a new variable induces a correlation between x_j and y via Berkson's paradox.[33]

Apart from constructing statistical models of observational and experimental data, economists use axiomatic (mathematical) models to infer and represent causal mechanisms. Highly abstract theoretical models that isolate and idealize one mechanism dominate microeconomics. In macroeconomics, economists use broad mathematical models that are calibrated on historical data. A subgroup of calibrated models, dynamic stochastic general equilibrium (DSGE) models are employed to represent (in a simplified way) the whole economy and simulate changes in fiscal and monetary policy.[57]

Statistical and economic analyses often rely on regression methods applied to observational or pre-existing data to infer causal relationships.[58] Experimental designs, in contrast, establish causality by systematically manipulating independent variables under controlled conditions.[59] Experiments therefore provide stronger internal validity,[60] because causal mechanisms are demonstrated directly rather than inferred from patterns in observational data.[61]

Management

Used in management and engineering, an Ishikawa diagram shows the factors that cause the effect. Smaller arrows connect the sub-causes to major causes.

For quality control in manufacturing in the 1960s, Kaoru Ishikawa developed a cause and effect diagram, known as an Ishikawa diagram or fishbone diagram. The diagram categorizes causes into six main categories, as shown here. These categories are then sub-divided. Ishikawa's method identifies "causes" in brainstorming sessions conducted among various groups involved in the manufacturing process. These groups can then be labeled as categories in the diagrams. The use of these diagrams has now spread beyond quality control, and they are used in other areas of management and in design and engineering. Ishikawa diagrams have been criticized for failing to make the distinction between necessary conditions and sufficient conditions. It seems that Ishikawa was not even aware of this distinction.[62]

Humanities

History

In the discussion of history, events are sometimes considered as if in some way being agents that can then bring about other historical events. Thus, the combination of poor harvests, the hardships of the peasants, high taxes, lack of representation of the people, and kingly ineptitude are among the causes of the French Revolution. This is a somewhat Platonic and Hegelian view that reifies causes as ontological entities. In Aristotelian terminology, this use approximates to the case of the efficient cause.

Some philosophers of history such as Arthur Danto have claimed that "explanations in history and elsewhere" describe "not simply an event—something that happens—but a change".[63] Like many practicing historians, they treat causes as intersecting actions and sets of actions which bring about "larger changes", in Danto's words: to decide "what are the elements which persist through a change" is "rather simple" when treating an individual's "shift in attitude", but "it is considerably more complex and metaphysically challenging when we are interested in such a change as, say, the break-up of feudalism or the emergence of nationalism".[64]

Much of the historical debate about causes has focused on the relationship between communicative and other actions, between singular and repeated ones, and between actions, structures of action or group and institutional contexts and wider sets of conditions.[65] John Gaddis has distinguished between exceptional and general causes (following Marc Bloch) and between "routine" and "distinctive links" in causal relationships: "in accounting for what happened at Hiroshima on August 6, 1945, we attach greater importance to the fact that President Truman ordered the dropping of an atomic bomb than to the decision of the Army Air Force to carry out his orders."[66] He has also pointed to the difference between immediate, intermediate and distant causes.[67] For his part, Christopher Lloyd puts forward four "general concepts of causation" used in history: the "metaphysical idealist concept, which asserts that the phenomena of the universe are products of or emanations from an omnipotent being or such final cause"; "the empiricist (or Humean) regularity concept, which is based on the idea of causation being a matter of constant conjunctions of events"; "the functional/teleological/consequential concept", which is "goal-directed, so that goals are causes"; and the "realist, structurist and dispositional approach, which sees relational structures and internal dispositions as the causes of phenomena".[68]

Law

According to law and jurisprudence, legal cause must be demonstrated to hold a defendant liable for a crime or a tort (i.e. a civil wrong such as negligence or trespass). It must be proven that causality, or a "sufficient causal link", relates the defendant's actions to the criminal event or damage in question. Causation is also an essential legal element that must be proven to qualify for remedy measures under international trade law.[69]

History

Hindu philosophy

Vedic period (c. 1750–500 BCE) literature contains early discussions of karma.[70] Karma is the belief held by Hinduism and other Indian religions that a person's actions cause certain effects in the current life and/or in future life, positively or negatively. The various philosophical schools (darshanas) provide different accounts of the subject. A doctrine of satkaryavada affirms that the effect inheres in the cause in some way. The effect is thus either a real or apparent modification of the cause. A doctrine of asatkaryavada affirms that the effect does not inhere in the cause, but is a new arising. In Brahma Samhita, Brahma describes Krishna as the prime cause of all causes.[71]

Bhagavad-gītā 18.14 identifies five causes for any action (knowing which it can be perfected): the body, the individual soul, the senses, the efforts and the supersoul.

According to Monier-Williams, in the Nyāya causation theory from Sutra I.2.I,2 in the Vaisheshika philosophy, from causal non-existence is effectual non-existence; but, not effectual non-existence from causal non-existence. A cause precedes an effect. Using threads and cloth as a metaphor, the three causes are:

  1. Co-inherence cause: resulting from substantial contact, 'substantial causes', threads are substantial to cloth, corresponding to Aristotle's material cause.
  2. Non-substantial cause: Methods putting threads into cloth, corresponding to Aristotle's formal cause.
  3. Instrumental cause: Tools to make the cloth, corresponding to Aristotle's efficient cause.

Monier-Williams also proposed that Aristotle's and the Nyaya's causality are considered conditional aggregates necessary to man's productive work.[72]

Buddhist philosophy

Karma is the causality principle focusing on 1) causes, 2) actions, 3) effects, where it is the mind's phenomena that guide the actions that the actor performs. Buddhism trains the actor's actions for continued and uncontrived virtuous outcomes aimed at reducing suffering. This follows the Subject–verb–object structure.[citation needed]

The general or universal definition of pratityasamutpada (or "dependent origination" or "dependent arising" or "interdependent co-arising") is that everything arises in dependence upon multiple causes and conditions; nothing exists as a singular, independent entity. A traditional example in Buddhist texts is of three sticks standing upright and leaning against each other and supporting each other. If one stick is taken away, the other two will fall to the ground.[73][74]

Causality in the Chittamatrin Buddhist school approach, Asanga's (c. 400 CE) mind-only Buddhist school, asserts that objects cause consciousness in the mind's image. Because causes precede effects, which must be different entities, then subject and object are different. For this school, there are no objects which are entities external to a perceiving consciousness. The Chittamatrin and the Yogachara Svatantrika schools accept that there are no objects external to the observer's causality. This largely follows the Nikayas approach.[75][76][77][78]

The Vaibhashika (c. 500 CE) is an early Buddhist school which favors direct object contact and accepts simultaneous cause and effects. This is based in the consciousness example which says, intentions and feelings are mutually accompanying mental factors that support each other like poles in a tripod. In contrast, those who reject simultaneous cause and effect say that if the effect already exists, then the cause cannot produce it again in the same way. How past, present and future are accepted is a basis for various Buddhist schools' causality viewpoints.[79][80][81]

All the classic Buddhist schools teach karma. "The law of karma is a special instance of the law of cause and effect, according to which all our actions of body, speech, and mind are causes and all our experiences are their effects."[82]

Western philosophy

Aristotelian

Aristotle identified four kinds of answer or explanatory mode to various "Why?" questions. He thought that, for any given topic, all four kinds of explanatory mode were important, each in its own right. As a result of traditional specialized philosophical peculiarities of language, with translations between ancient Greek, Latin, and English, the word 'cause' is nowadays in specialized philosophical writings used to label Aristotle's four kinds.[27][83] In ordinary language, the word 'cause' has a variety of meanings, the most common of which refers to efficient causation, which is the topic of the present article.

  • Material cause, the material whence a thing has come or that which persists while it changes, as for example, one's mother or the bronze of a statue (see also substance theory).[84]
  • Formal cause, whereby a thing's dynamic form or static shape determines the thing's properties and function, as a human differs from a statue of a human or as a statue differs from a lump of bronze.[85]
  • Efficient cause, which imparts the first relevant movement, as a human lifts a rock or raises a statue. This is the main topic of the present article.
  • Final cause, the criterion of completion, or the end; it may refer to an action or to an inanimate process. Examples: Socrates takes a walk after dinner for the sake of his health; earth falls to the lowest level because that is its nature.

Of Aristotle's four kinds or explanatory modes, only one, the 'efficient cause', is a cause as defined in the leading paragraph of this present article. The other three explanatory modes might be rendered material composition, structure and dynamics, and, again, criterion of completion. The word that Aristotle used was αἰτία. For the present purpose, that Greek word would be better translated as "explanation" than as "cause" as those words are most often used in current English. Another translation of Aristotle is that he meant "the four Becauses" as four kinds of answer to "why" questions.[27]

Aristotle assumed efficient causality as referring to a basic fact of experience, not explicable by, or reducible to, anything more fundamental or basic.

In some works of Aristotle, the four causes are listed as (1) the essential cause, (2) the logical ground, (3) the moving cause, and (4) the final cause. In this listing, a statement of essential cause is a demonstration that an indicated object conforms to a definition of the word that refers to it. A statement of logical ground is an argument as to why an object statement is true. These are further examples of the idea that a "cause" in general in the context of Aristotle's usage is an "explanation".[27]

The word "efficient" used here can also be translated from Aristotle as "moving" or "initiating".[27]

Efficient causation was connected with Aristotelian physics, which recognized the four elements (earth, air, fire, water), and added the fifth element (aether). Water and earth by their intrinsic property gravitas or heaviness intrinsically fall toward, whereas air and fire by their intrinsic property levitas or lightness intrinsically rise away from, Earth's center—the motionless center of the universe—in a straight line while accelerating during the substance's approach to its natural place.

As air remained on Earth, however, and did not escape Earth while eventually achieving infinite speed—an absurdity—Aristotle inferred that the universe is finite in size and contains an invisible substance that holds planet Earth and its atmosphere, the sublunary sphere, centered in the universe. And since celestial bodies exhibit perpetual, unaccelerated motion orbiting planet Earth in unchanging relations, Aristotle inferred that the fifth element, aither, that fills space and composes celestial bodies intrinsically moves in perpetual circles, the only constant motion between two points. (An object traveling a straight line from point A to B and back must stop at either point before returning to the other.)

Left to itself, a thing exhibits natural motion, but can—according to Aristotelian metaphysics—exhibit enforced motion imparted by an efficient cause. The form of plants endows plants with the processes of nutrition and reproduction, the form of animals adds locomotion, and the form of humankind adds reason atop these. A rock normally exhibits natural motion—explained by the rock's material cause of being composed of the element earth—but a living thing can lift the rock, an enforced motion diverting the rock from its natural place and natural motion. As a further kind of explanation, Aristotle identified the final cause, specifying a purpose or criterion of completion in light of which something should be understood.

Aristotle himself explained,

Cause means

(a) in one sense, that as the result of whose presence something comes into being—e.g., the bronze of a statue and the silver of a cup, and the classes which contain these [i.e., the material cause];

(b) in another sense, the form or pattern; that is, the essential formula and the classes which contain it—e.g. the ratio 2:1 and number in general is the cause of the octave—and the parts of the formula [i.e., the formal cause].

(c) The source of the first beginning of change or rest; e.g. the man who plans is a cause, and the father is the cause of the child, and in general that which produces is the cause of that which is produced, and that which changes of that which is changed [i.e., the efficient cause].

(d) The same as "end"; i.e. the final cause; e.g., as the "end" of walking is health. For why does a man walk? "To be healthy", we say, and by saying this we consider that we have supplied the cause [the final cause].

(e) All those means towards the end which arise at the instigation of something else, as, e.g., fat-reducing, purging, drugs, and instruments are causes of health; for they all have the end as their object, although they differ from each other as being some instruments, others actions [i.e., necessary conditions].

— Metaphysics, Book 5, section 1013a, translated by Hugh Tredennick[86]

Aristotle further discerned two modes of causation: proper (prior) causation and accidental (chance) causation. All causes, proper and accidental, can be spoken of as potential or as actual, particular or generic. The same language refers to the effects of causes, so that generic effects are assigned to generic causes, particular effects to particular causes, and actual effects to operating causes.

Averting infinite regress, Aristotle inferred the first mover—an unmoved mover. The first mover's motion, too, must have been caused, but, being an unmoved mover, must have moved only toward a particular goal or desire.

Pyrrhonism

While the plausibility of causality was accepted in Pyrrhonism,[87] it was equally accepted that it was plausible that nothing was the cause of anything.[88]

Middle Ages

In line with Aristotelian cosmology, Thomas Aquinas posed a hierarchy prioritizing Aristotle's four causes: "final > efficient > material > formal".[89] Aquinas sought to identify the first efficient cause—now simply first cause—as everyone would agree, said Aquinas, to call it God. Later in the Middle Ages, many scholars conceded that the first cause was God, but explained that many earthly events occur within God's design or plan, and thereby scholars sought freedom to investigate the numerous secondary causes.[90]

After the Middle Ages

For Aristotelian philosophy before Aquinas, the word cause had a broad meaning. It meant 'answer to a why question' or 'explanation', and Aristotelian scholars recognized four kinds of such answers. With the end of the Middle Ages, in many philosophical usages, the meaning of the word 'cause' narrowed. It often lost that broad meaning, and was restricted to just one of the four kinds. For authors such as Niccolò Machiavelli, in the field of political thinking, and Francis Bacon, concerning science more generally, Aristotle's moving cause was the focus of their interest. A widely used modern definition of causality in this newly narrowed sense was assumed by David Hume.[89] He undertook an epistemological and metaphysical investigation of the notion of moving cause. He denied that we can ever perceive cause and effect, except by developing a habit or custom of mind where we come to associate two types of object or event, always contiguous and occurring one after the other.[12] In Part III, section XV of his book A Treatise of Human Nature, Hume expanded this to a list of eight ways of judging whether two things might be cause and effect. The first three:

  1. "The cause and effect must be contiguous in space and time."
  2. "The cause must be prior to the effect."
  3. "There must be a constant union betwixt the cause and effect. 'Tis chiefly this quality, that constitutes the relation."

And then additionally there are three connected criteria which come from our experience and which are "the source of most of our philosophical reasonings":

  1. "The same cause always produces the same effect, and the same effect never arises but from the same cause. This principle we derive from experience, and is the source of most of our philosophical reasonings."
  2. Hanging upon the above, Hume says that "where several different objects produce the same effect, it must be by means of some quality, which we discover to be common amongst them."
  3. And "founded on the same reason": "The difference in the effects of two resembling objects must proceed from that particular, in which they differ."

And then two more:

  1. "When any object increases or diminishes with the increase or diminution of its cause, 'tis to be regarded as a compounded effect, deriv'd from the union of the several different effects, which arise from the several different parts of the cause."
  2. An "object, which exists for any time in its full perfection without any effect, is not the sole cause of that effect, but requires to be assisted by some other principle, which may forward its influence and operation."

In 1949, physicist Max Born distinguished determination from causality. For him, determination meant that actual events are so linked by laws of nature that reliable predictions and retrodictions can be made from sufficient data about the present. He described two kinds of causation: nomic or generic causation and singular causation. Nomic causality means that cause and effect are linked by more or less certain or probabilistic general laws covering many possible or potential instances; this can be recognized as a probabilized version of Hume's criterion 3. An occasion of singular causation is a particular occurrence of a definite complex of events that are physically linked by antecedence and contiguity, which may be recognized as criteria 1 and 2.[14]

from Grokipedia
Causality is the relationship in which one event, process, state, or object (the cause) influences or contributes to the production of another (the effect), forming a foundational principle across philosophy, science, and other disciplines for explaining change and prediction. This concept encompasses the idea of necessary connections between phenomena, often involving temporal precedence, where causes precede effects, and mechanisms that transmit influence.

In philosophy, causality has been debated since antiquity, with Aristotle developing the first systematic theory in his Physics and Metaphysics (c. 350 BCE), proposing four types of causes—material, formal, efficient, and final—to account for why things exist or occur. David Hume, in the 18th century, revolutionized the discussion by arguing in A Treatise of Human Nature (1739–1740) that causality arises from observed constant conjunctions of events rather than any inherent necessity, emphasizing empirical habit over rational intuition. Immanuel Kant responded by positing causality as a synthetic a priori category of the human mind, essential for organizing experience and enabling scientific knowledge, as outlined in his Critique of Pure Reason (1781). Later philosophers such as John Stuart Mill formalized causality through the "Universal Law of Causation," asserting that every event has a complete cause producing it deterministically via natural laws, as detailed in his A System of Logic (1843). In the 20th century, Hans Reichenbach integrated probability into causality, accommodating quantum indeterminacy in works such as The Direction of Time (1956), while Patrick Suppes advanced a probabilistic theory linking causation to higher conditional probabilities between events in A Probabilistic Theory of Causality (1970). Contemporary views include the Humean regularity approach, where causes are defined by sufficient regularities; the counterfactual approach, focusing on what would happen if the cause were absent; the manipulation approach, emphasizing interventions to produce effects; and the mechanisms approach, viewing causality as organized entities and activities producing changes.

In science, causality underpins methodologies for distinguishing correlation from causation, with applications in fields like physics, where it enforces principles such as light cones in relativity to prevent paradoxes, and in medicine or economics, where randomized controlled trials test causal hypotheses. Modern challenges, including quantum mechanics' uncertainty principle introduced by Werner Heisenberg in 1927, have shifted emphasis toward probabilistic and functional models of causation, influencing statistical tools like causal inference frameworks.

Core Concepts

Definition and Scope

Causality refers to the relationship in which a cause—an event, condition, or factor—produces or influences an effect, another event, state, or object. This binary relation posits that the cause contributes to the occurrence or change in the effect, distinguishing it as a fundamental explanatory principle across domains. In philosophical terms, it embodies the principle that events or states are related such that one (the cause) brings about or influences the other (the effect), often involving chains of consequences.

The term "causality" originates from the Latin causa, meaning "cause," "reason," or "motive," evolving through causalis (relating to a cause) to denote the productive relation between entities by the 16th century. First recorded around 1535 in English translations, it shifted by the 1640s to emphasize the abstract connection of cause to effect, reflecting a progression from motive-based explanations in classical thought to modern relational concepts.

Causality holds interdisciplinary significance, serving as a cornerstone in philosophy for interpreting reality and necessity, in science for modeling predictions and interventions, and in everyday reasoning for attributing outcomes to actions. For instance, philosophers like Hume examined it to question inductive knowledge, while scientists apply it in fields like epidemiology to infer effects from interventions, and individuals use it intuitively for decisions such as linking diet to health. This broad scope underscores its role in bridging abstract theory with practical inference.

Basic examples illustrate causality's forms: striking a match directly causes a flame by igniting the phosphorus, exemplifying immediate production of an effect. In contrast, smoking indirectly causes lung cancer through prolonged cellular damage leading to malignancy, highlighting mediated influences over time. These cases demonstrate how causality involves productive necessity, not mere temporal sequence.

Unlike mere association or coincidence, causality implies a necessary productive link that ensures the effect follows reliably from the cause, ruling out random concurrence. For example, while the sun rising and a rooster crowing often coincide, the former does not necessitate the latter, lacking the influential relation defining true causation. This distinction emphasizes that causality requires evidentiary support beyond observed patterns to affirm productive influence.

Necessary and Sufficient Causes

In the philosophy of causation, a necessary cause for an effect is defined as a condition that must be present for the effect to occur, meaning the effect cannot happen in its absence. Formally, if C is a necessary cause of E, then the occurrence of E implies the occurrence of C (E ⇒ C). Conversely, a sufficient cause is one that, when present, guarantees the effect, regardless of other factors; thus, if C is sufficient for E, then C ⇒ E. These distinctions highlight that many everyday causes are neither strictly necessary nor sufficient on their own, as effects often depend on a confluence of circumstances.

Joint causation arises when multiple factors together form a condition that is both necessary and sufficient for the effect, such that no single factor alone would produce it. For instance, in a scenario where two individuals simultaneously push a heavy object over a threshold, their combined effort is necessary (neither could do it alone) and sufficient (the object moves once both act). This concept addresses scenarios where causation involves interdependent elements, emphasizing the holistic nature of causal complexes rather than isolated events.

To refine these ideas for complex cases, philosopher J. L. Mackie introduced the INUS condition in 1965, defining it as an insufficient but non-redundant part of an unnecessary but sufficient condition for the effect. Here, the "sufficient condition" refers to a minimal set of factors that together guarantee the effect, but this set is "unnecessary" because alternative sets could also produce the effect; the INUS component is "insufficient" alone yet "non-redundant" within its set, meaning removing it would prevent the effect under that specific configuration. A classic example is a short circuit causing a house fire: the short circuit (S) is insufficient by itself but a non-redundant part of the unnecessary sufficient condition consisting of S, oxygen, and flammable material (which together ignite the fire, though other ignition sources like a match could also suffice). This framework captures how ordinary causal explanations pick out focal, contributory elements amid broader possibilities.

Philosophically, necessary and sufficient criteria, including INUS conditions, help resolve overdetermination—cases where multiple independent sufficient causes could each produce the same effect, such as two separate rock throws shattering a single window. By identifying non-redundant parts within specific causal complexes, these criteria avoid positing redundant causation and clarify which factors genuinely contribute without implying metaphysical coincidence or explanatory excess. Counterfactual analysis can briefly test necessity in such scenarios by assessing whether the effect would fail if the candidate cause were absent.
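The logical definitions above lend themselves to direct enumeration. The following minimal Python sketch, using an invented fire() scenario and hypothetical factor names, tests whether each factor in the short-circuit example is by itself necessary or sufficient for the effect:

```python
from itertools import product

# Hypothetical illustration: fire occurs when (short_circuit AND oxygen AND
# flammable) OR (match AND oxygen AND flammable). All names are invented.
def fire(short_circuit, match, oxygen, flammable):
    return (short_circuit and oxygen and flammable) or (match and oxygen and flammable)

factors = ["short_circuit", "match", "oxygen", "flammable"]

def worlds():
    for values in product([False, True], repeat=len(factors)):
        yield dict(zip(factors, values))

def is_necessary(factor):
    # Necessary: the effect never occurs without the factor (E implies C).
    return all(fire(**w) <= w[factor] for w in worlds())

def is_sufficient(factor):
    # Sufficient: the effect always occurs when the factor holds (C implies E).
    return all(w[factor] <= fire(**w) for w in worlds())

for f in factors:
    print(f, "necessary:", is_necessary(f), "sufficient:", is_sufficient(f))
# Expected: oxygen and flammable come out necessary; no single factor is
# sufficient, matching the INUS analysis of the short-circuit example.
```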

Causation Versus Correlation and Conditionals

Correlation refers to a statistical association between two variables where changes in one tend to coincide with changes in the other, but without establishing that one causes the other. For instance, ice cream sales and the number of drownings both increase during summer months, not because consuming ice cream causes drownings, but due to the confounding factor of warmer weather leading to more ice cream consumption and more swimming activities.

A common error in inferring causation from correlation is the fallacy of questionable cause, particularly the post hoc ergo propter hoc variant, which assumes that because one event precedes another, the former must have caused the latter. This fallacy has historical roots in ancient superstitions, where temporal proximity was mistaken for causation, ignoring other factors.

Conditionals, expressed as "if A then B," often rely on material implication in logic, a truth-functional relation where the statement is true unless A is true and B is false, without requiring any causal connection between A and B. In contrast, causal implication demands that A actually produces or influences B through some mechanism, distinguishing it from mere logical entailment.

To help distinguish potential causation in time-series data, the Granger causality test provides a statistical approach by assessing whether past values of one variable improve predictions of another beyond what its own past values alone can achieve, thus suggesting directional precedence without proving true causation.

Distinguishing causation from correlation and conditionals can be guided by criteria such as those proposed by Bradford Hill, including the strength of the association (stronger links are more likely causal), consistency across studies, and temporality (cause must precede effect), among others like specificity, biological gradient, plausibility, coherence, experiment, and analogy. Probabilistic causation offers a complementary perspective by quantifying how much a cause raises the probability of an effect, helping to mitigate risks of mistaking correlations for causal links.
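As a concrete illustration of the Granger test described above, the sketch below applies the statsmodels implementation to synthetic data; the data-generating process and all coefficients are invented for the example:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Minimal sketch (synthetic data, invented coefficients): y depends on
# lagged x, so x should "Granger-cause" y in the predictive sense.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

# statsmodels tests whether the SECOND column helps predict the first;
# it prints F- and chi-squared tests per lag and returns them in a dict.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
p_value = results[1][0]["ssr_ftest"][1]   # lag-1 F-test p-value
print(f"lag-1 p-value: {p_value:.3g}  (small => x helps predict y)")
```

A small p-value here indicates predictive precedence only; as the text notes, it does not by itself establish true causation.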

Philosophical Theories

Metaphysical and Ontological Foundations

In metaphysics, causality is often regarded as a fundamental principle underlying the structure of existence, governing how entities interact and change within reality. David Hume challenged the notion of necessary connections in causation, arguing that our idea of cause and effect arises not from observing any inherent necessity but from habitual associations formed through repeated experiences of conjunction between events. In contrast, Immanuel Kant posited causality as a synthetic a priori category of understanding, essential for organizing sensory experience into coherent objective sequences of events, thereby making it a necessary condition for the possibility of empirical knowledge and the unity of nature. These views highlight causality's role as a cornerstone of metaphysical inquiry, bridging the apparent flux of phenomena with principles that ensure intelligibility in being.

Ontological debates surrounding causality center on whether causal relations possess objective reality or are merely conceptual or linguistic constructs. Realists maintain that causal powers are intrinsic properties of objects, endowing them with dispositions to produce effects independently of human cognition, thus forming part of the furniture of the world. Nominalists, however, contend that such relations lack independent ontological status, viewing them instead as abstractions or labels imposed by language to describe patterns of events without positing underlying powers. This tension underscores broader questions about the nature of being, where realism affirms causality's embedding in the intrinsic structure of entities, while nominalism reduces it to a tool for categorization, avoiding commitments to unobservable metaphysical posits.

The interplay between causality and determinism further illuminates these foundations, particularly in relation to free will. Hard determinism asserts that every event, including human actions, has a sufficient cause determined by prior conditions, rendering the universe a closed chain of necessities without room for genuine alternatives. Compatibilism, by contrast, reconciles determinism with free will by arguing that agency consists in actions arising from internal motivations unhindered by external coercion, even within a causally necessitated framework, thus preserving moral responsibility as aligned with one's character and reasons. This debate positions causality as the mechanism through which ontological commitments to order in reality either preclude or accommodate volitional freedom.

Geometrically, causality manifests in the ontology of spacetime as boundaries defined by light cones, which delineate the limits of influence between events in relativistic frameworks. These cones represent the causal past and future accessible to any point, ensuring that interactions respect the invariant structure of spacetime without implying instantaneous or superluminal propagation. Such structures ontologically ground causality by embedding it in the geometry of existence, where the separation of timelike, spacelike, and lightlike paths enforces a directional order inherent to reality itself.

In the realm of volition, causality links intentional action to agency by positing acts of will as personal causal relations between an agent and their behaviors, distinct from event-based causation. Volition operates as a direct exercise of the self's capacity to initiate change, grounding intentionality in the agent's substantive role without reducing it to mechanistic sequences. This metaphysical perspective frames agency as an ontological primitive, where causal efficacy in deliberate actions affirms the reality of purposeful being amid broader deterministic influences.

Epistemological Approaches

Epistemological approaches to causality examine the methods and justifications for acquiring knowledge about causal relations, distinguishing between the ontological status of causation and the epistemic means to ascertain it. These approaches address how humans infer causal connections from observations, emphasizing the challenges in moving from empirical data to justified beliefs about necessity and regularity. Central to this inquiry is the tension between empirical reliability and logical certainty, where causal knowledge is often provisional rather than absolute.

One primary epistemological method for establishing causality is induction, which involves generalizing from repeated observations of events occurring together to infer a causal link. For instance, observing that bread consumption consistently precedes satiety leads to the inductive belief that eating bread causes fullness. This process relies on patterns of constant conjunction, where events A and B repeatedly follow one another, suggesting A causes B. However, induction faces profound challenges, most notably Hume's problem of induction, which questions the justification for assuming that future instances will conform to past observations. Hume argued that no amount of observed regularities can logically guarantee their continuation, as the principle of uniformity of nature itself cannot be proven without circular reasoning—relying on induction to justify induction. This skepticism undermines the epistemic warrant of inductive causal inferences, rendering them habitual rather than rationally compelled.

To supplement induction, inference to the best explanation (IBE) offers another key epistemological strategy for causal knowledge, positing that among competing hypotheses, the one providing the most comprehensive and unifying account of observed data is likely true. In causal contexts, IBE favors explanations invoking mechanisms or processes that account for data patterns better than mere correlations; for example, hypothesizing a viral infection as the cause of a symptom cluster explains both the onset and potential spread more adequately than coincidental associations. Proponents argue that IBE is ampliative, allowing progress beyond observed evidence by selecting hypotheses with greater explanatory power, though critics note its reliance on subjective assessments of "bestness," which may introduce bias. Despite these concerns, IBE remains influential in scientific reasoning, where causal hypotheses are evaluated for their ability to predict and unify diverse phenomena.

Skeptical perspectives, particularly Pyrrhonian doubt, further complicate epistemological claims about causality by questioning the necessity inferred from constant conjunctions. Pyrrhonian skeptics, following Sextus Empiricus, argue that causal necessity cannot be known because appearances of connection may stem from perceptual illusions or unexamined assumptions, urging suspension of judgment (epoché) on whether events are truly linked by necessity or merely appear so. This doubt targets the assumption that repeated conjunctions imply an underlying power or force, asserting that no evidence compels belief in causal invariance across all cases, as alternative interpretations—such as coincidental sequences—remain equally viable. Such skepticism promotes intellectual tranquility by avoiding dogmatic commitments to causal explanations, though it risks paralyzing practical decision-making.

Experiments and interventions play a crucial role in providing epistemic justification for causal claims, offering controlled means to test hypothesized relations beyond passive observation. By manipulating a potential cause while holding other variables constant, experiments isolate effects, thereby warranting inferences about directionality and necessity; for example, randomized trials in medicine intervene on treatments to attribute outcomes to the intervention rather than confounders. This method enhances reliability by breaking natural correlations and revealing counterfactual dependencies, thus grounding causal beliefs in direct evidential support. Interventions thus elevate epistemic warrant from inductive conjecture to more robust confirmation, though their feasibility varies across domains.

In modern epistemology, Bayesian approaches address causal belief updating by treating degrees of belief as probabilities that revise in light of new evidence, without presupposing deterministic necessity. Bayesianism models causal inference as conditionalizing prior beliefs on observational or interventional data, allowing gradual strengthening or weakening of causal hypotheses; for instance, initial skepticism about a drug's efficacy may shift toward confidence upon accumulating positive trial results. This framework accommodates uncertainty in causal knowledge, emphasizing coherence and predictive success over absolute proof, and integrates inductive and explanatory elements into a probabilistic structure. Bayesian methods highlight how causal beliefs evolve rationally through evidence accumulation, providing a flexible tool for epistemic justification in uncertain environments.
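To make the updating process concrete, here is a minimal sketch of Bayesian belief revision about a causal hypothesis; the hypothesis, prior, and likelihoods are all invented for illustration:

```python
# Minimal Bayesian-updating sketch. H = "the drug causally improves recovery".
# The prior and likelihoods below are assumptions, not empirical values.
prior = 0.2                   # initial credence in the causal hypothesis H
p_success_given_h = 0.7       # P(trial success | H true)
p_success_given_not_h = 0.3   # P(trial success | H false)

belief = prior
for trial_succeeded in [True, True, False, True]:
    if trial_succeeded:
        like_h, like_not_h = p_success_given_h, p_success_given_not_h
    else:
        like_h, like_not_h = 1 - p_success_given_h, 1 - p_success_given_not_h
    # Bayes' rule: posterior is proportional to likelihood times prior.
    belief = like_h * belief / (like_h * belief + like_not_h * (1 - belief))
    print(f"updated credence in H: {belief:.3f}")
```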

Counterfactual and Manipulation Theories

Counterfactual theories of causation, prominently developed by David Lewis in his 1973 paper, analyze causal relations in terms of hypothetical scenarios where the cause does not occur. According to Lewis, event C causes event E if and only if E counterfactually depends on C, meaning that had C not occurred, E would not have occurred (or, more precisely, in the closest possible world where C is absent, E is also absent). This analysis extends to chains of causation through the ancestral relation of counterfactual dependence, allowing for transitive causal chains. Lewis's framework draws on his semantics for counterfactuals, where similarity between possible worlds determines the "closest" alternatives, and it aligns with basic structural equation models, such as a simple directed graph X → Y, where intervening on X changes Y while holding other variables fixed.

In contrast, manipulation theories, as articulated by James Woodward in his 2003 book, define causation through the effects of hypothetical interventions. A variable X causes Y if there exists an intervention on X that changes the value of Y while the relationship remains invariant under such manipulations. Invariance here means the causal generalization holds across a range of interventions, emphasizing exploitability for prediction and control rather than mere dependence. Woodward's interventionist account applies to both token and type-level causation, using structural equations to model systems where interventions sever incoming arrows to the manipulated variable, thus focusing on modular, stable relationships.

The key differences between these theories lie in their orientation: Lewis's counterfactual approach is primarily descriptive, capturing what would happen in hypothetical non-actual scenarios to reveal intrinsic causal connections between events, whereas Woodward's manipulation theory is more prescriptive, prioritizing relationships that inform policy and intervention in actual systems. Counterfactuals emphasize similarity of worlds for dependence, potentially leading to intrinsic metaphysics of causation, while manipulations stress empirical testability and invariance, making them suitable for scientific practice but less focused on singular events.

Illustrative examples highlight these distinctions. In a manipulation context, performing surgery (X) on a patient causes recovery (Y) if intervening to perform or withhold the surgery reliably alters the recovery outcome, assuming the surgical process remains invariant. Conversely, a counterfactual example posits that smoking (C) causes shorter lifespan (E) because, had the individual not smoked, they would have lived longer, relying on dependence in the nearest possible world without smoking.

Both theories face significant criticisms, particularly regarding transitivity and preemption. In Lewis's framework, preemption—where a backup cause would have produced the effect if the actual cause were absent—undermines direct counterfactual dependence for the actual cause, as the effect still occurs in the closest world without it; transitivity issues arise in chains where intermediate links fail dependence due to overdetermination. Woodward's theory similarly struggles with preemption in non-modular systems, where interventions on one preempting variable do not isolate the actual cause, and it rejects full transitivity, as intervening on A to affect B does not guarantee manipulability from B to a downstream C, challenging intuitions about causal chains. Probabilistic extensions of counterfactuals address uncertainty by weighting dependencies over possible worlds, but they do not fully resolve these structural problems.
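The interventionist reading can be illustrated with a toy structural causal model; the variable names, coefficients, and noise distributions below are assumptions made for the sketch, and the do-operation is modeled by replacing the equation for X while leaving the equation for Y invariant:

```python
import numpy as np

# Toy structural causal model (invented): X := U_x ; Y := 2*X + U_y.
# An intervention do(X = x0) overrides the X-equation; the Y-equation,
# being invariant, is left untouched, in the spirit of interventionism.
rng = np.random.default_rng(1)

def sample(n, do_x=None):
    u_x = rng.normal(size=n)
    u_y = rng.normal(size=n)
    x = np.full(n, do_x, dtype=float) if do_x is not None else u_x
    y = 2.0 * x + u_y          # this structural equation stays fixed
    return x, y

_, y_obs = sample(10_000)              # observational regime
_, y_do0 = sample(10_000, do_x=0.0)    # interventional regime do(X=0)
_, y_do1 = sample(10_000, do_x=1.0)    # interventional regime do(X=1)
print("E[Y]           =", round(y_obs.mean(), 2))
print("E[Y | do(X=0)] =", round(y_do0.mean(), 2))
print("E[Y | do(X=1)] =", round(y_do1.mean(), 2))
# The gap of about 2 between the two interventions reflects the causal
# effect; a counterfactual reading asks what Y would have been had X differed.
```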

Scientific and Formal Theories

Probabilistic and Causal Calculus Models

Probabilistic causation formalizes causality in terms of probabilities, positing that a cause increases the probability of its effect relative to its absence. In Patrick Suppes' seminal theory, a cause C occurring at time t1 of an event E at a later time t2 must satisfy two conditions: prima facie causation, where the probability of E given C exceeds the unconditional probability of E (i.e., P(E_t2 | C_t1) > P(E_t2)), and temporal precedence, ensuring C occurs before E. This approach addresses deterministic limitations by accommodating stochastic processes, though it faces challenges like spurious correlations in cases of common causes.

Causal diagrams, often represented as directed acyclic graphs (DAGs), encode causal structures where nodes denote variables and directed edges indicate direct causal influences. In these graphs, d-separation provides a graphical criterion to determine conditional independencies: two sets of variables X and Y are d-separated by a set Z if every path between them is blocked, meaning no active path transmits probabilistic dependencies when conditioning on Z. This criterion, rooted in Bayesian network theory, enables efficient computation of joint distributions via the Markov condition, which states that each variable is independent of its non-descendants given its parents. Structural causal models (SCMs) extend this framework by specifying functional relationships among variables, typically as Y = f(X, U), where Y is the effect, X the cause, f a deterministic function, and U exogenous noise capturing unobserved factors, with P(U) independent of X. These models distinguish observational from interventional distributions, underpinning causal inference.

Judea Pearl's do-calculus, introduced in 1995, provides three axiomatic rules to compute interventional probabilities P(Y | do(X)), the distribution after forcing X, from observational data P(Y | X), without simulating interventions. For instance, Rule 1 equates P(Y | do(X), Z, W) = P(Y | do(X), Z) if Y is d-separated from W given X and Z in the graph post-intervention.

In applications, do-calculus facilitates handling confounding through the backdoor criterion: a set Z identifies the causal effect of X on Y if no node in Z descends from X and Z blocks all backdoor paths (those entering X) between X and Y. This allows adjustment via summation: P(Y | do(X)) = Σ_z P(Y | X, z) P(z), mitigating biases from unobserved common causes. Such tools have broad utility in non-experimental settings, enabling causal queries from probabilistic data.
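The backdoor adjustment can be worked through on a toy discrete model; every probability below is invented, and the point of the sketch is that the adjusted quantity P(Y | do(X)) differs from the confounded observational conditional P(Y | X):

```python
# Toy confounded model (all numbers invented): Z -> X, Z -> Y, X -> Y.
p_z = {0: 0.6, 1: 0.4}                                      # P(Z=z)
p_x_given_z = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}    # P(X=x | Z=z)
p_y_given_xz = {(x, z): {0: 1 - p, 1: p}                    # P(Y=1 | X=x, Z=z) = p
                for (x, z), p in {(0, 0): 0.1, (0, 1): 0.4,
                                  (1, 0): 0.5, (1, 1): 0.8}.items()}

def p_y_do_x(y, x):
    # Backdoor formula: P(y | do(x)) = sum_z P(y | x, z) P(z)
    return sum(p_y_given_xz[(x, z)][y] * p_z[z] for z in p_z)

def p_y_given_x(y, x):
    # Observational conditional, biased by confounding through Z.
    num = sum(p_y_given_xz[(x, z)][y] * p_x_given_z[z][x] * p_z[z] for z in p_z)
    den = sum(p_x_given_z[z][x] * p_z[z] for z in p_z)
    return num / den

for x in (0, 1):
    print(f"P(Y=1 | do(X={x})) = {p_y_do_x(1, x):.3f}   "
          f"P(Y=1 | X={x}) = {p_y_given_x(1, x):.3f}")
```

With these invented numbers the observational P(Y=1 | X=1) of roughly 0.71 overstates the interventional P(Y=1 | do(X=1)) = 0.62, because Z raises the probability of both X and Y.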

Process and Derivation Theories

Process theories of causality conceptualize causation as the physical transmission of influences or conserved quantities through spatiotemporal processes, emphasizing the actual mechanisms by which causes produce effects rather than abstract relations between events. In this view, causal processes are continuous worldlines along which specific marks—such as structural modifications or invariant quantities—are transmitted from cause to effect, enabling local identification of causal connections. Philosopher Wesley Salmon developed this approach in detail, arguing that genuine causal processes can be distinguished from pseudo-processes by their ability to transmit such marks without alteration, as opposed to mere correlations that lack this physical propagation. For instance, in a billiard ball collision, the momentum transferred from the cue ball to the object ball represents a conserved quantity propagated along the causal process, illustrating how energy flow physically links the initiating event to the outcome.

A key distinction in process theories lies between local markability, which focuses on the intrinsic transmission within individual processes, and global patterns that might involve broader probabilistic dependencies. Salmon's framework prioritizes the former, positing that causality manifests through interactions where processes intersect and exchange conserved quantities, such as in particle collisions where invariants like charge or lepton number are preserved. This local emphasis aligns closely with fundamental physics, where conservation laws underpin causal derivations without invoking counterfactuals or interventions, though it remains compatible with manipulation-based accounts that test causality through physical alterations. Critics, however, argue that process theories struggle with "absence causes," such as a drought resulting from the lack of rain, since no actual mark or quantity is transmitted in cases of omission; the theory excels at positive transmissions but falters on negative or preventive causation.

In contrast, derivation theories frame causality through logical deduction, where effects are derived from initial conditions and general laws, treating causation as an explanatory relation embedded in scientific inference. Carl Hempel's deductive-nomological (DN) model exemplifies this, positing that a phenomenon is causally explained if it can be logically deduced from a set of universal laws and particular statements about antecedent conditions, thereby deriving the effect nomologically from its causes. This approach views causality not as a brute physical process but as the subsumption of events under covering laws, as in deriving planetary motion from Newton's laws and initial positions, emphasizing explanatory power over mechanistic details. While derivation theories provide a formal structure for scientific understanding, they have been critiqued for overemphasizing logical form at the expense of capturing the dynamic, processual nature of causation in empirical contexts.

Structure Learning Algorithms

Structure learning algorithms in causal inference aim to automatically infer directed acyclic graphs (DAGs) representing causal structures from observational data, relying on assumptions such as the causal Markov condition and faithfulness. These methods are essential for discovering causal relationships without experimental interventions, enabling applications in fields requiring data-driven hypothesis generation. Broadly, they fall into two categories: constraint-based approaches, which use conditional independence tests to prune edges, and score-based approaches, which optimize a scoring function to select the best-fitting structure.

Constraint-based learning identifies causal structures by testing for conditional independencies in the data, leveraging the principle that non-adjacent variables in a DAG are conditionally independent given their parents. The Peter-Clark (PC) algorithm, developed by Peter Spirtes and Clark Glymour, exemplifies this approach: it begins with a complete undirected graph and iteratively removes edges based on statistical tests of conditional independence, starting with zero conditioning sets and increasing the size as needed, ultimately orienting edges to form a DAG consistent with the data. This method assumes causal sufficiency (no latent confounders) and is implemented in software like Tetrad, a suite for graphical causal modeling that supports simulation, estimation, and search for such structures. The PC algorithm's efficiency stems from its skeleton-building phase, which prunes edges using tests like the chi-squared statistic for discrete data or Fisher's Z for continuous data, followed by orientation rules to avoid cycles and resolve v-structures (colliders).

Score-based learning evaluates candidate DAGs using a score that balances model fit and complexity, searching the space of possible structures to maximize this score. The Bayesian Information Criterion (BIC) is a widely used score, penalizing overfitting by subtracting a term proportional to the number of parameters and sample size logarithm, defined as BIC = -2 log L + k log n, where L is the likelihood, k the number of parameters, and n the sample size. Search procedures like hill-climbing start from an initial graph (e.g., empty or random) and greedily add, delete, or reverse edges to improve the score until a local maximum is reached, often yielding a Markov equivalence class rather than a unique DAG. The Greedy Equivalence Search (GES) algorithm refines this by operating on equivalence classes to enhance efficiency and accuracy in high-dimensional settings.

Key challenges in structure learning include the faithfulness assumption, which posits that all conditional independencies in the data are entailed by the graph's d-separation structure, without probabilistic cancellations that mask true dependencies; violations, though of measure zero in parameter space, can lead to incorrect inferences in finite samples. Handling latent variables exacerbates this, as standard methods like PC assume all relevant variables are observed; extensions such as the Fast Causal Inference (FCI) algorithm incorporate latent confounder patterns but increase computational complexity. These issues underscore the need for robustness checks and hybrid methods combining constraints and scores.

A representative application is inferring gene regulatory networks (GRNs) from gene expression data, where structure learning algorithms identify causal interactions between transcription factors and target genes.
For instance, constraint-based methods like PC have been applied to single-cell RNA sequencing data to reconstruct GRNs by detecting conditional independencies that reveal regulatory pathways, aiding in understanding cellular responses to perturbations.

Modern extensions integrate structure learning with machine learning techniques, addressing scalability for high-dimensional data. The NOTEARS algorithm formulates DAG learning as a continuous optimization problem, minimizing a score (e.g., least squares) subject to an acyclicity constraint enforced via a smooth function (the trace of the matrix exponential of the elementwise square of the adjacency matrix), solvable with augmented Lagrangian methods; this avoids discrete search, enabling gradient-based optimization and outperforming hill-climbing on synthetic benchmarks with thousands of variables. Post-2020 advances, such as NOTEARS variants, further incorporate deep learning for nonlinear relationships, enhancing causal discovery in AI systems for tasks like interpretable model building. More recent advances as of 2025 integrate large language models (LLMs) into causal discovery frameworks, using LLM-generated priors aligned with data-driven methods to improve structure recovery and handle complex priors, as shown in evaluations on real-world datasets.
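As a small self-contained illustration of the continuous acyclicity constraint NOTEARS relies on, the sketch below evaluates h(W) = tr(exp(W ∘ W)) − d for an acyclic and a cyclic weight matrix; the matrices are invented, and a full NOTEARS run would minimize a fit score subject to h(W) = 0:

```python
import numpy as np
from scipy.linalg import expm

# NOTEARS acyclicity measure: h(W) = tr(exp(W * W)) - d, where W * W is the
# elementwise (Hadamard) square; h is zero exactly when W encodes a DAG.
def notears_h(W):
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

dag = np.array([[0.0, 1.5, 0.0],
                [0.0, 0.0, 2.0],
                [0.0, 0.0, 0.0]])      # chain X1 -> X2 -> X3 (acyclic)
cyclic = dag.copy()
cyclic[2, 0] = 0.5                     # add X3 -> X1, creating a cycle

print("h(DAG)    =", round(notears_h(dag), 6))     # ~0: acyclic
print("h(cyclic) =", round(notears_h(cyclic), 6))  # > 0: cycle detected
```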

Applications Across Disciplines

Physics and Engineering

In classical physics, causality is fundamentally embodied in Newton's laws of motion, which describe deterministic relationships where forces produce accelerations in a clear cause-and-effect manner. Newton's second law, expressed as F = ma, where F is the net force acting on an object of mass m and a the resulting acceleration, exemplifies this by quantifying how an applied force directly causes a change in motion. This law underpins the predictive power of classical mechanics, allowing the future state of a system to be determined solely from its initial conditions and the causal influences of forces, without retroactive effects. Such determinism aligns with the principle that effects follow causes in a temporal sequence, forming the basis for engineering applications like trajectory predictions in ballistics.

In the framework of special relativity, causality is preserved through the spacetime structure defined by light cones, which delineate the boundaries of possible causal influences. For any event, the future light cone encompasses all points that can be reached by signals traveling at or below the speed of light, while the past light cone includes points from which such signals can arrive; events outside these cones are spacelike separated and cannot causally interact. This causal structure enforces the prohibition of faster-than-light signaling, ensuring that no information or influence propagates acausally, thereby maintaining the relativistic invariance of cause preceding effect in all inertial frames. Violations of this would lead to paradoxes, such as effects preceding causes in some reference frames, underscoring relativity's role in safeguarding physical causality.

Quantum mechanics introduces debates on causality due to phenomena like entanglement, where Bell's theorem reveals that quantum correlations cannot be explained by local hidden variables, implying non-local influences that challenge classical intuitions. Specifically, Bell's inequalities are violated in experiments, demonstrating "spooky action at a distance" without faster-than-light signaling, thus preserving relativistic causality while questioning locality. Local hidden variable theories, which would maintain strict causal locality, fail to reproduce quantum predictions, prompting interpretations like Bohmian mechanics that restore determinism but introduce non-local guidance. Probabilistic models address quantum uncertainties by incorporating inherent randomness, yet standard quantum theory upholds no-signaling causality.

In engineering, particularly control systems, causality ensures that system outputs depend only on current and past inputs, enabling real-time stability and predictability. Feedback loops, a cornerstone of control theory, exemplify this by using error signals from past states to adjust inputs, creating stable cause-effect chains that counteract disturbances without anticipating future inputs. For instance, in proportional-integral-derivative (PID) controllers, the output is computed causally from historical error data to regulate processes like temperature or velocity, preventing instability from non-causal dependencies. This causal framework is essential for designing robust systems in automation and robotics, where non-causal elements could lead to impractical or unstable implementations.
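A minimal discrete-time PID sketch makes the causal structure explicit: each control output is computed only from the current error and stored past errors, never from future inputs. The gains, time step, and toy plant below are invented for illustration:

```python
# Minimal causal PID sketch: the output at each step depends only on the
# current and past error samples. Gains and setpoint are invented values.
def pid_step(error, state, kp=2.0, ki=0.5, kd=1.0, dt=0.1):
    integral = state["integral"] + error * dt          # accumulates past error
    derivative = (error - state["prev_error"]) / dt    # uses only the previous sample
    state.update(integral=integral, prev_error=error)
    return kp * error + ki * integral + kd * derivative

setpoint, measurement = 100.0, 20.0
state = {"integral": 0.0, "prev_error": 0.0}
for step in range(5):
    u = pid_step(setpoint - measurement, state)
    measurement += 0.02 * u        # toy first-order plant response
    print(f"step {step}: control={u:.1f}, measurement={measurement:.1f}")
```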
Recent developments in the 2020s have explored quantum causality through experiments demonstrating indefinite causal order (ICO), where quantum processes occur in a superposition of different causal sequences rather than a fixed order. In 2017, optical implementations verified ICO using a quantum switch, where two channels interfere in an order-indefinite manner, enhancing information processing tasks like discrimination of unitary operations. Building on this, 2023 experiments achieved device-independent certification of ICO, confirming its presence through correlations alone, without trusting device internals, and highlighting advantages in quantum communication protocols. These findings, supported by theoretical frameworks, suggest ICO as a resource for quantum computing, challenging classical causal hierarchies while remaining compatible with no-signaling principles. Further advancements in 2024 and 2025 have included comprehensive reviews of experimental techniques and new explorations of ICO applications, such as in quantum metrology and theoretical extensions to knot invariants.

Biology, Medicine, and Epidemiology

In biology, natural selection functions as a causal process whereby heritable variations in traits among individuals within a population lead to differential survival and reproductive success, thereby driving evolutionary change over generations. This causality is evident in how specific phenotypic variations, such as beak size in Darwin's finches, directly influence foraging efficiency and thus fitness in response to environmental pressures. Seminal analyses emphasize that natural selection explains adaptations not merely as correlations but as outcomes of causal pathways linking traits to survival probabilities.

In medicine, establishing causality for interventions like drug efficacy relies on frameworks such as the Bradford Hill criteria, which evaluate associations through aspects including strength of effect, consistency across studies, temporality, biological gradient, plausibility, coherence, specificity, experiment, and analogy. These criteria, originally developed in the context of environmental exposures, guide the interpretation of randomized controlled trials (RCTs), where randomization minimizes confounding and selection bias to provide robust causal evidence for treatment effects. For instance, RCTs demonstrating aspirin's reduction in cardiovascular events satisfy temporality and experimental criteria, confirming causality when supported by dose-response relationships and biological plausibility.

Epidemiology employs concepts like population attributable risk (PAR) to quantify the proportion of disease burden causally linked to a specific exposure, adjusting for confounders to isolate true effects. In the case of smoking and lung cancer, PAR estimates indicate that 85% of cases among women are attributable to ever-smoking, with hazard ratios escalating from 13.9 for current smokers to over 21 for heavy smokers, after adjusting for confounders such as age, education, and alcohol consumption. Landmark studies, like those by Doll and Hill, established this causal link by demonstrating temporal precedence (smoking preceding cancer onset) and ruling out alternative explanations through cohort comparisons, though early analyses highlighted potential confounders like occupational exposures that were later controlled.

A key challenge in these fields is multicausality, where diseases arise from complex interactions between genetic predispositions and environmental factors, complicating isolation of individual causes. For example, in Crohn's disease, genetic variants contribute about 50% heritability in monozygotic twins, but environmental triggers like gut microbiome alterations synergistically amplify risk through non-additive gene-environment (G×E) effects. Similarly, atopic dermatitis involves filaggrin (FLG) gene mutations impairing skin barrier function, with allergens as environmental co-factors exacerbating inflammation and progression to broader allergic conditions. These interactions underscore the need for integrative models to disentangle hierarchical causation in living systems.

Modern approaches in genomics advance causal inference through Mendelian randomization (MR), which uses genetic variants as instrumental variables to mimic randomization and infer causality between exposures and outcomes, assuming variants are associated with the exposure, independent of confounders, and affect outcomes only via the exposure.
For instance, MR analyses employing single nucleotide polymorphisms (SNPs) linked to low-density lipoprotein (LDL) cholesterol levels have causally linked elevated LDL to increased coronary heart disease risk, providing evidence beyond observational associations. This method has been pivotal in genomics for validating drug targets, such as PCSK9 inhibitors, by leveraging genome-wide association study data to rule out reverse causation and pleiotropy.
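The simplest MR calculation, the Wald ratio, is a one-line computation once the two SNP effect estimates are in hand; the summary statistics below are invented stand-ins for real GWAS estimates:

```python
# Wald-ratio estimator used in two-sample Mendelian randomization.
# Both effect sizes are invented for illustration; real analyses take
# them from GWAS summary data for the same variant.
beta_snp_exposure = 0.12   # SNP effect on LDL cholesterol (per allele)
beta_snp_outcome = 0.030   # SNP effect on coronary heart disease (log odds)

# Causal effect of exposure on outcome = beta_outcome / beta_exposure,
# valid only under the instrumental-variable assumptions listed above.
wald_ratio = beta_snp_outcome / beta_snp_exposure
print(f"estimated causal effect (log odds per unit LDL): {wald_ratio:.3f}")
```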

Social Sciences: Psychology, Statistics, and Economics

In psychology, causality is central to understanding how individuals attribute causes to behaviors and events, often through intuitive or "naive" psychological processes. Fritz Heider's seminal work introduced attribution theory, positing that people act as intuitive scientists who infer causal relations between actions and their underlying dispositions or environmental factors to make sense of social interactions. Heider emphasized the balance between internal (dispositional) and external (situational) attributions, laying the foundation for later models that explore biases like the fundamental attribution error, where observers overemphasize personal traits over contextual influences.

Experimental paradigms in psychology further illustrate causal mechanisms in human behavior, particularly obedience to authority. Stanley Milgram's 1963 obedience study demonstrated that situational pressures from an authority figure could causally induce participants to administer what they believed were harmful electric shocks to a learner, with 65% complying fully despite ethical distress, highlighting how perceived legitimacy and proximity to the victim modulate causal effects on compliance. This experiment underscored the power of experimental manipulation to isolate causal factors in social influence, influencing subsequent research on conformity and ethical decision-making.

In statistics, causal inference relies on designs that approximate randomization to identify effects amid confounding variables. Randomized controlled trials (RCTs), pioneered by Ronald Fisher in the 1920s and 1930s, establish causality by randomly assigning units to treatment or control groups, ensuring balance in unobserved factors and enabling unbiased estimation of average treatment effects through statistical tests like Fisher's exact test. For instance, in agricultural experiments, Fisher advocated randomization to attribute yield differences solely to interventions, a principle now standard in clinical and social trials for robust causal claims.

When randomization is infeasible, quasi-experimental methods like regression discontinuity designs (RDD) exploit sharp cutoffs in assignment rules to infer causality. Introduced by Thistlethwaite and Campbell in 1960, RDD compares outcomes just above and below a threshold—such as scholarship eligibility based on test scores—assuming continuity in potential outcomes absent the treatment, thus isolating local causal effects through regression models fitted to the discontinuity. This approach has been widely adopted in policy evaluation, providing credible evidence where full randomization is ethically or practically impossible.

Economics applies causal methods to assess policy impacts on aggregate behavior and markets, often using observational data to mimic experimental conditions. Difference-in-differences (DiD) estimators, for example, evaluate interventions by comparing changes over time between treated and control groups, assuming parallel trends absent treatment. David Card and Alan Krueger's 1994 study on New Jersey's minimum wage increase from $4.25 to $5.05 per hour used DiD to compare fast-food employment in New Jersey (treated) and neighboring Pennsylvania (control), finding no employment reduction—and possibly a slight increase—challenging traditional labor supply models. This method has become a cornerstone for causal policy analysis, applied to topics like education reforms and health interventions.
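The DiD arithmetic itself is straightforward; this sketch uses illustrative group means loosely patterned on the Card–Krueger setting, not the study's actual estimates:

```python
# Difference-in-differences sketch with illustrative (not actual) means,
# patterned on the Card-Krueger design: treated = NJ, control = PA.
mean = {
    ("treated", "before"): 20.4, ("treated", "after"): 21.0,
    ("control", "before"): 23.3, ("control", "after"): 21.2,
}

# DiD = (treated after - treated before) - (control after - control before);
# under the parallel-trends assumption this isolates the policy effect.
did = ((mean[("treated", "after")] - mean[("treated", "before")])
       - (mean[("control", "after")] - mean[("control", "before")]))
print(f"estimated treatment effect: {did:+.1f} employees per store")
```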
Instrumental variables (IV) address endogeneity in economic data by leveraging exogenous instruments that affect treatment but not outcomes directly. The basic IV setup requires an instrument correlated with the endogenous treatment (relevance) but independent of the error term (exogeneity), yielding the local average treatment effect (LATE) for compliers—those whose treatment status changes with the instrument. Joshua Angrist and Guido Imbens formalized this in their 1990s work, showing IV estimates the causal effect for subpopulations influenced by the instrument. A classic application is using lottery wins as instruments for income shocks; studies exploiting random Swedish lottery assignments have estimated that unearned income reduces labor supply modestly, particularly among older workers, by providing exogenous variation in household resources.

Across these fields, causal inference from observational data faces persistent challenges like endogeneity—where treatment correlates with unobserved confounders—and selection bias, which arises when sample composition differs systematically between groups, inflating or deflating estimated effects. Endogeneity violates assumptions in standard regressions, as regressors may capture reverse causation or omitted variables, while selection bias occurs in non-random samples, such as self-selected program participants with unmeasured motivation. Techniques like IV and RDD mitigate these, but require careful validity checks, as failure to satisfy instrument assumptions can propagate bias. In social sciences, these issues underscore the need for sensitivity analyses to ensure causal claims withstand scrutiny.

Historical Evolution

Ancient and Eastern Traditions

In ancient Hindu philosophy, the concept of karma emerged as a foundational principle of causality, positing that every action, intention, and thought generates corresponding consequences that shape an individual's future experiences across cycles of rebirth. This law of cause and effect is articulated in the Upanishads, dating to approximately 800 BCE, where karma is described as an impersonal mechanism linking moral actions to ethical outcomes, independent of divine intervention. For instance, the Brihadaranyaka Upanishad explains that virtuous deeds lead to favorable rebirths, while harmful actions result in suffering, establishing causality as a moral imperative that governs human conduct.

Buddhist philosophy further developed causal interconnectedness through the doctrine of pratītyasamutpāda, or dependent origination, which asserts that all phenomena arise interdependently without a singular first cause or creator. Introduced by Siddhartha Gautama around the 5th century BCE, this framework outlines a chain of twelve links—from ignorance to aging and death—illustrating how conditions mutually condition each other in a web of causation, emphasizing impermanence (anicca) and the absence of an inherent self (anatta). Unlike linear causality, pratītyasamutpāda rejects absolute determinism by highlighting the potential for ethical intervention through mindfulness and the Eightfold Path to disrupt negative causal chains, thereby influencing views on responsibility and liberation.

In parallel, ancient Greek thought, particularly that of Aristotle (384–322 BCE), formalized causality through the theory of four causes, providing a structured analysis of change and existence. In his Physics, Aristotle delineates the material cause (the substance from which something is made, such as bronze for a statue), the formal cause (the essence or structure defining it, like the statue's shape), the efficient cause (the agent producing it, such as the sculptor's action), and the final cause (the purpose or telos toward which it aims, like commemorating a hero). This teleological approach contrasts with early Greek atomism, as proposed by Democritus (c. 460–370 BCE), which viewed causality as mechanistic collisions of indivisible atoms in a void, devoid of purpose or final ends, thus prioritizing necessity over intentionality. The Nyaya Sutras, an Indian text from roughly the 2nd century BCE, complemented these ideas by systematizing causal inference through anumana (syllogistic reasoning), in which causes are inferred from observed effects, such as inferring fire from smoke, to support epistemological rigor in debating reality and ethics.

These traditions profoundly shaped early conceptions of determinism and ethics by integrating causality into moral frameworks: karma and pratītyasamutpāda underscored ethical accountability within interdependent cycles, fostering non-theistic determinism tempered by personal agency, while Aristotle's final cause infused ethics with purpose-driven virtue, influencing later Western ideas on eudaimonia. In Hindu and Buddhist contexts, causality reinforced dharma (cosmic order) as a guide for ethical living, promoting harmony amid inevitable consequences, whereas Aristotelian teleology emphasized rational pursuit of the good life, bridging natural processes with moral ends. Such influences persisted into medieval adaptations, highlighting causality's role in reconciling fate with human choice.

Western Philosophy: Antiquity to Middle Ages

In Western philosophy from antiquity to the Middle Ages, causal concepts evolved from skeptical challenges in Pyrrhonism to theological syntheses in scholasticism, emphasizing necessity, divine agency, and efficient causation. Pyrrhonism, as articulated by Sextus Empiricus around 200 CE, advanced skepticism toward causal necessity by arguing that apparent causal connections lack definitive proof, employing modes to show equipollent arguments on both sides. In Outlines of Pyrrhonism (PH I 180–186), Sextus critiques dogmatic causal explanations, such as those positing necessary links between events, by highlighting alternative interpretations and perceptual relativity, leading to suspension of judgment (epochē) rather than affirmation of causal determinism. This approach undermined Stoic views of fate as inexorable causation, promoting tranquility (ataraxia) through avoidance of unsubstantiated causal beliefs.

Building on Aristotelian foundations of four causes—material, formal, efficient, and final—medieval philosophers integrated causality into Christian theology, viewing God as the ultimate efficient cause. Thomas Aquinas, in his Summa Theologica (1274), synthesized these ideas in his "Five Ways" to prove God's existence, with the second way arguing from efficient causation: every effect requires a prior cause, forming a chain that cannot regress infinitely, thus necessitating a first uncaused cause, identified as God. Aquinas further distinguished essence (what a thing is) from existence (that it is), positing that in created beings, existence is caused by God as the primary efficient cause, while in God, essence and existence are identical, ensuring divine simplicity. This essence-existence distinction, drawn from Aristotelian metaphysics, underscored causality as a hierarchical process where secondary causes depend on divine concurrence.

Arabic philosophers profoundly influenced this development, particularly through Avicenna (Ibn Sina, d. 1037), whose conception of efficient causation as the "giver of form and being" shaped Latin scholasticism. Avicenna's emanationist model, where the Necessary Existent (God) causes contingent beings through necessary efficient chains, informed Aquinas's rejection of infinite regress in causation while adapting it to Christian creation ex nihilo.

Key debates emerged between occasionalism, prefigured by al-Ghazali (d. 1111) in The Incoherence of the Philosophers, and doctrines of continuous creation. Al-Ghazali denied natural necessity in causation, arguing that events like fire burning cotton occur only by God's direct, habitual intervention, as true causal power resides solely in the divine will to preserve omnipotence against necessitarian philosophies. In contrast, continuous creation, defended by thinkers like Aquinas, held that God sustains the world's existence moment-to-moment through efficient causation, allowing secondary causes (e.g., natural agents) to operate concurrently without implying occasionalist discontinuity. These medieval discussions of divine and secondary causation bridged theological causality to emerging empirical inquiries, influencing the scientific revolution by providing frameworks for understanding regular natural laws as divinely ordained while questioning absolute necessity in favor of probabilistic or concurrent models.

Modern and Contemporary Developments

The Enlightenment marked a pivotal shift toward empiricism in the philosophy of causality, with David Hume challenging traditional notions of necessary connection. In his A Treatise of Human Nature (1739–1740), Hume proposed the bundle theory of the self, arguing that the mind consists of a collection of perceptions without an underlying substance, and extended this skepticism to causality by reducing it to constant conjunction—repeated observations of events occurring together without any inherent necessity or power linking them. Hume contended that causal inferences arise from custom or habit rather than rational insight, as no impression of necessary connection exists in experience. This empiricist critique undermined metaphysical accounts of causality, emphasizing psychological association over objective necessity.

Immanuel Kant responded directly to Hume's skepticism in Critique of Pure Reason (1781), awakening from what he called his "dogmatic slumber" to reframe causality as an a priori category of the understanding. Kant argued that causality is not derived from empirical habit but is a synthetic a priori judgment imposed by the mind to structure sensory experience, enabling objective succession in time—such as the principle that every event has a cause. In the Second Analogy of Experience, he demonstrated that without this category, perceptions would lack necessary connection, reducing the world to subjective appearances; causality thus ensures the possibility of coherent experience, distinguishing phenomena from things-in-themselves. This transcendental idealism reconciled empiricism with rational necessity, limiting causality to the realm of appearances while allowing for freedom in the noumenal domain.

Building on this empirical turn, John Stuart Mill advanced causal discovery in A System of Logic (1843) through inductive methods, particularly the methods of agreement and difference. The method of agreement posits that if multiple instances of an effect share only one antecedent circumstance amid varying conditions, that circumstance is the cause (or effect). For example, if bread, fish, and pork all cause indigestion in cases where other factors differ, the common element (e.g., staleness) is causal. The method of difference, deemed more conclusive, compares cases where the effect occurs with a factor present and absent otherwise, isolating the cause—as in observing a plant's growth with and without sunlight under identical conditions. These canons assume uniformity in nature and plurality of causes, providing tools for scientific inference without relying on metaphysical necessity, though they require verification to rule out hidden factors.

In the 20th century, Bertrand Russell critiqued causality's role in physics, arguing in "On the Notion of Cause" (1912–13) that the concept is obsolete and should be discarded from scientific discourse. Russell contended that modern physics, such as relativity and quantum mechanics, replaces causal laws with functional relations and differential equations describing uniformities, not deterministic sequences—e.g., gravitational formulas predict events without invoking "cause." He viewed traditional causality as a relic of pre-scientific thought, useful for everyday approximations but misleading for precise analysis, as it implies asymmetry absent in symmetric physical equations.
Hans Reichenbach extended probabilistic approaches in The Direction of Time (1956), introducing the common cause principle: if two events are correlated without direct causal connection, they share a common cause rendering them conditionally independent. This "fork" model—conjunctive forks for common causes—addresses time's arrow through screening-off, influencing statistical causality while accommodating quantum indeterminism.

Contemporary developments integrate causality into complex systems, with Judea Pearl's framework revolutionizing inference across disciplines. Pearl's do-calculus, developed in Causality (2000, updated 2009) and recognized with the 2011 Turing Award, enables causal analysis from observational data using graphical models like directed acyclic graphs, distinguishing interventions from correlations in non-experimental settings. This has impacted fields from epidemiology to AI, allowing robust predictions in high-dimensional systems where randomized trials are infeasible.

Recent debates on causal emergence explore how macro-level causes arise from micro-dynamics, particularly in quantum mechanics and AI; for instance, Erik Hoel's 2025 theory quantifies emergence by treating system scales as higher-dimensional slices. In AI, causal models enhance explainability and robustness, as seen in quantum causal inference initiatives that predict networks beyond pattern recognition. Phyllis Illari and Jon Williamson's 2011 dispositional account, in Causality in the Sciences, posits causation as capacities or dispositions realized in mechanisms, bridging epistemic inference with metaphysical production for biomedical and social applications.

Modern revivals in non-Western philosophy have reinvigorated causal concepts, particularly in Buddhist traditions. In India and China, 20th- and 21st-century Buddhist movements revive dependent origination (pratītyasamutpāda), viewing causality as interdependent arising without a first cause, applied to contemporary issues like environmental ethics and cognitive science. David J. Kalupahana's Causality: The Central Philosophy of Buddhism (1975, influential in 21st-century discourse) emphasizes this relational causality as central to Buddhist metaphysics, countering Western linear models and informing global dialogues on determinism. Chinese Neo-Confucian revivals, such as those by Mou Zongsan, integrate causal immanence with modern science, portraying causality as holistic patterns (li) in complex systems.
