Game theory
Game theory is the study of mathematical models of strategic interactions.[1] It has applications in many fields of social science, and is used extensively in economics, logic, systems science and computer science.[2] Initially, game theory addressed two-person zero-sum games, in which a participant's gains or losses are exactly balanced by the losses and gains of the other participant. In the 1950s, it was extended to the study of non-zero-sum games, and was eventually applied to a wide range of behavioral relations. It is now an umbrella term for the science of rational decision making in humans, animals, and computers.
Modern game theory began with the idea of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by Theory of Games and Economic Behavior (1944), co-written with Oskar Morgenstern, which considered cooperative games of several players.[3] The second edition provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.
Game theory was developed extensively in the 1950s, and was explicitly applied to evolution in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. John Maynard Smith was awarded the Crafoord Prize for his application of evolutionary game theory in 1999, and fifteen game theorists have won the Nobel Prize in economics as of 2020, including most recently Paul Milgrom and Robert B. Wilson.
History
Discussions on the mathematics of games began long before the rise of modern, mathematical game theory. Cardano wrote on games of chance in Liber de ludo aleae (Book on Games of Chance), written around 1564 but published posthumously in 1663.[4] Influenced by the work of Fermat and Pascal on the problem of points, Huygens developed the concept of expectation based on reasoning about the structure of games of chance, publishing his gambling calculus in De ratiociniis in ludo aleæ (On Reasoning in Games of Chance) in 1657.[5]
In 1713, a letter attributed to Charles Waldegrave, an active Jacobite and uncle to British diplomat James Waldegrave, analyzed a game called "le her". Waldegrave provided a minimax mixed strategy solution to a two-person version of the card game, and the problem is now known as the Waldegrave problem.[6][7]
In 1838, Antoine Augustin Cournot provided a model of competition in oligopolies. Though he did not refer to it as such, he presented a solution that is the Nash equilibrium of the game in his Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth). In 1883, Joseph Bertrand critiqued Cournot's model as unrealistic, providing an alternative model of price competition[8] which would later be formalized by Francis Ysidro Edgeworth.[9]
In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels (On an Application of Set Theory to the Theory of the Game of Chess), which proved that the optimal chess strategy is strictly determined.[10]
Foundation
The work of John von Neumann established game theory as its own independent field in the early-to-mid 20th century, with von Neumann publishing his paper On the Theory of Games of Strategy in 1928.[11][12] Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. Von Neumann's work in game theory culminated in his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.[13] The second edition of this book provided an axiomatic theory of utility, which reincarnated Daniel Bernoulli's old theory of utility (of money) as an independent discipline. This foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. Subsequent work focused primarily on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies.[14]
In his 1938 book Applications aux Jeux de Hasard and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the pay-off matrix is symmetric and provided a solution to a non-trivial infinite game (known in English as Blotto game). Borel conjectured the non-existence of mixed-strategy equilibria in finite two-person zero-sum games, a conjecture that was proved false by von Neumann.[15]

In 1950, John Nash developed a criterion for mutual consistency of players' strategies known as the Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. Nash proved that every finite n-player, non-zero-sum (not just two-player zero-sum) non-cooperative game has what is now known as a Nash equilibrium in mixed strategies.
Game theory experienced a flurry of activity in the 1950s, during which the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. The 1950s also saw the first applications of game theory to philosophy and political science. The first mathematical discussion of the prisoner's dilemma appeared, and an experiment was undertaken by mathematicians Merrill M. Flood and Melvin Dresher, as part of the RAND Corporation's investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy.[16]
Prize-winning achievements
In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium. Later he would introduce trembling hand perfection as well. In 1994 Nash, Selten and Harsanyi became Economics Nobel Laureates for their contributions to economic game theory.
In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection and common knowledge[a] were introduced and analyzed.
In 1994, John Nash was awarded the Nobel Memorial Prize in the Economic Sciences for his contribution to game theory. Nash's most famous contribution to game theory is the concept of the Nash equilibrium, which is a solution concept for non-cooperative games, published in 1951. A Nash equilibrium is a set of strategies, one for each player, such that no player can improve their payoff by unilaterally changing their strategy.
In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten, and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing equilibrium coarsening and correlated equilibria, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences.
In 2007, Leonid Hurwicz, Eric Maskin, and Roger Myerson were awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory". Myerson's contributions include the notion of proper equilibrium, and an important graduate text: Game Theory, Analysis of Conflict.[1] Hurwicz introduced and formalized the concept of incentive compatibility.
In 2012, Alvin E. Roth and Lloyd S. Shapley were awarded the Nobel Prize in Economics "for the theory of stable allocations and the practice of market design". In 2014, the Nobel went to game theorist Jean Tirole.
Different types of games
Cooperative / non-cooperative
A game is cooperative if the players are able to form binding commitments that are externally enforced (e.g. through contract law). A game is non-cooperative if players cannot form alliances or if all agreements need to be self-enforcing (e.g. through credible threats).[17]
Cooperative games are often analyzed through the framework of cooperative game theory, which focuses on predicting which coalitions will form, the joint actions that groups take, and the resulting collective payoffs. It is different from non-cooperative game theory which focuses on predicting individual players' actions and payoffs by analyzing Nash equilibria.[18][19]
Cooperative game theory provides a high-level approach as it describes only the structure and payoffs of coalitions, whereas non-cooperative game theory also looks at how strategic interaction will affect the distribution of payoffs. As non-cooperative game theory is more general, cooperative games can be analyzed through the approach of non-cooperative game theory (the converse does not hold) provided that sufficient assumptions are made to encompass all the possible strategies available to players due to the possibility of external enforcement of cooperation.
Symmetric / asymmetric
|   | E | F |
|---|---|---|
| E | 1, 2 | 0, 0 |
| F | 0, 0 | 1, 2 |

An asymmetric game
A symmetric game is a game where each player earns the same payoff when making the same choice. In other words, the identity of the player does not change the resulting game facing the other player.[20] Many of the commonly studied 2×2 games are symmetric. The standard representations of chicken, the prisoner's dilemma, and the stag hunt are all symmetric games.
The most commonly studied asymmetric games are games where there are not identical strategy sets for both players. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players, yet be asymmetric. For example, the game pictured in this section's graphic is asymmetric despite having identical strategy sets for both players.
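Whether a game with identical strategy sets is symmetric can be checked mechanically: the column player's payoff matrix must be the transpose of the row player's, so that swapping the players' roles leaves the game unchanged. A minimal Python sketch, using the game from the table above and, for contrast, one common choice of stag hunt payoffs:

```python
# Check whether a two-player game is symmetric.
# Payoffs are stored as (row player, column player) pairs; the game is
# symmetric exactly when the row payoff at (i, j) equals the column
# payoff at (j, i) for every strategy pair.

def is_symmetric(game):
    n = len(game)  # strategy sets are assumed identical for both players
    return all(
        game[i][j][0] == game[j][i][1]
        for i in range(n) for j in range(n)
    )

# The asymmetric game from the table above: identical strategy sets
# {E, F}, but the payoffs break the symmetry.
asymmetric = [[(1, 2), (0, 0)],
              [(0, 0), (1, 2)]]

# A stag hunt (one common choice of payoffs), which is symmetric.
stag_hunt = [[(4, 4), (1, 3)],
             [(3, 1), (3, 3)]]

print(is_symmetric(asymmetric))  # False
print(is_symmetric(stag_hunt))   # True
```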
Zero-sum / non-zero-sum
|   | A | B |
|---|---|---|
| A | −1, 1 | 3, −3 |
| B | 0, 0 | −2, 2 |

A zero-sum game
Zero-sum games (more generally, constant-sum games) are games in which choices by players can neither increase nor decrease the available resources. In zero-sum games, the total benefit to all players, for every combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of others).[21] Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games including Go and chess.
Many games studied by game theorists (including the famed prisoner's dilemma) are non-zero-sum games, because the outcome has net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another.
Furthermore, constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade. It is possible to transform any constant-sum game into a (possibly asymmetric) zero-sum game by adding a dummy player (often called "the board") whose losses compensate the players' net winnings.
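The zero-sum condition is simple to state computationally: every cell of the payoff matrix must sum to zero. A short sketch using the table above and the prisoner's dilemma (with its standard payoffs) as a non-zero-sum contrast:

```python
def is_zero_sum(game):
    # A game is zero-sum if, for every strategy combination,
    # the two players' payoffs add to zero.
    return all(p1 + p2 == 0 for row in game for (p1, p2) in row)

# The zero-sum game from the table above.
zero_sum = [[(-1, 1), (3, -3)],
            [(0, 0), (-2, 2)]]

# The prisoner's dilemma is non-zero-sum: mutual cooperation and
# mutual defection yield different totals.
prisoners_dilemma = [[(-1, -1), (-10, 0)],
                     [(0, -10), (-5, -5)]]

print(is_zero_sum(zero_sum))           # True
print(is_zero_sum(prisoners_dilemma))  # False
```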
Simultaneous / sequential
Simultaneous games are games where both players move simultaneously, or instead the later players are unaware of the earlier players' actions (making them effectively simultaneous). Sequential games (a type of dynamic game) are games where players do not make decisions simultaneously, and players' earlier actions affect the outcome and decisions of other players.[22] This need not be perfect information about every action of earlier players; it might be very little knowledge. For instance, a player may know that an earlier player did not perform one particular action, while they do not know which of the other available actions the first player actually performed.
The difference between simultaneous and sequential games is captured in the different representations discussed above. Often, normal form is used to represent simultaneous games, while extensive form is used to represent sequential ones. The transformation of extensive to normal form is one-way, meaning that multiple extensive form games correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are insufficient for reasoning about sequential games; see subgame perfection.
In short, the differences between sequential and simultaneous games are as follows:
|   | Sequential | Simultaneous |
|---|---|---|
| Normally denoted by | Decision trees | Payoff matrices |
| Prior knowledge of opponent's move? | Yes | No |
| Time axis? | Yes | No |
| Also known as | Extensive-form game, extensive game | Strategy game, strategic game |
Perfect information and imperfect information
An important subset of sequential games consists of games of perfect information. A game with perfect information means that all players, at every move in the game, know the previous history of the game and the moves previously made by all other players. An imperfect information game is played when the players do not know all moves already made by the opponent, as in a simultaneous-move game.[23] Examples of perfect-information games include tic-tac-toe, checkers, chess, and Go.[24][25][26]
Many card games are games of imperfect information, such as poker and bridge.[27] Perfect information is often confused with complete information, which is a similar concept pertaining to the common knowledge of each player's sequence, strategies, and payoffs throughout gameplay.[28] Complete information requires that every player know the strategies and payoffs available to the other players but not necessarily the actions taken, whereas perfect information is knowledge of all aspects of the game and players.[29] Games of incomplete information can be reduced, however, to games of imperfect information by introducing "moves by nature".[30]
Bayesian game
One of the assumptions of the Nash equilibrium is that every player has correct beliefs about the actions of the other players. However, there are many situations in game theory where participants do not fully understand the characteristics of their opponents. Negotiators may be unaware of their opponent's valuation of the object of negotiation, companies may be unaware of their opponent's cost functions, combatants may be unaware of their opponent's strengths, and jurors may be unaware of their colleagues' interpretation of the evidence at trial. In some cases, participants may know the character of their opponent well, but may not know how well their opponent knows his or her own character.[31]
A Bayesian game is a strategic game with incomplete information. In a strategic game, the decision makers are players, and every player has a set of actions. A core part of the imperfect information specification is the set of states. Every state completely describes a collection of characteristics relevant to a player, such as their preferences and other details about them. There must be a state for every set of features that some player believes may exist.[32]

For example, suppose Player 1 is unsure whether Player 2 would rather date her or get away from her, while Player 2 understands Player 1's preferences as before. Specifically, suppose Player 1 believes that Player 2 wants to date her with probability 1/2 and wants to get away from her with probability 1/2 (this assessment probably comes from Player 1's experience: in such a case, half of the players she faces want to date her and half want to avoid her). Because of the probability involved, analyzing this situation requires understanding the players' preferences over lotteries, even if one is interested only in pure-strategy equilibria.
Combinatorial games
Games in which the difficulty of finding an optimal strategy stems from the multiplicity of possible moves are called combinatorial games. Examples include chess and Go. Games that involve imperfect information may also have a strong combinatorial character, for instance backgammon. There is no unified theory addressing combinatorial elements in games. There are, however, mathematical tools that can solve some particular problems and answer some general questions.[33]
Games of perfect information have been studied in combinatorial game theory, which has developed novel representations, e.g. surreal numbers, as well as combinatorial and algebraic (and sometimes non-constructive) proof methods to solve games of certain types, including "loopy" games that may result in infinitely long sequences of moves. These methods address games with higher combinatorial complexity than those usually considered in traditional (or "economic") game theory.[34][35] A typical game that has been solved this way is Hex. A related field of study, drawing from computational complexity theory, is game complexity, which is concerned with estimating the computational difficulty of finding optimal strategies.[36]
Research in artificial intelligence has addressed both perfect and imperfect information games that have very complex combinatorial structures (like chess, go, or backgammon) for which no provable optimal strategies have been found. The practical solutions involve computational heuristics, like alpha–beta pruning or use of artificial neural networks trained by reinforcement learning, which make games more tractable in computing practice.[33][37]
Discrete and continuous games
Much of game theory is concerned with finite, discrete games that have a finite number of players, moves, events, outcomes, etc. Many concepts can be extended, however. Continuous games allow players to choose a strategy from a continuous strategy set. For instance, Cournot competition is typically modeled with players' strategies being any non-negative quantities, including fractional quantities.
Differential games
Differential games such as the continuous pursuit and evasion game are continuous games where the evolution of the players' state variables is governed by differential equations. The problem of finding an optimal strategy in a differential game is closely related to optimal control theory. In particular, there are two types of strategies: the open-loop strategies are found using the Pontryagin maximum principle while the closed-loop strategies are found using Bellman's Dynamic Programming method.
A particular case of differential games are the games with a random time horizon.[38] In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function. It was shown that the modified optimization problem can be reformulated as a discounted differential game over an infinite time interval.
Evolutionary game theory
Evolutionary game theory studies players who adjust their strategies over time according to rules that are not necessarily rational or farsighted.[39] In general, the evolution of strategies over time according to such rules is modeled as a Markov chain with a state variable such as the current strategy profile or how the game has been played in the recent past. Such rules may feature imitation, optimization, or survival of the fittest.
In biology, such models can represent evolution, in which offspring adopt their parents' strategies and parents who play more successful strategies (i.e. corresponding to higher payoffs) have a greater number of offspring. In the social sciences, such models typically represent strategic adjustment by players who play a game many times within their lifetime and, consciously or unconsciously, occasionally adjust their strategies.[40]
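The "survival of the fittest" adjustment rule can be illustrated with the replicator dynamic, in which a strategy's population share grows in proportion to how much its payoff exceeds the population average. A minimal discrete-time sketch, using a Hawk-Dove game with value V = 2 and cost C = 4 as an illustrative choice (so the stable Hawk share is V/C = 1/2):

```python
# Discrete-time replicator dynamic for a symmetric 2x2 game.
# A[i][j] is the payoff to a player using strategy i against strategy j;
# x[i] is the population share playing strategy i.

A = [[-1, 2],   # Hawk vs (Hawk, Dove): (V - C)/2 = -1, V = 2
     [0, 1]]    # Dove vs (Hawk, Dove): 0, V/2 = 1

def replicator_step(x, A, dt=0.1):
    # Each share grows in proportion to its payoff advantage
    # over the population average.
    n = len(x)
    fitness = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * fitness[i] for i in range(n))
    return [x[i] + dt * x[i] * (fitness[i] - avg) for i in range(n)]

x = [0.9, 0.1]              # start with mostly Hawks
for _ in range(500):
    x = replicator_step(x, A)

# The population settles at the evolutionarily stable mix of 1/2 Hawks.
print(round(x[0], 2))  # 0.5
```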
Stochastic outcomes (and relation to other fields)
Individual decision problems with stochastic outcomes are sometimes considered "one-player games". They may be modeled using similar tools within the related disciplines of decision theory, operations research, and areas of artificial intelligence, particularly AI planning (with uncertainty) and multi-agent systems. Although these fields may have different motivators, the mathematics involved are substantially the same, e.g. using Markov decision processes (MDP).[41]
Stochastic outcomes can also be modeled in terms of game theory by adding a randomly acting player who makes "chance moves" ("moves by nature").[42] This player is not typically considered a third player in what is otherwise a two-player game, but merely serves to provide a roll of the dice where required by the game.
For some problems, different approaches to modeling stochastic outcomes may lead to different solutions. For example, the difference in approach between MDPs and the minimax solution is that the latter considers the worst-case over a set of adversarial moves, rather than reasoning in expectation about these moves given a fixed probability distribution. The minimax approach may be advantageous where stochastic models of uncertainty are not available, but may also overestimate extremely unlikely (but costly) events, dramatically swaying the strategy in such scenarios if it is assumed that an adversary can force such an event to happen.[43] (See Black swan theory for more discussion on this kind of modeling issue, particularly as it relates to predicting and limiting losses in investment banking.)
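The contrast between expectation-based and worst-case reasoning can be made concrete with a toy example. All numbers below are illustrative: one action is mildly profitable in every case, the other is better except against one rare adversarial move:

```python
# Two decision rules over the same actions: maximize expected payoff
# under an assumed distribution of the adversary's moves, versus
# maximize the worst-case payoff (the minimax rule).

payoff = {                    # payoff[action] lists payoffs vs each adversary move
    "safe":  [2, 2, 2],
    "risky": [5, 5, -100],    # disastrous only if the adversary forces move 2
}
opponent_probs = [0.495, 0.495, 0.01]   # assumed stochastic model of the adversary

def expected_value(row):
    return sum(p * v for p, v in zip(opponent_probs, row))

best_expected = max(payoff, key=lambda a: expected_value(payoff[a]))
best_worst_case = max(payoff, key=lambda a: min(payoff[a]))

print(best_expected)     # 'risky': expected payoff 3.95 beats the safe 2
print(best_worst_case)   # 'safe': minimax guards against the rare disaster
```

If the adversary can deliberately force the rare move, the minimax choice is the right one; if the moves really are random draws from the assumed distribution, expectation maximization does better on average.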
General models that include all elements of stochastic outcomes, adversaries, and partial or noisy observability (of moves by other players) have also been studied. The "gold standard" is considered to be partially observable stochastic game (POSG), but few realistic problems are computationally feasible in POSG representation.[43]
Metagames
Metagames are games whose play consists of developing the rules for another game, the target or subject game. Metagames seek to maximize the utility value of the rule set developed. The theory of metagames is related to mechanism design theory.
The term metagame analysis is also used to refer to a practical approach developed by Nigel Howard,[44] whereby a situation is framed as a strategic game in which stakeholders try to realize their objectives by means of the options available to them. Subsequent developments have led to the formulation of confrontation analysis.
Mean field game theory
Mean field game theory is the study of strategic decision making in very large populations of small interacting agents. This class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal, in the engineering literature by Peter E. Caines, and by mathematicians Pierre-Louis Lions and Jean-Michel Lasry.
Representation of games
The games studied in game theory are well-defined mathematical objects. To be fully defined, a game must specify the following elements: the players of the game, the information and actions available to each player at each decision point, and the payoffs for each outcome. (Eric Rasmusen refers to these four "essential elements" by the acronym "PAPI".)[45][46][47][48] A game theorist typically uses these elements, along with a solution concept of their choosing, to deduce a set of equilibrium strategies for each player such that, when these strategies are employed, no player can profit by unilaterally deviating from their strategy. These equilibrium strategies determine an equilibrium to the game—a stable state in which either one outcome occurs or a set of outcomes occur with known probability.
Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games.
Extensive form
The extensive form can be used to formalize games with a time sequencing of moves. Extensive form games can be visualized using game trees (as pictured here). Here each vertex (or node) represents a point of choice for a player. The player is specified by a number listed by the vertex. The lines out of the vertex represent a possible action for that player. The payoffs are specified at the bottom of the tree. The extensive form can be viewed as a multi-player generalization of a decision tree.[49] To solve any extensive form game, backward induction must be used. It involves working backward up the game tree to determine what a rational player would do at the last vertex of the tree, what the player with the previous move would do given that the player with the last move is rational, and so on until the first vertex of the tree is reached.[50]
The game pictured consists of two players. The way this particular game is structured (i.e., with sequential decision making and perfect information), Player 1 "moves" first by choosing either F or U (fair or unfair). Next in the sequence, Player 2, who has now observed Player 1's move, can choose to play either A or R (accept or reject). Once Player 2 has made their choice, the game is considered finished and each player gets their respective payoff, represented in the image as two numbers, where the first number represents Player 1's payoff, and the second number represents Player 2's payoff. Suppose that Player 1 chooses U and then Player 2 chooses A: Player 1 then gets a payoff of "eight" (which in real-world terms can be interpreted in many ways, the simplest of which is in terms of money but could mean things such as eight days of vacation or eight countries conquered or even eight more opportunities to play the same game against other players) and Player 2 gets a payoff of "two".
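Backward induction on a game like this can be sketched in a few lines of Python. Only the (8, 2) payoff after U then A is given in the text; the other payoffs below are hypothetical, chosen so that Player 2 accepts either offer:

```python
# Backward induction over a finite game tree, as a minimal sketch.
# A leaf is a payoff tuple (p1, p2); an internal node is a pair
# (player index, {action: subtree}).

def backward_induction(node):
    if not isinstance(node[1], dict):          # leaf: payoffs (p1, p2)
        return node, []
    player, moves = node
    best = None
    for action, subtree in moves.items():
        payoffs, path = backward_induction(subtree)
        # keep the action that maximizes the moving player's payoff
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [action] + path)
    return best

tree = (0, {                                   # Player 1 (index 0) moves first
    "F": (1, {"A": (5, 5), "R": (0, 0)}),      # fair offer; Player 2 accepts or rejects
    "U": (1, {"A": (8, 2), "R": (0, 0)}),      # unfair offer
})

payoffs, play = backward_induction(tree)
print(play, payoffs)   # ['U', 'A'] (8, 2): Player 2 accepts either offer
```

Since Player 2 accepts either offer (2 beats 0 even after U), Player 1 anticipates this and chooses the unfair offer.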
The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e. the players do not know at which point they are), or a closed line is drawn around them. (See example in the imperfect information section.)
Normal form
|   | Player 2 chooses Left | Player 2 chooses Right |
|---|---|---|
| Player 1 chooses Up | 4, 3 | −1, −1 |
| Player 1 chooses Down | 0, 0 | 3, 4 |

Normal form or payoff matrix of a 2-player, 2-strategy game
The normal (or strategic form) game is usually represented by a matrix which shows the players, strategies, and payoffs (see the example to the right). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3.
When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form.
Every extensive-form game has an equivalent normal-form game, however, the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical.[51]
Characteristic function form
In cooperative game theory the characteristic function lists the payoff of each coalition. The origin of this formulation is in John von Neumann and Oskar Morgenstern's book.[52]
Formally, a characteristic function is a function v : 2^N → ℝ [53] from the set of all possible coalitions of players to a set of payments, and also satisfies v(∅) = 0. The function describes how much collective payoff a set of players can gain by forming a coalition.
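A characteristic function for a small game can be written down explicitly. A sketch for three players, with illustrative coalition worths:

```python
# A characteristic function for a three-player cooperative game,
# mapping each coalition (a frozenset of players) to its worth,
# with v(empty set) = 0. The coalition worths are illustrative.

from itertools import combinations

players = {1, 2, 3}
v = {
    frozenset(): 0,
    frozenset({1}): 1, frozenset({2}): 1, frozenset({3}): 1,
    frozenset({1, 2}): 3, frozenset({1, 3}): 3, frozenset({2, 3}): 3,
    frozenset({1, 2, 3}): 6,
}

# Sanity checks: v is defined on every coalition and vanishes on
# the empty coalition.
all_coalitions = [frozenset(c) for k in range(len(players) + 1)
                  for c in combinations(players, k)]
assert all(c in v for c in all_coalitions)
assert v[frozenset()] == 0
```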
Alternative game representations
Alternative game representation forms are used for some subclasses of games or adjusted to the needs of interdisciplinary research.[54] In addition to classical game representations, some of the alternative representations also encode time related aspects.
| Name | Year | Means | Type of games | Time |
|---|---|---|---|---|
| Congestion game[55] | 1973 | functions | subset of n-person games, simultaneous moves | No |
| Sequential form[56] | 1994 | matrices | 2-person games of imperfect information | No |
| Timed games[57][58] | 1994 | functions | 2-person games | Yes |
| Gala[59] | 1997 | logic | n-person games of imperfect information | No |
| Graphical games[60][61] | 2001 | graphs, functions | n-person games, simultaneous moves | No |
| Local effect games[62] | 2003 | functions | subset of n-person games, simultaneous moves | No |
| GDL[63] | 2005 | logic | deterministic n-person games, simultaneous moves | No |
| Game Petri-nets[64] | 2006 | Petri net | deterministic n-person games, simultaneous moves | No |
| Continuous games[65] | 2007 | functions | subset of 2-person games of imperfect information | Yes |
| PNSI[66][67] | 2008 | Petri net | n-person games of imperfect information | Yes |
| Action graph games[68] | 2012 | graphs, functions | n-person games, simultaneous moves | No |
General and applied uses
As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The first use of game-theoretic analysis was by Antoine Augustin Cournot in 1838 with his solution of the Cournot duopoly. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well.[69]
Although pre-twentieth-century naturalists such as Charles Darwin made game-theoretic kinds of statements, the use of game-theoretic analysis in biology began with Ronald Fisher's studies of animal behavior during the 1930s. This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his 1982 book Evolution and the Theory of Games.[70]
In addition to being used to describe, predict, and explain behavior, game theory has also been used to develop theories of ethical or normative behavior and to prescribe such behavior.[71] In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic approaches have also been suggested in the philosophy of language and philosophy of science.[72] Game-theoretic arguments of this type can be found as far back as Plato.[73] An alternative version of game theory, called chemical game theory, represents the player's choices as metaphorical chemical reactant molecules called "knowlecules".[74] Chemical game theory then calculates the outcomes as equilibrium solutions to a system of chemical reactions.
Description and modeling
The primary use of game theory is to describe and model how human populations behave.[citation needed] Some[who?] scholars believe that by finding the equilibria of games they can predict how actual human populations will behave when confronted with situations analogous to the game being studied. This particular view of game theory has been criticized. It is argued that the assumptions made by game theorists are often violated when applied to real-world situations. Game theorists usually assume players act rationally, but in practice, human rationality and/or behavior often deviates from the model of rationality as used in game theory. Game theorists respond by comparing their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game theory as a reasonable scientific ideal akin to the models used by physicists. However, empirical work has shown that in some classic games, such as the centipede game, guess 2/3 of the average game, and the dictator game, people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these experiments and whether the analysis of the experiments fully captures all aspects of the relevant situation.[b]
Some game theorists, following the work of John Maynard Smith and George R. Price, have turned to evolutionary game theory in order to resolve these issues. These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics).
Prescriptive or normative analysis
[edit]| Cooperate | Defect | |
| Cooperate | −1, −1 | −10, 0 |
| Defect | 0, −10 | −5, −5 |
| The prisoner's dilemma | ||
Some scholars see game theory not as a predictive tool for the behavior of human beings, but as a suggestion for how people ought to behave. Since a strategy corresponding to a Nash equilibrium of a game constitutes one's best response to the actions of the other players – provided they are playing (the same) Nash equilibrium – playing such a strategy seems appropriate. This normative use of game theory has also come under criticism.[76]
Economics
[edit]Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents.[c][77][78][79] Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers and acquisitions pricing,[80] fair division, duopolies, oligopolies, social network formation, agent-based computational economics,[81][82] general equilibrium, mechanism design,[83][84][85][86][87] and voting systems;[88] and across such broad areas as experimental economics,[89][90][91][92][93] behavioral economics,[94][95][96][97][98][99] information economics,[45][46][47][48] industrial organization,[100][101][102][103] and political economy.[104][105][106][47]
This research usually focuses on particular sets of strategies known as "solution concepts" or "equilibria". A common assumption is that players act rationally. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. If all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing.[107][108]
The payoffs of the game are generally taken to represent the utility of individual players.
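The best-response condition above can be checked mechanically in small games. As an illustrative sketch (not drawn from the article's sources), the following Python function enumerates the pure-strategy Nash equilibria of a two-player game, using the prisoner's dilemma payoffs shown earlier:

```python
def pure_nash_equilibria(payoffs):
    """payoffs[r][c] = (row player's payoff, column player's payoff).

    A cell is a pure-strategy Nash equilibrium when each player's strategy
    is a best response to the other's: no unilateral deviation pays.
    """
    rows, cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for r in range(rows):
        for c in range(cols):
            row_best = all(payoffs[r][c][0] >= payoffs[r2][c][0] for r2 in range(rows))
            col_best = all(payoffs[r][c][1] >= payoffs[r][c2][1] for c2 in range(cols))
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

# Prisoner's dilemma payoffs from the table above (0 = cooperate, 1 = defect):
pd = [[(-1, -1), (-10, 0)],
      [(0, -10), (-5, -5)]]
print(pure_nash_equilibria(pd))  # [(1, 1)]: mutual defection is the only equilibrium
```

Brute-force enumeration like this works only for small normal-form games; computing equilibria of general games is computationally hard.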
A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of a particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy sets in the presented game are equilibria of the appropriate type. Economists and business professors suggest two primary uses (noted above): descriptive and prescriptive.[71]
Managerial economics
[edit]Game theory is also used extensively in a specific branch of economics, managerial economics. One important use in this field is in analyzing strategic interactions between firms.[109] For example, firms may be competing in a market with limited resources, and game theory can help managers understand how their decisions affect their competitors and overall market outcomes. Game theory can also be used to analyze cooperation between firms, such as in forming strategic alliances or joint ventures. Another use of game theory in managerial economics is in analyzing pricing strategies: for example, a firm may use game theory to determine its optimal pricing strategy based on how it expects competitors to respond to its pricing decisions. Overall, game theory serves as a useful tool for analyzing strategic interactions and decision making in the context of managerial economics.
Business
[edit]The Chartered Institute of Procurement & Supply (CIPS) promotes knowledge and use of game theory within the context of business procurement.[110] CIPS and TWS Partners have conducted a series of surveys designed to explore the understanding, awareness and application of game theory among procurement professionals. Some of the main findings in their third annual survey (2019) include:
- application of game theory to procurement activity has increased – at the time it was at 19% across all survey respondents
- 65% of participants predict that use of game theory applications will grow
- 70% of respondents say that they have "only a basic or a below basic understanding" of game theory
- 20% of participants had undertaken on-the-job training in game theory
- 50% of respondents said that new or improved software solutions were desirable
- 90% of respondents said that they do not have the software they need for their work.[111]
Project management
[edit]Sensible decision-making is critical for the success of projects. In project management, game theory is used to model the decision-making process of players, such as investors, project managers, contractors, sub-contractors, governments and customers. Quite often, these players have competing interests, and sometimes their interests are directly detrimental to other players, making project management scenarios well-suited to be modeled by game theory.
Piraveenan (2019)[112] in his review provides several examples where game theory is used to model project management scenarios. For instance, an investor typically has several investment options, and each option will likely result in a different project, and thus one of the investment options has to be chosen before the project charter can be produced. Similarly, any large project involving subcontractors, for instance, a construction project, has a complex interplay between the main contractor (the project manager) and subcontractors, or among the subcontractors themselves, which typically has several decision points. For example, if there is an ambiguity in the contract between the contractor and subcontractor, each must decide how hard to push their case without jeopardizing the whole project, and thus their own stake in it. Similarly, when projects from competing organizations are launched, the marketing personnel have to decide what is the best timing and strategy to market the project, or its resultant product or service, so that it can gain maximum traction in the face of competition. In each of these scenarios, the required decisions depend on the decisions of other players who, in some way, have competing interests to the interests of the decision-maker, and thus can ideally be modeled using game theory.
Piraveenan[112] summarizes that two-player games are predominantly used to model project management scenarios, and based on the identity of these players, five distinct types of games are used in project management.
- Government-sector–private-sector games (games that model public–private partnerships)
- Contractor–contractor games
- Contractor–subcontractor games
- Subcontractor–subcontractor games
- Games involving other players
In terms of types of games, cooperative as well as non-cooperative, normal-form as well as extensive-form, and zero-sum as well as non-zero-sum games are all used to model various project management scenarios.
Political science
[edit]
The application of game theory to political science is focused in the overlapping areas of fair division, political economy, public choice, war bargaining, positive political theory, and social choice theory. In each of these areas, researchers have developed game-theoretic models in which the players are often voters, states, special interest groups, and politicians.[113]
Early examples of game theory applied to political science are provided by Anthony Downs. In his 1957 book An Economic Theory of Democracy,[114] he applies the Hotelling firm location model to the political process. In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. Downs first shows how the political candidates will converge to the ideology preferred by the median voter if voters are fully informed, but then argues that voters choose to remain rationally ignorant which allows for candidate divergence. Game theory was applied in 1962 to the Cuban Missile Crisis during the presidency of John F. Kennedy.[115]
It has also been proposed that game theory explains the stability of any form of political government. Taking the simplest case of a monarchy, for example, the king, being only one person, does not and cannot maintain his authority by personally exercising physical control over all or even any significant number of his subjects. Sovereign control is instead explained by the recognition by each citizen that all other citizens expect each other to view the king (or other established government) as the person whose orders will be followed. Coordinating communication among citizens to replace the sovereign is effectively barred, since conspiracy to replace the sovereign is generally punishable as a crime.[116] Thus, in a process that can be modeled by variants of the prisoner's dilemma, during periods of stability no citizen will find it rational to move to replace the sovereign, even if all the citizens know they would be better off if they were all to act collectively.[citation needed]
A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of nondemocratic leaders, what effect concessions will have, and if promises will be kept. Thus there will be mistrust and unwillingness to make concessions if at least one of the parties in a dispute is a non-democracy.[117]
However, game theory predicts that two countries may still go to war even if their leaders are cognizant of the costs of fighting. War may result from asymmetric information: two countries may have incentives to misrepresent the amount of military resources they have on hand, rendering them unable to settle disputes agreeably without resorting to fighting. Moreover, war may arise because of commitment problems: if two countries wish to settle a dispute via peaceful means, but each wishes to go back on the terms of that settlement, they may have no choice but to resort to warfare. Finally, war may result from issue indivisibilities.[118]
Game theory could also help predict a nation's responses when there is a new rule or law to be applied to that nation. One example is Peter John Wood's (2013) research looking into what nations could do to help reduce climate change. Wood thought this could be accomplished by making treaties with other nations to reduce greenhouse gas emissions. However, he concluded that this idea could not work because it would create a prisoner's dilemma for the nations.[119]
Defence science and technology
[edit]Game theory has been used extensively to model decision-making scenarios relevant to defence applications.[120] Most studies that have applied game theory in defence settings are concerned with Command and Control Warfare, and can be further classified into studies dealing with (i) Resource Allocation Warfare, (ii) Information Warfare, (iii) Weapons Control Warfare, and (iv) Adversary Monitoring Warfare.[120] Many of the problems studied are concerned with sensing and tracking: for example, a surface ship trying to track a hostile submarine while the submarine tries to evade being tracked, and the interdependent decision making that takes place with regard to bearing, speed, and the sensor technology activated by both vessels.
One such tool,[121] for example, automates the transformation of public vulnerability data into models, allowing defenders to synthesize optimal defence strategies through Stackelberg equilibrium analysis. This approach enhances cyber resilience by enabling defenders to anticipate and counteract attackers' best responses, making game theory increasingly relevant in adversarial cybersecurity environments.
Ho et al. provide a broad summary of game theory applications in defence, highlighting its advantages and limitations across both physical and cyber domains.
Biology
[edit]| Hawk | Dove | |
| Hawk | 20, 20 | 80, 40 |
| Dove | 40, 80 | 60, 60 |
| The hawk-dove game | ||
Unlike those in economics, the payoffs for games in biology are often interpreted as corresponding to fitness. In addition, the focus has been less on equilibria that correspond to a notion of rationality and more on ones that would be maintained by evolutionary forces. The best-known equilibrium in biology is known as the evolutionarily stable strategy (ESS), first introduced in (Maynard Smith & Price 1973). Although its initial motivation did not involve any of the mental requirements of the Nash equilibrium, every ESS is a Nash equilibrium.
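Using the hawk-dove payoffs in the table above, the evolutionarily stable mix can be computed directly: a polymorphic population is stable when hawks and doves earn the same expected payoff. A minimal sketch (illustrative only, not from the article's sources):

```python
from fractions import Fraction

# Payoffs to the ROW strategy against the COLUMN strategy,
# from the hawk-dove table above: E[(row, col)].
E = {('H', 'H'): 20, ('H', 'D'): 80,
     ('D', 'H'): 40, ('D', 'D'): 60}

# At a mixed ESS with hawk frequency p, hawks and doves earn equal payoff:
#   p*E[H,H] + (1-p)*E[H,D] = p*E[D,H] + (1-p)*E[D,D]
# Solving for p gives the stable hawk frequency:
num = E[('D', 'D')] - E[('H', 'D')]
den = (E[('H', 'H')] - E[('D', 'H')]) + (E[('D', 'D')] - E[('H', 'D')])
p = Fraction(num, den)
print(p)  # 1/2: the population is stable at half hawks, half doves
```

With these payoffs, a pure-hawk population is invadable by doves and a pure-dove population by hawks, so the interior mix is the unique ESS.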
In biology, game theory has been used as a model to understand many different phenomena. It was first used to explain the evolution (and stability) of the approximate 1:1 sex ratios. (Fisher 1930) suggested that the 1:1 sex ratios are a result of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren.
Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence of animal communication.[122] The analysis of signaling games and other communication games has provided insight into the evolution of communication among animals. For example, the mobbing behavior of many species, in which a large number of prey animals attack a larger predator, seems to be an example of spontaneous emergent organization. Ants have also been shown to exhibit feed-forward behavior akin to fashion (see Paul Ormerod's Butterfly Economics).
Biologists have used the game of chicken to analyze fighting behavior and territoriality.[123]
According to Maynard Smith, in the preface to Evolution and the Theory of Games, "paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature.[124]
One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to vervet monkeys that warn group members of a predator's approach, even when it endangers that individual's chance of survival.[125] All of these actions increase the overall fitness of a group, but occur at a cost to the individual.
Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the individuals they help and favor relatives. Hamilton's rule explains the evolutionary rationale behind this selection with the inequality c < b × r, where the cost c to the altruist must be less than the benefit b to the recipient multiplied by the coefficient of relatedness r. The more closely related two organisms are, the higher the incidence of altruism between them, because they share many of the same alleles. This means that an altruistic individual, by ensuring that the alleles of its close relative are passed on through the survival of that relative's offspring, can forgo having offspring itself, because the same number of alleles are passed on. For example, helping a sibling (in diploid animals) carries a coefficient of 1⁄2, because (on average) an individual shares half of the alleles in its sibling's offspring. Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual producing offspring.[125] The coefficient values depend heavily on the scope of the playing field: for example, if the choice of whom to favor includes all genetic living things, not just all relatives, and the discrepancy between all humans accounts for only approximately 1% of the diversity in that field, a coefficient that was 1⁄2 in the smaller field becomes 0.995. Similarly, if information other than that of a genetic nature (e.g. epigenetics, religion, science, etc.) is considered to persist through time, the playing field becomes larger still, and the discrepancies smaller.
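Hamilton's rule can be checked with a few lines of code; the cost and benefit values below are made up for illustration:

```python
from fractions import Fraction

def altruism_favored(cost, benefit, relatedness):
    """Hamilton's rule: altruism can be favored by selection when c < b * r."""
    return cost < benefit * relatedness

# Helping a full sibling (r = 1/2) at a cost of 1 offspring is favored
# only when the sibling gains more than 2 offspring:
r_sibling = Fraction(1, 2)
print(altruism_favored(1, 3, r_sibling))  # True: 1 < 3 * 1/2
print(altruism_favored(2, 3, r_sibling))  # False: 2 >= 3 * 1/2
```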
Computer science and logic
[edit]Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. In addition, computer scientists have used games to model interactive computations. Also, game theory provides a theoretical basis to the field of multi-agent systems.[126]
Separately, game theory has played a role in online algorithms; in particular, the k-server problem, which has in the past been referred to as games with moving costs and request-answer games.[127] Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity of randomized algorithms, especially online algorithms.
The emergence of the Internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets. Algorithmic game theory[87] and within it algorithmic mechanism design[86] combine computational algorithm design and analysis of complex systems with economic theory.[128][129][130]
Game theory has multiple applications in the fields of artificial intelligence and machine learning. It is often used in developing autonomous systems that can make complex decisions in uncertain environments.[131] Other areas of application of game theory in the AI/ML context include multi-agent system formation, reinforcement learning,[132] and mechanism design.[133] By using game theory to model the behavior of other agents and anticipate their actions, AI/ML systems can make better decisions and operate more effectively.[134]
Philosophy
[edit]| Stag | Hare | |
| Stag | 3, 3 | 0, 2 |
| Hare | 2, 0 | 2, 2 |
| Stag hunt | ||
Game theory has been put to several uses in philosophy. Responding to two papers by W.V.O. Quine (1960, 1967), Lewis (1969) used game theory to develop a philosophical account of convention. In so doing, he provided the first analysis of common knowledge and employed it in analyzing play in coordination games. In addition, he first suggested that one can understand meaning in terms of signaling games. This later suggestion has been pursued by several philosophers since Lewis.[135][136] Following Lewis's (1969) game-theoretic account of convention, Edna Ullmann-Margalit (1977) and Bicchieri (2006) have developed theories of social norms that define them as Nash equilibria that result from transforming a mixed-motive game into a coordination game.[137][138]
Game theory has also challenged philosophers to think in terms of interactive epistemology: what it means for a collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social outcomes resulting from the interactions of agents. Philosophers who have worked in this area include Bicchieri (1989, 1993),[139][140] Skyrms (1990),[141] and Stalnaker (1999).[142]
The synthesis of game theory with ethics was championed by R. B. Braithwaite.[143] The hope was that rigorous mathematical analysis of game theory might help formalize the more imprecise philosophical discussions. However, this expectation materialized only to a limited extent.[144]
In ethics, some authors (most notably David Gauthier, Gregory Kavka, and Jean Hampton) have attempted to pursue Thomas Hobbes' project of deriving morality from self-interest. Since games like the prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy (for examples, see Gauthier (1986) and Kavka (1986)).[d]
Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games including the prisoner's dilemma, stag hunt, and the Nash bargaining game as providing an explanation for the emergence of attitudes about morality (see, e.g., Skyrms (1996, 2004) and Sober and Wilson (1998)).
Epidemiology
[edit]Since the decision to take a vaccine for a particular disease is often made by individuals, who may consider a range of factors and parameters in making this decision (such as the incidence and prevalence of the disease, perceived and real risks associated with contracting the disease, mortality rate, perceived and real risks associated with vaccination, and financial cost of vaccination), game theory has been used to model and predict vaccination uptake in a society.[145][146]
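A minimal toy model (an assumption for illustration, not taken from the cited studies) captures the game-theoretic core: each individual vaccinates only while the perceived risk of vaccination is below the risk of infection, and infection risk falls as coverage rises, so voluntary vaccination settles at the coverage where the two risks balance:

```python
def equilibrium_coverage(risk_vacc, risk_inf, herd_threshold):
    """Equilibrium vaccination coverage in a toy vaccination game.

    Infection risk is modeled as risk_inf * (1 - coverage / herd_threshold),
    reaching zero at the herd-immunity threshold (a modeling assumption).
    """
    if risk_vacc >= risk_inf:
        return 0.0  # vaccinating never pays, so nobody vaccinates
    # Solve risk_vacc = risk_inf * (1 - x / herd_threshold) for coverage x:
    return herd_threshold * (1 - risk_vacc / risk_inf)

# Even a small perceived vaccine risk keeps voluntary coverage
# strictly below the herd-immunity threshold of 0.9:
print(equilibrium_coverage(risk_vacc=0.01, risk_inf=0.10, herd_threshold=0.9))
```

In this sketch, equilibrium coverage always stops short of the herd-immunity threshold whenever vaccination carries any perceived risk, a free-rider pattern the cited models examine in richer settings.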
Well known examples of games
[edit]Prisoner's dilemma
[edit]B A
|
B stays silent |
B betrays |
|---|---|---|
| A stays silent |
−2 −2
|
0 −10
|
| A betrays |
−10 0
|
−5 −5
|
William Poundstone described the game in his 1993 book Prisoner's Dilemma:[147]
Two members of a criminal gang, A and B, are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communication with their partner. The principal charge would lead to a sentence of ten years in prison; however, the police do not have the evidence for a conviction. They plan to sentence both to two years in prison on a lesser charge but offer each prisoner a Faustian bargain: If one of them confesses to the crime of the principal charge, betraying the other, they will be pardoned and free to leave while the other must serve the entirety of the sentence instead of just two years for the lesser charge.
The dominant strategy (and therefore the best response to any possible opponent strategy) is to betray the other, which aligns with the sure-thing principle.[148] However, both prisoners staying silent would yield a greater reward for both of them than mutual betrayal.
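The dominance claim can be verified directly from the payoff table above (years in prison written as negative payoffs; an illustrative check, not from the article's sources):

```python
# Player A's payoffs, indexed by (A's choice, B's choice):
payoff_A = {('silent', 'silent'): -2, ('silent', 'betray'): -10,
            ('betray', 'silent'):  0, ('betray', 'betray'):  -5}

# "Betray" strictly dominates "stay silent" if it is strictly better
# for A regardless of what B does:
dominates = all(payoff_A[('betray', b)] > payoff_A[('silent', b)]
                for b in ('silent', 'betray'))
print(dominates)  # True: betraying is better whatever B does
```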
Battle of the sexes
[edit]The "battle of the sexes" is a term used to describe the perceived conflict between men and women in various areas of life, such as relationships, careers, and social roles. This conflict is often portrayed in popular culture, such as movies and television shows, as a humorous or dramatic competition between the genders, and it can be depicted in a game-theoretic framework as an example of a non-cooperative coordination game.
An example of the "battle of the sexes" can be seen in the portrayal of relationships in popular media, where men and women are often depicted as being fundamentally different and in conflict with each other. For instance, in some romantic comedies, the male and female protagonists are shown as having opposing views on love and relationships, and they have to overcome these differences in order to be together.[149]
In this game, there are two pure-strategy Nash equilibria, one for each of the outcomes on which the players can coordinate, as well as a third equilibrium in mixed strategies, in which each player randomizes between their options. In the context of the "battle of the sexes" game, however, the assumption is usually made that the game is played in pure strategies.[150]
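With a standard battle-of-the-sexes payoff table (assumed here for illustration, since the article gives none), the mixed equilibrium can be computed by making each player indifferent between their options:

```python
from fractions import Fraction

# Assumed payoffs: both players prefer attending the same event,
# but player 1 prefers the Opera (O) and player 2 the Football (F).
payoffs = {('O', 'O'): (3, 2), ('O', 'F'): (0, 0),
           ('F', 'O'): (0, 0), ('F', 'F'): (2, 3)}

# In the mixed equilibrium, player 1 chooses Opera with probability p
# that makes player 2 indifferent between O and F:
#   p * 2 = (1 - p) * 3   =>   p = 3/5
p = Fraction(3, 5)
payoff2_O = p * payoffs[('O', 'O')][1]        # player 2's payoff from Opera
payoff2_F = (1 - p) * payoffs[('F', 'F')][1]  # player 2's payoff from Football
print(payoff2_O == payoff2_F)  # True: player 2 is indifferent, as required
```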
Ultimatum game
[edit]The ultimatum game is a game that has become a popular instrument of economic experiments. An early description is by Nobel laureate John Harsanyi in 1961.[151]
One player, the proposer, is endowed with a sum of money. The proposer is tasked with splitting it with another player, the responder (who knows what the total sum is). Once the proposer communicates his decision, the responder may accept it or reject it. If the responder accepts, the money is split per the proposal; if the responder rejects, both players receive nothing. Both players know in advance the consequences of the responder accepting or rejecting the offer. The game demonstrates how social acceptance, fairness, and generosity influence the players' decisions.[152]
The ultimatum game has a variant, the dictator game. The two are mostly identical, except that in the dictator game the responder has no power to reject the proposer's offer.
Trust game
[edit]The trust game is an experiment designed to measure trust in economic decisions. It is also called "the investment game" and is designed to investigate trust and demonstrate its importance, rather than the "rationality" of self-interest. The game was designed by Joyce Berg, John Dickhaut, and Kevin McCabe in 1995.[153]
In the game, one player (the investor) is given a sum of money and must decide how much of it to give to another player (the trustee). The amount given is then tripled by the experimenter. The trustee then decides how much of the tripled amount to return to the investor. If the trustee were completely self-interested, they would return nothing. However, that is not what the experiments find: the outcomes suggest that people are willing to place trust, by risking some amount of money, in the belief that it will be reciprocated.[154]
Cournot Competition
[edit]The Cournot competition model involves players choosing quantities of a homogenous product to produce independently and simultaneously, where marginal cost can differ between firms and a firm's payoff is its profit. The production costs are public information, and each firm aims to find its profit-maximizing quantity based on what it believes the other firm will produce. In this game, firms would prefer to jointly produce the monopoly quantity, but each has a strong incentive to deviate and produce more, which decreases the market-clearing price.[23] For example, firms may be tempted to deviate from the monopoly quantity if the monopoly quantity is low and the price high, with the aim of increasing production to maximize profit.[23] However, this option does not provide the highest payoff, as a firm's ability to maximize profits depends on its market share and the elasticity of market demand.[155] The Cournot equilibrium is reached when each firm operates on its reaction function with no incentive to deviate, as each has the best response given the other firm's output.[23] Within the game, firms reach the Nash equilibrium when the Cournot equilibrium is achieved.
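The reaction-function logic can be sketched with a linear textbook specification (an assumption for illustration: inverse demand P = a - b*(q1 + q2) and constant marginal costs). Iterating the best responses converges to the Cournot-Nash equilibrium:

```python
def cournot_equilibrium(a, b, c1, c2, iterations=100):
    """Best-response iteration for a Cournot duopoly with inverse demand
    P = a - b*(q1 + q2) and constant marginal costs c1, c2."""
    q1 = q2 = 0.0
    for _ in range(iterations):
        q1 = (a - c1 - b * q2) / (2 * b)  # firm 1's reaction function
        q2 = (a - c2 - b * q1) / (2 * b)  # firm 2's reaction function
    return q1, q2

# Symmetric example: the closed form q_i = (a - 2*c_i + c_j) / (3*b) gives 30.
q1, q2 = cournot_equilibrium(a=120, b=1, c1=30, c2=30)
print(round(q1, 4), round(q2, 4))  # 30.0 30.0
```

With two firms, this best-response dynamic is a contraction, so the iteration converges to the unique equilibrium regardless of the starting quantities.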
Bertrand Competition
[edit]The Bertrand competition model assumes homogenous products and a constant marginal cost, with players choosing prices.[23] The equilibrium of price competition is where price equals marginal cost, assuming complete information about competitors' costs. At any price above marginal cost, firms have an incentive to undercut one another, because the firm offering a homogenous product at the lower price gains the entire market share; a firm whose marginal cost is lower than its rivals' (a cost advantage) can profitably capture the whole market this way.[156]
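The undercutting logic can be sketched in a few lines (illustrative only, with an assumed discrete price tick): each firm undercuts the other while doing so still yields a positive margin, driving the price down to marginal cost:

```python
def bertrand_price(c, start_price, tick=1):
    """Repeated undercutting in Bertrand competition with marginal cost c.

    A firm undercuts by one tick as long as the resulting price still
    covers marginal cost; the process stops at the competitive price.
    """
    p = start_price
    while p - tick >= c:  # undercutting still yields a non-negative margin
        p -= tick
    return p

print(bertrand_price(c=10, start_price=25))  # 10: price driven to marginal cost
```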
In popular culture
[edit]- Based on the 1998 book by Sylvia Nasar,[157] the life story of game theorist and mathematician John Nash was turned into the 2001 biopic A Beautiful Mind, starring Russell Crowe as Nash.[158]
- The 1959 military science fiction novel Starship Troopers by Robert A. Heinlein mentioned "games theory" and "theory of games".[159] In the 1997 film of the same name, the character Carl Jenkins referred to his military intelligence assignment as being assigned to "games and theory".
- The 1964 film Dr. Strangelove satirizes game theoretic ideas about deterrence theory. For example, nuclear deterrence depends on the threat to retaliate catastrophically if a nuclear attack is detected. A game theorist might argue that such threats can fail to be credible, in the sense that they can lead to subgame imperfect equilibria. The movie takes this idea one step further, with the Soviet Union irrevocably committing to a catastrophic nuclear response without making the threat public.[160]
- The 1980s power pop band Game Theory was founded by singer/songwriter Scott Miller, who described the band's name as alluding to "the study of calculating the most appropriate action given an adversary ... to give yourself the minimum amount of failure".[161]
- Liar Game, a 2005 Japanese manga and 2007 television series, presents the main characters in each episode with a game or problem that is typically drawn from game theory, as demonstrated by the strategies applied by the characters.[162]
- The 1974 novel Spy Story by Len Deighton explores elements of game theory in regard to cold war army exercises.
- The 2008 novel The Dark Forest by Liu Cixin explores the relationship between extraterrestrial life, humanity, and game theory.
- Joker, the prime antagonist in the 2008 film The Dark Knight, presents game theory concepts, notably the prisoner's dilemma, in a scene where he asks passengers on two different ferries to bomb the other one to save their own.
- In the 2018 film Crazy Rich Asians, the female lead Rachel Chu is a professor of economics and game theory at New York University. At the beginning of the film she is seen in her NYU classroom playing a game of poker with her teaching assistant and wins the game by bluffing;[163] then in the climax of the film, she plays a game of mahjong with her boyfriend's disapproving mother Eleanor, losing the game to Eleanor on purpose but winning her approval as a result.[164]
- In the 2017 film Molly's Game, Brad, an inexperienced poker player, makes an irrational betting decision without realizing and causes his opponent Harlan to deviate from his Nash Equilibrium strategy, resulting in a significant loss when Harlan loses the hand.[165]
See also
[edit]- Applied ethics – Practical application of moral considerations
- Bandwidth-sharing game – Type of resource allocation game
- Chainstore paradox – Game theory paradox
- Collective intentionality – Intentionality that occurs when two or more individuals undertake a task together
- Core (game theory) – Set in game theory
- Glossary of game theory
- Intra-household bargaining
- Kingmaker scenario – Endgame situation in game theory
- Law and economics – Application of economic theory to analysis of legal systems
- Mutual assured destruction – Doctrine of military strategy
- Outline of artificial intelligence – Overview of and topical guide to artificial intelligence
- Parrondo's paradox – Paradox in game theory
- Precautionary principle – Risk management strategy
- Quantum refereed game
- Risk management – Identification, evaluation and control of risks
- Self-confirming equilibrium
- Tragedy of the commons – Self-interests causing depletion of a shared resource
- Traveler's dilemma – Non-zero-sum game thought experiment
- Wilson doctrine (economics) – Argument in economic theory
- Compositional game theory
Notes
[edit]- ^ Although common knowledge was first discussed by the philosopher David Lewis in his dissertation (and later book) Convention in the late 1960s, it was not widely considered by economists until Robert Aumann's work in the 1970s.
- ^ Experimental work in game theory goes by many names, experimental economics, behavioral economics, and behavioural game theory are several.[75]
- ^ At JEL:C7 of the Journal of Economic Literature classification codes.
- ^ For a more detailed discussion of the use of game theory in ethics, see the Stanford Encyclopedia of Philosophy's entry game theory and ethics.
References
- ^ a b Myerson, Roger B. (1991). Game Theory: Analysis of Conflict. Harvard University Press. ISBN 9780674341166.
- ^ Shapley, Lloyd S.; Shubik, Martin (1 January 1971). "Chapter 1, Introduction, The Use of Models". Game Theory in Economics. Archived from the original on 23 April 2023. Retrieved 23 April 2023.
- ^ Neumann, John von; Morgenstern, Oskar (8 April 2007). Theory of Games and Economic Behavior. Princeton University Press. ISBN 978-0-691-13061-3. Archived from the original on 28 March 2023. Retrieved 23 April 2023.
- ^ Stigler, Stephen M. (2007). "Chance Is 350 Years Old". CHANCE. 20 (4): 26–30. doi:10.1080/09332480.2007.10722870.
- ^ Schneider, Ivo (2001), Heyde, C. C.; Seneta, E.; Crépel, P.; Fienberg, S. E. (eds.), "Christiaan Huygens", Statisticians of the Centuries, New York, NY: Springer, pp. 23–28, doi:10.1007/978-1-4613-0179-0_5, ISBN 978-1-4613-0179-0, retrieved 17 October 2025
- ^ Bellhouse, David R. (2007), "The Problem of Waldegrave" (PDF), Journal Électronique d'Histoire des Probabilités et de la Statistique [Electronic Journal of Probability History and Statistics], 3 (2), archived (PDF) from the original on 20 August 2008
- ^ Bellhouse, David R. (2015). "Le Her and Other Problems in Probability Discussed by Bernoulli, Montmort and Waldegrave". Statistical Science. 30 (1). Institute of Mathematical Statistics: 26–39. arXiv:1504.01950. Bibcode:2015arXiv150401950B. doi:10.1214/14-STS469. S2CID 59066805.
- ^ Qin, Cheng-Zhong; Stuart, Charles (1997). "Bertrand versus Cournot Revisited". Economic Theory. 10 (3): 497–507. doi:10.1007/s001990050169. ISSN 0938-2259. JSTOR 25055054. S2CID 153431949.
- ^ Edgeworth, Francis (1889) "The pure theory of monopoly", reprinted in Collected Papers relating to Political Economy 1925, vol.1, Macmillan.
- ^ Zermelo, Ernst (1913). Hobson, E. W.; Love, A. E. H. (eds.). Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels [On an Application of Set Theory to the Theory of the Game of Chess] (PDF). Proceedings of the Fifth International Congress of Mathematicians (1912) (in German). Cambridge: Cambridge University Press. pp. 501–504. Archived from the original (PDF) on 31 July 2020. Retrieved 29 August 2019.
- ^ von Neumann, John (1928). "Zur Theorie der Gesellschaftsspiele" [On the Theory of Games of Strategy]. Mathematische Annalen [Mathematical Annals] (in German). 100 (1): 295–320. doi:10.1007/BF01448847. S2CID 122961988.
- ^ von Neumann, John (1959). "On the Theory of Games of Strategy". In Tucker, A. W.; Luce, R. D. (eds.). Contributions to the Theory of Games. Vol. 4. Translated by Bargmann, Sonya. Princeton, New Jersey: Princeton University Press. pp. 13–42. ISBN 0-691-07937-4.
- ^ Mirowski, Philip (1992). "What Were von Neumann and Morgenstern Trying to Accomplish?". In Weintraub, E. Roy (ed.). Toward a History of Game Theory. Durham: Duke University Press. pp. 113–147. ISBN 978-0-8223-1253-6.
- ^ Leonard, Robert (2010), Von Neumann, Morgenstern, and the Creation of Game Theory, New York: Cambridge University Press, doi:10.1017/CBO9780511778278, ISBN 978-0-521-56266-9
- ^ Kim, Sungwook, ed. (2014). Game theory applications in network design. IGI Global. p. 3. ISBN 978-1-4666-6051-9.
- ^ Kuhn, Steven (4 September 1997). Zalta, Edward N. (ed.). "Prisoner's Dilemma". Stanford Encyclopedia of Philosophy. Stanford University. Archived from the original on 18 January 2012. Retrieved 3 January 2013.
- ^ Shor, Mike. "Non-Cooperative Game". GameTheory.net. Archived from the original on 1 April 2014. Retrieved 15 September 2016.
- ^ Chandrasekaran, Ramaswamy. "Cooperative Game Theory" (PDF). University of Texas at Dallas. Archived (PDF) from the original on 18 April 2016.
- ^ Brandenburger, Adam. "Cooperative Game Theory: Characteristic Functions, Allocations, Marginal Contribution" (PDF). Archived from the original (PDF) on 29 August 2017. Retrieved 14 April 2020.
- ^ Shor, Mike (2006). "Symmetric Game". Game Theory.net.
- ^ Owen, Guillermo (1995). Game Theory: Third Edition. Bingley: Emerald Group Publishing. p. 11. ISBN 978-0-12-531151-9.
- ^ Chang, Kuang-Hua (2015). "Decisions in Engineering Design". Design Theory and Methods Using CAD/CAE. pp. 39–101. doi:10.1016/b978-0-12-398512-5.00002-5. ISBN 978-0-12-398512-5.
- ^ a b c d e Gibbons, Robert (1992). Game Theory for Applied Economists. Princeton, New Jersey: Princeton University Press. pp. 14–17. ISBN 0-691-04308-6.
- ^ Ferguson, Thomas S. "Game Theory" (PDF). UCLA Department of Mathematics. pp. 56–57. Archived (PDF) from the original on 30 July 2004.
- ^ Mycielski, Jan (1992). "Games with Perfect Information". Handbook of Game Theory with Economic Applications. Vol. 1. pp. 41–70. doi:10.1016/S1574-0005(05)80006-2. ISBN 978-0-4448-8098-7.
- ^ "Infinite Chess". PBS Infinite Series. 2 March 2017. Archived from the original on 28 October 2021. Perfect information defined at 0:25, with academic sources arXiv:1302.4377 and arXiv:1510.08155.
- ^ Owen, Guillermo (1995). Game Theory: Third Edition. Bingley: Emerald Group Publishing. p. 4. ISBN 978-0-12-531151-9.
- ^ Mirman, Leonard J. (1989). "Perfect Information". Game Theory. pp. 194–198. doi:10.1007/978-1-349-20181-5_22. ISBN 978-0-333-49537-7.
- ^ Mirman, Leonard (1989). Perfect Information. London: Palgrave Macmillan. pp. 194–195. ISBN 978-1-349-20181-5.
- ^ Shoham & Leyton-Brown (2008), p. 60.
- ^ Osborne, Martin J. (2000). An Introduction to Game Theory. Oxford University Press. pp. 271–272.
- ^ Osborne, Martin J (2020). An Introduction to Game Theory. Oxford University Press. pp. 271–277.
- ^ a b Jörg Bewersdorff (2005). "31". Luck, logic, and white lies: the mathematics of games. A K Peters, Ltd. pp. ix–xii. ISBN 978-1-56881-210-6.
- ^ Albert, Michael H.; Nowakowski, Richard J.; Wolfe, David (2007), Lessons in Play: An Introduction to Combinatorial Game Theory, A K Peters Ltd, pp. 3–4, ISBN 978-1-56881-277-9
- ^ Beck, József (2008). Combinatorial Games: Tic-Tac-Toe Theory. Cambridge University Press. pp. 1–3. ISBN 978-0-521-46100-9.
- ^ Hearn, Robert A.; Demaine, Erik D. (2009), Games, Puzzles, and Computation, A K Peters, Ltd., ISBN 978-1-56881-322-6
- ^ Jones, M. Tim (2008). Artificial Intelligence: A Systems Approach. Jones & Bartlett Learning. pp. 106–118. ISBN 978-0-7637-7337-3.
- ^ Petrosjan, L. A.; Murzov, N. V. (1966). "Game-theoretic problems of mechanics". Litovsk. Mat. Sb. (in Russian). 6: 423–433.
- ^ Newton, Jonathan (2018). "Evolutionary Game Theory: A Renaissance". Games. 9 (2): 31. doi:10.3390/g9020031. hdl:10419/179191.
- ^ Webb (2007).
- ^ Lozovanu, D; Pickl, S (2015). A Game-Theoretical Approach to Markov Decision Processes, Stochastic Positional Games and Multicriteria Control Models. Springer, Cham. ISBN 978-3-319-11832-1.
- ^ Osborne & Rubinstein (1994).
- ^ a b McMahan, Hugh Brendan (2006). Robust Planning in Domains with Stochastic Outcomes, Adversaries, and Partial Observability (PDF) (PhD dissertation). Carnegie Mellon University. pp. 3–4. Archived (PDF) from the original on 1 April 2011.
- ^ Howard (1971).
- ^ a b Rasmusen, Eric (2007). Games and Information (4th ed.). Wiley. ISBN 978-1-4051-3666-2.
- ^ a b Kreps, David M. (1990). Game Theory and Economic Modelling. Oxford University Press. doi:10.1093/0198283814.001.0001. ISBN 978-0-19-828381-2.[page needed]
- ^ a b c Aumann, R. J.; Hart, S., eds. (1992). Handbook of Game Theory with Economic Applications. Elsevier. ISBN 978-0-444-89427-4.[page needed]
- ^ a b Aumann, Robert J.; Heifetz, Aviad (2002). "Chapter 43 Incomplete information". Handbook of Game Theory with Economic Applications Volume 3. Vol. 3. pp. 1665–1686. doi:10.1016/S1574-0005(02)03006-0. ISBN 978-0-444-89428-1.
- ^ Fudenberg, Drew; Tirole, Jean (1991). Game Theory. MIT Press. p. 67. ISBN 978-0-262-06141-4.
- ^ Williams, Paul D. (2013). Security Studies: an Introduction (second ed.). Abingdon: Routledge. pp. 55–56.
- ^ Shoham & Leyton-Brown (2008), p. 35.
- ^ "Game theory - Von Neumann, Morgenstern, Theory | Britannica". Britannica. 12 February 2025. Retrieved 19 March 2025.
- ^ denotes the power set of .
- ^ Tagiew, Rustam (3 May 2011). "If more than Analytical Modeling is Needed to Predict Real Agents' Strategic Interaction". arXiv:1105.0558 [cs.GT].
- ^ Rosenthal, Robert W. (December 1973). "A class of games possessing pure-strategy Nash equilibria". International Journal of Game Theory. 2 (1): 65–67. doi:10.1007/BF01737559. S2CID 121904640.
- ^ Koller, Daphne; Megiddo, Nimrod; von Stengel, Bernhard (1994). "Fast algorithms for finding randomized strategies in game trees". Proceedings of the twenty-sixth annual ACM symposium on Theory of computing – STOC '94. pp. 750–759. doi:10.1145/195058.195451. ISBN 0-89791-663-8. S2CID 1893272.
- ^ Alur, Rajeev; Dill, David L. (April 1994). "A theory of timed automata". Theoretical Computer Science. 126 (2): 183–235. doi:10.1016/0304-3975(94)90010-8.
- ^ Tomlin, C.J.; Lygeros, J.; Shankar Sastry, S. (July 2000). "A game theoretic approach to controller design for hybrid systems". Proceedings of the IEEE. 88 (7): 949–970. Bibcode:2000IEEEP..88..949T. doi:10.1109/5.871303. S2CID 1844682.
- ^ Koller, Daphne; Pfeffer, Avi (July 1997). "Representations and solutions for game-theoretic problems". Artificial Intelligence. 94 (1–2): 167–215. doi:10.1016/S0004-3702(97)00023-4.
- ^ Kearns, Michael; Littman, Michael L. (2001). "Graphical Models for Game Theory". UAI 2001: 253–260. CiteSeerX 10.1.1.22.5705.
- ^ Kearns, Michael; Littman, Michael L.; Singh, Satinder (7 March 2011). "Graphical Models for Game Theory". arXiv:1301.2281 [cs.GT].
- ^ Leyton-Brown, Kevin; Tennenholtz, Moshe (2005). Local-Effect Games (PDF). Dagstuhl Seminar Proceedings. Schloss Dagstuhl-Leibniz-Zentrum für Informatik. Archived from the original (PDF) on 3 February 2023. Retrieved 3 February 2023.
- ^ Genesereth, Michael; Love, Nathaniel; Pell, Barney (15 June 2005). "General Game Playing: Overview of the AAAI Competition". AI Magazine. 26 (2): 62. doi:10.1609/aimag.v26i2.1813.
- ^ Clempner, Julio (2006). "Modeling shortest path games with Petri nets: a Lyapunov based theory". International Journal of Applied Mathematics and Computer Science. 16 (3): 387–397.
- ^ Sannikov, Yuliy (September 2007). "Games with Imperfectly Observable Actions in Continuous Time" (PDF). Econometrica. 75 (5): 1285–1329. doi:10.1111/j.1468-0262.2007.00795.x.
- ^ Tagiew, Rustam (December 2008). "Multi-Agent Petri-Games". 2008 International Conference on Computational Intelligence for Modelling Control & Automation. pp. 130–135. doi:10.1109/CIMCA.2008.15. ISBN 978-0-7695-3514-2. S2CID 16679934.
- ^ Tagiew, Rustam (2009). "On Multi-agent Petri Net Models for Computing Extensive Finite Games". New Challenges in Computational Collective Intelligence. Studies in Computational Intelligence. Vol. 244. Springer. pp. 243–254. doi:10.1007/978-3-642-03958-4_21. ISBN 978-3-642-03957-7.
- ^ Bhat, Navin; Leyton-Brown, Kevin (11 July 2012). "Computing Nash Equilibria of Action-Graph Games". arXiv:1207.4128 [cs.GT].
- ^ Larson, Jennifer M. (11 May 2021). "Networks of Conflict and Cooperation". Annual Review of Political Science. 24 (1): 89–107. doi:10.1146/annurev-polisci-041719-102523.
- ^ Friedman, Daniel (1998). "On economic applications of evolutionary game theory" (PDF). Journal of Evolutionary Economics. 8: 14–53. Archived (PDF) from the original on 11 February 2014.
- ^ a b Camerer, Colin F. (2003). "1.1 What Is Game Theory Good For?". Behavioral Game Theory: Experiments in Strategic Interaction. pp. 5–7. Archived from the original on 14 May 2011.
- ^ Bruin, Boudewijn de (September 2005). "Game Theory in Philosophy". Topoi. 24 (2): 197–208. doi:10.1007/s11245-005-5055-3.
- ^ Ross, Don (10 March 2006). "Game Theory". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Stanford University. Retrieved 21 August 2008.
- ^ Velegol, Darrell; Suhey, Paul; Connolly, John; Morrissey, Natalie; Cook, Laura (17 October 2018). "Chemical Game Theory". Industrial & Engineering Chemistry Research. 57 (41): 13593–13607. doi:10.1021/acs.iecr.8b03835. S2CID 105204747.
- ^ Camerer, Colin F. (2003). "Introduction". Behavioral Game Theory: Experiments in Strategic Interaction. pp. 1–25. Archived from the original on 14 May 2011.
- ^ Kadane, Joseph B.; Larkey, Patrick D. (December 1983). "The Confusion of Is and Ought in Game Theoretic Contexts". Management Science. 29 (12): 1365–1379. doi:10.1287/mnsc.29.12.1365.
- ^ Aumann, Robert J. (2008). "game theory". The New Palgrave Dictionary of Economics (2nd ed.). Archived from the original on 15 May 2011. Retrieved 22 August 2011.
- ^ Shubik, Martin (1981). "Game Theory Models and Methods in Political Economy". In Arrow, Kenneth; Intriligator, Michael (eds.). Handbook of Mathematical Economics, v. 1. 1. Vol. 1. pp. 285–330. doi:10.1016/S1573-4382(81)01011-4. ISBN 978-0-444-86126-9.
- ^ Shapiro, Carl (Spring 1989). "The Theory of Business Strategy". The RAND Journal of Economics. 20 (1). Wiley: 125–137. JSTOR 2555656. PMID 10296625.
- ^ Agarwal, N.; Zeephongsekul, P. (11–12 December 2011). Psychological Pricing in Mergers & Acquisitions using Game Theory (PDF). 19th International Congress on Modelling and Simulation. Perth. Retrieved 3 February 2023.
- ^ Tesfatsion, Leigh (2006). Agent-Based Computational Economics: A Constructive Approach to Economic Theory. Handbook of Computational Economics. Vol. 2. pp. 831–880. doi:10.1016/S1574-0021(05)02016-2. ISBN 978-0-444-51253-6.
- ^ Joseph Y. Halpern (2008). "computer science and game theory". The New Palgrave Dictionary of Economics.
- ^ Myerson, Roger B. (2008). "mechanism design". The New Palgrave Dictionary of Economics. Archived from the original on 23 November 2011. Retrieved 4 August 2011.
- ^ Myerson, Roger B. (2008). "revelation principle". The New Palgrave Dictionary of Economics. Archived from the original on 16 May 2013. Retrieved 4 August 2011.
- ^ Sandholm, Tuomas (2008). "computing in mechanism design". The New Palgrave Dictionary of Economics. Archived from the original on 23 November 2011. Retrieved 5 December 2011.
- ^ a b Nisan, Noam; Ronen, Amir (April 2001). "Algorithmic Mechanism Design". Games and Economic Behavior. 35 (1–2): 166–196. doi:10.1006/game.1999.0790.
- ^ a b Nisan, Noam; Roughgarden, Tim; Tardos, Eva; Vazirani, Vijay V., eds. (2007). Algorithmic Game Theory. Cambridge University Press. ISBN 9780521872829. LCCN 2007014231.
- ^ Brams, Steven J. (1994). Chapter 30 Voting procedures. Handbook of Game Theory with Economic Applications. Vol. 2. pp. 1055–1089. doi:10.1016/S1574-0005(05)80062-1. ISBN 978-0-444-89427-4. and Moulin, Hervé (1994). Chapter 31 Social choice. Handbook of Game Theory with Economic Applications. Vol. 2. pp. 1091–1125. doi:10.1016/S1574-0005(05)80063-3. ISBN 978-0-444-89427-4.
- ^ Smith, Vernon L. (December 1992). "Game Theory and Experimental Economics: Beginnings and Early Influences". History of Political Economy. 24 (Supplement): 241–282. doi:10.1215/00182702-24-Supplement-241.
- ^ Smith, Vernon L. (2001). "Experimental Economics". International Encyclopedia of the Social & Behavioral Sciences. pp. 5100–5108. doi:10.1016/B0-08-043076-7/02232-4. ISBN 978-0-08-043076-8.
- ^ Plott, Charles R.; Smith, Vernon L., eds. (2008). Handbook of Experimental Economics Results. Elsevier. ISBN 978-0-08-088796-8.[page needed]
- ^ Vincent P. Crawford (1997). "Theory and Experiment in the Analysis of Strategic Interaction," in Advances in Economics and Econometrics: Theory and Applications, pp. 206–242 Archived 1 April 2012 at the Wayback Machine. Cambridge. Reprinted in Colin F. Camerer et al., ed. (2003). Advances in Behavioral Economics, Princeton. 1986–2003 papers. Description Archived 18 January 2012 at the Wayback Machine, preview, Princeton, ch. 12
- ^ Shubik, Martin (2002). "Chapter 62 Game theory and experimental gaming". Handbook of Game Theory with Economic Applications Volume 3. Vol. 3. pp. 2327–2351. doi:10.1016/S1574-0005(02)03025-4. ISBN 978-0-444-89428-1.
- ^ Gul, Faruk (2008). "behavioural economics and game theory". Abstract. The New Palgrave Dictionary of Economics. Archived 7 August 2017 at the Wayback Machine
- ^ Camerer, Colin F. (2008). "behavioral game theory". The New Palgrave Dictionary of Economics. Archived from the original on 23 November 2011. Retrieved 4 August 2011.
- ^ Camerer, Colin F. (1997). "Progress in Behavioral Game Theory". Journal of Economic Perspectives. 11 (4): 172. doi:10.1257/jep.11.4.167.
- ^ Camerer, Colin F. (2003). Behavioral Game Theory. Princeton. Description Archived 14 May 2011 at the Wayback Machine, preview Archived 26 March 2023 at the Wayback Machine, and ch. 1 link Archived 4 July 2013 at the Wayback Machine.
- ^ Camerer, Colin F.; Loewenstein, George; Rabin, Matthew, eds. (2011). Advances in Behavioral Economics. Princeton University Press. ISBN 978-1-4008-2911-8.[page needed]
- ^ Fudenberg, Drew (2006). "Advancing Beyond Advances in Behavioral Economics". Journal of Economic Literature. 44 (3): 694–711. doi:10.1257/jel.44.3.694. JSTOR 30032349. S2CID 3490729.
- ^ Tirole, Jean (1988). The Theory of Industrial Organization. MIT Press. Description and chapter-preview links, pp. vii–ix, "General Organization," pp. 5–6, and "Non-Cooperative Game Theory: A User's Guide Manual," ch. 11, pp. 423–59.
- ^ Bagwell, Kyle; Wolinsky, Asher (2002). "Game theory and industrial organization". Handbook of Game Theory with Economic Applications Volume 3. Vol. 3. pp. 1851–1895. doi:10.1016/S1574-0005(02)03012-6. ISBN 978-0-444-89428-1.
- ^ Fels, E. M. (1961). "Review of Strategy and Market Structure: Competition, Oligopoly, and the Theory of Games". Weltwirtschaftliches Archiv. 87: 12–14. JSTOR 40434883.
- ^ Reid, Gavin C. (1982). "Review of Market Structure and Behavior". The Economic Journal. 92 (365): 200–202. doi:10.2307/2232276. JSTOR 2232276.
- ^ Martin Shubik (1981). "Game Theory Models and Methods in Political Economy," in Handbook of Mathematical Economics, v. 1, pp. 285–330 doi:10.1016/S1573-4382(81)01011-4.
- ^ Martin Shubik (1987). A Game-Theoretic Approach to Political Economy. MIT Press. Description. Archived 29 June 2011 at the Wayback Machine
- ^ Martin Shubik (1978). "Game Theory: Economic Applications," in W. Kruskal and J.M. Tanur, ed., International Encyclopedia of Statistics, v. 2, pp. 372–78.
- ^ Christen, Markus (1 July 1998). "Game-theoretic model to examine the two tradeoffs in the acquisition of information for a careful balancing act". INSEAD. Archived from the original on 24 May 2013. Retrieved 1 July 2012.
- ^ Chevalier-Roignant, Benoît; Trigeorgis, Lenos (15 February 2012). "Options Games: Balancing the trade-off between flexibility and commitment". The European Financial Review. Archived from the original on 20 June 2013. Retrieved 3 January 2013.
- ^ Wilkinson, Nick (2005). "Game theory". Managerial Economics. pp. 331–381. doi:10.1017/CBO9780511810534.015. ISBN 978-0-521-81993-0.
- ^ "CIPS and TWS Partners promote game theory on the global stage". 27 November 2020. Archived from the original on 27 November 2020. Retrieved 20 April 2023.
- ^ CIPS (2021), Game Theory Archived 11 April 2021 at the Wayback Machine, CIPS in conjunction with TWS Partners, accessed 11 April 2021
- ^ a b Piraveenan, Mahendra (2019). "Applications of Game Theory in Project Management: A Structured Review and Analysis". Mathematics. 7 (9): 858. doi:10.3390/math7090858.
- ^ "What game theory tells us about politics and society". MIT News | Massachusetts Institute of Technology. 4 December 2018. Archived from the original on 23 April 2023. Retrieved 23 April 2023.
- ^ Downs (1957).
- ^ Brams, Steven J. (1 January 2001). "Game theory and the Cuban missile crisis". Plus Magazine. Archived from the original on 24 April 2015. Retrieved 31 January 2016.
- ^ "How game theory explains 'irrational' behavior". MIT Sloan. 5 April 2022. Archived from the original on 23 April 2023. Retrieved 23 April 2023.
- ^ Levy, Gilat; Razin, Ronny (March 2004). "It Takes Two: An Explanation for the Democratic Peace". Journal of the European Economic Association. 2 (1): 1–29. doi:10.1162/154247604323015463.
- ^ Fearon, James D. (1 January 1995). "Rationalist Explanations for War". International Organization. 49 (3): 379–414. doi:10.1017/s0020818300033324. JSTOR 2706903. S2CID 38573183.
- ^ Wood, Peter John (February 2011). "Climate change and game theory". Annals of the New York Academy of Sciences. 1219 (1): 153–170. Bibcode:2011NYASA1219..153W. doi:10.1111/j.1749-6632.2010.05891.x. PMID 21332497.
- ^ a b Ho, Edwin; Rajagopalan, Arvind; Skvortsov, Alex; Arulampalam, Sanjeev; Piraveenan, Mahendra (28 January 2022). "Game Theory in Defence Applications: A Review". Sensors. 22 (3): 1032. arXiv:2111.01876. Bibcode:2022Senso..22.1032H. doi:10.3390/s22031032. PMC 8838118. PMID 35161778.
- ^ Phetmanee, Surasak; Sevegnani, Michele; Andrei, Oana (2024). "StEVe: A Rational Verification Tool for Stackelberg Security Games". Integrated Formal Methods: 19th International Conference, IFM 2024. Manchester, United Kingdom: Springer-Verlag. pp. 267–275. doi:10.1007/978-3-031-76554-4_15.
- ^ Harper & Maynard Smith (2003).
- ^ Maynard Smith, John (1974). "The theory of games and the evolution of animal conflicts" (PDF). Journal of Theoretical Biology. 47 (1): 209–221. Bibcode:1974JThBi..47..209M. doi:10.1016/0022-5193(74)90110-6. PMID 4459582.
- ^ Alexander, J. McKenzie (19 July 2009). "Evolutionary Game Theory". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Stanford University. Retrieved 3 January 2013.
- ^ a b Okasha, Samir (3 June 2003). "Biological Altruism". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Stanford University. Retrieved 3 January 2013.
- ^ Shoham, Yoav; Leyton-Brown, Kevin (2008). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press. ISBN 978-1-139-47524-2.[page needed]
- ^ Ben-David et al. (1994).
- ^ Halpern, Joseph Y. (2008). "Computer science and game theory". The New Palgrave Dictionary of Economics (2nd ed.).
- ^ Shoham, Yoav (August 2008). "Computer science and game theory". Communications of the ACM. 51 (8): 74–79. doi:10.1145/1378704.1378721.
- ^ Littman, Amy; Littman, Michael L. (2007). "Introduction to the Special Issue on Learning and Computational Game Theory". Machine Learning. 67 (1–2): 3–6. doi:10.1007/s10994-007-0770-1. S2CID 22635389.
- ^ Hanley, John T. (14 December 2021). "GAMES, game theory and artificial intelligence". Journal of Defense Analytics and Logistics. 5 (2): 114–130. doi:10.1108/JDAL-10-2021-0011.
- ^ Albrecht, Stefano V.; Christianos, Filippos; Schäfer, Lukas (2024). Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. MIT Press. ISBN 978-0-262-04937-5.[page needed]
- ^ Parashar, Nilesh (15 August 2022). "What is Game Theory in AI?". Medium.
- ^ Hazra, Tanmoy; Anjaria, Kushal (March 2022). "Applications of game theory in deep learning: a survey". Multimedia Tools and Applications. 81 (6): 8963–8994. doi:10.1007/s11042-022-12153-2. PMC 9039031. PMID 35496996.
- ^ Skyrms (1996)
- ^ Grim et al. (2004).
- ^ Ullmann-Margalit, E. (1977), The Emergence of Norms, Oxford University Press, ISBN 978-0-19-824411-0[page needed]
- ^ Bicchieri, Cristina (2006), The Grammar of Society: the Nature and Dynamics of Social Norms, Cambridge University Press, ISBN 978-0-521-57372-6[page needed]
- ^ Bicchieri, Cristina (1989). "Self-Refuting Theories of Strategic Interaction: A Paradox of Common Knowledge". Erkenntnis. 30 (1–2): 69–85. doi:10.1007/BF00184816. S2CID 120848181.
- ^ Bicchieri, Cristina (1993), Rationality and Coordination, Cambridge University Press, ISBN 978-0-521-57444-0
- ^ Skyrms, Brian (1990), The Dynamics of Rational Deliberation, Harvard University Press, ISBN 978-0-674-21885-7
- ^ Stalnaker, Robert (October 1996). "Knowledge, Belief and Counterfactual Reasoning in Games". Economics and Philosophy. 12 (2): 133–163. doi:10.1017/S0266267100004132.
- ^ Braithwaite, Richard Bevan (1955). Theory of Games as a Tool for the Moral Philosopher. An Inaugural Lecture Delivered in Cambridge on 2 December 1954. University Press. ISBN 978-0-521-11351-9.[page needed]
- ^ Kuhn, Steven T. (July 2004). "Reflections on Ethics and Game Theory". Synthese. 141 (1): 1–44. doi:10.1023/B:SYNT.0000035846.91195.cb.
- ^ Chang, Sheryl L.; Piraveenan, Mahendra; Pattison, Philippa; Prokopenko, Mikhail (2020). "Game theoretic modelling of infectious disease dynamics and intervention methods: a review". Journal of Biological Dynamics. 14 (1): 57–89. arXiv:1901.04143. Bibcode:2020JBioD..14...57C. doi:10.1080/17513758.2020.1720322. PMID 31996099.
- ^ Roberts, Siobhan (20 December 2020). "'The Pandemic Is a Prisoner's Dilemma Game'". The New York Times.
- ^ Poundstone 1993, pp. 8, 117.
- ^ Rapoport, Anatol (1987). "Prisoner's Dilemma". The New Palgrave Dictionary of Economics. pp. 1–5. doi:10.1057/978-1-349-95121-5_1850-1. ISBN 978-1-349-95121-5.
- ^ "Battle of the Sexes | History, Participants, & Facts | Britannica". www.britannica.com. Archived from the original on 23 April 2023. Retrieved 23 April 2023.
- ^ Athenarium (12 August 2020). "Battle of the Sexes – Nash equilibrium in mixed strategies for coordination". Athenarium. Archived from the original on 23 April 2023. Retrieved 23 April 2023.
- ^ Harsanyi, John C. (June 1961). "On the rationality postulates underlying the theory of cooperative games". Journal of Conflict Resolution. 5 (2): 179–196. doi:10.1177/002200276100500205.
- ^ Aoki, Ryuta; Yomogida, Yukihito; Matsumoto, Kenji (January 2015). "The neural bases for valuing social equality". Neuroscience Research. 90: 33–40. doi:10.1016/j.neures.2014.10.020. PMID 25452125.
- ^ Berg, Joyce; Dickhaut, John; McCabe, Kevin (July 1995). "Trust, Reciprocity, and Social History". Games and Economic Behavior. 10 (1): 122–142. doi:10.1006/game.1995.1027.
- ^ Johnson, Noel D.; Mislin, Alexandra A. (October 2011). "Trust games: A meta-analysis". Journal of Economic Psychology. 32 (5): 865–889. doi:10.1016/j.joep.2011.05.007.
- ^ "Cournot (Nash) Equilibrium". OECD. 18 April 2013. Archived from the original on 23 May 2021. Retrieved 20 April 2021.
- ^ Spulber, Daniel F. (1995). "Bertrand Competition when Rivals' Costs are Unknown". The Journal of Industrial Economics. 43 (1): 1–11. doi:10.2307/2950422. JSTOR 2950422.
- ^ Nasar, Sylvia (1998) A Beautiful Mind, Simon & Schuster. ISBN 0-684-81906-6.
- ^ Singh, Simon (14 June 1998). "Between Genius and Madness". The New York Times.
- ^ Heinlein, Robert A. (1959), Starship Troopers
- ^ Dr. Strangelove Or How I Learned to Stop Worrying and Love the Bomb. 29 January 1964. 51 minutes in.
... is that the whole point of the doomsday machine is lost, if you keep it a secret!
- ^ Guzman, Rafer (6 March 1996). "Star on hold: Faithful following, meager sales". Pacific Sun. Archived from the original on 6 November 2013. Retrieved 25 July 2018.
- ^ "Liar Game (manga) – Anime News Network". www.animenewsnetwork.com. Archived from the original on 25 November 2022. Retrieved 25 November 2022.
- ^ Chaffin, Sean (20 August 2018). "Poker and Game Theory Featured in Hit Film 'Crazy Rich Asians'". PokerNews.com. Archived from the original on 5 November 2022. Retrieved 5 November 2022.
- ^ Bean, Travis (8 February 2019). "Game theory in Crazy Rich Asians: explaining the Mahjong showdown between Rachel and Eleanor". Colossus. Archived from the original on 5 November 2022. Retrieved 5 November 2022.
- ^ "An Analysis of the Applications of Networks in "Molly's Game": Networks Course blog for INFO 2040/CS 2850/Econ 2040/SOC 2090". Archived from the original on 8 April 2023. Retrieved 8 April 2023.
Sources
- Ben-David, S.; Borodin, A.; Karp, R.; Tardos, G.; Wigderson, A. (January 1994). "On the power of randomization in on-line algorithms". Algorithmica. 11 (1): 2–14. doi:10.1007/BF01294260. S2CID 26771869.
- Downs, Anthony (1957), An Economic Theory of Democracy, New York: Harper
- Fisher, Sir Ronald Aylmer (1930). The Genetical Theory of Natural Selection. Clarendon Press.
- Gauthier, David (1986), Morals by agreement, Oxford University Press, ISBN 978-0-19-824992-4
- Grim, Patrick; Kokalis, Trina; Alai-Tafti, Ali; Kilb, Nicholas; St Denis, Paul (2004), "Making meaning happen", Journal of Experimental & Theoretical Artificial Intelligence, 16 (4): 209–243, Bibcode:2004JETAI..16..209G, doi:10.1080/09528130412331294715, S2CID 5737352
- Harper, David; Maynard Smith, John (2003), Animal signals, Oxford University Press, ISBN 978-0-19-852685-8
- Howard, Nigel (1971), Paradoxes of Rationality: Games, Metagames, and Political Behavior, Cambridge, MA: The MIT Press, ISBN 978-0-262-58237-7
- Kavka, Gregory S. (1986). Hobbesian Moral and Political Theory. Princeton University Press. ISBN 978-0-691-02765-4.
- Lewis, David (1969), Convention: A Philosophical Study, ISBN 978-0-631-23257-5 (2002 edition)
- Maynard Smith, John; Price, George R. (1973), "The logic of animal conflict", Nature, 246 (5427): 15–18, Bibcode:1973Natur.246...15S, doi:10.1038/246015a0, S2CID 4224989
- Osborne, Martin J.; Rubinstein, Ariel (1994), A course in game theory, MIT Press, ISBN 978-0-262-65040-3. A modern introduction at the graduate level.
- Poundstone, William (1993). Prisoner's Dilemma (1st Anchor Books ed.). New York: Anchor. ISBN 0-385-41580-X.
- Quine, W.v.O (1967), "Truth by Convention", Philosophical Essays for A.N. Whitehead, Russell and Russell Publishers, ISBN 978-0-8462-0970-6
- Quine, W.v.O (1960), "Carnap and Logical Truth", Synthese, 12 (4): 350–374, doi:10.1007/BF00485423, S2CID 46979744
- Skyrms, Brian (1996), Evolution of the social contract, Cambridge University Press, ISBN 978-0-521-55583-8
- Skyrms, Brian (2004), The stag hunt and the evolution of social structure, Cambridge University Press, ISBN 978-0-521-53392-8
- Sober, Elliott; Wilson, David Sloan (1998), Unto others: the evolution and psychology of unselfish behavior, Harvard University Press, ISBN 978-0-674-93047-6
- Webb, James N. (2007), Game theory: decisions, interaction and evolution, Undergraduate mathematics, Springer, ISBN 978-1-84628-423-6 Consistent treatment of game types usually claimed by different applied fields, e.g. Markov decision processes.
Further reading
Textbooks and general literature
- Aumann, Robert J. (1987), "game theory", The New Palgrave: A Dictionary of Economics, vol. 2, pp. 460–82.
- Camerer, Colin (2003), "Introduction", Behavioral Game Theory: Experiments in Strategic Interaction, Russell Sage Foundation, pp. 1–25, ISBN 978-0-691-09039-9, archived from the original on 14 May 2011, retrieved 9 February 2011, Description.
- Dutta, Prajit K. (1999), Strategies and games: theory and practice, MIT Press, ISBN 978-0-262-04169-0. Suitable for undergraduate and business students.
- Fernandez, L F.; Bierman, H S. (1998), Game theory with economic applications, Addison-Wesley, ISBN 978-0-201-84758-1. Suitable for upper-level undergraduates.
- Gaffal, Margit; Padilla Gálvez, Jesús (2014). Dynamics of Rational Negotiation: Game Theory, Language Games and Forms of Life. Springer.
- Gibbons, Robert D. (1992), Game theory for applied economists, Princeton University Press, ISBN 978-0-691-00395-5. Suitable for advanced undergraduates.
- Published in Europe as Gibbons, Robert (2001), A Primer in Game Theory, London: Harvester Wheatsheaf, ISBN 978-0-7450-1159-2.
- Gintis, Herbert (2000), Game theory evolving: a problem-centered introduction to modeling strategic behavior, Princeton University Press, ISBN 978-0-691-00943-8
- Green, Jerry R.; Mas-Colell, Andreu; Whinston, Michael D. (1995), Microeconomic theory, Oxford University Press, ISBN 978-0-19-507340-9. Presents game theory in a formal way suitable for the graduate level.
- Joseph E. Harrington (2008) Games, strategies, and decision making, Worth, ISBN 0-7167-6630-2. Textbook suitable for undergraduates in applied fields; numerous examples, fewer formalisms in concept presentation.
- Isaacs, Rufus (1999), Differential Games: A Mathematical Theory With Applications to Warfare and Pursuit, Control and Optimization, New York: Dover Publications, ISBN 978-0-486-40682-4
- Maschler, Michael; Solan, Eilon; Zamir, Shmuel (2013), Game Theory, Cambridge University Press, ISBN 978-1-108-49345-1. Undergraduate textbook.
- Miller, James H. (2003), Game theory at work: how to use game theory to outthink and outmaneuver your competition, New York: McGraw-Hill, ISBN 978-0-07-140020-6. Suitable for a general audience.
- Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, New York: Cambridge University Press, ISBN 978-0-521-89943-7, retrieved 8 March 2016
- Watson, Joel (2013), Strategy: An Introduction to Game Theory (3rd edition), New York: W.W. Norton and Co., ISBN 978-0-393-91838-0. A leading textbook at the advanced undergraduate level.
- McCain, Roger A. (2010). Game Theory: A Nontechnical Introduction to the Analysis of Strategy. World Scientific. ISBN 978-981-4289-65-8.
Historically important texts
- Aumann, R. J.; Shapley, L. S. (1974), Values of Non-Atomic Games, Princeton University Press
- Cournot, A. Augustin (1838), "Recherches sur les principes mathématiques de la théorie des richesses", Libraire des Sciences Politiques et Sociales
- Edgeworth, Francis Y. (1881), Mathematical Psychics, London: Kegan Paul
- Farquharson, Robin (1969), Theory of Voting, Blackwell (Yale U.P. in the U.S.), ISBN 978-0-631-12460-3
- Luce, R. Duncan; Raiffa, Howard (1957), Games and decisions: introduction and critical survey, New York: Wiley
- reprinted edition: R. Duncan Luce; Howard Raiffa (1989), Games and decisions: introduction and critical survey, New York: Dover Publications, ISBN 978-0-486-65943-5
- Maynard Smith, John (1982), Evolution and the theory of games, Cambridge University Press, ISBN 978-0-521-28884-2
- Nash, John (1950), "Equilibrium points in n-person games", Proceedings of the National Academy of Sciences of the United States of America, 36 (1): 48–49, Bibcode:1950PNAS...36...48N, doi:10.1073/pnas.36.1.48, PMC 1063129, PMID 16588946
- Shapley, L.S. (1953), A Value for n-person Games, In: Contributions to the Theory of Games volume II, H. W. Kuhn and A. W. Tucker (eds.)
- Shapley, L. S. (October 1953). "Stochastic Games". Proceedings of the National Academy of Sciences. 39 (10): 1095–1100. Bibcode:1953PNAS...39.1095S. doi:10.1073/pnas.39.10.1095. PMC 1063912. PMID 16589380.
- von Neumann, John (1928), "Zur Theorie der Gesellschaftsspiele", Mathematische Annalen, 100 (1): 295–320, doi:10.1007/bf01448847, S2CID 122961988 English translation: "On the Theory of Games of Strategy," in A. W. Tucker and R. D. Luce, ed. (1959), Contributions to the Theory of Games, v. 4, p. 42. Princeton University Press.
- von Neumann, John; Morgenstern, Oskar (1944), Theory of Games and Economic Behavior, Princeton University Press
- Zermelo, Ernst (1913), "Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels", Proceedings of the Fifth International Congress of Mathematicians, 2: 501–4
Other material
[edit]- Allan Gibbard, "Manipulation of voting schemes: a general result", Econometrica, Vol. 41, No. 4 (1973), pp. 587–601.
- McDonald, John (1950–1996), Strategy in Poker, Business & War, W. W. Norton, ISBN 978-0-393-31457-1. A layman's introduction.
- Papayoanou, Paul (2010), Game Theory for Business: A Primer in Strategic Gaming, Probabilistic, ISBN 978-0-9647938-7-3.
- Satterthwaite, Mark Allen (April 1975). "Strategy-proofness and Arrow's conditions: Existence and correspondence theorems for voting procedures and social welfare functions" (PDF). Journal of Economic Theory. 10 (2): 187–217. doi:10.1016/0022-0531(75)90050-2.
- Siegfried, Tom (2006), A Beautiful Math, Joseph Henry Press, ISBN 978-0-309-10192-9
- Skyrms, Brian (1990), The Dynamics of Rational Deliberation, Harvard University Press, ISBN 978-0-674-21885-7
- Thrall, Robert M.; Lucas, William F. (1963), "n-person games in partition function form", Naval Research Logistics Quarterly, 10 (4): 281–298, doi:10.1002/nav.3800100126
- Dolev, Shlomi; Panagopoulou, Panagiota N.; Rabie, Mikaël; Schiller, Elad M.; Spirakis, Paul G. (2011). "Rationality authority for provable rational behavior". Proceedings of the 30th annual ACM SIGACT-SIGOPS symposium on Principles of distributed computing. pp. 289–290. doi:10.1145/1993806.1993858. ISBN 978-1-4503-0719-2.
- Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh (June 2014), "Algorithms, games, and evolution", Proceedings of the National Academy of Sciences of the United States of America, 111 (29): 10620–10623, Bibcode:2014PNAS..11110620C, doi:10.1073/pnas.1406556111, PMC 4115542, PMID 24979793
External links
- James Miller (2015): Introductory Game Theory Videos.
- "Games, theory of", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
- Paul Walker: History of Game Theory Page.
- David Levine: Game Theory. Papers, Lecture Notes and much more stuff.
- Alvin Roth: "Game Theory and Experimental Economics page". Archived from the original on 15 August 2000. Retrieved 13 September 2003. — Comprehensive list of links to game theory information on the Web
- Adam Kalai: Game Theory and Computer Science — Lecture notes on Game Theory and Computer Science
- Mike Shor: GameTheory.net — Lecture notes, interactive illustrations and other information.
- Jim Ratliff's Graduate Course in Game Theory Archived 29 March 2010 at the Wayback Machine (lecture notes).
- Don Ross: Review Of Game Theory in the Stanford Encyclopedia of Philosophy.
- Bruno Verbeek and Christopher Morris: Game Theory and Ethics
- Elmer G. Wiens: Game Theory — Introduction, worked examples, play online two-person zero-sum games.
- Marek M. Kaminski: Game Theory and Politics Archived 20 October 2006 at the Wayback Machine — Syllabuses and lecture notes for game theory and political science.
- Websites on game theory and social interactions
- Kesten Green's Conflict Forecasting at the Wayback Machine (archived 11 April 2011) — See Papers for evidence on the accuracy of forecasts from game theory and other methods Archived 15 September 2019 at the Wayback Machine.
- McKelvey, Richard D., McLennan, Andrew M., and Turocy, Theodore L. (2007) Gambit: Software Tools for Game Theory.
- Benjamin Polak: Open Course on Game Theory at Yale Archived 3 August 2010 at the Wayback Machine videos of the course
- Benjamin Moritz, Bernhard Könsgen, Danny Bures, Ronni Wiersch, (2007) Spieltheorie-Software.de: An application for Game Theory implemented in JAVA.
- Antonin Kucera: Stochastic Two-Player Games.
- Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4) – Many person game theory; What is Mathematical Game Theory? (#5) – Finale, summing up, and my own view
Fundamentals
Definition and Basic Principles
Game theory is the study of mathematical models representing strategic interactions among rational decision-makers, where the outcome for each participant depends on the choices of all involved.[1] These models formalize situations of conflict, cooperation, or mixed motives, analyzing how agents select actions to maximize their own utilities given the anticipated responses of others.[9] The framework originated in efforts to extend economic analysis beyond isolated decisions to interdependent ones, emphasizing that no agent's payoff can be evaluated in isolation.[10]

At its core, a game in game theory comprises players, who are the decision-makers; strategies, which are the complete plans of action available to each player contingent on information; and payoffs, which quantify the outcomes or utilities resulting from the combination of strategies chosen.[11] Payoffs reflect preferences over possible results, often represented numerically under the assumption of ordinal or cardinal utility comparability.[12] Games may be depicted in normal form as payoff matrices for simultaneous moves or in extensive form as decision trees for sequential interactions, capturing the timing and information structure.[9]

Fundamental principles include the assumption of rationality, whereby players seek to maximize their expected payoffs, and strategic interdependence, where each player's optimal choice hinges on beliefs about others' actions.[10] Equilibrium concepts, such as those ensuring mutual best responses, emerge as solutions where no player gains by unilaterally altering strategy, though early formulations like von Neumann's minimax theorem applied specifically to zero-sum games.[1] These principles underpin applications across economics, biology, and political science, revealing potential inefficiencies like suboptimal collective outcomes despite individual rationality.[12] The field's foundational text, Theory of Games and Economic Behavior by John von Neumann and Oskar
Morgenstern, published in 1944, established these elements by integrating utility theory with combinatorial analysis.[4]

Key Components: Players, Actions, Payoffs
A game in game theory is formally structured around three primary components: the players, their actions, and the payoffs derived from action combinations. The players constitute a finite set of decision-making entities, denoted typically as N = {1, 2, …, n}, where each player acts to advance their own interests based on the anticipated responses of others.[13] These entities can represent individuals, firms, nations, or other agents in strategic interactions, with the assumption that their number and identities are explicitly defined to model the conflict or coordination scenario.[14]

Actions refer to the choices available to each player, forming a set A_i for player i, which may be pure actions in simple simultaneous-move settings or contingent plans (strategies) in games with sequential moves or incomplete information. In the normal form representation of a game, actions are often synonymous with pure strategies, listing all feasible options without regard to timing or information revelation.[15] For instance, in a two-player game like matching pennies, player 1's actions might be "heads" or "tails," while player 2 mirrors this set; the full set of action profiles is the Cartesian product A_1 × A_2, enumerating all possible joint choices.[14] This component captures the strategic menu, ensuring the model reflects realistic decision points without extraneous options.
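The players/actions/profiles structure can be sketched in a few lines of code. This is an illustrative example, not from the source, using the matching-pennies action sets just described:

```python
from itertools import product

# Matching pennies: two players with identical action sets.
players = [1, 2]
actions = {1: ["heads", "tails"], 2: ["heads", "tails"]}

# The set of joint action profiles is the Cartesian product A_1 x A_2.
profiles = list(product(actions[1], actions[2]))

# Player 1 wins (+1) on a match and loses (-1) otherwise; player 2 gets the
# negation, so every profile's payoffs sum to zero (an assumed zero-sum setup).
payoffs = {(a1, a2): (1, -1) if a1 == a2 else (-1, 1) for a1, a2 in profiles}
```

Enumerating `profiles` yields the four joint choices; any finite normal-form game fits this players/actions/payoffs triple.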
Payoffs quantify the outcomes for each player given an action profile, represented by utility functions u_i for player i, where higher values indicate preferred results under von Neumann-Morgenstern expected utility theory.[16] These are not mere monetary rewards but ordinal or cardinal measures of preference satisfaction, often normalized for analysis; for example, in zero-sum games, one player's gain equals another's loss, yielding u_1(a) + u_2(a) = 0 for every action profile a.[2] Payoff matrices tabulate these values row-wise for one player's actions and column-wise for another's, facilitating equilibrium computation, as in the bimatrix form where rows denote player 1's payoffs and columns player 2's.[17] Empirical calibration of payoffs draws from observed behavior or elicited preferences, underscoring that misspecification can distort predicted equilibria.[18]

Assumptions of Rationality and Common Knowledge
The assumption of rationality in game theory holds that each player is a self-interested decision-maker who selects strategies to maximize their own expected payoff, given their beliefs about others' actions and the game's structure. This implies adherence to utility maximization principles, such as those outlined in Savage's 1954 axiomatic framework, where players update beliefs via Bayesian reasoning and choose dominant or best-response actions when available.[19] Rationality does not require omniscience but consistency in pursuing higher payoffs over lower ones, enabling predictions of behavior in strategic settings like the Prisoner's Dilemma, where defection maximizes individual gain under mutual suspicion.[20]

Common knowledge extends this by requiring that the game's rules, payoff matrices, and players' rationality are mutually known at all levels of recursion: all players know a fact, know that others know it, know that others know they know it, and so on indefinitely. Formally introduced by David Lewis in his 1969 analysis of conventions, this concept ensures aligned higher-order beliefs, preventing paradoxes like infinite regress in anticipating opponents' foresight.[21] Robert Aumann formalized its role in interactive epistemology in 1976, showing that common knowledge of rationality implies convergence on posterior beliefs in Bayesian updating scenarios, foundational for equilibrium refinements.[22]

Together, these assumptions underpin non-cooperative solution concepts, such as Nash equilibrium, where strategies are mutual best responses under common knowledge, as deviations would yield lower payoffs if others remain rational.
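To make the mutual-best-response idea concrete, here is a minimal sketch, not from the source, that brute-forces pure-strategy Nash equilibria of a two-player game; the Prisoner's Dilemma payoffs below are one conventional, assumed parameterization:

```python
from itertools import product

def pure_nash(payoffs, actions1, actions2):
    """Return profiles where neither player gains by deviating unilaterally."""
    equilibria = []
    for a1, a2 in product(actions1, actions2):
        u1, u2 = payoffs[(a1, a2)]
        best1 = all(payoffs[(d, a2)][0] <= u1 for d in actions1)
        best2 = all(payoffs[(a1, d)][1] <= u2 for d in actions2)
        if best1 and best2:
            equilibria.append((a1, a2))
    return equilibria

# Prisoner's Dilemma with conventional (assumed) payoff values.
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
print(pure_nash(pd, ["C", "D"], ["C", "D"]))
```

For these assumed payoffs the only profile surviving the best-response check is mutual defection, matching the rational-play prediction discussed above.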
Empirical tests, however, reveal deviations: for instance, ultimatum game experiments since the 1980s show proposers offering substantial shares despite rational predictions of minimal acceptance thresholds, indicating bounded rationality influenced by fairness norms or incomplete information processing.[23] Critics argue the infinite regress of common knowledge is psychologically implausible, as real agents exhibit cognitive limits rather than perfect foresight, though proponents maintain the framework's value for modeling incentives in economics and biology despite behavioral anomalies.[24][25]

Historical Development
Precursors and Early Contributions
Early mathematical analyses of deterministic games provided foundational insights into strategic decision-making under perfect information. In 1913, Ernst Zermelo proved that in finite games of perfect information, such as chess, one player has a winning strategy, a draw is possible, or the opponent has a winning strategy, establishing the concept of backward induction for solving such games.[26] This result, while limited to zero-sum, two-player scenarios without chance elements, anticipated key elements of extensive-form game analysis.[27]

Combinatorial game theory emerged from efforts to solve impartial games like Nim. Charles L. Bouton formalized a winning strategy for Nim in 1901 using binary digital sums (the nim-sum); the nimber and mex apparatus came later with the Sprague-Grundy theorem, developed independently by Roland Sprague in 1935 and Patrick Grundy in 1939, though these built on earlier 19th-century puzzles. These works emphasized recursive evaluation of positions, influencing later impartial game solutions but remaining disconnected from broader strategic interactions.[27]

In economics, Antoine Augustin Cournot's 1838 model of duopoly described firms simultaneously choosing output quantities to maximize profits, yielding a stable equilibrium where neither deviates unilaterally—a concept later recognized as analogous to a Nash equilibrium in non-cooperative settings.[27] Joseph Bertrand critiqued this in 1883, proposing price competition instead, where undercutting leads to marginal cost pricing, highlighting sensitivity to strategic assumptions.[27] These models treated competition as interdependent choices without explicit randomization or general solution methods, focusing on market stability rather than adversarial play.
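Bouton's Nim analysis is easy to state computationally: a position is losing for the player to move exactly when the XOR (nim-sum) of the heap sizes is zero. A small sketch, assuming the standard last-object-wins convention:

```python
from functools import reduce
from operator import xor

def nim_sum(heaps):
    """Bouton's criterion: XOR of heap sizes; nonzero means the mover can win."""
    return reduce(xor, heaps, 0)

def winning_move(heaps):
    """Return a (heap_index, new_size) move leaving nim-sum 0, or None if losing."""
    s = nim_sum(heaps)
    if s == 0:
        return None  # every move leaves a nonzero nim-sum: a losing position
    for i, h in enumerate(heaps):
        target = h ^ s
        if target < h:  # some heap always admits such a reduction when s != 0
            return (i, target)

print(winning_move([3, 4, 5]))
```

From heaps (3, 4, 5) the nim-sum is 2, and reducing the first heap to 1 restores a zero nim-sum, which is the winning reply.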
Émile Borel explored minimax strategies for two-person games in the early 1920s, deriving optimal mixed strategies for cases with three or five actions, though he erroneously claimed no general solution existed for larger games.[27] Such isolated contributions demonstrated strategic interdependence but lacked a unified framework, paving the way for von Neumann's 1928 minimax theorem that generalized these ideas to arbitrary finite zero-sum games.[3]

Formal Foundations (1920s-1950s)
The formal foundations of game theory emerged in the 1920s with Émile Borel's series of papers exploring strategic interactions in games like poker, where he introduced the concept of mixed strategies to model bluffing and randomization.[28] Borel's work, spanning 1921 to 1927, analyzed two-person games under uncertainty but lacked rigorous proofs for general existence of optimal strategies, limiting its scope to specific cases.[28]

John von Neumann advanced these ideas decisively in his 1928 paper "Zur Theorie der Gesellschaftsspiele," published in Mathematische Annalen.[29] There, von Neumann proved the minimax theorem for two-person zero-sum games, establishing that for any finite game, there exists a mixed strategy equilibrium where each player's maximin value equals the minimax value, guaranteeing an optimal value of the game independent of the opponent's play.[30] This theorem formalized the notion of strategy as a complete plan contingent on all possible information, shifting analysis from pure intuition to mathematical rigor and providing a cornerstone for zero-sum game solutions.[3]

Von Neumann's framework expanded significantly in 1944 with the publication of Theory of Games and Economic Behavior, co-authored with economist Oskar Morgenstern.[4] The book axiomatized von Neumann-Morgenstern utility theory, deriving expected utility from rationality postulates like completeness, transitivity, and continuity, which justified probabilistic choices under risk.[4] It introduced extensive-form representations using game trees for sequential moves, cooperative n-person game analysis via characteristic functions that assign values to coalitions, and solution concepts like stable sets to predict bargaining outcomes, applying these tools to economic competition and oligopoly models.[31]

In 1950, John Nash extended non-cooperative game theory beyond zero-sum settings with his Princeton dissertation "Non-Cooperative Games" and a contemporaneous paper "Equilibrium
Points in n-Person Games."[32][33] Nash's equilibrium concept defines a strategy profile where no player can improve their payoff by unilaterally deviating, proven to exist for finite strategic-form games via fixed-point theorems like Brouwer's or Kakutani's.[33] This innovation addressed multi-player, non-zero-sum scenarios, such as coordination problems, and became central to analyzing competitive equilibria in economics, contrasting with von Neumann's focus on opposition by allowing mutual benefit or conflict.[32]

Expansion and Nobel Recognitions (1960s onward)
During the 1960s and 1970s, game theory expanded beyond its initial economic and military applications into biology and other social sciences, with the development of evolutionary game theory by John Maynard Smith, who with George R. Price introduced the evolutionarily stable strategy (ESS) in 1973 to analyze stable strategies in populations where "fitness" replaces individual payoffs; replicator dynamics formalizing the associated population adjustments followed in 1978.[34] This framework modeled animal conflicts and cooperation without assuming conscious rationality, influencing behavioral ecology by treating genes or behaviors as players in repeated interactions over generations.[35] Concurrently, cooperative game theory advanced through Lloyd Shapley's work on matching mechanisms, such as the deferred acceptance algorithm developed with David Gale in 1962, which provided stable solutions for assignments like housing markets or marriages, later applied to organ transplants and school choice.[36]

In economics and political science, the 1970s and 1980s saw refinements in non-cooperative models, including repeated games analyzed by Robert Aumann, whose work beginning in 1959 helped establish the folk theorem, which shows that in infinitely repeated interactions with discounting, a wide range of outcomes, including cooperation, can be sustained as equilibria under rational play and common knowledge.[37] These developments facilitated applications to oligopolistic competition, bargaining, and international relations, where Thomas Schelling's 1960 book The Strategy of Conflict emphasized focal points and credible threats in mixed-motive scenarios, bridging zero-sum and cooperative elements.

The Nobel Prize in Economic Sciences began formally recognizing game theory's contributions in 1994, awarding John F. Nash Jr., John C.
Harsanyi, and Reinhard Selten "for their pioneering analysis of equilibria in the theory of non-cooperative games," validating Nash's 1950 equilibrium concept for finite games, Harsanyi's Bayesian approach to incomplete information in 1967–1968, and Selten's 1965 perfection refinement to eliminate non-credible threats.[38] In 2005, Robert J. Aumann and Thomas C. Schelling received the prize "for having enhanced our understanding of conflict and cooperation through game-theory analysis," highlighting repeated games and strategic communication. Subsequent awards included 2007 to Leonid Hurwicz, Eric Maskin, and Roger Myerson for mechanism design theory, which uses incentive-compatible equilibria to achieve social optima under asymmetric information; 2012 to Alvin E. Roth and Lloyd S. Shapley for stable matching; and 2020 to Paul R. Milgrom and Robert B. Wilson for auction formats improving revenue and efficiency via game-theoretic bidding models. These recognitions underscore game theory's maturation into a foundational tool for analyzing strategic interdependence across disciplines.[39]

Classifications of Games
Cooperative versus Non-Cooperative Games
In non-cooperative game theory, players act independently to maximize their own payoffs, without mechanisms for binding commitments or enforceable side payments between them. This approach models scenarios where strategic choices are made simultaneously or sequentially, but cooperation cannot be externally imposed, leading to outcomes driven by individual rationality and potential conflicts of interest. Key solution concepts, such as the Nash equilibrium introduced by John Nash in his 1951 paper "Non-Cooperative Games," identify strategy profiles where no player benefits from unilateral deviation, assuming others' strategies fixed.[40]

Cooperative game theory, by contrast, assumes players can form coalitions with binding agreements, often enforceable through contracts or institutions, shifting focus to group rationality and the division of collective gains. Games are typically represented in characteristic function form, where a value is assigned to each subset of players (coalition) indicating the maximum payoff that coalition can secure on its own, a concept first formulated by John von Neumann.[41] This formulation underpins analysis of transferable utility games, where payoffs can be redistributed among coalition members without loss.[41]

Prominent solution concepts in cooperative games include the core, defined as the set of payoff imputations where no coalition has incentive to deviate and block the allocation by achieving higher payoffs for its members, ensuring stability against subgroup objections.[42] Another is the Shapley value, developed by Lloyd Shapley in 1953, which uniquely allocates payoffs to each player as the average marginal contribution across all possible coalition formation orders, satisfying axioms of efficiency, symmetry, dummy player irrelevance, and additivity.[42] These differ from non-cooperative equilibria by prioritizing coalition-proof allocations over individual best responses.[42] The distinction hinges on assumptions about
enforcement: non-cooperative models lack pre-game binding pacts, predicting self-enforcing outcomes like Nash equilibria in settings such as oligopolistic competition, while cooperative models presuppose institutional support for coalitions, applicable to scenarios like resource sharing or parliamentary voting.[43] Empirical applications reveal that non-cooperative frameworks better capture decentralized markets without contracts, whereas cooperative ones suit regulated environments with verifiable agreements, though real-world games often require hybrid analysis to account for endogenous enforcement.[44]

Zero-Sum versus Non-Zero-Sum Games
A zero-sum game in game theory is a model of conflict where the sum of all players' payoffs equals zero across every possible combination of strategies, such that any gain by one player precisely equals the loss of others.[45] This structure implies strict antagonism, with no net value created or destroyed in the interaction.[46] John von Neumann formalized the analysis of two-player zero-sum games in 1928 through his minimax theorem, which guarantees the existence of optimal mixed strategies that equalize the game's value regardless of the opponent's play.[3] Examples include chess, where one player's victory yields a payoff of +1 and the opponent's -1 (or draws at zero), and most poker variants, where the pot redistributes fixed stakes without external addition.[45]

Non-zero-sum games, by contrast, feature payoff sums that can exceed, fall short of, or fluctuate around zero depending on strategies chosen, enabling scenarios of collective benefit or harm.[47] Here, players' interests partially align, blending competition with potential cooperation, and no single dominant strategy universally resolves the game.[47] The Prisoner's Dilemma exemplifies this: two suspects can each receive a light sentence (-1 payoff) by cooperating (silence), but mutual defection yields harsher outcomes (-2 each), while one defects and the other cooperates results in +1 for the defector and -3 for the cooperator, summing to -2 overall rather than zero.[48]

The classification hinges on payoff interdependence: zero-sum games enforce pure rivalry, solvable via minimax where each player minimizes maximum loss, whereas non-zero-sum games admit Nash equilibria—strategy profiles where no unilateral deviation improves payoff—but these may be Pareto-dominated by other outcomes, as in coordination games like the Stag Hunt.[49] Real-world applications differentiate accordingly; zero-sum models suit fixed-resource contests like military engagements over territory, while non-zero-sum frameworks
capture trade, where voluntary exchange expands total welfare (e.g., comparative advantage yielding mutual gains beyond initial endowments).[50] Empirical studies, such as those on oligopolistic markets, confirm non-zero-sum dynamics often prevail outside pure antagonism, with cooperation emerging under repeated play or communication.[51]

| Characteristic | Zero-Sum Games | Non-Zero-Sum Games |
|---|---|---|
| Payoff Sum | Always zero for all outcomes | Varies (positive, negative, or zero) |
| Player Interests | Strictly opposed | Partially aligned or divergent |
| Optimal Solution | Minimax value and strategies exist | Nash equilibria, potentially multiple and inefficient |
| Examples | Chess, poker | Prisoner's Dilemma, trade negotiations |
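As a concrete sketch of the minimax solution for the zero-sum case, not taken from the source, a 2×2 zero-sum game can be solved in closed form: check for a pure saddle point first, otherwise mix so as to leave the opponent indifferent. Matching pennies serves as the assumed example matrix:

```python
def solve_2x2_zero_sum(m):
    """Value and row player's optimal strategy for a 2x2 zero-sum game.

    m[i][j] is the row player's payoff; the column player receives -m[i][j].
    """
    (a, b), (c, d) = m
    maximin = max(min(a, b), min(c, d))  # best guaranteed pure-row payoff
    minimax = min(max(a, c), max(b, d))  # best guaranteed pure-column cap
    if maximin == minimax:               # saddle point: pure strategies suffice
        return maximin, None
    p = (d - c) / (a - b - c + d)        # indifference probability for row 1
    value = (a * d - b * c) / (a - b - c + d)
    return value, (p, 1 - p)

value, strategy = solve_2x2_zero_sum([[1, -1], [-1, 1]])  # matching pennies
print(value, strategy)
```

For matching pennies the value is 0 with the row player mixing 50/50, consistent with the minimax guarantee described above; a `None` strategy signals that a pure saddle point already attains the value.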
Symmetric versus Asymmetric Games
In game theory, a symmetric game is defined as one in which all players possess identical strategy sets, and the payoff to any player for selecting a particular strategy depends solely on the combination of strategies chosen by others, irrespective of player identities.[52] This structure implies that the game's payoff functions are invariant under permutations of the players, allowing for the existence of symmetric equilibria where all players adopt the same strategy.[53] For instance, in the Prisoner's Dilemma, both players face the same choices—cooperate or defect—and receive payoffs that mirror each other based on the pair of actions taken, such as mutual cooperation yielding (3,3) or mutual defection yielding (1,1).[54]

Asymmetric games, by contrast, feature players with heterogeneous strategy sets, payoffs, or roles, where outcomes depend on specific player identities or positional differences.[55] A classic example is the Ultimatum Game, where one player (the proposer) offers a division of a fixed resource, and the other (the responder) accepts or rejects it; the proposer's strategies involve specific split amounts, while the responder's are limited to accept/reject, leading to payoffs that are not interchangeable.[56] In such games, equilibria often require distinct strategies tailored to each player's position, complicating analysis compared to symmetric cases.[57]

The distinction between symmetric and asymmetric games holds analytical significance, as symmetry simplifies equilibrium computation and prediction by enabling the focus on strategy profiles invariant to player labels, often yielding pure-strategy symmetric Nash equilibria under certain conditions.[58] Symmetric games serve as foundational benchmarks in fields like evolutionary game theory, where population-level dynamics assume interchangeable agents, facilitating models of cooperation and selection pressures.[57] Asymmetry, however, better captures real-world scenarios with inherent roles—such
as principal-agent interactions or markets with differentiated firms—necessitating more complex solution methods, including asymmetric Nash equilibria that may not generalize across players.[59] While symmetric structures promote tractable insights into uniform behavior, asymmetric ones reveal how positional advantages or informational disparities drive strategic divergence, though they demand verification of player-specific incentives to avoid overgeneralization from symmetric approximations.[60]

Simultaneous versus Sequential Games
In game theory, simultaneous games are those in which players select their actions concurrently, without observing the choices made by others.[61] These are typically represented in normal form, using payoff matrices that enumerate all possible action profiles and their associated outcomes for each player.[18] A classic example is the Cournot duopoly model, where two firms independently choose production quantities to maximize profits, anticipating rivals' outputs based on rational expectations rather than direct observation.[62]

In contrast, sequential games involve players acting in a predefined order, with subsequent players able to observe prior actions before deciding.[61] These are formalized in extensive form, depicted as game trees that branch according to decision nodes, information sets, and terminal payoffs, capturing the dynamic structure of play.[18] The ultimatum game illustrates this: a proposer offers a division of a fixed sum to a responder, who can accept (yielding the proposed split) or reject (resulting in zero for both), with the responder's choice informed by the observed offer.[63]

The distinction affects equilibrium analysis: simultaneous games rely on Nash equilibria, where no player benefits from unilateral deviation given others' strategies, but may yield multiple or inefficient outcomes due to lack of commitment.[62] Sequential games permit backward induction, starting from endpoints to derive subgame perfect equilibria, often resolving ambiguities in simultaneous counterparts by incorporating credible threats or promises.[61] For instance, any simultaneous game can be recast as a sequential one with simultaneous information sets (nature's move randomizing observation), but the extensive form reveals strategies as complete contingency plans over histories, enabling refinements like trembling-hand perfection.[18] This sequential lens, formalized by von Neumann and Morgenstern in 1944, underscores how timing influences strategic foresight
and outcomes in non-cooperative settings.[11]

Perfect versus Imperfect Information
In game theory, perfect information refers to scenarios where every player, at each decision point, possesses complete knowledge of all prior actions taken by all participants, allowing full observability of the game's history up to that moment.[64] This structure is formalized in the extensive-form representation, where each decision node for a player corresponds to a singleton information set, meaning the player can distinguish precisely among all possible histories leading to that node.[65] Classic examples include chess and tic-tac-toe, where moves are sequential and fully visible, enabling deterministic analysis without uncertainty about opponents' past choices.[66]

Finite two-player zero-sum games of perfect information, absent chance elements, admit a pure strategy solution via backward induction, as established by Ernst Zermelo in 1913 for games like chess, where one player can force a win, the opponent can force a draw, or both can force at least a draw.[26] This theorem underscores the resolvability of such games: by recursively evaluating terminal payoffs and optimal responses from the end of the game tree, players can identify winning or drawing strategies without randomization.[67] Subgame perfect Nash equilibria emerge naturally in these settings, as the absence of hidden information eliminates incentives for non-credible threats or promises off the equilibrium path.[68]

For large-scale perfect information games with vast state spaces, practical approximations of optimal pure strategies are achieved using AlphaZero-like methods that combine Monte Carlo tree search with a deep neural network featuring two heads—one for policy approximation and one for value estimation—trained via self-play reinforcement learning, as applied to chess using engines like Leela Chess Zero (lc0)[69], shogi using engines like YaneuraOu[70], and Go using engines like KataGo.[71][72]

In contrast, imperfect information arises when players lack full knowledge of prior actions or
states, often modeled through information sets encompassing multiple indistinguishable histories in the extensive form.[73] Examples include poker, where private card holdings obscure opponents' hands, or simultaneous-move games like rock-paper-scissors, where actions occur without observation of counterparts.[74] This opacity introduces uncertainty, necessitating strategies that account for beliefs about unobserved elements, such as Bayesian updating over possible histories.[75] Solution concepts shift toward mixed strategies or refinements like trembling-hand perfection to handle bluffing and signaling, as pure backward induction fails due to unresolved ambiguities at decision points.[76] The distinction critically affects computational complexity and strategic depth: perfect information games yield tractable deterministic outcomes in finite cases, while imperfect ones demand approximation algorithms for equilibrium computation, as seen in large-scale applications like no-limit Texas hold'em, where exact solutions remain infeasible without abstractions.[77] Empirical studies confirm that imperfect information amplifies the role of opponent modeling and exploitation, diverging from the mechanical optimality of perfect settings.[78] Note that perfect information pertains specifically to action histories, distinct from complete information, which assumes public knowledge of payoffs and strategies but permits hidden moves.[79]
Stochastic and Evolutionary Games
Stochastic games, also known as Markov games, model dynamic interactions where players' actions influence probabilistic state transitions and payoffs over an infinite horizon. Introduced by Lloyd Shapley in 1953, these games extend repeated matrix games by incorporating a finite set of states, with transitions governed by joint action-dependent probabilities.[80] Formally, a two-player discounted stochastic game is defined as a tuple (S, A_1, A_2, P, r), where S is the state space, A_i(s) are the action sets for player i in state s, P(s′ | s, a_1, a_2) specifies transition probabilities, and r_i(s, a_1, a_2) denotes stage payoffs; players discount future rewards by a factor λ ∈ (0, 1).[81] Existence of a value and optimal stationary strategies holds under finite state-action spaces, as proven by Shapley via a contraction-mapping argument akin to value iteration in Markov decision processes.[80] Applications include multi-agent reinforcement learning, where algorithms like Q-learning generalize to these settings for tasks such as competitive resource allocation.[81] Undiscounted stochastic games, without stopping probabilities, lack guaranteed convergence to equilibria, with counterexamples showing non-stationary optimal play.[82] Research since the 1990s has focused on limit-of-means payoffs and folk theorem analogs, establishing that patient players approximate any feasible payoff via correlated strategies, though equilibrium computation remains NP-hard.[83] Evolutionary games adapt classical game theory to biological or cultural evolution, treating strategies as heritable traits in populations where payoffs represent relative fitness.
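Shapley's existence argument for discounted stochastic games can be made concrete with a small computation. The sketch below uses a hypothetical two-state, two-action zero-sum game (all payoffs and transitions invented for illustration), and sidesteps the general linear-programming step by choosing stage games whose auxiliary matrices always have pure saddle points:

```python
# Value iteration for a two-player zero-sum discounted stochastic game,
# following Shapley's construction. The two states, payoffs, and
# (deterministic) transitions are hypothetical examples.

LAMBDA = 0.9  # discount factor

payoff = {  # payoff[s][a1][a2]: stage payoff to player 1 in state s
    0: [[3, 1], [2, 0]],
    1: [[0, -1], [4, 2]],
}
trans = {  # trans[s][a1][a2]: successor state
    0: [[0, 1], [1, 0]],
    1: [[0, 0], [1, 1]],
}

def saddle_value(M):
    """Value of a matrix game assumed to have a pure saddle point."""
    maximin = max(min(row) for row in M)
    minimax = min(max(row[j] for row in M) for j in range(len(M[0])))
    assert maximin == minimax, "no pure saddle point"
    return maximin

def shapley_iteration(tol=1e-10):
    v = {s: 0.0 for s in payoff}
    while True:
        new_v = {}
        for s in payoff:
            # one-shot auxiliary game: stage payoff + discounted continuation
            M = [[payoff[s][i][j] + LAMBDA * v[trans[s][i][j]]
                  for j in range(2)] for i in range(2)]
            new_v[s] = saddle_value(M)
        if max(abs(new_v[s] - v[s]) for s in v) < tol:
            return new_v
        v = new_v

result = shapley_iteration()  # state values converge to 19 (state 0) and 20 (state 1)
```

The iteration is a contraction with modulus λ, so the state values converge geometrically regardless of the starting guess; a general implementation would solve each auxiliary matrix game by linear programming rather than assuming a pure saddle point.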
Pioneered by John Maynard Smith in the 1970s, the framework resolves behavioral indeterminacy in games like the Prisoner's Dilemma by incorporating Darwinian selection over rational choice.[84] A core concept is the evolutionarily stable strategy (ESS), a Nash equilibrium resistant to invasion by mutants: a strategy x is an ESS if, for every alternative strategy y ≠ x, either u(x, x) > u(y, x), or u(x, x) = u(y, x) and u(x, y) > u(y, y).[8] Maynard Smith's 1982 analysis applied this to animal conflicts, such as hawk-dove games, predicting mixed equilibria stable under frequency-dependent selection.[84] Dynamics in evolutionary games often follow the replicator equation, ẋ_i = x_i(f_i(x) − f̄(x)), where x_i is the frequency of strategy i, f_i(x) its expected fitness against the population state x, and f̄(x) the population-average fitness.[85] This ODE, derived from differential replication rates, converges to ESS in symmetric games under Lyapunov stability, though cyclic attractors emerge in rock-paper-scissors-like setups.[85] Extensions to asymmetric games and metric strategy spaces preserve convergence properties for interior equilibria.[86] Empirical validation includes microbial experiments confirming ESS predictions in bacterial competitions.[8] Unlike stochastic games' focus on individual optimization, evolutionary models emphasize long-run stability, bridging biology and economics without assuming rational agents.[87]
Other Specialized Forms (Bayesian, Differential, Mean Field)
Bayesian games extend non-cooperative game theory to settings of incomplete information, where each player possesses private information about their own "type," such as payoffs or capabilities, and holds beliefs about others' types drawn from a common prior distribution. Formally, a Bayesian game is specified by a set of players, finite type spaces for each player, actions available to each type, payoff functions depending on the action profile and the realized types, and players' beliefs over type profiles, which satisfy Bayes' rule conditional on the common prior.[88] This framework, introduced by John C. Harsanyi in his 1967–1968 trilogy of papers, models strategic interactions like auctions or signaling games where players update beliefs rationally upon observing actions or signals. Solution concepts include Bayesian Nash equilibrium, in which each player's strategy, mapping types to actions, is a best response given beliefs about others' strategies and types; refinements like perfect Bayesian equilibrium address sequential settings with beliefs updated after every information set.[88] Applications span economics, such as first-price auctions where bidders' types represent valuations, and political science, analyzing voting under uncertainty about opponents' ideologies.[89] Differential games analyze continuous-time strategic interactions where the state of the system evolves according to differential equations controlled by players' actions, often in zero-sum pursuit-evasion scenarios like missile guidance or combat.
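A Bayesian Nash equilibrium can be found by brute-force enumeration in a small discrete example. The sketch below assumes a hypothetical sealed-bid first-price auction with two bidders, private valuations of 1 or 2 (equally likely), and bids restricted to {0, 1, 2}; ties are split evenly:

```python
from itertools import product

# Brute-force search for Bayesian Nash equilibria of a hypothetical
# discrete first-price sealed-bid auction. A strategy is a map from
# private valuation (the player's "type") to a bid.

VALUES = (1, 2)   # equally likely private valuations
BIDS = (0, 1, 2)  # allowed bids

def payoff(value, bid, opp_bid):
    if bid > opp_bid:
        return value - bid        # win, pay own bid
    if bid == opp_bid:
        return (value - bid) / 2  # tie broken by a fair coin
    return 0.0

def interim(value, bid, opp_strategy):
    # expected payoff against the opponent's equally likely types
    return sum(payoff(value, bid, opp_strategy[t])
               for t in range(len(VALUES))) / len(VALUES)

def bayesian_nash_equilibria():
    strategies = list(product(BIDS, repeat=len(VALUES)))  # type -> bid maps
    equilibria = []
    for s1, s2 in product(strategies, repeat=2):
        stable = True
        for mine, theirs in ((s1, s2), (s2, s1)):
            for t, value in enumerate(VALUES):
                current = interim(value, mine[t], theirs)
                if any(interim(value, b, theirs) > current + 1e-12 for b in BIDS):
                    stable = False
        if stable:
            equilibria.append((s1, s2))
    return equilibria
```

In this example the symmetric profile "bid 0 with valuation 1, bid 1 with valuation 2" passes the interim best-response check for every type, illustrating bid shading below one's true valuation in first-price formats.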
Pioneered by Rufus Isaacs during his work at RAND Corporation in the 1950s and formalized in his 1965 book, these games involve players selecting time-dependent controls to optimize payoffs, such as minimizing or maximizing terminal state values, subject to dynamics ẋ = f(x, u, v, t), where x is the state vector and u, v are the players' controls.[90] Isaacs' method of characteristics derives value functions via the Hamilton-Jacobi-Isaacs equation, ∂V/∂t + min_u max_v [∇V · f(x, u, v, t) + L(x, u, v, t)] = 0, for zero-sum cases, enabling synthesis of optimal strategies through retrograde integration from terminal conditions.[91] Examples include the homicidal chauffeur game, where a faster but less maneuverable pursuer (the car) chases a slower but more agile evader (the pedestrian), yielding barrier surfaces separating capture and escape regions; non-zero-sum variants appear in resource extraction or oligopoly models with continuous production adjustments.[92] The theory underpins modern applications in robotics, aerospace control, and finance, such as option pricing under adversarial market conditions.[90] Mean field games approximate Nash equilibria in large-population non-cooperative games by treating each agent's strategy as responding to the aggregate distribution of others' states and actions, rather than individual interactions, yielding a continuum limit as the number of players N → ∞.
Formulated by Jean-Michel Lasry and Pierre-Louis Lions starting in 2006, these models couple a Hamilton-Jacobi-Bellman equation for individual optimization, −∂_t u − ν Δu + H(x, ∇u) = f(x, m), with a Fokker-Planck equation tracking the mean-field distribution m, ∂_t m − ν Δm − div(m ∇_p H(x, ∇u)) = 0, where ν is the noise intensity and H is the Hamiltonian.[93] This coupled forward-backward system replaces computationally intractable N-player games, with consistency ensured by fixed-point arguments on measures; stochastic variants incorporate idiosyncratic noise for diffusion approximations.[94] Early applications modeled crowd dynamics, such as pedestrian flows minimizing travel time amid congestion effects from the empirical density, or financial models of herd behavior in portfolio choice.[93] Extensions handle common noise, heterogeneous agents, or master equations for sensitivity analysis, influencing epidemiology for disease spread under vaccination incentives and energy markets for storage decisions.[95] The approach assumes myopic agents with rational expectations over the mean field, validated empirically in limits by law of large numbers, though critiques note potential coordination failures absent in finite games.[96]
Formal Representations
Normal Form Representation
The normal form, also known as the strategic form, represents a game in which a finite set of players simultaneously select strategies from their respective strategy sets, with payoffs determined solely by the resulting strategy profile.[18] This representation assumes complete information, where each player's strategy set and payoff function are known to all, and no sequential moves or imperfect information are explicitly modeled, though such elements from extensive-form games can be incorporated via behavioral strategies.[97] Introduced by John von Neumann in his 1928 paper on the theory of games of strategy, the normal form reduces complex games to a canonical structure suitable for analyzing equilibria under simultaneous choice.[30] Formally, a finite n-player normal-form game Γ is defined as a tuple Γ = (N, (S_i)_{i∈N}, (u_i)_{i∈N}), where N = {1, ..., n} is the set of players, S_i is the finite strategy set for player i (with strategies often pure actions in simple cases), and u_i: ∏_{j∈N} S_j → ℝ is player i's payoff (or utility) function assigning a real-valued payoff to each strategy profile s = (s_1, ..., s_n).[98][18] Mixed strategies extend this by allowing players to randomize over pure strategies, represented by probability distributions σ_i over S_i, with expected payoffs E[u_i(σ)] computed via linearity.[99] Payoffs reflect ordinal or cardinal preferences, depending on whether von Neumann-Morgenstern utility assumptions hold for risk attitudes.[97] For two-player games, the normal form is compactly depicted as a bimatrix, with rows indexing player 1's strategies, columns player 2's, and cells containing payoff pairs (u_1(s_i, s_j), u_2(s_i, s_j)).[100] A canonical example is the Prisoner's Dilemma, where two suspects choose to cooperate (C) or defect (D) simultaneously, with payoffs structured such that mutual cooperation yields moderate rewards (a payoff of 2 each), defecting against a cooperator yields the highest individual payoff (3, leaving the cooperator with 0), and mutual defection results in poor outcomes (1 each), illustrating incentives for non-cooperative equilibria.[99]

| Player 1 \ Player 2 | C | D |
|---|---|---|
| C | (2, 2) | (0, 3) |
| D | (3, 0) | (1, 1) |
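The equilibrium structure of a bimatrix like this can be checked mechanically; a minimal sketch, treating the payoff entries as utilities to be maximized:

```python
# Enumerating the pure-strategy Nash equilibria of the Prisoner's
# Dilemma bimatrix (entries are utilities; higher is better).

STRATS = ("C", "D")
PAYOFFS = {  # (player 1's row, player 2's column) -> (u_1, u_2)
    ("C", "C"): (2, 2), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (1, 1),
}

def pure_nash():
    equilibria = []
    for s1 in STRATS:
        for s2 in STRATS:
            u1, u2 = PAYOFFS[(s1, s2)]
            # neither player can gain by unilaterally deviating
            best1 = all(PAYOFFS[(a, s2)][0] <= u1 for a in STRATS)
            best2 = all(PAYOFFS[(s1, a)][1] <= u2 for a in STRATS)
            if best1 and best2:
                equilibria.append((s1, s2))
    return equilibria

print(pure_nash())  # prints [('D', 'D')]: mutual defection is the unique pure equilibrium
```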
Extensive Form Representation
The extensive form representation depicts games as directed trees, explicitly modeling the sequential nature of players' decisions, the information available at each decision point, and the resulting payoffs. This structure was first formalized by John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior, where they defined games in extensive form using a set-theoretic approach involving sequences of moves by players or chance.[102][103] Unlike the normal form, which abstracts away timing and compresses all possibilities into simultaneous choices, the extensive form preserves the chronological order of actions, enabling analysis of dynamic strategies and credibility of threats or promises.[18] A standard extensive-form game consists of a finite tree with a root node representing the initial state, non-terminal nodes partitioned into decision nodes for players and chance nodes for random events, and terminal nodes assigning payoff vectors to each player. Each decision node is labeled with the acting player, and branches from nodes represent possible actions, leading to successor nodes. Payoffs are specified only at terminal nodes, reflecting outcomes after all moves. For games with perfect information, every decision node is uniquely reachable based on prior history, allowing players to observe all previous actions.[104][105] Imperfect information is incorporated via information sets, which partition a player's decision nodes into groups where the player cannot distinguish between nodes within the same set due to unobserved prior moves. This concept was rigorously developed by Harold W. Kuhn in his 1953 contributions to extensive games, emphasizing how information partitions affect strategy formulation. Nodes grouped in the same information set must offer the acting player identical sets of available actions, so that the player's choice is well defined despite the uncertainty.
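Backward induction over a perfect-information tree of this kind is straightforward to sketch. The example below uses a hypothetical four-move centipede-style game (payoff values invented), with internal nodes written as (player, actions) pairs and terminal nodes as payoff tuples:

```python
# Backward induction on a small perfect-information game tree.
# Internal nodes are (player, {action: subtree}); terminal nodes are
# payoff tuples. Hypothetical centipede-style game: each mover may
# Take (ending the game) or Pass (growing the pot).

tree = (0, {
    "take": (1, 0),
    "pass": (1, {
        "take": (0, 2),
        "pass": (0, {
            "take": (3, 1),
            "pass": (1, {"take": (2, 4), "pass": (4, 3)}),
        }),
    }),
})

def solve(node):
    """Return (payoffs, path of play) chosen by backward induction."""
    if not isinstance(node[1], dict):  # terminal payoff tuple
        return node, []
    player, actions = node
    best = None
    for action, subtree in actions.items():
        payoffs, path = solve(subtree)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [action] + path)
    return best

print(solve(tree))  # the first mover takes immediately: ((1, 0), ['take'])
```

The result mirrors the subgame-perfect prediction for centipede-like games: unravelling from the last node leads the first mover to end the game at once.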
Chance moves can be modeled similarly, with probability distributions over branches.[106][104] Strategies in extensive-form games are defined as functions assigning actions to information sets: pure strategies specify a single action per set, while behavioral strategies assign probabilities to actions, suitable for imperfect information due to their equivalence to mixed strategies under perfect recall. The ultimatum game, for instance, illustrates a simple extensive form where a proposer offers a division of a pie, and a responder accepts or rejects, with payoffs terminating the tree accordingly. More complex examples, like the centipede game, extend this to multiple sequential choices, highlighting potential for backward induction in perfect-information settings. This representation facilitates solution methods such as subgame perfection, which refine equilibria by requiring credibility off the equilibrium path.[104][18] The extensive form's tree structure supports computational analysis and allows reduction to normal form via strategy enumeration, though exponential growth in node depth limits practicality for large games; von Neumann and Morgenstern noted this leads to massive matrices for strategic-form equivalents. Despite such scalability issues, it remains foundational for modeling real-world sequential interactions, from bargaining to repeated encounters, by capturing causal sequences and informational asymmetries explicitly.[107][18]
Characteristic Function Form
In cooperative game theory, the characteristic function form models games where players can form binding coalitions and utility is transferable among coalition members. A game in this form is denoted by the pair (N, v), where N is a finite set of players and v: 2^N → ℝ is the characteristic function that assigns to each coalition S ⊆ N its worth v(S), defined as the maximum total payoff the coalition can guarantee itself regardless of the actions of players outside S.[108][41] This formulation assumes that coalitions enforce agreements and redistribute payoffs internally, focusing analysis on coalition stability and payoff allocation rather than individual strategies.[109] The characteristic function originates from the work of John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior, where it was introduced to handle multiperson games with coalitions. For a coalition S, v(S) is typically computed from an underlying non-cooperative game by allowing S to act as a single entity maximizing its minimum payoff against the complementary coalition N \ S, often under zero-sum assumptions in early formulations, though extensions apply to general-sum settings.[41] A key property is v(∅) = 0, reflecting that the empty coalition generates no value, and many analyses impose superadditivity, where v(S ∪ T) ≥ v(S) + v(T) for disjoint S, T ⊆ N, incentivizing grand coalition formation.[108] To derive v from a strategic-form game, one reduces the game by treating each coalition as a unitary player opposing its complement, selecting strategies that maximize the coalition's payoff in a max-min fashion; multiple strategic forms may yield the same v, emphasizing the abstraction's focus on coalitional power.[110] This form enables solution concepts like the core, which consists of payoff vectors where no coalition can improve by deviating, and the Shapley value, an axiomatic fair division of v(N).[111] Limitations include assumptions of perfect enforceability and transferable utility, which may not hold in all real-world scenarios,
prompting extensions like non-transferable utility games.[109]
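The Shapley value mentioned above can be computed directly by averaging marginal contributions over all player orderings. A minimal sketch for a hypothetical three-player airport-style cost game, where v(S) is the length of the longest runway needed by any member of S (players need lengths 1, 2, and 3):

```python
from itertools import permutations

# Shapley value by direct averaging of marginal contributions.
# Hypothetical airport-style game: v(S) = longest runway needed in S.

PLAYERS = (1, 2, 3)

def v(coalition):
    return max(coalition) if coalition else 0

def shapley(players, v):
    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        seen = ()
        for i in order:
            phi[i] += v(seen + (i,)) - v(seen)  # marginal contribution of i
            seen += (i,)
    return {i: total / len(orders) for i, total in phi.items()}

# shares come out to 1/3, 5/6, and 11/6, summing to v(N) = 3:
# each runway segment's cost is split among the players who use it
print(shapley(PLAYERS, v))
```

This brute-force average is exponential in the number of players; practical implementations use the weighted-coalition formula or sampling, but the orderings version makes the fairness axioms easiest to see.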
Alternative Representations
In addition to the standard normal, extensive, and characteristic function forms, game theory employs alternative representations to address limitations such as externalities in cooperative settings or scalability in large multiplayer noncooperative games. These forms prioritize compactness, incorporation of dependencies like player partitions or local interactions, and applicability to specific domains like networks or economies with spillovers.[112][113] The partition function form extends the characteristic function form for cooperative games by accounting for externalities, where a coalition's payoff depends not only on its members but also on how the remaining players organize into coalitions. Formally, for a player set N, a partition function w maps each partition π ∈ Π(N) (the set of all partitions of N) and each coalition S ∈ π to a value w(S; π), representing the worth of S given the overall partition π. This contrasts with the characteristic function v(S), which assumes independence from external organization. Introduced by Thrall in the mid-20th century, this form is essential for modeling scenarios like international trade alliances or oligopolies where competitors' groupings affect profits.[114][115] Solution concepts, such as efficient values or stable sets, adapt Shapley value axioms to this structure, ensuring properties like efficiency and dummy independence.[116][117] Graphical games provide a compact alternative to the full normal form for multiplayer noncooperative games, particularly those with local dependencies. A graphical game specifies an undirected graph G = (V, E) with vertices as players and edges indicating interactions, plus a local payoff function u_i for each player i depending only on i's action and those of its neighbors N(i).
Introduced by Kearns, Littman, and Singh in 2001, this representation exploits sparsity—for instance, in social networks or spatial games—reducing the exponential cost of specifying full payoff matrices for n players with action sets of size m, from O(m^n) entries to O(n · m^{d+1}), where d is the maximum degree of the graph. Nash equilibrium computation benefits from this structure, often via local approximations or message-passing algorithms, though exact solutions remain PPAD-complete for general graphs.[113][118] This form has applications in algorithmic game theory for large-scale systems like auctions or epidemic models.
Solution Concepts
Nash Equilibrium and Refinements
The Nash equilibrium is a solution concept for non-cooperative games where no player benefits from unilaterally altering their strategy while others maintain theirs. Introduced by John Nash in his January 1950 paper "Equilibrium Points in n-Person Games," published in the Proceedings of the National Academy of Sciences, it generalizes equilibrium beyond zero-sum games by focusing on mutual best responses in mixed or pure strategies.[33] Nash's 1951 article "Non-Cooperative Games," based on his doctoral thesis, formalized the existence proof: every finite game with a finite number of players and pure strategies possesses at least one Nash equilibrium in mixed strategies, established via a fixed-point argument on the best-response correspondence.[32][119] In the Cournot duopoly model of quantity competition, firms choose output levels simultaneously; the Nash equilibrium occurs where each firm's output maximizes profit given the other's, typically yielding higher total output and lower prices than a monopoly but less efficient than perfect competition.[120] Pure-strategy Nash equilibria exist in games like the Prisoner's Dilemma, where mutual defection is stable despite collective incentives for cooperation, illustrating how individual rationality can lead to suboptimal outcomes.[121] Multiple equilibria often arise, as in coordination games (e.g., Battle of the Sexes), where players prefer matching choices but differ in preferences, complicating prediction without additional criteria.[122] Refinements address Nash equilibria's limitations, such as supporting implausible strategies in sequential games via non-credible threats.
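The Cournot equilibrium just described can be found numerically by iterating best responses; a minimal sketch with hypothetical linear-demand parameters (a = 10, b = 1, c = 1):

```python
# Best-response iteration for a symmetric Cournot duopoly with inverse
# demand P(Q) = a - b*Q and constant marginal cost c.
# Parameter values are hypothetical, chosen for illustration.
a, b, c, n = 10.0, 1.0, 1.0, 2

def best_response(q_others):
    # argmax over q of q * (a - b*(q + q_others) - c), truncated at zero
    return max(0.0, (a - c - b * q_others) / (2 * b))

q = [1.0, 1.0]  # arbitrary starting outputs
for _ in range(100):
    q = [best_response(sum(q) - q[i]) for i in range(n)]

# converges to the Nash equilibrium q_i = (a - c)/((n + 1)*b) = 3,
# giving market price P = (a + n*c)/(n + 1) = 4
print([round(qi, 6) for qi in q])
```

For the linear duopoly the best-response map is a contraction, so the iteration converges from any starting point; this mirrors the fixed-point logic behind Nash's existence proof.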
Subgame perfect equilibrium (SPE), proposed by Reinhard Selten in 1965, refines Nash by requiring the strategy profile to induce a Nash equilibrium in every subgame, enforced via backward induction to eliminate off-path deviations.[123] In the centipede game, where alternating players decide to continue or terminate for escalating payoffs, the unique SPE prescribes immediate termination, though empirical play often extends longer, highlighting tensions with observed behavior.[124] Trembling-hand perfect equilibrium, introduced by Selten in 1975, further refines by considering Nash equilibria robust to small perturbations in strategies, modeling accidental "trembles" in implementation; every trembling-hand perfect equilibrium of an extensive-form game is also a sequential equilibrium, and the refinement ensures strategies remain optimal even under minor errors.[38] For games with incomplete information, perfect Bayesian equilibrium (PBE) extends SPE by incorporating consistent Bayesian beliefs updated via Bayes' rule where possible, combined with sequential rationality, as analyzed in signaling models like the beer-quiche game.[125] These refinements reduce the set of Nash equilibria—the SPEs form a subset of the Nash equilibria, and trembling-hand perfection narrows further toward strategically stable outcomes—but may not yield uniqueness, prompting additional selection mechanisms like evolutionary stability or risk dominance.[126] Empirical tests, such as in laboratory ultimatum games, show players often deviate from SPE predictions, suggesting bounded rationality influences real-world approximations.[127]
Cooperative Solution Concepts
Cooperative solution concepts in game theory analyze outcomes under the assumption that players can form binding coalitions and enforce agreements on payoff divisions, typically within transferable utility (TU) games represented in characteristic function form, where the value v(S) denotes the maximum payoff coalition S can guarantee independently of the players outside S. These concepts seek allocations—payoff vectors x with ∑_{i∈N} x_i = v(N) and x_i ≥ v({i}) for individual rationality—that are stable against deviations or deemed fair by axiomatic criteria, contrasting with noncooperative approaches by prioritizing coalition incentives over individual strategies. Empirical applications, such as cost-sharing in networks or profit division in firms, reveal that while these concepts predict stability in convex games (where v(S ∪ T) + v(S ∩ T) ≥ v(S) + v(T) for all S, T), many real-world TU games exhibit empty cores, underscoring the limits of enforceability absent external mechanisms.[128] The core defines stability as the set of imputations x satisfying ∑_{i∈S} x_i ≥ v(S) for all coalitions S, ensuring no subgroup can improve collectively by secession. Introduced by Gillies in 1959 as a refinement of earlier stability notions, the core is nonempty in balanced games (where no collection of coalitions exceeds the grand coalition's capacity) but often empty otherwise, as in the three-player majority game with v(S) = 1 for |S| ≥ 2 and v(S) = 0 otherwise, where competitive pressures erode joint surplus. Computational evidence from market games shows cores shrinking with competition, aligning with causal observations of breakdown in weakly superadditive environments.[129][128] Von Neumann-Morgenstern stable sets, proposed in 1944, generalize the core by identifying subsets V of imputations that are internally stable—no imputation in V is dominated by another in V via a blocking coalition—and externally stable—every imputation outside V is so dominated. An imputation y dominates x via coalition S if y_i > x_i for every i ∈ S, with y feasible for S (∑_{i∈S} y_i ≤ v(S)).
Unlike the core, stable sets may be multiple or absent; for simple voting games, they often coincide with minimal winning coalitions' imputations, but farsighted extensions reveal fragility to indirect dominance chains, as players anticipate multi-stage deviations.[130][131] The Shapley value, developed by Lloyd Shapley in 1953, yields a unique imputation φ(v) with φ_i(v) = (1/|N|!) ∑_π [v(P_i^π ∪ {i}) − v(P_i^π)], averaging each player's marginal contribution across all coalition formation orders π, where P_i^π is the set of players preceding i in π. It satisfies efficiency (∑_{i∈N} φ_i(v) = v(N)), symmetry (equal contributors get equal shares), null player (a player with zero marginal contributions gets zero), and additivity (linear games sum values), providing a fairness benchmark robust to order uncertainty. In airport cost games, it allocates proportionally to runway needs, matching empirical fairness perceptions in surveys, though critics note its potential instability in nonconvex games, where it may fall outside the core.[129][132] The nucleolus, introduced by Schmeidler in 1969, refines the core by lexicographically minimizing the vector of maximum coalition excesses (dissatisfactions), prioritizing the worst-off coalition iteratively. As a single-valued selector, it always exists and lies in the core when nonempty, favoring egalitarian stability; in glove market games (left/right hands as complements), it equalizes suppliers despite asymmetries, unlike the Shapley value's contribution weighting. Stability analyses confirm its selection in 70-80% of experimental TU games with nonempty cores, though computational complexity limits scalability beyond small n.[133][128] The Nash bargaining solution, axiomatized by John Nash in 1950 for two-player problems, selects the feasible payoff pair (u_1, u_2) maximizing the product (u_1 − d_1)(u_2 − d_2) over the disagreement point d = (d_1, d_2), satisfying Pareto optimality, symmetry, scale invariance, and independence of irrelevant alternatives.
Extended to TU n-person games via symmetric bargaining over the core or as a canonical representation, it converges to egalitarian splits in symmetric disputes but yields player-specific outcomes under asymmetry, as in ultimatum experiments where offers near an even split prevail due to rejection threats. Causal bargaining models validate its predictive power in repeated interactions, though violations arise from incomplete information, highlighting enforceability dependencies.[134][135]
Equilibrium Selection and Dynamics
The equilibrium selection problem arises in non-cooperative games where multiple Nash equilibria exist, requiring criteria to predict which outcome rational players will coordinate on.[136] This challenge is prominent in coordination games, such as the stag hunt, where payoffs incentivize both efficient but risky cooperation and safer but inefficient defection.[137] A foundational rationalist approach is the theory of John Harsanyi and Reinhard Selten, outlined in their 1988 book, which refines Nash equilibria into a unique "solution" via iterative procedures emphasizing payoff dominance (higher joint payoffs) and risk dominance (resilience to belief perturbations).[138] Their tracing procedure models players' initial inclinations and gradual adjustments under complete information, prioritizing equilibria that are uniformly perfect—robust to small trembles in strategies.[139] For 2x2 games, risk-dominant equilibria often prevail when strategic uncertainty is high, as quantified by the product of deviation losses; for instance, in matching pennies variants, the equilibrium minimizing maximum regret is selected.[140] Evolutionary and stochastic dynamics provide alternative selection mechanisms by simulating long-run outcomes under imitation, mutation, or noise. 
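The evolutionary dynamics just mentioned can be illustrated with a discrete-time (Euler) simulation of the replicator equation for the hawk-dove game; the parameter values are hypothetical, with resource value V below injury cost C so that the mixed ESS is interior:

```python
# Euler-discretized replicator dynamic for the hawk-dove game.
# Hypothetical parameters: resource value V and injury cost C with
# V < C, so the mixed ESS has hawk frequency x* = V/C.
V, C = 2.0, 4.0

def step(x, dt=0.01):
    """One Euler step of dx/dt = x*(1 - x)*(f_H(x) - f_D(x))."""
    fitness_gap = (V - x * C) / 2  # f_H - f_D under hawk-dove payoffs
    return x + dt * x * (1 - x) * fitness_gap

x = 0.9  # initial hawk frequency
for _ in range(10_000):
    x = step(x)

print(round(x, 4))  # prints 0.5, the mixed ESS x* = V/C
```

From any interior starting frequency the hawk share relaxes to V/C = 0.5, showing how the dynamic, rather than a rationality postulate, selects the stable mixed equilibrium.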
In evolutionary game theory, the replicator equation governs strategy frequency changes proportional to relative fitness: ẋ_i = x_i(u_i(x) − ū(x)), where x_i is the proportion of strategy i, u_i(x) its payoff, and ū(x) the population average; this dynamic converges to Nash equilibria, with asymptotically stable ones (like evolutionarily stable strategies) selected in large populations.[141] Stochastic perturbations, as in Young (1993), favor equilibria with larger attraction basins under rare mutations, explaining persistence of risk-dominant outcomes in coordination despite payoff inferiority.[142] Learning dynamics in repeated play, such as fictitious play—where players best-respond to empirical frequency distributions—also refine equilibria; convergence to Nash in zero-sum games was proven by Robinson (1951), but in general finite games, it may cycle or select via perturbations.[143] Empirical studies validate these: in laboratory coordination experiments, risk-dominant equilibria emerge under uncertainty, while payoff-dominant ones require focal points or communication.[137] These mechanisms underscore that selection depends on informational and perturbation structures, with no universal rule absent context-specific refinements.[144]
Applications
Economics and Market Analysis
Game theory provides a framework for analyzing strategic interactions in economic markets, where firms' decisions on output, pricing, and entry depend on rivals' anticipated actions. The field's application to economics originated with John von Neumann and Oskar Morgenstern's 1944 book Theory of Games and Economic Behavior, which formalized zero-sum games and expected utility to model economic decision-making under uncertainty.[4] This work laid the groundwork for treating markets as non-cooperative games, shifting from classical price-taking assumptions to interdependent strategies, particularly in oligopolistic structures where few firms dominate.[145] In oligopoly models, the Cournot framework, originally proposed by Antoine Augustin Cournot in 1838, is reinterpreted through Nash equilibrium, where firms simultaneously choose quantities assuming rivals' outputs are fixed. Each firm maximizes profit given the residual demand, leading to a symmetric equilibrium where total output exceeds monopoly levels but falls short of perfect competition, with prices above marginal cost.[146] For identical firms with constant marginal costs c and inverse demand P(Q) = a - bQ, the Nash equilibrium quantities are q_i = (a - c)/((n + 1)b) for n firms, yielding market price P = (a + nc)/(n + 1).[147] In contrast, the Bertrand model posits price competition for homogeneous goods, resulting in a Nash equilibrium where prices equal marginal costs even with two firms, as undercutting incentives drive profits to zero unless capacities or differentiation intervene.[148] Auction design leverages game theory to maximize revenue and efficiency, with the Vickrey auction—introduced by William Vickrey in 1961—featuring sealed second-price bidding where the highest bidder wins but pays the second-highest bid, incentivizing truthful revelation of valuations as a dominant strategy.[149] This mechanism ensures incentive compatibility, allocating goods to the highest-valuing bidder while mitigating
winner's curse risks in common-value settings. Principal-agent problems, such as moral hazard in employment contracts, are modeled as sequential games with asymmetric information, where principals design incentive-compatible contracts to align agents' efforts with firm value, often using performance pay to mitigate shirking.[150] Empirical applications include regulatory analysis, where game-theoretic models predict firms' responses to antitrust policies, revealing potential for collusion in repeated interactions via trigger strategies.[151]
Biology and Evolutionary Processes
Evolutionary game theory extends classical game theory to model interactions in biological populations, where strategies represent heritable traits or behaviors, and payoffs correspond to reproductive fitness rather than utility. In this framework, natural selection drives the dynamics of strategy frequencies, with successful strategies increasing in prevalence proportional to their relative fitness advantages over alternatives. Unlike traditional game theory assuming rational agents, evolutionary models treat organisms as pursuing implicit strategies shaped by selection pressures, often leading to stable population equilibria.[152] The concept of an evolutionarily stable strategy (ESS) formalizes stability in such systems: a strategy is ESS if, when nearly fixed in the population, it yields higher fitness against itself than any rare mutant strategy, or equal fitness but superior against the mutant in pairwise contests. Introduced by John Maynard Smith and George Price in 1973, this refinement of Nash equilibrium accounts for evolutionary invasion barriers, preventing mutants from displacing the resident strategy even at low frequencies. Maynard Smith's 1982 book Evolution and the Theory of Games synthesized these ideas, applying them to phenotypic evolution where fitness depends on frequency-dependent selection.[8][84] Replicator dynamics provide a mathematical backbone for these models, describing how strategy proportions evolve via differential equations: the growth rate of a strategy's frequency equals its fitness minus the population average fitness. Formulated by Peter Taylor and Leo Jonker in 1978 as a continuous-time approximation of imitation and selection processes, replicator equations predict convergence to equilibria where no strategy has a fitness advantage, often aligning with ESS.
These dynamics reveal phenomena like cycles in polymorphic equilibria, as in the hawk-dove game modeling animal aggression: when the resource value V is less than the injury cost C, the stable mix has hawks at frequency V/C, so low-value contests yield dove frequencies above 0.5.[141][153] Applications span conflict resolution and cooperation. In parental investment and sex ratio evolution, game-theoretic models explain Fisher's 1:1 sex ratio as an ESS under frequency-dependent fitness, where parents overproducing the commoner sex are at a disadvantage; empirical support includes deviations in haplodiploid insects like bees, where sisters share 75% relatedness, favoring female-biased ratios. For cooperation, iterated prisoner's dilemma simulations show tit-for-tat as robust against exploitation in noisy environments, paralleling microbial quorum sensing or symbiosis where reciprocal altruism evolves via direct fitness benefits, though kin selection via Hamilton's rule often underpins apparent altruism more causally than pure reciprocity. Empirical validation includes lab evolution experiments with bacteria, where cooperation-defection dynamics match replicator predictions, and field observations of bird alarm calls aligning with ESS thresholds for vigilance costs versus predation risk.[154][155][8] Critics note limitations in assuming infinite populations and weak selection, yet EGT's predictive power persists in microbial evolution and cancer dynamics, where mutant invasions mirror ESS instability. Stochastic extensions and spatial structure refine models, incorporating drift and local interactions to explain persistence of cooperation despite defection incentives. Overall, EGT underscores how frequency dependence enforces realism in evolutionary predictions, distinguishing viable strategies from unstable ones via causal selection mechanisms.[35][156]
Political Science and Conflict Resolution
Game theory provides analytical tools for modeling strategic interactions among political actors, such as voters, legislators, and executives, where outcomes depend on interdependent choices rather than isolated decisions. In legislative settings, it elucidates coalition formation and bargaining over policy, as formalized in models like the Baron-Ferejohn framework, where parties alternate proposals and acceptances under time constraints to divide resources.[157] These non-cooperative games highlight how veto power and discounting of future payoffs influence equilibrium outcomes, predicting inefficiencies from incomplete information about rivals' reservation values. In international relations, game theory frames conflict as a bargaining process where states negotiate over disputed resources, with war arising from failures to credibly commit or reveal private information about military capabilities. Robert Powell's bargaining model posits that rational actors resort to force when the costs of fighting are low relative to gains from bluffing, explaining prolonged disputes like territorial claims. Empirical applications include arms races, often represented as iterated Prisoner's Dilemma games, where mutual armament dominates despite collective incentives for disarmament; for instance, U.S.-Soviet nuclear buildup from 1945 to 1991 escalated due to fears of defection, costing trillions in resources.[158][159] Conflict resolution leverages game-theoretic insights into credible threats and commitments, as advanced by Thomas Schelling in The Strategy of Conflict (1960), which analyzes mixed-motive scenarios where parties seek joint gains amid rivalry. Schelling's focal points and precommitment strategies, such as burning bridges to eliminate retreat options, facilitate de-escalation by making concessions costly, influencing doctrines like mutually assured destruction.[160][161] The 1962 Cuban Missile Crisis exemplifies a Chicken game variant, where U.S.
naval quarantine and Soviet missile withdrawal averted nuclear exchange through brinkmanship; dynamic extensions predict up to 60% war probability absent signaling, underscoring the role of reputation in repeated play.[162] Experimental validations in political contexts reveal deviations from pure rationality, yet reinforce core predictions; for example, laboratory simulations of crisis bargaining show subjects achieving Pareto-superior outcomes via communication, mirroring real-world diplomatic channels that mitigate information asymmetries.[163] Critics note that assuming unitary rational states overlooks domestic politics, but refinements incorporating audience costs, where leaders risk credibility by backing down, enhance predictive power for democratic signaling in conflicts.[164] Overall, these models inform policy by quantifying trade-offs in deterrence and negotiation, and empirical tests against historical data, such as post-WWII alliances, confirm that enforceable agreements reduce defection risks more effectively than unilateral restraint.[165]
Military Strategy and Defense
Game theory's application to military strategy emerged prominently during World War II and the early Cold War, with John von Neumann's minimax theorem providing a foundational tool for zero-sum conflicts where one side's gain is the other's loss.[3] The theorem, proved by von Neumann in 1928, guarantees an optimal mixed strategy that minimizes maximum expected losses against a rational adversary, influencing decisions such as bomber route planning to evade anti-aircraft fire during wartime operations.[3][166] This approach modeled adversarial engagements as games, enabling commanders to anticipate enemy responses and select strategies robust to worst-case scenarios.[167] The RAND Corporation, established in 1948, institutionalized game theory in U.S. defense planning amid the Cold War, applying it to nuclear strategy, target selection, and resource allocation.[168] RAND researchers used game-theoretic models to analyze strategic air warfare, missile scheduling, and deterrence dynamics, contributing to doctrines like mutually assured destruction (MAD), a concept rooted in von Neumann's ideas where mutual nuclear retaliation ensures no rational actor initiates full-scale war.[168][169] Schelling's work at RAND extended these models to bargaining and credible threats, emphasizing commitment devices in deterrence to prevent escalation.[170] The Cuban Missile Crisis of October 1962 exemplifies game theory's retrospective analysis of high-stakes confrontations, often framed as a "game of chicken" where swerving signals weakness but collision risks catastrophe.[171] Analyses reveal U.S.
quarantine and Soviet withdrawal as equilibrium outcomes under incomplete information, with Kennedy's blockade creating a focal point for de-escalation while preserving face.[172][173] Such models highlight brinkmanship's role, where leaders signal resolve to shift payoffs, though empirical success depends on shared rationality assumptions not always verified in crises.[174] Beyond nuclear contexts, game theory informs tactical decisions in non-zero-sum settings, such as counterinsurgency or cyber defense, where repeated interactions and alliances complicate pure minimax solutions.[175] U.S. Army research integrates it with AI for resource allocation against adaptive threats, as in optimizing deployments against time-critical targets.[176][177] Limitations persist, as real-world actors deviate from rational predictions due to incomplete information or miscalculation, underscoring the need for hybrid models incorporating behavioral factors.[178]
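The minimax reasoning described in this section can be made concrete: for any finite zero-sum matrix game, the optimal mixed strategy guaranteed by von Neumann's theorem is computable by linear programming. The sketch below is a generic illustration using a matching-pennies payoff matrix (an assumption for the example, not a model of any particular military scenario), solving the row player's maximin LP with SciPy:

```python
# Optimal mixed strategy for a two-player zero-sum game via the standard
# maximin linear program (illustrative sketch; matching-pennies payoffs).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])   # payoffs to the row player
m, n = A.shape

# Decision variables: x_1..x_m (row mixture) and v (game value).
# linprog minimizes, so minimize -v to maximize the guaranteed value v.
c = np.zeros(m + 1)
c[-1] = -1.0
# For every column j: v <= sum_i x_i * A[i, j]  (rewritten as <= 0 form)
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)
A_eq = np.array([[1.0] * m + [0.0]])  # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, 1)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
# For matching pennies the optimal mixture is (0.5, 0.5) with game value 0.
print(x, v)
```

Randomizing equally makes the row player's expected payoff independent of the opponent's choice, which is exactly the "robust to worst-case responses" property the minimax theorem formalizes.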
Business and Management
Game theory provides frameworks for analyzing strategic interactions in business environments where outcomes depend on the actions of multiple interdependent parties, such as competitors, suppliers, or internal stakeholders. In oligopolistic markets, firms use non-cooperative game models like the Cournot duopoly to predict rivals' output decisions, leading to a Nash equilibrium where no firm benefits from unilaterally changing its quantity given others' strategies; for instance, in the Cournot model, two firms producing homogeneous goods set outputs such that marginal revenue equals marginal cost adjusted for rivals' anticipated production.[179] Similarly, Bertrand competition models price-setting under homogeneous goods, often resulting in marginal cost pricing as the equilibrium, though real-world differentiation or capacity constraints modify this to sustain higher prices.[180] Market entry decisions employ sequential game models, such as Stackelberg leadership, where first-mover tactics allow incumbents to commit to high output levels, deterring entrants and securing advantages like brand loyalty or economies of scale.[181] The prisoner's dilemma illustrates challenges in business competition, such as advertising expenditures or price wars, where individual firms gain short-term advantages by aggressive actions like undercutting prices, but collective restraint would yield higher joint profits; for example, airlines might all benefit from higher fares if coordinated, yet the temptation to discount erodes margins for all, and in the cola industry, Coca-Cola and PepsiCo tacitly avoid mutual destruction by signaling cooperation in repeated pricing interactions rather than defecting to low-price equilibria.[182][183] In repeated games, mechanisms like tit-for-tat strategies can sustain cooperation, as seen in industries where firms monitor and retaliate against deviations, fostering implicit collusion without explicit agreements; auction theory further applies to 
business through optimal bid strategies in procurement or asset sales, balancing aggressive bidding with avoiding winner's curse.[184] Bargaining theory applies to negotiations in mergers and acquisitions, where parties divide surplus based on relative bargaining power, outside options, and information asymmetry; models like Rubinstein bargaining predict outcomes splitting gains according to discount rates and patience levels, incorporating credible threats and commitments, such as threatening to switch partners if alternatives exist, and signaling strength through selective information leakage to improve deal terms.[185][186] In management, principal-agent models address conflicts where owners (principals) design incentives to align managers' (agents) actions with firm value maximization, using contracts with performance-based pay to mitigate moral hazard and adverse selection.[187] Empirical applications include executive compensation structures tying bonuses to stock performance or earnings targets to counteract agency costs.[188]
Other Disciplines (Epidemiology, Philosophy)
In epidemiology, game theory analyzes strategic interactions in disease control, particularly vaccination and behavioral responses to outbreaks. Vaccination decisions often form a public goods game, where individuals weigh personal risks and costs against collective herd immunity benefits, leading to free-rider incentives that can undermine coverage. A 2004 analysis demonstrated that game-theoretic models predict suboptimal equilibria in vaccination uptake, as rational self-interest favors delay or avoidance when others vaccinate, mirroring the prisoner's dilemma structure.[189] Imperfect vaccines introduce multiple Nash equilibria, with low-vaccination states stable under certain parameters, explaining persistent outbreaks despite available interventions.[190] Similarly, social distancing during epidemics is modeled as a differential game, where agents optimize self-protection against infection risks while accounting for others' compliance, revealing that voluntary measures may falter without coordination mechanisms.[191] Evolutionary game theory further extends this to population-level dynamics, simulating how behavioral strategies evolve under selective pressures from disease transmission.[192] In philosophy, game theory elucidates interdependent rational choice, distinguishing it from solitary decision theory by emphasizing how agents' outcomes hinge on mutual strategies. 
It probes foundational questions in ethics, such as reconciling individual rationality with moral cooperation, exemplified by the prisoner's dilemma where defection maximizes personal gain but collective defection yields worse results, challenging utilitarian prescriptions.[193] Philosophers leverage these models to assess whether morality emerges from repeated interactions or requires external enforcement, critiquing pure self-interest as insufficient for social order.[193] In decision theory's intersection with philosophy, game-theoretic tools formalize backward induction in sequential games to test assumptions of perfect rationality, revealing paradoxes like those in ultimatum bargaining that question empirical alignment with predicted equilibria.[9] Applications extend to epistemology and social philosophy, where concepts like common knowledge underpin analyses of trust and convention formation, as in coordination games modeling language or norm adherence.[193] These frameworks, originating from von Neumann and Morgenstern's 1944 axiomatization, inform debates on whether strategic reasoning can ground ethical norms without invoking deontological priors.[194]
Experimental and Behavioral Insights
Laboratory Experiments
Laboratory experiments in game theory test theoretical predictions under controlled conditions, typically using human subjects incentivized by monetary payoffs scaled to game outcomes. These studies, pioneered in the mid-20th century and expanded since the 1980s, often employ student participants in sessions lasting 30-90 minutes, with stakes equivalent to several dollars per decision. Results frequently diverge from strict rational choice models, revealing patterns of fairness, reciprocity, and learning that refine equilibrium concepts.[195][196] In the one-shot Prisoner's Dilemma, mutual defection is the unique Nash equilibrium, yet empirical cooperation rates range from 40% to 60%, with subjects forgoing personal gain to avoid mutual loss or promote joint benefit. Repeated iterations show initial cooperation declining due to perceived exploitation, but strategies like tit-for-tat sustain higher cooperation in some populations. Gender differences appear minimal, though teams may defect less than individuals in certain setups.[197][198][199] The Ultimatum Game, where a proposer divides a stake and the responder accepts or rejects (yielding zero for both upon rejection), predicts minimal offers accepted under subgame perfection, as responders should take any positive amount. Experiments consistently show proposers offering 40-50% and responders rejecting unfair splits below 20-30%, enforcing equity at the cost of efficiency; this holds across cultures but varies slightly, with lower rejection in some non-Western samples. Such rejections challenge pure self-interest, suggesting intrinsic punishment of inequity or reputation concerns even in anonymous one-shot play.[200][201][202] Public goods games simulate voluntary contributions to a shared resource, where free-riding dominates rationally, yet initial contributions average 40-60% of endowments in one-shot anonymous settings, declining over rounds without intervention. 
Introducing costly punishment opportunities boosts sustained contributions to 50-95%, as peers sanction defectors, aligning outcomes closer to efficient provision despite theoretical instability. These findings underscore conditional cooperation and norm enforcement as causal drivers beyond static equilibria.[203][204][205] Tests of Nash equilibria in coordination and entry games reveal slow initial convergence, with subjects exhibiting level-k thinking or best-response dynamics before approximating predictions after 10-20 rounds. In unprofitable games, behavior adheres more to maxmin strategies than Nash when equilibria yield losses. Overall, while learning supports equilibrium in repeated play, one-shot anomalies highlight bounded rationality limits, informing refinements like quantal response equilibria.[206][207][208]
Field Studies and Empirical Validation
Field studies in game theory examine strategic interactions in natural settings using observational data, administrative records, and natural experiments to test theoretical predictions against real-world outcomes. These analyses often employ structural estimation to infer payoffs and equilibria from market behaviors, revealing alignments with concepts like Nash equilibrium in high-stakes environments while highlighting deviations due to incomplete information or repeated play. Empirical game-theoretic analysis (EGTA) integrates historical data with simulations to approximate strategic forms and quantify equilibrium robustness, providing a bridge between abstract models and causal inference in complex systems.[209][210] Spectrum auctions by the U.S. Federal Communications Commission (FCC), initiated in 1994, offer a prominent validation through simultaneous multiple-round ascending (SMRA) formats derived from game-theoretic auction theory. These mechanisms, influenced by Vickrey-Clarke-Groves models, promoted efficient license allocation by revealing bidder values iteratively, with empirical assessments showing revenue exceeding $233 billion by 2023 and allocative efficiencies often above 95% in initial auctions like Auction 1, which raised $612 million for narrowband PCS licenses. Bidders' shading strategies aligned with theoretical incentives to avoid winner's curse, though larger auctions exhibited demand reduction and signals of tacit collusion, reducing efficiency to 80-90% in cases like Auction 41.[211][212][213] In oligopoly markets, field data from concentrated industries test non-cooperative models like Cournot quantity competition, where firms' output choices reflect conjectural variations on rivals' responses. Analyses of U.S. 
airline routes post-deregulation (1978 onward) demonstrate price coordination via repeated interactions, sustaining markups 20-30% above competitive levels through grim trigger strategies, consistent with folk theorem predictions for discounted infinite horizons but vulnerable to entry or demand shocks that provoke price wars. Cement and ready-mix concrete markets similarly show spatial differentiation enabling supra-competitive pricing, with structural estimates confirming Nash equilibria in quantities but frequent tacit collusion over static benchmarks.[179][214] Labor market bargaining provides empirical tests of dynamic models like Rubinstein's alternating-offer framework, applied to wage negotiations with outside options. Data from NBA free agency (1988-2010) indicate salaries incorporate player-specific alternatives and team budgets, yielding deals where cash-constrained teams extract 5-10% discounts, aligning with subgame perfect equilibria under complete information. Union-firm disputes in manufacturing sectors reveal strike probabilities matching mixed-strategy Nash outcomes, with durations averaging 40 days when impasse values are symmetric, though asymmetric information inflates inefficiencies beyond theoretical minima.[215][216]
Behavioral Deviations and Bounded Rationality
Bounded rationality refers to the cognitive limitations that prevent individuals from fully optimizing decisions in complex environments, as introduced by Herbert Simon in his 1957 work Models of Man, where agents "satisfice" by selecting satisfactory options rather than exhaustive maximization due to constraints in information, computation, and time.[217] In game-theoretic contexts, these bounds lead to systematic deviations from equilibrium predictions, as players struggle with iterative strategic reasoning required for concepts like Nash equilibria, often settling for rule-of-thumb strategies or limited foresight.[218] Laboratory experiments reveal pronounced behavioral deviations, particularly in bargaining games. In the ultimatum game, where one player proposes a division of a fixed sum and the other accepts or rejects (with rejection yielding zero for both), subgame perfect equilibrium predicts proposers offering the minimal positive amount and responders accepting any positive offer; however, empirical results show proposers typically offering 40-50% of the stake, with responders rejecting offers below 20-30%, incurring losses to enforce fairness norms.[219] These patterns persist across cultures and stake sizes, indicating intrinsic preferences for equity over pure self-interest, challenging the standard rational actor model.[220] Prospect theory, formulated by Daniel Kahneman and Amos Tversky in 1979, elucidates further deviations by modeling decisions relative to reference points, with loss aversion—where losses loom larger than equivalent gains—altering strategic choices in risky interactions.[221] For instance, in coordination games or auctions, players exhibit framing effects, overvaluing entitlements and rejecting trades that rational utility maximization would endorse, as losses from status quo deviations outweigh potential gains.[222] Heuristics and cognitive biases compound these issues in strategic decision-making, with agents relying on 
availability or anchoring cues that distort probability assessments and opponent modeling.[223] In repeated games like the prisoner's dilemma, boundedly rational players cooperate more frequently than one-shot defection equilibria suggest, often via simple reciprocal strategies, reflecting limited recursion in anticipating others' bounded cognition.[224] Models incorporating quantal response equilibria, which allow probabilistic errors scaling with choice stakes, better fit data by treating deviations as noisy best responses rather than irrationality per se.[225] Such empirical insights underscore that while classical game theory excels in idealized settings, real-world applications demand adjustments for human cognitive architecture to enhance predictive accuracy.
Criticisms and Limitations
Challenges to Rationality Assumptions
Classical game theory posits that agents are fully rational, maximizing expected utility under complete information and common knowledge of rationality, leading to predictions like Nash equilibria.[1] This assumption faces challenges from bounded rationality, where cognitive limitations prevent perfect optimization. Herbert Simon's 1957 concept of bounded rationality argues that decision-makers operate under constraints of incomplete information, finite computational capacity, and time pressures, resulting in satisficing behavior rather than exhaustive maximization.[226] In game-theoretic contexts, these bounds manifest as limited strategic foresight, with players unable to fully anticipate opponents' responses or compute complex equilibria, as evidenced by models incorporating procedural rationality over outcome-based perfection.[218] Empirical experiments reveal systematic deviations from these rational benchmarks, particularly in social interactions. The ultimatum game, introduced by Güth, Schmittberger, and Schwarze in 1982, demonstrates this: a proposer divides a sum between themselves and a responder, who can accept (both receive the split) or reject (both get nothing). Rational theory predicts proposers offering minimal amounts and responders accepting any positive offer, yet experiments consistently show proposers offering around 40-50% and responders rejecting offers below 20-30%, prioritizing fairness and inequity aversion over absolute gain.[227][228] These patterns hold across diverse populations, with rejection rates correlating to perceived unfairness rather than utility loss alone, challenging the self-interested utility maximization at the core of game theory.[229] Further challenges arise from behavioral factors like reciprocity, emotions, and social preferences, integrated in psychological game theory.
Rational choice falters in scenarios with interdependent utilities, such as trust games or public goods dilemmas, where observed cooperation exceeds defection predictions due to unmodeled motives like altruism or punishment.[230] Bounded rationality models, including level-k cognition, explain these by positing iterative but finite reasoning depths, where players assume opponents use simpler heuristics, aligning predictions closer to data without invoking unbounded computation. While aggregate outcomes may approximate rationality in market settings, individual-level deviations underscore the theory's descriptive limits, prompting refinements like quantal response equilibria to capture probabilistic choice errors.[231]
Predictive Shortcomings and Paradoxes
Game theory's predictive accuracy is undermined by the prevalence of multiple equilibria, where rational play can sustain various outcomes without specifying which will emerge in practice. In repeated games, the folk theorem demonstrates that any feasible payoff profile satisfying individual rationality constraints can be supported as a subgame perfect equilibrium through appropriate strategies, rendering unique predictions elusive without additional selection criteria.[232] This multiplicity complicates forecasting, as observed in coordination scenarios like traffic conventions, where both left- and right-side driving constitute Nash equilibria, but coordination relies on historical or focal conventions absent from the core model.[233] Empirical tests further expose shortcomings, as human behavior deviates from equilibrium predictions due to bounded rationality and social preferences. In the ultimatum game, where a proposer divides a stake and the responder accepts or rejects (with both receiving nothing upon rejection), subgame perfect equilibrium anticipates proposers offering the minimal positive amount and responders accepting, maximizing payoffs. Yet, meta-analyses of experiments reveal average offers around 40% of the stake, with rejection rates for offers below 20-30% exceeding 50% in many samples, driven by fairness norms rather than pure self-interest.[234][202] Similarly, the centipede game, a finite extensive-form game of perfect information, employs backward induction to predict immediate termination at the first node to avoid risk, securing a small sure gain over potential larger losses; however, laboratory results show players passing the opportunity 70-90% of the time in early rounds, extending play and yielding higher average payoffs than theory forecasts.[235][236] Paradoxes highlight foundational tensions in game-theoretic reasoning, particularly around induction and common knowledge. 
The chain store paradox models a multi-store incumbent facing sequential entrants, where backward induction unravels deterrence: in the final period, fighting yields no future benefit, so accommodation prevails, propagating backward to imply perpetual accommodation and no reputation for toughness. This contradicts intuitive deterrence strategies observed in markets, where early fights signal resolve to deter future entry, necessitating refinements like trembling-hand perfection or incomplete information to restore predictive coherence.[237][238] Such issues underscore how strict rationality assumptions falter under empirical scrutiny, as real agents exhibit limited foresight and reputational concerns not captured by pure induction.[239]
Methodological and Ethical Critiques
Game theory's methodological foundations have been challenged for requiring overly precise specifications of game structures, including payoffs, strategies, and information sets, which rarely align with the ambiguity and evolving rules of real-world interactions. In practice, interactions often lack the crisp protocols assumed in models, leading to difficulties in accurately representing dynamic social or economic environments where rules themselves emerge endogenously rather than being exogenously fixed.[240] This precision demand can render models computationally intractable for large-scale or multi-stage games, as solving for equilibria grows exponentially with the number of players or decision points, limiting applicability to simplified abstractions rather than comprehensive analyses.[241] A further methodological limitation arises from the prevalence of multiple Nash equilibria in non-trivial games, which introduces indeterminacy: without additional criteria for selection, predictions remain vague, as outcomes depend on arbitrary refinements or ad hoc assumptions about focal points. Empirical testing exacerbates this, as discrepancies between theory and data often stem from unverifiable auxiliary hypotheses about beliefs or utilities rather than core strategic logic, complicating falsification.[240] Critics argue that game theory's axiomatic approach, by prioritizing formal consistency over behavioral realism, struggles to incorporate heterogeneous agents or iterative learning processes observed in experiments, where outcomes deviate systematically from equilibrium predictions due to unmodeled factors like reciprocity or fairness norms.[242] Ethically, game theory has faced scrutiny for potentially sidelining moral deliberation by reducing decisions to utility maximization once payoffs are defined, implying that strategic calculation supplants normative reasoning in cooperative or conflictual scenarios. 
For instance, in zero-sum or prisoner's dilemma setups, the framework can rationalize defection or aggression without embedding deontological constraints, fostering a view of interactions as inherently adversarial and self-interested.[24] Applications in high-stakes domains, such as nuclear deterrence modeled via mutually assured destruction, highlight risks: while equilibria may deter conflict under perfect rationality, miscalculations or incomplete information could precipitate catastrophic outcomes, raising concerns about overreliance on game-theoretic prescriptions in policy without safeguards for ethical externalities like unintended escalation.[243] Proponents counter that game theory is amoral, neither prescribing nor proscribing behavior but merely analyzing incentives; yet detractors contend its emphasis on equilibrium strategies may inadvertently legitimize opportunism or inequality in economic designs, such as auctions or bargaining protocols that favor informed players, potentially exacerbating power asymmetries absent redistributive mechanisms. In ethical decision-making contexts, the theory's focus on individual utilities overlooks collective moral goods or intrinsic values, as seen in critiques where repeated games fail to capture long-term trust-building beyond tit-for-tat heuristics, which themselves prioritize retaliation over forgiveness.[244][245] These concerns underscore the need for hybrid approaches integrating game theory with ethical frameworks to mitigate reductive tendencies in applied settings.
Recent Developments
Algorithmic and Computational Game Theory
Algorithmic game theory examines the computational aspects of game-theoretic problems, focusing on the design of efficient algorithms to compute solution concepts such as Nash equilibria and the analysis of their computational complexity.[246] This field integrates classical game theory with algorithms and complexity theory, addressing challenges like finding equilibria in large-scale games and designing mechanisms that incentivize truthful behavior under computational constraints.[247] Pioneering work in the area includes the development of approximation algorithms for equilibria and the study of incentives in algorithmic settings, as detailed in the 2007 edited volume Algorithmic Game Theory by Noam Nisan and colleagues, which covers equilibria computation, auctions, and pricing mechanisms.[248] A central result concerns the complexity of computing Nash equilibria: the problem is PPAD-complete, even for two-player finite games, implying that no polynomial-time algorithm exists unless every problem in PPAD is polynomial-time solvable.[249] This completeness was established through reductions from Brouwer's fixed-point theorem, showing that exact computation is intractable for general cases, though polynomial-time algorithms exist for specific classes like two-player zero-sum games via linear programming.[250] PPAD is defined via parity arguments on directed graphs, in which the existence of one unbalanced node (a known source) implies the existence of another; it captures total search problems, those whose solutions are guaranteed to exist, such as equilibrium finding.[251] Consequently, researchers pursue approximate equilibria, such as ε-Nash equilibria computable in polynomial time for certain congestion games or via iterative methods like fictitious play, though guarantees vary by game structure.[252] Algorithmic mechanism design extends this to creating protocols where self-interested agents reveal private information truthfully, often via computationally bounded incentive-compatible mechanisms like Vickrey-Clarke-Groves (VCG) for
auctions.[253] In combinatorial auctions, for instance, VCG achieves optimal social welfare but faces exponential communication and computation costs, prompting approximations such as mechanisms with constant-factor welfare guarantees.[254] The price of anarchy, the ratio of the cost of the worst-case Nash equilibrium to that of the optimal outcome, bounds equilibrium inefficiency without explicit computation; in routing games with affine cost functions, for example, it is at most 5/2.[246] These tools enable applications in spectrum auctions and network design, where computational feasibility tempers theoretical ideals.[247]

Integration with Machine Learning and AI
Game theory has been integrated into machine learning and artificial intelligence primarily to model strategic interactions among multiple agents, enabling systems to anticipate and respond to adversarial or cooperative behaviors in dynamic environments. In multi-agent reinforcement learning (MARL), game-theoretic concepts such as Nash equilibria serve as benchmarks for training policies that converge to stable outcomes in which no agent benefits from unilateral deviation. This synthesis enhances the robustness of AI systems by incorporating payoff matrices and strategy spaces into learning algorithms, allowing agents to optimize joint actions in scenarios like traffic coordination or resource allocation.[255] For instance, MARL frameworks analyze emergent behaviors in environments with non-stationary policies, drawing on extensive-form games to mitigate issues like credit assignment in cooperative settings.[256] A prominent example of this integration is the generative adversarial network (GAN), introduced in 2014, whose training pits a generator against a discriminator in a two-player zero-sum game formulated as a minimax problem. The generator minimizes the discriminator's ability to distinguish real from synthetic data, while the discriminator maximizes classification accuracy, leading to an equilibrium in which the generated outputs approximate the true data distribution. This adversarial setup, rooted in von Neumann's 1928 minimax theorem, has driven advances in image synthesis and data augmentation, though convergence to a Nash equilibrium remains challenging due to non-convex loss landscapes. Empirical studies show that GAN variants such as the Wasserstein GAN, introduced in 2017, stabilize training by modifying the game objective to enforce Lipschitz continuity, improving sample quality in applications like medical imaging.
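The minimax structure shared by classical zero-sum games and GAN training can be made concrete for finite matrix games: the row player's optimal mixed strategy and the game's value are computable by linear programming, as noted above for two-player zero-sum games. The following is a minimal sketch using SciPy, with matching pennies as an illustrative example; the function name and parameter choices are for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Maximin mixed strategy and value for the row player of payoff matrix A.

    Solves: maximize v subject to (A^T x)_j >= v for every column j,
    sum(x) = 1, x >= 0, written as a minimization for linprog.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Decision variables: x_1..x_m (strategy probabilities) and v (game value).
    c = np.zeros(m + 1)
    c[-1] = -1.0  # minimize -v, i.e. maximize v
    # One inequality per opponent column j: v - sum_i A[i, j] * x_i <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities must sum to one; v is unbounded.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: the unique equilibrium mixes 50/50 and the game value is 0.
strategy, value = solve_zero_sum([[1, -1], [-1, 1]])
```

By the minimax theorem, the same program applied to the column player's negated, transposed matrix yields the same value; this duality is the finite-game analogue of the opposed generator and discriminator objectives in a GAN.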
Beyond reinforcement and generative models, game theory informs AI robustness against adversarial attacks, where attackers and defenders engage in Stackelberg games to optimize perturbations within bounded norms. In large language models, recent approaches leverage correlated equilibria to enforce consistency across outputs, reducing hallucinations by simulating multi-agent debates that penalize inconsistent reasoning paths. This method, explored in 2024 research, treats model components as players negotiating truthful responses, yielding measurable gains in factual accuracy on benchmarks like TruthfulQA.[257] Such integrations show how game-theoretic incentives can be used to align AI objectives with empirically validated behavior, though scaling to high-dimensional strategy spaces remains a computational bottleneck.[258]

Advances in Multi-Agent Systems
Multi-agent systems (MAS) apply game-theoretic models to analyze and optimize interactions among autonomous agents, emphasizing equilibria that balance cooperation and competition in decentralized environments. Advances since the early 2020s have integrated game theory with computational methods to handle scalability, non-stationarity, and heterogeneous objectives, enabling applications in robotics, traffic management, and distributed AI.[259] These developments address limitations in traditional single-agent approaches by formalizing agent interactions as Markov games, where policies converge to correlated equilibria under partial observability.[260] A key progression involves multi-agent reinforcement learning (MARL), which embeds game-theoretic concepts like best-response dynamics and fictitious play to mitigate the challenges of evolving opponent strategies. For example, meta-algorithms in MARL approximate best responses to policy mixtures via deep reinforcement learning, achieving convergence in imperfect-information settings with up to 10 agents in benchmarks like Hanabi and StarCraft II micromanagement tasks.[261] Recent surveys highlight how value-decomposition networks, informed by cooperative game theory, decompose joint value functions to promote credit assignment, yielding 20-30% performance gains in cooperative domains over independent Q-learning baselines.[262] Decentralized frameworks have advanced through dynamic game formulations for event-triggered control, reducing communication overhead by 50% in simulations of 100-agent swarms while maintaining tracking errors below 5% of nominal values.[263] In 2025, the Multi-Objective Markov Game (MOMG) framework extended stochastic games to accommodate diverse agent utilities, using Pareto frontiers and scalarization techniques to compute scalable equilibria via centralized training with decentralized execution (CTDE), tested on multi-objective pursuit-evasion scenarios.[264] Further innovations combine 
game theory with model predictive control (MPC) for non-cooperative MAS, where agents optimize Stackelberg or Nash strategies online, demonstrated in autonomous vehicle platoons to resolve deadlocks with response times under 100 ms.[265] These approaches empirically validate robustness against adversarial perturbations, as shown in MARL benchmarks where game-theoretic regularization prevents policy collapse, outperforming naive RL by factors of 2-5 in win rates against mixed-motive opponents.[266] Ongoing challenges include equilibrium selection in infinite-horizon settings, prompting hybrid methods blending evolutionary game theory with neural approximators for real-time deployment.

Emerging Applications (e.g., in Pricing, Healthcare)
Game theory has been applied to dynamic pricing in cloud marketplaces, where providers compete by adjusting prices in response to rivals' strategies and demand fluctuations. A 2023 model framed cloud application pricing as a complete-information game among provider committees, enabling dynamic policies that optimize revenue while considering usage patterns and competitor actions, with simulations showing up to 15% efficiency gains over static pricing.[267] In ride-sharing platforms like Uber and Ola, game-theoretic analysis of pricing from 2010 onward revealed initial Bertrand-like competition driving fares toward marginal costs, akin to a Prisoner's Dilemma in which mutual aggression erodes profits; however, post-2015 market maturation introduced differentiation, shifting equilibria toward sustainable margins as predicted by repeated games.[268] Retailers increasingly use game theory to avert price wars in oligopolistic markets, modeling competitors' responses to discounts or promotions via Nash equilibria. A 2025 framework demonstrated that anticipating rival reactions in real-time data environments allows firms to maintain 10-20% higher margins by sustaining tacit collusion without explicit agreements, as validated in U.S. grocery sector data from 2020-2024.[269] Systematic reviews from 2024 highlight these strategies' role in e-commerce and energy markets, where evolutionary games incorporate prospect theory to account for bounded rationality in pricing under uncertainty, improving predictive accuracy over traditional econometric models by 25% in backtested scenarios.[270][271] In healthcare, game theory models resource allocation during shortages, treating hospitals as non-cooperative players in multiplayer games to distribute ventilators or ICU beds.
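The Bertrand-style price competition described above can be sketched with a differentiated-products duopoly: each firm's profit-maximizing price is a linear function of its rival's price, and iterating best responses converges to the Nash equilibrium. The following minimal illustration uses a linear demand specification and parameter values chosen purely for the example, not taken from the cited studies.

```python
def best_response(p_rival, a=10.0, b=2.0, d=1.0, c=1.0):
    """Profit-maximizing price against a rival's price, for a firm with
    unit cost c facing linear demand q = a - b*p + d*p_rival.
    Maximizing (p - c) * (a - b*p + d*p_rival) over p gives this closed form.
    """
    return (a + b * c + d * p_rival) / (2 * b)

def nash_price(p0=0.0, iterations=200):
    """Iterate symmetric best responses from starting price p0; the fixed
    point is the symmetric Nash equilibrium p* = (a + b*c) / (2*b - d)."""
    p = p0
    for _ in range(iterations):
        p = best_response(p)
    return p
```

With these illustrative parameters the iteration settles at p* = (a + b*c) / (2*b - d) = 4.0; because each best response moves only d/(2*b) = 1/4 of the way toward a change in the rival's price, the dynamics are a contraction and converge from any starting point, mirroring how repeated interaction can stabilize prices above marginal cost.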
During the COVID-19 pandemic, a 2023 single-stage game maximized social welfare by assigning resources based on patient severity scores and facility capacities, reducing mortality estimates by 8-12% compared to first-come-first-served protocols in simulated U.S. hospital networks from March 2020.[272] Evolutionary game theory further analyzes vaccination dynamics, where individual hesitancy creates free-rider incentives undermining herd immunity thresholds of 60-70% for SARS-CoV-2 variants. A 2024 model coupled epidemic spreading with strategic decision-making showed that subsidies shifting payoff matrices increased uptake by 15-20% in populations with 20% initial refusers, as tested on 2021-2023 global data.[273][274] Recent epidemic models integrate game theory with network structures to predict behavioral responses, such as compliance with lockdowns or testing. A 2025 evolutionary game approach quantified how self-interested testing adoption in high-risk groups lowered peak infections by 30% in agent-based simulations calibrated to mpox outbreaks, emphasizing incentives over mandates for sustained cooperation.[192] In inpatient settings, non-zero-sum games address incentive misalignments between providers and payers, with 2023 analyses proposing payment reforms that align strategies to cut readmission rates by 10%, drawing from U.S. Medicare data where traditional fee-for-service equilibria incentivize overutilization.[275] These applications underscore game theory's utility in causal modeling of strategic interactions, though empirical validation remains limited by data granularity in real-time crises.

Further reading
The following books are recommended as accessible introductions to basic game theory:
- ''Thinking Strategically: The Competitive Edge in Business, Politics, and Everyday Life'' by Avinash Dixit and Barry Nalebuff: a non-mathematical classic that uses real-life examples and stories to explain strategic thinking.
- ''The Art of Strategy: A Game Theorist's Guide to Success in Business and Life'' by Avinash Dixit and Barry Nalebuff: an updated, accessible introduction relying on logic and narratives rather than mathematics, ideal for beginners.
- ''Games of Strategy'' by Avinash Dixit, Susan Skeath, and David Reiley: a beginner-friendly textbook with examples, exercises, and clear explanations.
