Zero-sum game
from Wikipedia

Zero-sum game is a mathematical representation in game theory and economic theory of a situation that involves two competing entities, where the result is an advantage for one side and an equivalent loss for the other.[1] In other words, player one's gain is equivalent to player two's loss, with the result that the net improvement in benefit of the game is zero.[2]

If the total gains of the participants are added up, and the total losses are subtracted, they will sum to zero. Thus, cutting a cake, where taking a more significant piece reduces the amount of cake available for others as much as it increases the amount available for that taker, is a zero-sum game if all participants value each unit of cake equally. Other examples of zero-sum games in daily life include games like poker, chess, sports and bridge, where one person gains and another person loses, which results in a zero net benefit for every player.[3] In markets and financial instruments, futures contracts and options are zero-sum games as well.[4]

In contrast, non-zero-sum describes a situation in which the interacting parties' aggregate gains and losses can be less than or more than zero. A zero-sum game is also called a strictly competitive game, while non-zero-sum games can be either competitive or non-competitive. Zero-sum games are most often solved with the minimax theorem which is closely related to linear programming duality,[5] or with Nash equilibrium. Prisoner's Dilemma is a classic non-zero-sum game.[6]

Definition

Generic zero-sum game
              Choice 1    Choice 2
Choice 1      −A, A       B, −B
Choice 2      C, −C       −D, D

Another example of the classic zero-sum game
              Option 1    Option 2
Option 1      2, −2       −2, 2
Option 2      −2, 2       2, −2

The zero-sum property (if one gains, another loses) means that any result of a zero-sum situation is Pareto optimal. Generally, any game where all strategies are Pareto optimal is called a conflict game.[7][8]

Zero-sum games are a specific example of constant sum games where the sum of each outcome is always zero.[9] Such games are distributive, not integrative; the pie cannot be enlarged by good negotiation.

In situations where one decision-maker's gain (or loss) does not necessarily result in another decision-maker's loss (or gain), the situation is referred to as non-zero-sum.[10] Thus, a country with an excess of bananas trading with another country for its excess of apples, where both benefit from the transaction, is in a non-zero-sum situation. Other non-zero-sum games are games in which the sum of gains and losses by the players is sometimes more or less than what they began with.

The idea of Pareto-optimal payoffs in a zero-sum game gives rise to a generalized relative selfish rationality standard, the punishing-the-opponent standard, in which both players always seek to minimize the opponent's payoff at a favourable cost to themselves, rather than simply preferring more over less. The punishing-the-opponent standard can be applied to both zero-sum games (e.g. warfare games, chess) and non-zero-sum games (e.g. pooling selection games).[11] Each player simply wants to maximise their own payoff, while the opponent wishes to minimise it.[12]

Solution


For two-player finite zero-sum games, if the players are allowed to play a mixed strategy, the game always has at least one equilibrium solution. The different game-theoretic solution concepts of Nash equilibrium, minimax, and maximin all give the same solution. Note that this is not true for pure strategies.

Example

A zero-sum game (two-person)
                     Blue
Red           A            B            C
1             −30, 30      10, −10      −20, 20
2             10, −10      −20, 20      20, −20
Each cell lists Blue's payoff followed by Red's payoff.

A game's payoff matrix is a convenient representation. Consider, for example, the two-player zero-sum game shown above.

The order of play proceeds as follows: The first player (red) chooses in secret one of the two actions 1 or 2; the second player (blue), unaware of the first player's choice, chooses in secret one of the three actions A, B or C. Then, the choices are revealed and each player's points total is affected according to the payoff for those choices.

Example: Red chooses action 2 and Blue chooses action B. When the payoff is allocated, Red gains 20 points and Blue loses 20 points.

In this example game, both players know the payoff matrix and attempt to maximize the number of their points. Red could reason as follows: "With action 2, I could lose up to 20 points and can win only 20, and with action 1 I can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, Blue would choose action C. If both players take these actions, Red will win 20 points. If Blue anticipates Red's reasoning and choice of action 1, Blue may choose action B, so as to win 10 points. If Red, in turn, anticipates this trick and goes for action 2, this wins Red 20 points.

Émile Borel and John von Neumann had the fundamental insight that probability provides a way out of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their respective actions, and then use a random device which, according to these probabilities, chooses an action for them. Each player computes the probabilities so as to minimize the maximum expected point-loss independent of the opponent's strategy. This leads to a linear programming problem with the optimal strategies for each player. This minimax method can compute provably optimal strategies for all two-player zero-sum games.

For the example given above, it turns out that Red should choose action 1 with probability 4/7 and action 2 with probability 3/7, and Blue should assign the probabilities 0, 4/7, and 3/7 to the three actions A, B, and C. Red will then win 20/7 points on average per game.
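The quoted solution can be checked numerically. The following is a minimal sketch (not part of the article) using NumPy; the matrix lists Red's payoffs, with rows for Red's actions 1 and 2 and columns for Blue's actions A, B and C, and all variable names are illustrative.

```python
# A minimal sketch: verify the mixed strategies quoted above for the example game.
import numpy as np

# Red's payoffs: rows = Red's actions 1, 2; columns = Blue's actions A, B, C.
# (Blue's payoffs are the negatives of these entries.)
red_payoff = np.array([[ 30, -10,  20],
                       [-10,  20, -20]])

red_mix = np.array([4/7, 3/7])          # probabilities for Red's actions 1 and 2
blue_mix = np.array([0, 4/7, 3/7])      # probabilities for Blue's actions A, B, C

# Expected payoff to Red when both mix as above: equals 20/7.
value = red_mix @ red_payoff @ blue_mix
print(value, 20/7)            # both ~2.857

# Against Red's mix, every Blue column concedes at least 20/7 to Red,
# and against Blue's mix, every Red row earns exactly 20/7,
# so neither player can improve unilaterally.
print(red_mix @ red_payoff)   # Red's expected gain for each Blue action
print(red_payoff @ blue_mix)  # Red's expected gain for each Red action
```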

Solving


The Nash equilibrium for a two-player, zero-sum game can be found by solving a linear programming problem. Suppose a zero-sum game has a payoff matrix M where element Mi,j is the payoff obtained when the minimizing player chooses pure strategy i and the maximizing player chooses pure strategy j (i.e. the player trying to minimize the payoff chooses the row and the player trying to maximize the payoff chooses the column). Assume every element of M is positive. The game will have at least one Nash equilibrium. The Nash equilibrium can be found (Raghavan 1994, p. 740) by solving the following linear program to find a vector u:

Minimize:

Σi ui (the sum of the elements of the vector u)

Subject to the constraints:

u ≥ 0
M u ≥ 1.

The first constraint says each element of the u vector must be nonnegative, and the second constraint says each element of the M u vector must be at least 1. For the resulting u vector, the inverse of the sum of its elements is the value of the game. Multiplying u by that value gives a probability vector, giving the probability that the maximizing player will choose each possible pure strategy.

If the game matrix does not have all positive elements, add a constant to every element that is large enough to make them all positive. That will increase the value of the game by that constant, and will not affect the equilibrium mixed strategies for the equilibrium.

The equilibrium mixed strategy for the minimizing player can be found by solving the dual of the given linear program. Alternatively, it can be found by using the above procedure to solve a modified payoff matrix which is the transpose and negation of M (adding a constant so it is positive), then solving the resulting game.

If all the solutions to the linear program are found, they will constitute all the Nash equilibria for the game. Conversely, any linear program can be converted into a two-player, zero-sum game by using a change of variables that puts it in the form of the above equations and thus such games are equivalent to linear programs, in general.[13]
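As an illustration of the procedure described above, the following sketch (not part of the article) solves the example game from the previous section with SciPy's linprog; the offset of 21 is just one convenient constant large enough to make every entry positive.

```python
# A sketch of the LP procedure above, applied to the example game.
import numpy as np
from scipy.optimize import linprog

# M[i, j]: payoff to the maximizing player (Red) when the minimizing player
# (Blue) picks row i in {A, B, C} and Red picks column j in {1, 2}.
M = np.array([[ 30, -10],
              [-10,  20],
              [ 20, -20]])

offset = 21            # any constant large enough to make every entry positive
M_pos = M + offset

# Minimize sum(u) subject to M_pos @ u >= 1 and u >= 0.
res = linprog(c=np.ones(M.shape[1]),
              A_ub=-M_pos, b_ub=-np.ones(M.shape[0]),
              bounds=[(0, None)] * M.shape[1])

u = res.x
value = 1 / u.sum() - offset     # value of the original game, ~20/7
red_strategy = u / u.sum()       # maximizing player's mixed strategy, ~[4/7, 3/7]
print(value, red_strategy)
```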

Universal solution


If avoiding a zero-sum game is an available action (with some probability) for the players, avoiding is always an equilibrium strategy for at least one player. For any two-player zero-sum game in which a zero-zero draw is impossible or non-credible once play has started, such as poker, there is no Nash equilibrium strategy other than avoiding the play. Even if a credible zero-zero draw is possible after the game has started, it is no better than the avoiding strategy. In this sense, reward-as-you-go in optimal choice computation prevails over all two-player zero-sum games with respect to whether or not to start the game.[14]

The most common or simple example from the subfield of social psychology is the concept of "social traps". In some cases pursuing individual personal interest can enhance the collective well-being of the group, but in other situations, all parties pursuing personal interest results in mutually destructive behaviour.

Copeland's review notes that an n-player non-zero-sum game can be converted into an (n+1)-player zero-sum game, where the n+1st player, denoted the fictitious player, receives the negative of the sum of the gains of the other n-players (the global gain / loss).[15]

Zero-sum three-person games

Zero-sum three-person game (figure)

There are manifold relationships between players in a zero-sum three-person game. In a zero-sum two-person game, anything one player wins is necessarily lost by the other and vice versa; therefore, there is always an absolute antagonism of interests, and that is similar in the three-person game.[16] A particular move of a player in a zero-sum three-person game may be clearly beneficial to him while being disadvantageous to both other players, or advantageous to one and disadvantageous to the other opponent.[16] In particular, parallelism of interests between two players makes cooperation desirable; it may happen that a player has a choice among various policies: to enter into a parallelism of interests with another player by adjusting his conduct, or the opposite; to choose with which of the other two players he prefers to build such a parallelism, and to what extent.[16] The figure shows a typical example of a zero-sum three-person game. If Player 1 chooses defence but Players 2 and 3 choose offence, both of them will gain one point. At the same time, Player 1 will lose two points, because points are taken away by the other players, and it is evident that Players 2 and 3 have a parallelism of interests.

Real life example


Economic benefits of low-cost airlines in saturated markets - net benefits or a zero-sum game [17]


Studies show that the entry of low-cost airlines into the Hong Kong market brought in $671 million in revenue and resulted in an outflow of $294 million.

The replacement effect should therefore be considered when a new model is introduced, since it produces both economic leakage and injection, so new models must be introduced with caution. For example, if the number of new airlines departing from and arriving at the airport is the same, the economic contribution to the host city may be a zero-sum game: for Hong Kong, spending by overseas tourists in Hong Kong is income, while spending by Hong Kong residents in the destination cities is an outflow. In addition, the introduction of new airlines can also have a negative impact on existing airlines.

Consequently, when a new aviation model is introduced, feasibility tests need to be carried out in all respects, taking into account the economic inflows, outflows and displacement effects caused by the model.

Zero-sum games in financial markets


Derivatives trading may be considered a zero-sum game, as each dollar gained by one party in a transaction must be lost by the other, hence yielding a net transfer of wealth of zero.[18]

An options contract – whereby a buyer purchases a derivative contract which provides them with the right to buy an underlying asset from a seller at a specified strike price before a specified expiration date – is an example of a zero-sum game. A futures contract – whereby a buyer purchases a derivative contract to buy an underlying asset from the seller for a specified price on a specified date – is also an example of a zero-sum game.[19] This is because the fundamental principle of these contracts is that they are agreements between two parties, and any gain made by one party must be matched by a loss sustained by the other.

If the price of the underlying asset increases before the expiration date, the buyer may exercise or close the options/futures contract. The buyer's gain and the corresponding seller's loss will be the difference between the strike price and the value of the underlying asset at that time. Hence, the net transfer of wealth is zero.

Swaps, which involve the exchange of cash flows from two different financial instruments, are also considered a zero-sum game.[20] Consider a standard interest rate swap whereby Firm A pays a fixed rate and receives a floating rate; correspondingly Firm B pays a floating rate and receives a fixed rate. If rates increase, then Firm A will gain, and Firm B will lose by the rate differential (floating rate – fixed rate). If rates decrease, then Firm A will lose, and Firm B will gain by the rate differential (fixed rate – floating rate).
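A small numerical sketch (with purely illustrative figures, not from the article) of the swap example above, showing that the two firms' payoffs offset exactly:

```python
# Illustrative figures only: a one-period interest rate swap on a notional
# amount, showing that the two firms' payoffs sum to zero.
notional = 1_000_000
fixed_rate = 0.03

def swap_payoffs(floating_rate):
    # Firm A pays fixed and receives floating; Firm B is the mirror image.
    firm_a = (floating_rate - fixed_rate) * notional
    firm_b = (fixed_rate - floating_rate) * notional
    return firm_a, firm_b

for floating_rate in (0.02, 0.03, 0.04):
    a, b = swap_payoffs(floating_rate)
    print(floating_rate, a, b, a + b)   # a + b is always 0
```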

Whilst derivatives trading may be considered a zero-sum game, it is important to remember that this is not an absolute truth. The financial markets are complex and multifaceted, with a range of participants engaging in a variety of activities. While some trades may result in a simple transfer of wealth from one party to another, the market as a whole is not purely competitive, and many transactions serve important economic functions.

The stock market is an excellent example of a positive-sum game, often erroneously labelled as a zero-sum game. This is a zero-sum fallacy: the perception that one trader in the stock market may only increase the value of their holdings if another trader decreases their holdings.[21]

The primary goal of the stock market is to match buyers and sellers, but the prevailing price is the one which equilibrates supply and demand. Stock prices generally move according to changes in future expectations, such as acquisition announcements, upside earnings surprises, or improved guidance.[22]

For instance, if Company C announces a deal to acquire Company D, and investors believe that the acquisition will result in synergies and hence increased profitability for Company C, there will be an increased demand for Company C stock. In this scenario, all existing holders of Company C stock will enjoy gains without incurring any corresponding measurable losses to other players.

Furthermore, in the long run, the stock market is a positive-sum game. As economic growth occurs, demand increases, output increases, companies grow, and company valuations increase, leading to value creation and wealth addition in the market.

Complexity


It has been theorized by Robert Wright in his book Nonzero: The Logic of Human Destiny, that society becomes increasingly non-zero-sum as it becomes more complex, specialized, and interdependent.

Extensions


Reducing non-zero-sum games to zero-sum games


In 1944, John von Neumann and Oskar Morgenstern proved that any non-zero-sum game for n players is equivalent to a zero-sum game with n + 1 players; the (n + 1)th player representing the negative of the total profit among the first n players.[23]

Non-linear utilities


Arrow and Hurwicz[24] studied two-player zero-sum games in which the payoff function may be non-linear (as in a concave game). They presented gradient methods for computing the value of such games.

Misunderstandings


Zero-sum games and particularly their solutions are commonly misunderstood by critics of game theory, usually with respect to the independence and rationality of the players, as well as to the interpretation of utility functions. Furthermore, the word "game" does not imply the model is valid only for recreational games.[5]

Politics is sometimes called zero sum[25][26][27] because in common usage the idea of a stalemate is perceived to be "zero sum"; politics and macroeconomics are not zero-sum games, however, because they do not constitute conserved systems.[28][29] Applying zero-sum game logic to scenarios that are not zero-sum in nature may lead to incorrect conclusions. Zero-sum games are based on the notion that one person's win will result in the other person's loss, so naturally there is competition between the two. There are scenarios, however, where that is not the case. For instance, in some cases both sides cooperating and working together could result in both sides benefitting more than they otherwise would have. By applying zero-sum logic, we in turn create an unnecessary, and potentially harmful, sense of scarcity and hostility.[30] Therefore, it is critical to make sure that zero-sum applications fit the given context.

Zero-sum thinking


In psychology, zero-sum thinking refers to the perception that a given situation is like a zero-sum game, where one person's gain is equal to another person's loss. The term is derived from game theory. However, unlike the game theory concept, zero-sum thinking refers to a psychological construct—a person's subjective interpretation of a situation. Zero-sum thinking is captured by the saying "your gain is my loss" (or conversely, "your loss is my gain").

from Grokipedia
A zero-sum game is a concept in game theory representing a competitive situation between two or more players where the total gains and losses sum to zero, such that one player's benefits come exclusively at the expense of equivalent losses to the others, with no net creation or destruction of value. This framework models pure conflict, contrasting with non-zero-sum games where cooperation can generate mutual benefits. The term originated with the mathematician John von Neumann, who introduced the formal analysis of zero-sum games in his 1928 paper "Zur Theorie der Gesellschaftsspiele," proving the minimax theorem for two-person zero-sum games. The minimax theorem states that in such games there exists an optimal mixed strategy for each player, ensuring a game value v where the maximizing player can guarantee at least v and the minimizing player can guarantee at most v, resolving the strategic conflict through equilibrium. Von Neumann's work laid the foundation for modern game theory, later expanded in his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern, which applied these ideas to economic behavior and decision-making under uncertainty. Zero-sum games are exemplified by classic contests like chess or rock-paper-scissors, where outcomes are strictly win-lose, and by simplified models of poker that abstract bluffing and betting as zero-sum interactions. Beyond recreation, they inform real-world applications in economics, military strategy (e.g., modeling deterrence during the Cold War), and finance (e.g., certain trading competitions), though many practical situations deviate toward non-zero-sum dynamics due to the potential for joint gains. Extensions to multi-player or imperfect-information settings continue to drive research in areas such as algorithmic game theory and machine learning.

Fundamentals

Definition

In game theory, a zero-sum game models situations involving multiple decision-makers, or players, each selecting from a set of available actions, known as strategies, to maximize their own outcomes, referred to as payoffs, which are typically numerical values representing gains or losses. These elements (players, strategies, and payoffs) form the foundational components of game-theoretic analysis, assuming rational behavior where players aim to optimize their interests based on the anticipated actions of others.

A zero-sum game is formally defined as a strategic interaction among players in which the total payoffs sum to zero, meaning the gains of one player exactly equal the losses of the others, creating a strictly competitive environment with no net creation or destruction of value. In the standard two-player case, this is represented in normal form by a payoff matrix $A = (a_{ij}) \in \mathbb{R}^{m \times n}$, where $m$ and $n$ denote the number of pure strategies available to the row player and the column player, respectively; here, $a_{ij}$ is the payoff received by the row player when selecting pure strategy $i$ while the column player selects pure strategy $j$, and the column player's payoff is simultaneously $-a_{ij}$. Players may also employ mixed strategies, which are probability distributions over their pure strategies, allowing randomized play to achieve expected payoffs in simultaneous-move scenarios. The concept of zero-sum games originated with John von Neumann's seminal 1928 paper, which analyzed such games in the context of poker and broader strategic interactions, laying the groundwork for modern game theory.

Key Properties

Zero-sum games represent a special case of constant-sum games in which the total payoff across all players sums to zero for every possible outcome. In general constant-sum games, the payoffs sum to a fixed constant $C$; to reduce such a game to zero-sum form without altering strategic incentives, subtract $C/2$ from each player's payoffs (for two players), or adjust proportionally for more players, so that the sum becomes zero while preserving the relative ordering of strategies. This transformation, given by $u_i' = u_i - c_i$ where $\sum_i c_i = C$, maintains the game's equilibrium structure because adding constants to payoffs does not change optimal strategies or Nash equilibria.

The adversarial nature of zero-sum games arises from the strict opposition of players' interests, where one player's gain directly equals another's loss, eliminating opportunities for mutual benefit or cooperation. Unlike non-zero-sum games, where joint strategies might increase total welfare, zero-sum settings force players to minimize opponents' payoffs in order to maximize their own, framing interactions as pure conflicts with no Pareto improvements possible. This opposition ensures that rational play involves safeguarding against exploitation, often leading to defensive strategies.

In payoff matrices for zero-sum games (with entries read as payoffs to the row, maximizing, player), a saddle point occurs at an entry $a_{ij}$ that is the minimum value in its row and the maximum value in its column, so that neither player can improve by deviating unilaterally. This point represents a pure-strategy equilibrium, satisfying the condition that the row player's maximin value equals the column player's minimax value: $\max_i \min_j a_{ij} = \min_j \max_i a_{ij}$. If a saddle point exists, both players can commit to the corresponding pure strategies without regret, as deviations worsen their expected payoff.

The value of a zero-sum game, denoted $v$, is the expected payoff to the maximizing player (and its negative to the minimizing player) when both play optimally, guaranteed by the minimax theorem for finite two-player games. This value bounds the payoff: no player can secure more than $v$ against optimal opposition, nor less than $v$ with security strategies. In matrix terms, $v$ lies between the maximin and minimax values, coinciding at saddle points or mixed-strategy equilibria.

In two-player symmetric zero-sum games, the payoff matrix is skew-symmetric ($A = -A^T$), meaning the players share identical strategy sets and the payoffs are negated transposes, making optimal strategies interchangeable between players. Such symmetry implies that if a strategy profile $(s, t)$ is optimal, so is $(t, s)$, and the game's value is zero, as no player holds an inherent advantage. This structure simplifies analysis, often yielding uniform mixed strategies over symmetric supports.
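The saddle-point condition above can be tested mechanically. The sketch below is an illustration (not taken from any source) using the row-maximizer convention of this section: it compares the maximin and minimax values of a payoff matrix and, when they coincide, locates an entry that is the minimum of its row and the maximum of its column.

```python
# A minimal sketch of the saddle-point test described above; entries are
# payoffs to the row (maximizing) player.
import numpy as np

def saddle_point(A):
    A = np.asarray(A, dtype=float)
    maximin = A.min(axis=1).max()   # best worst-case payoff over rows
    minimax = A.max(axis=0).min()   # least the row player can be held to over columns
    if maximin == minimax:
        # Locate an entry that is the minimum of its row and maximum of its column.
        for i in range(A.shape[0]):
            for j in range(A.shape[1]):
                if A[i, j] == A[i].min() and A[i, j] == A[:, j].max():
                    return (i, j), A[i, j]
    return None, (maximin, minimax)

# A matrix with a saddle point at (0, 1) and value 2 ...
print(saddle_point([[3, 2], [1, 0]]))
# ... and matching pennies, which has none (maximin = -1, minimax = +1).
print(saddle_point([[1, -1], [-1, 1]]))
```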

Solving Methods

Two-Player Games

In two-player zero-sum games, the minimax theorem provides the foundational result for optimal play. Formulated by John von Neumann in 1928, the theorem states that for any finite two-player zero-sum game with payoff matrix $A = (a_{ij})$, where rows represent player I's pure strategies and columns player II's, there exists a value $v$ such that $\max_p \min_q p^T A q = \min_q \max_p p^T A q = v$, with $p$ and $q$ denoting mixed strategies (probability distributions over pure strategies). This equality guarantees that player I can secure at least $v$ against any response by player II, while player II can hold player I to at most $v$. Von Neumann's proof relies on a fixed-point argument applied to the space of mixed strategies, establishing the existence of an equilibrium point where the maximin and minimax values coincide without constructing explicit strategies. The argument involves showing that the continuous function mapping strategy profiles to expected payoffs has a fixed point corresponding to the game's value.

Mixed strategies are essential when pure strategies do not suffice, allowing players to randomize over their pure strategies to achieve the value $v$. A mixed strategy for player I is a probability vector $p = (p_i)$ with $\sum_i p_i = 1$ and $p_i \geq 0$, and similarly $q = (q_j)$ for player II; the expected payoff is then $E[p, q] = \sum_i \sum_j p_i q_j a_{ij} = p^T A q$. Optimal mixed strategies $p^*$ and $q^*$ satisfy $p^{*T} A q^* = v$, ensuring neither player can improve unilaterally. Pure-strategy equilibria occur via saddle points, where a pair of pure strategies $(i^*, j^*)$ satisfies $a_{i j^*} \leq a_{i^* j^*} \leq a_{i^* j}$ for all $i, j$, making $v = a_{i^* j^*}$. Such points exist if the payoff matrix has a row minimum that is also a column maximum, but randomization is necessary in non-saddle-point games like matching pennies to prevent exploitation.

Two-player zero-sum games can be solved by formulating the search for optimal mixed strategies as a linear program (LP). For player I (the maximizer), the primal LP is to maximize $v$ subject to $\sum_i p_i a_{ij} \geq v$ for all $j$, $\sum_i p_i = 1$, and $p_i \geq 0$; the dual for player II minimizes $v$ subject to $\sum_j a_{ij} q_j \leq v$ for all $i$, $\sum_j q_j = 1$, and $q_j \geq 0$. By strong duality of LPs, the optimal values coincide at the game's value $v$, yielding optimal $p^*$ and $q^*$. Finite two-player zero-sum games are solvable in polynomial time, as the LP formulation has size polynomial in the number of pure strategies, and LPs are solvable in polynomial time via interior-point methods.

Multi-Player Games

In multi-player zero-sum games, where the total payoff sums to zero across all participants, the extension of two-player solution concepts encounters significant challenges due to the increased strategic complexity and the potential for non-cooperative dynamics among more than two agents. Unlike the two-player case, where von Neumann's minimax theorem guarantees a unique value and optimal strategies, multi-player settings lack such a universal equilibrium structure, often resulting in multiple Nash equilibria with varying payoff distributions or even cycles in best-response dynamics that prevent convergence to a stable outcome.

A key illustration of this limitation arises in three-player zero-sum games, where the minimax theorem does not hold in general. Consider a polymatrix representation with players A, B, and C, where interactions are pairwise: A plays a two-action game against B (heads or tails), and B plays against C similarly, with payoffs structured such that matching heads yields +1 for the row player and −1 for the column player, while mismatching reverses this, and the overall game is zero-sum. The payoff tensor for this setup reveals non-unique Nash equilibria; for instance, one equilibrium assigns zero payoffs to all players under mixed strategies (A plays heads, B mixes 50-50, C plays heads), while another yields payoffs of −1 for A, 0 for B, and +1 for C (A heads, B tails, C heads). This demonstrates indeterminate outcomes, as no single value exists that all players can guarantee against joint deviations by others.

To address payoff allocation in cooperative interpretations of multi-player zero-sum games, the Shapley value provides a fair division based on each player's average marginal contribution to coalitions. Defined for a game (N, v) with player set N of size n and characteristic function v (where v(N) = 0 in zero-sum games), the Shapley value for player i is given by
$$\phi_i(v) = \frac{1}{n!} \sum_{\pi \in \Pi} \left( v(P_i^\pi \cup \{i\}) - v(P_i^\pi) \right),$$
where $\Pi$ denotes the set of all $n!$ permutations of N, and $P_i^\pi$ is the set of players preceding i in permutation $\pi$. This axiomatic solution (satisfying efficiency, symmetry, additivity, and null-player properties) ensures payoffs sum to zero and quantifies individual power, though it assumes transferable utility and may not align with non-cooperative equilibria.

Coalition formation offers another approach to simplifying multi-player zero-sum games, in which subsets of players band together to act as a single entity, effectively reducing the game to a two-"superplayer" zero-sum contest between the coalition $S$ and its complement $\bar{S}$. The value $v(S)$ of coalition $S$ is then the minimax value of the induced two-player game with payoff $\sum_{i \in S} u_i$, where the $u_i$ are individual utilities. Stability requires that no subcoalition deviates profitably, often analyzed via the core, the set of imputations $x$ satisfying $\sum_{i \in T} x_i \geq v(T)$ for all coalitions $T$; but in essential zero-sum games ($v(S) + v(\bar{S}) = 0$ and $v(S) > 0$ for some $S$) the core is empty, indicating inherent instability and vulnerability to breakdowns.

For computational resolution, multi-player zero-sum games are often represented in extensive form to capture sequential moves and information sets, enabling approximation algorithms. Fictitious play, where each player iteratively best-responds to the empirical distribution of the others' past actions, converges to equilibria in certain multi-player subclasses such as zero-sum polymatrix games or "one-against-all" structures, though it may cycle in general cases. Counterfactual regret minimization (CFR), extended from two-player imperfect-information settings, approximates equilibria by minimizing counterfactual regrets at information sets in multi-player extensive games, with later variants providing scalable learning despite lacking convergence guarantees outside two-player zero-sum contexts. The exact solution of general n-player zero-sum games is computationally intractable, with determining the value or optimal strategies proven NP-hard even in restricted extensive-form settings, as established in foundational complexity results. For $n \geq 3$, computing equilibria is PPAD-complete in normal-form representations, underscoring the shift from polynomial-time solvability with two players to substantially harder problems in multi-player settings.
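The Shapley value formula above can be evaluated directly by enumerating permutations for small games. The sketch below is illustrative only: the three-player characteristic function v is hypothetical, chosen so that v(N) = 0 and complementary coalitions have opposite values.

```python
# A sketch of the Shapley value formula above: average each player's
# marginal contribution over all orderings of the players.
from itertools import permutations

def shapley_values(players, v):
    """v maps a frozenset of players to a coalition value."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        preceding = frozenset()
        for p in order:
            phi[p] += v(preceding | {p}) - v(preceding)
            preceding = preceding | {p}
    return {p: total / len(perms) for p, total in phi.items()}

# Hypothetical 3-player zero-sum characteristic function with v(N) = 0.
def v(S):
    table = {frozenset(): 0, frozenset({1}): -1, frozenset({2}): -1,
             frozenset({3}): 0, frozenset({1, 2}): 0, frozenset({1, 3}): 1,
             frozenset({2, 3}): 1, frozenset({1, 2, 3}): 0}
    return table[frozenset(S)]

print(shapley_values([1, 2, 3], v))  # the values sum to v(N) = 0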

Examples and Applications

Classic Examples

One of the simplest and most illustrative zero-sum games is rock-paper-scissors, a symmetric two-player game in which each player simultaneously chooses one of three actions: rock, paper, or scissors. The payoff structure is such that rock beats scissors (+1 for the rock player, −1 for the scissors player), scissors beats paper (+1, −1), and paper beats rock (+1, −1), with ties resulting in 0 for both. This can be represented by the following payoff matrix for Player 1 (Player 2's payoffs are the negatives):
Player 1 \ Player 2    Rock    Paper    Scissors
Rock                   0       −1       +1
Paper                  +1      0        −1
Scissors               −1      +1       0
There is no pure-strategy Nash equilibrium, as any pure choice can be exploited by the opponent's best response. The unique mixed-strategy equilibrium requires each player to randomize equally with probability 1/3 over the three actions, yielding an expected value of 0 for both players. For instance, if Player 1 plays the mixed strategy (1/3, 1/3, 1/3) against Player 2's pure rock, the expected payoff is (1/3)(0) + (1/3)(+1) + (1/3)(−1) = 0, and similarly against the other pure strategies, ensuring no incentive to deviate.

Another foundational example is matching pennies, a 2×2 zero-sum game in which two players each show a coin (heads or tails) simultaneously; Player 1 wins (+1, −1) if they match, and Player 2 wins (+1 for Player 2, −1 for Player 1) if they mismatch. The payoff matrix for Player 1 is:
Player 1 \ Player 2    Heads    Tails
Heads                  +1       −1
Tails                  −1       +1
No pure-strategy equilibrium exists, as each action can be exploited by the opponent's counter. The optimal mixed equilibrium is for both players to choose heads or tails with equal probability 1/2, resulting in an expected value of 0. This ensures that, for example, if Player 1 plays heads and tails each with probability 1/2 against Player 2's pure heads, the expected payoff is (1/2)(+1) + (1/2)(−1) = 0, preventing exploitation.

Chess serves as a classic zero-sum game, modeled with payoffs of +1 for a win, −1 for a loss, and 0 for a draw from the perspective of one player (the opponent's payoffs are negated). This structure assumes perfect opposition, where one player's success directly diminishes the other's, and the total payoff sums to zero in all outcomes, including draws.

A historical example is John von Neumann's poker model, a simplified two-person zero-sum game analyzing bluffing in a betting scenario with continuous hand values drawn uniformly from [0, 1]. Player I bets or checks based on hand strength, and Player II calls or folds; optimal strategies involve bluffing with low hands to balance deception and value betting, yielding a game value determined by mixed strategies that prevent exploitation. This model introduced key concepts for the analysis of games of incomplete information.
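A minimal numerical check (illustrative, not from any source) of the uniform mixed equilibrium in rock-paper-scissors: against the 1/3 mix, every pure reply yields an expected payoff of zero to both players.

```python
# Check that the uniform 1/3 mix is an equilibrium of rock-paper-scissors.
import numpy as np

# Payoffs to Player 1; rows and columns ordered rock, paper, scissors.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

mix = np.array([1/3, 1/3, 1/3])
print(A @ mix)        # Player 1's expected payoff for each pure action: all 0
print(-(mix @ A))     # Player 2's expected payoff for each pure reply: all 0
```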

Real-World Applications

In financial markets, derivatives trading, such as options contracts, exemplifies a zero-sum game where one party's gain directly corresponds to another's loss, excluding transaction costs that render it negative-sum overall. For instance, in a call option, the buyer's profit from a rising underlying asset price equals the seller's loss, creating a fixed total payoff of zero between counterparties. High-frequency trading (HFT), which dominates derivatives markets, amplifies this dynamic; in the 2020s, HFT accounted for over 50% of U.S. trading volume, including significant portions in options and futures, enabling rapid execution but reinforcing the zero-sum competition for infinitesimal price edges.

In international relations, arms races during the Cold War, particularly U.S.-Soviet missile deployments, operated as zero-sum games, where one superpower's enhancement of security through increased intercontinental ballistic missiles directly diminished the other's perceived safety. From the late 1950s onward, mutual escalations in missile stockpiles, such as the U.S. Minuteman program and the Soviet SS-series, created a strategic balance in which gains in deterrence for one side equated to vulnerabilities for the other, perpetuating a cycle of retaliation without any net expansion in global security. Similarly, trade negotiations, like the U.S.-China trade war initiated in 2018, have been framed as zero-sum, with tariffs on billions in goods, such as China's 25% levy on U.S. automobiles, resulting in direct economic losses for exporters mirroring gains for protected domestic industries, though broader welfare effects remain debated.

Sports like boxing embody zero-sum games, where one competitor's victory inherently means the opponent's defeat, with the total outcome summing to zero in terms of wins. In a professional bout, the referee's decision or a knockout awards the full points or title to one fighter, leaving the other with none, as seen in high-stakes matches where strategic positioning directly transfers advantage. Sealed-bid auctions, including the Vickrey (second-price) format, parallel this among bidders, as the highest bidder wins the item but pays the second-highest bid, creating a zero-sum allocation of the asset in which losers receive nothing; this mechanism achieves revenue equivalence to open English auctions under independent private values, ensuring truthful bidding as a dominant strategy.

In the low-cost airline sector, saturated European markets turn competition into a zero-sum contest for market share, where one carrier's gains come at the direct expense of rivals amid limited route capacity and price sensitivity. In 2023 the European aviation sector saw low-cost carriers serve over 320 million intra-European passengers, up 21% from 2022, but intense rivalry in overlapping hubs led to mixed net welfare effects, with benefits from lower fares offset by reduced profitability and potential declines in oversupplied regions. Industry analyses highlight how this saturation, driven by post-pandemic recovery, results in zero-sum dynamics for route dominance, prompting hybridization strategies blending low-cost models with legacy features to sustain viability.

In AI and machine learning, generative adversarial networks (GANs) apply zero-sum game principles through adversarial training between a generator and a discriminator, introduced in 2014 as a framework in which the generator's success in fooling the discriminator equals the latter's failure to distinguish synthetic from real data. Post-2014 developments have leveraged this zero-sum dynamic, modeled as maximizing $\mathbb{E}[\log D(x)] + \mathbb{E}[\log(1 - D(G(z)))]$ for the discriminator while minimizing it for the generator, to advance applications like image synthesis and robotic policy learning, achieving equilibrium when the generator recovers the true data distribution.

Extensions

Reductions from Non-Zero-Sum Games

One common technique for analyzing non-zero-sum games involves introducing an artificial opponent, also known as a fictitious or dummy player, to transform the game into zero-sum form. In this reduction, an additional passive player is added whose payoff exactly offsets the sum of the original players' payoffs, ensuring that the total payoff across all players is zero for every outcome. For constant-sum games, where the original payoffs sum to a fixed constant $c$ regardless of actions, the dummy player receives a payoff of $-c$, making the extended game zero-sum while preserving the strategic incentives of the original players. This method allows general-sum games to be modeled as zero-sum but does not simplify the solution process, as multi-player zero-sum games lack the straightforward solutions available in the two-player case.

For general non-zero-sum games, where the sum of payoffs may vary across outcomes, the dummy player's payoff is defined as the negative of the total payoffs to the original $n$ players, i.e. $u_{n+1} = -\sum_{i=1}^n u_i$. The dummy player has the same strategy set as in the original game but acts passively, mirroring the joint actions of the others without influencing their payoffs. While this creates a zero-sum game, the equilibria in the extended game do not directly correspond to Nash equilibria in the original, and solving for them remains computationally challenging. The concept traces to von Neumann and Morgenstern's foundational work.

An alternative approach uses penalty methods to enforce a zero-sum structure without adding players, by adjusting the original utilities through side payments or taxes that redistribute payoffs. Specifically, for an $n$-player game with utilities $u_i$, the transformed utilities are $u_i' = u_i - \frac{1}{n} \sum_{j=1}^n u_j$, ensuring $\sum_{i=1}^n u_i' = 0$ for all outcomes. When the total payoff is constant across outcomes, the subtracted term is itself a constant, so the adjustment preserves ordinal preferences and best-response correspondences; in that constant-sum case, where $\sum_j u_j = c$, the transformation simplifies to $u_i' = u_i - \frac{c}{n}$, directly equivalent to the dummy-player reduction up to a constant shift.

Despite these transformations, reductions have limitations, particularly in infinite strategy spaces or games with asymmetric information. In infinite games, such as continuous-action settings, the dummy player or penalty adjustment may not yield compact strategy sets, preventing the application of fixed-point theorems like Brouwer's for equilibrium existence. Asymmetric information structures, as in Bayesian games, are not necessarily preserved, since the dummy player introduces assumptions that alter signaling or belief updates in the original model. Additionally, in repeated or dynamic games, the reduction can fail to capture epsilon-equilibria, leading to non-convergence or multiple spurious solutions not reflective of the original Nash profiles. Computationally, solving the extended multi-player zero-sum game is as difficult as finding Nash equilibria in the original, with both problems being PPAD-complete.

These concepts find applications in evolutionary dynamics for zero-sum interactions. Replicator dynamics in reduced models of zero-sum games exhibit conserved quantities, such as constant average fitness, allowing analytical solutions for long-run behavior that are intractable in more general cases. For instance, in population models of competing strategies, the zero-sum framework facilitates studying evolutionary stability by leveraging Hamiltonian structures and periodic orbits, reducing the computational burden of simulating multi-population equilibria.
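The normalization described above is easy to apply mechanically. The sketch below (illustrative payoff numbers, not from any source) subtracts the per-outcome average payoff in a small two-player bimatrix game and verifies that every outcome then sums to zero.

```python
# A sketch of the penalty/normalization transform above: subtract the average
# payoff from each player's payoff so every outcome sums to zero.
import numpy as np

# payoffs[a1, a2, i] = payoff to player i at the joint action (a1, a2).
payoffs = np.array([[[3, 3], [0, 5]],
                    [[5, 0], [1, 1]]], dtype=float)

mean_per_outcome = payoffs.mean(axis=-1, keepdims=True)
zero_sum_payoffs = payoffs - mean_per_outcome

print(zero_sum_payoffs.sum(axis=-1))   # every joint action now sums to 0
print(zero_sum_payoffs)
```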

Non-Linear Utilities

In zero-sum games, risk attitudes are incorporated by modeling players' preferences over outcomes using von Neumann-Morgenstern (VNM) expected utility functions, which are non-linear transformations of monetary gains or losses. A concave utility function, such as $u(x) = \log(x)$ for $x > 0$, captures risk aversion by satisfying $u(\alpha x + (1-\alpha) y) \geq \alpha u(x) + (1-\alpha) u(y)$ for $0 < \alpha < 1$, implying that the utility of a sure gain is at least the expected utility of a risky lottery with the same mean, thus altering effective payoffs relative to the linear zero-sum transfer where one player's gain equals the other's loss. Convex utilities, conversely, model risk-seeking behavior, while linear utilities assume risk neutrality.

A representative example of non-linear payoffs involves adapting coordination games like the Battle of the Sexes to incorporate logarithmic utilities, demonstrating shifts in equilibria due to risk aversion. In the standard linear version, payoffs favor coordination with differing preferences (e.g., payoffs of 2 and 1 for joint choices, 0 otherwise), yielding mixed-strategy Nash equilibria in which each player randomizes to balance the opponent's incentives. With constant relative risk aversion via $u(x) = \log(x+1)$ (to handle zero payoffs), risk-averse players overweight certain low payoffs relative to risky high ones, stabilizing pure-strategy equilibria (e.g., favoring the higher-payoff coordination) and reducing randomization probabilities compared with the linear case. This illustrates how non-linearity amplifies aversion to mismatch risks, potentially resolving coordination conflicts more decisively.

Solution methods adapt the standard minimax approach to maximize expected utility under mixed strategies, preserving the zero-sum structure in outcome space while accounting for non-linearity via VNM lotteries. The minimax value becomes the maximin of expected utilities over strategy distributions, solvable via linear programming if action sets are finite, as mixed strategies linearize the expectation. Stochastic dominance conditions further refine solutions: a strategy dominating another in expected utility (first- or second-order) ensures preference under risk aversion, avoiding dominated options without full computation.

Theoretical extensions of the minimax theorem to non-linear utilities rely on continuity and quasi-concavity assumptions for equilibrium existence in two-player zero-sum settings. Under continuous strategy spaces and concave-convex payoff functions (where each player's expected utility is concave in their own actions and convex in the opponent's), Sion's minimax theorem guarantees a minimax value via fixed-point arguments, generalizing von Neumann's linear case. Arrow and Debreu's 1950s work on abstract economies provides foundational tools, using Kakutani's fixed-point theorem for quasi-concave utilities to prove saddle-point equilibria in non-linear programs equivalent to zero-sum games, ensuring stability without full convexity. These results hold for continuous functions, enabling equilibrium analysis even when utilities deviate from linearity, as in the concave games studied by Arrow and Hurwicz via gradient-based saddle-point searches.

In finance, non-linear utilities appear in option pricing under zero-sum hedging scenarios, where a hedger's position offsets a counterparty's exposure. The Black-Scholes model assumes risk-neutral linear valuation, but with a risk-averse exponential utility $u(w) = -\exp(-\gamma w)$ (constant absolute risk aversion), utility indifference pricing adjusts the premium to equate expected utility with and without the option, yielding nonlinear PDEs that modify volatility terms for hedging imperfections. For instance, in incomplete markets, the indifference seller's price exceeds the Black-Scholes value by a risk-loading factor proportional to $\gamma$, reflecting aversion to unhedgeable basis risk in the zero-sum buyer-seller contract.
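As a small illustration of how a concave utility reshapes preferences (numbers hypothetical, not from any source): under logarithmic utility, a 50/50 gamble is worth less than its expected monetary value, so a risk-averse player evaluates the same zero-sum monetary transfer differently from a risk-neutral one.

```python
# Illustrative comparison of risk-neutral (linear) and log-utility valuations
# of the same lottery.
import math

lottery = [(0.5, 10.0), (0.5, 110.0)]   # 50/50 gamble with mean 60
sure_amount = 60.0                       # same expected monetary value

expected_value = sum(p * x for p, x in lottery)
expected_log_utility = sum(p * math.log(x) for p, x in lottery)
certainty_equivalent = math.exp(expected_log_utility)

print(expected_value)                                  # 60.0: a risk-neutral player is indifferent
print(math.log(sure_amount) > expected_log_utility)    # True: log utility prefers the sure 60
print(certainty_equivalent)                            # ~33.2: the sure amount worth as much as the gamble
```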

Misconceptions and Broader Implications

Common Misunderstandings

A common misunderstanding is that all forms of competition constitute zero-sum games, implying that one party's gain inherently deprives another of resources without any creation of value. In reality, many competitive interactions, such as voluntary trade, generate mutual benefits by expanding the total available resources, as exemplified by comparative advantage, where both parties gain from specialization and exchange, like a farmer trading crops for a manufacturer's tools.

Another frequent error involves assuming zero-sum games preclude draws or ties, portraying them solely as win-lose scenarios. However, zero-sum structures allow outcomes in which payoffs sum to zero without a clear winner, such as stalemates in chess scored as half a point for each player (0.5, 0.5, which sums to the constant 1 and is therefore strategically equivalent to zero-sum), or neutral equilibria where both receive zero payoff.

Zero-sum games are often misapplied to dynamic, sequential settings by treating them as static payoff matrices, overlooking the need for subgame perfection to resolve backward induction in extensive-form representations. In sequential zero-sum games with perfect information, Nash equilibria coincide with subgame-perfect equilibria, ensuring credible strategies at every decision node, unlike non-zero-sum cases where refinement is necessary to eliminate non-credible threats.

Confusion also arises regarding finite versus infinite horizons in repeated zero-sum games, where some erroneously apply folk theorems to suggest sustainable cooperation beyond single-stage play. Strictly zero-sum repeated games forbid such cooperation, as the fixed total payoff prevents equilibria in which players jointly deviate for mutual gain; optimal strategies revert to independent single-stage minimax play in each period, unlike non-zero-sum settings where folk theorems enable a range of cooperative outcomes via punishment strategies.

Outdated views sometimes portray economic applications like the gig economy as purely zero-sum contests between platforms and workers, but research reveals hybrid dynamics incorporating non-zero-sum elements, such as platform algorithms fostering worker-platform value creation through matching efficiencies, though competitive bidding can introduce zero-sum wage pressures.

Common arguments for viewing economics as zero-sum point to short-term scenarios like bidding on fixed assets or redistributive financial trades, to inequality where the rich seemingly gain at the poor's expense, and to mercantilist perspectives on trade as win-lose. These are answered by noting that economic growth expands total wealth, allowing the poor to achieve absolute improvements despite widening relative gaps, with empirical evidence from free markets demonstrating positive-sum outcomes through innovation, specialization, and voluntary exchange.

Zero-Sum Thinking

Zero-sum thinking refers to a cognitive bias in which individuals perceive social interactions or outcomes as inherently competitive, such that one party's gains directly correspond to another's losses, often leading to escalated conflict in negotiations and disputes. This manifests as a failure to recognize mutual benefits in exchanges, prompting parties to deny win-win possibilities and instead prioritize defensive or aggressive strategies. Studies of negotiation demonstrate how this mindset contributes to impasses by fostering suspicion and reducing cooperation, thereby hindering agreements that could create value for all involved.

On a societal level, zero-sum thinking permeates political and economic discourse, framing issues like immigration as battles over scarce resources in which immigrants' gains supposedly diminish opportunities for natives. This perspective correlates with particular policy preferences, including support for redistribution, exacerbating partisan divides as seen in U.S. debates. Economically, it underpins protectionist stances against trade, viewing imports as threats to domestic jobs rather than opportunities for mutual growth; analyses as of early 2025 highlight how such thinking fuels tariffs and trade barriers, despite evidence of trade's overall benefits.

While predominantly detrimental, zero-sum thinking has rare positive applications in genuinely adversarial contexts, such as litigation, where the legal system's structure inherently pits parties against each other in a win-lose framework. Here, adopting this mindset can sharpen focus on protecting one's interests and strategically countering opponents, aligning with the adversarial nature of proceedings without encouraging unnecessary escalation.

To mitigate zero-sum thinking, educational interventions drawing on behavioural insights emphasize recognizing positive-sum opportunities through nudges and reframing, helping individuals overcome such biases in favour of cooperation. Cultural variations in zero-sum thinking reveal higher prevalence in collectivist societies, where interdependent social norms amplify perceptions of resource competition within groups. A seminal study across 37 nations found that belief in zero-sum games correlates with collectivism, contrasting with more individualistic cultures that may emphasize abundance and mutual benefit.
