Game complexity
Combinatorial game theory measures game complexity in several ways:
- State-space complexity (the number of legal game positions from the initial position)
- Game tree size (total number of possible games)
- Decision complexity (number of leaf nodes in the smallest decision tree for initial position)
- Game-tree complexity (number of leaf nodes in the smallest full-width decision tree for initial position)
- Computational complexity (asymptotic difficulty of a game as it grows arbitrarily large)
Together, these measures quantify how hard a game is to analyze or solve.
Measures of game complexity
State-space complexity
The state-space complexity of a game is the number of legal game positions reachable from the initial position of the game.[1]
When this is too hard to calculate, an upper bound can often be computed by also counting (some) illegal positions (positions that can never arise in the course of a game).
Game tree size
The game tree size is the total number of possible games that can be played. This is the number of leaf nodes in the game tree rooted at the game's initial position.
The game tree is typically vastly larger than the state-space because the same positions can occur in many games by making moves in a different order (for example, in a tic-tac-toe game with two X and one O on the board, this position could have been reached in two different ways depending on where the first X was placed). An upper bound for the size of the game tree can sometimes be computed by simplifying the game in a way that only increases the size of the game tree (for example, by allowing illegal moves) until it becomes tractable.
For games where the number of moves is not limited (for example by the size of the board, or by a rule about repetition of position) the game tree is generally infinite.
Decision trees
A decision tree is a subtree of the game tree, with each position labelled "player A wins", "player B wins", or "draw" if that position can be proved to have that value (assuming best play by both sides) by examining only other positions in the graph. Terminal positions can be labelled directly. With player A to move, a position can be labelled "player A wins" if any successor position is a win for A; "player B wins" if all successor positions are wins for B; or "draw" if all successor positions are either drawn or wins for B. (With player B to move, corresponding positions are marked similarly.)
The following two methods of measuring game complexity use decision trees:
Decision complexity
Decision complexity of a game is the number of leaf nodes in the smallest decision tree that establishes the value of the initial position.
Game-tree complexity
Game-tree complexity of a game is the number of leaf nodes in the smallest full-width decision tree that establishes the value of the initial position.[1] A full-width tree includes all nodes at each depth. This is an estimate of the number of positions one would have to evaluate in a minimax search to determine the value of the initial position.
It is hard even to estimate the game-tree complexity, but for some games an approximation can be given by b^d, where b is the game's average branching factor and d is the number of plies in an average game.
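As a quick worked example, the base-10 logarithm of this b^d estimate can be computed directly. The values below are rough, chess-like figures (b ≈ 35, d ≈ 80) and are illustrative only:

```python
from math import log10

# Estimate game-tree complexity as b**d and report its base-10 logarithm.
b, d = 35, 80               # average branching factor, average game length in plies
log10_tree = d * log10(b)   # log10(b**d) ~ 123.5 for these values
```

This agrees with the order of magnitude (10^123) commonly quoted for chess.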
Computational complexity
The computational complexity of a game describes the asymptotic difficulty of a game as it grows arbitrarily large, expressed in big O notation or as membership in a complexity class. This concept does not apply to particular games, but rather to games that have been generalized so they can be made arbitrarily large, typically by playing them on an n-by-n board. (From the point of view of computational complexity, a game on a fixed size of board is a finite problem that can be solved in O(1), for example by a look-up table from positions to the best move in each position.)
The asymptotic complexity is defined by the most efficient algorithm for solving the game (in terms of whatever computational resource one is considering). The most common complexity measure, computation time, is always lower-bounded by the logarithm of the asymptotic state-space complexity, since a solution algorithm must work for every possible state of the game. It will be upper-bounded by the complexity of any particular algorithm that works for the family of games. Similar remarks apply to the second-most commonly used complexity measure, the amount of space or computer memory used by the computation. It is not obvious that there is any lower bound on the space complexity for a typical game, because the algorithm need not store game states; however many games of interest are known to be PSPACE-hard, and it follows that their space complexity will be lower-bounded by the logarithm of the asymptotic state-space complexity as well (technically the bound is only a polynomial in this quantity; but it is usually known to be linear).
- The depth-first minimax strategy will use computation time proportional to the game's tree-complexity (since it must explore the whole tree), and an amount of memory polynomial in the logarithm of the tree-complexity (since the algorithm must always store one node of the tree at each possible move-depth, and the number of nodes at the highest move-depth is precisely the tree-complexity).
- Backward induction will use both memory and time proportional to the state-space complexity, as it must compute and record the correct move for each possible position.
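The contrast between the two strategies can be sketched on a toy subtraction game (a one-pile Nim variant; the game and function names here are illustrative, not taken from the sources above). Depth-first minimax stores only the current line of play on the call stack:

```python
def minimax(n, maximizing=True):
    """Depth-first minimax value of a subtraction game: players alternately
    take 1, 2, or 3 objects from a pile of n; whoever takes the last one wins.
    Returns +1 if the player to move at the root wins, -1 otherwise."""
    if n == 0:
        # The previous player took the last object, so the side to move lost.
        return -1 if maximizing else 1
    values = [minimax(n - k, not maximizing) for k in (1, 2, 3) if k <= n]
    return max(values) if maximizing else min(values)
```

Here `minimax(4)` returns -1 (pile sizes divisible by 4 are losses for the player to move): memory use is one stack frame per ply, while running time tracks the size of the game tree. Backward induction would instead tabulate the value of every pile size from 0 upward, trading state-space-sized memory for a single pass over positions.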
Example: tic-tac-toe (noughts and crosses)
For tic-tac-toe, a simple upper bound for the size of the state space is 3^9 = 19,683. (There are three states for each of the nine cells.) This count includes many illegal positions, such as a position with five crosses and no noughts, or a position in which both players have a row of three. A more careful count, removing these illegal positions, gives 5,478.[2][3] And when rotations and reflections of positions are considered identical, there are only 765 essentially different positions.
To bound the game tree, there are 9 possible initial moves, 8 possible responses, and so on, so that there are at most 9! = 362,880 total games. However, games may end in fewer than 9 moves, and an exact enumeration gives 255,168 possible games. When rotations and reflections of positions are considered the same, there are only 26,830 possible games.
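Both counts are small enough to verify by brute force. The following sketch (helper names are mine, not from the cited sources) enumerates the 5,478 reachable positions and the 255,168 possible games:

```python
# Exhaustive enumeration of tic-tac-toe positions and games.
WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board='.' * 9, player='X'):
    """Number of distinct complete games playable from this position."""
    if winner(board) or '.' not in board:
        return 1
    nxt = 'O' if player == 'X' else 'X'
    return sum(count_games(board[:i] + player + board[i+1:], nxt)
               for i, c in enumerate(board) if c == '.')

def count_positions():
    """Number of legal positions reachable from the empty board (inclusive)."""
    seen = {'.' * 9}
    stack = ['.' * 9]
    while stack:
        board = stack.pop()
        if winner(board) or '.' not in board:
            continue  # terminal position: no further moves
        player = 'X' if board.count('X') == board.count('O') else 'O'
        for i, c in enumerate(board):
            if c == '.':
                nxt = board[:i] + player + board[i+1:]
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return len(seen)
```

Running `count_positions()` gives 5,478 and `count_games()` gives 255,168, matching the figures above.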
The computational complexity of tic-tac-toe depends on how it is generalized. A natural generalization is to m,n,k-games: played on an m by n board with winner being the first player to get k in a row. This game can be solved in DSPACE(mn) by searching the entire game tree. This places it in the important complexity class PSPACE; with more work, it can be shown to be PSPACE-complete.[4]
Complexities of some well-known games
Due to the large size of game complexities, this table gives the ceiling of their logarithm to base 10 (in other words, the number of digits). All of the following numbers should be considered with caution: seemingly minor changes to the rules of a game can change the numbers (which are often rough estimates anyway) by tremendous factors, which might easily be much greater than the numbers shown.
| Game | Board size (positions) | State-space complexity (as log to base 10) | Game-tree complexity (as log to base 10) | Average game length (plies) | Branching factor | Ref | Complexity class of suitable generalized game |
|---|---|---|---|---|---|---|---|
| Tic-tac-toe | 9 | 3 | 5 | 9 | 4 | | PSPACE-complete[5] |
| Sim | 15 | 3 | 8 | 14 | 3.7 | | PSPACE-complete[6] |
| Pentominoes | 64 | 12 | 18 | 10 | 75 | [7][8] | ?, but in PSPACE |
| Connect Four | 42 | 12 (4,531,985,219,092) | 21 | 36 | 4 | [1][9][10] | ?, but in PSPACE |
| Kalah[11] | 14 | 13 | 18 | 50 | | [7] | Generalization is unclear |
| Domineering (8 × 8) | 64 | 15 | 27 | 30 | 8 | [7] | ?, but in PSPACE; in P for certain dimensions[12] |
| Congkak | 14 | 15 | 33 | | | [7] | |
| English draughts (8x8) (checkers) | 32 | 20 or 18 | 40 | 70 | 2.8 | [1][13][14] | EXPTIME-complete[15] |
| Awari[16] | 12 | 12 | 32 | 60 | 3.5 | [1] | Generalization is unclear |
| Qubic | 64 | 30 | 34 | 20 | 54.2 | [1] | PSPACE-complete[5] |
| Double dummy bridge[nb 1] | (52) | <17 | <40 | 52 | 5.6 | | PSPACE-complete[17] |
| Fanorona | 45 | 21 | 46 | 44 | 11 | [18] | ?, but in EXPTIME |
| Nine men's morris | 24 | 10 | 50 | 50 | 10 | [1] | ?, but in EXPTIME |
| Tablut | 81 | 27 | | | | [19] | |
| International draughts (10x10) | 50 | 30 | 54 | 90 | 4 | [1] | EXPTIME-complete[15] |
| Chinese checkers (2 sets) | 121 | 23 | | | 180 | [20] | EXPTIME-complete[21] |
| Chinese checkers (6 sets) | 121 | 78 | | | 600 | [20] | EXPTIME-complete[21] |
| Reversi (Othello) | 64 | 28 | 58 | 58 | 10 | [1] | PSPACE-complete[22] |
| OnTop (2p base game) | 72 | 88 | 62 | 31 | 23.77 | [23] | |
| Lines of Action | 64 | 23 | 64 | 44 | 29 | [24] | ?, but in EXPTIME |
| Gomoku (15x15, freestyle) | 225 | 105 | 70 | 30 | 210 | [1] | PSPACE-complete[5] |
| Hex (11x11) | 121 | 57 | 98 | 50 | 96 | [7] | PSPACE-complete[5] |
| Chess | 64 | 44 | 123 | 70 | 35 | [25] | EXPTIME-complete (without 50-move drawing rule)[26] |
| Bejeweled and Candy Crush (8x8) | 64 | <50 | 70 | | | [27] | NP-hard |
| GIPF | 37 | 25 | 132 | 90 | 29.3 | [28] | |
| Connect6 | 361 | 172 | 140 | 30 | 46000 | [29] | PSPACE-complete[30] |
| Backgammon | 28 | 20 | 144 | 55 | 250 | [31] | EXPTIME-hard (for the real-life setting where the opponent's strategy and dice rolls are unknown)[32] |
| Xiangqi | 90 | 40 | 150 | 95 | 38 | [1][33][34] | ?, believed to be EXPTIME-complete |
| Abalone | 61 | 25 | 154 | 87 | 60 | [35][36] | PSPACE-hard, and in EXPTIME |
| Havannah | 271 | 127 | 157 | 66 | 240 | [7][37] | PSPACE-complete[38] |
| Twixt | 572 | 140 | 159 | 60 | 452 | [39] | |
| Janggi | 90 | 44 | 160 | 100 | 40 | [34] | ?, believed to be EXPTIME-complete |
| Quoridor | 81 | 42 | 162 | 91 | 60 | [40] | ?, but in PSPACE |
| Carcassonne (2p base game) | 72 | >40 | 195 | 71 | 55 | [41] | Generalization is unclear |
| Amazons (10x10) | 100 | 40 | 212 | 84 | 374 or 299[42] | [43][44] | PSPACE-complete[45] |
| Shogi | 81 | 71 | 226 | 115 | 92 | [33][46] | EXPTIME-complete[47] |
| Thurn and Taxis (2 player) | 33 | 66 | 240 | 56 | 879 | [48] | |
| Go (19x19) | 361 | 170 | 505 | 211 | 250 | [1][49][50] | EXPTIME-complete (without the superko rule)[52] |
| Arimaa | 64 | 43 | 402 | 92 | 17281 | [53][54][55] | ?, but in EXPTIME |
| Stratego | 92 | 115 | 535 | 381 | 21.739 | [56] | |
| Infinite chess | infinite | infinite | infinite | infinite | infinite | [57] | Unknown, but mate-in-n is decidable[58] |
| Magic: The Gathering | | | | | | [59] | AH-hard[60] |
| Wordle | 5 | 4.113 (12,972) | | 6 | | [61] | NP-hard; unknown if PSPACE-complete with parameterization |
Notes
- ^ Double dummy bridge (i.e., double dummy problems in the context of contract bridge) is not a proper board game but has a similar game tree, and is studied in computer bridge. The bridge table can be regarded as having one slot for each player and trick to play a card in, which corresponds to board size 52. Game-tree complexity is a very weak upper bound: 13! to the power of 4 players regardless of legality. State-space complexity is for one given deal; likewise regardless of legality but with many transpositions eliminated. The last 4 plies are always forced moves with branching factor 1.
References
- ^ a b c d e f g h i j k l Victor Allis (1994). Searching for Solutions in Games and Artificial Intelligence (PDF) (Ph.D. thesis). University of Limburg, Maastricht, The Netherlands. ISBN 90-900748-8-0.
- ^ "combinatorics - TicTacToe State Space Choose Calculation". Mathematics Stack Exchange. Retrieved 2020-04-08.
- ^ T, Brian (October 20, 2018). "Btsan/generate_tictactoe". GitHub. Retrieved 2020-04-08.
- ^ Stefan Reisch (1980). "Gobang ist PSPACE-vollständig (Gobang is PSPACE-complete)". Acta Informatica. 13 (1): 59–66. doi:10.1007/bf00288536. S2CID 21455572.
- ^ a b c d Stefan Reisch (1981). "Hex ist PSPACE-vollständig (Hex is PSPACE-complete)". Acta Inform (15): 167–191.
- ^ Slany, Wolfgang (2000). "The complexity of graph Ramsey games". In Marsland, T. Anthony; Frank, Ian (eds.). Computers and Games, Second International Conference, CG 2000, Hamamatsu, Japan, October 26-28, 2000, Revised Papers. Lecture Notes in Computer Science. Vol. 2063. Springer. pp. 186–203. doi:10.1007/3-540-45579-5_12. ISBN 978-3-540-43080-3.
- ^ a b c d e f H. J. van den Herik; J. W. H. M. Uiterwijk; J. van Rijswijck (2002). "Games solved: Now and in the future". Artificial Intelligence. 134 (1–2): 277–311. doi:10.1016/S0004-3702(01)00152-7.
- ^ Orman, Hilarie K. (1996). "Pentominoes: a first player win" (PDF). In Nowakowski, Richard J. (ed.). Games of No Chance: Papers from the Combinatorial Games Workshop held in Berkeley, CA, July 11–21, 1994. Mathematical Sciences Research Institute Publications. Vol. 29. Cambridge University Press. pp. 339–344. ISBN 0-521-57411-0. MR 1427975.
- ^ John Tromp (2010). "John's Connect Four Playground".
- ^ Edelkamp, Stefan, and Peter Kissmann. “Symbolic Classification of General Two-Player Games.” KI 2008: Advances in Artificial Intelligence, edited by Andreas R. Dengel et al., vol. 5243, Springer Berlin Heidelberg, 2008, pp. 185–92. DOI.org (Crossref), https://doi.org/10.1007/978-3-540-85845-4_23.
- ^ See van den Herik et al. for rules.
- ^ Lachmann, Michael; Moore, Cristopher; Rapaport, Ivan (2002). "Who wins Domineering on rectangular boards?". In Nowakowski, Richard (ed.). More Games of No Chance: Proceedings of the 2nd Combinatorial Games Theory Workshop held in Berkeley, CA, July 24–28, 2000. Mathematical Sciences Research Institute Publications. Vol. 42. Cambridge University Press. pp. 307–315. ISBN 0-521-80832-4. MR 1973019.
- ^ Jonathan Schaeffer; et al. (July 6, 2007). "Checkers is Solved". Science. 317 (5844): 1518–1522. Bibcode:2007Sci...317.1518S. doi:10.1126/science.1144079. PMID 17641166. S2CID 10274228.
- ^ Schaeffer, Jonathan (2007). "Game over: Black to play and draw in checkers" (PDF). ICGA Journal. 30 (4): 187–197. doi:10.3233/ICG-2007-30402. Archived from the original (PDF) on 2016-04-03.
- ^ a b J. M. Robson (1984). "N by N checkers is Exptime complete". SIAM Journal on Computing. 13 (2): 252–267. doi:10.1137/0213018.
- ^ See Allis 1994 for rules
- ^ Bonnet, Edouard; Jamain, Florian; Saffidine, Abdallah (2013). "On the complexity of trick-taking card games". In Rossi, Francesca (ed.). IJCAI 2013, Proceedings of the 23rd International Joint Conference on Artificial Intelligence, Beijing, China, August 3-9, 2013. IJCAI/AAAI. pp. 482–488.
- ^ M.P.D. Schadd; M.H.M. Winands; J.W.H.M. Uiterwijk; H.J. van den Herik; M.H.J. Bergsma (2008). "Best Play in Fanorona leads to Draw" (PDF). New Mathematics and Natural Computation. 4 (3): 369–387. doi:10.1142/S1793005708001124.
- ^ Andrea Galassi (2018). "An Upper Bound on the Complexity of Tablut".
- ^ a b G.I. Bell (2009). "The Shortest Game of Chinese Checkers and Related Problems". Integers. 9. arXiv:0803.1245. Bibcode:2008arXiv0803.1245B. doi:10.1515/INTEG.2009.003. S2CID 17141575.
- ^ a b Kasai, Takumi; Adachi, Akeo; Iwata, Shigeki (1979). "Classes of pebble games and complete problems". SIAM Journal on Computing. 8 (4): 574–586. doi:10.1137/0208046. MR 0573848. Proves completeness of the generalization to arbitrary graphs.
- ^ Iwata, Shigeki; Kasai, Takumi (1994). "The Othello game on an n × n board is PSPACE-complete". Theoretical Computer Science. 123 (2): 329–340. doi:10.1016/0304-3975(94)90131-7. MR 1256205.
- ^ Robert Briesemeister (2009). Analysis and Implementation of the Game OnTop (PDF) (Thesis). Maastricht University, Dept of Knowledge Engineering.
- ^ Mark H.M. Winands (2004). Informed Search in Complex Games (PDF) (Ph.D. thesis). Maastricht University, Maastricht, The Netherlands. ISBN 90-5278-429-9.
- ^ The size of the state space and game tree for chess were first estimated in Claude Shannon (1950). "Programming a Computer for Playing Chess" (PDF). Philosophical Magazine. 41 (314). Archived from the original (PDF) on 2010-07-06. Shannon gave estimates of 10^43 and 10^120 respectively, smaller than the upper bound in the table, which is detailed in Shannon number.
- ^ Fraenkel, Aviezri S.; Lichtenstein, David (1981). "Computing a perfect strategy for n × n chess requires time exponential in n". Journal of Combinatorial Theory, Series A. 31 (2): 199–214. doi:10.1016/0097-3165(81)90016-9. MR 0629595.
- ^ Gualà, Luciano; Leucci, Stefano; Natale, Emanuele (2014). "Bejeweled, Candy Crush and other match-three games are (NP-)hard". 2014 IEEE Conference on Computational Intelligence and Games, CIG 2014, Dortmund, Germany, August 26-29, 2014. IEEE. pp. 1–8. arXiv:1403.5830. doi:10.1109/CIG.2014.6932866. ISBN 978-1-4799-3547-5.
- ^ Diederik Wentink (2001). Analysis and Implementation of the game Gipf (PDF) (Thesis). Maastricht University.
- ^ Chang-Ming Xu; Ma, Z.M.; Jun-Jie Tao; Xin-He Xu (2009). "Enhancements of proof number search in connect6". 2009 Chinese Control and Decision Conference. p. 4525. doi:10.1109/CCDC.2009.5191963. ISBN 978-1-4244-2722-2. S2CID 20960281.
- ^ Hsieh, Ming Yu; Tsai, Shi-Chun (October 1, 2007). "On the fairness and complexity of generalized k -in-a-row games". Theoretical Computer Science. 385 (1–3): 88–100. doi:10.1016/j.tcs.2007.05.031.
- ^ Tesauro, Gerald (May 1, 1992). "Practical issues in temporal difference learning". Machine Learning. 8 (3–4): 257–277. doi:10.1007/BF00992697.
- ^ Witter, R.T. (2021). Backgammon Is Hard. In: Du, DZ., Du, D., Wu, C., Xu, D. (eds) Combinatorial Optimization and Applications. COCOA 2021. Lecture Notes in Computer Science(), vol 13135. Springer, Cham. https://doi.org/10.1007/978-3-030-92681-6_38
- ^ a b Shi-Jim Yen, Jr-Chang Chen; Tai-Ning Yang; Shun-Chin Hsu (March 2004). "Computer Chinese Chess" (PDF). International Computer Games Association Journal. 27 (1): 3–18. doi:10.3233/ICG-2004-27102. S2CID 10336286. Archived from the original (PDF) on 2007-06-14.
- ^ a b Donghwi Park (2015). "Space-state complexity of Korean chess and Chinese chess". arXiv:1507.06401 [math.GM].
- ^ Chorus, Pascal. "Implementing a Computer Player for Abalone Using Alpha-Beta and Monte-Carlo Search" (PDF). Dept of Knowledge Engineering, Maastricht University. Retrieved 2012-03-29.
- ^ Kopczynski, Jacob S (2014). Pushy Computing: Complexity Theory and the Game Abalone (Thesis). Reed College.
- ^ Joosten, B. "Creating a Havannah Playing Agent" (PDF). Retrieved 2012-03-29.
- ^ E. Bonnet; F. Jamain; A. Saffidine (March 25, 2014). "Havannah and TwixT are PSPACE-complete". arXiv:1403.6518 [cs.CC].
- ^ Kevin Moesker (2009). Twixt: Theory, Analysis, and Implementation (PDF) (Thesis). Faculty of Humanities and Sciences of Maastricht University.
- ^ Lisa Glendenning (May 2005). Mastering Quoridor (PDF). Computer Science (B.Sc. thesis). University of New Mexico. Archived from the original (PDF) on 2012-03-15.
- ^ Cathleen Heyden (2009). Implementing a Computer Player for Carcassonne (PDF) (Thesis). Maastricht University, Dept of Knowledge Engineering.
- ^ The lower branching factor is for the second player.
- ^ Kloetzer, Julien; Iida, Hiroyuki; Bouzy, Bruno (2007). "The Monte-Carlo approach in Amazons" (PDF). Computer Games Workshop, Amsterdam, the Netherlands, 15-17 June 2007. pp. 185–192.
- ^ P. P. L. M. Hensgens (2001). "A Knowledge-Based Approach of the Game of Amazons" (PDF). Universiteit Maastricht, Institute for Knowledge and Agent Technology.
- ^ R. A. Hearn (February 2, 2005). "Amazons is PSPACE-complete". arXiv:cs.CC/0502013.
- ^ Hiroyuki Iida; Makoto Sakuta; Jeff Rollason (January 2002). "Computer shogi". Artificial Intelligence. 134 (1–2): 121–144. doi:10.1016/S0004-3702(01)00157-6.
- ^ H. Adachi; H. Kamekawa; S. Iwata (1987). "Shogi on n × n board is complete in exponential time". Trans. IEICE. J70-D: 1843–1852.
- ^ F.C. Schadd (2009). Monte-Carlo Search Techniques in the Modern Board Game Thurn and Taxis (PDF) (Thesis). Maastricht University. Archived from the original (PDF) on 2021-01-14.
- ^ John Tromp; Gunnar Farnebäck (2007). "Combinatorics of Go". This paper derives the bounds 48<log(log(N))<171 on the number of possible games N.
- ^ John Tromp (2016). "Number of legal Go positions".
- ^ "Statistics on the length of a go game".
- ^ J. M. Robson (1983). "The complexity of Go". Information Processing; Proceedings of IFIP Congress. pp. 413–417.
- ^ Christ-Jan Cox (2006). "Analysis and Implementation of the Game Arimaa" (PDF).
- ^ David Jian Wu (2011). "Move Ranking and Evaluation in the Game of Arimaa" (PDF).
- ^ Brian Haskin (2006). "A Look at the Arimaa Branching Factor".
- ^ A.F.C. Arts (2010). Competitive Play in Stratego (PDF) (Thesis). Maastricht.
- ^ CDA Evans and Joel David Hamkins (2014). "Transfinite game values in infinite chess". arXiv:1302.4377 [math.LO].
- ^ Stefan Reisch, Joel David Hamkins, and Phillipp Schlicht (2012). "The mate-in-n problem of infinite chess is decidable". Conference on Computability in Europe: 78–88. arXiv:1201.5597.
- ^ Alex Churchill, Stella Biderman, and Austin Herrick (2020). "Magic: the Gathering is Turing Complete". arXiv:1904.09828 [cs.AI].
- ^ Stella Biderman (2020). "Magic: the Gathering is as Hard as Arithmetic". arXiv:2003.05119 [cs.AI].
- ^ Lokshtanov, Daniel; Subercaseaux, Bernardo (May 14, 2022). "Wordle is NP-hard". arXiv:2203.16713 [cs.CC].
Measures of Game Complexity
State-Space Complexity
State-space complexity refers to the total number of distinct legal positions or configurations that can be reached in a game from its initial state, often denoted as $|S|$ to represent the cardinality of the state space.[3] This measure captures the breadth of possible game states, excluding duplicates or unreachable configurations due to game rules. The concept was introduced in the context of combinatorial game theory by Claude Shannon in his seminal 1950 analysis of chess, where he estimated the state-space complexity to highlight the challenges of computational simulation.[4] Shannon's work laid the foundation for evaluating game scale in artificial intelligence and theoretical computer science.

For board games like chess, state-space complexity is typically calculated through combinatorial enumeration, approximating the product of possible placements for each piece type while subtracting illegal positions (e.g., those violating rules like pawn promotion or king exposure).[3] In Shannon's chess estimation, this yielded roughly $10^{43}$ positions, derived from arrangements of up to 32 pieces on 64 squares, adjusted for symmetries and constraints such as castling rights.[4] Exact counts often require sophisticated algorithms to enumerate valid states, as naive products overestimate due to rule-based invalidations.

This metric is crucial as it quantifies the "breadth" of the game world, serving as a prerequisite for assessing search spaces in AI solvers, where exhaustive exploration becomes infeasible beyond small $|S|$.[3] A larger state space generally implies a potentially expanded game tree, though the latter accounts for sequential paths rather than unique positions alone. For rough estimations in games with average branching factor $b$ and maximum depth $d$, $|S|$ is sometimes approximated as $b^d$, but precise enumeration is preferred when computationally viable to avoid under- or overestimation.[4]
Branching Factor

The branching factor, denoted as $b$, represents the average number of legal moves available from any given position in a game, serving as a key metric for assessing the breadth of the search space in game trees.[5] In formal terms, it is calculated as the total number of edges in the game tree divided by the total number of non-leaf nodes, where edges correspond to legal transitions between positions.[6] This distinguishes the effective branching factor, which counts only valid legal moves under the game's rules, from a raw branching factor that might consider all conceivable actions without constraints, though the former is standard in game analysis due to its relevance to actual play.[5]

For instance, in chess, Claude Shannon estimated the average branching factor at approximately 30 legal moves per position, drawing from empirical data on typical games.[7] This figure, sometimes cited as 30–35 to account for variations, underscores the rapid expansion of possibilities in complex games.[5] Branching factors vary across a game's phases: the initial branching factor is often higher due to more open positions and piece mobility, the average reflects overall play, and the terminal branching factor decreases near the endgame as fewer moves remain viable.[8] These variations contribute to the exponential growth of the game tree, where the number of positions at depth $d$ scales roughly as $b^d$, amplifying search challenges in deeper explorations.[9]

The branching factor is a critical parameter in algorithms like minimax search, where the time complexity is $O(b^d)$, with $d$ as the search depth, highlighting the computational effort required for evaluating moves.[10] This metric, introduced in early AI literature during the 1950s, enabled foundational analyses of game solvability.[7] However, it assumes a uniform branching factor across the tree, an idealization that rarely holds in practice, as actual values fluctuate by position, game stage, and rules, potentially leading to over- or underestimations of complexity.[5]
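The edges-over-internal-nodes definition can be checked concretely on tic-tac-toe, whose full game tree is small enough to walk (the helper names are illustrative). Note that this per-node average is dominated by the many low-branching positions deep in the tree, so it need not match per-ply figures quoted elsewhere:

```python
# Average branching factor of the tic-tac-toe game tree, computed as
# total edges divided by total non-leaf (internal) nodes.
WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def tree_stats(board='.' * 9, player='X'):
    """Return (total nodes, leaf nodes) of the game tree rooted at `board`."""
    if winner(board) or '.' not in board:
        return 1, 1
    nodes, leaves = 1, 0
    nxt = 'O' if player == 'X' else 'X'
    for i, c in enumerate(board):
        if c == '.':
            n, l = tree_stats(board[:i] + player + board[i+1:], nxt)
            nodes += n
            leaves += l
    return nodes, leaves

nodes, leaves = tree_stats()
edges = nodes - 1            # every node except the root has one incoming edge
internal = nodes - leaves
avg_branching = edges / internal
```

The leaf count equals the 255,168 distinct games, since each leaf corresponds to one completed game.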
Game-Tree Size

In combinatorial game theory and artificial intelligence, the game-tree size refers to the total number of nodes in the full game tree, which encompasses all possible sequences of moves from the initial position to terminal states in a finite, perfect-information game. This measure captures the exhaustive structure of decision paths, where each node represents a position after a sequence of moves, and the tree branches according to legal actions available to players. For finite games without cycles, the game tree is acyclic, and its size is exact; however, approximations are often used for practical estimation, such as $b^d$ for the number of leaf nodes, where $b$ is the average branching factor and $d$ is the maximum depth (or average game length in plies).[3][11]

Unlike state-space complexity, which counts the number of unique positions regardless of how they are reached, game-tree size accounts for all paths through the tree, including duplicates arising from transpositions—situations where different move sequences lead to the same board position. This distinction arises because the game tree models the sequential nature of play, preserving the order of moves, whereas the state space abstracts away path history to focus on positional configurations. As a result, game-tree size can be exponentially larger than the state space due to these redundant paths, emphasizing the combinatorial explosion of possible playouts rather than just distinct configurations.[11][12]

The size of the game tree can be computed exactly for small games but is typically approximated for larger ones assuming a uniform branching factor. For a uniform tree of depth $d$, the total number of nodes is given by the geometric series:

$$\sum_{i=0}^{d} b^i = \frac{b^{d+1} - 1}{b - 1}$$
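The uniform-tree node count can be checked numerically: for small illustrative values of b and d, the sum of b**i for i = 0..d matches the closed form (b**(d+1) - 1)/(b - 1):

```python
# Numeric check of the geometric-series node count for a uniform tree.
b, d = 3, 5
total_nodes = sum(b**i for i in range(d + 1))   # 1 + 3 + 9 + 27 + 81 + 243
closed_form = (b**(d + 1) - 1) // (b - 1)       # (3**6 - 1) / 2
```

Both evaluate to 364 for these values.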
Computational Complexity

Computational complexity in game theory refers to the computational resources, specifically time and space, required to determine an optimal strategy for a game, including computing the game's value under perfect play and the corresponding moves from any position. This analysis classifies the problem of solving a game within standard complexity classes, such as PSPACE or EXPTIME, based on the input size (e.g., board dimensions). For trivial games with a fixed, small number of positions, such as those solvable by exhaustive enumeration in constant time, the complexity is $O(1)$. In contrast, many finite, perfect-information games without chance elements are EXPTIME-complete, requiring exponential time in the worst case relative to the input size.[16] A landmark classification is the proof that generalized chess on an $n \times n$ board is EXPTIME-complete, implying that computing a perfect strategy demands time at least exponential in $n$.[17]

Retrograde analysis, a backward induction method commonly used to solve endgames by propagating values from terminal positions, has a time complexity of $O(|S| \cdot b)$, where $|S|$ denotes the number of reachable states and $b$ the average branching factor; since $|S|$ grows exponentially with board size, the overall complexity remains exponential. Space complexity in such analyses is typically $O(|S|)$, directly tied to storing values for each state, though optimizations like bit-packing can reduce this. Briefly, this space requirement scales with the underlying state-space size, underscoring the memory challenges in large games.

Key factors influencing computational complexity include the choice of search strategy and memory management techniques. Depth-first search algorithms, such as minimax, explore the game tree recursively and achieve a time complexity of $O(b^d)$, where $b$ is the effective branching factor and $d$ the maximum depth to terminal positions; this contrasts with breadth-first search, which demands $O(b^d)$ space for storing all nodes at the frontier, rendering it infeasible for deep trees. Transposition tables address redundancies by hashing positions to store and reuse previously computed values, potentially reducing both time and space by avoiding recomputation of equivalent subtrees, with space usage bounded by the table size (often a fraction of $|S|$ via hashing).[10]

These complexities explain why most non-trivial games, including chess, remain unsolved despite advances in hardware: exhaustive evaluation of the game tree would require approximately $10^{120}$ operations, exceeding feasible computational limits even with optimizations like alpha-beta pruning.[7] Parallel computing has provided practical speedups, enabling distributed evaluation of subtrees across multiple processors and reducing wall-clock time for partial solutions, as demonstrated in scalable algorithms for game-tree search. However, such parallelism yields at most polynomial speedups and does not alter the fundamental exponential theoretical lower bounds for classes like EXPTIME.[18]
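A transposition table's effect is easy to demonstrate on a game with heavy state sharing, such as a one-pile subtraction game (the code and names below are an illustrative sketch, not from the cited sources):

```python
def solve(n, maximizing=True, table=None, counter=None):
    """Minimax for a subtraction game (take 1-3 from a pile of n; whoever
    takes the last object wins), optionally memoized with a transposition
    table keyed by (pile size, side to move)."""
    key = (n, maximizing)
    if table is not None and key in table:
        return table[key]                 # transposition hit: reuse stored value
    if counter is not None:
        counter[0] += 1                   # count genuinely expanded nodes
    if n == 0:
        value = -1 if maximizing else 1   # previous player took the last object
    else:
        children = [solve(n - k, not maximizing, table, counter)
                    for k in (1, 2, 3) if k <= n]
        value = max(children) if maximizing else min(children)
    if table is not None:
        table[key] = value
    return value

plain, memo = [0], [0]
value_plain = solve(20, counter=plain)           # exhaustive depth-first search
value_memo = solve(20, table={}, counter=memo)   # with a transposition table
```

Both searches agree on the value, but the memoized run expands at most two nodes per pile size (one per side to move), while the plain search revisits shared subtrees exponentially often.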
Decision Tree Measures

Decision Complexity
Decision complexity measures the minimal computational effort required to determine the optimal outcome from the initial position in a game under perfect play by both players. It is defined as the number of leaf nodes in the smallest decision tree that establishes the value (win, loss, or draw) of the starting position, representing only the essential evaluations needed after applying optimal pruning techniques. This metric is substantially smaller than the full game-tree size, as it eliminates irrelevant branches that do not influence the final decision.[11] The concept was introduced in the early 1990s to differentiate the resources for optimal decision-making from those for exhaustive exploration, highlighting how strategic insights reduce search demands in two-player zero-sum games.[19]

Pruning in this tree relies on game rules, such as terminal conditions for wins, losses, or draws, to cut off subtrees where bounds on possible outcomes exceed or undercut the current best, ensuring focus on decision-relevant paths. In practice, alpha-beta pruning approximates this structure by maintaining lower (alpha) and upper (beta) bounds during minimax search, dynamically discarding branches that cannot improve the result.[20] Calculating decision complexity exactly involves constructing the minimal proof tree, often via dynamic programming methods like retrograde analysis, which evaluates positions backward from terminals for small games such as Connect-Four or Awari.
For larger games, approximations assume perfect move ordering in alpha-beta search, yielding a time complexity of roughly $ b^{d/2} $, where $ b $ is the average branching factor and $ d $ is the effective depth to a decision; this provides a theoretical lower bound on the number of evaluations needed.[11][20] In AI applications, decision complexity underpins the efficiency of game-playing agents: static evaluation functions approximate leaf values in the pruned tree, enabling deeper searches within computational limits and balancing accuracy against resource constraints in algorithms such as minimax with alpha-beta pruning. The measure informs the scalability of perfect-information game solvers and guides heuristic development for complex domains.[11]

Game-tree complexity
Game-tree complexity measures the scale of the search space in a game under the assumption that both players pursue optimal strategies, focusing on the effective number of nodes in the minimax search tree rather than all possible sequences of moves. The concept arises in two-player zero-sum games, where the minimax algorithm evaluates positions by maximizing one player's score while assuming the opponent minimizes it, effectively pruning irrelevant branches to determine the value of a position. The theoretical foundation stems from John von Neumann's minimax theorem, which proves that finite two-player zero-sum games with perfect information have a value and optimal strategies for both players, enabling the construction of such decision trees.[21]

In a naive minimax search without optimizations, the game-tree size grows exponentially as $ b^d $, where $ b $ is the average branching factor and $ d $ is the depth in plies (half-moves). Alpha-beta pruning, which eliminates branches that cannot influence the final decision, reduces this significantly under good move ordering: with perfect ordering the number of nodes examined approximates $ b^{\lceil d/2 \rceil} + b^{\lfloor d/2 \rfloor} - 1 $, and for typical cases with good but imperfect ordering the effective complexity is bounded by $ O(b^{3d/4}) $, a substantial reduction compared to the unpruned tree while still capturing optimal play.[22]

This measure is particularly suited to games like chess, where Claude Shannon estimated the unpruned game-tree complexity at approximately $ 10^{120} $ for a full game, but pruning makes deeper searches feasible in practice.[14] The approach inherently handles two-player zero-sum scenarios with perfect information, where each decision anticipates the opponent's best response.
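For a rough sense of the reduction, the best-case pruned node count can be compared with the unpruned $ b^d $. The parameters below (b = 35, d = 10 plies) are illustrative chess-like values, not measurements.

```python
import math

def unpruned_nodes(b, d):
    """Leaf count of a uniform minimax tree: b**d."""
    return b ** d

def best_case_alphabeta(b, d):
    """Minimum nodes examined by alpha-beta with perfect move ordering:
    b**ceil(d/2) + b**floor(d/2) - 1."""
    return b ** math.ceil(d / 2) + b ** math.floor(d / 2) - 1

b, d = 35, 10   # illustrative chess-like values
print(unpruned_nodes(b, d))        # 2758547353515625  (~2.8e15)
print(best_case_alphabeta(b, d))   # 105043749         (~1.1e8)
```

Even in this small example, perfect ordering cuts the work by a factor of roughly $ 10^{7} $, which is why move-ordering heuristics dominate practical engine design.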
Transposition tables further refine the tree by detecting and merging equivalent positions reached via different move sequences, avoiding redundant evaluations and lowering the effective node count.[22] A limitation is the reliance on perfect information: in imperfect-information games such as poker, complexity escalates because outcomes must be averaged over hidden states and information sets, often requiring methods beyond standard minimax trees.[14]

Examples of game complexity
Tic-tac-toe analysis
Tic-tac-toe, also known as noughts and crosses, is a two-player abstract strategy game played on a 3×3 grid. Players alternate turns placing their symbols (X for the first player, O for the second) in empty cells, aiming to form an unbroken line of three horizontally, vertically, or diagonally. The game ends in a win for the player who completes such a line, or in a draw if the board fills without one; it is finite and features perfect information with no element of chance. As a solved game, its outcome under optimal play is predetermined, making it a foundational example for studying game complexity.[23]

The state-space complexity of tic-tac-toe, the number of distinct legal board positions reachable from the start, is 5,478. Accounting for symmetries (rotations and reflections) reduces this to 765 essentially different positions. These figures exclude invalid configurations, such as boards where the symbol counts differ by more than one or where both players have completed a line. The average branching factor, the mean number of legal moves per position over the course of a game, is approximately 5, reflecting the shrinking options as the board fills (9 at the start, around 4 by mid-game). The full game-tree size, the total number of possible play sequences (leaf nodes), is 255,168. Decision complexity, the size of the reduced tree after symmetries and basic pruning eliminate redundant branches, is about 26,830 nodes.[24][25][26][27]

Retrograde analysis solves tic-tac-toe by starting from terminal positions and propagating outcomes backward: a position is a win if some move leads to a position lost for the opponent, a loss if every move leads to a position won for the opponent, and a draw otherwise.
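These counts can be checked by brute force. The sketch below enumerates complete games and computes the game value with a memoised minimax; it is an illustrative implementation, not the historical solving method.

```python
from functools import lru_cache

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
             (0, 4, 8), (2, 4, 6)]               # diagonals

def winner(board):
    """Return 'X' or 'O' if that symbol has a completed line, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board='.' * 9, player='X'):
    """Number of distinct complete games (leaf nodes of the game tree)."""
    if winner(board) or '.' not in board:
        return 1
    return sum(count_games(board[:i] + player + board[i + 1:],
                           'O' if player == 'X' else 'X')
               for i, cell in enumerate(board) if cell == '.')

@lru_cache(maxsize=None)
def value(board='.' * 9, player='X'):
    """Game value (+1 win, -1 loss, 0 draw) for the side to move."""
    if winner(board):            # the previous mover completed a line
        return -1
    if '.' not in board:
        return 0
    return max(-value(board[:i] + player + board[i + 1:],
                      'O' if player == 'X' else 'X')
               for i, cell in enumerate(board) if cell == '.')

print(count_games())   # 255168 possible games
print(value())         # 0: perfect play is a draw
```

The memoisation in `value` acts as a transposition table: positions reached by different move orders are evaluated only once.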
Backward induction of this kind reveals that perfect play forces a draw: the first player cannot secure a win against optimal responses. A simplified game-tree diagram illustrates this, with the root branching to the three symmetry classes of opening move (center, corner, edge), each leading to subtrees that converge on drawn lines under best play. Tic-tac-toe was solved by hand in the 19th century through manual enumeration of all possibilities, confirming the draw long before computational aids. It was also among the earliest games implemented on a computer: A. S. Douglas's OXO program demonstrated automated play on the EDSAC machine in 1952, a milestone in the history of AI and game solving.[28][29]

Complexities of well-known games
The complexities of well-known combinatorial games span many orders of magnitude, reflecting differences in board size, rules, and move options that determine computational solvability. State-space complexity quantifies reachable positions, the branching factor gives the average number of moves per position, and game-tree size counts the total possible playouts; the estimates serve as proxies for the search effort required. These metrics, derived from combinatorial bounds and simulations, show why games such as checkers and Othello have been solved while chess and Go resist full analysis.[11] (The magnitudes below are commonly cited order-of-magnitude estimates.)

| Game | State-space complexity | Branching factor | Game-tree size | Computational status |
|---|---|---|---|---|
| Chess | ~$ 10^{44} $ | 35 | ~$ 10^{123} $ | Unsolved |
| Checkers | ~$ 10^{20} $ | 8 | ~$ 10^{40} $ | Solved (draw, 2007) |
| Go | ~$ 10^{170} $ | 250 | ~$ 10^{360} $ | Unsolved |
| Othello | ~$ 10^{28} $ | 10 | ~$ 10^{58} $ | Solved (draw, 2023) |
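To put such magnitudes in perspective, a back-of-the-envelope calculation is useful. Both numbers below are assumptions for illustration: a commonly cited estimate of roughly $ 10^{44} $ legal chess positions, and a hypothetical machine examining $ 10^{18} $ positions per second (well beyond current hardware).

```python
# Time to enumerate an assumed state space on hypothetical hardware.
positions = 1e44                 # assumed chess state-space estimate
rate = 1e18                      # positions/second (hypothetical machine)
seconds = positions / rate
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.1e} years")      # on the order of 1e18 years
```

Even under these generous assumptions, enumeration would take billions of times the age of the universe, which is why solved games rely on pruning, symmetry, and endgame databases rather than exhaustion.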
