Computer chess
from Wikipedia
1990s pressure-sensory chess computer with LCD screen

Computer chess includes both hardware (dedicated computers) and software capable of playing chess. Computer chess provides opportunities for players to practice even in the absence of human opponents, and also provides opportunities for analysis, entertainment and training. Computer chess applications that play at the level of a chess grandmaster or higher are available on hardware ranging from supercomputers to smartphones. Standalone chess-playing machines are also available. Stockfish, Leela Chess Zero, GNU Chess, Fruit, and other free open source applications are available for various platforms.

Computer chess applications, whether implemented in hardware or software, use different strategies than humans to choose their moves: they use heuristic methods to build, search and evaluate trees representing sequences of moves from the current position, and attempt to execute the best such sequence during play. Such trees are typically very large: thousands to millions of nodes. The computational speed of modern computers, capable of processing tens of thousands to hundreds of thousands of nodes or more per second, along with extension and reduction heuristics that narrow the tree to mostly relevant nodes, makes such an approach effective.

The first chess machines capable of playing chess or reduced chess-like games were software programs running on digital computers early in the vacuum-tube computer age (1950s). The early programs played so poorly that even a beginner could defeat them. Within 40 years, in 1997, chess engines running on supercomputers or specialized hardware were capable of defeating even the best human players. By 2006, programs running on desktop PCs had attained the same capability. In 2006, Monty Newborn, Professor of Computer Science at McGill University, declared: "the science has been done". Nevertheless, solving chess is not currently possible for modern computers due to the game's extremely large number of possible variations.[1]

Computer chess was once considered the "Drosophila of AI", the edge of knowledge engineering. The field is now considered a scientifically completed paradigm, and playing chess is a mundane computing activity.[2]

Availability and playing strength

Computer chess IC bearing the name of developer Frans Morsch (see Mephisto)

In the past, stand-alone chess machines (usually microprocessors running software chess programs; occasionally specialized hardware) were sold. Today, chess engines may be installed as software on ordinary devices like smartphones and PCs,[3] either alone or alongside GUI programs such as Chessbase and the mobile apps for Chess.com and Lichess (both primarily websites).[4] Examples of free and open source engines include Stockfish[5] and Leela Chess Zero[6] (Lc0). Chess.com maintains its own proprietary engine named Torch.[7] Some chess engines, including Stockfish, have web versions made in languages like WebAssembly and JavaScript.[8] Most chess programs and sites offer the ability to analyze positions and games using chess engines, and some offer the ability to play against engines (which can be set to play at custom levels of strength) as though they were normal opponents.

Hardware requirements for chess engines are minimal, but performance varies with processor speed and with the memory available to hold large transposition tables. Most modern chess engines, such as Stockfish, rely on efficiently updatable neural networks tailored to run exclusively on CPUs,[9][10] but Lc0 uses networks reliant on GPU performance.[11][12] Top engines such as Stockfish can be expected to beat the world's best players reliably, even when running on consumer-grade hardware.[13]

Types and features of chess software


Perhaps the most common type of chess software is the program that simply plays chess. A human player makes a move on the board, the AI calculates and plays a subsequent move, and the human and AI alternate turns until the game ends. The chess engine, which calculates the moves, and the graphical user interface (GUI) are sometimes separate programs. Different engines can be connected to the GUI, permitting play against different styles of opponent. Engines often have a simple text command-line interface, while GUIs may offer a variety of piece sets, board styles, or even 3D or animated pieces. Because recent engines are so capable, engines or GUIs may offer some way of handicapping the engine's ability, to improve the odds of a win for the human player. Universal Chess Interface (UCI) engines such as Fritz or Rybka may have a built-in mechanism for reducing the Elo rating of the engine (via UCI's uci_limitstrength and uci_elo parameters). Some versions of Fritz have a Handicap and Fun mode for limiting the current engine, changing the percentage of mistakes it makes, or changing its style. Fritz also has a Friend Mode in which it tries to match the level of the player during the game.

Screenshot of Chess, a component of macOS

Chess databases allow users to search through a large library of historical games, analyze them, check statistics, and formulate an opening repertoire. Chessbase (for PC) is a common program for these purposes amongst professional players, but there are alternatives such as Shane's Chess Information Database (Scid) [14] for Windows, Mac or Linux, Chess Assistant[15] for PC,[16] Gerhard Kalab's Chess PGN Master for Android[17] or Giordano Vicoli's Chess-Studio for iOS.[18]

Programs such as Playchess allow players to play against one another over the internet.

Chess training programs teach chess. Chessmaster had playthrough tutorials by IM Josh Waitzkin and GM Larry Christiansen. Stefan Meyer-Kahlen offers Shredder Chess Tutor based on the Step coursebooks of Rob Brunia and Cor Van Wijgerden. Former World Champion Magnus Carlsen's Play Magnus company released a Magnus Trainer app for Android and iOS. Chessbase has Fritz and Chesster for children. Convekta provides a large number of training apps such as CT-ART and its Chess King line based on tutorials by GM Alexander Kalinin and Maxim Blokh.

There is also software for handling chess problems.

Computers versus humans


After discovering refutation screening—the application of alpha–beta pruning to optimizing move evaluation—in 1957, a team at Carnegie Mellon University predicted that a computer would defeat the world human champion by 1967.[19] The team did not anticipate the difficulty of determining the right order to evaluate moves. Researchers worked to improve programs' ability to identify killer heuristics, unusually high-scoring moves to reexamine when evaluating other branches, but into the 1970s most top chess players believed that computers would not soon be able to play at a Master level.[20] In 1968, International Master David Levy made a famous bet that no chess computer would be able to beat him within ten years,[21] and in 1976 Senior Master and professor of psychology Eliot Hearst of Indiana University wrote that "the only way a current computer program could ever win a single game against a master player would be for the master, perhaps in a drunken stupor while playing 50 games simultaneously, to commit some once-in-a-year blunder".[20]

In the late 1970s chess programs began defeating highly skilled human players.[20] The year of Hearst's statement, Northwestern University's Chess 4.5 became the first program to win a human tournament, taking the Class B section of the Paul Masson American Chess Championship. Levy won his bet in 1978 by beating Chess 4.7, though the program achieved the first computer victory against a Master-class player at the tournament level by winning one of the six games.[21] In 1980, Belle began often defeating Masters. By 1982 two programs played at Master level and three were slightly weaker.[20]

The sudden improvement without a theoretical breakthrough was unexpected, as many did not expect that Belle's ability to examine 100,000 positions a second—about eight plies—would be sufficient. The Spracklens, creators of the successful microcomputer program Sargon, estimated that 90% of the improvement came from faster evaluation speed and only 10% from improved evaluations. New Scientist stated in 1982 that computers "play terrible chess ... clumsy, inefficient, diffuse, and just plain ugly", but humans lost to them by making "horrible blunders, astonishing lapses, incomprehensible oversights, gross miscalculations, and the like" much more often than they realized; "in short, computers win primarily through their ability to find and exploit miscalculations in human initiatives".[20]

By 1982, microcomputer chess programs could evaluate up to 1,500 moves a second and were as strong as mainframe chess programs of five years earlier, able to defeat a majority of amateur players. While only able to look ahead one or two plies more than at their debut in the mid-1970s, doing so improved their play more than experts expected; seemingly minor improvements "appear to have allowed the crossing of a psychological threshold, after which a rich harvest of human error becomes accessible", New Scientist wrote.[20] While reviewing SPOC in 1984, BYTE wrote that "Computers—mainframes, minis, and micros—tend to play ugly, inelegant chess", but noted Robert Byrne's statement that "tactically they are freer from error than the average human player". The magazine described SPOC as a "state-of-the-art chess program" for the IBM PC with a "surprisingly high" level of play, and estimated its USCF rating as 1700 (Class B).[22]

At the 1982 North American Computer Chess Championship, Monroe Newborn predicted that a chess program could become world champion within five years; tournament director and International Master Michael Valvo predicted ten years; the Spracklens predicted 15; Ken Thompson predicted more than 20; and others predicted that it would never happen. The most widely held opinion, however, was that it would occur around the year 2000.[23] In 1989, Levy was defeated by Deep Thought in an exhibition match. Deep Thought, however, was still considerably below World Championship level, as the reigning world champion, Garry Kasparov, demonstrated in two strong wins in 1989. It was not until game 1 of a 1996 match against IBM's Deep Blue that Kasparov lost to a computer at tournament time controls—the first such loss by a reigning world champion under regular time controls. Kasparov then regrouped to win three and draw two of the remaining five games of the match, for a convincing victory.

In May 1997, an updated version of Deep Blue defeated Kasparov 3½–2½ in a return match. A documentary mainly about the confrontation was made in 2003, titled Game Over: Kasparov and the Machine.

[Diagram: final position. White: Kh2, Qd5, Rh7, Ng5, pawns a3, b3, g3, h3. Black: Kh6, Qf6, Re1, Nf2, pawns d4, f3.]

With increasing processing power and improved evaluation functions, chess programs running on commercially available workstations began to rival top-flight players. In 1998, Rebel 10 defeated Viswanathan Anand, who at the time was ranked second in the world, by a score of 5–3. However, most of those games were not played at normal time controls. Out of the eight games, four were blitz games (five minutes plus five seconds Fischer delay for each move); these Rebel won 3–1. Two were rapid games (fifteen minutes for each side) that Rebel won as well (1½–½). Finally, two games were played as regular tournament games with classic time controls (forty moves in two hours, one hour sudden death); here Anand won 1½–½.[24] In fast games, computers played better than humans, but at classical time controls – at which a player's rating is determined – the advantage was not so clear.

In the early 2000s, commercially available programs such as Junior and Fritz were able to draw matches against former world champion Garry Kasparov and classical world champion Vladimir Kramnik.

In October 2002, Vladimir Kramnik and Deep Fritz competed in the eight-game Brains in Bahrain match, which ended in a draw. Kramnik won games 2 and 3 by "conventional" anti-computer tactics – playing conservatively for a long-term advantage that the computer is not able to see in its game-tree search. Fritz, however, won game 5 after a severe blunder by Kramnik. Game 6 was described by the tournament commentators as "spectacular". Kramnik, in a better position in the early middlegame, tried a piece sacrifice to achieve a strong tactical attack, a strategy known to be highly risky against computers, which are at their strongest defending against such attacks. True to form, Fritz found a watertight defense and Kramnik's attack petered out, leaving him in a bad position. Kramnik resigned the game, believing the position lost. However, post-game human and computer analysis has shown that the Fritz program was unlikely to have been able to force a win and Kramnik effectively sacrificed a drawn position. The final two games were draws. Given the circumstances, most commentators still rate Kramnik the stronger player in the match.[citation needed]

In January 2003, Kasparov played Junior, another chess computer program, in New York City. The match ended 3–3.

In November 2003, Kasparov played X3D Fritz. The match ended 2–2.

In 2005, Hydra, a dedicated chess computer with custom hardware and sixty-four processors and also winner of the 14th IPCCC in 2005, defeated seventh-ranked Michael Adams 5½–½ in a six-game match (though Adams' preparation was far less thorough than Kramnik's for the 2002 series).[25]

In November–December 2006, World Champion Vladimir Kramnik played Deep Fritz. This time the computer won; the match ended 2–4. Kramnik was able to view the computer's opening book. In the first five games Kramnik steered the game into a typical "anti-computer" positional contest. He lost one game (overlooking a mate in one), and drew the next four. In the final game, in an attempt to draw the match, Kramnik played the more aggressive Sicilian Defence and was crushed.

There was speculation that interest in human–computer chess competition would plummet as a result of the 2006 Kramnik-Deep Fritz match.[26] According to Newborn, for example, "the science is done".[27]

Human–computer chess matches showed the best computer systems overtaking human chess champions in the late 1990s. For the 40 years prior to that, the trend had been that the best machines gained about 40 Elo points per year while the best humans gained only roughly 2 points per year.[28] The highest rating obtained by a computer in human competition was Deep Thought's USCF rating of 2551 in 1988; FIDE no longer accepts human–computer results in its rating lists. Specialized machine-only Elo pools have been created for rating machines, but such numbers, while similar in appearance, are not directly comparable.[29] In 2016, the Swedish Chess Computer Association rated the computer program Komodo at 3361.

Chess engines continue to improve. In 2009, chess engines running on slower hardware reached the grandmaster level. A mobile phone won a category 6 tournament with a performance rating of 2898: the chess engine Hiarcs 13, running inside Pocket Fritz 4 on an HTC Touch HD mobile phone, won the Copa Mercosur tournament in Buenos Aires, Argentina, with 9 wins and 1 draw on August 4–14, 2009.[30] Pocket Fritz 4 searches fewer than 20,000 positions per second.[31] This is in contrast to supercomputers such as Deep Blue, which searched 200 million positions per second.

Advanced Chess is a form of chess developed in 1998 by Kasparov in which a human plays against another human, and both have access to computers to enhance their strength. Kasparov argued that the resulting "advanced" player is stronger than a human or computer alone, and this has been demonstrated on numerous occasions, such as at Freestyle Chess events.

Players today are inclined to treat chess engines as analysis tools rather than opponents.[32] Chess grandmaster Andrew Soltis stated in 2016 "The computers are just much too good" and that world champion Magnus Carlsen won't play computer chess because "he just loses all the time and there's nothing more depressing than losing without even being in the game."[33]

Computer methods


Since the era of mechanical machines that played rook and king endings and electrical machines that played other games like hex in the early years of the 20th century, scientists and theoreticians have sought to develop a procedural representation of how humans learn, remember, think and apply knowledge, and the game of chess, because of its daunting complexity, became the "Drosophila of artificial intelligence (AI)".[Note 1] The procedural resolution of complexity became synonymous with thinking, and early computers, even before the chess automaton era, were popularly referred to as "electronic brains". Several different schemes were devised starting in the latter half of the 20th century to represent knowledge and thinking, as applied to playing the game of chess (and other games like checkers):

Using "ends-and-means" heuristics a human chess player can intuitively determine optimal outcomes and how to achieve them regardless of the number of moves necessary, but a computer must be systematic in its analysis. Most players agree that looking at least five moves ahead (ten plies) when necessary is required to play well. Normal tournament rules give each player an average of three minutes per move. On average there are more than 30 legal moves per chess position, so a computer must examine a quadrillion possibilities to look ahead ten plies (five full moves); one that could examine a million positions a second would require more than 30 years.[20]

The earliest attempts at procedural representations of playing chess predated the digital electronic age, but it was the stored program digital computer that gave scope to calculating such complexity. Claude Shannon, in 1949, laid out the principles of algorithmic solution of chess. In that paper, the game is represented by a "tree", or digital data structure of choices (branches) corresponding to moves. The nodes of the tree were positions on the board resulting from the choices of move. The impossibility of representing an entire game of chess by constructing a tree from first move to last was immediately apparent: there are an average of 36 moves per position in chess and an average game lasts about 35 moves to resignation (60–80 moves if played to checkmate, stalemate, or other draw). There are 400 positions possible after the first move by each player, about 200,000 after two moves each, and nearly 120 million after just three moves each.

So a limited lookahead (search) to some depth, followed by using domain-specific knowledge to evaluate the resulting terminal positions was proposed. A kind of middle-ground position, given good moves by both sides, would result, and its evaluation would inform the player about the goodness or badness of the moves chosen. Searching and comparing operations on the tree were well suited to computer calculation; the representation of subtle chess knowledge in the evaluation function was not. The early chess programs suffered in both areas: searching the vast tree required computational resources far beyond those available, and what chess knowledge was useful and how it was to be encoded would take decades to discover.

The developers of a chess-playing computer system must decide on a number of fundamental implementation issues. These include:

  • Graphical user interface (GUI) – how moves are entered and communicated to the user, how the game is recorded, how the time controls are set, and other interface considerations
  • Board representation – how a single position is represented in data structures;
  • Search techniques – how to identify the possible moves and select the most promising ones for further examination;
  • Leaf evaluation – how to evaluate the value of a board position, if no further search will be done from that position.

Adriaan de Groot interviewed a number of chess players of varying strengths, and concluded that both masters and beginners look at around forty to fifty positions before deciding which move to play. What makes the former much better players is that they use pattern recognition skills built from experience. This enables them to examine some lines in much greater depth than others by simply not considering moves they can assume to be poor. More evidence for this being the case is the way that good human players find it much easier to recall positions from genuine chess games, breaking them down into a small number of recognizable sub-positions, rather than completely random arrangements of the same pieces. In contrast, poor players have the same level of recall for both.

The equivalent of this in computer chess is the evaluation function used for leaf evaluation, which corresponds to the human players' pattern recognition skills, together with the machine learning techniques used in training it, such as Texel tuning, stochastic gradient descent, and reinforcement learning, which correspond to the building of experience in human players. These allow modern programs to examine some lines in much greater depth than others by using forward pruning and other selective heuristics to simply not consider moves the program assumes to be poor according to its evaluation function, in the same way that human players do. The only fundamental difference is that a computer program can search much deeper than a human player could, allowing it to search more nodes and bypass the horizon effect to a much greater extent than is possible for human players.

Graphical user interface


Computer chess programs usually support a number of common de facto standards. Nearly all of today's programs can read and write game moves as Portable Game Notation (PGN), and can read and write individual positions as Forsyth–Edwards Notation (FEN). Older chess programs often only understood long algebraic notation, but today users expect chess programs to understand standard algebraic chess notation.
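As an illustration of the FEN format mentioned above, here is a minimal sketch (assuming well-formed input; real programs must also validate and read the side-to-move, castling, and move-counter fields) that expands the piece-placement field of a FEN string into an 8×8 board:

```python
# Expand the first (piece placement) field of a FEN string into an
# 8x8 array of characters: letters are pieces (uppercase = White),
# digits are runs of empty squares, '/' separates ranks 8 down to 1.
START_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

def fen_to_board(fen):
    board = []
    placement = fen.split()[0]            # first space-separated field
    for rank in placement.split("/"):     # ranks 8 down to 1
        row = []
        for ch in rank:
            if ch.isdigit():
                row.extend("." * int(ch)) # digit = that many empty squares
            else:
                row.append(ch)            # piece letter
        board.append(row)
    return board

board = fen_to_board(START_FEN)
print("".join(board[0]))                  # Black's back rank: rnbqkbnr
```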

Starting in the late 1990s, programmers began to develop the engine (a command-line program that calculates which moves are strongest in a position) and the graphical user interface (GUI), which provides the player with a visible chessboard and movable pieces, as separate programs. Engines communicate their moves to the GUI using a protocol such as the Chess Engine Communication Protocol (CECP) or the Universal Chess Interface (UCI). By dividing chess programs into these two pieces, developers can write only the user interface, or only the engine, without needing to write both parts of the program. (See also chess engine.)

Developers have to decide whether to connect the engine to an opening book and/or endgame tablebases or leave this to the GUI.

Board representations


The data structure used to represent each chess position is key to the performance of move generation and position evaluation. Methods include pieces stored in an array ("mailbox" and "0x88"), piece positions stored in a list ("piece list"), collections of bit-sets for piece locations ("bitboards"), and Huffman-coded positions for compact long-term storage.
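As a small illustration of the bitboard idea (a toy sketch, not any engine's actual code), the snippet below stores White's pawns in a single 64-bit integer with bit i standing for square i (a1 = bit 0, h8 = bit 63), so advancing every pawn one rank is a single shift:

```python
# Bitboard sketch: one 64-bit integer per piece type. Shifting the
# whole board left by 8 bits moves every pawn up one rank at once,
# which is why bitboards make move generation fast.
WHITE_PAWNS_START = 0x000000000000FF00    # all eight pawns on rank 2

def pawn_single_pushes(pawns, occupied):
    # Shift up one rank, mask out occupied squares, and clamp to 64 bits
    # (Python ints are unbounded, so the clamp is explicit here).
    return (pawns << 8) & ~occupied & 0xFFFFFFFFFFFFFFFF

pushes = pawn_single_pushes(WHITE_PAWNS_START, WHITE_PAWNS_START)
print(hex(pushes))   # 0xff0000: every pawn can advance to rank 3
```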

Search techniques


Computer chess programs consider chess moves as a game tree. In theory, they examine all moves, then all counter-moves to those moves, then all moves countering them, and so on, where each individual move by one player is called a "ply". This evaluation continues until a certain maximum search depth or the program determines that a final "leaf" position has been reached (e.g. checkmate).

Minimax search

One particular type of search algorithm used in computer chess is the minimax search algorithm, in which at each ply the "best" move by the player to move is selected; one player is trying to maximize the score, the other to minimize it. By this alternating process, the search arrives at one particular terminal node whose evaluation represents the searched value of the position. That value is backed up to the root, and the evaluation becomes the valuation of the position on the board. This search process is called minimax.
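A minimal minimax sketch over a toy game tree, represented here as nested lists whose leaves are static scores standing in for evaluation-function output:

```python
# Minimax over a toy tree: inner nodes are lists of children, leaves
# are numeric evaluations. The maximizer and minimizer alternate plies.
def minimax(node, maximizing):
    if not isinstance(node, list):        # leaf: a static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two-ply toy tree: the maximizer picks the branch whose worst-case
# (minimizer) reply is best for it: min(3,5)=3 beats min(2,9)=2.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))   # 3
```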

A naive implementation of the minimax algorithm can only search to a small depth in a practical amount of time, so various methods have been devised to greatly speed the search for good moves. Alpha–beta pruning, a system of defining upper and lower bounds on possible search results and searching until the bounds coincide, is typically used to reduce the search space of the program.
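A sketch of alpha-beta pruning on a toy game tree of nested lists (leaves are static scores); alpha and beta carry the running lower and upper bounds, and a branch is abandoned as soon as it can no longer affect the result:

```python
# Alpha-beta over a toy tree: same result as plain minimax, but
# branches that cannot change the outcome are cut off early.
def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):        # leaf: a static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                     # cutoff: opponent avoids this line
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break                     # cutoff
        return value

tree = [[3, 5], [2, 9]]
print(alphabeta(tree, float("-inf"), float("inf"), True))   # 3
```

On this tree the second branch is cut off after seeing the leaf 2, since the maximizer already has 3 guaranteed.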

In addition, various selective search heuristics, such as quiescence search, forward pruning, search extensions and search reductions, are also used. These heuristics are triggered based on certain conditions in an attempt to weed out obviously bad moves (history moves) or to investigate interesting nodes (e.g. check extensions, passed pawns on the seventh rank, etc.). These selective search heuristics have to be used very carefully, however. If the program overextends, it wastes too much time looking at uninteresting positions. If too much is pruned or reduced, there is a risk of cutting out interesting nodes.
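As one concrete example of these heuristics, here is a quiescence-search sketch in negamax form; `evaluate`, `capture_moves`, and `make_move` are hypothetical placeholders for a real engine's position handling, passed in so the sketch stays self-contained:

```python
# Quiescence sketch: at a nominal leaf, keep searching "noisy" moves
# (captures) until the position is quiet, instead of trusting the
# static evaluation mid-exchange. Negamax convention: evaluate() is
# from the side to move's point of view.
def quiescence(pos, alpha, beta, evaluate, capture_moves, make_move):
    stand_pat = evaluate(pos)          # score if we stop searching here
    if stand_pat >= beta:
        return beta                    # already too good: cutoff
    alpha = max(alpha, stand_pat)
    for move in capture_moves(pos):    # only captures are extended
        score = -quiescence(make_move(pos, move), -beta, -alpha,
                            evaluate, capture_moves, make_move)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```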

Monte Carlo tree search

Monte Carlo tree search (MCTS) is a heuristic search algorithm which expands the search tree based on random sampling of the search space. A version of Monte Carlo tree search commonly used in computer chess is PUCT (Predictor + Upper Confidence bounds applied to Trees).

DeepMind's AlphaZero and Leela Chess Zero use MCTS instead of minimax. Such engines use batching on graphics processing units in order to calculate their evaluation functions and policy (move selection), and therefore require a parallel search algorithm, as calculations on the GPU are inherently parallel. The minimax and alpha-beta pruning algorithms used in computer chess are inherently serial algorithms, so they would not work well with batching on the GPU. MCTS is a good alternative because the random sampling used in Monte Carlo tree search lends itself well to parallel computing, which is why nearly all engines that support calculations on the GPU use MCTS instead of alpha-beta.

Other optimizations


Many other optimizations can be used to make chess-playing programs stronger. For example, transposition tables are used to record positions that have been previously evaluated, to save recalculation of them. Refutation tables record key moves that "refute" what appears to be a good move; these are typically tried first in variant positions (since a move that refutes one position is likely to refute another). The drawback is that transposition tables at deep ply depths can get quite large – tens to hundreds of millions of entries. IBM's Deep Blue transposition table in 1996, for example, was 500 million entries. Transposition tables that are too small can result in spending more time searching for non-existent entries due to thrashing than the time saved by entries found. Many chess engines use pondering – searching to deeper levels on the opponent's time, similar to human beings – to increase their playing strength.
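A minimal transposition-table sketch: real engines index a fixed-size array by a Zobrist hash of the position; a Python dict keyed by a position string is used here purely for illustration:

```python
# Transposition-table sketch: store the depth and score of earlier
# searches so the same position, reached via a different move order,
# is not searched again from scratch.
table = {}

def store(key, depth, score):
    table[key] = {"depth": depth, "score": score}

def probe(key, depth):
    entry = table.get(key)
    # Only reuse a result searched at least as deeply as we need now.
    if entry is not None and entry["depth"] >= depth:
        return entry["score"]
    return None

store("some-position-key", depth=6, score=25)
print(probe("some-position-key", depth=4))   # 25: shallower request, hit
print(probe("some-position-key", depth=8))   # None: not deep enough
```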

Of course, faster hardware and additional memory can improve chess program playing strength. Hyperthreaded architectures can improve performance modestly if the program is running on a single core or a small number of cores. Most modern programs are designed to take advantage of multiple cores to do parallel search. Other programs are designed to run on a general purpose computer and allocate move generation, parallel search, or evaluation to dedicated processors or specialized co-processors.

History


The first paper on chess search was by Claude Shannon in 1950.[34] He predicted the two main possible search strategies which would be used, which he labeled "Type A" and "Type B",[35] before anyone had programmed a computer to play chess.

Type A programs would use a "brute force" approach, examining every possible position for a fixed number of moves using a pure naive minimax algorithm. Shannon believed this would be impractical for two reasons.

First, with approximately thirty moves possible in a typical real-life position, he expected that searching the approximately 10⁹ positions involved in looking three moves ahead for both sides (six plies) would take about sixteen minutes, even in the "very optimistic" case that the chess computer evaluated a million positions every second. (It took about forty years to achieve this speed.) A later search algorithm called alpha–beta pruning, a system of defining upper and lower bounds on possible search results and searching until the bounds coincide, reduced the branching factor of the game tree logarithmically, but it still was not feasible for chess programs at the time to cope with the exponential explosion of the tree.
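Shannon's estimate can be reproduced with a few lines of arithmetic:

```python
# Arithmetic behind Shannon's Type A estimate: roughly 30 legal moves
# per position, six plies deep, at a (then hypothetical) million
# position evaluations per second.
positions = 30 ** 6                      # 729,000,000 - on the order of 10**9
minutes = positions / 1_000_000 / 60
print(f"{positions:,} positions, ~{minutes:.0f} minutes")
# Rounding the count up to a full 10**9 gives Shannon's ~16 minutes:
print(f"~{10**9 / 1_000_000 / 60:.1f} minutes")
```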

Second, it ignored the problem of quiescence: a position can only be meaningfully evaluated at the end of an exchange of pieces or other important sequence of moves ('lines'). He expected that adapting type A to cope with this would greatly increase the number of positions needing to be looked at and slow the program down still further.

This led naturally to what is referred to as "selective search" or "type B search", using chess knowledge (heuristics) to select a few presumably good moves from each position to search, and prune away the others without searching. Instead of wasting processing power examining bad or trivial moves, Shannon suggested that type B programs would use two improvements:

  1. Employ a quiescence search.
  2. Employ forward pruning; i.e. only look at a few good moves for each position.

This would enable them to look further ahead ('deeper') at the most significant lines in a reasonable time. However, early attempts at selective search often resulted in the best move or moves being pruned away. As a result, little or no progress was made for the next 25 years, which were dominated by this first iteration of the selective search paradigm. The best program produced in this early period was Mac Hack VI in 1967; it played at about the same level as the average amateur (C class on the United States Chess Federation rating scale).

Meanwhile, hardware continued to improve, and in 1974, brute force searching was implemented for the first time in the Northwestern University Chess 4.0 program. In this approach, all alternative moves at a node are searched, and none are pruned away. They discovered that the time required to simply search all the moves was much less than the time required to apply knowledge-intensive heuristics to select just a few of them, and the benefit of not prematurely or inadvertently pruning away good moves resulted in substantially stronger performance.

In the 1980s and 1990s, progress was finally made in the selective search paradigm, with the development of quiescence search, null move pruning, and other modern selective search heuristics. These heuristics made far fewer mistakes than their predecessors, were found to be worth the extra search depth they bought, and were widely adopted by many engines. While many modern programs do use alpha-beta search as a substrate for their search algorithm, these additional selective search heuristics mean that the program no longer does a "brute force" search. Instead it relies heavily on them to extend lines the program considers good and to prune and reduce lines the program considers bad, to the point where most of the nodes in the search tree are pruned away, enabling modern programs to search very deep.

In 2006, Rémi Coulom created Monte Carlo tree search, another kind of type B selective search. In 2007, an adaptation of Monte Carlo tree search called Upper Confidence bounds applied to Trees, or UCT for short, was created by Levente Kocsis and Csaba Szepesvári. In 2011, Chris Rosin developed a variation of UCT called Predictor + Upper Confidence bounds applied to Trees, or PUCT for short. PUCT was then used in AlphaZero in 2017, and later in Leela Chess Zero in 2018.
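
The PUCT selection rule that steers this kind of search can be sketched as follows. The child representation and the constant `c_puct = 1.5` are illustrative assumptions; real engines tune these values.

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing the PUCT score used in AlphaZero-style
    search: exploitation (mean value Q) plus an exploration bonus that
    scales with the prior probability P and shrinks with visit count N."""
    total_visits = sum(c["N"] for c in children)
    def score(c):
        exploration = c_puct * c["P"] * math.sqrt(total_visits) / (1 + c["N"])
        return c["Q"] + exploration
    return max(children, key=score)

# A rarely visited move with a high prior can outrank a well-explored one:
children = [
    {"name": "e4", "Q": 0.55, "P": 0.40, "N": 100},
    {"name": "d4", "Q": 0.30, "P": 0.50, "N": 1},
]
print(puct_select(children)["name"])  # → d4
```

The bonus term is what makes the search selective: promising-but-unexplored moves get visits early, while moves the network considers bad are rarely expanded.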

Knowledge versus search (processor speed)

In the 1970s, most chess programs ran on supercomputers like Control Data Cyber 176s or Cray-1s, indicating that during that developmental period for computer chess, processing power was the limiting factor in performance. Most chess programs struggled to search to a depth greater than 3 ply. It was not until the hardware chess machines of the 1980s that a relationship between processor speed and knowledge encoded in the evaluation function became apparent.

It has been estimated that doubling the computer speed gains approximately fifty to seventy Elo points in playing strength (Levy & Newborn 1991:192).
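
Under the standard Elo logistic model (a general formula, not specific to Levy & Newborn's estimate), such a gain translates into an expected score as follows:

```python
def expected_score(elo_diff):
    """Standard Elo logistic model: expected score (win = 1, draw = 0.5)
    for the side holding a rating advantage of `elo_diff` points."""
    return 1 / (1 + 10 ** (-elo_diff / 400))

# A ~60 Elo edge (roughly one speed doubling, per the estimate above)
# corresponds to scoring about 58.5% against the slower version.
print(round(expected_score(60), 3))  # → 0.585
```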

Leaf evaluation

For most chess positions, computers cannot look ahead to all possible final positions. Instead, they must look ahead a few plies and compare the possible positions, known as leaves. The algorithm that evaluates leaves is termed the "evaluation function", and these algorithms are often vastly different between different chess programs. Evaluation functions typically evaluate positions in hundredths of a pawn (called a centipawn), where by convention, a positive evaluation favors White, and a negative evaluation favors Black. However, some evaluation functions output win/draw/loss percentages instead of centipawns.

Historically, handcrafted evaluation functions have considered material value along with other factors affecting the strength of each side. When counting up the material for each side, typical values for pieces are 1 point for a pawn, 3 points for a knight or bishop, 5 points for a rook, and 9 points for a queen. (See Chess piece relative value.) The king is sometimes given an arbitrarily high value such as 200 points (Shannon's paper) to ensure that a checkmate outweighs all other factors (Levy & Newborn 1991:45). In addition to points for pieces, most handcrafted evaluation functions take many positional factors into account, such as pawn structure, the fact that a pair of bishops is usually worth a bonus, that centralized pieces are worth more, and so on. The protection of kings is usually considered, as well as the phase of the game (opening, middle or endgame). Machine learning techniques such as Texel tuning, stochastic gradient descent, or reinforcement learning are usually used to optimise handcrafted evaluation functions.
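
A minimal sketch of such a handcrafted evaluation, assuming a toy piece-list representation and an arbitrary 50-centipawn bishop-pair bonus (real engines use many more terms and tuned weights):

```python
# Centipawn values; the king is excluded here (its loss ends the game).
PIECE_VALUES = {"P": 100, "N": 300, "B": 300, "R": 500, "Q": 900}

def evaluate(white_pieces, black_pieces):
    """Toy handcrafted evaluation: material count plus a bishop-pair
    bonus, in centipawns. Positive favours White, negative Black.
    Inputs are lists of piece letters, e.g. ["Q", "R", "P", "P"]."""
    def side_score(pieces):
        material = sum(PIECE_VALUES[p] for p in pieces if p in PIECE_VALUES)
        bishop_pair = 50 if pieces.count("B") >= 2 else 0  # assumed bonus
        return material + bishop_pair
    return side_score(white_pieces) - side_score(black_pieces)

# White has queen plus the bishop pair vs. rook, bishop and two pawns:
print(evaluate(["Q", "B", "B"], ["R", "B", "P", "P"]))  # → 550
```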

Most modern evaluation functions make use of neural networks. The most common evaluation function in use today is the efficiently updatable neural network, which is a shallow neural network whose inputs are piece-square tables. Piece-square tables are a set of 64 values corresponding to the squares of the chessboard, and there typically exists a piece-square table for every piece and colour, resulting in 12 piece-square tables and thus 768 inputs into the neural network. In addition, some engines use deep neural networks in their evaluation function. Neural networks are usually trained using some reinforcement learning algorithm, in conjunction with supervised learning or unsupervised learning.
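
The 768-feature input encoding described above can be sketched like this; the exact feature ordering varies between engines, so the index layout here is an illustrative assumption:

```python
PIECES = ["P", "N", "B", "R", "Q", "K"]

def feature_index(colour, piece, square):
    """Map (colour, piece, square) to one of 768 = 2 x 6 x 64 input
    features. colour: 0 = white, 1 = black; square: 0..63 (a1 = 0,
    h8 = 63). The ordering chosen here is purely illustrative."""
    return colour * 384 + PIECES.index(piece) * 64 + square

def encode(position):
    """One-hot encode a position given as (colour, piece, square) triples,
    producing the input vector fed to the shallow network."""
    x = [0.0] * 768
    for colour, piece, square in position:
        x[feature_index(colour, piece, square)] = 1.0
    return x

# White king on e1 (square 4), black king on e8 (square 60):
x = encode([(0, "K", 4), (1, "K", 60)])
print(sum(x), x.index(1.0))  # → 2.0 324
```

The "efficiently updatable" part comes from the fact that a move flips only a handful of these features, so the network's first-layer accumulator can be updated incrementally rather than recomputed.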

The output of the evaluation function is a single scalar, quantized in centipawns or other units, which is, in the case of handcrafted evaluation functions, a weighted summation of the various factors described, or in the case of neural network based evaluation functions, the output of the head of the neural network. The evaluation putatively represents or approximates the value of the subtree below the evaluated node as if it had been searched to termination, i.e. the end of the game. During the search, an evaluation is compared against evaluations of other leaves, eliminating nodes that represent bad or poor moves for either side, to yield a node which, by convergence, represents the value of the position with best play by both sides.

Endgame tablebases

Endgame play had long been one of the great weaknesses of chess programs because of the depth of search needed. Some otherwise master-level programs were unable to win in positions where even intermediate human players could force a win.

To solve this problem, computers have been used to analyze some chess endgame positions completely, starting with king and pawn against king. Such endgame tablebases are generated in advance using a form of retrograde analysis, starting with positions where the final result is known (e.g., where one side has been mated) and seeing which other positions are one move away from them, then which are one move from those, etc. Ken Thompson was a pioneer in this area.
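
The retrograde idea can be sketched on an abstract game graph. The dictionary interface and position names here are illustrative; a real generator enumerates every legal piece placement and works over billions of positions.

```python
from collections import deque

def solve(moves, terminal_lost):
    """Retrograde analysis sketch. `moves[p]` lists the positions
    reachable in one move from p (the side to move alternates
    implicitly); `terminal_lost` are positions where the side to move
    has already lost (e.g. is checkmated). Returns 'WIN' or 'LOSS'
    (for the side to move) per solvable position; positions never
    resolved are draws."""
    result = {p: "LOSS" for p in terminal_lost}
    unresolved = {p: len(moves[p]) for p in moves}  # moves not yet refuted
    preds = {}
    for p, succs in moves.items():
        for s in succs:
            preds.setdefault(s, []).append(p)
    queue = deque(terminal_lost)
    while queue:
        p = queue.popleft()
        for q in preds.get(p, []):
            if q in result:
                continue
            if result[p] == "LOSS":
                result[q] = "WIN"          # q can move into a lost position
                queue.append(q)
            else:
                unresolved[q] -= 1
                if unresolved[q] == 0:     # every move reaches a won position
                    result[q] = "LOSS"
                    queue.append(q)
    return result

# Toy chain: from A the only move is to B; from B the only move is to C,
# where the side to move is mated.
table = solve({"A": ["B"], "B": ["C"], "C": []}, ["C"])
print(table["A"], table["B"], table["C"])  # → LOSS WIN LOSS
```

Tracking the iteration at which each position is labelled yields depth-to-mate, which is what tablebases store.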

The results of the computer analysis sometimes surprised people. In 1977 Thompson's Belle chess machine used the endgame tablebase for a king and rook against king and queen and was able to draw that theoretically lost ending against several masters (see Philidor position#Queen versus rook). This was despite not following the usual strategy to delay defeat by keeping the defending king and rook close together for as long as possible. Asked to explain the reasons behind some of the program's moves, Thompson was unable to do so beyond saying the program's database simply returned the best moves.

Most grandmasters declined to play against the computer in the queen versus rook endgame, but Walter Browne accepted the challenge. A queen versus rook position was set up in which the queen can win in thirty moves, with perfect play. Browne was allowed 2½ hours to play fifty moves, otherwise a draw would be claimed under the fifty-move rule. After forty-five moves, Browne agreed to a draw, being unable to force checkmate or win the rook within the next five moves. In the final position, Browne was still seventeen moves away from checkmate, but not quite that far away from winning the rook. Browne studied the endgame, and played the computer again a week later in a different position in which the queen can win in thirty moves. This time, he captured the rook on the fiftieth move, giving him a winning position.[36][37]

Other positions, long believed to be won, turned out to take more moves against perfect play to actually win than were allowed by chess's fifty-move rule. As a consequence, for some years the official FIDE rules of chess were changed to extend the number of moves allowed in these endings. After a while, the rule reverted to fifty moves in all positions – more such positions were discovered, complicating the rule still further, and the extensions made no difference in human play, as humans could not play the positions perfectly.

Over the years, other endgame database formats have been released including the Edward Tablebase, the De Koning Database and the Nalimov Tablebase which is used by many chess programs such as Rybka, Shredder and Fritz. Tablebases for all positions with six pieces are available.[38] Some seven-piece endgames have been analyzed by Marc Bourzutschky and Yakov Konoval.[39] Programmers using the Lomonosov supercomputers in Moscow have completed a chess tablebase for all endgames with seven pieces or fewer (trivial endgame positions are excluded, such as six white pieces versus a lone black king).[40][41] In all of these endgame databases it is assumed that castling is no longer possible.

Many tablebases do not consider the fifty-move rule, under which a game where fifty moves pass without a capture or pawn move can be claimed to be a draw by either player. This results in the tablebase returning results such as "Forced mate in sixty-six moves" in some positions which would actually be drawn because of the fifty-move rule. One reason for this is that if the rules of chess were changed once more to give more time to win such positions, it would not be necessary to regenerate all the tablebases. It is also very easy for a program using the tablebases to notice and take account of this 'feature', and in any case the program will choose the move that leads to the quickest win (even if that line would fall foul of the fifty-move rule with perfect play). If playing an opponent not using a tablebase, such a choice will give good chances of winning within fifty moves.

The Nalimov tablebases, which use state-of-the-art compression techniques, require 7.05 GB of hard disk space for all five-piece endings. To cover all the six-piece endings requires approximately 1.2 TB. It is estimated that a seven-piece tablebase requires between 50 and 200 TB of storage space.[42]

Endgame databases featured prominently in 1999, when Kasparov played an exhibition match on the Internet against the rest of the world. A seven piece Queen and pawn endgame was reached with the World Team fighting to salvage a draw. Eugene Nalimov helped by generating the six piece ending tablebase where both sides had two Queens which was used heavily to aid analysis by both sides.

The most popular endgame tablebase is Syzygy, which is used by most top computer programs like Stockfish, Leela Chess Zero, and Komodo. It is also significantly smaller in size than other formats, with 7-piece tablebases taking only 18.4 TB.[43]

For a current state-of-the-art chess engine like Stockfish, a tablebase provides only a very minor increase in playing strength (approximately 3 Elo points for 6-man Syzygy tablebases as of Stockfish 15).[44]

Opening book

Chess engines, like human players, can save processing time and select variations known to be strong by consulting an opening book stored in a database. Opening books cover the opening moves of a game to variable depth, depending on the opening and variation, but usually to the first 10-12 moves (20-24 ply). In the early eras of computer chess, trusting variations studied in depth by human grandmasters for decades was superior to the weak play of mid-20th-century engines. Even in the contemporary era, allowing computer engines to extensively analyze various openings beforehand, and then simply consult the results during a game, speeds up their play.
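
A book probe amounts to a table lookup before any search is run. This sketch keys a hypothetical miniature book by the move sequence played so far; real books are typically keyed by a position hash and hold millions of lines.

```python
import random

# Tiny illustrative book: moves-so-far → known-good replies.
BOOK = {
    (): ["e4", "d4", "c4"],
    ("e4",): ["e5", "c5"],
    ("e4", "c5"): ["Nf3"],
}

def book_move(history):
    """Return a book reply for the game so far, or None once the game
    has left the book and the engine must search for itself."""
    candidates = BOOK.get(tuple(history))
    return random.choice(candidates) if candidates else None

print(book_move(["e4", "c5"]))         # → Nf3
print(book_move(["e4", "e5", "Nf3"]))  # out of book → None
```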

In the 1990s, some theorists believed that chess engines of the day owed much of their strength to memorized opening books and knowledge dedicated to known positions, and thus believed a valid anti-computer tactic would be to intentionally play some out-of-book moves in order to force the chess program to think for itself. This seems to have been a dubious assumption even then; Garry Kasparov tried it by playing the non-standard Mieses Opening in game 1 of the 1997 Deep Blue versus Garry Kasparov match, but lost. The tactic became even weaker as time passed: the opening books stored in computer databases can be far more extensive than even the best-prepared human's, meaning computers are well prepared for even rare variations and know the correct play. More generally, the play of engines even in fully unknown situations (as arise in variants such as Chess960) is still exceptionally strong, so the lack of an opening book is not a major disadvantage for tactically sharp chess engines, which can accurately discover strong moves in unfamiliar positions.

In contemporary engine tournaments, engines are often told to play situations from a variety of openings, including unbalanced ones, to reduce the draw rate and to add more variety to the games.[45]

Computer chess rating lists

CEGT,[46] CSS,[47] SSDF,[48] WBEC,[49] REBEL,[50] FGRL,[51] and IPON[52] maintain rating lists allowing fans to compare the strength of engines. Various versions of Stockfish, Komodo, Leela Chess Zero, and Fat Fritz dominate the rating lists in the early 2020s.

CCRL (Computer Chess Rating Lists) is an organisation that tests computer chess engines' strength by playing the programs against each other. CCRL was founded in 2006 to promote computer-computer competition and tabulate results on a rating list.[53]

The organisation runs three different lists: 40/40 (40 minutes for every 40 moves played), 40/4 (4 minutes for every 40 moves played), and 40/4 FRC (same time control but Chess960).[Note 2] Pondering (or permanent brain) is switched off and timing is adjusted to the AMD64 X2 4600+ (2.4 GHz) CPU by using Crafty 19.17 BH as a benchmark. Generic, neutral opening books are used (as opposed to the engine's own book) up to a limit of 12 moves into the game, alongside 4- or 5-man tablebases.[53][54][55]

History

Pre-computer age

El Ajedrecista

The idea of creating a chess-playing machine dates back to the eighteenth century. Around 1769, the chess-playing automaton called The Turk, created by the Hungarian inventor Wolfgang von Kempelen, became famous before being exposed as a hoax. Before the development of digital computing, serious attempts based on automata, such as El Ajedrecista of 1912, built by Spanish engineer Leonardo Torres Quevedo to play a king and rook versus king ending, were too limited to be useful for playing full games of chess. The field of mechanical chess research languished until the advent of the digital computer in the 1950s.

Early software age: selective search and Botvinnik

Since then, chess enthusiasts and computer engineers have built, with increasing degrees of seriousness and success, chess-playing machines and computer programs. One of the few chess grandmasters to devote himself seriously to computer chess was former World Chess Champion Mikhail Botvinnik, who wrote several works on the subject. Botvinnik's interest in computer chess began in the 1950s; he favoured chess algorithms based on Shannon's selective type B strategy, as he discussed with Max Euwe on Dutch television in 1958. Working with the relatively primitive hardware available in the Soviet Union in the early 1960s, Botvinnik had no choice but to investigate software move selection techniques; at the time only the most powerful computers could achieve much beyond a three-ply full-width search, and Botvinnik had no such machines. In 1965 Botvinnik was a consultant to the ITEP team in a US-Soviet computer chess match; their program won a correspondence match against the Kotok-McCarthy program, led by John McCarthy, in 1967 (see Kotok-McCarthy). Later he advised the team that created the chess program Kaissa at Moscow's Institute of Control Sciences. Botvinnik also had his own ideas for modelling a chess master's mind. After publishing and discussing his early ideas on attack maps and trajectories at the Moscow Central Chess Club in 1966, he found a supporter and collaborator in Vladimir Butenko. Butenko first implemented the 15x15 vector attacks board representation on an M-20 computer, determining trajectories. After Botvinnik introduced the concept of zones in 1970, Butenko refused further cooperation and began to write his own program, dubbed Eureka. In the 1970s and 1980s, leading a team including Boris Stilman, Alexander Yudin, Alexander Reznitskiy, Michael Tsfasman and Mikhail Chudakov, Botvinnik worked on his own artificial-intelligence-based chess project, 'Pioneer'. In the 1990s, by then in his 80s, he worked on a new project, 'CC Sapiens'.

Later software age: full-width search

One developmental milestone occurred when the team from Northwestern University, which was responsible for the Chess series of programs and won the first three ACM Computer Chess Championships (1970–72), abandoned type B searching in 1973. The resulting program, Chess 4.0, won that year's championship, and its successors went on to come in second in both the 1974 ACM Championship and that year's inaugural World Computer Chess Championship, before winning the ACM Championship again in 1975, 1976 and 1977. The type A implementation turned out to be just as fast: in the time it used to take to decide which moves were worthy of being searched, it was possible simply to search all of them. In fact, Chess 4.0 set the paradigm, first successfully demonstrated by the Russian ITEP program in 1965, that essentially all modern chess programs still follow.

Rise of chess machines

In 1978, an early rendition of Ken Thompson's hardware chess machine Belle entered and won the North American Computer Chess Championship over the dominant Northwestern University Chess 4.7.

Microcomputer revolution

Technological advances by orders of magnitude in processing power have made the brute force approach far more effective than was the case in the early years. The result was that a very solid, tactical AI player, aided by some limited positional knowledge built in by the evaluation function and pruning/extension rules, began to match the best players in the world. It turned out to produce excellent results, at least in the field of chess, to let computers do what they do best (calculate) rather than coax them into imitating human thought processes and knowledge. In 1997 Deep Blue, a brute-force machine capable of examining 500 million nodes per second, defeated World Champion Garry Kasparov, marking the first time a computer had defeated a reigning world chess champion in standard time control.

Super-human chess

In 2016, NPR asked experts to characterize the playing style of computer chess engines. Murray Campbell of IBM stated that "Computers don't have any sense of aesthetics... They play what they think is the objectively best move in any position, even if it looks absurd, and they can play any move no matter how ugly it is." Grandmasters Andrew Soltis and Susan Polgar stated that computers are more likely to retreat than humans are.[33]

Neural network revolution

While neural networks have been used in the evaluation functions of chess engines since the late 1980s, with programs such as NeuroChess, Morph, Blondie25, Giraffe, AlphaZero, and MuZero,[56][57][58][59][60] neural networks did not become widely adopted by chess engines until the arrival of efficiently updatable neural networks in the summer of 2020. Efficiently updatable neural networks were originally developed in computer shogi in 2018 by Yu Nasu,[61][62] and were first ported to a derivative of Stockfish called Stockfish NNUE on 31 May 2020,[63] then integrated into the official Stockfish engine on 6 August 2020,[64][65] before other chess programmers began to adopt neural networks into their engines.

Some people, such as the Royal Society's Venki Ramakrishnan, believe that AlphaZero led to the widespread adoption of neural networks in chess engines.[66] However, AlphaZero influenced very few engines to begin using neural networks, and those tended to be new experimental engines such as Leela Chess Zero, which was begun specifically to replicate the AlphaZero paper. The deep neural networks used in AlphaZero's evaluation function required expensive graphics processing units, which were not compatible with existing chess engines. The vast majority of chess engines use only central processing units, and computing and processing information on GPUs requires special backend libraries such as Nvidia's CUDA, which none of the engines had access to. Thus the vast majority of chess engines such as Komodo and Stockfish continued to use handcrafted evaluation functions until efficiently updatable neural networks were ported to computer chess from computer shogi in 2020; these require neither GPUs nor libraries like CUDA. Even then, the neural networks used in computer chess are fairly shallow, and the deep reinforcement learning methods pioneered by AlphaZero remain extremely rare in computer chess.

Timeline

  • 1769 – Wolfgang von Kempelen builds the Turk. Presented as a chess-playing automaton, it is secretly operated by a human player hidden inside the machine.
  • 1868 – Charles Hooper presents the Ajeeb automaton – which also has a human chess player hidden inside.
  • 1912 – Leonardo Torres y Quevedo builds El Ajedrecista, a machine that could play King and Rook versus King endgames.
  • 1941 – Predating comparable work by at least a decade, Konrad Zuse develops computer chess algorithms in his Plankalkül programming formalism. Because of the circumstances of the Second World War, however, they were not published, and did not come to light, until the 1970s.
  • 1948 – Norbert Wiener's book Cybernetics describes how a chess program could be developed using a depth-limited minimax search with an evaluation function.
  • 1950 – Claude Shannon publishes "Programming a Computer for Playing Chess", one of the first papers on the algorithmic methods of computer chess.
  • 1951 – Alan Turing is first to publish a program, developed on paper, that was capable of playing a full game of chess (dubbed Turochamp).[67][68]
  • 1952 – Dietrich Prinz develops a program that solves chess problems.
[Diagram: the starting position of Los Alamos chess on a 6×6 board: rooks, knights, queen and king on each back rank, a full rank of pawns for each side, and no bishops.]
Los Alamos chess. This simplified version of chess was played in 1956 by the MANIAC I computer.
  • 1956 – Los Alamos chess becomes the first chess-like game played by a computer, via a program developed by Paul Stein and Mark Wells for the MANIAC I.
  • 1956 – John McCarthy invents the alpha–beta search algorithm.
  • 1957 – The first programs that can play a full game of chess are developed, one by Alex Bernstein[69] and one by Russian programmers using a BESM.
  • 1958 – NSS becomes the first chess program to use the alpha–beta search algorithm.
  • 1962 – The first program to play credibly, Kotok-McCarthy, is published at MIT.
  • 1963 – Grandmaster David Bronstein defeats an M-20 running an early chess program.[70]
  • 1966–67 – The first chess match between computer programs is played. Moscow Institute for Theoretical and Experimental Physics (ITEP) defeats Kotok-McCarthy at Stanford University by telegraph over nine months.
  • 1967 – Mac Hack VI, by Richard Greenblatt et al., introduces transposition tables and employs dozens of carefully tuned move selection heuristics; it becomes the first program to defeat a person in tournament play. Mac Hack VI played at about C-class level.
  • 1968 – Scottish chess champion David Levy makes a 500 pound bet with AI pioneers John McCarthy and Donald Michie that no computer program would win a chess match against him within 10 years.
  • 1970 – Monty Newborn and the Association for Computing Machinery organize the first North American Computer Chess Championships in New York.
  • 1971 – Ken Thompson, an American Computer scientist at Bell Labs and creator of the Unix operating system, writes his first chess-playing program called "chess" for the earliest version of Unix.[71]
  • 1974 – David Levy, Ben Mittman and Monty Newborn organize the first World Computer Chess Championship which is won by the Russian program Kaissa.
  • 1975 – After nearly a decade of only marginal progress since the high-water mark of Greenblatt's MacHack VI in 1967, Northwestern University Chess 4.5 is introduced featuring full-width search, and innovations of bitboards and iterative deepening. It also reinstated a transposition table as first seen in Greenblatt's program. It was thus the first program with an integrated modern structure and became the model for all future development. Chess 4.5 played strong B-class and won the 3rd World Computer Chess Championship the next year.[72] Northwestern University Chess and its descendants dominated computer chess until the era of hardware chess machines in the early 1980s.
  • 1976 – In December, Canadian programmer Peter R. Jennings releases Microchess, the first game for microcomputers to be sold.[73]
Released in 1977, Boris was one of the first chess computers to be widely marketed. It ran on a Fairchild F8 8-bit microprocessor with only 2.5 KiB ROM and 256 byte RAM.
  • 1977 – In March, Fidelity Electronics releases Chess Challenger, the first dedicated chess computer to be sold. The International Computer Chess Association is founded by chess programmers to organize computer chess championships and report on research and advancements on computer chess in their journal. Also that year, Applied Concepts released Boris, a dedicated chess computer in a wooden box with plastic chess pieces and a folding board.
  • 1978 – David Levy wins the bet made 10 years earlier, defeating Chess 4.7 in a six-game match by a score of 4½–1½. The computer's victory in game four is the first defeat of a human master in a tournament.[21]
  • 1979 – Frederic Friedel organizes a match between IM David Levy and Chess 4.8, which is broadcast on German television. Levy and Chess 4.8, running on a CDC Cyber 176, the most powerful computer in the world, fought a grueling 89-move draw.
  • 1980 – Fidelity computers win the World Microcomputer Championships each year from 1980 through 1984. In Germany, Hegener & Glaser release their first Mephisto dedicated chess computer. The USCF prohibits computers from competing in human tournaments except when represented by the chess systems' creators.[74] The Fredkin Prize, offering $100,000 to the creator of the first chess machine to defeat the world chess champion, is established.
  • 1981 – Cray Blitz wins the Mississippi State Championship with a perfect 5–0 score and a performance rating of 2258. In round 4 it defeats Joe Sentef (2262) to become the first computer to beat a master in tournament play and the first computer to gain a master rating.
  • 1984 – The German Company Hegener & Glaser's Mephisto line of dedicated chess computers begins a long streak of victories (1984–1990) in the World Microcomputer Championship using dedicated computers running programs ChessGenius and Rebel.
  • 1986 – Software Country (see Software Toolworks) released Chessmaster 2000 based on an engine by David Kittinger, the first edition of what was to become the world's best selling line of chess programs.
  • 1987 – Frederic Friedel and physicist Matthias Wüllenweber found Chessbase, releasing the first chess database program. Stuart Cracraft releases GNU Chess, one of the first 'chess engines' to be bundled with a separate graphical user interface (GUI), chesstool.[75]
  • 1988 – HiTech, developed by Hans Berliner and Carl Ebeling, wins a match against grandmaster Arnold Denker 3½–½. Deep Thought shares first place with Tony Miles in the Software Toolworks Championship, ahead of former world champion Mikhail Tal and several grandmasters including Samuel Reshevsky, Walter Browne and Mikhail Gurevich. It also defeats grandmaster Bent Larsen, making it the first computer to beat a GM in a tournament. Its rating for performance in this tournament of 2745 (USCF scale) was the highest obtained by a computer player.[76][77]
  • 1989 – Deep Thought demolishes David Levy in a 4-game match 0–4, bringing to an end his famous series of wagers starting in 1968.
  • 1990 – On April 25, former world champion Anatoly Karpov lost in a simul to Hegener & Glaser's Mephisto Portorose M68030 chess computer.[78]
  • 1991 – The ChessMachine based on Ed Schröder's Rebel wins the World Microcomputer Chess Championship
  • 1992 – ChessMachine wins the 7th World Computer Chess Championship, the first time a microcomputer beat mainframes. GM John Nunn releases Secrets of Rook Endings, the first book based on endgame tablebases developed by Ken Thompson.
  • 1993 – Deep Thought-2 loses a four-game match against Bent Larsen. Chess programs running on personal computers surpass Mephisto's dedicated chess computers to win the Microcomputer Championship, marking a shift from dedicated chess hardware to software on multipurpose personal computers.
  • 1995 – Fritz 3, running on a 90 MHz Pentium PC, beats the Deep Thought-2 dedicated chess machine, and programs running on several supercomputers, to win the 8th World Computer Chess Championship in Hong Kong. This marks the first time a chess program running on commodity hardware defeats specialized chess machines and massive supercomputers, indicating a shift in emphasis from brute computational power to algorithmic improvements in the evolution of chess engines.
  • 1996 – IBM's Deep Blue loses a six-game match against Garry Kasparov, 2–4.
  • 1997 – Deep(er) Blue, a highly modified version of the original, wins a six-game match against Garry Kasparov, 3.5–2.5.
  • 2000 – Stefan Meyer-Kahlen and Rudolf Huber draft the Universal Chess Interface, a protocol for GUIs to talk to engines, which would gradually become the standard form for new engines.
  • 2002 – Vladimir Kramnik ties an eight-game match against Deep Fritz.
  • 2003 – Kasparov draws a six-game match against Deep Junior and draws a four-game match against X3D Fritz.
  • 2004 – A team of computers (Hydra, Deep Junior and Fritz) wins 8½–3½ against a strong human team formed by Veselin Topalov, Ruslan Ponomariov and Sergey Karjakin, who had an average Elo rating of 2681. Fabien Letouzey releases the source code for Fruit 2.1, an engine quite competitive with the top closed-source engines of the time. This leads many authors to revise their code, incorporating the new ideas.
  • 2005 – Rybka wins the IPCCC tournament and very quickly afterwards becomes the strongest engine.[79]
  • 2006 – The world champion, Vladimir Kramnik, is defeated 4–2 by Deep Fritz.
  • 2009 – Pocket Fritz 4, running on a smartphone, wins Copa Mercosur, an International Master level tournament, scoring 9½/10 and earning a performance rating of 2900.[30] A group of pseudonymous Russian programmers release the source code of Ippolit, an engine seemingly stronger than Rybka. This becomes the basis for the engines Robbolito and Ivanhoe, and many engine authors adopt ideas from it.
  • 2010 – Before the World Chess Championship 2010, Topalov prepares by sparring against the supercomputer Blue Gene with 8,192 processors capable of 500 trillion (5 × 10^14) floating-point operations per second.[80] Rybka developer Vasik Rajlich accuses Ippolit of being a clone of Rybka.
  • 2011 – The ICGA strips Rybka of its WCCC titles.[81][82]
  • 2017 – AlphaZero, a neural net-based digital automaton, beats Stockfish 28–0, with 72 draws, in a 100-game match.
  • 2018 – Efficiently updatable neural network (NNUE) evaluation is invented for computer shogi.[83]
  • 2019 – Leela Chess Zero (LCZero v0.21.1-nT40.T8.610), a chess engine based on AlphaZero, defeats Stockfish 19050918 in a 100-game match with the final score 53.5 to 46.5 to win TCEC season 15.[84]
  • 2020 – NNUE is added to Stockfish evaluation, noticeably increasing its strength.[64][65]

Categorizations

Dedicated hardware

These chess playing systems include custom hardware with approx. dates of introduction (excluding dedicated microcomputers):

Commercial dedicated computers

Boris Diplomat (1979) travel chess computer
Fidelity Voice Chess Challenger (1979), the first talking chess computer
Speech output from Voice Chess Challenger
Milton Bradley Grandmaster (1983), the first commercial self-moving chess computer
Novag Super Constellation (1984), known for its human-like playing style
DGT Centaur (2019), a modern chess computer based on Stockfish running on a Raspberry Pi

In the late 1970s to early 1990s, there was a competitive market for dedicated chess computers. This market changed in the mid-1990s when computers with dedicated processors could no longer compete with the fast processors in personal computers.

  • Boris in 1977 and Boris Diplomat in 1979, chess computers including pieces and board, sold by Applied Concepts Inc.
  • Chess Challenger, a line of chess computers sold by Fidelity Electronics from 1977 to 1992.[85] These models won the first four World Microcomputer Chess Championships.[citation needed]
  • ChessMachine, an ARM-based dedicated computer that could run two different engines.
  • Excalibur Electronics sells a line of beginner strength units.
  • Mephisto, a line of chess computers sold by Hegener & Glaser. The units won six consecutive World Microcomputer Chess Championships.[citation needed]
  • Novag sold a line of tactically strong computers, including the Constellation, Sapphire, and Star Diamond brands.
  • Phoenix Chess Systems makes limited edition units based around StrongARM and XScale processors running modern engines and emulating classic engines.
  • Saitek sells mid-range units of intermediate strength. They bought out Hegener & Glaser and its Mephisto brand in 1994.

More recently, some hobbyists have used the Multi Emulator Super System to run the chess programs written for Fidelity or Hegener & Glaser's Mephisto computers on modern 64-bit operating systems such as Windows 10.[87] Ed Schröder, the author of Rebel, has also adapted three of the Mephisto programs he wrote for Hegener & Glaser to work as UCI engines.[88]

DOS programs


These programs run under MS-DOS, and can also be run on 64-bit Windows 10 via emulators such as DOSBox or QEMU.[89]

Notable theorists


Several well-known theorists have contributed to computer chess.

Solving chess


The prospects of completely solving chess are generally considered to be rather remote. It is widely conjectured that no computationally inexpensive method to solve chess exists even in the weak sense of determining with certainty the value of the initial position, and hence the idea of solving chess in the stronger sense of obtaining a practically usable description of a strategy for perfect play for either side seems unrealistic today. However, it has not been proven that no computationally cheap way of determining the best move in a chess position exists, nor even that a traditional alpha–beta searcher running on present-day computing hardware could not solve the initial position in an acceptable amount of time. The difficulty in proving the latter lies in the fact that, while the number of board positions that could happen in the course of a chess game is huge (on the order of at least 10^43[91] to 10^47), it is hard to rule out with mathematical certainty the possibility that the initial position allows either side to force a mate or a threefold repetition after relatively few moves, in which case the search tree might encompass only a very small subset of the set of possible positions. It has been mathematically proven that generalized chess (chess played with an arbitrarily large number of pieces on an arbitrarily large chessboard) is EXPTIME-complete,[92] meaning that determining the winning side in an arbitrary position of generalized chess provably takes exponential time in the worst case; however, this theoretical result gives no lower bound on the amount of work required to solve ordinary 8×8 chess.
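To make the alpha–beta search mentioned above concrete, here is a minimal sketch over an explicit toy game tree rather than real chess positions. The tree shape, the leaf evaluations, and the function name are invented for illustration; a real engine would generate children from a board representation and use a static evaluation function at the leaves.

```python
# Minimal alpha-beta search over an explicit game tree (a sketch, not a real
# chess searcher). Internal nodes are lists of children; leaves are integer
# evaluations from the point of view of the maximizing player.

def alpha_beta(node, alpha, beta, maximizing):
    if isinstance(node, int):              # leaf: static evaluation
        return node
    if maximizing:
        value = -float("inf")
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:              # alpha cutoff
                break
        return value

# A tiny 2-ply tree: the maximizer picks a branch, the minimizer replies.
tree = [[3, 5], [6, 9], [1, 2]]
best = alpha_beta(tree, -float("inf"), float("inf"), True)   # minimax value 6
```

Note how the third branch is cut off after its first leaf: once the minimizer can hold it to 1, which is worse than the 6 already guaranteed, the remaining leaves are never visited. This pruning, not any change to the final answer, is what makes deep full-width search feasible.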

Martin Gardner's Minichess, played on a 5×5 board with approximately 10^18 possible board positions, has been solved; its game-theoretic value is 1/2 (i.e. a draw can be forced by either side), and the forcing strategy to achieve that result has been described.

Progress has also been made from the other side: as of 2012, all endgames with seven or fewer pieces (two kings and up to five other pieces) have been solved.

Chess engines


A "chess engine" is software that calculates and ranks the strongest moves in a given position. Engine authors focus on improving the play of their engines, often just importing the engine into a graphical user interface (GUI) developed by someone else. Engines communicate with the GUI via standardized protocols such as the now-ubiquitous Universal Chess Interface, developed by Stefan Meyer-Kahlen and Franz Huber. There are others, such as the Chess Engine Communication Protocol developed by Tim Mann for GNU Chess and WinBoard. Chessbase has its own proprietary protocol, and at one time Millennium 2000 had another protocol used for ChessGenius. Engines designed for one operating system and protocol may be ported to other operating systems or protocols. Chess engines are regularly matched against each other at dedicated chess engine tournaments.
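As a rough illustration of the GUI side of such a protocol, the sketch below parses a typical UCI `info` reply line of the kind an engine prints during search. The `parse_info` helper and the sample line are invented for illustration (no engine is spawned here); the token names `depth`, `score cp`, and `pv` are part of the real UCI protocol.

```python
# Sketch of the GUI side of the Universal Chess Interface (UCI): the GUI sends
# plain-text commands ("position ...", "go ...") and parses the engine's
# "info" / "bestmove" replies. This example only parses a sample reply.

def parse_info(line):
    """Parse a UCI 'info' line into a dict with depth, score, and pv."""
    tokens = line.split()
    out = {}
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == "depth":
            out["depth"] = int(tokens[i + 1]); i += 2
        elif tok == "score":
            # e.g. "score cp 35" (centipawns) or "score mate 3"
            out["score"] = (tokens[i + 1], int(tokens[i + 2])); i += 3
        elif tok == "pv":
            out["pv"] = tokens[i + 1:]      # principal variation to end of line
            break
        else:
            i += 1                          # skip tokens we don't handle
    return out

reply = "info depth 20 seldepth 28 score cp 35 nodes 123456 pv e2e4 e7e5 g1f3"
parsed = parse_info(reply)
```

A real GUI would write `position startpos moves ...` and `go movetime 1000` to the engine's stdin and read lines like the one above from its stdout until a `bestmove` line arrives.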

Chess web apps


In 1997, the Internet Chess Club released its first Java client for playing chess online against other people inside one's web browser.[93] This was probably one of the first chess web apps. The Free Internet Chess Server followed soon after with a similar client.[94] In 2004, the International Correspondence Chess Federation opened a web server to replace its email-based system.[95] Chess.com started offering Live Chess in 2007.[96] Chessbase/Playchess had long had a downloadable client, and added a web-based client in 2013.[97]

Tactics training is another popular category of web app. The now-defunct Chess Tactics Server opened its site in 2006,[98] followed by Chesstempo the next year,[99] and Chess.com added its Tactics Trainer in 2008.[100] Chessbase added a tactics trainer web app in 2015.[101]

Chessbase took its chess game database online in 1998.[102] Another early chess game database was Chess Lab, which started in 1999.[103] New In Chess had initially tried to compete with Chessbase by releasing the NICBase program for Windows 3.x, but eventually decided to abandon software and focus on its online database, starting in 2002.[104]

One could play against the engine Shredder online from 2006.[105] In 2015, Chessbase added a play Fritz web app,[106] as well as My Games for storing one's games.[107]

Starting in 2007, Chess.com offered the content of the training program, Chess Mentor, to their customers online.[108] Top GMs such as Sam Shankland and Walter Browne have contributed lessons.

Impact of AI on chess


Revolutionizing chess strategy


The introduction of artificial intelligence transformed the game of chess, particularly at the elite level. AI has strongly influenced defensive strategy: unlike human players, whose decisions are affected by stress and fatigue, an engine can compute every relevant move dispassionately. As a result, many positions once considered indefensible are now recognized as defensible.

After studying millions of games, chess engines produced new analyses and refined existing opening theory. These improvements led to new ideas and changed how players think about every phase of the game.[109] In classical chess, elite players commonly open with 10 to 15 moves that follow established analysis or leading engine recommendations.[110]

Cheating and fair play


Unlike traditional over-the-board tournaments, where handheld metal detectors are used to counter players' attempts at electronic assistance, fair-play monitoring in online chess is far more challenging.

During the 2020 European Online Chess Championship, which saw record participation of nearly 4,000 players, over 80 participants were disqualified for cheating, most of them from beginner and youth categories.[111] The event underscored the growing need for advanced detection methods in online competitions.

In response to these issues, chess platforms such as Chess.com developed AI-based statistical models that track improbable moves by a player and compare them to the moves an engine would choose. All suspected cases are examined by experts, and the findings are published regularly. FIDE introduced AI behavior-tracking technology to strengthen anti-cheating measures in online events.[112]

Challenges in cheat detection


AI-based detection systems use machine learning to track suspicious player behaviour across games, measuring discrepancies between the moves actually played and the moves predicted from the available statistics. Players of unusually high skill, or those employing unusual strategies, can produce moves resembling those of automated chess systems, which complicates detection. Each case is examined by a human expert before any action is taken, to guarantee fairness and accuracy.[112]
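The statistical idea described above can be sketched in miniature: compare a player's observed rate of agreement with an engine's top choice against the rate expected for their rating, and flag large deviations for human review. The expected rate, the sample numbers, and the threshold below are all invented for illustration and do not reflect any platform's real model.

```python
# Toy engine-match screening: z-score of observed engine-agreement rate
# versus the rate expected for a player's rating. All numbers hypothetical.
import math

def engine_match_zscore(matches, moves, expected_rate):
    """Standard score of the observed agreement rate under a binomial model."""
    observed = matches / moves
    stderr = math.sqrt(expected_rate * (1 - expected_rate) / moves)
    return (observed - expected_rate) / stderr

# Suppose a mid-rated player is expected to match the engine's first choice
# about 55% of the time (a made-up figure). Matching 54 of 60 moves (90%)
# would then be a very large statistical outlier.
z = engine_match_zscore(54, 60, 0.55)
suspicious = z > 3.0    # e.g. flag for expert review beyond 3 standard deviations
```

A real system would aggregate many more signals (move difficulty, time usage, rating history) before any case reached a human reviewer, as the text notes.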

Aligning AI with humans


The Maia Chess project was begun in 2020 by the University of Toronto, Cornell University, and Microsoft Research. Maia is a neural network built to imitate the way humans play chess at a given skill level. Each Maia model was tested on nine sets of 500,000 positions each, covering rating levels from 1100 to 1900. The models perform best when predicting moves made by players at their target rating level, with lower Maias most accurately predicting moves of lower-rated players (around 1100) and higher Maias doing the same for higher-rated players (around 1900). Maia's primary goal is an AI chess engine that imitates human decision-making rather than seeking optimal moves. Through personalization across skill levels, Maia can simulate the playing style typical of each level more accurately.[113][114]
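The metric behind the testing described above is move-matching accuracy: the fraction of positions where the model's predicted move equals the move the human actually played. The sketch below computes it on a made-up toy sample; the move lists and "model" outputs are invented for illustration, not real Maia predictions.

```python
# Move-matching accuracy: fraction of positions where the predicted move
# equals the human's actual move. Moves are in UCI coordinate notation.

def move_match_accuracy(predictions, human_moves):
    hits = sum(p == h for p, h in zip(predictions, human_moves))
    return hits / len(human_moves)

# Hypothetical test sample: what a human actually played in five positions,
# and what two models targeted at different rating levels predicted.
human_moves = ["e2e4", "g1f3", "f1c4", "b1c3", "d2d3"]
model_a     = ["e2e4", "g1f3", "d2d4", "b1c3", "h2h3"]   # matches 3 of 5
model_b     = ["e2e4", "g1f3", "f1c4", "b1c3", "d2d3"]   # matches 5 of 5

acc_a = move_match_accuracy(model_a, human_moves)
acc_b = move_match_accuracy(model_b, human_moves)
```

Evaluated over hundreds of thousands of positions per rating band, this single number is what lets each Maia model be matched to the rating level it predicts best.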

Chess and LLMs


While regarded more as entertainment than serious play, people have discovered that large language models (LLMs) of the type created in 2018 and beyond, such as GPT-3, can be prompted into producing chess moves given suitable prompts. Although inefficient compared to native chess engines, the fact that LLMs can track the board state at all beyond the opening, rather than simply reciting chess-like phrases, was considered greatly surprising. LLM play has a number of quirks compared to engine play. For example, engines generally do not "care" how a board state was arrived at, whereas LLMs seem to produce moves of different quality for a position reached via strong play than for the same position reached via a series of strange preceding moves, the latter generally yielding weaker and more random moves.[115]

from Grokipedia
Computer chess is a subfield of artificial intelligence focused on the development of computer programs and algorithms capable of playing chess at various levels of proficiency, ranging from simple rule-based systems to advanced models that surpass human grandmasters. These systems employ techniques such as search trees, alpha-beta pruning, and evaluation functions to assess board positions, predict outcomes, and select optimal moves, enabling them to compete against humans or other computers in matches and tournaments. Since its inception in the mid-20th century, computer chess has served as a benchmark for AI progress, highlighting advances in computational power, search algorithms, and self-improving learning methods. The origins of computer chess trace back to theoretical foundations laid by pioneers like Alan Turing, who around 1950 proposed a basic algorithm for a machine to play chess by simulating human decision-making. Claude Shannon's 1950 paper further analyzed the complexity of chess, estimating the vast number of possible games, around 10^120, and outlining search strategies that would become central to the field. Early computer chess programs were developed in the 1950s, with a notable example emerging at Los Alamos in 1956, where the MANIAC computer ran a simplified version of the game on a 6×6 board without bishops, known as "Los Alamos Chess" or "Anti-Clerical Chess," which successfully defeated a human player. Earlier mechanical precursors, such as Leonardo Torres y Quevedo's El Ajedrecista of 1912, demonstrated automated endgame solving for king-and-rook versus king scenarios, foreshadowing digital implementations. Key milestones in computer chess include the rise of dedicated hardware and software in the 1970s and 1980s, with programs like Chess 4.5 achieving strong amateur levels by 1977 through refined evaluation functions and endgame databases.
The field's breakthrough came in 1997 when IBM's Deep Blue supercomputer defeated world champion Garry Kasparov in a six-game match, winning 3.5–2.5 after evaluating up to 200 million positions per second using custom VLSI chips and selective search extensions. This victory marked the first time a computer bested a reigning human champion under standard tournament conditions, accelerating interest in AI and demonstrating the power of brute-force computation combined with expert heuristics. In the modern era, computer chess has evolved beyond traditional search-based engines to incorporate deep learning and reinforcement learning, exemplified by DeepMind's AlphaZero in 2017, which learned chess from scratch through self-play and defeated the leading conventional engine, Stockfish 8, in a 100-game match with a score of 28 wins, 72 draws, and no losses. Subsequent innovations, such as the adoption of efficiently updatable neural network (NNUE) evaluation in engines like Stockfish around 2020, have further enhanced performance through hybrid traditional and neural methods. Open-source engines like Stockfish, continually improved by a global community, now achieve Elo ratings exceeding 3500 as of 2025, far beyond the human peak of 2882, on standard hardware, rendering top-level human-computer matches obsolete; the last human victory over a top engine came in 2005. Ongoing research explores solving chess completely, with endgame tablebases already covering positions of up to seven pieces, though the full game's complexity remains unsolved.

Overview

Availability and Playing Strength

Computer chess programs are widely available in both free and commercial forms, accessible across diverse platforms to cater to players of all levels. Stockfish, an open-source engine, can be downloaded for free and runs on desktop operating systems including Windows, macOS, and Linux, as well as mobile devices via iOS and Android apps; it is also integrated into web-based platforms like Lichess and Chess.com for online analysis and play. Leela Chess Zero, another free and open-source engine inspired by neural network architectures like AlphaZero, is primarily available for desktop use through graphical user interfaces (GUIs) that support the Universal Chess Interface (UCI) protocol, with optional mobile and web integrations via compatible software. Komodo, a commercial engine developed by the Komodo Chess team (now under Chess.com) and available for purchase, supports Windows, macOS, and Linux platforms, often bundled with chess GUIs like ChessBase for enhanced functionality. Top chess engines demonstrate superhuman playing strength, consistently outperforming the world's strongest human grandmasters. As of 2025, Stockfish holds the highest rating among traditional engines at approximately 3644 Elo on the Computer Chess Rating Lists (CCRL) 40/40 benchmark, far exceeding the peak human rating of 2882 achieved by Magnus Carlsen. Komodo Dragon rates around 3625 Elo on the same list, while Leela Chess Zero achieves competitive performance near 3600 Elo under optimal hardware conditions, as seen in events like the Top Chess Engine Championship (TCEC). These engines routinely defeat grandmasters in exhibition matches, winning over 99% of games against top humans when set to full strength. Running modern chess engines requires modest hardware for basic use, but peak performance demands more robust setups. Stockfish operates efficiently on standard multi-core CPUs found in contemporary laptops or desktops, analyzing tens of millions of positions per second without specialized components.
Leela Chess Zero, however, benefits significantly from a dedicated graphics processing unit (GPU) for its neural network evaluations, with high-end cards like the RTX 40-series enabling deeper searches. Komodo performs well on similar CPU setups but can leverage cloud resources for intensive analysis. Cloud-based options, such as Chessify and the ChessBase Engine Cloud, allow users to offload computations to remote servers, providing access to top engines like Stockfish without local hardware strain, ideal for mobile or low-spec devices. Dedicated chess computers integrate engines with physical boards for tactile play. The Chessnut Evo features a smart electronic board with full piece recognition, a built-in Maia engine for human-like AI coaching, and connectivity to online platforms like Lichess, running on an internal battery for up to 10 hours of use. Engine playing strength grew rapidly through the 2010s, with Elo ratings climbing from around 3000 in 2010 to over 3500 by 2020, but has since plateaued near 3600–3650 as hardware and algorithmic gains diminish, shifting focus to efficiency and integration refinements.
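The Elo figures quoted above translate into win expectancy via the standard Elo expected-score formula: a player rated dR points higher scores about 1/(1 + 10^(-dR/400)) per game. The small sketch below applies it to the engine-versus-human gap; the specific ratings are the ones cited in the text.

```python
# Standard Elo expected-score formula: the share of points the first player
# is expected to score per game, given the two ratings.

def expected_score(r_a, r_b):
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# Stockfish (~3644 CCRL) versus the human peak rating (2882): a gap of
# roughly 760 points, i.e. an expected score of nearly 99% per game.
e = expected_score(3644, 2882)
```

This is why "winning over 99% of games" follows directly from the rating gap, with the caveat that engine ratings and human FIDE ratings come from different pools and are not strictly comparable.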

Types and Features of Chess Software

Computer chess software encompasses a variety of classifications tailored to different user needs and technological capabilities. Dedicated hardware refers to standalone chess computers designed exclusively for playing or analyzing chess, featuring built-in processors and minimalistic interfaces without requiring external devices; examples include portable models that combine physical boards with embedded engines for offline play. In contrast, software engines operate on general-purpose computers and are divided into open-source variants, such as Stockfish, which allow free modification and community contributions, and proprietary ones like Komodo, developed by commercial entities with closed codebases for specialized performance optimizations. Hybrid systems integrate hardware and software, such as the Chessnut Evo, an electronic chessboard with built-in AI that connects to online platforms and uses piece recognition for seamless interaction. Beyond core gameplay, chess software offers advanced features to enhance analysis and learning. Analysis tools include blunder detection, which scans games to identify critical errors and suggest improvements, as seen in applications like the Chess Blunder Trainer that convert personal game mistakes into interactive puzzles. Variant support enables play under non-standard rulesets, such as Chess960, often integrated into engines for exploring alternative strategies. Training modes provide puzzles derived from real games to build tactical skills, along with coaching functions that offer personalized feedback on positional weaknesses. Interfaces vary from graphical user interfaces (GUIs), like Chess.com's analysis board, which visualize moves and evaluations intuitively, to command-line versions for advanced users integrating engines via protocols like UCI.
The evolution of chess software has progressed from command-line DOS programs in the 1980s and 1990s, which ran on personal computers with limited graphics, to modern cross-platform applications accessible via desktops, mobile devices, and web browsers. Early command-line engines, hosted in interfaces such as WinBoard, emphasized raw computational power, while contemporary services, such as the Lichess web app, support real-time online play and cloud-based analysis across devices. Mobile apps like Chess.com's iOS and Android versions extend this accessibility, incorporating touch interfaces and offline modes. Unique aspects of modern chess software include efforts to emulate human-like playstyles, exemplified by Allie, a 2025 AI bot developed at Carnegie Mellon University and trained on 91 million human games from Lichess to predict and replicate realistic decision-making rather than optimal superhuman moves. This approach fosters more engaging training by mimicking common human errors and strategies at various skill levels. Some engines also incorporate opening books, precomputed databases of expert openings, to guide initial moves, though customization remains a key differentiator in human-aligned systems like Allie.

History

Pre-Computer Developments

The fascination with mechanical devices capable of playing chess dates back to the 18th century, when inventors created elaborate automata that simulated autonomous play but relied on hidden human operators. One of the most famous examples was The Turk, constructed in 1770 by Hungarian inventor Wolfgang von Kempelen as a life-sized figure dressed in Ottoman robes, seated behind a chessboard on a large cabinet filled with gears and levers. The device toured Europe and the Americas, defeating notable opponents including Benjamin Franklin and Napoleon Bonaparte, before being exposed as a hoax containing a concealed expert player who manipulated the figure's arm via magnets and a pantograph system. In the 1870s, similar pseudo-automata emerged, such as Mephisto, built around 1878 by English inventor Charles Godfrey Gumpel as a devilish figure that played chess using electro-mechanical controls operated remotely by chess master Isidor Gunsberg from an adjacent room. Advancing beyond hoaxes, early 20th-century engineers explored genuine electromechanical solutions for limited chess scenarios. Spanish inventor Leonardo Torres y Quevedo developed El Ajedrecista, an electromechanical chess machine first constructed in 1912 and demonstrated in Paris in 1914, capable of playing the endgame of king and rook versus lone king by automatically calculating legal moves and delivering checkmate without human intervention. Using electromagnetic relays, dials, and gears to represent the board and pieces, the device evaluated positions logically and selected optimal moves, though it required manual setup for the opponent's placement. An improved version, built by Torres y Quevedo's son Gonzalo in 1922 under his father's guidance, incorporated algorithmic decision-making and was later played against by the mathematician Norbert Wiener in 1951, highlighting its role as a precursor to automated computation.
Theoretical foundations for computer chess solidified in the mid-20th century with Claude Shannon's seminal 1950 paper, "Programming a Computer for Playing Chess," which outlined how digital machines could simulate chess play through systematic evaluation of positions. Shannon introduced the minimax algorithm as a core strategy, where the program alternates between maximizing its own advantage and minimizing the opponent's over a search tree of possible moves, backed by an evaluation function assessing material, position, and mobility. He also quantified chess's immense complexity, estimating approximately 10^120 possible game variations from the starting position, derived from an average of about 30 legal moves per turn over 40 moves, underscoring the need for efficient search methods rather than exhaustive enumeration. Chess grandmaster Mikhail Botvinnik, a world champion and early advocate for computational approaches, drew from human problem-solving techniques to influence early chess programming, emphasizing selective search over brute-force analysis. In works like his 1984 book Computers in Chess: Solving Inexact Search Problems, Botvinnik advocated algorithms mimicking grandmaster intuition, focusing on promising lines based on positional patterns and long-range planning to prune irrelevant branches in the vast game tree. His ideas on inexact search, prioritizing depth in critical variations while approximating others, bridged human cognitive strategies with emerging machine methods, laying groundwork for software implementations in the post-war era. The earliest computer chess programs emerged in the mid-1950s amid limited computational resources, prioritizing simplified rules and shallow searches. The Los Alamos chess program, developed in 1956 by a team including James Kister, Paul Stein, and Stanislaw Ulam on the MANIAC I computer at Los Alamos Scientific Laboratory, operated on a reduced 6×6 board without bishops to manage complexity.
It searched only two moves deep, taking approximately 12 minutes per move on hardware capable of 11,000 operations per second, and demonstrated the ability to defeat a weak opponent while committing typical novice errors. By the late 1960s, programs advanced to full-board play with selective search techniques. Mac Hack VI, created in 1967 by Richard Greenblatt and colleagues at MIT on a PDP-6, became the first program to compete in human tournaments, defeating a novice player rated 1510 by the United States Chess Federation during the Massachusetts Amateur Championship that year. Evaluating roughly 100 positions per second, it won two games and drew two in the event, earning an honorary USCF membership and establishing computer chess as a viable pursuit. These programs employed the minimax search algorithm as a foundational framework. Soviet chess grandmaster Mikhail Botvinnik pioneered knowledge-driven approaches in the 1950s and 1960s with programs like Pionir and the later Pioneer, aiming to replicate grandmaster intuition through selective search. His methods used chess principles, such as positional and material assessment, to prioritize promising move branches, avoiding exhaustive analysis of irrelevant positions and emphasizing strategic depth over breadth. Botvinnik's work, detailed in his 1970 book Computers, Chess and Long-Range Planning, influenced early AI by integrating domain expertise to compensate for hardware constraints. Hardware limitations, with even advanced 1960s systems like the IBM 7090 evaluating only about 1,100 positions per second, necessitated a shift from pure brute-force strategies to knowledge-based selective methods, as outlined by Claude Shannon in his seminal 1950 paper "Programming a Computer for Playing Chess." This Type B approach focused on plausible lines guided by heuristics, enabling playable performance without full enumeration of the game's vast possibilities.
Key milestones included Mac Hack VI's 1967 tournament debut, paving the way for the first dedicated computer chess event, the 1970 North American Computer Chess Championship organized by the Association for Computing Machinery.
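Shannon's back-of-the-envelope estimate quoted above can be reproduced in a couple of lines: with roughly 30 legal moves per ply and games of about 40 moves per side (80 plies), the game tree holds on the order of 30^80 variations.

```python
# Reproducing Shannon's game-tree estimate: ~30 legal moves per ply over
# ~80 plies gives 30**80 variations, i.e. about 10**118, commonly rounded
# to the "Shannon number" of 10**120.
import math

branching = 30   # average legal moves per position (Shannon's figure)
plies = 80       # 40 moves by each side

log10_games = plies * math.log10(branching)   # order of magnitude of 30**80
```

The result, about 10^118, is why Shannon argued for heuristic search (his Type A/B distinction) rather than exhaustive enumeration; note this counts game variations, not the much smaller number of distinct positions (~10^43 to 10^47) cited elsewhere in this article.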

Dedicated Hardware and Microcomputer Era

The emergence of dedicated chess hardware in the late 1970s marked a pivotal shift, transforming computer chess from experimental academic projects into accessible consumer products. Fidelity Electronics introduced the Chess Challenger in 1977, recognized as the first commercial microcomputer-based chess playing machine, featuring a dedicated processor and a physical board for gameplay. This device, priced affordably for the era, allowed non-experts to play against a computer opponent at home, sparking widespread interest. Building on this foundation, companies like Novag and Hegener + Glaser expanded the market with innovative dedicated devices throughout the 1980s. Novag released its Chess Champion MK I in 1978, utilizing an 8-bit processor at 1.78 MHz with 2 KB ROM and 1 KB RAM, which became an early commercial success through partnerships and distribution in the U.S. Similarly, the Mephisto series, launched by Hegener + Glaser starting in 1980 with models like Mephisto I–III programmed by Elmar Henne and Thomas Nitsche, offered modular designs that combined hardware boards with swappable program modules, enhancing replayability and strength. These machines emphasized portability and user-friendly interfaces, contributing to the proliferation of chess computers in households and clubs. The microcomputer boom further democratized access, as programs were adapted to affordable personal computers. Sargon, developed by Dan and Kathe Spracklen in 1978, initially on a Wavemate Jupiter III and soon ported to home computers such as the TRS-80 and Apple II by Hayden Software, represented a landmark in chess software for home systems, achieving strong play with selective search algorithms while fitting within the era's limited 8 KB RAM constraints. By the early 1980s, ports to PC compatibles, such as early versions of Sargon and other engines, enabled chess on standard desktops, broadening participation beyond specialized hardware. Key milestones underscored the era's technological strides.
In 1978, engineers Ken Thompson and Joe Condon unveiled Belle, a custom hardware chess machine that combined specialized processors for move generation and evaluation, eventually reaching master-level performance by the early 1980s through iterative hardware upgrades. On the competitive front, Fidelity's Sensory Voice Chess Challenger claimed the inaugural World Microcomputer Chess Championship in 1980 in London, demonstrating the viability of commercial hardware in tournament settings. Commercially, the 1980s saw peak sales for dedicated chess computers, with the industry surpassing $100 million in revenue by 1982, driven by innovations like LCD displays for portable models and voice output for interactive feedback. Devices such as the Voice Sensory Chess Challenger, introduced in 1979, incorporated speech synthesis to announce moves and game status, while LCD-equipped portables like Mattel's 1980 Computer Chess reduced power needs and costs, making chess computers a staple consumer product. These features not only boosted sales but also enhanced learning, as machines provided hints, game replays, and adjustable difficulty levels.

Brute-Force Search Dominance

The transition to brute-force search in computer chess began in the late 1980s with programs emphasizing full-width evaluation over selective, knowledge-heavy methods. Deep Thought, developed at Carnegie Mellon University starting in 1985 as the ChipTest project, utilized custom hardware to perform deeper searches, achieving up to 1 million positions per second by 1988. This approach culminated in Deep Thought's milestone victory over grandmaster Bent Larsen in 1988, the first defeat of a grandmaster by a computer in tournament play. Building on Deep Thought, IBM's Deep Blue represented a leap in parallel processing for exhaustive search. Unveiled in 1996 and upgraded for 1997, it featured 30 RS/6000 SP nodes with 480 custom VLSI chess processors, enabling evaluation of 200 million positions per second through coordinated brute-force computation. In May 1997, Deep Blue defeated world champion Garry Kasparov 3.5–2.5 in a six-game match in New York City, marking the first time a computer bested a reigning human champion under tournament conditions. This success highlighted the role of hardware accelerators in optimizing search depth, with Deep Blue's custom chips dedicated to move generation and evaluation. Alpha-beta pruning served as a key enabler, allowing these systems to efficiently prune irrelevant branches in full-width searches. By the 2000s, brute-force dominance extended to software engines running on commodity hardware, amplified by Moore's law, which roughly doubled transistor counts and computing speed every two years, facilitating deeper searches with reduced dependence on complex heuristics. Fritz, developed for ChessBase, exemplified this era; versions like Fritz 8 (2002) and Fritz 10 (2006) topped independent rating lists, achieving over 2800 Elo on standard PCs by leveraging incremental updates and parallel search.
Similarly, Shredder by Stefan Meyer-Kahlen won the World Computer Chess Championship in 1999 and 2003, and repeatedly led the SSDF rating list in the early 2000s, with Shredder 7 (2003) scoring eight points ahead of rivals on varied hardware. Supercomputer integrations marked further milestones, underscoring brute force's scalability. In 2004, the FPGA-based Hydra cluster, comprising 64 processors analyzing 200 million positions per second, defeated grandmasters including Evgeny Vladimirov (3–1). Deep Fritz won 4–2 against world champion Vladimir Kramnik in the 2006 World Chess Challenge in Bonn, including two decisive victories after four draws. Around 2010, traditional engines plateaued, with Elo gains slowing from more than 100 points per decade in the 1990s and 2000s to near stagnation, as hardware scaling slowed and search depths hit practical limits around 20–25 plies on elite configurations exceeding 3200 Elo.

Neural Network Advancements

The advent of deep neural networks in the 2010s marked a paradigm shift in computer chess, moving beyond hand-crafted evaluation functions and toward learning systems that could acquire strategic knowledge autonomously. These advancements leveraged reinforcement learning to train networks solely through self-play, enabling engines to surpass traditional programs by developing intuitive positional understanding rather than relying on exhaustive computation. A seminal breakthrough came with AlphaZero, developed by DeepMind and released in 2017, which learned chess from scratch without any prior human knowledge beyond the rules. Starting from random play, AlphaZero trained by playing millions of games against itself, employing a deep neural network to guide Monte Carlo tree search for move selection. In a 100-game match against Stockfish 8, the leading traditional engine at the time, AlphaZero scored 28 wins, 72 draws, and 0 losses, demonstrating superior tactical and strategic play. Inspired by AlphaZero, Leela Chess Zero (LCZero) emerged in 2018 as an open-source project aiming to replicate its self-learning approach through crowdsourced computation. Volunteers worldwide contributed computational resources to train LCZero's neural networks via distributed self-play, allowing it to evolve without proprietary hardware. By 2019, LCZero had achieved competitive strength against top engines, showcasing the feasibility of democratizing advanced AI chess through community effort. The integration of neural networks into established engines further accelerated progress, exemplified by Stockfish's adoption of NNUE (efficiently updatable neural network) evaluation in 2020. This hybrid model combined a lightweight neural network, trained on positions evaluated by traditional search, for fast position assessment with classical alpha-beta search, achieving high efficiency on standard hardware. NNUE's design, originally developed for computer shogi, allowed incremental updates during search, reducing computational overhead while enhancing evaluation accuracy.
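The "incremental update" idea at the heart of NNUE can be sketched in miniature. The input to the first layer is a sparse set of active features (roughly, piece-on-square indicators), and because a move only toggles a few of them, the first-layer accumulator can be adjusted by adding and subtracting a few weight rows instead of being recomputed from scratch. The dimensions and weight values below are toy numbers, not a real NNUE network.

```python
# Miniature of NNUE's efficiently updatable first layer. Active features are
# indices into a weight matrix; the accumulator is the sum of their rows.

N_FEATURES, HIDDEN = 8, 4
# Deterministic toy weights in [-2, 2]; a real net learns these from data.
W = [[(f * HIDDEN + h) % 5 - 2 for h in range(HIDDEN)] for f in range(N_FEATURES)]

def accumulate(features):
    """Full recomputation: sum the weight rows of all active features."""
    acc = [0] * HIDDEN
    for f in features:
        for h in range(HIDDEN):
            acc[h] += W[f][h]
    return acc

def update(acc, removed, added):
    """Incremental update after a move: subtract one feature's row, add another's."""
    acc = acc[:]
    for h in range(HIDDEN):
        acc[h] += W[added][h] - W[removed][h]
    return acc

before = accumulate([0, 3, 5])       # position before the move
after_incr = update(before, 3, 6)    # the move deactivates feature 3, activates 6
after_full = accumulate([0, 5, 6])   # ground truth via full recomputation
```

Because the first layer is linear in its inputs, the incremental and full results are identical; the saving is that `update` touches two rows instead of every active feature on every node of the search tree.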
These neural advancements propelled computer chess engines to unprecedented performance levels, with top programs like Stockfish (with NNUE) and LCZero routinely exceeding 3600 Elo in standardized benchmarks such as the Computer Chess Rating Lists (CCRL). Beyond raw strength, they uncovered novel strategies, such as aggressive queen development in closed positions and counterintuitive pawn sacrifices, expanding the boundaries of chess theory in ways previously unimaginable.
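The key efficiency idea behind the NNUE evaluation described above is that a move changes only a few input features, so the first network layer can be updated incrementally instead of recomputed. A toy sketch (with made-up dimensions and random weights, not Stockfish's actual network) illustrates the principle:

```python
# Toy sketch of NNUE-style incremental evaluation: the first-layer
# accumulator is updated by adding/subtracting the weights of the few
# features a move changes, rather than recomputing the whole layer.
import random

N_FEATURES = 768   # e.g. 12 piece types x 64 squares
HIDDEN = 8         # real NNUE nets use far more hidden units

random.seed(0)
W = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(N_FEATURES)]

def full_accumulator(active_features):
    """Recompute the hidden-layer sums from scratch: O(active features)."""
    acc = [0.0] * HIDDEN
    for f in active_features:
        for h in range(HIDDEN):
            acc[h] += W[f][h]
    return acc

def update_accumulator(acc, removed, added):
    """Incremental update: O(changed features) per move."""
    acc = acc[:]
    for f in removed:
        for h in range(HIDDEN):
            acc[h] -= W[f][h]
    for f in added:
        for h in range(HIDDEN):
            acc[h] += W[f][h]
    return acc

# A "move" removes one active feature and adds another; the incremental
# result matches the from-scratch recomputation.
before = {10, 200, 321, 500}
after = (before - {200}) | {212}
fast = update_accumulator(full_accumulator(before), removed=[200], added=[212])
slow = full_accumulator(after)
assert all(abs(a - b) < 1e-9 for a, b in zip(fast, slow))
```

Because only the changed features are touched, the cost of keeping the accumulator current during search is tiny compared with re-evaluating the network from scratch at every node.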

Recent AI Innovations (2017–2026)

In 2024, the FIDE and Google Efficient Chess AI Challenge, hosted on Kaggle, pushed the boundaries of resource-constrained AI by requiring participants to develop chess agents operating under strict CPU and memory limits, such as 1 GB RAM and limited compute time per move, to promote sustainable and accessible computing. The competition, launched during the 2024 World Chess Championship, emphasized elegant algorithms over brute-force computation, with a $50,000 prize pool attracting global developers. The top entry, by competitor linrock, achieved an Elo-equivalent score of 2055.7, demonstrating high performance through optimized adaptations of open-source engines like Stockfish, tailored to fit the constraints without relying on massive pre-computed tables. Building on the AlphaZero architecture introduced in 2017, recent innovations have extended its principles to create more human-like chess AI. In 2025, Carnegie Mellon University's Allie, developed by Ph.D. student Yiming Zhang, marked a shift toward AI that mimics human playstyles rather than optimal winning strategies. Trained on 91 million human games from Lichess, Allie uses a transformer-based model to replicate typical errors, blunders, and stylistic preferences at various skill levels, enabling more instructive and engaging training sessions for players. Deployed on platforms like Lichess, it adjusts its play to match opponents' ratings, fostering natural gameplay and analysis without the superhuman precision of traditional engines. AI-focused tournaments highlighted competitive advancements in 2024 and 2025. The 2024 World Computer Chess Championship, the final edition after 50 years, ended in a three-way tie for first at 5.5 points that included Stoofvlees and Raptor, showcasing refined hybrid engines combining search and neural evaluation. In August 2025, the Kaggle Game Arena AI Chess Exhibition Tournament pitted large language models against each other in a knockout format, where OpenAI's o3 model dominated, defeating xAI's Grok 4 4–0 in the final and outperforming entrants from Google, Anthropic, and DeepSeek.
This event underscored LLMs' growing reasoning capabilities in strategic games, streamed live to evaluate AI progress beyond specialized chess engines. Hardware-software integrations advanced accessibility in 2025 with devices like the Chessnut Evo, an e-board featuring onboard coaching via the Maia engine. Powered by a built-in NPU for image recognition and move simulation, Evo supports platforms like Chess.com and Lichess while providing real-time analysis and personalized training based on millions of human games, allowing users to practice against adaptive AI without external hardware. Complementing this, LLM integrations gained prominence; for instance, in a July 2025 demonstration, Magnus Carlsen defeated ChatGPT in 53 moves without losing a piece, exposing the limits of general-purpose AI in deep strategic play despite its conversational strengths. In January 2026, software engineer Guillermo Rauch organized an informal autonomous chess match between xAI's Grok-4-fast-reasoning and OpenAI's GPT-5.2, hosted at v0-chess-match.vercel.app. The match ran overnight with multiple games, during which Grok-4-fast-reasoning won 19 of the last 20 encounters. This demonstration highlighted ongoing advancements in large language models' reasoning capabilities for complex strategic tasks such as chess. Ongoing trends from 2017 to 2026 emphasize efficiency, inclusivity, and expansion beyond standard chess. Sustainable computing, exemplified by the FIDE-Google challenge, prioritizes low-energy AI to reduce environmental impact in training and deployment. Esports integration has surged, with AI enhancing broadcasts through real-time analysis and hybrid human-AI events, as seen in growing platforms like Chess.com's tournaments. Additionally, AI development for chess variants—such as Chess960 and custom rulesets—has accelerated via tools like ChessCraft and Omnichess, enabling players to design and compete in novel games against adaptive opponents.
These directions reflect a broader push toward diverse, human-centered AI applications in the field.

Technical Methods

Board Representations and User Interfaces

In computer chess, board representations are data structures used to encode the state of a chess position, including piece locations, colors, and other game elements, to facilitate efficient computation during search and evaluation. Early and simpler approaches often employ array-based methods, such as the mailbox representation, which models the board as a 10x12 grid (120 elements) surrounding an 8x8 core to simplify move generation by providing buffer zones for detecting moves that leave the board. A related variant, the 0x88 representation, uses a 128-element array in a 16x8 layout, where each square index combines 4-bit rank and file values; this allows rapid off-board move detection via bitwise AND with 0x88 (136 in decimal), as invalid destinations yield a non-zero result in the upper bits. These array methods enable straightforward square access and are particularly accessible for implementing basic move validation and piece placement. Bitboards represent a more advanced, piece-centric approach, utilizing 64-bit integers where each bit corresponds to one of the 64 squares, with separate bitboards for each piece type and color (typically 12 in total) to indicate occupancy. This structure leverages bitwise operations—such as AND for intersections, OR for unions, and shifts for directional attacks—to perform parallel computations across multiple squares, making it highly efficient for generating attacks, pawn structures, and connectivity checks in modern engines. For instance, sliding piece moves can be precomputed using techniques like magic bitboards, which employ multiplication and masking to index attack tables dynamically. Bitboards were first proposed by Mikhail Shura-Bura in 1952 and gained prominence in programs like Kaissa (1970s), with significant refinements such as Robert Hyatt's rotated bitboards in the 1990s. The choice between array-based representations like mailbox or 0x88 and bitboards involves key trade-offs in speed, flexibility, and implementation complexity.
Array methods offer simpler code for beginners, with intuitive indexing and minimal overhead for single-square operations, but they require sequential loops for multi-square tasks, leading to slower performance on modern hardware. Bitboards, conversely, excel in speed through hardware-optimized bitwise instructions on 64-bit processors, reducing computation time by up to an order of magnitude for set operations, though they demand proficiency in bit manipulation and may necessitate hybrid use with arrays for individual square queries. Modern engines like Stockfish predominantly adopt bitboards for their scalability in deep searches, while array formats suit educational or resource-constrained implementations. User interfaces in computer chess provide visual and interactive layers for human engagement, separating the underlying engine logic from end-user input and output. Graphical user interfaces (GUIs) typically feature resizable boards with piece graphics, supporting intuitive move entry via drag-and-drop or click-to-select mechanics, alongside tools for game navigation, notation display, and time controls. Arena, a free GUI compatible with the UCI and Winboard protocols, exemplifies this by integrating multiple engines, opening books, and endgame tablebases, while supporting hardware like DGT boards for physical piece input. ChessBase, a commercial suite, employs a ribbon-based interface for seamless database management, engine analysis, and annotated game creation, with features like cloud synchronization and video integration for professional training. Web-based platforms like Lichess offer browser-accessible interfaces with responsive boards, where users drag pieces or use algebraic entry, enhanced by real-time analysis boards and study tools for collaborative review. The evolution of these interfaces has progressed from text-based ASCII diagrams in 1950s programs, which displayed positions via character grids on terminals, to sophisticated graphical systems in the 1980s with high-resolution 2D boards using APIs like VGA.
Contemporary developments include 3D renderings via OpenGL for immersive views and augmented reality (AR) integrations, such as the CheckMate system, which overlays virtual animations on tangible 3D-printed pieces using head-mounted displays like HoloLens for remote play with haptic feedback and move highlighting. These AR interfaces enhance accessibility and engagement by projecting interactive boards onto real surfaces, though they remain experimental compared to standard 2D GUIs.
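The two representations discussed above can be illustrated with a minimal sketch. The 0x88 off-board test and the knight offsets are standard; the bitboard example uses a single pawn set and a one-rank shift (all values here are illustrative):

```python
# Sketch of two board representations.

# 0x88: squares indexed 0..127 in a 16x8 layout; a destination is
# off-board iff (index & 0x88) != 0 — one bitwise test, no bounds math.
def on_board_0x88(sq):
    return (sq & 0x88) == 0

def sq_0x88(file, rank):          # file, rank in 0..7
    return rank * 16 + file

# Knight move offsets in 0x88 indexing.
KNIGHT_OFFSETS = [33, 31, 18, 14, -14, -18, -31, -33]

def knight_moves_0x88(sq):
    return [sq + d for d in KNIGHT_OFFSETS if on_board_0x88(sq + d)]

# Bitboards: one 64-bit integer per piece set; bit i = square i (a1 = bit 0).
MASK64 = 0xFFFF_FFFF_FFFF_FFFF

def shift_north(bb):
    """Move every set bit one rank up (toward rank 8)."""
    return (bb << 8) & MASK64

white_pawns = 0x0000_0000_0000_FF00        # all eight pawns on rank 2
single_pushes = shift_north(white_pawns)   # every pawn advances one rank
assert single_pushes == 0x0000_0000_00FF_0000  # rank 3

# A knight in the corner has only two legal moves; the 0x88 test rejects
# the other six offsets without any per-file or per-rank checks.
assert len(knight_moves_0x88(sq_0x88(0, 0))) == 2
```

The bitboard shift computes all eight pawn pushes in one machine operation, which is exactly the set-wise parallelism that makes bitboards attractive on 64-bit hardware.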

Search Algorithms

Search algorithms in computer chess form the core mechanism for exploring the game's vast game tree, enabling programs to select optimal moves by simulating future positions. These algorithms balance computational efficiency with search depth, as the branching factor of chess—averaging around 35 legal moves per position—exponentially increases the number of nodes to evaluate, reaching billions at moderate depths. Early approaches relied on recursive minimax search, while modern variants incorporate probabilistic methods to handle uncertainty and scale to superhuman performance. The foundational algorithm is minimax search, applied to chess by Claude Shannon in 1950, which recursively evaluates positions by assuming perfect play from both sides: the maximizing player (typically White) chooses moves to maximize the score, while the minimizing player (Black) selects those to minimize it. In practice, minimax proceeds depth-first to a fixed limit, evaluating leaf nodes with a static evaluation function before backpropagating values up the tree. This full-width search, or Type A in Shannon's classification, exhaustively examines all branches but becomes infeasible beyond a few plies due to time constraints. To mitigate this, alpha-beta pruning enhances minimax by maintaining two values—alpha (the best score found for the maximizer) and beta (the best for the minimizer)—and cutting off branches that cannot influence the root decision: whenever β ≤ α during the search, the current subtree is pruned. Donald Knuth and Ronald Moore analyzed this in 1975, proving it examines no more nodes than minimax in the worst case while typically reducing the effective branching factor to roughly the square root of the original, allowing deeper searches of 10–15 plies on early hardware. Alpha-beta remains the backbone of traditional engines like Stockfish, where leaf evaluations provide static scores for non-terminal positions. Several optimizations further refine alpha-beta search.
Iterative deepening, pioneered by David Slate and Lawrence Atkin in their 1977 Chess 4.5 program, conducts successive depth-limited searches starting from shallow depths and incrementally increasing until time expires, reusing move orders from prior iterations to improve efficiency. This approach ensures principal variation accuracy even if interrupted, at a modest 10–20% overhead compared to fixed-depth search. Transposition tables, first implemented by Richard Greenblatt in Mac Hack VI (1967), cache search results using hashed position keys to detect identical positions reached via different move orders, avoiding redundant re-search and enabling exact or lower/upper bound cutoffs. Late move reductions (LMR) heuristically decrease depth for later-ordered moves in a branch—typically by 1–2 plies after the first few—since poor moves rarely yield cutoffs; if the reduced search fails low, it is re-searched fully, as detailed in surveys of game-tree search. These techniques collectively allow contemporary engines to probe 20+ plies selectively. A paradigm shift occurred in 2017 with AlphaZero, which employs Monte Carlo tree search (MCTS) instead of alpha-beta, combining tree-based planning with random simulations (rollouts) to estimate move values probabilistically. MCTS iterates four steps: selection (traverse to a promising leaf using upper confidence bounds), expansion (add child nodes), simulation (play out to a terminal state via policy-guided random moves), and backpropagation (update statistics along the path). Guided by a deep neural network for both policy (move probabilities) and value (win estimates), AlphaZero self-trains via reinforcement learning, achieving superhuman strength in hours without domain knowledge. This simulation-based method scales to millions of playouts per second on GPUs, contrasting with deterministic pruning.
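The selection step above is commonly implemented with an upper-confidence (UCT) score, sketched below in generic form; AlphaZero itself uses a related PUCT formula weighted by the network's move priors:

```python
import math

def uct_score(child_wins, child_visits, parent_visits, c=1.41):
    """Score used to pick which child node to descend into during selection.
    Unvisited children score infinity so every move is tried at least once."""
    if child_visits == 0:
        return float("inf")
    exploit = child_wins / child_visits          # average result so far
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

# Child B has a worse average so far (0.4 vs 0.5) but far fewer visits,
# so the exploration term makes it the preferred child to expand next.
a = uct_score(child_wins=50, child_visits=100, parent_visits=160)
b = uct_score(child_wins=4,  child_visits=10,  parent_visits=160)
assert b > a
```

This is the mechanism that balances exploiting moves that have simulated well against exploring moves that are still under-sampled.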
Historically, computer chess evolved from selective search—Shannon's Type B, focusing on plausible lines via heuristics—to brute-force dominance by the 1990s, as hardware advances and alpha-beta pruning enabled exhaustive exploration deeper than intuition-based selection, culminating in Deep Blue's 1997 victory. Today, hybrid engines blend these approaches, with MCTS variants exploring beyond traditional limits.
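The alpha-beta recursion at the core of the traditional engines discussed above can be sketched in negamax form on a tiny hand-built tree (a toy illustration, not any engine's actual implementation):

```python
def alphabeta(node, depth, alpha, beta, evaluate, children):
    """Negamax formulation: each side maximizes its own score.
    Remaining siblings are pruned once alpha >= beta (a beta cutoff)."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    best = float("-inf")
    for child in kids:
        score = -alphabeta(child, depth - 1, -beta, -alpha, evaluate, children)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:          # opponent would avoid this line: prune
            break
    return best

# Two-ply tree; leaf scores are from the perspective of the side to move
# at the leaf (the root side again, two plies down).
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

visited = []
def ev(n):
    visited.append(n)
    return scores.get(n, 0)

val = alphabeta("root", 2, float("-inf"), float("inf"),
                ev, lambda n: tree.get(n, []))
assert val == 3             # opponent steers each branch to its minimum: max(3, 2)
assert "b2" not in visited  # once b1 refutes branch b, b2 is never evaluated
```

The final assertion shows the pruning in action: after branch "a" guarantees the root a score of 3, the first reply in branch "b" already refutes it, so the rest of that branch is skipped.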

Evaluation and Knowledge Integration

In computer chess, the evaluation function serves as a heuristic to score leaf nodes in the search tree, approximating the desirability of a position when further search is not feasible. Traditional evaluation functions are typically expressed as a weighted sum of multiple terms—a form of polynomial function—that assess key positional elements. Material balance is computed by assigning fixed centipawn values to pieces, such as 100 for a pawn, 300 for a knight or bishop, 500 for a rook, and 900 for a queen, reflecting their relative strengths derived from empirical analysis and historical precedents in chess literature. Additional terms incorporate positional factors like piece mobility (penalizing restricted pieces and rewarding central control), king safety (evaluating pawn shelter, open lines to the king, and attack potential), and pawn structure (scoring connected pawns, isolated weaknesses, and advancement). These components, first outlined in Shannon's foundational work, enable a static assessment that balances immediate advantages with long-term strategic viability. The integration of domain-specific knowledge into evaluation has long been debated against reliance on exhaustive search, a tension rooted in the 1960s when early programs like Mac Hack VI emphasized hand-crafted heuristics to compensate for limited computational power, incorporating over 50 rules for material, position, and control to achieve amateur-level play. This approach prioritized knowledge to guide shallow searches, but as hardware advanced, the debate shifted toward favoring deeper brute-force exploration over intricate heuristics, with studies showing diminishing returns for additional knowledge amid improving search efficiency. Modern engines resolve this through hybrids like NNUE (Efficiently Updatable Neural Network), introduced in Stockfish's 2020 update (version 12), which uses a lightweight neural network trained on millions of positions to approximate traditional evaluation while enabling faster computation than full deep networks.
Processor speed profoundly influences this balance, as evaluation complexity competes with search depth for computational cycles; simpler, faster evaluations allow more nodes to be explored, a critical trade-off in resource-constrained environments like mobile devices, where NNUE's incremental updates ensure sub-millisecond scoring to maintain playability, versus supercomputers that afford deeper searches with marginally slower but richer heuristics. In high-end setups, such as those used in championships, engines allocate up to 80% of cycles to search, leveraging raw speed to outperform knowledge-heavy alternatives on slower hardware. Advancements in neural evaluation, exemplified by AlphaZero, employ dedicated policy and value outputs: the policy head outputs move probabilities to guide selection, while the value head estimates win probabilities from a position, trained end-to-end via self-play reinforcement learning without predefined heuristics. This approach, achieving superhuman performance after nine hours of training on specialized hardware, integrates implicit chess knowledge through vast simulation data, surpassing traditional methods by capturing subtle strategic nuances like long-term pawn breaks and king activity.
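A hand-crafted evaluation of the weighted-sum form described above might look like the following sketch. The weights and the simplified position format are illustrative, not any engine's actual tuning:

```python
# Toy weighted-sum evaluation: material plus a few positional terms.
PIECE_VALUES = {"P": 100, "N": 300, "B": 300, "R": 500, "Q": 900}

def evaluate(position):
    """position: dict with piece lists and simple positional counts.
    Returns a centipawn score; positive means good for White."""
    score = 0
    for piece in position["white_pieces"]:
        score += PIECE_VALUES.get(piece, 0)
    for piece in position["black_pieces"]:
        score -= PIECE_VALUES.get(piece, 0)
    # Positional terms, each with a small hand-chosen weight.
    score += 2 * (position["white_mobility"] - position["black_mobility"])
    score += 15 * (position["white_passed_pawns"] - position["black_passed_pawns"])
    score -= 25 * (position["white_king_exposed"] - position["black_king_exposed"])
    return score

pos = {
    "white_pieces": ["Q", "R", "P", "P", "P"],
    "black_pieces": ["R", "R", "P", "P", "P"],
    "white_mobility": 30, "black_mobility": 22,
    "white_passed_pawns": 1, "black_passed_pawns": 0,
    "white_king_exposed": 0, "black_king_exposed": 1,
}
# Material +400 (Q+R vs R+R), mobility +16, passed pawn +15, king safety +25.
assert evaluate(pos) == 456
```

Tuning in real engines amounts to adjusting such weights (historically by hand, later by automated methods), and NNUE can be viewed as replacing this entire hand-written sum with a small learned function of the same inputs.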

Specialized Databases

Specialized databases in computer chess encompass precomputed resources that store extensive move sequences and position evaluations, allowing engines to access proven strategies without performing real-time calculations. These databases significantly enhance performance in the opening and endgame phases, where exhaustive analysis is feasible offline. Opening books and endgame tablebases represent the primary types, drawing from vast game collections and retrograde computation methods, respectively. Opening books consist of curated sequences of moves derived from large databases of human and computer games, guiding engines through the initial stages of play to avoid suboptimal openings. For instance, the ChessBase Mega Database 2025, containing over 11.7 million games from 1475 to 2025, serves as a foundational source for generating such books, enabling the compilation of millions of opening lines evaluated by win rates and popularity. These books are typically stored in efficient formats like PolyGlot, developed by Fabien Letouzey, which uses binary files to encode positions, moves, and weights for quick retrieval during gameplay. Dynamic opening books extend this by adapting selections to an opponent's style, such as favoring aggressive lines against defensive players, through opponent modeling techniques that analyze prior moves or patterns. This approach, explored in early research on asymmetric search, improves book efficacy by up to 20-30% in tournament settings against varied human opponents. Endgame tablebases provide perfect play evaluations for positions with few pieces remaining, computed via retrograde analysis that works backward from terminal positions to determine wins, losses, draws, and optimal move sequences. 
The seminal Nalimov tablebases, introduced in the late 1990s by Eugene Nalimov, pioneered compressed storage formats that reduced 5-piece endgames to about one-eighth the size of earlier uncompressed versions, making them practical for local use. By 2012, the Lomonosov 7-piece tablebases were completed using supercomputing resources, covering all approximately 424 trillion unique legal 7-piece positions in an uncompressed size of around 140 terabytes, though modern compressed variants like Syzygy reduce this to 18.4 terabytes. These tablebases classify outcomes exactly—such as distance-to-mate in moves—and are probed by engines at shallow search depths to prune branches or select optimal moves, often resolving endgames that would otherwise require deep computation. As of the mid-2020s, 8-piece tablebases remain in progress, with partial computations covering select configurations but full resolution hindered by an estimated 10–15 petabytes of storage needs; efforts like those by Marc Bourzutschky have solved subsets, revealing new theoretical draws and wins in complex pawn endgames. Advances in accessibility have made these resources more integrable, with cloud-based probing allowing engines to query tablebases remotely without local storage. For example, the Syzygy tablebases by Ronald de Man support online access via platforms like Lichess, extending to chess variants such as Chess960 for variant-specific perfect play. In evaluation functions, tablebase results inform static assessments by providing ground-truth distances, supplementing heuristic scoring without altering core computation.
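Weighted opening-book probing, as in the PolyGlot format described above, can be sketched as follows. The real format is a sorted binary file of (key, move, weight) records keyed by a 64-bit position hash; the in-memory dictionary and the key value here are hypothetical stand-ins:

```python
# Sketch of weighted opening-book lookup: each known position maps to
# candidate moves with weights, and a move is chosen with probability
# proportional to its weight. (Toy in-memory book, hypothetical key.)
import random

book = {
    0x12345678: [("e2e4", 120), ("d2d4", 100), ("c2c4", 30)],
}

def probe_book(key, rng=random):
    entries = book.get(key)
    if not entries:
        return None                      # out of book: fall back to search
    total = sum(w for _, w in entries)
    pick = rng.uniform(0, total)
    for move, w in entries:
        if pick <= w:
            return move
        pick -= w
    return entries[-1][0]                # guard against float edge cases

rng = random.Random(42)
choices = [probe_book(0x12345678, rng) for _ in range(1000)]
# Heavier-weighted moves are chosen more often over many probes.
assert choices.count("e2e4") > choices.count("c2c4")
assert probe_book(0xDEADBEEF) is None    # unknown position: no book move
```

Randomizing proportionally to weight, rather than always playing the top entry, is what gives book-driven engines opening variety across games.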

Performance and Evaluation

Rating Systems and Benchmarks

The performance of computer chess engines is primarily evaluated using Elo-based rating systems adapted from human chess ratings, which quantify relative strength through win-draw-loss outcomes in matches. These systems provide standardized benchmarks by pitting engines against each other in controlled tournaments, allowing for consistent comparisons across versions and architectures. Two prominent lists are the Computer Chess Rating Lists (CCRL) and the Swedish Chess Computer Association (SSDF) ratings, both of which update periodically to reflect advancements in engine development. The CCRL maintains multiple rating lists based on extensive engine-versus-engine testing, with monthly updates derived from millions of games. Engines are tested in round-robin tournaments on normalized hardware, typically an Intel i7-4770k processor, to ensure fair comparisons; for instance, the primary 40/15 list simulates 40 moves in 15 minutes per side, using a general opening book up to 12 moves and 3-4-5 piece endgame tablebases, with pondering disabled. Ratings are computed using Bayesian Elo (BayesElo), which accounts for uncertainty in smaller sample sizes. As of November 2025, the top engines on the CCRL 40/15 list include Stockfish 17.1 at 3644 Elo, followed closely by ShashChess Santiago at 3642 Elo, demonstrating the narrow margins at the elite level. CCRL also produces variant-specific ratings, such as for Fischer Random Chess (FRC), where engines like Stockfish lead with adjusted scores around 3600 Elo under similar protocols. In contrast, the SSDF rating list employs a ladder-based testing protocol, where new or updated engines challenge a sequence of established reference opponents in 40-game matches (80 games total per matchup, alternating colors) to slot into the hierarchy, mimicking human conditions more closely than full round-robins. 
Games follow a time control of 40 moves in 2 hours, followed by 20 moves per additional hour, played on dedicated PCs connected for automated play; hardware is normalized per test (e.g., an AMD Ryzen 7 1800X at 3.6 GHz for recent PC engines), with results including error margins for reliability. The SSDF list, last updated December 31, 2023, ranked Stockfish 16 at 3582 Elo, with Leela Chess Zero competitive but not leading in the final update; the list emphasizes longer time controls but has not been maintained recently. These benchmarks reveal superhuman performance, with top engines exceeding 3500 Elo—far above the human peak of around 2880—but come with limitations inherent to closed rating pools. Engine Elo ratings inflate relative to human scales because they derive solely from matches among increasingly strong programs, lacking the diverse opposition humans face; direct comparability requires human-computer encounters, which are infrequent and show engines winning over 90% of games against grandmasters rated above 2600 Elo. Additionally, single-core or fixed-hardware normalizations in lists like CCRL help isolate software improvements but may not reflect multi-core or modern hardware deployments in practice.
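Both lists rest on the standard Elo expectation formula, which converts a rating difference into an expected score. A small sketch makes the scale of the numbers above concrete:

```python
# Elo expectation: the expected score of A against B as a function of
# their rating difference (the logistic curve with a 400-point scale).
def expected_score(rating_a, rating_b):
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Two engines 2 Elo apart (e.g. 3644 vs 3642) are nearly indistinguishable:
near = expected_score(3644, 3642)
assert 0.50 < near < 0.51

# A ~750-point gap (top engine vs. a ~2880-rated human) is a near-certain win:
gap = expected_score(3630, 2880)
assert gap > 0.98
```

This is also why narrow margins at the top of the CCRL list require very large game counts: distinguishing a 50.3% expectation from 50.0% demands thousands of games, which is what motivates Bayesian error bounds like BayesElo.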

Human-Computer Matches

One of the earliest notable human-computer chess encounters occurred in the late 1960s with Mac Hack VI, developed at MIT by Richard Greenblatt and colleagues. This program achieved a USCF rating of approximately 1650 and became the first chess software to defeat a human opponent in a tournament setting, marking a milestone in demonstrating computational viability against amateur players. By the late 1980s, programs like Deep Thought, created by Carnegie Mellon researchers Feng-hsiung Hsu and Murray Campbell, began challenging grandmasters. In 1988, Deep Thought tied for first place in the Software Toolworks Championship alongside Grandmaster Tony Miles, scoring draws and wins against several elite players with an average opponent rating of 2492, earning it a provisional USCF rating of 2550. In 1989, it defeated Bent Larsen in an exhibition and also beat International Master David Levy, though it lost both games to World Champion Garry Kasparov in a two-game match. These results highlighted the program's growing tactical depth but also its limitations in strategic endgames. The 1997 rematch between Deep Blue—an IBM supercomputer enhanced from Deep Thought—and Kasparov stands as a landmark event. Played in New York City over six games, Deep Blue secured victory with a score of 3.5–2.5, winning the decisive sixth game after one earlier win apiece and three draws; this was the first time a computer defeated a reigning world champion under standard tournament conditions. In 2005, the supercomputer Hydra dominated British Grandmaster Michael Adams in a six-game match in London, winning 5.5–0.5 with five wins and one draw, underscoring hardware-accelerated search's edge over human calculation. During the 2000s, Rybka, developed by Vasik Rajlich, participated in several exhibitions against grandmasters, often prevailing in classical time controls due to its superior positional evaluation, though humans occasionally scored in handicap and faster formats.
Since 2005, no human has won a game against a top-tier chess engine in standard tournament play; the last such victory was former FIDE World Champion Ruslan Ponomariov's defeat of Deep Fritz in a 2005 man-versus-machine event. Exhibitions in the 2010s, such as those pitting Stockfish or Komodo against grandmasters, further illustrated computers' tactical superiority, with engines consistently exploiting deep combinations that humans overlooked under time pressure. Engine ratings, now exceeding 3500 Elo, far surpass the top human level of around 2850, closing the performance gap decisively. These matches revealed key contrasts: computers excel in exhaustive calculation and tactical precision, evaluating millions of positions per second, while humans leverage intuition and long-term planning in ambiguous middlegames. Following Deep Blue's triumph, the focus in chess shifted from adversarial contests to collaboration, as exemplified by Garry Kasparov's advocacy for "advanced chess"—where humans pair with engines to outperform either alone—emphasizing hybrid strengths over pure opposition.

Competitions and Championships

The evolution of computer chess competitions has been marked by a series of prestigious tournaments dedicated exclusively to pitting algorithms against one another, fostering advancements in search efficiency and evaluation functions. The inaugural major event, the 1970 North American Computer Chess Championship organized by the Association for Computing Machinery (ACM), saw Chess 3.0, developed by students at Northwestern University, emerge victorious with a perfect score in its three games, setting the stage for dedicated machine-only contests. This paved the way for the World Computer Chess Championship (WCCC), established in 1974 under the organization now known as the International Computer Games Association (ICGA), which became the premier offline tournament for chess programs. The WCCC was held regularly from the 1970s into the 2020s, with events in various global locations until its final edition in 2024, commemorating its 50th anniversary. Early editions highlighted specialized hardware and algorithmic innovations, such as the 1974 win by the Soviet program Kaissa in Stockholm, which utilized advanced alpha-beta pruning. The 2024 tournament ended in a three-way tie for first that included Stoofvlees and Raptor, underscoring the dominance of collaborative development in modern eras. These milestones reflect a progression from resource-constrained university projects to superhuman performers, with the WCCC influencing ratings by establishing benchmarks for top-tier play. Complementing the WCCC, the Top Chess Engine Championship (TCEC), launched in 2010 as an online league, has grown into a multi-division format with seasonal cycles, emphasizing long-term matches to test engine robustness. TCEC features divisions from novice to premier levels, with time controls typically set at 90 minutes plus 5 seconds per move in the top tier, allowing for deep computational analysis on standardized hardware. This structure has spotlighted rivalries, such as Stockfish's repeated triumphs over Leela Chess Zero in the 2020s, promoting transparency through public broadcasts.
In the 2020s, additional AI-focused events like the Kaggle Game Arena AI Chess Exhibition in 2025 highlighted battles between large language model-based systems, where OpenAI's o3 model defeated xAI's Grok 4 with a 4–0 score in the final, showcasing rapid advancements in LLM integration for chess. Following this, an informal autonomous chess match in 2026 organized by Guillermo Rauch pitted xAI's Grok-4-fast-reasoning against OpenAI's GPT-5.2, with Grok-4-fast-reasoning winning 19 of the last 20 games. Such "AI wars" represent informal yet high-profile clashes, often without strict hardware caps, in contrast to earlier competitions. Rules across these tournaments have evolved from hardware limitations—such as fixed processor speeds in the 1970s WCCC—to post-2010 emphases on software parity, with open-source engines like Stockfish dominating due to community-driven optimizations and shared codebases. Time controls generally range from 5 minutes plus increments for speed variants to 75 minutes plus 15 seconds per move in standard play, ensuring fair evaluation of strategic depth over brute force.

Applications and Societal Impact

Modern Engines and Online Platforms

In the landscape of modern computer chess, Stockfish stands as the preeminent open-source engine, renowned for its exceptional strength and continuous development by a global community. As of November 2025, it holds the highest rating on the Computer Chess Rating Lists (CCRL) at 3644 Elo, surpassing all competitors in standardized benchmarks. Stockfish incorporates neural enhancements through its NNUE (Efficiently Updatable Neural Network) evaluation, which has significantly boosted its positional understanding while maintaining computational efficiency. Komodo Dragon represents a leading commercial engine with a focus on human-like strategic knowledge, blending deep search algorithms with an extensive library of positional patterns derived from grandmaster games. It has competed in the Top Chess Engine Championship (TCEC), often praised for an intuitive playstyle that aids analysis over raw tactical dominance. Meanwhile, Houdini persists as a legacy commercial engine, once valued for its sophisticated search and an evaluation style that emphasized tactical resourcefulness, though it has fallen out of the top competitive tiers by 2025 due to halted development. Online platforms have integrated these engines to enhance accessibility for players worldwide, with Lichess.org offering seamless Stockfish analysis through its distributed network, enabling free, cloud-based evaluation of games directly in the browser. Users can request multi-threaded analysis for positions during play or review, supporting variants like Chess960 alongside standard chess. Chess.com similarly embeds cloud engines for analysis and play, using server-side processing to provide real-time move suggestions and game reviews with deeper searches than local hardware permits. These integrations facilitate browser-based matches against engines or human opponents, democratizing high-level computation without requiring downloads.
Mobile applications extend this functionality with real-time engine assistance, as seen in the Chess.com and Lichess apps, which offer on-device or cloud-synced analysis for positions scanned via camera or manual input. Tools like Chessvision.ai enable instant board scanning and Stockfish evaluation on smartphones, supporting live game assistance during over-the-board play. In the esports realm, 2025 marked a pivotal year with chess's inclusion in the Esports World Cup, where Magnus Carlsen highlighted the sport's affinity for digital platforms, stating that "chess is made for the digital age" due to its visual simplicity and global streaming potential. Carlsen's victory in the inaugural event underscored trends toward hybrid online-offline competitions, drawing over 259,000 peak viewers. Accessibility varies across platforms, with Lichess providing all engine features—including unlimited analysis and tablebase access—for free, funded solely by donations and emphasizing open-source principles. In contrast, Chess.com offers basic free analysis but reserves premium cloud depths, ad-free experiences, and advanced insights for subscribers, starting at $5 monthly. Developers leverage APIs like Lichess's for embedding engine evaluations in custom apps, while Stockfish's JavaScript/WebAssembly port (Stockfish.js) enables browser-based integration without server dependencies. This ecosystem supports both casual users and innovators building custom tools and interfaces.

Influence on Strategy and Training

The advent of powerful chess engines has profoundly transformed strategy by uncovering optimal lines of play that were previously obscure or undiscovered. For instance, following IBM Deep Blue's victory over Garry Kasparov in 1997, the Berlin Defense in the Ruy Lopez surged in popularity among top players, as engine analysis highlighted its solidity and counterattacking potential, shifting it from a niche choice to a mainstay in elite repertoires. Similarly, DeepMind's AlphaZero, trained via self-play reinforcement learning, introduced novel tactical motifs and positional ideas, such as aggressive pawn sacrifices in closed positions, that deviated from classical human theory and inspired grandmasters like Magnus Carlsen to refine their understanding of middlegame imbalances. In training, chess engines have become indispensable tools for game analysis and skill development. Players routinely use open-source engines like Stockfish to dissect their own games, identifying subtle errors in evaluation or missed opportunities that human intuition might overlook, thereby accelerating improvement in tactical and strategic awareness. Additionally, endgame tablebases—exhaustive databases of perfect play in positions with up to seven pieces—enable the automated generation of training puzzles, allowing learners to practice precise calculation in critical scenarios without manual setup. The integration of human and AI capabilities has fostered collaborative approaches to chess preparation, bridging gaps in intuition and computation. Grandmasters often employ engines to explore variations beyond their instinctive grasp, such as probing deep into complex opening lines where human foresight falters, resulting in more robust tournament strategies. During the 2024 World Chess Championship between Ding Liren and D. Gukesh, both players reportedly utilized AI-assisted analysis to investigate novel opening ideas, marking a milestone in the normalization of such hybrid methods at the highest level.
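Game-review tools typically condense the engine's verdicts into a single accuracy metric; a common one is average centipawn loss (ACPL), the mean gap between the engine's best move and the move actually played. The sketch below is a minimal, self-contained version of that calculation, with illustrative evaluation numbers rather than output from a real engine.

```python
def average_centipawn_loss(best_evals, played_evals):
    """
    Average centipawn loss from the moving side's perspective.
    best_evals[i]  : engine eval (centipawns) after the engine's best move
    played_evals[i]: engine eval after the move actually played
    Loss per move is clamped at zero, since a played move cannot
    "beat" the engine's best line under this convention.
    """
    if len(best_evals) != len(played_evals):
        raise ValueError("eval lists must be the same length")
    losses = [max(0, b - p) for b, p in zip(best_evals, played_evals)]
    return sum(losses) / len(losses) if losses else 0.0

# Illustrative values: one accurate move, one 50 cp inaccuracy,
# and one 300 cp blunder, giving an ACPL of about 117.
acpl = average_centipawn_loss([20, 40, 150], [20, -10, -150])
```

Review interfaces then bucket per-move losses into labels such as "inaccuracy" or "blunder", but the underlying arithmetic is this simple comparison against the engine's preferred line.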
Early pioneers such as Claude Shannon laid foundational work in computer chess theory, advocating for algorithmic evaluation functions that mimicked human judgment as far back as the late 1940s, influencing subsequent engine designs. In contemporary contexts, theorists such as Larry Kaufman have advanced this legacy through the development of the Komodo engine, which incorporates human-like strategic knowledge via weighted evaluation terms for factors like king safety and pawn structure, providing players with interpretable insights that enhance training efficacy.
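The weighted-evaluation approach described above can be made concrete with a toy scoring function. This is a hypothetical sketch in the Shannon tradition, not Komodo's actual evaluation; the piece values and the doubled-pawn penalty are illustrative assumptions.

```python
# Minimal hand-crafted evaluation: a weighted sum of features.
# Piece values and penalty weights are illustrative, not taken
# from any real engine.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}
DOUBLED_PAWN_PENALTY = 25  # centipawns per extra pawn on a file

def evaluate(white_pieces, black_pieces, white_pawn_files, black_pawn_files):
    """
    Score a position in centipawns from White's perspective.
    *_pieces: list of piece letters, e.g. ["Q", "R", "N", "P", "P"]
    *_pawn_files: file letter for each pawn, e.g. ["e", "e", "d"]
    """
    def material(pieces):
        return sum(PIECE_VALUES[p] for p in pieces)

    def doubled_pawns(files):
        # Count pawns beyond the first on each occupied file.
        return sum(files.count(f) - 1 for f in set(files))

    score = material(white_pieces) - material(black_pieces)
    score -= DOUBLED_PAWN_PENALTY * doubled_pawns(white_pawn_files)
    score += DOUBLED_PAWN_PENALTY * doubled_pawns(black_pawn_files)
    return score

# White is up a knight but has doubled e-pawns; Black's structure is clean.
score = evaluate(["N", "P", "P"], ["P", "P"], ["e", "e"], ["a", "b"])
```

Real engines use dozens of such terms with carefully tuned weights (or, in NNUE-style engines, replace the hand-crafted sum with a learned network), but the interpretable weighted-sum structure is what makes a Komodo-style evaluation useful for training.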

Cheating Challenges and Detection

The rise of powerful computer chess engines has facilitated cheating in both over-the-board (OTB) and online tournaments, where players illicitly consult engines for assistance during games. A prominent example is the 2022 Sinquefield Cup scandal involving grandmaster Hans Niemann, who defeated world champion Magnus Carlsen in the third round, prompting Carlsen to withdraw and imply Niemann's involvement in cheating; a subsequent investigation found no direct evidence of OTB cheating in that event but concluded Niemann had likely cheated in over 100 online games prior. This incident heightened scrutiny on engine-assisted play, leading to widespread media coverage and debates within the chess community. Post-2020, the online chess boom—fueled by the Netflix series The Queen's Gambit—saw cheating incidents surge, with platforms like Chess.com reporting bans increasing from 5,000–6,000 per month to nearly 17,000 in August 2020 alone, as thousands of players daily used engines to gain unfair advantages in casual and rated matches. Detection techniques primarily rely on statistical analysis of moves compared to top engine recommendations, flagging suspicion when correlation exceeds thresholds like 90% match rates over extended sequences. FIDE employs screening software developed by computer science professor Kenneth Regan, which computes an Intrinsic Performance Rating (IPR) based on move quality relative to the player's historical Elo and engine outputs, using z-scores above 4.5 as a detection threshold for potential cheating; this system analyzes critical moments and patterns to distinguish engine use from natural play. Online platforms integrate similar tools, monitoring for anomalies such as rapid tab-switching to engine interfaces or unexplained performance spikes. Challenges in detection include cheaters' strategies to humanize moves, such as consulting engines only for 1–3 critical decisions per game or selecting second- or third-best engine suggestions to avoid perfect correlation.
Hardware concealment poses another hurdle, with devices like smartphones hidden in clothing, earpieces, or smartwatches enabling remote engine access; notable cases include grandmaster Igors Rausis, caught in 2019 using a phone in a bathroom during a tournament, and players receiving Morse-coded moves from prompters. In response to evolving tactics, 2025 tournament protocols have been updated, incorporating stricter guidelines like mandatory metal detectors, signal jammers, isolated playing areas, and pre-game device scans, as implemented in events such as the Freestyle Chess Grand Slam Tour. Countermeasures encompass advanced anti-cheating software and supplementary methods like psychological profiling. Chess.com's Fair Play system cross-references move accuracy, time patterns, and behavioral data against engine baselines, automatically reviewing flagged games and issuing bans for confirmed violations. Psychological profiling involves assessing player confidence, decision-making inconsistencies, and post-game interviews to identify stress indicators of cheating, as explored in studies of high-profile accusations where behavioral anomalies complement statistical evidence. These layered approaches aim to preserve competitive integrity amid engines' superhuman strength, which exceeds 3500 Elo and tempts misuse by enabling near-perfect play without detection.
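The statistical core of move-matching detection can be sketched as a one-proportion z-test: how far does a player's observed engine-match rate sit above the rate expected for someone of their strength? This is a deliberate simplification; real systems such as Regan's derive the expected rate per move from the player's Elo and the difficulty of each position. The baseline rate and game figures below are assumptions for illustration.

```python
from math import sqrt

def match_z_score(matches: int, moves: int, expected_rate: float) -> float:
    """
    One-proportion z-test for engine-move matching.
    matches       : moves that agreed with the engine's top choice
    moves         : total non-trivial moves examined
    expected_rate : assumed baseline match rate for this player's strength
    """
    observed = matches / moves
    stderr = sqrt(expected_rate * (1 - expected_rate) / moves)
    return (observed - expected_rate) / stderr

# Hypothetical case: a player assumed to match the engine ~55% of the
# time is observed matching on 54 of 60 non-book moves (90%).
z = match_z_score(54, 60, 0.55)
flagged = z > 4.5  # the detection threshold mentioned above
```

A single suspicious game rarely clears the threshold on its own; in practice, evidence accumulates across many games, and a high z-score triggers human review rather than an automatic verdict.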

Future Directions in AI Chess

Efforts to fully solve chess, meaning determining the outcome under perfect play from the starting position, continue to advance through endgame tablebases. As of 2025, comprehensive tablebases such as Lomonosov and Syzygy have solved all positions involving up to seven pieces, including the kings, enabling perfect play in these late-game scenarios. These databases store billions of positions, revealing intricate win, loss, or draw outcomes that were previously unknown. However, extending this to the entire game remains daunting due to the estimated 10^43 to 10^46 legal positions possible in chess, far exceeding current computational feasibility. For comparison, the game of checkers was fully solved in 2007, proving it a draw with perfect play after analyzing approximately 5×10^20 positions over nearly two decades of computation. To enhance efficiency and global accessibility, research is shifting toward low-power AI chess engines that operate on resource-constrained devices, such as smartphones or edge hardware, without sacrificing significant strength. This approach aims to democratize high-level chess for players in underserved regions lacking access to high-end hardware. A key initiative is the FIDE and Google Efficient Chess AI Challenge, launched in November 2024 on Kaggle, which awarded $50,000 in prizes to encourage development of AI agents using limited compute and memory, emphasizing innovative algorithms over brute force. Early participants demonstrated engines achieving blitz ratings above 2800 on online platforms while running on modest hardware, highlighting potential for widespread adoption in education and casual play. Human-AI synergy is evolving through large language models (LLMs), which facilitate casual play and teaching by generating human-like moves and explanations at amateur to intermediate levels.
In 2025 demonstrations, such as the Game Arena AI Exhibition, OpenAI's o3 model defeated rival LLMs in a chess tournament, showcasing their ability to compete in dynamic settings while providing interpretable reasoning for their moves. These tools support ethical alignment in strategy discovery by modeling human playing styles—via projects like Maia-2, which trains neural networks on millions of games to predict and explain moves across skill levels, reducing the black-box nature of traditional engines and promoting fair learning. Open questions persist, particularly in determining perfect play for the full game, where it remains unknown whether chess is a forced draw, a win for White, or a win for Black, given the immense state-space complexity. Integration with other games, such as Go-chess hybrids, is an emerging trend, with AI techniques like those from AlphaZero—originally developed for Go—adapting to multi-domain environments, potentially yielding novel variants that blend strategic elements for broader AI research.
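The infeasibility of solving the full game can be checked with back-of-envelope arithmetic. Assuming 10^44 legal positions (within the cited range) and very roughly 2×10^23 bytes of total data storage worldwide (an order-of-magnitude assumption), even storing one bit per position overshoots global capacity by many orders of magnitude:

```python
# Back-of-envelope scale check for "solving chess". All figures are
# rough illustrative assumptions, not precise estimates.
LEGAL_POSITIONS = 10**44           # within the 1e43-1e46 range cited above
GLOBAL_STORAGE_BYTES = 2 * 10**23  # assumed worldwide storage (~200 ZB)

bits_needed = LEGAL_POSITIONS      # one bit per position (e.g. "White wins?")
bytes_needed = bits_needed // 8
shortfall_factor = bytes_needed // GLOBAL_STORAGE_BYTES
```

Under these assumptions the table would need on the order of 10^19 times all storage on Earth, before even considering the compute required to fill it; this is why research targets tractable subsets like seven-piece tablebases instead.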
