Chess rating system
from Wikipedia

A chess rating system is a system used in chess to estimate the strength of a player, based on their performance versus other players. They are used by organizations such as FIDE, the US Chess Federation (USCF or US Chess), the International Correspondence Chess Federation, and the English Chess Federation. Most systems recalculate ratings after a tournament or match, though some recalculate after individual games. Popular online chess sites such as Chess.com, Lichess, and the Internet Chess Club also implement rating systems. In almost all systems, a higher number indicates a stronger player. In general, players' ratings go up if they perform better than expected and down if they perform worse than expected; the magnitude of the change depends on the ratings of their opponents. The Elo rating system is currently the most widely used, though it has many variations and improvements. Elo-like rating systems have been adopted in many other contexts, such as other games like Go, online competitive gaming, and dating apps.[1]

The first modern rating system was used by the Correspondence Chess League of America in 1939. Soviet player Andrey Khachaturov proposed a similar system in 1946.[2] The first one that made an impact on international chess was the Ingo system in 1948. The USCF adopted the Harkness system in 1950. Shortly after, the British Chess Federation started using a system devised by Richard W. B. Clarke. The USCF switched to the Elo rating system in 1960, which was adopted by FIDE in 1970.[3]

Ingo system


This was the system of the West German Chess Federation from 1948 until 1992, designed by Anton Hoesslinger and published in 1948. It was replaced by an Elo system, Deutsche Wertungszahl. It influenced some other rating systems.

New players receive a high, fixed starting score. A player's new rating is based on the average rating of the entrants in their competition: a player scoring above 50% subtracts from that average the number of percentage points by which their score exceeds 50% (e.g. a 12–4 or 24–8 wins-to-losses result counts as a 75% tournament outcome, so 25 points are subtracted), while a player scoring below 50% adds the corresponding shortfall to the average. Every player is thus completely recalibrated after each tournament, and no player can gain or shed more than 50 points relative to the tournament average (the amount for a perfect or zero score). Unlike other modern, nationally used chess systems, lower numbers indicate better performance.[4]

Harkness system


This system was devised by tournament organizer Kenneth Harkness, who explained it in Chess Review articles in 1956, fourteen years after the magazine first used it. It was used by the USCF from 1950 to 1960 and also by other organizations.

When players compete in a tournament, the average rating of their competition is calculated. If a player scores 50%, they receive the average competition rating as their performance rating. If they score more than 50%, their new rating is the competition average plus 10 points per percentage point exceeding 50. If they score less, their new rating is the competition average minus 10 points per percentage point shy of 50.[5]

Example


A player with a rating of 1600 plays in an eleven-round tournament and scores 2½–8½ (22.7%) against competition with an average rating of 1850. This is 27.3% below 50% (50–22.7%), so their new rating is 1850 − (10 × 27.3) = 1577.[6]
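The arithmetic above is simple enough to express in a few lines of Python (a minimal sketch; the function name is illustrative, not part of any USCF specification):

```python
def harkness_rating(avg_opponent_rating: float, score: float, games: int) -> float:
    """New rating under the Harkness system: the competition's average
    rating, plus or minus 10 points per percentage point above or
    below a 50% score."""
    percent = 100.0 * score / games
    return avg_opponent_rating + 10.0 * (percent - 50.0)

# The worked example: 2.5/11 (about 22.7%) against a 1850-average field.
print(round(harkness_rating(1850, 2.5, 11)))  # 1577
```

Note that the player's own previous rating (1600) plays no role: under Harkness, each tournament result fully determines the performance rating.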

English Chess Federation system


The ECF grading system was used by the English Chess Federation until 2020. It was devised by Richard W. B. Clarke and published in 1958. Individual games can have a large effect on a grade, but results are never immediately effective: every game won, lost or drawn in a registered competition (including English congresses, local and county leagues, and registered, approved team events) is averaged into a personal grade (ECF Grade) over a cycle of at least 30 games.

A player's contributing score for each game is their opponent's grade, adjusted by adding 50 points for a win, subtracting 50 points for a loss, and making no adjustment for a draw; if the two players' grades differ by more than 40 points, the opponent's grade is treated as exactly 40 points above or below the player's own. Negative game scores are deemed nil, so a personal grade of 50 arose quickly in the lower leagues and experienced novices aspire to a 100 grading. The cyclical averaging and cycle-persistent grades are the system's hallmarks. The maximum gain in a single cycle is 90 points (the 40-point gap cap plus 50 for the win), which would entail beating much higher-graded opponents in every game; the opposite applies to losses.

To convert between ECF and Elo grades, the formula ELO = (ECF * 7.50) + 700 was sometimes used.[7]
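The per-game contribution and the conversion formula can be sketched as follows (assuming the 40-point grade-gap cap and ±50 win/loss adjustment described above; function names are illustrative):

```python
def ecf_game_points(own_grade: float, opp_grade: float, result: str) -> float:
    """Points one game contributes to the ECF 30-game averaging cycle."""
    # Opponents more than 40 grading points away are treated as exactly
    # 40 points above or below the player's own grade.
    opp = max(own_grade - 40, min(own_grade + 40, opp_grade))
    adjustment = {"win": 50, "draw": 0, "loss": -50}[result]
    return max(0.0, opp + adjustment)  # negative game scores are deemed nil

def ecf_to_elo(ecf_grade: float) -> float:
    """Approximate ECF-to-Elo conversion sometimes used."""
    return ecf_grade * 7.5 + 700

print(ecf_to_elo(100))  # 1450.0
```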

Elo rating system


The Elo system was invented by Arpad Elo and is the most common rating system. It is used by FIDE, other organizations, and some chess websites such as the Internet Chess Club and chess24.com. Elo once stated that the process of rating players was in any case rather approximate; he compared it to "the measurement of the position of a cork bobbing up and down on the surface of agitated water with a yardstick tied to a rope and which is swaying in the wind".[8] Any attempt to consolidate all aspects of a player's strength into a single number inevitably misses some of the picture.

FIDE divides its normal tournaments into categories according to the average rating of the players. Each category is 25 rating points wide: category 1 is an average rating of 2251 to 2275, category 2 is 2276 to 2300, and so on. Categories for women's tournaments, including Category 1, currently begin 200 points lower.[9]

Elo scales, 1978[10]
Rating range Category
2600+ No formal title, but sometimes informally known as "super grandmasters"[11]
2500–2599 Grandmasters (GM)
2400–2499 International Masters (IM)
2300–2399 FIDE Masters (FM)
2200–2299 Candidate Masters (CM)
2000–2199 Experts
1800–1999 Class A, category 1
1600–1799 Class B, category 2
1400–1599 Class C, category 3
1200–1399 Class D, category 4
1000–1199 Class E, category 5
Below 1000 Novices

The USCF uses a modification of the Elo system in which the K-factor varies and bonus points are awarded for superior performance in a tournament.[12] USCF ratings are generally 50 to 100 points higher than the FIDE equivalents.[13]

USCF rating categories
Category Rating range
Senior master 2400 and up
National master 2200–2399
Expert 2000–2199
Class A 1800–1999
Class B 1600–1799
Class C 1400–1599
Class D 1200–1399
Class E 1000–1199
Class F 800–999
Class G 600–799
Class H 400–599
Class I 200–399
Class J 100–199

Example


Elo gives an example of updating the rating of Lajos Portisch, rated 2635 before a tournament in which he scored 10½ points out of a possible 16 against 16 rated opponents. First, the rating difference against each opponent is recorded. The expected score against each is then read from a table that lists expected scores for each band of rating difference. For instance, one opponent was Vlastimil Hort, rated 2600; the 35-point difference gave Portisch an expected score of 0.55. No single game can actually produce this score (only 0, ½ or 1 are possible), but because the expectation exceeds 0.5, even a draw slightly lowers Portisch's rating and slightly raises Hort's, moving their ratings (ignoring their other results in the tournament) slightly closer together.

Portisch's expected scores are summed over all of his games, giving a total expected score of 9.66 for the tournament. The formula is then:

new rating = old rating + (K × (W−We))

K is 10; W is the actual match/tournament score; We is the expected score.

Portisch's new rating[14] is 2635 + 10×(10.5−9.66) = 2643.4.
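Using the logistic expected-score formula now standard in Elo implementations (Elo's original tables were based on the normal distribution, so table values can differ slightly), the figures above can be reproduced; the function names here are illustrative:

```python
def expected_score(r_player: float, r_opponent: float) -> float:
    """Expected score on the 400-point logistic scale."""
    return 1.0 / (1.0 + 10.0 ** ((r_opponent - r_player) / 400.0))

def new_rating(old: float, actual: float, expected: float, k: float = 10) -> float:
    """new rating = old rating + K * (W - We)"""
    return old + k * (actual - expected)

print(round(expected_score(2635, 2600), 2))   # 0.55  (Portisch vs Hort)
print(round(new_rating(2635, 10.5, 9.66), 1)) # 2643.4
```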

Linear approximation


Elo devised a linear approximation to his full system, removing the need for look-up tables of expected scores. With that method, a player's new rating is

Rnew = Rold + K × ((W − L)/2 + ΣDi/(4C))

where Rnew and Rold are the player's new and old ratings respectively, Di is the opponent's rating minus the player's rating, W is the number of wins, L is the number of losses, C = 200 and K = 32. The term (W − L)/2 is the player's score above or below 50%, while −ΣDi/(4C) is the expected score above or below 50%, on the basis that 4C rating points correspond to 100%.[15]

The USCF used a modification of this system to calculate ratings after individual games of correspondence chess, with a K = 32 and C = 200.[16]
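A sketch of the linear approximation with the stated constants (K = 32, C = 200); the function name is illustrative:

```python
def elo_linear_update(old_rating: float, opponent_ratings: list,
                      wins: int, losses: int,
                      k: float = 32, c: float = 200) -> float:
    """Elo's linear approximation: no expected-score table required.
    Each D_i is the opponent's rating minus the player's rating."""
    d_sum = sum(r - old_rating for r in opponent_ratings)
    return old_rating + k * ((wins - losses) / 2 + d_sum / (4 * c))

# Beating a single equally rated opponent gains K/2 = 16 points.
print(elo_linear_update(1500, [1500], wins=1, losses=0))  # 1516.0
```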

Glicko rating system


The Glicko system is a more modern approach, invented by Mark Glickman as an improvement of the Elo system. It is used by Chess.com, the Free Internet Chess Server, and other online chess servers. The Glicko-2 system is a refinement of the original Glicko system and is used by Lichess, the Australian Chess Federation, and others.

Turkey UKD system


The Turkish Chess Federation (TSF) uses a combination of the Elo and UKD systems.[17]

USA ICCF system


The ICCF U.S.A. used its own system in the 1970s. It now uses the Elo system.

Deutsche Wertungszahl


The Deutsche Wertungszahl system replaced the Ingo system in Germany.

Chessmetrics


The Chessmetrics system was invented by Jeff Sonas. It is based on computer analysis of a large database of games and is intended to be more accurate than the Elo system.

Universal Rating System


The Universal Rating System was developed by Mark Glickman, Jeff Sonas, J. Isaac Miller and Maxime Rischard, with the support of the Grand Chess Tour, the Kasparov Chess Foundation, and the Chess Club and Scholastic Center of Saint Louis.[18]

Rating systems using computers as a reference


Many rating systems give a rating to players at a given time, but cannot compare players from different eras. In 2006, Matej Guid and Ivan Bratko pioneered a new way of rating players, by comparing their moves against the recommended moves of a chess engine. The authors used the program Crafty and argued that even a lower-ranked program (Elo around 2700) could identify good players.[19] In their follow-up study, they used Rybka 3 to estimate chess player ratings.[20]

In 2017, Jean-Marc Alliot compared players using Stockfish 6, which has an Elo rating around 3300, well above top human players.[21]

Chronology

  • 1933 – The Correspondence Chess League of America (now ICCF U.S.A.) is the first national organization to use a numerical rating system. It chooses the Short system which clubs on the west coast of the US had used. In 1934 the CCLA switched to the Walt James Percentage System but in 1940 returned to a point system designed by Kenneth Williams.
  • 1942 – Chess Review uses the Harkness system, an improvement of the Williams system.
  • 1944 – The CCLA changes to an improved version of the Williams system devised by William Wilcock. A slight change to the system was made in 1949.
  • 1946 – The USSR Chess Federation uses a non-numerical system to classify players.
  • 1948 – The Ingo system is published and used by the West German Chess Federation.
  • 1949 – The Harkness system is submitted to the USCF. The British Chess Federation adopts it later and uses it at least as late as 1967.[22]
  • 1950 – The USCF starts using the Harkness system and publishes its first rating list in the November issue of Chess Life. Reuben Fine is first with a rating of 2817 and Sammy Reshevsky is second with 2770.[23]
  • 1959 – The USCF names Arpad Elo the head of a committee to examine all rating systems and make recommendations.
  • 1961 – Elo develops his system and it is used by the USCF.[24] It is published in the June 1961 issue of Chess Life.[25]
  • 1970 – FIDE starts using the Elo system. Bobby Fischer is at the top of the list.[26]
  • 1978 – Elo's book (The Rating of Chessplayers, Past and Present) on his rating system is published.
  • 1993 – Deutsche Wertungszahl replaces the Ingo system in Germany.
  • 2001 – the Glicko system by Glickman is published.[27]
  • 2005 – Chessmetrics is published by Jeff Sonas.[28]
  • 2006 – Matej Guid and Ivan Bratko publish the research paper "Computer Analysis of World Chess Champions", which rates champions by comparing their moves to the moves chosen by the computer program Crafty.[29]
  • 2017 – Jean-Marc Alliot publishes the research paper "Who is the Master?", which rates champions by comparing their moves to Stockfish 6.[21]

from Grokipedia
The Elo rating system is the primary statistical method used by the Fédération Internationale des Échecs (FIDE) for calculating the relative skill levels of chess players, assigning each a numerical rating that adjusts based on their performance in rated games against opponents of known strength. Developed by Hungarian-American physics professor and chess master Arpad Emmerich Elo, the system uses a logistic probability model in which a 200-point rating difference corresponds to an expected score of approximately 76% for the higher-rated player. It was first adopted by the United States Chess Federation (USCF) in 1960 and by FIDE, the sport's governing body, in 1970, with the inaugural FIDE rating list published in July 1971. FIDE's implementation of the Elo system produces monthly rating lists for standard, rapid, and blitz formats, serving as an objective measure of player performance used to determine tournament pairings, eligibility for titles such as grandmaster (requiring a minimum rating of 2500), and international rankings. Ratings are calculated by comparing a player's actual score in a tournament to their expected score against their opponents, with the difference multiplied by a development coefficient (K-factor) that varies by player age, experience, and rating level: 40 for new players until they complete 30 games and for players under 18 years old with ratings below 2300; 20 for players with ratings below 2400; and 10 for those rated 2400 or higher, to limit rapid fluctuations. Initial ratings for unrated players are established after at least five games in a FIDE-registered event, set as the average opponent rating adjusted by hypothetical results against two 1800-rated players, and bounded between 1400 and 2200. The system addresses rating inflation through periodic adjustments and requires minimum time controls for rated games, aiming for reliability across competitive levels from novices (ratings around 1000) to world champions (peaks exceeding 2800).

Beyond FIDE, variants of the Elo system are used by national federations and online platforms, though they may differ in scaling and update frequency, underscoring its foundational role in quantifying chess skill worldwide.

Fundamentals

Purpose and Design Goals

A chess rating system is a numerical method designed to estimate and quantify a player's relative skill based on their results in games against other players, providing an objective measure of playing strength. Its primary goals encompass facilitating fair pairings in tournaments by matching players of comparable ability, predicting the expected outcomes of matches through probabilistic scoring, tracking individual progress across games and events, and enabling standardized classification of players, from novices with low ratings to elite grandmasters with high ones. These objectives ensure competitive balance and motivate improvement by offering a clear, merit-based measure.

The historical motivation for chess rating systems arose in the early 20th century, as the growing number of competitive players necessitated replacing subjective assessments, such as informal titles or opinion-based rankings, with consistent, data-driven evaluations to support organized play. Prior to formalized systems, player strength was often gauged through judgments by experts or organizers, leading to inconsistencies that hindered fair competition and reliable performance tracking.

Key challenges addressed by these systems include accommodating variable competition levels across tournaments and regions, mitigating the effects of infrequent play that can cause ratings to become outdated or unstable, and grappling with the uncertainty inherent in skill estimation due to factors like performance variability and limited game samples. By incorporating statistical principles, rating systems aim to produce robust estimates despite these issues, ensuring ratings reflect true ability as accurately as possible. Metrics for evaluating the success of a chess rating system focus on its predictive accuracy for expected scores in matches, the long-term stability of ratings amid ongoing play, and its adaptability to diverse formats such as over-the-board and online play. The Elo system exemplifies these goals as a foundational benchmark for modern rating designs.

Core Principles and Components

Chess rating systems fundamentally rely on three key components: initial rating assignment for new players, evaluation of performance through game scores, and adjustment of ratings based on those results against opponents of known strength. Initial ratings are typically assigned to unrated players by averaging the ratings of opponents they face in their first few games, often weighted by factors such as the number of games played and the recency of those opponents' ratings, with a cap on the effective number of games considered to ensure reliability. For example, in FIDE's system (as of March 2024), the average opponent rating includes two hypothetical opponents rated 1800 (each counted as a draw), adjusted based on the player's score, with the result bounded between 1400 and 2200 after at least five games. Alternatively, provisional periods allow ratings to stabilize after a minimum number of games against rated opponents, preventing premature assignments based on limited data.

Performance evaluation centers on the actual score achieved in each game (1 point for a win, 0.5 for a draw, and 0 for a loss), aggregated over multiple games to reflect overall results. Central to these systems are the expected score, which estimates the probability of one player defeating another based on their rating difference, and the rating difference itself as a predictor of outcomes. The expected score is derived from a formula modeling win probabilities, in which a larger rating advantage for player A over B increases the likelihood of A's victory. The K-factor modulates the magnitude of rating adjustments; it is typically higher for inexperienced players, allowing rapid convergence to true strength, and lower for veterans, reflecting greater stability, so that changes are proportional to how surprising the results are.

The general update principle across most systems follows the formula:

new rating = old rating + K × (actual score − expected score)

This adjustment rewards outperforming expectations and penalizes underperformance, promoting convergence toward a player's underlying ability over time. Special cases in score calculation include draws, which contribute 0.5 points to the actual score, and forfeits, which are generally excluded unless due to exceptional circumstances such as force majeure with at least one move played, to maintain the integrity of performance metrics. Forfeits without any moves played are often not rated, avoiding artificial inflation of scores. Byes are typically not included in rating calculations, as they do not involve played games.

Rating inflation, where average ratings rise without corresponding skill improvements, and deflation, the opposite trend, arise from factors like inconsistent K-factor applications or expanding player pools, potentially skewing comparisons across eras. Regularization techniques mitigate these by imposing rating floors (e.g., minimums of 1000), periodic recalibrations through normalization of rating distributions, or adjustments based on long-term performance trends to preserve the system's statistical validity. These methods help ensure ratings remain a reliable measure of relative strength.
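The general update rule and the FIDE K-factor tiers can be sketched together in Python (a simplification: among other details, FIDE keeps K at 10 permanently once a player's published rating has reached 2400, which this sketch ignores; function names are illustrative):

```python
def fide_k_factor(rating: float, games_completed: int, age: int) -> int:
    """Development coefficient following the tiers described above:
    40 for new players and for under-18s rated below 2300,
    20 below 2400, and 10 at 2400 or above."""
    if games_completed < 30 or (age < 18 and rating < 2300):
        return 40
    if rating < 2400:
        return 20
    return 10

def rating_update(old: float, actual: float, expected: float, k: float) -> float:
    """new rating = old rating + K * (actual score - expected score)"""
    return old + k * (actual - expected)

k = fide_k_factor(rating=2150, games_completed=80, age=30)
print(k, rating_update(2150, 1.0, 0.5, k))  # 20 2160.0
```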

Historical Systems

Ingo System

The Ingo system, developed in 1948 by Anton Hoesslinger and adopted by the West German Chess Federation, represented one of the earliest attempts to create a numerical rating framework for chess players. Named after Hoesslinger's hometown of Ingolstadt, it was published in the April 1948 edition of Bayerische Schachnachrichten and marked a shift from qualitative assessments to quantifiable measures of skill in the pre-Elo era. The system gained acceptance in West Germany, where it was used to generate ranking lists that aligned closely with the estimates of experienced chess experts, providing a simple tool for organizing tournaments and player classifications. At its core, the Ingo system relied on a point-based scoring approach, awarding 1 point for a win, 0.5 points for a draw, and 0 points for a loss. A player's rating was computed from their average points per game over a specified period, adjusted to account for the relative strength of opponents through a performance metric derived from the deviation of the achieved score from an even result. This produced a scale in which lower numbers denoted stronger players, starting near zero for top performers and extending upward for novices; for context, typical club-level ratings fell between 100 and 200. Unlike later systems, it featured no formal categories such as letter grades, though equivalents to modern classifications placed elite players below 50 and beginners above 250. The method's simplicity facilitated manual calculation, making it practical without advanced computational resources.

Despite its innovations, the Ingo system had notable limitations, including the absence of probabilistic models to predict expected outcomes between players of known ratings, which reduced its utility for forecasting match results. It was also highly sensitive to the number of games played, as ratings could fluctuate significantly with small sample sizes, and while it incorporated opponent strength, the adjustment was rudimentary and prone to inconsistencies across varying competition levels. The system remained in use in Germany for over four decades but was eventually supplanted in the early 1990s by the more statistically robust Deutsche Wertungszahl. It served as a foundational influence on subsequent approaches, including the Harkness system adopted by the U.S. Chess Federation in 1950, which built upon its opponent-adjusted scoring principles.

Harkness System

The Harkness system was introduced in 1950 by Kenneth Harkness for the United States Chess Federation (USCF), marking an advancement in rating methodologies by incorporating adjustments for opponent strength while building on the simplicity of earlier approaches like the Ingo system. It served as the primary rating mechanism for the USCF from 1950 until 1960, when it was supplanted by the Elo system, and was praised for its ability to reward performance against stronger opposition more equitably than flat scoring methods. Under the Harkness system, ratings were updated after each tournament based on the player's performance relative to the average strength of their opponents. The new rating was calculated as the average rating of all opponents faced plus 10 points for each percentage point by which the player's score exceeded 50%, or minus 10 points for each percentage point below 50%. For example, a player scoring 60% in a tournament against an average opponent rating of 1800 would receive a new rating of 1800 + (10 × 10) = 1900. This method ensured that performing as expected against the field maintained the rating, while superior or inferior results led to appropriate adjustments. Class designations were tied to specific thresholds, such as Class A (1800–1999) or reaching 2200 for Expert/Master status, facilitating clear progression through player categories. The system's key advantage lay in its explicit consideration of opponent strength, which provided a fairer measure of skill progression than unadjusted win counts, though it required manual tabulation and was eventually deemed insufficiently probabilistic for broader adoption.

English Chess Federation System

The English Chess Federation (ECF) grading system, originally developed under the British Chess Federation (BCF), traces its origins to informal classifications in the 1920s but was formally established in the 1950s and refined in the post-World War II period. Devised by statistician Richard W. B. Clarke, it served as the primary domestic rating mechanism for chess players in England until its replacement in 2020. Grades ranged from 0 for novice players to over 300 for grandmaster-level strength, providing a three-digit scale distinct from international systems. The core method relied on computing an average performance rating derived from a player's most recent games, typically the last 30 or more, with weighting applied based on opponent strength to emphasize competitive encounters. For each event or set of games, a performance grade was calculated by adjusting the average opponent grade according to the player's results, using predefined tables for expected outcomes. This approach incorporated variants for rapidplay games, which had separate grading lists to account for faster time controls, and later adaptations for online play under ECF oversight. Grade adjustments followed a structured formula: change in grade = (actual score − expected score) × factor, where the expected score was interpolated from grade difference tables (e.g., a 20-grade advantage yielding an expected win probability of about 64%), and the factor was a constant such as 20 to scale the update magnitude. Updates occurred biannually until later monthly shifts, ensuring grades reflected sustained performance rather than isolated results. New players received provisional grades after a minimum threshold of games, often 10 to 30, to avoid volatility from limited data; these were marked and updated cautiously until stability was achieved.
For team competitions, adjustments weighted games by event prestige and context, such as board order or match importance, to better capture collective contributions without inflating individual grades. The system remained active for domestic grading until 2020, when it transitioned to an Elo-based model, but historical ECF grades continue to inform event seeding and are mapped to Elo ratings for equivalence; under the conversion formula Elo = (ECF × 7.5) + 700, a grade of about 187 corresponds to a rating of 2100. This structure bore similarities to earlier systems in its emphasis on performance relative to opponents.

Standard Rating Systems

Elo Rating System

The Elo rating system, developed by Hungarian-American physicist and chess master Arpad Elo in the late 1950s and early 1960s, is a statistical method for estimating player skill levels in chess based on game outcomes. As chairman of the United States Chess Federation (USCF) Rating Committee starting in 1959, Elo refined earlier systems to create a more robust model, which the USCF officially adopted in 1960 to replace inconsistent prior methods. The system's mathematical foundation models performance differences between players with a logistic probability curve rather than a normal one, for a better fit to competitive outcomes. Elo validated the model through extensive analysis of thousands of games, including data from U.S. Open tournaments (1973–1975) involving 1,514 players over 12 rounds and historical crosstables spanning 120 years, confirming its predictive accuracy with chi-square tests showing close alignment to observed results (standard deviation ≈ 1.65). The Fédération Internationale des Échecs (FIDE) adopted the Elo system in 1970 following trials and presentations at congresses such as the 1965 Wiesbaden event, marking its transition to an international standard. This adoption led to FIDE's first official rating list in 1971, covering the top players, with ratings scaled such that a 200-point difference predicts approximately a 76% expected score for the higher-rated player.

The system's core formula calculates the expected score E_A for player A against opponent B as

E_A = 1 / (1 + 10^((R_B − R_A)/400)),

where R_A and R_B are the current ratings of players A and B respectively; the 400-point scaling factor means, for example, that a 400-point advantage yields an expected score of about 0.91. After a game, player A's updated rating R_A' is

R_A' = R_A + K × (S_A − E_A),

with S_A denoting the actual score (1 for a win, 0.5 for a draw, 0 for a loss) and K the development coefficient that controls rating volatility.

This update symmetrically adjusts the opponent's rating by the negative of the change, preserving the zero-sum nature of the system. The K-factor varies to balance stability for established players against responsiveness for newcomers and juniors, mitigating floor effects for beginners (who start with provisional ratings around 1200–1500) and ceiling effects for top players (whose gains diminish above certain thresholds). In the original USCF implementation, K = 32 for most adult players, with variations such as 24 or 16 based on rating level, and higher values for juniors to accelerate convergence. FIDE's current regulations (effective 1 March 2024) set K = 40 for new players until 30 games are completed and for juniors under 18 with ratings below 2300 (until the end of their 18th year); K = 20 for players under 2400; and K = 10 for those reaching 2400 or higher, with further reductions if the product of games played and K exceeds 700 in a rating period, to prevent excessive swings.

To illustrate, consider a game between Player A (rating 2400) and Player B (rating 2200), assuming K = 32 for both. The rating difference is 200 points, so Player A's expected score is

E_A = 1 / (1 + 10^((2200 − 2400)/400)) = 1 / (1 + 10^(−0.5)) ≈ 1 / 1.3162 ≈ 0.76.

Player B's expected score is E_B = 1 − E_A ≈ 0.24.
  • If A wins (S_A = 1), A's rating change is 32 × (1 − 0.76) = +7.68 (rounded to +8), so R_A' = 2408; B loses 8 points, to 2192.
  • If the game is drawn (S_A = 0.5), A's change is 32 × (0.5 − 0.76) = −8.32 (rounded to −8), so R_A' = 2392; B gains 8, to 2208.
  • If B wins (S_A = 0), A's change is 32 × (0 − 0.76) = −24.32 (rounded to −24), so R_A' = 2376; B gains 24, to 2224.
This example demonstrates how the system rewards upsets while stabilizing ratings over multiple games. Globally, the Elo system forms the backbone of FIDE's official ratings for over-the-board play, with separate lists for standard, rapid, and blitz formats, and has been adapted for some online platforms while influencing millions of players through its scalable, performance-based adjustments. By 1982, the number of FIDE-rated tournaments had surged to 544 annually, from 70 in 1970, underscoring its enduring impact on chess organization and competition.
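The three outcomes above can be checked with a short symmetric-update sketch (illustrative code, not FIDE's implementation):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Logistic expected score for the player rated r_a."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def update_pair(r_a: float, r_b: float, score_a: float, k: float = 32):
    """Zero-sum Elo update: B's change is the negative of A's."""
    delta = k * (score_a - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta

for score, label in [(1.0, "A wins"), (0.5, "draw"), (0.0, "B wins")]:
    a, b = update_pair(2400, 2200, score)
    print(f"{label}: A -> {a:.0f}, B -> {b:.0f}")
# A wins: A -> 2408, B -> 2192
# draw:   A -> 2392, B -> 2208
# B wins: A -> 2376, B -> 2224
```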

Glicko Rating System

The Glicko rating system was created by Mark Glickman in 1995 to address limitations in the Elo system by incorporating a measure of rating uncertainty. It extends the Elo framework by introducing a rating deviation (RD), which quantifies the reliability of a player's rating estimate, starting at 350 for unrated players to reflect high initial uncertainty. The system scales the logistic probability using a factor q = ln(10)/400 ≈ 0.005756, ensuring compatibility with Elo-like rating values around 1500 for average players. Widely adopted online, it powers rating calculations on platforms such as Chess.com and the Internet Chess Club (ICC), where it supports dynamic adjustments for irregular play. It has also been adopted by the USCF for over-the-board ratings since 2016. In the original Glicko-1 variant, ratings update after a rating period of games using

r' = r + (q / (1/RD² + 1/d²)) × Σⱼ g(RD_j) × (s_j − E_j),

where q = ln(10)/400, the attenuation factor g(RD) = 1 / √(1 + 3q²RD²/π²) down-weights games against opponents whose own ratings are uncertain, s_j and E_j are the actual and expected scores against opponent j, and d² measures the precision of the period's results.
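A sketch of a single Glicko-1 rating-period update under these formulas (the companion RD update and the pre-period RD inflation are omitted for brevity; `opponents` holds (rating, RD, score) triples, and the function names are illustrative):

```python
import math

Q = math.log(10) / 400  # ≈ 0.005756

def g(rd: float) -> float:
    """Attenuation factor: discounts results against uncertain opponents."""
    return 1.0 / math.sqrt(1.0 + 3.0 * Q * Q * rd * rd / math.pi ** 2)

def expected(r: float, r_j: float, rd_j: float) -> float:
    """Expected score against opponent j, attenuated by the opponent's RD."""
    return 1.0 / (1.0 + 10.0 ** (-g(rd_j) * (r - r_j) / 400.0))

def glicko1_rating_update(r: float, rd: float, opponents) -> float:
    """One rating-period update: r' = r + q/(1/RD^2 + 1/d^2) * sum(...)."""
    d_inv = Q * Q * sum(g(rd_j) ** 2
                        * expected(r, r_j, rd_j) * (1 - expected(r, r_j, rd_j))
                        for r_j, rd_j, _ in opponents)  # this is 1/d^2
    total = sum(g(rd_j) * (s_j - expected(r, r_j, rd_j))
                for r_j, rd_j, s_j in opponents)
    return r + Q / (1 / rd ** 2 + d_inv) * total

# Glickman's canonical worked example: a 1500 player with RD 200 beats a
# 1400 (RD 30) and loses to a 1550 (RD 100) and a 1700 (RD 300).
result = glicko1_rating_update(1500, 200,
                               [(1400, 30, 1), (1550, 100, 0), (1700, 300, 0)])
print(round(result))  # 1464
```

Note how the loss to the 1700-rated opponent costs relatively little: the opponent's large RD (300) shrinks g(RD) and therefore the weight of that game.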