Graduate Record Examinations
from Wikipedia

Graduate Record Examination: General Test
Logo used since 2024
Acronym: GRE
Type: Computer-based or paper-based standardized test
Administrator: Educational Testing Service
Skills tested: Analytical writing, quantitative reasoning, and verbal reasoning
Purpose: Admissions to master's and doctoral degree programs at various universities
Year started: 1936 (89 years ago)
Duration: 1 hour and 58 minutes[1]
Score range: Analytical writing: 0.0 to 6.0 (in 0.5-point increments); verbal reasoning: 130 to 170 (in 1-point increments); quantitative reasoning: 130 to 170 (in 1-point increments)
Score validity: 5 years
Offered: Computer-based test: multiple times a year (depending on test center availability); paper-based test: up to 3 times a year, in October, November, and February[2]
Restrictions on attempts: Computer-based test: once every 21 days, up to 5 times a year (applies even if the candidate cancels scores on a previously taken test)[3]; paper-based test: as often as it is offered[3]
Regions: About 1,000 test centers in more than 160 countries[4]
Languages: English
Annual number of test takers: 256,215 (testing year 2023–24)[5]
Prerequisites: No official prerequisite. Intended for bachelor's degree graduates and undergraduates about to graduate; fluency in English is assumed.
Fee: US$205[6] (limited "Fee Reduction Program" offers for U.S. citizens or resident aliens who demonstrate financial need, and for national programs in the United States that work with underrepresented groups[7])
Used by: Most graduate schools in the U.S. and a few other countries
Website: www.ets.org/gre

The Graduate Record Examinations (GRE) is a standardized test that is part of the admissions process for many graduate schools[8] in the United States, Canada,[9] and a few other countries. The GRE is owned and administered by the Educational Testing Service (ETS).[10] The test was established in 1936 by the Carnegie Foundation for the Advancement of Teaching.[11]

According to ETS, the GRE General Test aims to measure verbal reasoning, quantitative reasoning, analytical writing, and critical thinking skills acquired over a long period of learning. The content of the GRE consists of data analysis and interpretation, arguments and reasoning, algebra, geometry, arithmetic, and vocabulary sections. The GRE General Test is offered as a computer-based exam administered at testing centers and institutions owned or authorized by Prometric. In the graduate school admissions process, the level of emphasis placed upon GRE scores varies widely among schools and departments; the importance of a GRE score can range from a mere admission formality to an important selection factor.

The GRE was significantly overhauled in August 2011, resulting in an exam that is adaptive on a section-by-section basis, rather than question by question, so that the performance on the first verbal and math sections determines the difficulty of the second sections presented (excluding the experimental section). Overall, the test retained the sections and many of the question types from its predecessor, but the scoring scale was changed to a 130 to 170 scale (from a 200 to 800 scale).[12]

The cost to take the test is US$205,[6] although ETS will reduce the fee under certain circumstances.[7] It also provides financial aid to GRE applicants who prove economic hardship.[13] ETS does not release scores that are older than five years, although graduate program policies on the acceptance of scores older than five years will vary.

Once almost universally required for admission to Ph.D. science programs in the U.S., the GRE has seen its use for that purpose fall precipitously.[14]

History


The Graduate Record Examinations was "initiated in 1936 as a joint experiment in higher education by the graduate school deans of four Ivy League universities and the Carnegie Foundation for the Advancement of Teaching."[11]

The first universities to experiment with the test on their students were Harvard University, Yale University, Princeton University and Columbia University.[15] The University of Wisconsin was the first public university to ask their students to take the test in 1938.[16] It was first given to students at the University of Iowa in 1940, where it was analyzed by psychologist Dewey Stuit.[15] It was first taken by students at Texas Tech University in 1942.[17] In 1943, it was taken by students at Michigan State University, where it was analyzed by Paul Dressel.[18] It was taken by over 45,000 students applying to 500 colleges in 1948.[11]

"Until the Educational Testing Service was established in January, 1948, the Graduate Record Examination remained a project of the Carnegie Foundation."[11]

2011 revision


In 2006, ETS announced plans to make significant changes in the format of the GRE. Planned changes for the revised GRE included a longer testing time, a departure from computer-adaptive testing, a new grading scale, and an enhanced focus on reasoning skills and critical thinking for both the quantitative and verbal sections.[19]

On April 2, 2007, ETS announced the decision to cancel plans for revising the GRE.[20] The announcement cited concerns over the ability to provide clear and equal access to the new test after the planned changes as an explanation for the cancellation. The ETS stated, however, that they did plan "to implement many of the planned test content improvements in the future", although specific details regarding those changes were not initially announced.

Changes to the GRE took effect on November 1, 2007, as ETS started to include new types of questions in the exam. The changes mostly centered on "fill in the blank" questions for the mathematics section, which require the test-taker to fill in the blank directly rather than choose from a multiple-choice list of answers. ETS announced plans to introduce two of these new question types in each quantitative section, while the majority of questions would be presented in the regular format.[21]

Since January 2008, the Reading Comprehension passages within the verbal sections have been reformatted: line numbers are "replaced with highlighting when necessary in order to focus the test taker on specific information in the passage" and to "help students more easily find the pertinent information in reading passages."[22]

In December 2009, ETS announced plans to move forward with significant revisions to the GRE in 2011.[23] Changes include a new 130–170 scoring scale, the elimination of certain question types such as antonyms and analogies, the addition of an online calculator, and the elimination of the CAT format of question-by-question adjustment, in favor of a section by section adjustment.[24]

On August 1, 2011, the Revised GRE General Test replaced the previous GRE General Test. ETS described the revised GRE as better by design, providing a better test-taking experience; the new question types in the revised format are intended to test the skills needed in graduate and business school programs.[25] In July 2012, ETS introduced ScoreSelect, an option that allows test takers to choose which of their scores to report.[26]

Before October 2002


The earliest versions of the GRE tested only for verbal and quantitative ability. For a number of years before October 2002, the GRE had a separate Analytical Ability section which tested candidates on logical and analytical reasoning abilities. This section was replaced by the Analytical Writing Assessment.[27][28]

Structure


The computer-based GRE General Test consists of six sections. The first section is always the analytical writing section, involving separately timed issue and argument tasks. The next five sections consist of two verbal reasoning sections, two quantitative reasoning sections, and either an experimental or a research section; these five sections may occur in any order. The experimental section does not count towards the final score but is not distinguished from the scored sections. Unlike the question-by-question computer-adaptive test used before August 2011, the GRE General Test is a multistage test: the examinee's performance on earlier sections determines the difficulty of subsequent sections. This format allows the examinee to move freely back and forth between questions within each section, and the testing software allows the user to "mark" questions within each section for later review if time remains. The entire testing procedure lasts about 3 hours 45 minutes.[29][30] One-minute breaks are offered after each section, with a 10-minute break after the third section.
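The section-level adaptation described above can be sketched as a simple routing rule. The threshold values and difficulty tiers below are illustrative assumptions; ETS does not publish its actual routing rules.

```python
# Illustrative sketch of section-level ("multistage") adaptation:
# performance on the first scored section selects the difficulty tier
# of the second. All thresholds here are hypothetical.

def route_second_section(first_section_correct: int, total: int = 20) -> str:
    """Pick the difficulty tier of the second section from the
    number of correct answers on the first (assumed cutoffs)."""
    fraction = first_section_correct / total
    if fraction >= 0.7:
        return "hard"
    elif fraction >= 0.4:
        return "medium"
    return "easy"

print(route_second_section(15))  # a strong first section routes to a harder second
```

The final scaled score would then depend both on the number of correct answers and on which tier was administered, which is why a harder second section is desirable despite feeling more difficult.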

The paper-based GRE General Test also consists of six sections. The analytical writing is split up into two sections, one section for each issue and argument task. The next four sections consist of two verbal and two quantitative sections in varying order. There is no experimental section on the paper-based test.

Verbal section


The computer-based verbal sections assess reading comprehension, critical reasoning, and vocabulary usage. The verbal test is scored on a scale of 130–170, in 1-point increments. (Before August 2011, the scale was 200–800, in 10-point increments.) In a typical examination, each verbal section consists of 20 questions to be completed in 30 minutes.[29] Each verbal section consists of about 6 text completion, 4 sentence equivalence, and 10 critical reading questions. The changes in 2011 included a reduced emphasis on rote vocabulary knowledge and the elimination of antonyms and analogies. Text completion items replaced sentence completions, and new reading question types allowing the selection of multiple answers were added.

Quantitative section


The computer-based quantitative sections assess knowledge and reasoning skills taught in most mathematics and statistics courses in secondary schools.[31] The quantitative test is scored on a scale of 130–170, in 1-point increments. (Before August 2011, the scale was 200–800, in 10-point increments.) In a typical examination, each quantitative section consists of 20 questions to be completed in 35 minutes.[29] Each quantitative section consists of about 8 quantitative comparison, 9 problem solving, and 3 data interpretation questions. The changes in 2011 included the addition of numeric entry items, which require the examinee to fill in the blank, and multiple-choice items that require the examinee to select multiple correct responses.[32]

Analytical writing section


The analytical writing section consists of two different essays, an "issue task" and an "argument task". The writing section is graded on a scale of 0–6, in half-point increments. The essays are written on a computer using a word processing program specifically designed by ETS; the program allows only basic computer functions and does not contain a spell-checker or other advanced features. Each essay is scored by at least two readers on a six-point holistic scale. If the two scores are within one point, the average of the scores is taken; if the two scores differ by more than a point, a third reader examines the response.
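The two-reader rule above can be written out directly. The source does not specify how the third reading is combined into a final score, so averaging it with the nearer of the first two scores is an assumption made here for illustration.

```python
def essay_score(r1, r2, r3=None):
    """Combine holistic essay scores on the 0-6 half-point scale.
    Scores within one point are averaged; a larger gap requires a
    third reading. How the third score is combined is not specified
    by the source; averaging it with the nearer of the first two
    scores is an assumption."""
    if abs(r1 - r2) <= 1.0:
        return (r1 + r2) / 2
    if r3 is None:
        raise ValueError("discrepant scores require a third reader")
    nearer = min(r1, r2, key=lambda r: abs(r - r3))
    return (nearer + r3) / 2

print(essay_score(4.0, 4.5))       # within one point -> simple average
print(essay_score(3.0, 5.0, 4.5))  # discrepant -> third reading resolves
```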

Issue Task


The test taker is given 30 minutes to write an essay about a selected topic.[33] Issue topics are selected from a pool of questions, which the GRE Program has published in its entirety. Individuals preparing for the GRE may access the pool of tasks on the ETS website.[34]

Argument Task


The test taker will be given an argument (i.e. a series of facts and considerations leading to a conclusion) and asked to write an essay that critiques the argument. Test takers are asked to consider the argument's logic and to make suggestions about how to improve the logic of the argument. Test takers are expected to address the logical flaws of the argument and not provide a personal opinion on the subject. The time allotted for this essay is 30 minutes.[29] The Arguments are selected from a pool of topics, which the GRE Program has published in its entirety. Individuals preparing for the GRE may access the pool of tasks on the ETS website.[35]

Experimental section


The experimental section, which can be either verbal or quantitative, contains new questions ETS is considering for future use. Although the experimental section does not count towards the test-taker's score, it is unidentified and appears identical to the scored sections. Because test takers have no definite way of knowing which section is experimental, it is typically advised that test takers try their best and be focused on every section. Sometimes an identified research section at the end of the test is given instead of the experimental section.[36] There is no experimental section on the paper-based GRE.[37]

Scoring


An examinee can miss one or more questions on a multiple-choice section and still receive a perfect score of 170. Likewise, even if no question is answered correctly, 130 is the lowest possible score.[12] Verbal and quantitative reasoning scores are given in one-point increments, and analytical writing scores are given in half-point increments on a scale of 0 to 6.[38][39]

Scaled score percentiles


The percentiles for the current General Test and the concordance with the prior format[40] are as follows. According to interpretive data published by ETS, about 2 million people took the test between July 1, 2015 and June 30, 2018. The verbal section had a mean of 150.24 and a standard deviation of 8.44, the quantitative section a mean of 153.07 and a standard deviation of 9.24, and analytical writing a mean of 3.55 with a standard deviation of 0.86.[41]
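As a rough sanity check on the table that follows, the published means and standard deviations imply approximate percentiles under a normal approximation. This is an idealization for illustration only; actual GRE percentiles are derived from the empirical score distribution, so the values differ somewhat from the tabulated ones.

```python
from statistics import NormalDist

# Means and standard deviations from ETS interpretive data
# (July 2015 - June 30 2018), as quoted in the text.
verbal = NormalDist(mu=150.24, sigma=8.44)
quant = NormalDist(mu=153.07, sigma=9.24)

# Approximate percentile of a scaled score of 160 on each section,
# assuming (only approximately true) normally distributed scores.
print(round(verbal.cdf(160) * 100))
print(round(quant.cdf(160) * 100))
```

The normal approximation lands within a few points of the tabulated percentiles; the gap reflects skew and ceiling effects in the real score distribution.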

Scaled score Verbal reasoning percentile Verbal prior scale Quantitative reasoning percentile Quantitative prior scale
170 99 760–800 96 800
169 99 740–750 94 800
168 98 720–730 92 800
167 98 710 89 800
166 97 700 87 800
165 96 680–690 85 790
164 94 660–670 83 790
163 92 650 80 780
162 90 630–640 78 770
161 88 620 75 770
160 85 600–610 72 760
159 82 590 69 750
158 79 570–580 65 740
157 75 560 62 730
156 72 540–550 59 720
155 67 530 55 700–710
154 63 510–520 51 690
153 59 500 48 680
152 53 480–490 44 660–670
151 50 460–470 40 640–650
150 45 450 36 630
149 40 430–440 33 610–620
148 36 420 29 590–600
147 32 410 25 570–580
146 28 390–400 22 550–560
145 25 380 18 530–540
144 22 370 15 500–520
143 19 350–360 13 480–490
142 16 340 11 460–470
141 14 330 9 430–450
140 11 320 7 400–420
139 9 310 6 380–390
138 8 300 4 350–370
137 6 290 3 330–340
136 5 280 3 300–320
135 4 280 2 280–290
134 3 270 1 260–270
133 2 260 1 240–250
132 2 250 <1 220–230
131 1 240 <1 200–210
130 <1 200–230 <1 200
Field-wise distribution of takers of GRE revised General Test.[42]
Analytical Writing score % Below
6 99
5.5 98
5 92
4.5 81
4 57
3.5 39
3 15
2.5 7
2 2
1.5 1
1 <1
0.5 <1

"Field-wise distribution" of test takers is "limited to those who earned their college degrees up to two years before the test date." ETS provides no score data for "non-traditional" students who have been out of school more than two years, although its own report "RR-99-16" indicated that 22% of all test takers in 1996 were over the age of 30.

GRE Subject Tests


In addition to the General Test, there are also three GRE Subject Tests testing knowledge in the specific areas of Mathematics, Physics, and Psychology. The length of each exam is 170 minutes.

In the past, subject tests were also offered in the areas of Computer Science, Economics, Revised Education, Engineering, English Literature, French, Geography, Geology, German, History, Music, Political Science, Sociology, Spanish, and Biochemistry, Cell and Molecular Biology.[43] In April 1998, the Revised Education and Political Science exams were discontinued. In April 2000, the History and Sociology exams were discontinued; with Economics, Engineering, Music, and Geology being discontinued in April 2001.[44] The Computer Science exam was discontinued after April 2013.[45] Biochemistry, Cell and Molecular Biology was discontinued in December 2016. The GRE Biology Test and GRE Literature in English Test tests were discontinued in May 2021.[46] The GRE Chemistry Test was discontinued in May 2023.[47]

Use in admissions


Many graduate schools in the United States require GRE results as part of the admissions process. The GRE is a standardized test intended to measure all graduates' abilities in tasks of general academic nature (regardless of their fields of specialization) and the extent to which undergraduate education has developed their verbal skills, quantitative skills, and abstract thinking.

In addition to GRE scores, admission to graduate schools depends on several other factors, such as GPA, letters of recommendation, and statements of purpose.[48] Furthermore, unlike other standardized admissions tests (such as the SAT, LSAT, and MCAT), the use and weight of GRE scores vary considerably not only from school to school, but also from department to department and program to program.[49] For instance, most business schools and economics programs require very high GRE or GMAT scores for entry, while engineering programs are known to allow more score variation. Liberal arts programs may only consider the applicant's verbal score, while mathematics and science programs may only consider quantitative ability. Some schools use the GRE in admissions decisions, but not in funding decisions; others use it for selection of scholarship and fellowship candidates, but not for admissions. In some cases, the GRE may be a general requirement for graduate admissions imposed by the university, while particular departments may not consider the scores at all.[50] Graduate schools will typically provide the average scores of previously admitted students and information about how the GRE is considered in admissions and funding decisions. In some cases, programs have hard cut off requirements for the GRE; for example, the Yale Economics PhD program requires a minimum quantitative score of 160 to apply.[51] The best way to ascertain how a particular school or program evaluates a GRE score in the admissions process is to contact the person in charge of graduate admissions for the specific program in question.

In February 2016, the University of Arizona James E. Rogers College of Law became the first law school to accept either the GRE or the Law School Admission Test (LSAT) from all applicants.[52][53][54] The college made the decision after conducting a study showing that the GRE is a valid and reliable predictor of students' first-term law school grades.

In the spring of 2017, Harvard Law School announced it was joining University of Arizona Law in accepting the GRE in addition to the LSAT from applicants to its three-year J.D. program.[55]

After a trial cycle of GRE–free admissions for Fall 2021, University of California, Berkeley voted to drop the GRE requirement for most graduate program admissions for Fall 2022 as well.[56] University of Michigan, Ann Arbor shortly followed announcing that they would drop the GRE requirements for Ph.D. admissions beginning with the 2022–23 admissions cycle.[57] By late 2022, the trend had intensified.[14]

MBA


GRE scores can also be used for admission to MBA programs at colleges outside the United States.

The GMAT (Graduate Management Admission Test) is a computer-adaptive standardized test in mathematics and the English language for measuring aptitude to succeed academically in graduate business studies. Business schools commonly use the test as one of many selection criteria for admission into an MBA program. Starting in 2009, many business schools began accepting the GRE in lieu of a GMAT score. Policies varied widely for several years. However, as of the 2014–2015 admissions season, most business schools accept both tests equally. Either a GMAT score or a GRE score can be submitted for an application to an MBA program. Business schools also accept either score for their other (non-MBA) Masters and Ph.D. programs.

The primary issue on which business school test acceptance policies vary is in how old a GRE or GMAT score can be before it is no longer accepted. The standard is that scores cannot be more than 5 years old (e.g., Wharton,[58] MIT Sloan,[59] Columbia Business School[60]).

Intellectual clubs


High GRE scores are accepted as qualifying evidence by some intellectual clubs, such as Intertel[61] and the Triple Nine Society,[62] with the minimum passing score depending on the selectivity of the society and the time period when the test was taken. Intertel accepts scores in the 99th percentile obtained after 2011. Mensa does not accept any score post-September 2001.[63]

Preparation


A variety of resources are available for those wishing to prepare for the GRE. ETS provides preparation software called PowerPrep, which contains two practice tests of retired questions, as well as further practice questions and review material. Since the software replicates both the test format and the questions used, it can be useful for predicting actual GRE scores. ETS does not license its past questions to any other company, making it the only source for official retired material. ETS used to publish the "BIG BOOK", which contained a number of actual GRE questions, but this publication has been discontinued. Several companies provide courses, books, and other unofficial preparation materials.

Some students taking the GRE use a test preparation company. Students who do not use these courses often rely on material from university text books, GRE preparation books, sample tests, and free web resources.

Testing locations


While the general and subject tests are held at many undergraduate institutions, the computer-based general test can be held in over 1,000 locations[64] with appropriate technological accommodations. In the United States, students in major cities or from large universities will usually find a nearby test center, while those in more isolated areas may have to travel a few hours to an urban or university location. Many industrialized countries also have test centers, but at times test-takers must cross country borders.

Criticism


Bias


Algorithmic bias


Critics have claimed that the computer-adaptive methodology may discourage some test takers, since question difficulty changes with performance.[65] For example, if test-takers are presented with remarkably easy questions halfway into the exam, they may infer that they are not performing well, which can affect their performance as the exam continues, even though perceived question difficulty is subjective. By contrast, standard testing methods may discourage students by giving them more difficult items earlier on.

Critics have also stated that the computer-adaptive method of placing more weight on the first several questions is biased against test takers who typically perform poorly at the beginning of a test due to stress or confusion before becoming more comfortable as the exam continues.[66] On the other hand, standard fixed-form tests could equally be said to be "biased" against students with less testing stamina since they would need to be approximately twice the length of an equivalent computer adaptive test to obtain a similar level of precision.[67]

Implicit bias


The GRE has also been subjected to the same racial bias criticisms that have been lodged against other admissions tests. In 1998, The Journal of Blacks in Higher Education noted that the mean score for black test-takers in 1996 was 389 on the verbal section, 409 on the quantitative section, and 423 on the analytic, while white test-takers averaged 496, 538, and 564, respectively.[68] The National Association of Test Directors Symposia in 2004 stated a belief that simple mean score differences may not constitute evidence of bias unless the populations are known to be equal in ability.[69] A more effective, accepted, and empirical approach is the analysis of differential test functioning, which examines the differences in item response theory curves for subgroups; the best approach for this is the DFIT framework.[70]
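Differential test functioning, mentioned above, compares item response theory curves fitted separately for each subgroup: if two examinees of equal ability have different probabilities of answering an item correctly depending on group membership, the item may be biased. A minimal sketch using the two-parameter logistic (2PL) model follows; the item parameters are hypothetical, not estimates from GRE data.

```python
import math

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: probability of a correct
    response at ability theta, given discrimination a and
    difficulty b."""
    return 1 / (1 + math.exp(-a * (theta - b)))

# Hypothetical parameters for one item, fitted separately per group.
# At the same ability level (theta = 0), differing curves would be
# evidence of differential item functioning.
p_group1 = icc_2pl(theta=0.0, a=1.2, b=-0.3)
p_group2 = icc_2pl(theta=0.0, a=1.2, b=0.4)
print(round(p_group1 - p_group2, 3))  # a nonzero gap at equal ability
```

Frameworks such as DFIT aggregate these per-item gaps across the whole test to quantify differential test functioning.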

Weak indicator of graduate school performance


The GREs are criticized for not being a true measure of whether a student will be successful in graduate school. Robert Sternberg (now of Cornell University;[71] working at Yale University at the time of the study), a long-time critic of modern intelligence testing in general, found the GRE general test was weakly predictive of success in graduate studies in psychology.[72] The strongest relationship was found for the now-defunct analytical portion of the exam.

The ETS published a report ("What is the Value of the GRE?") that points out the predictive value of the GRE on a student's index of success at the graduate level.[73] The problem with earlier studies is the statistical phenomenon of restriction of range: a correlation coefficient is sensitive to the range sampled for the test. Specifically, if only students accepted to graduate programs are studied (as in Sternberg & Williams and other research), the observed relationship is attenuated. Validity coefficients range from .30 to .45 between the GRE and both first-year and overall graduate GPA in ETS' study.[74]

Kaplan and Saccuzzo state that the criterion that the GRE best predicts is first-year grades in graduate school. However, this correlation is only in the high tens to low twenties. "If the test correlates with a criterion at the .4 level, then it accounts for 16% of the variability in that criterion, with the other 84% resulting from unknown factors and errors"[75] (p. 303). Graduate schools may be placing too much importance on standardized tests rather than on factors that more fully account for graduate school success, such as a thesis-requiring Honours degree, prior research experience, GPAs, or work experience. While graduate schools do consider these areas, many times schools will not consider applicants that score below a current score of roughly 314 (1301 prior score). Kaplan and Saccuzzo also state that "the GRE predict[s] neither clinical skill nor even the ability to solve real-world problems" (p. 303).
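Kaplan and Saccuzzo's point about variability is just the square of the correlation coefficient, the coefficient of determination:

```python
# Variance in a criterion explained by a predictor is r squared
# (the coefficient of determination).
def variance_explained(r):
    return r * r

r = 0.4
print(f"{variance_explained(r):.0%} of the variability explained, "
      f"{1 - variance_explained(r):.0%} from other factors and error")
```

So even a validity coefficient of .4, near the top of the reported range, leaves most of the variability in first-year grades unaccounted for.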

In 2007, a study by a university found a correlation of .30 to .45 between the GRE and both first year and overall graduate GPA. The correlation between GRE score and graduate school completion rates ranged from .11 (for the now defunct analytical section) to .39 (for the GRE subject test). Correlations with faculty ratings ranged from .35 to .50.[74]

Historical susceptibility to cheating


In May 1994, Kaplan, Inc warned ETS, in hearings before a New York legislative committee, that the small question pool available to the computer-adaptive test made it vulnerable to cheating. ETS assured investigators that it was using multiple sets of questions and that the test was secure. This was later discovered to be incorrect.[76]

In December 1994, prompted by student reports of recycled questions, then Director of GRE Programs for Kaplan, Inc and current CEO of Knewton, Jose Ferreira, led a team of 22 staff members deployed to 9 U.S. cities to take the exam. Kaplan, Inc then presented ETS with 150 questions, representing 70–80% of the GRE.[77] According to early news releases, ETS appeared grateful to Stanley H. Kaplan, Inc. for identifying the security problem. However, on December 31, ETS sued Kaplan, Inc. for violation of a federal electronic communications privacy act, copyright laws, breach of contract, fraud, and a confidentiality agreement signed by test-takers on test day.[78] On January 2, 1995, an agreement was reached out of court.

Additionally, in 1994, the scoring algorithm for the computer-adaptive form of the GRE was discovered to be insecure. ETS acknowledged that Kaplan, Inc employees, led by Jose Ferreira, reverse-engineered key features of the GRE scoring algorithms. The researchers found that a test taker's performance on the first few questions of the exam had a disproportionate effect on the test taker's final score. To preserve the integrity of scores, ETS adopted a more sophisticated scoring algorithm.

from Grokipedia

The Graduate Record Examinations (GRE) are a suite of standardized tests developed and administered by the Educational Testing Service to assess skills essential for success in graduate, business, and law programs worldwide. The primary GRE General Test evaluates verbal reasoning, quantitative reasoning, and analytical writing abilities through a computer-delivered format featuring multiple-choice and numeric-entry sections, with a shortened structure implemented in 2023 reducing the total testing time to under two hours. Complementing the General Test, GRE Subject Tests measure specialized knowledge in fields such as mathematics, physics, and psychology via discipline-specific questions.
Widely utilized by admissions committees to compare applicants from diverse backgrounds, the GRE provides a common metric amid varying undergraduate grading standards, though its scores are just one factor in holistic evaluations. Empirical meta-analyses indicate that GRE scores exhibit moderate predictive validity for graduate grade-point average (correlations typically ranging from 0.20 to 0.40) and degree completion, often improving when combined with undergraduate GPA, but with diminishing incremental value in some fields due to restricted score ranges and other predictors. The GRE has faced controversies over its fairness, with score disparities across socioeconomic, racial, and gender lines prompting debates on potential biases, despite psychometric studies affirming validity after controlling for prior achievement differences. Many graduate programs, particularly in the natural and social sciences, have de-emphasized or eliminated GRE requirements since the early 2020s, citing equity concerns and limited added predictive power beyond other admissions criteria, though retention advocates highlight its role in identifying high-potential candidates from underrepresented institutions.

History

Origins and Development (1930s–1960s)

The Graduate Record Examinations (GRE) originated in 1936 as an experimental initiative spearheaded by the graduate school deans of Harvard, Yale, Princeton, and Columbia, with support from the Carnegie Foundation for the Advancement of Teaching. The test aimed to provide a standardized assessment of applicants' verbal and quantitative aptitudes, enabling graduate admissions committees to compare candidates from diverse undergraduate institutions on a common basis, rather than relying solely on subjective evaluations or institutional prestige. Initially limited to verbal and quantitative sections, the GRE was administered experimentally at the founding universities to evaluate its efficacy in predicting graduate success. Adoption expanded gradually in the late 1930s and 1940s, with the University of Wisconsin becoming the first public institution to mandate the GRE for admissions in 1938. Subsequent integrations included the University of Iowa in 1940, Texas Tech University in 1942, and Michigan State University in 1943, reflecting growing recognition of the test's utility amid increasing graduate program competition. By the mid-1940s, the GRE had transitioned from a pilot project under Carnegie oversight to a broader admissions tool, particularly benefiting veterans pursuing advanced degrees through the G.I. Bill, whose non-traditional academic paths necessitated objective metrics for evaluation. In 1948, administration of the GRE shifted to the newly formed Educational Testing Service (ETS), established through the merger of the testing activities of the American Council on Education, the Carnegie Foundation for the Advancement of Teaching, and the College Entrance Examination Board to centralize and professionalize standardized testing operations. That year, approximately 45,000 examinees applied to over 500 institutions using GRE scores, marking a surge in scale and standardization.
Throughout the 1950s and into the 1960s, the test solidified its role in graduate admissions, with ETS refining scoring and delivery to accommodate rising enrollment; for instance, subject-specific area tests emerged alongside the general exam to gauge disciplinary knowledge, though the core verbal and quantitative components remained foundational. This period saw the GRE evolve from an experiment into a cornerstone of merit-based selection, despite debates over its predictive value relative to undergraduate grades.

Expansion and Early Standardization (1970s–1990s)

During the 1970s, the GRE experienced steady but fluctuating participation, with approximately 265,000 test takers in 1970 rising to a peak of over 300,000 by 1974 before stabilizing around 280,000–300,000 annually through the decade, reflecting growing reliance on the exam by U.S. graduate programs for admissions decisions amid post-World War II expansion in higher education access. This period marked ETS's emergence as the dominant U.S. testing organization by the mid-1970s, driven by the GRE's role in standardizing applicant evaluation across disciplines, though graduate enrollment growth began decelerating due to economic factors. Standardization efforts intensified with the October 1977 restructuring of the GRE Aptitude Test, which introduced an experimental analytical ability measure alongside verbal and quantitative sections to better assess reasoning skills, comprising two 25-minute analytical sections in a format totaling about 150 minutes plus a variable section. Further refinements in October 1981 revised the analytical measure to emphasize reasoning items (75% analytical reasoning, such as logical diagrams and analyses, and 25% logical reasoning), shifted to a seven-section format of 30 minutes each (two verbal, two quantitative, two analytical, one variable), eliminated formula scoring in favor of rights-only scoring to remove the guessing penalty, and increased annual disclosed test editions to enhance transparency and accountability. The establishment of the Validity Study Service in 1979 supported graduate departments in evaluating GRE correlations with academic performance, fostering empirical validation. By the 1980s and 1990s, test taker volumes rebounded and expanded significantly, dipping to 256,000 in 1982 before climbing to 344,000 in 1990 and peaking at 411,000 in 1992, coinciding with broader graduate enrollment growth and ETS research initiatives like the 1978–1987 examinee files for longitudinal analysis.
A pivotal standardization advance occurred in 1992 with the launch of the first computerized GRE, transitioning from paper-based to digital delivery at testing centers, followed in 1993 by computer-adaptive testing (CAT) that adjusted question difficulty in real time based on performance, improving efficiency and score precision while maintaining comparability to prior formats through equating studies. These developments, grounded in ETS research on item response theory, aimed to mitigate practice effects and ensure scores reflected underlying abilities more reliably across diverse applicant pools.

Major Revisions Before 2002

In 1977, the Educational Testing Service (ETS) introduced the Analytical Ability section to the GRE General Test, marking the first major revision since the test's early development; this section assessed logical reasoning and problem-solving skills through questions involving analysis of arguments and data interpretation, supplementing the existing Verbal and Quantitative sections. The GRE transitioned to a computer-based format in 1992, with the full implementation of computer-adaptive testing (CAT) for the General Test occurring in October 1993; under CAT, the difficulty of subsequent questions adjusted based on the test-taker's performance on prior ones, aiming to increase efficiency, precision in scoring, and security against cheating compared to paper-based versions. In October 1999, ETS added the Writing Assessment to the GRE General Test, consisting of two essays—one analyzing an issue and one critiquing an argument—to evaluate critical thinking and written communication skills; this 60-minute component was initially positioned after the other sections but represented a shift toward assessing graduate-level competencies beyond multiple-choice formats.

2002 and 2011 Overhauls

In October 2002, ETS restructured the GRE General Test by eliminating the Analytical section, which had assessed logical and analytical reasoning through argument analysis and graphical data interpretation questions, and fully integrating the Analytical Writing Assessment as a core component. The writing assessment, initially launched as a separate computer-based test in October 1999, now consisted of two 30-minute tasks—an "Analyze an Issue" prompt requiring test-takers to develop a position on a general topic with supporting reasons, and an "Analyze an Argument" task evaluating the logical soundness of a provided argument—positioned at the beginning of the exam to prioritize written communication skills deemed essential for graduate-level work. This overhaul shortened the overall test duration compared to prior formats while shifting emphasis from discrete reasoning exercises to extended analytical writing, with scores reported on a 0-6 scale in half-point increments alongside verbal and quantitative results. The 2002 changes addressed criticisms of the Analytical section's validity in predicting graduate success, as ETS research indicated writing proficiency correlated more strongly with academic performance than isolated logic puzzles, though some admissions committees noted challenges in equating scores across the transition period. Verbal and quantitative sections retained their computer-adaptive format, with 200-800 scoring scales, but the removal of analytical reasoning questions streamlined content to focus on reading comprehension, vocabulary in context, quantitative concepts, and problem-solving, reducing potential overlap with specialized subject tests. On August 1, 2011, ETS implemented the GRE revised General Test, marking the most extensive format update since the shift to computer-based delivery in 1993, with revisions announced in 2006 following validity studies and pilot testing.
Key modifications included a transition from question-level computer-adaptive testing to section-level adaptivity, where performance on the first verbal or quantitative section determined the difficulty of the second, allowing for more experimental questions and a fixed structure of two scored sections per measure plus one unscored research section; the test length extended to about 3 hours and 45 minutes, incorporating breaks and an on-screen calculator for all quantitative tasks. Verbal reasoning introduced new question types such as sentence equivalence (selecting two words to form synonymous completions) and text completion (filling blanks in passages), emphasizing contextual reasoning over rote vocabulary, while quantitative reasoning added data interpretation sets and real-world problem-solving scenarios drawn from undergraduate curricula. Scoring scales were overhauled to 130-170 for verbal and quantitative in one-point increments, replacing the 200-800 scale to enhance score interpretability and reduce ceiling effects observed in high-achieving populations; analytical writing remained 0-6 but with refined rubrics prioritizing evidence-based argumentation. ETS justified these alterations through empirical data showing improved alignment with graduate admissions criteria, including stronger correlations with first-year GPA via expanded validity studies, though independent analyses questioned whether the added length and adaptive shifts disproportionately affected test-takers under time pressure without proportionally boosting predictive accuracy. The revisions also permitted score viewing on test day, with preview and cancellation options, aiming to increase applicant control amid competition from alternatives like the GMAT.

2023 Shortening and Modern Updates

In September 2023, the Educational Testing Service (ETS) implemented a major revision to the GRE General Test, reducing its duration from 3 hours and 45 minutes to 1 hour and 58 minutes. This shorter format took effect for all test administrations starting September 22, 2023, with registration opening concurrently. The changes aimed to alleviate test-taker fatigue and improve the overall candidate experience, drawing from ETS research and feedback indicating that excessive length contributed to diminished performance in later sections. Key modifications included streamlining the Analytical Writing section by eliminating the "Analyze an Argument" task, retaining only the "Analyze an Issue" task allotted 30 minutes. Both Verbal and Quantitative Reasoning sections saw a reduction in questions from 40 total each (across two scored sections of 20 questions) to 27 total each, distributed as 12 questions in the first section and 15 in the second. Time allocations adjusted accordingly: Verbal Reasoning totals 41 minutes (18 minutes for the first section, 23 for the second), while Quantitative Reasoning totals 47 minutes (21 minutes for the first, 26 for the second). Additionally, ETS removed the previously included unscored experimental section, which had been used for pretesting new questions without affecting scores.
Section | Previous Format (Pre-September 2023) | Shorter Format (Post-September 2023)
Analytical Writing | Two tasks (Issue and Argument), 60 minutes total | One task (Issue), 30 minutes
Verbal Reasoning | 40 questions, 60 minutes (two sections of 20 questions, 30 minutes each) | 27 questions, 41 minutes (Section 1: 12 questions/18 min; Section 2: 15 questions/23 min)
Quantitative Reasoning | 40 questions, 70 minutes (two sections of 20 questions, ~35 minutes each) | 27 questions, 47 minutes (Section 1: 12 questions/21 min; Section 2: 15 questions/26 min)
Unscored Section | Present (Verbal or Quantitative) | Removed
Score reporting accelerated to 8-10 days from the prior 10-15 days, enabling quicker application submissions. ETS maintained that the revisions preserve the test's validity and reliability for predicting graduate program success, with no alterations to question types, scoring scales (130-170 for Verbal and Quantitative, 0-6 for Analytical Writing), or percentile rankings. Initial data post-implementation suggested score distributions comparable to the legacy test, supporting ETS's claims of equivalence despite the condensed format.

Test Structure and Content

Overview of the General Test

The Graduate Record Examinations (GRE) General Test, administered by the Educational Testing Service (ETS), serves as a standardized assessment for admissions to graduate, business, and law programs worldwide. It evaluates essential skills including verbal reasoning, quantitative reasoning, and analytical writing, which are deemed critical for academic success at the graduate level. The test is accepted by thousands of institutions and provides a common metric for comparing applicants' readiness beyond undergraduate grades. In its current format, effective September 22, 2023, the GRE General Test lasts approximately 1 hour and 58 minutes, a reduction from nearly 4 hours in prior versions to enhance test-taker experience and efficiency without compromising validity. The structure comprises one Analytical Writing section (1 task, 30 minutes), two Verbal Reasoning sections (12 questions in 18 minutes for the first, 15 questions in 23 minutes for the second), and two Quantitative Reasoning sections (12 questions in 21 minutes for the first, 15 questions in 26 minutes for the second). It employs section-level adaptive testing, where the difficulty of the second Verbal and Quantitative sections adjusts based on first-section performance, alongside features permitting answer review, skipping, and changes within sections. The test is primarily computer-delivered at authorized centers or via supervised at-home options, with an on-screen calculator available for Quantitative Reasoning. Scores are reported for Verbal Reasoning (130-170 scale), Quantitative Reasoning (130-170 scale), and Analytical Writing (0-6 scale), enabling graduate programs to gauge applicants' abilities in analyzing arguments, solving mathematical problems, and articulating complex ideas coherently. This format prioritizes measuring real-world graduate competencies over rote knowledge, with content drawn from high school-level mathematics and general academic vocabulary.
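The section timings listed above can be tallied to confirm the stated total of 1 hour and 58 minutes; a minimal sketch (the section labels are descriptive, not official ETS names):

```python
# Section timings of the shortened GRE General Test (minutes), as described above.
sections = {
    "Analytical Writing (1 task)": 30,
    "Verbal Reasoning 1 (12 questions)": 18,
    "Verbal Reasoning 2 (15 questions)": 23,
    "Quantitative Reasoning 1 (12 questions)": 21,
    "Quantitative Reasoning 2 (15 questions)": 26,
}

total = sum(sections.values())
hours, minutes = divmod(total, 60)
print(f"Total testing time: {hours} h {minutes} min")  # → Total testing time: 1 h 58 min
```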

Verbal Reasoning Section

The Verbal Reasoning section of the GRE General Test assesses the test-taker's ability to analyze and evaluate written material, synthesize information from it, analyze relationships among parts of sentences, and recognize relationships among words and concepts. This measure emphasizes skills in understanding discourse, reasoning from incomplete data, identifying assumptions and perspectives, and evaluating arguments, which are intended to reflect capabilities useful in graduate-level academic work. Since the shortened GRE format implemented on September 22, 2023, the section consists of two scored sections that may appear in any order after the Analytical Writing section. Section 1 includes 12 questions to be completed in 18 minutes, while Section 2 has 15 questions in 23 minutes, for a total of 27 questions over 41 minutes. The section is section-level adaptive, meaning the difficulty of Section 2 adjusts based on performance in Section 1; test-takers can skip questions, review, and change answers within each section before time expires. The section features three question types: Reading Comprehension, Text Completion, and Sentence Equivalence, with approximately half the questions based on passages and the other half on discrete sentence- or paragraph-level items. Reading Comprehension questions require test-takers to read passages of one or more paragraphs (typically 100-450 words) drawn from the humanities, social sciences, or natural sciences, then answer multiple-choice questions testing comprehension of main ideas, inferences, author's attitudes, logical structure, and supporting details. Passages may include arguments where questions probe assumptions or evaluate evidence. Text Completion tasks present a sentence or short passage (up to five sentences) with one to three blanks, requiring selection of the word or phrase that best fits each blank in context from the provided options, ensuring coherent and precise completion without relying on vocabulary in isolation.
Single-blank items have five choices, while multiple-blank ones provide separate options per blank, demanding integrated reasoning across the text. Sentence Equivalence questions involve a sentence with one blank and six answer choices; test-takers select two words that both fit the blank and produce sentences with equivalent meanings, emphasizing synonymy in context rather than identical wording. Correct answers must form a pair yielding logically similar outcomes. Scores for Verbal Reasoning range from 130 to 170 in one-point increments, derived from the number of correct responses across both sections via equating to account for minor variations in difficulty, with no penalty for guessing. Raw scores are scaled to ensure comparability across test administrations.

Quantitative Reasoning Section

The Quantitative Reasoning section of the GRE General Test evaluates test-takers' ability to understand, interpret, and analyze quantitative information, as well as to apply basic mathematical concepts to solve problems using arithmetic, algebra, geometry, and data analysis. This section emphasizes quantitative reasoning and problem-solving skills developed at the high school level, without requiring advanced topics such as calculus, trigonometry, or geometric proofs. The section consists of two adaptive subsections, with the difficulty of the second determined by performance on the first. The first subsection includes 12 questions to be completed in 21 minutes, while the second has 15 questions allotted 26 minutes, for a total of 27 questions and 47 minutes. These subsections may appear in any order following the Analytical Writing section, alongside the Verbal Reasoning sections. Scores for Quantitative Reasoning range from 130 to 170 in one-point increments, derived from the total number of correct responses across both subsections, adjusted for the adaptive format. Content is drawn from four primary areas, aligned with standard high school mathematics curricula:
  • Arithmetic: Covers properties of integers (including divisibility, factorization, primes, remainders, and even/odd distinctions), arithmetic operations, exponents and roots, estimation techniques, percentages, ratios and proportions, rates, and basic sequences of numbers.
  • Algebra: Includes operations with exponents, algebraic manipulation and factoring, functions and their representations, solving equations and inequalities, and elements of coordinate geometry such as graphing lines, intercepts, slopes, and equations of lines.
  • Geometry: Encompasses properties of lines, angles, triangles, quadrilaterals, circles, and other polygons; three-dimensional figures; perimeter, area, and volume calculations; and the Pythagorean theorem applied to right triangles.
  • Data Analysis: Focuses on descriptive statistics (mean, median, mode, range, standard deviation, quartiles, and percentiles), interpretation of data from graphs, tables, and charts, probability concepts, and counting methods such as permutations and combinations; inferential statistics are not tested.
Questions appear in three main formats: Quantitative Comparison tasks, which require determining the relationship (greater, less, equal, or indeterminate) between two quantities; Problem Solving items, available as single- or multiple-answer multiple choice or numeric entry requiring a calculated response; and Data Interpretation questions, often presented in sets involving graphical or tabular data. Nonstandard mathematical symbols or terminology are defined within individual questions, adhering to high school-level conventions. An on-screen basic calculator is provided for use throughout the section, featuring addition, subtraction, multiplication, division, square root, and memory functions, though it is intended primarily for computations too time-consuming for mental arithmetic, such as lengthy divisions or square roots. Test-takers are advised against relying on it for straightforward calculations to maintain pacing. The section's design prioritizes real-world quantitative modeling and reasoning over rote computation, reflecting skills relevant to graduate-level quantitative analysis.
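The counting methods named under Data Analysis, permutations and combinations, map directly onto Python's standard library; a small illustration:

```python
import math

# Combinations: ways to choose a 2-person committee from 5 candidates (order irrelevant).
print(math.comb(5, 2))  # → 10

# Permutations: ways to award gold and silver among 5 runners (order matters).
print(math.perm(5, 2))  # → 20

# The identity C(n, k) = P(n, k) / k! ties the two together.
assert math.comb(5, 2) == math.perm(5, 2) // math.factorial(2)
```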

Analytical Writing Section

The Analytical Writing section of the GRE General Test evaluates test-takers' critical thinking and analytical writing abilities, specifically their capacity to articulate and support complex ideas, construct and evaluate arguments, and sustain a focused, coherent discussion. Introduced as a separate writing assessment in 1999 and integrated into the General Test in 2002, this section requires responses to prompts drawn from a predefined pool of topics maintained by the Educational Testing Service (ETS). Prior to September 22, 2023, it comprised two separately timed tasks—an "Analyze an Issue" task and an "Analyze an Argument" task—each allocated 30 minutes, totaling 60 minutes; the revision shortened the overall test duration by eliminating the Argument task, reducing the section to a single 30-minute Issue task while preserving its assessment focus. In the Analyze an Issue task, test-takers must respond to a prompt presenting a claim or statement of opinion on a broad topic, such as education, technology, or society, by developing a position supported by reasons and examples drawn from reading, observation, or personal experience. Effective responses demonstrate clear reasoning, relevant evidence, control of language, and awareness of counterarguments, with ETS emphasizing depth of analysis over length or creativity. The task pool includes over 100 prompts categorized by themes, ensuring variety while testing general analytical skills applicable across graduate disciplines; ETS rotates prompts to maintain security and fairness. Scoring occurs on a 0–6 scale in half-point increments, with ETS employing two independent, trained raters who evaluate responses holistically based on criteria including task development, organization, language use, and critical reasoning; if scores differ by more than one point, a third rater resolves the discrepancy. The reported score is the average of the two closest ratings, rounded to the nearest half-point, with inter-rater reliability consistently above 0.90 as reported in ETS technical documentation, indicating high consistency.
Empirical studies, including meta-analyses of GRE data from thousands of graduate students, show the Analytical Writing score correlates modestly with first-year graduate GPA (r ≈ 0.20–0.30) and writing-intensive outcomes, though less strongly than Verbal or Quantitative scores for broader academic performance, underscoring its targeted utility for assessing communication skills amid debates over standardized testing's overall predictive power.

GRE Subject Tests

Available Subjects and Purpose

The GRE Subject Tests evaluate examinees' mastery of undergraduate-level content in designated academic disciplines, serving as a standardized measure of specialized knowledge to aid admissions decisions. Unlike the GRE General Test, which assesses broad verbal and quantitative reasoning skills, these subject-specific assessments gauge achievement in fields demanding rigorous foundational training, thereby helping admissions committees differentiate candidates with comparable general qualifications but varying depths of domain expertise. As of September 2023, the available GRE Subject Tests are limited to Mathematics, Physics, and Psychology, following the discontinuation of the Chemistry test after its final paper-based offering in April 2023. The Mathematics Test, lasting 2 hours and 50 minutes, covers topics such as calculus (approximately 50% of questions), algebra (25%), and additional areas like discrete mathematics and statistics; the Physics Test (2 hours) emphasizes classical mechanics (20%), electromagnetism (18%), and quantum mechanics (12%); and the Psychology Test (2 hours) includes biological bases of behavior (roughly 30%), cognitive and developmental aspects (27%), and social psychology (15%). These tests are particularly relevant for applicants to doctoral or research-oriented master's programs in the respective fields, where they provide objective evidence of subject-matter proficiency that complements undergraduate transcripts, recommendations, and research experience. ETS data indicate that high Subject Test scores correlate with stronger performance in graduate coursework, though their use has declined amid broader shifts toward holistic admissions criteria. Departments in mathematics, physics, and psychology often recommend or require them for international applicants or those from less familiar institutions to verify equivalence of preparation.

Structure and Recent Changes

The GRE Subject Tests assess undergraduate-level achievement in specialized fields through multiple-choice questions drawn from typical coursework. Each test yields a single scaled score from 200 to 990, reported in 10-point increments, based on the number of correct answers with no penalty for guessing. As of 2025, the available tests are in Mathematics, Physics, and Psychology; Biology, Chemistry, and Literature in English were discontinued in prior years, with Chemistry ending after its April 2023 administration. The Mathematics Test comprises approximately 66 questions covering calculus (50%), algebra (25%), and additional topics such as discrete mathematics, linear algebra, probability, and real analysis (25%), administered in a 2-hour-50-minute format. The Physics Test includes about 100 questions on classical mechanics, electromagnetism, quantum mechanics, thermodynamics, and other areas, while the Psychology Test features roughly 205 questions spanning biological, cognitive, social, developmental, and clinical domains, among others. All tests emphasize factual recall, application, and interpretation over rote computation. In 2023, ETS shifted the Subject Tests from infrequent paper-based administrations to computer-delivered formats offered twice monthly in September, October, and April at testing centers worldwide, enhancing accessibility and reducing wait times for scores. Concurrently, the Physics and Psychology Tests were shortened to 2 hours each from prior lengths exceeding 2.5 hours, with adjusted question counts to maintain content coverage while streamlining the exam. The Mathematics Test retained its 2-hour-50-minute duration. These modifications followed ETS's broader efforts to modernize assessments, including discontinuations of less-utilized subjects amid declining demand from graduate programs.

Scoring and Percentiles

Score Ranges and Calculation

The GRE General Test produces three separate scores: one for Verbal Reasoning (130–170, in 1-point increments), one for Quantitative Reasoning (130–170, in 1-point increments), and one for Analytical Writing (0–6, in half-point increments). These ranges have remained consistent following the test's shortening in September 2023, which reduced the number of questions but preserved the scaling methodology. Scores are reported approximately 8–10 days after the computer-delivered test or 5 weeks after the paper-delivered version, with test takers able to view them in their ETS account. Verbal and Quantitative scores derive from the total number of correct answers across two sections per measure, with no penalty for unanswered or incorrect questions. ETS does not publish official raw-to-scaled score conversion tables, particularly for Verbal Reasoning; instead, raw scores are converted to the 130–170 scale through equating, which accounts for variations in section difficulty and the section-level adaptive test format to ensure comparable performance across test versions. The shortened GRE (since September 2023) includes 27 Verbal Reasoning questions total, but no fixed conversion table is provided, with the latest official information for 2025–26 confirming the equating process without specific raw-to-scaled mappings. The test employs section-level adaptive delivery: performance on the first section determines the difficulty of the second, but the final scaled score combines raw performance from both via statistical equating to adjust for minor variations in test difficulty and ensure comparability across administrations. Equating uses item response theory and historical data from representative test-taker samples to map raw scores (correct answers) to the 130–170 scale, preventing inflation or deflation due to form differences.
Section | Score Range | Scoring Increment | Basis of Calculation
Verbal Reasoning | 130–170 | 1 point | Total correct answers across adaptive sections, equated to scaled score
Quantitative Reasoning | 130–170 | 1 point | Total correct answers across adaptive sections, equated to scaled score
Analytical Writing | 0–6 | 0.5 points | Score for the single Issue task, evaluated by trained human raters and ETS's e-rater system for consistency
Analytical Writing scores reflect the evaluation of the "Analyze an Issue" task (the "Analyze an Argument" task was eliminated in September 2023), scored 0–6 by at least one trained human rater and ETS's automated e-rater scoring engine; discrepancies trigger a second human review to ensure reliability. ETS does not compute or report an official total score, though the sum of Verbal and Quantitative scores (260–340) is sometimes referenced informally by admissions programs; decisions emphasize section-specific performance over aggregates. The GRE Subject Tests are scored from 200–990 in 10-point increments, based solely on total correct answers equated across forms; the Biology, Chemistry, and Literature in English Subject Tests have been discontinued and are no longer offered.
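Because ETS publishes no conversion tables, any raw-to-scaled mapping can only be illustrative; the sketch below uses a plain linear interpolation from 27 raw points onto the 130–170 scale, whereas real equating is a form-specific statistical adjustment:

```python
def scaled_score(raw_correct: int, max_raw: int = 27,
                 lo: int = 130, hi: int = 170) -> int:
    """Illustrative linear raw-to-scaled mapping for one GRE measure.
    ETS's actual equating adjusts for form difficulty and the adaptive
    second section, so real conversions differ from this straight line."""
    if not 0 <= raw_correct <= max_raw:
        raise ValueError("raw score out of range")
    return lo + round(raw_correct * (hi - lo) / max_raw)

print(scaled_score(0))   # → 130
print(scaled_score(27))  # → 170
print(scaled_score(20))  # → 160  (20 * 40/27 ≈ 29.6, rounded to 30)
```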

Percentile Rankings and Interpretation

Percentile rankings for the GRE General Test indicate the percentage of test takers who obtained scores below a given scaled score, providing a norm-referenced measure of relative performance among recent examinees. These ranks are calculated separately for Verbal Reasoning and Quantitative Reasoning (scaled 130–170) and Analytical Writing (scaled 0–6 in half-point increments), with no official composite percentile across sections. The Educational Testing Service (ETS) derives percentiles from the scores of all individuals who tested between July 1, 2021, and June 30, 2024, updating them periodically to reflect current test-taker populations. Interpretation of percentiles emphasizes comparative standing rather than absolute proficiency, as scaled scores are equated across test forms to ensure fairness, but distributions differ by section due to varying difficulty perceptions and applicant pools. For instance, a 160 in Verbal Reasoning aligns with roughly the 86th percentile, outperforming 86% of test takers, whereas the same score in Quantitative Reasoning corresponds to a substantially lower percentile, reflecting stronger overall quantitative performance among examinees. Analytical Writing percentiles are similarly relative; a score of 4.0 typically falls near the middle of the distribution, though this measure receives less emphasis in admissions due to its subjective elements and lower predictive correlations. ETS further contextualizes percentiles by intended graduate major field, revealing score distributions that vary significantly across disciplines, as applicants self-select into fields aligning with their strengths. In broad categories, mean Quantitative scores exceed 160 (within the top quartile overall) for engineering and physical sciences, while Verbal means surpass 155 for arts and humanities, enabling programs to benchmark applicants against field-specific norms rather than global averages.
Admissions committees interpret high percentiles (e.g., 75th or above) as competitive signals of readiness, particularly when aligned with program priorities—Quantitative for STEM, Verbal for social sciences—but stress holistic review, as percentiles alone do not capture domain-specific skills or undergraduate preparation. Over-reliance on percentiles can overlook cohort effects, such as inflation from test-prep or demographic shifts, though ETS validity studies affirm their stability for relative comparisons.
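The percentile definition used here, the percentage of test takers scoring below a given scaled score, is straightforward to compute against any score distribution; a toy sketch with invented data:

```python
def percentile_rank(score: int, all_scores: list[int]) -> float:
    """Percentage of test takers who scored strictly below `score`."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

# Toy cohort of ten Verbal scores (illustrative, not real ETS data).
cohort = [145, 148, 150, 152, 155, 157, 160, 162, 165, 168]
print(percentile_rank(160, cohort))  # → 60.0  (6 of 10 scored below 160)
```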

Validity and Reliability Evidence

The reliability of GRE General Test scores is evidenced by high internal consistency estimates derived from item response theory for Verbal Reasoning (0.87) and Quantitative Reasoning (0.94), with Analytical Writing showing moderate reliability (0.76) based on test-retest analyses of task ratings, where scores agree 97% of the time across repeat test-takers from July 2021 to June 2024. Test-retest correlations further support score stability, with Quantitative Reasoning at 0.91, Verbal Reasoning at 0.82, and Analytical Writing at 0.78 in a study of over one million examinees. Standard errors of measurement indicate low expected variability, at 3.2 points for Verbal, 2.6 for Quantitative, and 0.41 for Analytical Writing on their respective scales, based on data from September 2023 to June 2025 test forms. These metrics, calculated by ETS using representative test editions, demonstrate consistent measurement of intended constructs across administrations, though Analytical Writing's lower reliability reflects the subjective elements of essay scoring. Evidence for the validity of GRE scores encompasses content validity through ETS's alignment of test content with graduate-level skills via expert reviews and empirical analyses, ensuring sections measure verbal reasoning, quantitative reasoning, and analytical writing as defined. Predictive validity for graduate success is moderate, with a 2001 meta-analysis of over 82,000 students across 1,700 samples finding GRE Verbal and Quantitative scores correlating 0.30–0.45 with first-year graduate GPA, incrementally beyond undergraduate GPA, and positively with degree attainment and research productivity, though some confidence intervals overlapped zero for broader outcomes. ETS reports confirm these patterns in recent data, with correlations such as 0.22 for Verbal with cumulative GPA in health professions master's programs, 0.37 for Quantitative in MBA programs, and 0.27 for Analytical Writing in biomedical doctoral programs, holding across STEM, business, and other fields in samples exceeding 25,000 students.
However, a 2023 meta-analysis of GRE predictive effects across GPA, comprehensive exams, and other outcomes found 61.6% of reported associations nonsignificant, suggesting attenuated validity in contemporary contexts, potentially due to range restriction in applicant pools or shifts in admissions practices emphasizing non-cognitive factors. ETS counters that such findings overlook incremental value over undergraduate GPA and comparability to predictors in other domains, like medical licensing exams. Domain-specific evidence varies, with stronger predictions in quantitative-heavy fields (r ≈ 0.40–0.50 for first-year grades), but weaker or inconsistent links to long-term metrics like publications in doctoral programs. Overall, while GRE scores provide generalizable but modest criterion-related validity—typical for single admissions predictors—independent academic critiques, including program-level studies of post-test-optional policies, question their standalone utility amid rising emphasis on holistic review.
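The standard errors of measurement quoted above imply an uncertainty band around any observed score; a sketch of a conventional 95% band (observed score ± 1.96 × SEM, using the SEM figures cited in this section):

```python
# SEMs as cited for the GRE measures (scale points).
SEM = {"Verbal": 3.2, "Quantitative": 2.6, "Analytical Writing": 0.41}

def score_band(measure: str, observed: float, z: float = 1.96):
    """Conventional 95% band: observed score ± z * SEM for the measure."""
    half_width = z * SEM[measure]
    return round(observed - half_width, 2), round(observed + half_width, 2)

print(score_band("Verbal", 160))        # → (153.73, 166.27)
print(score_band("Quantitative", 160))  # → (154.9, 165.1)
```

Two observed Verbal scores a point or two apart therefore overlap heavily within measurement error, one reason ETS cautions against over-interpreting small score differences.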

Admissions Use and Impact

Role in Graduate Program Selection

The Graduate Record Examination (GRE) serves as a standardized tool in the graduate admissions process, enabling committees to assess applicants' aptitude in verbal reasoning, quantitative reasoning, and analytical writing relative to a national pool, thereby supplementing undergraduate grade point averages (GPAs), recommendation letters, and statements of purpose that may vary in rigor across institutions. Admissions panels at thousands of programs worldwide, including master's, doctoral, and professional programs, incorporate GRE scores to identify candidates prepared for graduate-level demands, with the test's design allowing cross-institutional comparability not afforded by institution-specific metrics like GPA alone. In practice, many programs apply GRE scores as an initial filter, setting minimum thresholds to narrow large applicant pools; for example, competitive disciplines often prioritize scores above the 50th percentile to rank candidates, particularly when resources for detailed review are limited. Experimental studies simulating decisions within specific programs demonstrate that declining GRE scores reduce the probability of admission offers, with sharper effects on funded positions, as committees weigh scores against holistic factors but retain them for predictive utility in high-stakes selections. Quantitative sections hold particular weight in STEM fields, while verbal and writing scores influence humanities and social sciences admissions, though overall composite scores frequently proxy broader cognitive and preparatory capabilities. Requirements for GRE submission have declined amid post-2019 shifts toward test-optional policies, driven by accessibility concerns during the COVID-19 pandemic and subsequent equity debates; for instance, one survey of doctoral programs found the mandate for GRE Quantitative and Verbal scores drop from 45% to 14% by 2023, with similar reductions in other fields (over 50% not requiring scores by 2020).
Despite this, programs retaining or recommending the GRE, especially top-tier ones, continue using submitted scores to differentiate applicants, as voluntary high performers signal stronger candidacy in otherwise comparable profiles, countering the dilution of standardization in optional eras. By 2025, while waivers proliferate at institutions like MIT and Stanford, the GRE remains integral for fellowship and international applicant evaluations, preserving its role in merit-based selection where empirical comparability outweighs policy relaxations.

Empirical Predictive Validity for Success

The Graduate Record Examination (GRE) demonstrates modest empirical predictive validity for certain graduate outcomes, particularly first-year and overall graduate grade point average (GPA), though undergraduate GPA (UGPA) often provides comparable or superior prediction on its own. A seminal meta-analysis by Kuncel, Hezlett, and Ones (2001), aggregating data from over 100 studies involving tens of thousands of graduate students, reported validity coefficients of 0.31 for combined GRE Verbal and Quantitative scores predicting overall graduate GPA, and 0.38 for first-year graduate GPA when combined with UGPA; GRE scores added incremental validity beyond UGPA alone, with correlations strengthening in more selective programs and quantitative fields. Similar patterns emerged for comprehensive exam performance (r ≈ 0.29) and faculty ratings (r ≈ 0.24), but predictions weakened for research productivity and degree completion. Field-specific validity varies, with GRE Quantitative scores showing stronger correlations (r > 0.30) for success in STEM disciplines, where mathematical aptitude causally underpins coursework and research demands, compared to fields where Verbal scores align more closely but still modestly (r ≈ 0.20–0.25). For instance, in some programs GRE scores explained 10–15% of variance in graduate GPA for U.S. students, outperforming international cohorts where confounding factors complicate results. ETS-sponsored studies, such as those across Florida institutions involving over 25,000 students, corroborate these findings, with GRE scores predicting master's and doctoral GPAs (r = 0.22–0.28) and showing higher utility in combination with UGPA for degree attainment. However, these institutional analyses, while empirically grounded, warrant scrutiny given ETS's vested interest in affirming test utility. More recent evidence indicates attenuating validity, potentially due to applicant pool changes, test revisions, or shifts toward holistic admissions.
A 2023 meta-analysis by Sawczuk et al., reviewing 79 studies and over 200 effect sizes, found 61.6% of GRE-outcome associations nonsignificant, with average correlations below 0.20 for GPA and near zero for attrition and completion rates; validity appeared higher before 2010 but declined thereafter, challenging claims of robust ongoing prediction. Domain-specific inquiries reinforce this: in physics PhD programs, UGPA predicted graduate GPA (r = 0.35) and completion more effectively than GRE scores (r < 0.20), attributing the GRE's edge primarily to quantitative subsections in analytical tasks. For broader success metrics, GRE correlations hover around 0.15–0.25, often rivaled by noncognitive factors such as letters of recommendation, underscoring that while the GRE captures abilities relevant to academic performance, it explains limited variance (typically under 10%) in multifaceted graduate outcomes. The adoption of the GRE in graduate admissions has historically been widespread, with most U.S. programs requiring scores as a standardized measure of applicant ability prior to the late 2010s. However, beginning around 2018 and accelerating after the COVID-19 pandemic disrupted testing access in 2020, a significant shift occurred toward test-optional or test-free policies, driven by concerns over equity, access, and the exam's incremental predictive value beyond undergraduate GPA. This trend resulted in a marked decline in required usage across many fields, particularly in the humanities and social sciences, where programs increasingly viewed the GRE as non-essential for holistic review. Empirical data from field-specific surveys illustrate the scale of this change. In one survey of doctoral programs, the proportion requiring GRE Quantitative and Verbal scores fell from 45% to 14%, while GRE Writing requirements dropped from 21% to 8%, reflecting policies adopted during and after the pandemic. Similarly, in biomedical Ph.D.
programs, only 3% required GRE General Test scores by 2022, compared to 84% four years earlier, with an additional 5% strongly recommending them. By the 2021–2022 application cycle, content analyses confirmed that a majority of programs in several fields had waived the GRE requirement or made it optional, with elimination rates of 44% by 2019 in one surveyed discipline and 35% in another. These shifts correlated with a broader decline in test volume, as GRE examinees dropped from 532,826 in 2018 to 319,101 by the 2022–2023 testing year. In contrast, adoption trends vary by discipline, with business and some STEM programs showing resilience or growth in GRE usage. For top U.S. MBA programs, GRE submissions rose to 37.2% of total scores in 2025, up from 31.1% two years prior, as applicants increasingly substituted it for the declining GMAT. ETS responded to these dynamics in September 2023 by shortening the GRE General Test from nearly four hours to two, aiming to boost test-taker volume amid optional policies. As of 2025, many programs maintain permanent test-optional stances, particularly in non-quantitative fields, and no widespread reversion to mandatory requirements has occurred, though some institutions periodically reassess policies based on enrollment data. This fragmentation underscores ongoing debates over standardization versus flexibility in admissions.

Preparation and Administration

Official Preparation Resources

Educational Testing Service (ETS), the administrator of the GRE General Test, provides official preparation materials designed to reflect the test's content, format, and difficulty, emphasizing authenticity over third-party approximations. These resources include free tools for initial familiarization and paid options for deeper practice, updated to align with the shorter GRE format introduced in September 2023, which reduced the test length to under two hours while maintaining section-adaptive scoring. Free resources accessible via an ETS account include two full-length POWERPREP Online practice tests that simulate the computer-delivered test environment, including timed sections for Analytical Writing, Verbal Reasoning, and Quantitative Reasoning, with immediate unscored results for non-adaptive previews. Additional no-cost materials encompass hundreds of sample questions with explanations, a math review PDF covering key quantitative concepts, and instructional videos on test strategies, question types, and scoring. ETS also offers downloadable resources tied to video presentations, such as tips for each section and general test-taking advice. Paid official resources extend practice through POWERPREP PLUS Online, which provides two additional full-length adaptive practice tests with real scoring, performance insights, and explanations, priced at $39.95 each as of 2025. ETS publishes printed and e-book preparation guides, including The Official Guide to the GRE General Test, Third Edition (containing over 300 authentic questions across all sections with detailed answer explanations) and specialized volumes like Official GRE Verbal Reasoning Practice Questions and Official GRE Quantitative Reasoning Practice Questions, each offering targeted drills from past exams. For Analytical Writing, the ScoreItNow! service allows submission of two essays for e-rater automated scoring and feedback, costing $20.
Accessible formats, such as large-print or braille versions of books and tests, are available upon request for candidates with disabilities. These materials prioritize empirical alignment with actual test items, though ETS notes that practice scores may vary from official results due to adaptive elements and test-day conditions.

Third-Party and Self-Study Methods

Third-party preparation resources for the GRE include commercial courses and materials from providers such as Kaplan, Manhattan Prep, and Princeton Review, which offer structured lessons, video tutorials, and practice questions beyond official ETS content. Kaplan's programs emphasize adaptive study plans and expert instruction, with users reporting familiarity with test formats through its practice tools. Other providers offer online video lessons and thousands of practice questions, often praised in 2024 reviews for affordability and accessibility. Books from third-party publishers supplement official guides with additional drills; Manhattan Prep's 5 lb. Book of GRE Practice Problems contains over 1,800 questions focused on quantitative and verbal reasoning, designed for targeted skill-building. Princeton Review's GRE Premium Prep, 2024 edition, includes five practice tests and strategies for all sections, updated to align with the shorter GRE format introduced in 2023. These resources aim to replicate test conditions, though their questions are not drawn from actual exams, potentially limiting predictive accuracy compared to ETS materials. Self-study methods rely on disciplined, independent routines using free or low-cost materials and have proven viable for motivated test-takers. Effective strategies involve daily practice, such as one to two hours on five days per week, focusing on weak areas identified through diagnostic tests, alongside full-length simulations to build stamina. Resources like ETS's free POWERPREP tests and Khan Academy's math modules support self-paced review, with vocabulary building via flashcards and regular reading of analytical prose enhancing verbal scores. Empirical data on self-study outcomes indicate score gains of 5–8 points on average with consistent effort, though intensive structured practice yields higher improvements for those mastering content gaps.
Analysis of over 100,000 students' online self-directed sessions for GRE preparation highlights that consistent practice and error review correlate with better retention, underscoring the causal role of active engagement over passive review. Success rates vary by baseline ability; high achievers often reach 165+ in quantitative sections through self-study by prioritizing quant drills early. Combining self-study with select third-party elements, such as flashcards or drills, optimizes outcomes without full-course costs, as evidenced by reports of 15+ point jumps from targeted practice. However, undisciplined self-study risks incomplete coverage, as studies show unmastered skills persist without systematic tracking. Test-takers should verify resource alignment with the 2023 GRE revisions, which reduced section lengths and emphasized real-world reasoning.

Testing Formats, Locations, and Accommodations

The GRE General Test is administered exclusively in a computer-delivered format, featuring section-level adaptive testing in which the difficulty of subsequent sections adjusts based on performance in prior ones. Since the shortened version introduced in September 2023, the test lasts 1 hour and 58 minutes, comprising one Analytical Writing task (30 minutes), two Verbal Reasoning sections (41 minutes total: 18 minutes for the first section with 12 questions and 23 minutes for the second with 15), and two Quantitative Reasoning sections (47 minutes total: 21 minutes for the first section with 12 questions and 26 minutes for the second with 15). This format applies identically to test center and at-home administrations, with no scheduled breaks and restrictions on unscheduled ones to maintain security. Testing occurs at over 1,000 authorized test centers in more than 160 countries, available on a near-continuous basis for computer-delivered sessions, subject to local availability and scheduling. An at-home option, launched in response to the COVID-19 pandemic and retained permanently, allows candidates to take the test on a personal desktop or laptop computer in a private, secure location meeting ETS technical requirements, including a compatible operating system, a webcam, a microphone, and sufficient bandwidth for online proctoring by a human monitor. At-home tests require a pre-check of equipment and environment, with the session beginning 15 minutes early for identity verification and setup, and are offered wherever the test center version is available, excluding regions with restrictions. Accommodations for test takers with documented disabilities or health-related needs are provided through ETS Disability Services, which requires submission of a Testing Accommodations Request Form with supporting documentation at least four weeks before the desired test date for approval.
Eligible modifications include extended time (e.g., 50% or 100% extra per section), additional breaks outside testing time for medical needs, screen magnification, color contrast adjustments, screen readers such as JAWS with or without braille displays, and separate testing rooms; these apply to both test center and at-home formats, though at-home proctoring must accommodate the specific setup. Approval decisions hinge on evidence that the condition substantially limits performance under standard testing conditions, with ETS prioritizing consistency across administrations while adhering to legal standards under the Americans with Disabilities Act.
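As a quick arithmetic check, the measure-level timings quoted above sum to the stated total duration:

```python
# Measure-level timings (minutes) of the shortened GRE General Test.
sections = {
    "Analytical Writing (one task)": 30,
    "Verbal Reasoning (two sections)": 41,
    "Quantitative Reasoning (two sections)": 47,
}

total = sum(sections.values())      # 118 minutes
hours, minutes = divmod(total, 60)  # 1 hour, 58 minutes
print(f"Total testing time: {hours} h {minutes} min")
```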

Criticisms and Counterarguments

Allegations of Cultural or Group Bias

Critics have alleged that the GRE exhibits cultural or group bias, primarily citing persistent score disparities across racial, ethnic, and socioeconomic lines, which they attribute to test content favoring Western, middle-class experiences. For example, U.S. test-taker data from ETS indicate that White and Asian examinees consistently outperform Black and Hispanic groups on both the Verbal and Quantitative Reasoning sections, with gaps often approximating one standard deviation, such as average Quantitative scores around 160 for Asian examinees versus lower averages for other groups. These differences have prompted claims that the test disadvantages underrepresented minorities, leading some graduate programs to adopt test-optional policies to boost diversity, as evidenced by increased enrollment of such applicants without commensurate declines in program outcomes. ETS counters these allegations through multifaceted fairness protocols, including content reviews by diverse expert panels to eliminate culturally loaded items and statistical detection of differential item functioning (DIF), which identifies items on which groups of equivalent overall ability perform differently. DIF analyses, conducted using methods such as the Mantel-Haenszel procedure on large GRE item pools, have revealed minimal substantive bias; for instance, while some verbal items show small DIF favoring White over Black test-takers due to factors like vocabulary familiarity, these effects are statistically detectable but do not materially impact overall score validity or equity. ETS maintains that such procedures ensure the test measures its intended constructs (verbal reasoning, quantitative reasoning, and analytical writing) independent of group membership. Empirical research supports limited evidence of inherent cultural bias, as GRE scores demonstrate comparable predictive validity for graduate GPA and degree completion across racial/ethnic groups, suggesting differences reflect underlying ability variances rather than test artifacts.
Longitudinal tracking of SAT-to-GRE transitions shows stable subgroup gaps, attributable more to preparation disparities and cognitive skill distributions than to item-specific unfairness. Nonetheless, allegations persist in academic discourse, often emphasizing socioeconomic confounders, though randomized studies including GRE scores for underrepresented applicants yield no systematic reviewer bias against them. Critics' interpretations may overlook that standardized tests like the GRE, by design, abstract from specific cultural knowledge to assess general reasoning, potentially amplifying real group differences in those traits.
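The Mantel-Haenszel DIF screen described above can be sketched concisely: examinees are stratified by overall ability, a 2×2 correct/incorrect table is tallied per stratum for the reference and focal groups, and a common odds ratio summarizes whether the item favors either group at matched ability. The counts below are invented for illustration; the delta-scale conversion MH D-DIF = −2.35 ln(OR) and the |MH D-DIF| < 1 "negligible" threshold follow the convention ETS is generally reported to use.

```python
import math

# Toy item-response counts per ability stratum:
# (reference correct, reference incorrect, focal correct, focal incorrect)
strata = [
    (40, 10, 35, 15),  # high-ability stratum
    (30, 20, 28, 22),  # middle stratum
    (20, 30, 18, 32),  # low-ability stratum
]

# Mantel-Haenszel common odds ratio across strata.
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
or_mh = num / den

# Convert to the ETS delta scale; |MH D-DIF| < 1 is conventionally
# classified as negligible DIF (category A).
mh_d_dif = -2.35 * math.log(or_mh)
negligible = abs(mh_d_dif) < 1.0

print(f"OR_MH = {or_mh:.3f}, MH D-DIF = {mh_d_dif:.2f}, negligible: {negligible}")
```

An odds ratio near 1 (delta near 0) means the groups perform comparably on the item once overall ability is held constant.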

Debates on Predictive Limitations

Critics of the GRE argue that its predictive power for graduate school success is limited, particularly beyond first-year grade point average (GPA), with correlations often modest or nonsignificant for broader outcomes like degree completion and research productivity. A 2023 meta-analysis of 128 studies encompassing over 500,000 graduate students found that GRE scores explained minimal variance in non-GPA metrics, such as comprehensive exam passage (only 4.3% of effects significant) and degree completion (nonsignificant in 71.4% of cases), concluding that the test's utility diminishes for long-term success indicators. This contrasts with earlier findings, like a 2001 meta-analysis reporting GRE correlations with graduate GPA ranging from 0.22 to 0.39, though even there undergraduate GPA (UGPA) outperformed the GRE alone, and combined predictors explained only 6–10% of variance in performance. Field-specific studies highlight variability, fueling debates on generalizability; for instance, in biomedical PhD programs, an analysis of 1,685 students at eight institutions showed GRE scores failed to predict PhD completion, qualifying exam passage, time to defense, or publication output, with no significant associations after controlling for UGPA. Similarly, a 2019 study of physics PhD applicants reported null correlations across the GRE score range for research performance, prompting arguments that the test measures test-taking skills rather than domain-specific aptitude or perseverance. ETS counters that the GRE adds incremental validity to UGPA, predicting first-year GPA with correlations up to 0.35 in aggregated data, and emphasizes its role in identifying cognitive readiness amid diverse applicant pools.
The incremental value of the GRE over UGPA alone remains contested, as adding GRE scores boosts explained variance by just 0.04–0.06 in many models, leading some programs to question its necessity given preparation costs and its potential to deter qualified candidates without enhancing selection accuracy. Proponents, drawing from ETS validity research, note stronger predictions in quantitative fields (e.g., correlations of 0.40+ for GRE-Q with STEM GPA), arguing that limitations stem from outcome measurement issues rather than the test itself, while critics attribute overreliance on GPA as a proxy to causal oversimplification, ignoring unmeasured factors like program fit. These debates have informed test-optional policies, with empirical reviews indicating no decline in program quality post-adoption, though causal attribution is challenged by selection effects.
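The contested increment of 0.04 to 0.06 in explained variance follows directly from the standard two-predictor regression identity R² = (r1² + r2² − 2·r1·r2·r12)/(1 − r12²). The sketch below plugs in illustrative correlations (assumed for the example, not drawn from any cited study) and recovers a ΔR² in exactly that range.

```python
# Illustrative correlations (assumed, not taken from any cited study):
r_ugpa = 0.30  # UGPA with graduate GPA
r_gre = 0.30   # GRE with graduate GPA
r_12 = 0.35    # UGPA with GRE (predictor intercorrelation)

# Variance explained by UGPA alone.
r2_ugpa = r_ugpa ** 2

# Variance explained by both predictors (two-predictor OLS identity).
r2_both = (r_ugpa**2 + r_gre**2 - 2 * r_ugpa * r_gre * r_12) / (1 - r_12**2)

# Incremental validity of adding GRE to UGPA.
delta_r2 = r2_both - r2_ugpa

print(f"R2 UGPA alone = {r2_ugpa:.3f}, both = {r2_both:.3f}, delta = {delta_r2:.3f}")
```

Because the two predictors are themselves correlated, the GRE's unique contribution is much smaller than its zero-order r² would suggest.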

Historical Cheating Vulnerabilities

The introduction of the computer-adaptive format for the GRE in 1994 created early vulnerabilities to question memorization and unauthorized sharing, as test-takers could recall and disseminate items from the fixed question pool used in adaptive testing. In 1994, ETS sued Kaplan, a test-preparation company, alleging that Kaplan had systematically sent undercover employees to take the exam and memorize up to 200 questions, which were then incorporated into Kaplan's study materials, compromising test integrity. ETS settled the lawsuit out of court in 1998 and responded by temporarily suspending the full computer-adaptive GRE and later limiting its scope to reduce reuse risks, highlighting how the adaptive model's reliance on a finite item bank enabled such breaches. International testing centers emerged as persistent weak points in the 2000s, with widespread proxy-taking and score inflation reported in high-demand regions. In 2002, ETS identified cheating affecting GRE scores from over 40 countries, particularly flagging irregularities from China, South Korea, and Taiwan, where organized groups allegedly facilitated impersonation or leaked materials, leading to the invalidation of thousands of scores. A notable case involved the New Oriental School in Beijing, accused in 2001 of accessing and using unreleased GRE questions for coaching, underscoring vulnerabilities in overseas proctoring where local complicity, such as employees providing answers, was documented. Proxy schemes intensified in the 2010s, often involving "gunmen" hired to impersonate test-takers at international centers. In 2015, U.S. authorities charged 15 Chinese nationals in a conspiracy to defraud ETS by arranging proxies for GRE and TOEFL exams between 2011 and 2015, using fake identities and coaching services to secure fraudulent scores for Chinese students seeking U.S. graduate admissions.
These operations exploited lax identity verification and monitoring at foreign sites, with ETS collaborating with law enforcement to prosecute participants, though enforcement challenges persisted due to jurisdictional limits. The shift to at-home GRE testing during the COVID-19 pandemic from 2020 onward amplified risks, enabling software-assisted cheating and real-time leaks. Reports indicated over 10,000 potential international cheaters using hidden aids or proctor bypasses, including organized services guaranteeing 330+ totals and, in 2024, withdrawals of MBA acceptances by business schools after ETS flagged virtual test irregularities. Leaked questions appeared on online platforms, prompting ETS to enhance AI monitoring and biometric checks, though critics noted that high-stakes demand in developing markets continued to drive sophisticated circumventions beyond domestic U.S. controls.

Evidence-Based Defenses and Reforms

A comprehensive meta-analysis by Kuncel, Hezlett, and Ones (2001) synthesized data from over 100 studies involving more than 80,000 graduate students, finding that GRE scores exhibit generalizable predictive validity for first-year graduate GPA (r = 0.34 for GRE Verbal and Quantitative combined with undergraduate GPA), overall graduate GPA (r = 0.22), and faculty ratings of student success, outperforming undergraduate GPA alone in many contexts. This validity extends to comprehensive exam performance and degree completion, with GRE scores incrementally improving prediction beyond undergraduate grades by 5–10% in regression models across disciplines. More recent analyses reinforce these findings; a 2023 meta-analysis of GRE predictive validity across graduate outcomes, including GPA and comprehensive exams, reported modest but statistically significant correlations (r ≈ 0.20–0.30), consistent with standardized tests' typical effect sizes for academic performance, and emphasized that the GRE adds unique variance not captured by holistic factors like letters of recommendation. ETS has documented over 1,500 validity studies since the test's inception, collectively affirming the GRE's role in forecasting valued outcomes such as persistence to degree and writing proficiency, with correlations holding across diverse programs despite range restriction in high-achieving applicant pools. To counter allegations of cultural or group bias, ETS employs rigorous differential item functioning (DIF) analyses on every test form, screening items for performance discrepancies across demographic groups after controlling for ability; these procedures, detailed in ETS guidelines, have consistently identified minimal DIF in GRE items, ensuring scores reflect construct-relevant skills rather than extraneous factors. For instance, fairness reviews of Analytical Writing prompts revealed low DIF values across gender and racial/ethnic comparisons, with no prompts exhibiting substantial DIF that warranted removal.
In response to criticisms of length, accessibility, and security vulnerabilities, ETS implemented the shorter GRE General Test on September 22, 2023, reducing administration time from approximately 3 hours 45 minutes to 1 hour 58 minutes by eliminating the unscored section and shortening others, while preserving score reliability and comparability through pre-launch equating studies. To address security risks, particularly in at-home testing, ETS enhanced proctoring with live monitoring, AI-driven anomaly detection, and post-exam score validation, resulting in thousands of annual investigations and cancellations for confirmed irregularities, thereby upholding score integrity without compromising access. These reforms reflect data-driven adjustments informed by psychometric evaluations, maintaining the test's utility amid evolving admissions practices.
