Standardized test
A standardized test is a test that is administered and scored in a consistent, or "standard", manner. Standardized tests are designed so that the questions, administration, scoring, and interpretations are consistent and predetermined.[1]
Any test in which the same test is given in the same manner to all test takers, and graded in the same manner for everyone, is a standardized test. Standardized tests do not need to be high-stakes tests, time-limited tests, multiple-choice tests, academic tests, or tests given to large numbers of test takers. They can take various forms, including written, oral, or practical tests, and may evaluate many subjects, including driving, creativity, athleticism, personality, professional ethics, and academic skills.
The opposite of standardized testing is non-standardized testing, in which either significantly different tests are given to different test takers, or the same test is assigned under significantly different conditions or evaluated differently.
Most everyday quizzes and tests taken by students during school meet the definition of a standardized test: everyone in the class takes the same test, at the same time, under the same circumstances, and all of the tests are graded by their teacher in the same way. However, the term standardized test is most commonly used to refer to tests that are given to larger groups, such as a test taken by all adults who wish to obtain a license for a particular occupation, or by all students of a certain age. Most standardized tests are summative assessments (assessments that measure the learning of the participants at the end of an instructional unit).
Because everyone gets the same test and the same grading system, standardized tests are often perceived as being fairer than non-standardized tests. Such tests are often thought of as more objective than a system in which some test takers get an easier test and others get a more difficult test. Standardized tests are designed to permit reliable comparison of outcomes across all test takers because everyone is taking the same test and being graded the same way.[2]
Definition
The definition of a standardized test has changed somewhat over time.[3] In 1960, standardized tests were defined as those in which the conditions and content were equal for everyone taking the test, regardless of when, where, or by whom the test was given or graded. Standardized tests have a consistent, uniform method for scoring.[4] This means that all test takers who answer a test question in the same way will get the same score for that question. The purpose of this standardization is to make sure that the scores reliably indicate the abilities or skills being measured, and not other variables.[3]
By the beginning of the 21st century, the focus shifted away from a strict sameness of conditions towards equal fairness of testing conditions.[5] For example, a test taker with a broken wrist might write more slowly because of the injury, and it would be more equitable, and produce a more reliable understanding of the test taker's actual knowledge, if that person were given a few more minutes to write down the answers to a time-limited test. Changing the testing conditions in a way that improves fairness with respect to a permanent or temporary disability, but without undermining the main point of the assessment, is called an accommodation. However, if the accommodation undermines the purpose of the test, then the allowances would become a modification of the content, and no longer a standardized test.
| Subject | Format | Standardized test | Non-standardized test |
|---|---|---|---|
| History | Oral | Each student is given the same questions, and their answers are scored in the same way. | The teacher asks each student a different question. Some questions are harder than others. |
| Driving | Practical skills | Each driving student is asked to do the same things, and they are all evaluated by the same standards. | Some driving students have to drive on a highway, but others only have to drive slowly around the block. One examiner takes points off for "bad attitude", but other examiners do not. |
| Mathematics | Written | Each student is given the same questions, and their answers are scored in the same way. | The teacher gives different questions to different students: an easy test for poor students, another test for most students, and a difficult test for the best students. |
| Music | Audition | All musicians play the same piece of music. The judges agree in advance how much factors such as timing, expression, and musicality count for. | Each musician chooses a different piece of music to play. Judges choose the musician they like best. One judge gives extra points to musicians who wear a costume. |
History
China
The earliest evidence of standardized testing comes from China, during the Han dynasty,[6] where the imperial examinations covered the Six Arts, which included music, archery, horsemanship, arithmetic, writing, and knowledge of the rituals and ceremonies of both public and private life. These exams were used to select employees for the state bureaucracy.
Later, sections on military strategies, civil law, revenue and taxation, agriculture and geography were added to the testing. In this form, the examinations were institutionalized for more than a millennium.[citation needed]
Today, standardized testing remains widely used, most notably in the Gaokao system.
UK
Standardized testing was introduced into Europe in the early 19th century, modeled on the Chinese mandarin examinations,[7] through the advocacy of British colonial administrators, the most "persistent" of whom was Britain's consul in Guangzhou, China, Thomas Taylor Meadows.[7] Meadows warned that the British Empire would collapse if standardized testing was not implemented throughout the empire immediately.[7]
Prior to its adoption, standardized testing was not traditionally a part of Western pedagogy. Drawing on the skeptical and open-ended tradition of debate inherited from Ancient Greece, Western academia favored non-standardized assessments based on essays written by students. Because of this, the first European implementation of standardized testing did not occur in Europe proper, but in British India.[8] Inspired by the Chinese use of standardized testing, in the early 19th century, British company managers used standardized exams for hiring and promotions to keep the process fair and free from corruption or favoritism.[8] The practice was adopted in mainland Britain in the late 19th century. The parliamentary debates that ensued made many references to the "Chinese mandarin system".[7]
Standardized testing spread from Britain not only throughout the British Commonwealth, but to Europe and then America.[7] Its spread was fueled by the Industrial Revolution: compulsory education laws increased the number of school students, which decreased the use of open-ended assessments, which were harder to mass-produce and assess objectively.

Standardized tests such as the War Office Selection Boards were developed for the British Army during World War II to choose candidates for officer training and other tasks.[9] The tests looked at soldiers' mental abilities, mechanical skills, ability to work with others, and other qualities. Previous methods had suffered from bias and resulted in choosing the wrong soldiers for officer training.[9]
United States
Standardized testing has been a part of United States education since the 19th century, but the widespread reliance on standardized testing in schools in the US is largely a 20th-century phenomenon.
Immigration in the mid-19th century contributed to the growth of standardized tests in the United States.[10] Standardized tests were administered to people entering the US to assess their fitness for social roles and to assign social power and status.[11]
The College Entrance Examination Board began offering standardized testing for university and college admission in 1901, covering nine subjects. The test was implemented with the idea of creating standardized admissions for elite universities in the northeastern United States. It was also originally intended for top boarding schools, in order to align the curriculum between schools.[12] Originally, the test consisted of essays and was not intended for widespread testing.[12]
During World War I, the Army Alpha and Beta tests were developed to help place new recruits in appropriate assignments based upon their assessed intelligence levels.[13] The first edition of a modern standardized IQ test, the Stanford–Binet Intelligence Test, appeared in 1916. The College Board then designed the SAT (Scholastic Aptitude Test) in 1926. The first SAT was based on the Army IQ tests, with the goal of measuring the test taker's intelligence, problem-solving skills, and critical thinking.[14] In 1959, Everett Lindquist offered the ACT (American College Testing) for the first time.[15] As of 2020, the ACT includes four main sections with multiple-choice questions testing English, mathematics, reading, and science, plus an optional writing section.[16]
Individual states began testing large numbers of children and teenagers through the public school systems in the 1970s. By the 1980s, American schools were administering assessments nationwide.[17] In 2012, 45 states spent an average of $27 per student, and $669 million overall, on large-scale annual academic tests.[18] However, indirect costs, such as paying teachers to prepare students for the tests and class time spent administering the tests, significantly exceed the direct cost of the tests themselves.[18]
The need for the federal government to make meaningful comparisons across a highly decentralized (locally controlled) public education system encouraged the use of large-scale standardized testing. The Elementary and Secondary Education Act of 1965 required some standardized testing in public schools. The No Child Left Behind Act of 2001 further tied some types of public school funding to the results of standardized testing. Under these federal laws, the school curriculum was still set by each state, but the federal government required states to assess, with standardized tests, how well schools and teachers were teaching the state-chosen material.[19] The results of large-scale standardized tests were used to allocate funds and other resources to schools, and to close poorly performing schools. The Every Student Succeeds Act replaced the NCLB at the end of 2015.[20] By that point, these large-scale standardized tests had become controversial in the United States not necessarily because all the students were taking the same tests and being scored the same way, but because they had become high-stakes tests for the school systems and teachers.[21]
In recent years, many US universities and colleges have dropped the requirement that applicants submit standardized test scores.[22]
Australia
The Australian National Assessment Program – Literacy and Numeracy (NAPLAN) standardized testing began in 2008, administered by the Australian Curriculum, Assessment and Reporting Authority, an independent authority "responsible for the development of a national curriculum, a national assessment program and a national data collection and reporting program that supports 21st century learning for all Australian students".[23]
The program requires all students in Years 3, 5, 7 and 9 in Australian schools to be assessed using national tests. The subjects covered by these tests are Reading, Writing, Language Conventions (Spelling, Grammar and Punctuation) and Numeracy.
The program provides student-level reports designed to enable parents to see their child's progress over the course of their schooling life, and to help teachers improve individual learning opportunities for their students. Student- and school-level data are also provided to the appropriate school system on the understanding that they can be used to target specific supports and resources to the schools that need them most. Teachers and schools use this information, in conjunction with other information, to determine how well their students are performing and to identify any areas of need requiring assistance.
The concept of testing student achievement is not new, although the current Australian approach may be said to have its origins in the educational policy structures of both the US and the UK. There are several key differences between the Australian NAPLAN and the UK and US strategies. Under current federal government policy, Australian schools found to be under-performing are offered financial assistance.
Colombia
In 1968, the Colombian Institute for the Evaluation of Education (ICFES) was founded to regulate higher education. It implemented the public evaluation system for authorizing the operation of, and granting legal recognition to, institutions and university programs.
Colombia has several standardized tests that assess the level of education in the country. These exams are performed by the ICFES.
Students in third grade, fifth grade and ninth grade take the "Saber 3°5°9°" exam. This test is currently administered by computer, in both controlled and census samples.
Upon leaving high school, students take the "Saber 11" exam, which allows them to enter the country's various universities. Students studying at home can take this exam to graduate from high school and obtain their degree certificate and diploma.
Students leaving university must take the "Saber Pro" exam.
Canada
Canada leaves education, and standardized testing as a result, under the jurisdiction of the provinces. Each province has its own province-wide standardized testing regime, ranging from no required standardized tests for students in Saskatchewan to exams worth 40% of final high school grades in Newfoundland and Labrador.[24]
Design and scoring
Design
[edit]Most commonly, a major academic test includes both human-scored and computer-scored sections.
A standardized test can be composed of multiple-choice questions, true-false questions, essay questions, authentic assessments, or nearly any other form of assessment. Multiple-choice and true-false items are often chosen for tests that are taken by thousands of people because they can be given and scored inexpensively, quickly, and reliably, either with special answer sheets that can be read by a computer or via computer-adaptive testing. Some standardized tests have short-answer or essay writing components that are assigned a score by independent evaluators who use rubrics (rules or guidelines) and benchmark papers (examples of papers for each possible score) to determine the grade to be given to a response.
Any subject matter
Not all standardized tests involve answering questions. An authentic assessment for athletic skills could take the form of running for a set amount of time or dribbling a ball for a certain distance. Healthcare professionals must pass tests proving that they can perform medical procedures. Candidates for driver's licenses must pass a standardized test showing that they can drive a car. The Canadian Standardized Test of Fitness has been used in medical research, to determine how physically fit the test takers are.[25][26]
Machine and human scoring
Since the latter part of the 20th century, large-scale standardized testing has been shaped, in part, by the ease and low cost of grading multiple-choice tests by computer. Most national and international assessments are not fully evaluated by people.
Human scorers are used for items that cannot easily be scored by computer, such as essays. For example, the Graduate Record Examination is a computer-adaptive assessment that requires no human scoring except for the writing portion.[27]
Human scoring is relatively expensive and often variable, which is why computer scoring is preferred when feasible. For example, some critics say that poorly paid employees will score tests badly.[28] Agreement between scorers can vary between 60 and 85 percent, depending on the test and the scoring session. For large-scale tests in schools, some test-givers pay to have two or more scorers read each paper; if their scores do not agree, then the paper is passed to additional scorers.[28]
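A minimal Python sketch may make such a multi-scorer adjudication workflow concrete. The two-rater rule, the one-point agreement threshold, and the fallback to averaging are illustrative assumptions, not the procedure of any particular testing program:

```python
def resolve_score(scores: list[int], max_gap: int = 1) -> float:
    """Combine independent human ratings of one essay response.

    If the first two raters agree within ``max_gap`` points, their
    average is final; otherwise additional ratings are consulted
    until one of them agrees with an earlier rating, and as a last
    resort all ratings are averaged.
    """
    assert len(scores) >= 2, "at least two independent ratings required"
    first, second = scores[0], scores[1]
    if abs(first - second) <= max_gap:
        return (first + second) / 2
    # Disagreement: escalate to any additional raters.
    for extra in scores[2:]:
        if abs(extra - first) <= max_gap:
            return (extra + first) / 2
        if abs(extra - second) <= max_gap:
            return (extra + second) / 2
    # No pair of raters agrees: fall back to the mean of all ratings.
    return sum(scores) / len(scores)


print(resolve_score([4, 4]))     # 4.0 -- the two raters agree
print(resolve_score([2, 5, 4]))  # 4.5 -- a third rater resolves the dispute
```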
Though the process is more difficult than grading multiple-choice tests electronically, essays can also be graded by computer. In other instances, essays and other open-ended responses are graded according to a pre-determined assessment rubric by trained graders. For example, at Pearson, all essay graders have four-year university degrees, and a majority are current or former classroom teachers.[29]
Use of rubrics for fairness
Using a rubric is meant to increase fairness when the test taker's performance is evaluated. In standardized testing, measurement error (a consistent pattern of errors and biases in scoring the test) is easy to determine. When the score depends upon the graders' individual preferences, test takers' grades depend upon who grades the test.
Standardized tests also remove grader bias from assessment. Research shows that teachers create a kind of self-fulfilling prophecy in their assessment of test takers, granting higher scores to those they anticipate will achieve and lower grades to those they expect to fail.[30] In non-standardized assessment, graders have more individual discretion and therefore are more likely to produce unfair results through unconscious bias.
| Student answers | Standardized grading | Non-standardized grading |
|---|---|---|
| | Grading rubric: Answers must be marked correct if they mention at least one of the following: Germany's invasion of Poland, Japan's invasion of China, or economic issues. | No grading standards. Each teacher grades however he or she wants to, considering whatever factors the teacher chooses, such as the answer, the amount of effort, the student's academic background, language ability, or attitude. |
| Student #1: WWII was caused by Hitler and Germany invading Poland in 1939. | Teacher #1: This answer mentions one of the required items, so it is correct. | Teacher #1: I feel like this answer is good enough, so I'll mark it correct. |
| Student #2: WWII was caused by multiple factors, including the Great Depression and the general economic situation, the rise of national socialism, fascism, and imperialist expansionism, and unresolved resentments related to WWI. The war in Europe began with the German invasion of Poland. | Teacher #1: This answer mentions one of the required items, so it is correct. | Teacher #1: I feel like this answer is correct and complete, so I'll give full credit. |
| Student #3: WWII was caused by the assassination of Archduke Ferdinand in 1914. | Teacher #1: This answer does not mention any of the required items. No points. | Teacher #1: This answer is wrong. No points. |
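The rubric in the table above is mechanical enough that it could, in principle, be applied by a short program. The following Python sketch illustrates the idea; the keyword lists are hypothetical simplifications of the rubric's three items, and real essay scoring by trained graders is far more nuanced:

```python
# Rubric from the table above: an answer is marked correct if it
# mentions at least one listed cause. The keyword lists are
# hypothetical stand-ins for the rubric's three items.
RUBRIC = {
    "Germany's invasion of Poland": ["poland"],
    "Japan's invasion of China": ["china"],
    "economic issues": ["economic", "depression"],
}


def grade(answer: str) -> bool:
    """Return True when the answer mentions at least one rubric item."""
    text = answer.lower()
    return any(
        any(keyword in text for keyword in keywords)
        for keywords in RUBRIC.values()
    )


print(grade("WWII was caused by Hitler and Germany invading Poland in 1939."))  # True
print(grade("WWII was caused by the assassination of Archduke Ferdinand."))     # False
```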
Using scores for comparisons
There are two types of test score interpretations: a norm-referenced score interpretation or a criterion-referenced score interpretation.[4]
- Norm-referenced score interpretations compare test takers to a sample of peers.[4] The goal is to rank test takers as being better or worse than others. Norm-referenced test score interpretations are associated with traditional education. People who perform better than others pass the test, and people who perform worse than others fail the test.
- Criterion-referenced score interpretations compare test takers to a criterion (a formal definition of content), regardless of the scores of other examinees.[4] These may also be described as standards-based assessments, as they are aligned with the standards-based education reform movement.[31] Criterion-referenced score interpretations are concerned solely with whether or not this particular student's answer is correct and complete. Under criterion-referenced systems, it is possible for all test takers to pass the test, or for all test takers to fail the test.
Either of these systems can be used in standardized testing. What is important to standardized testing is whether all students are asked the equivalent questions, under reasonably equal circumstances, and graded according to the same standards.
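The contrast between the two interpretations can be made concrete with a small Python sketch; the raw scores and the 70-point cut score below are made-up values for illustration:

```python
from statistics import mean  # used only to sanity-check the group average

scores = [55, 62, 70, 78, 85, 91]  # hypothetical raw scores for six test takers

# Criterion-referenced interpretation: compare each score to a fixed
# standard; everyone can pass, and everyone can fail.
CUT_SCORE = 70
criterion = {s: ("pass" if s >= CUT_SCORE else "fail") for s in scores}

# Norm-referenced interpretation: compare each score to the peer group.
def percentile_rank(score: int, group: list[int]) -> float:
    """Percentage of the group scoring strictly below this score."""
    below = sum(1 for s in group if s < score)
    return 100 * below / len(group)

norm = {s: percentile_rank(s, scores) for s in scores}

print("group mean:", mean(scores))
print("criterion-referenced:", criterion)
print("norm-referenced percentiles:", norm)
```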

A normative assessment compares each test taker against other test takers. A norm-referenced test (NRT) is a type of test, assessment, or evaluation which yields an estimate of the position of the tested individual in a predefined population. The estimate is derived from the analysis of test scores and other relevant data from a sample drawn from the population. This type of test identifies whether the test taker performed better or worse than other people taking this test. An IQ test is a norm-referenced standardized test.
Comparing against others makes norm-referenced standardized tests useful for admissions purposes in higher education, where a school is trying to compare students from across the nation or across the world. The standardization ensures that all of the students are being tested equally, and the norm-referencing identifies which are better or worse. Examples of such international benchmark tests include the Trends in International Mathematics and Science Study (TIMSS) and the Progress in International Reading Literacy Study (PIRLS).

A criterion-referenced test (CRT) is a style of test which uses test scores to show how well test takers performed on a given task, not how well they performed compared to other test takers. Most tests and quizzes that are written by school teachers are criterion-referenced tests. In this case, the objective is simply to see whether the test taker can answer the questions correctly. The test giver is not usually trying to compare each person's result against other test takers.
Standards
[edit]The considerations of validity and reliability typically are viewed as essential elements for determining the quality of any standardized test. However, professional and practitioner associations frequently have placed these concerns within broader contexts when developing standards and making overall judgments about the quality of any standardized test as a whole within a given context.
Evaluation standards
In the field of psychometrics, the Standards for Educational and Psychological Testing[32] set standards for validity and reliability, along with errors of measurement and issues related to the accommodation of individuals with disabilities. The third and final major topic covers standards related to testing applications, credentialing, and testing in program evaluation and public policy.
In the field of evaluation, and in particular educational evaluation, the Joint Committee on Standards for Educational Evaluation[33] has published three sets of standards for evaluations. The Personnel Evaluation Standards[34] was published in 1988, The Program Evaluation Standards (2nd edition)[35] was published in 1994, and The Student Evaluation Standards[36] was published in 2003.
Each publication presents a set of standards for use in a variety of educational settings. The standards provide guidelines for designing, implementing, assessing, and improving the identified form of evaluation. Each of the standards has been placed in one of four fundamental categories to promote educational evaluations that are proper, useful, feasible, and accurate. In these sets of standards, validity and reliability considerations are covered under the accuracy topic. The tests are meant to provide sound, accurate, and credible information about learning and performance; however, most academic tests (standardized or not) offer only narrow information about achievement. Relying on a narrow, academic-focused view of achievement does not fully represent a person's potential for success (e.g., by not testing interpersonal skills or soft skills).[37]
Statistical validity
One of the main advantages of larger-scale standardized testing is that the results can be empirically documented; therefore, the test scores can be shown to have a relative degree of validity and reliability, as well as results which are generalizable and replicable.[38] This is often contrasted with grades on a school transcript, which are assigned by individual teachers. When looking at individually assigned grades, it may be difficult to account for differences in educational culture across schools, the difficulty of a given teacher's assignments, differences in teaching style, the pressure for grade inflation, and other techniques and biases that affect grading.
Another advantage is aggregation. A well-designed standardized test provides an assessment of an individual's mastery of a domain of knowledge or skill which at some level of aggregation will provide useful information. That is, while individual assessments may not be accurate enough for practical purposes, the mean scores of classes, schools, branches of a company, or other groups may well provide useful information because of the reduction of error accomplished by increasing the sample size.
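This error-reduction effect is the standard error of the mean at work: for roughly independent scores with standard deviation σ, the mean of n scores fluctuates with a standard deviation of about σ/√n. The short Python simulation below illustrates the effect with made-up numbers (a hypothetical 500-point scale with a 100-point spread):

```python
import random

random.seed(0)
TRUE_MEAN, SD = 500, 100  # hypothetical score scale and per-student spread

def observed_group_mean(n: int) -> float:
    """Mean of n simulated individual scores."""
    return sum(random.gauss(TRUE_MEAN, SD) for _ in range(n)) / n

for n in (1, 25, 400):
    # Repeat the measurement many times to see how much the group
    # mean itself fluctuates; the spread shrinks roughly as 1/sqrt(n).
    trials = [observed_group_mean(n) for _ in range(1000)]
    m = sum(trials) / len(trials)
    sd = (sum((t - m) ** 2 for t in trials) / len(trials)) ** 0.5
    print(f"group size n={n:3d}: spread of the group mean is about {sd:5.1f}")
```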
Testing issues not specific to standardization
Most tests can be classified into multiple categories. For example, a test can be both standardized and a high-stakes test, or standardized and a multiple-choice test. Complaints about "standardized tests" (all test takers take the same test, under reasonably similar conditions, scored the same way) are often focused on concerns unrelated to standardization, and those concerns apply equally to non-standardized tests. For example, a critic may complain that "the standardized tests are all time-limited tests" (a criticism that is true for many, but not all, annual standardized tests given by schools), but the focus of the criticism is the time limit, not the fact that everyone takes the same test and has their answers graded the same way.
High-stakes tests
| | Low-stakes test | High-stakes test |
|---|---|---|
| Standardized test | A personality quiz on a website | An educational entrance examination to determine university admission |
| Non-standardized test | The teacher asks each student to share something they remember from their homework. | The theater holds an audition to determine who will get a starring role. |
A high-stakes test is a test whose results carry important consequences for the test taker, such as a desired reward for good performance.[4] Some standardized tests, including many of the tests used for university admissions around the world, are high-stakes tests. Most standardized tests, such as ordinary classroom quizzes, are low-stakes tests.[4]
Heavy reliance on high-stakes standardized tests for decision-making is often controversial. A common concern with high-stakes tests is that they measure performance during a single event (e.g., performance during a single audition), when critics believe that a more holistic assessment would be appropriate. Critics often propose emphasizing cumulative or even non-numerical measures, such as classroom grades or brief individual assessments (written in prose) from teachers. Supporters argue that test scores provide a clear-cut, objective standard that serves as a valuable check on grade inflation.[39]
Norm-referenced tests
A norm-referenced test is one that is designed and scored so that some test takers rank better or worse than others.[4] The score provides information about the test taker's relative standing, which is helpful when the goal is to determine who is best (e.g., in elite university admissions).[4]
Disagreement with educational standards
A criterion-referenced test is more common and more practical when the goal is to know whether the test takers have learned the required material.[4] For example, if the goal is to know whether someone can do parallel parking, then a standardized driving test has the person park the car and measures their performance according to whether it was done correctly and safely.
However, some critics object to standardized tests not because they object to giving students the same test under reasonably similar conditions and grading the responses the same way, but because they object to the type of material that is typically tested by schools. To use the driving test example, a critic might say that it is unnecessary to know whether the driving student can handle parallel parking. In an educational setting, the critics may wish for non-academic skills or soft skills to be tested. Although standardized tests for non-academic attributes, such as the Torrance Tests of Creative Thinking, exist, schools rarely give standardized tests to measure "initiative, creativity, imagination...curiosity...good will, ethical reflection, or a host of other valuable dispositions and attributes".[40][41] Instead, the tests given by schools tend to focus less on moral or character development, and more on individual, identifiable academic skills, such as reading comprehension and arithmetic.
Test anxiety
Some people become anxious when taking a test. Between ten and forty percent of students experience test anxiety.[42] Test anxiety applies to both standardized and non-standardized tests.
Test anxiety can appear in any situation in which the person believes that they are being judged by others, especially if they believe that they are unlikely to receive a favorable evaluation.[43] This phenomenon is more common for high-stakes tests than for low-stakes tests. High-stakes tests (whether standardized or non-standardized) can cause test anxiety. Children living in poverty are more likely to be affected by testing anxiety than children from wealthier families.[44]
Some students describe themselves as "bad test takers", meaning that they get nervous and unfocused during tests. For example, during a standardized driving exam, the driving student may be so nervous about the test that they make mistakes. Therefore, while the test is standardized and should produce fair results, these test takers say they are at a disadvantage compared to test takers who are less nervous.
Multiple-choice tests and test formats
A multiple-choice test provides the test taker with questions paired with a pre-determined list of possible answers. It is a type of closed-ended question. The test taker chooses the correct answer from the list.
Many critics of standardized testing object to the multiple-choice format, which is commonly used for inexpensive, large-scale testing and which is not suitable for some purposes, such as seeing whether the test taker can write a paragraph. However, standardized testing can use any test format, including open-ended questions, so long as all test takers take the same test, under reasonably similar conditions, and get evaluated the same way.
Teaching to the test
Teaching to the test is the practice of deliberately narrowing instruction to focus only on the material that will be measured on the test. For example, if the teacher knows that an upcoming history test will not include any questions about the history of music or art, then the teacher could "teach to the test" by skipping the material in the textbook about music and art. Critics charge that standardized tests encourage teaching to the test at the expense of creativity and in-depth coverage of subjects not on the test. Critics say that teaching to the test disfavors higher-order learning; it constrains what teachers are allowed to teach and heavily limits the amount of other information students learn.[45] While it is possible to use a standardized test without letting its contents determine curriculum and instruction, frequently, what is not tested is not taught, and how the subject is tested often becomes a model for how to teach the subject.
Externally imposed tests, such as tests created by a department of education for students in their area, encourage teachers to narrow the curricular format and teach to the test.[46]
Performance-based pay is the idea that teachers should be paid more if their students perform well on the tests, and less if they perform poorly.[45] When teachers or schools are rewarded for better test performance, those rewards encourage teachers to "teach to the test" instead of providing a rich and broad curriculum. In 2007, a qualitative metasynthesis by Wayne Au demonstrated that standardized testing narrows the curriculum and encourages teacher-centered instruction instead of student-centered learning.[47] New Jersey Governor Chris Christie proposed educational reform in New Jersey that pressures teachers not only to "teach to the test", but also to have their students perform well, at the potential cost of their salary and job security. The reform called for performance-based pay that depends on students' performances on standardized tests and their educational gains.[48]
Critics contend that overuse and misuse of these tests harms teaching and learning by narrowing the curriculum. According to the group FairTest, when standardized tests are the primary factor in accountability, schools use the tests to narrowly define curriculum and focus instruction. Accountability creates an immense pressure to perform and this can lead to the misuse and misinterpretation of standardized tests.[49]
See also
Major topics
- Achievement test
- Concept inventory – Knowledge assessment tool
- Educational assessment – Educational evaluation method
- Evaluation – Systematic determination of a subject's merit, worth and significance
- List of standardized tests in the United States
- Psychometrics – Theory and technique of psychological measurement
- Item response theory – Paradigm for the design, analysis, and scoring of tests
- Standards-based assessment – Assessment based on specified standards
- Test (assessment) – Educational assessment
Other topics
- Alternative assessment
- Campbell's law – Adage about perverse incentives
- High school graduation exam – High school leaving examination
- IBM 805 Test Scoring Machine
- Standards-based education reform – Educational system based on the desired goals
- Volvo effect – Term for a critique of standardized testing
References
- ^ Popham, W.J. (1999). "Why standardized tests don't measure educational quality". Educational Leadership. 56 (6): 8–15.
- ^ Phelps, Richard P. "Role & Importance of Testing". nonpartisaneducation.org. Retrieved 2016-05-17.
- ^ a b Olson, Amy M.; Sabers, Darrell (October 2008). "Standardized Tests". In Good, Thomas L. (ed.). 21st Century Education: A Reference Handbook. SAGE Publications. pp. 423–430. doi:10.4135/9781412964012.n46. ISBN 9781452265995. S2CID 241229809.
- ^ a b c d e f g h i Allen, G. Donald; Ross, Amanda (2017-11-10). "Low-stakes Tests and Labels". Pedagogy and Content in Middle and High School Mathematics. Springer. ISBN 978-94-6351-137-7.
- ^ Smith, Ember; Reeves, Richard V. (1 December 2020). "SAT math scores mirror and maintain racial inequity". Brookings Institution.
- ^ "Chinese civil service". Encyclopædia Britannica. Retrieved 2 May 2015.
- ^ a b c d e Mark and Boyer (1996), 9–10.
- ^ a b Kazin, Edwards, and Rothman (2010), 142.
- ^ a b Trahair, Richard (2015-06-01). Behavior, Technology, and Organizational Development: Eric Trist and the Tavistock Institute. Transaction Publishers. ISBN 9781412855495.
- ^ Johnson, Robert. "Standardized Tests." Encyclopedia of Educational Reform and Dissent. SAGE Publications, 2010. 853–856. Web.
- ^ Garrison, Mark J. A Measure of Failure: The Political Origins of Standardized Testing. Albany: State University of New York, 2009. Print.
- ^ a b Moller, Stephanie; Potochnick, Stephanie (2008). "Standardized Tests". In Darity, William Jr. (ed.). International Encyclopedia of the Social Sciences. Gale Cengage Learning.
- ^ Gould, S. J., "A Nation of Morons", New Scientist (6 May 1982), 349–352.
- ^ Darity, William Jr. "International Encyclopedia of the Social Sciences". Encyclopedias for Background Information. Gale Cengage Learning. Retrieved 25 January 2017.
- ^ Fletcher, Dan. "Standardized Testing." Time. Time Inc., 11 Dec. 2009. Web. 09 Mar. 2014.
- ^ "What's on the ACT." ACT Test Sections. N.p., n.d. Web. 05 May 2014
- ^ Stiggins, Richard (2002). "Assessment Crisis: The Absence Of Assessment FOR Learning" (PDF). Phi Delta Kappan. 83 (10): 758–765. doi:10.1177/003172170208301010. S2CID 145683785.
- ^ a b Strauss, Valerie (March 11, 2015). "Five Reasons Standardized Testing Isn't Going to Let Up". The Washington Post. The Washington Post. Retrieved 26 January 2017.
- ^ "History and Background of No Child Left Behind". Bright Hub Education9 June 2015. Web. 12 October 2015. http://www.brighthubeducation.com/student-assessment-tools/3140-history-of-the-no-child-left-behind-act/
- ^ "Every Student Succeeds Act (ESSA) | U.S. Department of Education".
- ^ Claiborn, Charles. "High Stakes Testing". Encyclopedia of Giftedness, Creativity, and Talent. SAGE Publications, 2009. 9 April 2014.
- ^ Valerie, Strauss (June 21, 2020). "It looks like the beginning of the end of America's obsession with student standardized tests". The Washington Post.
- ^ "Home – The Australian Curriculum v8.1". www.australiancurriculum.edu.au. Retrieved 2016-05-17.
- ^ Cowley, Peter; MacPherson, Paige (2022). TESTING CANADIAN K-12 STUDENTS: Regional Variability, Room for Improvement (PDF). Fraser Institute. ISBN 978-0-88975-694-6. Retrieved December 19, 2023.
- ^ Horowitz, M. R.; Montgomery, D. L. (January 1993). "Physiological profile of fire fighters compared to norms for the Canadian population". Canadian Journal of Public Health. 84 (1): 50–52. ISSN 0008-4263. PMID 8500058.
- ^ Canadian Association of Sports Sciences; Fitness Appraisal Certification and Accreditation Program; Canadian Society for Exercise Physiology; Fitness Canada (1987). Canadian Standardized Test of Fitness (CSTF): for 15 to 69 years of age: interpretation and counselling manual. Gloucester, Ontario: Canadian Society for Exercise Physiology. ISBN 0-662-15736-2. OCLC 16048356.
- ^ ETS webpage Archived 2009-06-18 at the Wayback Machine about scoring the GRE.
- ^ a b Houtz, Jolayne (August 27, 2000). "Temps spend just minutes to score state test: A WASL math problem may take 20 seconds; an essay, 2½ minutes" Archived 2007-03-10 at the Wayback Machine. Seattle Times. "In a matter of minutes, a $10-an-hour temp assigns a score to your child's test"
- ^ Rich, Motoko (2015-06-22). "Grading the Common Core: No Teaching Experience Required". The New York Times. ISSN 0362-4331. Retrieved 2015-10-06.
- ^ Lee, Jussim (1989). "Teacher expectations: Self-fulfilling prophecies, perceptual bias, and accuracy". Journal of Personality and Social Psychology. 57 (3): 469–480. doi:10.1037/0022-3514.57.3.469.
- ^ Where We Stand: Standards-Based Assessment and Accountability (American Federation of Teachers) [1] Archived August 24, 2006, at the Wayback Machine
- ^ "The Standards for Educational and Psychological Testing". www.apa.org. Retrieved 2 May 2015.
- ^ "Joint Committee on Standards for Educational Evaluation". Archived from the original on 15 October 2009. Retrieved 2 May 2015.
- ^ Joint Committee on Standards for Educational Evaluation. (1988). The Personnel Evaluation Standards: How to Assess Systems for Evaluating Educators. Archived 2005-12-12 at the Wayback Machine Newbury Park, CA: Sage Publications.
- ^ Joint Committee on Standards for Educational Evaluation. (1994). The Program Evaluation Standards, 2nd Edition. Archived 2006-02-22 at the Wayback Machine Newbury Park, CA: Sage Publications.
- ^ Committee on Standards for Educational Evaluation. (2003). The Student Evaluation Standards: How to Improve Evaluations of Students. Archived 2006-05-24 at the Wayback Machine Newbury Park, CA: Corwin Press.
- ^ Morgan, Hani (2016). "Relying on High-Stakes Standardized Tests to Evaluate Schools and Teachers: A Bad Idea". The Clearing House: A Journal of Educational Strategies, Issues and Ideas. 89 (2): 67–72. doi:10.1080/00098655.2016.1156628. S2CID 148015644.
- ^ Kuncel, N. R.; Hezlett, S. A. (2007). "ASSESSMENT: Standardized Tests Predict Graduate Students' Success". Science. 315 (5815): 1080–81. doi:10.1126/science.1136618. PMID 17322046. S2CID 143260128.
- ^ Buckley, Jack; Letukas, Lynn; Wildavsky, Ben (2017), Measuring Success: Testing, Grades, and the Future of College Admissions, Baltimore: Johns Hopkins University Press, p. 344, ISBN 9781421424965
- ^ Kohn, Alfie (2000). The Case Against Standardized Testing: Raising the Scores, Ruining the Schools. Portsmouth, NH: Heinemann. ISBN 978-0325003252.
- ^ To teach: the journey of a teacher, by William Ayers, Teachers College Press, 1993, ISBN 0-8077-3985-5, ISBN 978-0-8077-3985-3, pg. 116
- ^ Wood; Hart; Little; Phillips (2016). "Test Anxiety and a High-Stakes Standardized Reading Comprehension Test: A Behavioral Genetics Perspective". Merrill-Palmer Quarterly. 62 (3): 233–251. doi:10.13110/merrpalmquar1982.62.3.0233. ISSN 0272-930X. PMC 5487000. PMID 28674461.
- ^ Zeidner, Moshe (2005-12-27). Test Anxiety: The State of the Art. Springer Science & Business Media. pp. 56–58. ISBN 978-0-306-47145-2.
- ^ "Tests and Stress Bias". Harvard Graduate School of Education. 12 February 2019. Retrieved 2022-10-27.
- ^ a b Williams, Mary (2015). "Standardized Testing Is Harming Student Learning". go.galegroup.com. Retrieved March 28, 2018.
- ^ "Goswami U (1991) Put to the Test: The Effects of External Testing on Teachers. Educational Researcher 20: 8-11". Archived from the original on 2013-02-02.
- ^ Au, Wayne (2007-06-01). "High-Stakes Testing and Curricular Control: A Qualitative Metasynthesis". Educational Researcher. 36 (5): 258–267. doi:10.3102/0013189X07306523. ISSN 0013-189X. S2CID 507582.
- ^ Arco, Matt (June 12, 2015). "Christie Education Speech in Iowa". NJ.com. Retrieved July 25, 2016.
- ^ Holloway, J. H. (2001). "The Use and Misuse of Standardized Tests". Educational Leadership. 59 (1): 77.
Further reading
- FairTest, "What's Wrong With Standardized Tests," Archived 2019-10-18 at the Wayback Machine Fact Sheet. (New York: Basic Books, 1985), pp. 172–181.
- Harris, Smith, and Harris. The Myths of Standardized Tests: Why They Don't Tell You What You Think They Do. Rowman & Littlefield, 2011.
- Huddleston, Mark W.; Boyer, William W. The Higher Civil Service in the United States: Quest for Reform. University of Pittsburgh Press, 1996.
- Phelps, Richard P. The Effect of Testing on Student Achievement, 1910–2010, International Journal of Testing, 10(1), 2012.
- Phelps, Richard P., Ed. Correcting Fallacies about Educational and Psychological Testing. (Washington, DC: American Psychological Association, 2008)
- Phelps, Richard P., Standardized Testing Primer. (New York, NY: Peter Lang, 2007)
- Phelps, Richard P. The Role and Importance of Standardized Testing in the World of Teaching and Training
- Ravitch, Diane, "The Uses and Misuses of Tests" Archived 2017-10-18 at the Wayback Machine, in The Schools We Deserve.
- Strauss, Valerie. "Confirmed: Standardized testing has taken over our schools. But who's to blame?"
External links
[edit]Standardized test
View on GrokipediaDefinition and Core Principles
Definition and Purpose
A standardized test is an assessment that requires all test-takers to answer the same questions, or a selection from a common question bank, under uniform administration and scoring procedures to enable consistent comparison of performance across individuals or groups.[5] This standardization ensures that variations in results reflect differences in abilities rather than discrepancies in testing conditions, with reliability established through empirical validation on large representative samples.[6] Such tests are typically objective, often featuring formats like multiple-choice items that minimize subjective scoring, though they may include constructed-response elements scored via rubrics.[5] The core purpose of standardized testing is to measure specific knowledge, skills, or aptitudes against established norms or criteria, facilitating objective evaluations for decision-making in education, employment, and certification.[7] Norm-referenced tests compare individuals to a peer group, yielding percentile ranks or standard scores derived from a normal distribution, while criterion-referenced tests assess mastery of predefined standards independent of others' performance.[5] These instruments support high-stakes applications, such as college admissions via exams like the SAT, where over 1.9 million U.S. students participated in 2023 to demonstrate readiness, or accountability measures under policies like No Child Left Behind, which mandated annual testing in reading and mathematics for grades 3-8 from 2002 onward to track proficiency rates.[8] By providing quantifiable data, standardized tests inform resource allocation, curriculum adjustments, and identification of achievement gaps, though their validity depends on alignment with intended constructs and avoidance of cultural biases confirmed through psychometric analysis.[9][6] In professional contexts, standardized tests serve selection and licensure functions, such as the Graduate Record Examination (GRE) used by over 300 graduate programs annually to predict academic success, or civil service exams that screened applicants for U.S. federal positions since the Pendleton Act of 1883, reducing patronage by prioritizing merit-based scoring.[8] Overall, their design promotes fairness by mitigating evaluator bias, enabling large-scale assessments that individual judgments cannot match in scalability or comparability.[10]Key Characteristics of Standardization
Standardization in testing refers to the establishment of uniform procedures for test administration, scoring, and interpretation to ensure comparability of results across test-takers. This process mandates that all examinees encounter identical or statistically equivalent test items, receive the same instructions, adhere to consistent time limits, and complete the assessment under comparable environmental conditions, such as quiet settings and supervised proctoring.[5][11] Such uniformity minimizes extraneous variables that could influence performance, enabling scores to reflect inherent abilities or knowledge rather than situational differences.[12] A core feature is objective scoring, where responses are evaluated using predetermined criteria that reduce or eliminate subjective judgment, often through machine-readable formats like multiple-choice items or automated essay scoring algorithms calibrated against human benchmarks. This objectivity contrasts with teacher-made assessments, where variability in grading can introduce bias; standardized tests achieve high inter-rater reliability, typically exceeding 0.90 in psychometric evaluations, by employing fixed answer keys or rubrics validated through empirical trials.[13] Equivalent forms—alternate versions of the test with parallel difficulty and content—are developed and equated statistically to prevent advantages from prior exposure, ensuring fairness in repeated administrations such as annual proficiency exams.[14] Norming constitutes another essential characteristic, involving the administration of the test to a large, representative sample of the target population—often thousands stratified by age, gender, socioeconomic status, and geography—to derive percentile ranks, standard scores, or stanines that contextualize individual performance. For instance, norms for aptitude tests like the SAT are updated periodically using samples exceeding 1 million U.S. high school students to reflect demographic shifts and maintain relevance.[15] This process relies on psychometric techniques, including item response theory, to calibrate difficulty and discriminate ability levels, yielding reliable metrics where test-retest correlations often surpass 0.80 over short intervals.[16] Without rigorous norming, scores lack interpretive validity, as evidenced by historical revisions to IQ tests that adjusted for the Flynn effect—a documented 3-point-per-decade rise in scores due to environmental factors.[17] Finally, standardization incorporates safeguards for accessibility and equity, such as accommodations for disabilities (e.g., extended time verified through empirical validation studies) while preserving test integrity, and ongoing validation against external criteria like academic outcomes to confirm predictive utility. These elements collectively underpin the test's reliability—consistency of scores under repeated conditions—and validity—alignment with intended constructs—hallmarks of psychometric soundness.[18][19]Historical Development
Ancient and Early Modern Origins
The earliest known system of standardized testing emerged in ancient China during the Han dynasty (206 BCE–220 CE), where initial forms of merit-based selection for government officials involved recommendations and rudimentary assessments of scholarly knowledge, primarily drawn from Confucian texts.[20] This evolved into a more formalized examination process by the Sui dynasty (581–618 CE), with Emperor Wen establishing the first imperial examinations in 605 CE to recruit civil servants based on uniform evaluations of candidates' mastery of classical literature, ethics, and administrative skills.[21] These tests were administered nationwide at provincial, metropolitan, and palace levels, featuring standardized formats such as essay writing on prescribed topics from the Five Classics and policy memoranda, with anonymous grading to minimize favoritism and corruption.[22] By the Tang dynasty (618–907 CE), the system had standardized further, emphasizing rote memorization, poetic composition, and interpretive analysis under timed conditions, serving as a meritocratic tool for social mobility that bypassed hereditary privilege in favor of demonstrated competence.[23] Success rates were low, with only about 1–5% of candidates passing the highest levels across dynasties, reflecting rigorous norming against elite scholarly standards. The Song dynasty (960–1279 CE) refined the process with printed question papers and multiple-choice elements in some sections, increasing scale to thousands of examinees per cycle and institutionalizing it as a cornerstone of bureaucratic selection.[23] In contrast, ancient Western traditions, such as those in Greece and Rome, relied on non-standardized oral examinations and rhetorical displays rather than uniform written tests. Greek education in city-states like Athens involved assessments through debates and recitations evaluated subjectively by teachers, prioritizing dialectical skills over quantifiable metrics.[24] Roman systems similarly featured public orations and legal disputations for entry into professions, lacking the centralized, anonymous scoring of Chinese exams.[24] During the early modern period in China (Ming and Qing dynasties, 1368–1912 CE), the keju system persisted with enhancements like stricter content uniformity and anti-cheating measures, such as secluded testing halls, testing up to 10,000 candidates per session and maintaining predictive validity for administrative roles through empirical correlations with performance in office. In Europe, early modern assessments remained predominantly oral or essay-based in universities, with no widespread adoption of standardized formats until the 19th century, when British administrators drew indirect inspiration from Chinese models for colonial civil services.[25]19th and Early 20th Century Innovations
In the mid-19th century, educational reformers in the United States began transitioning from oral examinations to standardized written assessments to promote uniformity and objectivity in evaluating student achievement. Horace Mann, secretary of the Massachusetts Board of Education, advocated for written tests in 1845 as a means to assess pupil progress across diverse school districts, replacing subjective yearly oral exams with more consistent methods that could reveal systemic educational deficiencies.[26] This shift aligned with broader efforts to professionalize public schooling, though early implementations remained limited in scope and lacked the statistical norming of later standardized tests.[27] Pioneering psychometric approaches emerged in the late 19th century, with Francis Galton developing early mental tests in the 1880s to quantify human abilities through anthropometric measurements of sensory discrimination, reaction times, and mental imagery via questionnaires distributed to scientific acquaintances. Galton's work, influenced by his studies of heredity and individual differences, established foundational principles for measuring innate capacities empirically, though his tests correlated more with sensory acuity than higher cognitive functions.[28] These innovations laid the groundwork for differential psychology but were critiqued for overemphasizing physiological traits over intellectual ones. The early 20th century saw practical applications in educational and admissions testing. The College Entrance Examination Board (CEEB) was founded in 1900 by representatives from 12 universities to standardize college admissions, administering its first essay-based exams in 1901 across nine subjects including mathematics, history, and classical languages, with over 300 students tested nationwide.[29] Concurrently, in 1905, French psychologist Alfred Binet and physician Théodore Simon created the Binet-Simon scale, the first operational intelligence test, featuring 30 age-graded tasks such as following commands, naming objects, and pattern reproduction to identify children with intellectual delays for remedial education, as commissioned by the Paris Ministry of Public Instruction.[30][31] This scale introduced concepts like mental age, emphasizing practical utility over Galtonian sensory focus, and was revised in 1908 to enhance reliability. World War I accelerated large-scale standardization with the U.S. Army's Alpha and Beta tests, developed in 1917 under psychologist Robert Yerkes and administered to approximately 1.7 million recruits by 1918 to classify personnel by mental aptitude. The Alpha, a verbal multiple-choice exam covering arithmetic, vocabulary, and analogies, targeted literate soldiers, while the Beta used pictorial and performance tasks for illiterate or non-English speakers, enabling rapid group testing under time constraints and yielding data on national intelligence distributions that influenced postwar policy debates.[32] These military innovations demonstrated standardized tests' scalability for selection in high-stakes contexts, though results were later contested for cultural biases favoring educated urban recruits.[33]Mid-20th Century Expansion and Standardization
The expansion of standardized testing in the mid-20th century was propelled by the post-World War II surge in educational access, particularly in the United States, where the Servicemen's Readjustment Act of 1944—commonly known as the GI Bill—provided tuition assistance, subsistence allowances, and low-interest loans to over 7.8 million veterans by 1956, helping college enrollments triple from 1.5 million students in 1940 to 4.6 million by 1950.[34][35] This influx overwhelmed traditional admissions methods that relied on subjective recommendations, prompting greater dependence on objective, scalable assessments such as the Scholastic Aptitude Test (SAT), which had originated in 1926 but saw administrations rise from approximately 10,000 test-takers in 1941 to over 100,000 annually by the early 1950s to facilitate merit-based selection amid the applicant boom.[36][27]

A pivotal development occurred in 1947 with the founding of the Educational Testing Service (ETS) through the consolidation of testing operations from the College Entrance Examination Board, the Carnegie Foundation for the Advancement of Teaching, and the American Council on Education. This nonprofit entity, chartered by the New York State Board of Regents, centralized test development, administration, and scoring to enhance psychometric rigor, including the adoption of multiple-choice formats amenable to machine scoring and the establishment of national norms based on representative samples.[37][38] Under leaders like Henry Chauncey, ETS refined procedures for equating test forms across administrations—ensuring scores reflected consistent difficulty levels—and expanded the SAT's scope, administering it to broader demographics while integrating statistical methods such as item analysis to minimize content bias and achieve reliability coefficients often exceeding 0.90 for total scores.[39][2]

In K-12 education, standardized achievement tests proliferated during this era, becoming embedded in school routines by the 1950s. Instruments such as the Stanford Achievement Test (revised in 1941 and widely adopted postwar) and the Iowa Tests of Basic Skills (first published in 1935 and expanded in the 1940s) were administered annually to millions of students in over half of U.S. school districts to benchmark performance against grade-level norms derived from stratified national samples, enabling comparisons of instructional effectiveness across regions.[40][29] These tests combined criterion-referenced elements with norm-referencing, and subscores in subjects like reading and mathematics yielded percentile ranks that informed curriculum adjustments, though their validity hinged on empirical validation showing correlations of 0.50–0.70 with future academic outcomes.[27] The 1959 introduction of the American College Test (ACT), comprising sections in English, mathematics, social sciences, and natural sciences, further diversified higher-education assessment, competing with the SAT by offering content-specific measures scored on a 1–36 scale.[41]

Standardization processes advanced through psychometric innovations, including the widespread use of normal distribution models for norming—whereby raw scores were converted to standardized scales (e.g., a mean of 500 and standard deviation of 100 for the SAT verbal and math sections)—facilitating inter-year comparability and predictive utility, as evidenced by longitudinal studies linking scores to college grade-point averages with coefficients around 0.50.[1] This era's emphasis on empirical reliability over anecdotal evaluation marked a shift toward data-driven educational decision-making, though it also amplified debates over test coaching effects and socioeconomic correlations in score variances.[42]
Late 20th to Early 21st Century Reforms
In the United States, the No Child Left Behind Act (NCLB), signed into law on January 8, 2002, represented a major federal push for accountability through expanded standardized testing, requiring states to administer annual assessments in reading and mathematics to students in grades 3 through 8, as well as once in high school, with results disaggregated by subgroups including race, income, English proficiency, and disability status to identify achievement gaps.[43] The law tied school funding and sanctions to adequate yearly progress (AYP) benchmarks, aiming to ensure all students reached proficiency by 2014, which spurred states to develop or refine aligned tests and shifted testing from sporadic to systematic administration.[44] Empirical data after NCLB showed modest gains in national math scores for grades 4 and 8 (rising 11 and 7 points, respectively, from 2003 to 2007 on the National Assessment of Educational Progress) and narrowed gaps between white and minority students, though critics noted incentives to narrow curricula toward tested subjects.[45]

Reforms in college admissions testing during this era addressed criticisms of content misalignment and predictive validity. The SAT, administered by the College Board, underwent a significant redesign in 2005, adding a writing section with an essay component that raised the maximum score from 1600 to 2400 and aimed to better reflect high school curricula amid competition from the ACT, whose share of test-takers rose from 28% in 1997 to over 40% by 2007.[46] These changes responded to research questioning the SAT's verbal analogies for cultural bias and the test's low correlation with college GPA (around 0.3–0.4), prompting shifts toward evidence-based reading and grammar assessment.[47] By 2016, further SAT revisions eliminated the penalty for guessing, emphasized real-world data interpretation, and aligned the test more closely with Common Core emphases on critical thinking, reflecting broader efforts to enhance fairness and utility. The ACT, in parallel, introduced an optional writing section in 2005 and expanded its science reasoning section, adapting to demands for multifaceted skill measurement.
The adoption of the Common Core State Standards (CCSS) in 2010 by 45 states and the District of Columbia catalyzed a wave of assessment reforms, replacing many state-specific tests with consortium-developed exams such as the Partnership for Assessment of Readiness for College and Careers (PARCC) and Smarter Balanced, which incorporated performance tasks, open-ended questions, and computer-based delivery to evaluate deeper conceptual understanding over rote recall.[48] These standards-driven tests, rolled out from 2014–2015, prioritized skills such as evidence-based argumentation in English language arts and mathematical modeling, with early implementation showing varied proficiency rates across states (e.g., 37% in grade 8 math nationally in early trials) but facing pushback over perceptions of federal overreach and implementation costs exceeding $1 billion across states.[49]

Concurrently, computerized adaptive testing (CAT) gained traction, as in Smarter Balanced's format, where question difficulty adjusts in real time based on prior responses, reducing test length by 20–30% while maintaining reliability (Cronbach's alpha >0.90) through item response theory algorithms that calibrate item selection to individual ability levels.[50] This technological shift, piloted in state assessments after NCLB, improved precision by minimizing floor and ceiling effects, though equitable access to computer infrastructure remained a challenge in under-resourced districts.[51]
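The adaptive mechanism can be sketched in a few lines of Python. The following is a minimal illustration under a one-parameter (Rasch) IRT model; the item bank, response pattern, and estimation routine are hypothetical simplifications for exposition, not the algorithm of Smarter Balanced or any operational testing engine.

```python
import math

def p_correct(theta, b):
    """Rasch model: probability of answering an item of difficulty b
    correctly, given ability theta (both on the logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(difficulties, responses, theta=0.0, lr=0.5, steps=50):
    """Crude gradient ascent on the response log-likelihood, clamped to
    [-3, 3]; operational CATs use Newton-Raphson or Bayesian estimators."""
    for _ in range(steps):
        grad = sum(u - p_correct(theta, b)
                   for b, u in zip(difficulties, responses))
        theta = max(-3.0, min(3.0, theta + lr * grad))
    return theta

def next_item(theta, bank, used):
    """Pick the unused item whose difficulty is closest to the current
    ability estimate -- the maximum-information choice under the Rasch model."""
    return min((i for i in range(len(bank)) if i not in used),
               key=lambda i: abs(bank[i] - theta))

bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]   # hypothetical difficulties
theta, used, given, answers = 0.0, set(), [], []
for answer in [1, 1, 0, 1]:                      # illustrative response pattern
    i = next_item(theta, bank, used)
    used.add(i)
    given.append(bank[i])
    answers.append(answer)
    theta = estimate_theta(given, answers)
    print(f"item b={bank[i]:+.1f}  answer={answer}  theta={theta:+.2f}")
```

Each response shifts the ability estimate, and the next item is drawn from near that estimate, which is how adaptive tests achieve comparable precision with fewer items than fixed forms.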
Design and Technical Aspects
Test Construction and Content Development
Test construction for standardized assessments follows a rigorous, multi-stage process guided by psychometric principles to ensure that the instruments measure intended constructs with high fidelity, reliability, and minimal distortion from extraneous factors. This involves collaboration among subject matter experts (SMEs), psychometricians, and statisticians, adhering to frameworks such as the Standards for Educational and Psychological Testing, which emphasize evidence-based design, documentation of procedures, and evaluation of technical quality throughout development.[52][53] The process prioritizes alignment with validated content standards, such as state curricula or college-readiness benchmarks, to support valid inferences about examinee abilities rather than superficial knowledge recall.[54]

Initial content development entails creating a test blueprint that specifies domains, subdomains, item types (e.g., multiple-choice, constructed-response), cognitive demands (e.g., recall vs. application), and item distributions that reflect real-world task relevance. For instance, the College Board's SAT Suite derives its specifications from empirical analyses of skills predictive of postsecondary success, including algebra, problem-solving in science, and evidence-based reading.[55] SMEs, often educators or practitioners, draft items under strict guidelines: stems must pose unambiguous problems, options should include plausible distractors without clues, and content must avoid cultural or linguistic biases that could confound ability measurement.[56] Educational Testing Service (ETS) employs interdisciplinary teams for this work, with items prototyped to target precise difficulty levels—typically 0.3 to 0.7 on the p-value scale (proportion correct)—to optimize information yield across ability ranges.[57]

Draft items undergo iterative reviews for content accuracy, clarity, and fairness, including sensitivity panels to detect potential adverse impacts on demographic subgroups. ETS protocols mandate multiple blind reviews and empirical checks for differential item functioning (DIF), in which statistical models such as Mantel-Haenszel or logistic regression identify items performing discrepantly across groups after controlling for overall ability, leading to revision or deletion if discrepancies exceed thresholds (e.g., standardized DIF >1.5).[56][57]

Pretesting follows on representative samples—often thousands of examinees mirroring the target population in age, ethnicity, and socioeconomic status—to gather empirical data. Item analyses compute metrics such as point-biserial correlations (ideally >0.3 for discrimination) and internal consistency via Cronbach's alpha (>0.8 for high-stakes tests), informing item selection for operational forms.[53] Final assembly balances statistical properties with content coverage, using algorithms to equate forms for comparability across administrations via methods such as item response theory (IRT), which models item parameters (difficulty, discrimination) on a latent trait scale. This ensures that scores reflect stable ability estimates, with equating studies verifying mean score invariance within 0.1 standard deviations.[58] Ongoing validation, including post-administration analyses, refines future iterations; for example, ACT and ETS conduct annual reviews correlating item performance with external criteria such as GPA to confirm construct validity.[59] These procedures reduce measurement error but cannot eliminate all sources of variance, as factors such as motivation and prior exposure also influence outcomes, underscoring the need for multifaceted interpretation of scores.[52]
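The pretesting statistics named above can be illustrated with a short Python sketch. The response matrix here is simulated from a Rasch model purely for demonstration; all values are hypothetical rather than drawn from any real administration.

```python
import numpy as np

# Simulate a 0/1 response matrix (rows = examinees, columns = items).
rng = np.random.default_rng(0)
n_people, n_items = 500, 10
theta = rng.normal(size=(n_people, 1))          # latent abilities
b = np.linspace(-1.5, 1.5, n_items)             # hypothetical item difficulties
X = (rng.random((n_people, n_items))
     < 1 / (1 + np.exp(-(theta - b)))).astype(float)

# Difficulty: proportion correct per item (the p-value scale in the text).
p_values = X.mean(axis=0)

# Discrimination: correlation of each item with the rest-score
# (total score excluding that item), a corrected point-biserial.
total = X.sum(axis=1)
disc = np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                 for j in range(n_items)])

# Internal consistency: Cronbach's alpha.
k = n_items
alpha = (k / (k - 1)) * (1 - X.var(axis=0, ddof=1).sum() / total.var(ddof=1))

print("p-values:", np.round(p_values, 2))
print("item-rest correlations:", np.round(disc, 2))
print("Cronbach's alpha:", round(alpha, 3))
```

Items falling outside the targeted difficulty band or showing item-rest correlations below about 0.3 would be flagged for revision before operational use.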
Administration and Scoring Procedures
Standardized tests demand uniform administration protocols to guarantee score comparability and minimize extraneous influences on performance. These protocols, outlined in professional guidelines such as the Standards for Educational and Psychological Testing (2014), require test administrators to adhere strictly to developer-specified instructions, including precise timing of sections, standardized verbal directions, and controlled environmental conditions such as quiet rooms and proper lighting.[52][53] Trained proctors oversee sessions to enforce rules against unauthorized aids, communication, or disruptions, with at least one proctor per group of examinees typically mandated to uphold security.[60] Irregularities, such as suspected cheating, trigger documented incident reports and potential score invalidation to preserve test integrity.[61] Accommodations for disabilities follow established criteria, ensuring equivalent access without altering test constructs, in line with guidelines emphasizing fairness over advantage.[52] Test security extends to the handling of materials before and after administration, with secure storage and chain-of-custody procedures to prevent tampering or leaks.[62]

Scoring procedures prioritize objectivity and consistency so that scores reflect ability rather than rater variability. Objective items, such as multiple-choice questions, undergo automated scoring via optical mark recognition or digital systems, yielding raw scores as the count of correct responses, sometimes adjusted for guessing by formulas that deduct a fraction of incorrect answers.[63] Raw scores are converted to scaled scores through equating processes—statistical methods such as linear or equipercentile equating—that account for differences in form difficulty, maintaining score meaning across administrations and yielding metrics such as percentiles or standard scores with means of 100 and standard deviations of 15 or 20.[64][65] Constructed-response items employ analytic rubrics with predefined criteria, scored by trained human raters under dual-rating systems in which interrater agreement targets 80–90% exact or adjacent matches, with adjudication of discrepancies.[66] ETS guidelines for such scoring stress rater calibration sessions, ongoing monitoring, and empirical checks for bias to keep reliability coefficients above 0.80.[66] Final scores aggregate section results, sometimes weighted, and undergo psychometric review for anomalies before release, typically within weeks, via secure portals.[67]
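As a minimal sketch of the two arithmetic steps just described—formula scoring to correct for guessing, and linear equating between forms—consider the following Python fragment; the constants are illustrative, not taken from any published test.

```python
def formula_score(right, wrong, choices=5):
    """Classic correction for guessing: R - W/(k-1) for k-option items."""
    return right - wrong / (choices - 1)

def linear_equate(raw, mean_x, sd_x, mean_y, sd_y):
    """Map a raw score on form X onto the scale of form Y by matching
    means and standard deviations (linear equating)."""
    return mean_y + sd_y * (raw - mean_x) / sd_x

raw = formula_score(right=52, wrong=12)        # 49.0 on a 5-option test
print(linear_equate(raw, mean_x=47.0, sd_x=9.5,
                    mean_y=50.0, sd_y=10.0))   # ~52.1 on form Y's scale
```

Equipercentile equating replaces the linear map with one that matches the full score distributions, which better handles forms whose difficulty differs unevenly across the score range.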
Standardization and Norming Processes
Standardization in psychological and educational testing entails establishing uniform protocols for test administration, scoring, and interpretation to ensure comparability across individuals and groups. The process begins with the development of test items through rigorous procedures, including content validation by subject-matter experts and pilot testing to refine items for clarity and difficulty. The test is then field-tested on a large, representative sample under controlled conditions—standardized instructions, timing, and environment—to collect empirical data for scaling and norm establishment.[68]

Norming follows field testing and involves administering the test to a norm group, typically a stratified random sample of thousands of individuals matched to the target population's demographics, including age, gender, ethnicity, socioeconomic status, and geographic region. For national tests like the SAT, the norm group comprises over 200,000 college-bound high school seniors annually, reflecting the intended test-taker pool. Raw scores from this group are analyzed statistically to derive descriptive statistics, such as the mean and standard deviation, often assuming a normal distribution for transformation into standard scores (e.g., a mean of 100 and standard deviation of 15 for intelligence tests such as the Wechsler Adult Intelligence Scale). Percentile ranks, stanines, and other derived metrics are computed to indicate relative standing within the norm group.[69][70]

Norm-referenced standardization, prevalent in aptitude and achievement tests, interprets scores relative to the norm group's performance, enabling comparisons of an individual's standing (e.g., in the top 10%). In contrast, criterion-referenced norming evaluates mastery against fixed performance standards, such as proficiency cut scores determined by expert panels using methods like Angoff or bookmarking, without direct peer comparison. Many modern standardized tests hybridize these approaches; for instance, state accountability exams under the U.S. Every Student Succeeds Act (2015) set criterion-based proficiency levels but may report norm-referenced percentiles for additional context. Norms must be periodically updated—every 5–15 years—to account for shifts in population abilities, as seen in IQ tests where the Flynn effect necessitates adjustments of approximately 3 IQ points per decade; failure to renorm can lead to score inflation or deflation, undermining validity.[71][72]

Equating ensures comparability across multiple test forms or administrations, using techniques such as equipercentile methods or item response theory (IRT) to adjust for minor content variations while preserving the underlying ability scale. This is critical for high-stakes tests, where statistical linking maintains score stability; the Graduate Record Examination (GRE), for example, employs IRT-based equating on a continuous scale derived from field-test data. Overall, these processes prioritize empirical rigor to minimize measurement error, though critics note potential biases if norm groups inadequately represent subgroups, prompting ongoing refinements through diverse sampling and differential item functioning analyses.[73]
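A compact Python sketch of the norm-referenced transformations described above, using a small hypothetical norm group; operational norming uses stratified samples of thousands, but the arithmetic is the same.

```python
import statistics

norm_group = [31, 35, 38, 40, 42, 44, 45, 47, 50, 55]   # hypothetical raw scores
mu = statistics.mean(norm_group)
sigma = statistics.stdev(norm_group)

def standard_score(raw, mean=100, sd=15):
    """Deviation-style standard score (e.g., the Wechsler IQ metric):
    convert to a z-score against the norm group, then rescale."""
    z = (raw - mu) / sigma
    return round(mean + sd * z)

def percentile_rank(raw):
    """Percentage of the norm group scoring below raw, counting ties as half."""
    below = sum(x < raw for x in norm_group)
    ties = sum(x == raw for x in norm_group)
    return 100 * (below + 0.5 * ties) / len(norm_group)

print(standard_score(47), percentile_rank(47))   # -> 109 75.0
```

A criterion-referenced report would instead compare the raw score of 47 against a fixed cut score set by an expert panel, with no reference to the norm group at all.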
Validity, Reliability, and Empirical Foundations
Statistical Validity and Reliability Metrics
Standardized tests assess reliability through metrics that quantify score consistency: internal consistency via Cronbach's alpha (or Kuder-Richardson 20 for dichotomous items), test-retest correlations, alternate-forms reliability, and inter-rater agreement for constructed-response sections. Cronbach's alpha measures how well items correlate to form a unidimensional scale, with values above 0.70 deemed acceptable and above 0.90 indicating excellent reliability; for major admissions tests, alphas typically exceed 0.90, reflecting low measurement error and high precision.[74][75] For the ACT Composite score, reliability estimates reach 0.95 for 10th graders and 0.96 for 11th graders, based on large-scale administrations.[76] Similarly, SAT sections show coefficients from 0.89 to 0.93 across internal-consistency and test-retest methods.[75] GRE sections exhibit Verbal Reasoning reliability of 0.92, Quantitative Reasoning of 0.93, Analytical Writing of 0.79 (lower due to subjective scoring), and combined Verbal+Quantitative of 0.96.[77] Test-retest reliability, evaluating score stability over short intervals (e.g., 2–3 weeks), is particularly relevant for aptitude-oriented standardized tests measuring relatively stable cognitive traits, yielding coefficients often above 0.80 in achievement contexts.[78] Alternate-forms reliability, used when parallel test versions exist, likewise supports consistency, as seen in equating processes for tests like the SAT that minimize form-to-form variance. Collectively, these metrics ensure that true-score variance dominates error variance, and reliability feeds into standard error of measurement calculations (SEM = SD × √(1 − reliability)), which for high-reliability tests like the ACT yields narrow confidence intervals around scores; a worked example follows the table below.[79]

Validity metrics evaluate whether tests measure their intended constructs, encompassing content validity (alignment to domain specifications via expert judgment), criterion validity (correlations with external outcomes), and construct validity (convergent/discriminant evidence, factor structure). Predictive criterion validity for college admissions tests is gauged by correlations with first-year GPA (FYGPA), ranging from 0.51 to 0.67 for undergraduate tests once corrected for range restriction; SAT and ACT scores alone predict FYGPA at approximately 0.30–0.40 as observed, rising to 0.50+ when adjusted, and combining scores with high school GPA raises the correlation to 0.50–0.60.[80][75] For graduate tests like the GRE, observed correlations with first-year law GPA are 0.33 for Verbal+Quantitative, adjusting to 0.54 after correcting for selection effects.[77] Construct validity evidence includes factor analyses confirming loading on general cognitive ability ("g"), with standardized tests correlating 0.70–0.80 with other g-loaded measures, supporting their role in assessing reasoning over narrow skills.[81]
| Test Section | Reliability Coefficient (Cronbach's α or equivalent) | Source |
|---|---|---|
| SAT (overall sections) | 0.89–0.93 | [75] |
| ACT Composite (11th grade) | 0.96 | [76] |
| GRE Verbal+Quantitative | 0.96 | [77] |
| GRE Analytical Writing | 0.79 | [77] |
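A worked illustration in Python of the SEM formula given above, and of the classical (Thorndike Case 2) correction for range restriction invoked when observed validity coefficients are adjusted for selection. The standard deviation and restriction ratio are hypothetical values chosen only to mirror the magnitudes cited in the text.

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

def correct_range_restriction(r, k):
    """Correct an observed validity coefficient r for selection, where
    k = SD(applicant pool) / SD(selected group) > 1."""
    return r * k / math.sqrt(1 + r * r * (k * k - 1))

# With a hypothetical score SD of 10 and reliability of 0.96:
s = sem(sd=10.0, reliability=0.96)
print(f"SEM = {s:.2f}")                      # 2.00 score points
print(f"95% band: +/- {1.96 * s:.1f}")       # approx. +/- 3.9 points

# A hypothetical restriction ratio of 1.85 moves an observed r of 0.33
# to about 0.54, the order of adjustment described for the GRE above.
print(round(correct_range_restriction(0.33, 1.85), 2))
```

The SEM converts a reliability coefficient into score units, which is why two tests with the same reliability but different scales imply different confidence bands around a reported score.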


