Intelligence quotient

An intelligence quotient (IQ) is a total score derived from a set of standardized tests or subtests designed to assess human intelligence. Originally, IQ was computed by dividing a person's mental age, as estimated by an intelligence test, by the person's chronological age; the resulting fraction (quotient) was multiplied by 100 to obtain the IQ score. For modern IQ tests, the raw score is transformed to a normal distribution with mean 100 and standard deviation 15. As a result, approximately two-thirds of the population score between IQ 85 and IQ 115, and about 2 percent each score above 130 and below 70.
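As an illustrative sketch of the two scoring conventions just described (the symbols MA and CA for mental and chronological age, and x, μ, σ for an individual raw score and the norming sample's mean and standard deviation, are chosen here for exposition, not taken from any particular test manual):

```latex
\[
\mathrm{IQ}_{\text{ratio}} = 100 \times \frac{\mathrm{MA}}{\mathrm{CA}},
\qquad
\mathrm{IQ}_{\text{deviation}} = 100 + 15 \cdot \frac{x - \mu}{\sigma}
\]
```

For example, a mental age of 12 at chronological age 10 yields a ratio IQ of 120, while a raw score one standard deviation above the norming mean yields a deviation IQ of 115, consistent with roughly two-thirds of scores falling between 85 and 115.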

Scores from intelligence tests are estimates of intelligence. Unlike quantities such as distance and mass, intelligence cannot be measured concretely, given the abstract nature of the concept. IQ scores have been shown to be associated with factors such as nutrition, parental socioeconomic status, morbidity and mortality, and perinatal environment. While the heritability of IQ has been studied for nearly a century, there is still debate over the significance of heritability estimates and the mechanisms of inheritance. The best current estimates attribute roughly 40 to 60% of the variance in IQ between individuals to genetic differences.
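To clarify what such a figure means, heritability is standardly defined as the share of phenotypic variance in a population attributable to genetic variance; as a definitional sketch (the simple additive decomposition below ignores gene–environment covariance and interaction terms):

```latex
\[
h^2 = \frac{\sigma^2_G}{\sigma^2_P},
\qquad
\sigma^2_P = \sigma^2_G + \sigma^2_E
\]
```

Under this decomposition, the 40 to 60% figure corresponds to h² between 0.4 and 0.6 for the populations studied; it is a population statistic, not a statement about any individual's score.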

IQ scores are used for educational placement, assessment of intellectual ability, and evaluation of job applicants. In research contexts, they have been studied as predictors of job performance and income. They are also used to study the distribution of psychometric intelligence in populations and its correlations with other variables. Raw scores on IQ tests for many populations have been rising at an average rate of about three IQ points per decade since the early 20th century, a phenomenon called the Flynn effect. Investigation of the different patterns of increase across subtest scores can also inform research on human intelligence.
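To make the size of the Flynn effect concrete, here is a minimal sketch assuming a constant gain of three points per decade; the function name and the constant-rate assumption are illustrative, since the observed rate varies by country, era, and subtest:

```python
GAIN_PER_DECADE = 3.0  # assumed average gain in IQ points per decade

def flynn_drift(norm_year: int, test_year: int) -> float:
    """Approximate points by which the population mean has risen between
    the year a test was normed and the year it is taken, under the
    constant-gain assumption."""
    return GAIN_PER_DECADE * (test_year - norm_year) / 10.0

# A score of 100 against 1990 norms sits roughly 9 points above where
# the same raw performance would score against 2020 norms.
print(flynn_drift(1990, 2020))  # -> 9.0
```

This drift is why test publishers periodically renorm their tests: scores are always relative to a norming sample from a particular year.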

Historically, many proponents of IQ testing were eugenicists who used pseudoscience to promote now-debunked claims of racial hierarchy in order to justify segregation and oppose immigration. Such views have been rejected by a strong consensus of mainstream science, though fringe figures continue to promote them in pseudo-scholarship and popular culture.

Historically, there were attempts to classify people into intelligence categories by observing their behavior in daily life. Such behavioral observation remains important for validating classifications based primarily on IQ test scores. Both intelligence classification by observation of behavior outside the testing room and classification by IQ testing depend on the definition of "intelligence" used in a particular case and on the reliability and error of estimation in the classification procedure.

The English statistician Francis Galton (1822–1911) made the first attempt at creating a standardized test for rating a person's intelligence. A pioneer of psychometrics and of the application of statistical methods to the study of human diversity and the inheritance of human traits, he believed that intelligence was largely a product of heredity (by which he did not mean genes, although he did develop several pre-Mendelian theories of particulate inheritance). He hypothesized that intelligence should correlate with other observable traits such as reflexes, muscle grip, and head size. He set up the world's first mental testing center in 1882, and in 1883 he published "Inquiries into Human Faculty and Its Development", in which he set out his theories. After gathering data on a variety of physical variables, he was unable to show any such correlation and eventually abandoned this research.

French psychologist Alfred Binet and psychiatrist Théodore Simon had more success in 1905, when they published the Binet–Simon intelligence test, which focused on verbal abilities. It was intended to identify "mental retardation" in school children, in specific contradistinction to claims made by psychiatrists that these children were "sick" (not "slow") and should therefore be removed from school and cared for in asylums. The score on the Binet–Simon scale revealed the child's mental age. For example, a six-year-old child who passed all the tasks usually passed by six-year-olds, but nothing beyond, would have a mental age matching his chronological age, 6.0 (Fancher, 1985). Binet and Simon thought that intelligence was multifaceted, but came under the control of practical judgment.
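A minimal sketch of this mental-age idea, with invented task data (the real 1905 scale awarded graded credit for items passed above the basal level, and the quotient itself came later, proposed by William Stern and popularized by Lewis Terman):

```python
def mental_age(passed_by_level: dict[int, bool]) -> int:
    """Simplified Binet–Simon-style scoring: take the highest age level,
    counting upward, at which the child passes the tasks, stopping at
    the first failure. Actual scoring gave partial credit in months."""
    age = 0
    for level in sorted(passed_by_level):
        if not passed_by_level[level]:
            break
        age = level
    return age

# The six-year-old from the example above: passes every level through 6,
# nothing beyond.
results = {3: True, 4: True, 5: True, 6: True, 7: False}
print(mental_age(results))            # -> 6
print(100 * mental_age(results) / 6)  # -> 100.0, the later ratio IQ at age 6
```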

In Binet and Simon's view, there were limitations with the scale, and they stressed what they saw as the remarkable diversity of intelligence and the consequent need to study it using qualitative, as opposed to quantitative, measures (White, 2000). American psychologist Henry H. Goddard published a translation of the test in 1910. American psychologist Lewis Terman at Stanford University revised the Binet–Simon scale, producing the Stanford revision of the Binet–Simon Intelligence Scale (1916), later known as the Stanford–Binet. It became the most popular test in the United States for decades.
