A Brief History Of Intelligence Testing And How It's Used Today
The topic of individual human intelligence has been studied extensively throughout history. Despite significant advancements in this area, however, the scientific community still lacks a single definition that experts can agree encompasses all of the most important facets of this complex trait. Let’s dig a little deeper into popular intelligence theories throughout history, different ways in which intelligence can be measured, and what IQ tests are used for today.
What is intelligence? Key theories throughout history
Broadly, human intelligence refers to the abilities that allow a person to learn from experience, adapt to various situations, understand abstract concepts, and use learned knowledge to manipulate their environment. That said, the finer points of what intelligence is, its relevance to daily life, and how it may best be measured have been contested throughout history. Key moments in the history of intelligence theories are outlined below.
The early 1900s: The “g-factor”
One of the first major theories of intelligence was proposed by British psychologist Charles Spearman in the early 1900s. He theorized that a person’s performance on intelligence tests, or IQ tests, could be traced to a constant, underlying factor that he called the g-factor (for “general intelligence”) along with several specific factors unique to a given situation (“s-factors” or “specific factors”). He named this model the Two-Factor Theory of Intelligence, which was one of the first evidence-based intelligence theories—though it was met with some criticism even at the time. For example, L.L. Thurstone, an American psychologist, pushed back with the argument that “g” was actually an amalgamation of seven primary mental abilities.
The 1960s–1990s: Fluid vs. crystallized intelligence
The debate between Spearman and Thurstone was never definitively settled, but subsequent researchers incorporated the work of both into their own theories of intelligence. For example, American psychologist Raymond Cattell suggested that intelligence is hierarchical, with g at the top of a model of gradually narrowing abilities. He also proposed that g can be split into two distinct types of intelligence: fluid and crystallized. Fluid intelligence, also known as nonverbal intelligence, refers to critical reasoning skills that do not rely on learned knowledge. Crystallized intelligence, also known as verbal intelligence, refers to the ability to retain and use previously learned knowledge.
Another American psychologist, John Horn, later observed that crystallized intelligence tends to increase continually across the lifespan, while fluid abilities peak when a person is young and decline with age. Later, the work of Cattell and Horn was expanded on by psychologist John Carroll, who proposed a "three-stratum" model of intelligence. In this model, the first and widest stratum contains approximately 50 narrow abilities, the middle stratum contains approximately 10 broad abilities—including crystallized and fluid intelligence—and the final and topmost stratum contains only g.
The most widely accepted intelligence model today
The combined work of Cattell, Horn, and Carroll eventually led to the development of the Cattell-Horn-Carroll (CHC) Theory of Intelligence. The CHC theory is the most comprehensive and empirically supported psychometric theory of intelligence to date. It defines 16 broad abilities that are supported by 80 narrow abilities. The 16 broad abilities combined represent g. That said, modern versions of this and other theories continue to move away from the search for a measure of general intelligence, instead focusing on how underlying factors work together to enable someone to adapt to their environment.
The CHC theory is currently considered to be the most accepted intelligence model in the broader scientific community. As such, most modern intelligence tests are designed to adhere to this model. As theorists move further from the concept of g, however, researchers and psychometricians focus more on the underlying cognitive factors of intelligence and less on defining an all-encompassing measure of intelligence such as an IQ score.
The history of intelligence testing
Intelligence testing was popular in the early 20th century in large part because of Henry Goddard, a psychologist who adapted an intelligence assessment designed by French psychologist Alfred Binet. In 1908, Goddard published his version of the test, dubbed the Binet and Simon Tests of Intellectual Capacity. Goddard heavily promoted his version, marketing it to schools, physicians, and the criminal justice system. Note that his version of the test and his beliefs in general are now strongly associated with eugenics and discriminatory views and policies, as Goddard equated his measures of intelligence with morality and even inherent worth.
While Goddard is associated with the launch of the intelligence testing industry in the United States, it was one of his contemporaries—Lewis Terman—who would eventually become known as the definitive expert in the US on measuring intelligence. Terman was a psychologist and faculty member at Stanford University. He expanded Goddard's version of the Binet and Simon Tests to create the Stanford-Binet Intelligence Scales, which are currently in their fifth revision and are still used today.
Stanford-Binet Intelligence Scale
The Stanford-Binet is notable because it included the concept of an intelligence quotient (IQ), although not the same version that’s used today. Terman's original IQ measure was calculated by taking the ratio of an individual's “mental age” to their chronological age and multiplying it by 100. Mental ages were determined by a person's performance on an intelligence test. For example, if a test indicated that a 10-year-old child had a mental age of 12, their IQ would be calculated by the formula (12/10) x 100, yielding an IQ of 120.
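As an illustration of the arithmetic, Terman's ratio formula can be sketched in a few lines of Python (the function name here is our own, purely for demonstration):

```python
# Terman's original "ratio IQ": mental age divided by
# chronological age, multiplied by 100.
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    return (mental_age / chronological_age) * 100

# The example from the text: a 10-year-old child with a mental age of 12.
print(ratio_iq(12, 10))  # 120.0
```

Note that a mental age equal to the chronological age always yields exactly 100, which is why 100 was treated as the average score.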
IQ and intelligence testing today
The comparison of mental and chronological ages to define intelligence was short-lived. By the mid-20th century, the ratio IQ had been replaced with a more statistically valid measure: the deviation IQ. A deviation IQ relies on the normal distribution of test scores from different test-takers. In normally distributed data, the further a score is from the average, the lower the probability that the score will be achieved.
Modern IQ tests typically use an average score of 100 and a standard deviation of 15. A score's distance from the average, measured in standard deviations, determines its percentile: the share of test-takers who achieved the same or a lower score. For example, a person who scores 115 on such a test is exactly one standard deviation above the mean and in the 84th percentile, meaning that 84% of test-takers had a score equal to or lower than 115.
Intelligence testing is now most often used for children in school settings. It’s considered a core feature of a comprehensive psychoeducational assessment and is commonly performed when determining if a child meets the criteria for special education services. While this type of testing is still common in schools, the overall IQ score itself is often given little weight. Instead, school psychologists may use intelligence test scores to determine a student's cognitive profile. They can then use the information from the assessment to create, refine, and apply interventions to help the student succeed.
What can you learn from an IQ test?
Adult intelligence testing is still sometimes used today in businesses, prisons, hospitals, and clinical mental health practices. Trained professionals may use the information from testing to make informed decisions about a person's ability to interact with the world, solve certain problems, and learn new information. As in schools, IQ itself is rarely considered in these settings now.
Note that scientifically valid intelligence testing can generally only be performed by a qualified professional. While a quick web search will reveal plenty of free IQ tests, no freely available, self-guided assessment can test IQ reliably and accurately. One reason is that clinicians or other professionals who administer such a test will consider many factors beyond the responses to test items and will also understand how to correctly interpret IQ and other cognitive scores.
Professionally administered intelligence testing can help people make changes in their lives to improve their well-being or meet their goals. For example, consider a commonly measured cognitive trait known as working memory, or the ability to hold information in short-term storage. You might rely on working memory to remember a phone number while searching for a pen to write it down, for example. A deficit in working memory can lead to trouble planning, organizing, and carrying out daily tasks, even if all other cognitive factors are intact and functioning normally. In this case, a professional can use the data obtained from intelligence testing to confirm a working memory problem and provide recommendations to manage it.
Cognitive testing as a part of therapy
If you’re concerned about some aspect of your cognitive functioning, such as working memory, or are simply looking to learn more about yourself and your thought processes or the way you respond to your environment, you might be interested in therapy. A trained therapist can help you explore various elements of how you think and address any cognitive or mental health symptoms you may be experiencing.
Not everyone is able to attend in-person therapy sessions or feels comfortable meeting with a mental health professional face to face. In situations like these, online therapy can represent a more convenient option. Through an online therapy platform like BetterHelp, you can get matched with a licensed therapist with whom you can then meet virtually via phone, video call, and/or in-app messaging.
Research suggests that cognitive and intelligence testing can be accurately administered remotely. Studies also indicate that online therapy in general may be as effective as in-person therapy in many cases. That means you can generally choose the format that is most convenient for you if you’re interested in meeting with a therapist.
Takeaway
What is the meaning of intelligence testing?
Intelligence tests are considered psychological tests. They were first administered on a large scale by the US Army during World War I to screen recruits for general mental ability. Today, modern tests of intelligence are widespread. They aim to measure a person's capacity for reasoning and problem-solving, and an IQ (intelligence quotient) score is generally based on how well someone can reason, think critically, and learn from experience.
These are different from achievement tests or aptitude tests, which typically measure a student’s academic performance or mastery of a specific subject. Very low IQ scores could indicate a learning disability or another issue with cognitive development, while very high IQ scores could indicate advanced intellectual ability.
An intelligence test may be an individually administered test or administered to a group. Individual IQ tests include the Wechsler Test and the Stanford-Binet Scale. Group intelligence tests include the Otis-Lennon School Ability Test (OLSAT) and the Cognitive Abilities Test (CogAT).
What are five things an intelligence test measures?
Intelligence tests typically measure various aspects of cognitive functioning, including logical reasoning, problem-solving, verbal comprehension, memory, and mathematical ability. Together, these factors make up a person’s IQ test score.
Some tests include subtests that measure specific abilities, such as mental processing speed and verbal comprehension. Scores in these areas may help predict job performance in some occupations.
What is the purpose of intelligence quotient (IQ) testing in schools and educational settings?
School IQ tests and the scores they produce help educators identify students' cognitive strengths and weaknesses. They can help teachers determine whether a student needs additional support or more advanced instruction. IQ test scores can also assess mental abilities, including verbal abilities, and can help identify whether a person has an intellectual disability (formerly called mental retardation).
How does the Stanford-Binet Intelligence Scale differ from other intelligence tests?
The Stanford–Binet is among the oldest and most well-known IQ tests. Because it assesses both verbal and nonverbal skills across all age groups, it is useful for tracking cognitive development. A child’s IQ score may also be useful in diagnosing learning disabilities.
What areas of cognitive ability does the Stanford-Binet Intelligence Scale assess?
The Stanford-Binet Intelligence Scale was the first intelligence test to be widely administered in public schools. It measures five key areas: fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing, and working memory. According to the Stanford-Binet model, these areas reflect a person's cognitive strengths.
Is intelligence testing a good idea?
IQ tests can be helpful, but those who take them should understand their limits. An IQ test reflects certain cognitive skills but not artistic, emotional, or social intelligence. Keep this in mind when gauging someone's abilities.
Raw scores from full-scale IQ tests are often used to determine standard scores and percentile ranks, showing not only individual differences in scores, but also patterns among certain ethnic groups, socioeconomic classes, and more.
How accurate is an intelligence test in measuring overall intelligence?
IQ tests aren't perfect, but they can offer a reasonable estimate of certain cognitive abilities. Results of most intelligence tests can be affected by mood, health, and test anxiety. Very low or very high scores might indicate the need for further testing.
Can an intelligence quotient (IQ) change with learning and experience over time?
Because intelligence is complex, no single IQ score or test can fully measure it. IQ can change as you learn new skills (e.g., mathematical skills), go through new life experiences, or even change the way you think. However, significant changes in IQ scores are uncommon without a major life event.
Do external factors like stress or anxiety influence IQ testing results?
Anxiety, stress, insufficient sleep, and hunger can all lower performance on cognitive tasks, and thus, IQ scores. Your state of mind during the test can change how well you focus and process information, impacting your score. You might take the same test another day and get a different score.
What are the 3 most commonly used tests for intelligence?
The three most commonly used standardized tests of intellectual ability are:
Stanford-Binet Intelligence Test
Wechsler Intelligence Scale
Raven’s Progressive Matrices
Other popular standard intelligence tests used in psychological science include:
Kaufman Assessment Battery for Children
Reynolds Intellectual Assessment Scales
Woodcock-Johnson Tests of Cognitive Abilities