Final answer:
Because intelligence is typically inferred from behavior on a test, its measurement must be valid. Intelligence testing dates to the late 1800s, and the tests have been continually revised to improve their accuracy and equity. Despite these advances, the validity of IQ tests and their application across domains remain debated.
Step-by-step explanation:
Because "intelligence" is often inferred from behavior on a test, its measurement must be valid. A measurement is considered valid if it accurately represents the concept it intends to measure. The long history of intelligence tests dates back to the late 1800s, with significant contributions by significant figures like Sir Francis Galton and Alfred Binet. Binet's work, in particular, led to the development of tests that were normed and standardized, to provide a consistent measure of intelligence based on a bell curve of scores within the population.
Over time, IQ testing has evolved to become more equitable and accurate, as researchers continue to refine the tests to capture a broader range of cognitive abilities. However, the validity and reliability of IQ tests remain contentious, with ongoing debate about which specific skills they assess and about the implications of using IQ scores in settings such as education and the legal system. The operational definition of intelligence must also be reliable, meaning the test should yield consistent results when it is administered repeatedly or scored by different examiners.
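Reliability is commonly quantified as the correlation between scores from two administrations of the same test (test-retest reliability). The sketch below uses made-up scores purely for illustration:

from statistics import correlation  # available in Python 3.10+

# Hypothetical scores for the same five people tested twice.
first_administration = [98, 112, 105, 87, 120]
second_administration = [101, 110, 108, 85, 118]

# Test-retest reliability is the Pearson correlation between the two
# administrations; values near 1.0 indicate consistent measurement.
r = correlation(first_administration, second_administration)
print(f"Test-retest reliability: {r:.2f}")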
IQ testing is prevalent in educational and clinical settings for identifying individual needs, and it is also used in legal contexts. Yet the history of these tests includes darker periods in which they were used to support harmful ideologies such as eugenics. Today, administration of IQ tests is regulated, ensuring that only trained professionals conduct and interpret the assessments. The concept of intelligence itself has also evolved, with theories ranging from Aristotle's ancient views to those of twentieth-century psychologists such as Charles Spearman, who emphasized a general intelligence factor, 'g'.
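Spearman inferred 'g' from the pattern of positive correlations among diverse mental tests. As a rough modern illustration (not Spearman's original factor-analytic method), the first principal component of a simulated subtest battery can stand in for a general factor; all names and numbers here are hypothetical:

import numpy as np

rng = np.random.default_rng(0)

# Simulate four subtests that all draw on one shared latent ability,
# mirroring the "positive manifold" Spearman observed across tests.
n_people = 500
latent_g = rng.normal(size=n_people)
subtests = np.column_stack(
    [0.7 * latent_g + 0.3 * rng.normal(size=n_people) for _ in range(4)]
)

# The first principal component of the subtest correlation matrix is a
# common stand-in for a general factor: every subtest loads on it.
# (The sign of an eigenvector is arbitrary.)
corr = np.corrcoef(subtests, rowvar=False)
_, eigenvectors = np.linalg.eigh(corr)  # eigenvalues in ascending order
general_factor_loadings = eigenvectors[:, -1]
print("Subtest loadings on the first component:",
      np.round(general_factor_loadings, 2))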