Final answer:
The original Stanford-Binet defined IQ as the intelligence quotient: mental age divided by chronological age, multiplied by 100. Modern tests instead report IQ on a standardized bell curve with an average score of 100 and a standard deviation of 15, so a score of 130 or above (two standard deviations above the mean) indicates superior intelligence. Contemporary tests such as the WAIS-IV and WISC-V are periodically recalibrated to keep these norms accurate.
Step-by-step explanation:
For the original version of the Stanford-Binet, IQ was defined as the intelligence quotient, a score earned on a test designed to measure intelligence. The test was first developed by Alfred Binet and later revised and standardized by Lewis Terman at Stanford University; in that original version, the quotient was calculated as mental age divided by chronological age, multiplied by 100. Modern IQ scores are instead placed on a normed, standardized bell curve, with the average score set at 100 and one standard deviation equal to 15 IQ points. Scores are dispersed around this mean: 115 is one standard deviation above the mean, 85 is one standard deviation below, and a score of 130 or above (two standard deviations above the mean) indicates superior intelligence.
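Worked through in code, that scaling looks like the sketch below (a minimal Python example, assuming only the mean of 100 and the 15-point standard deviation described above; the function name and the percentile printout are just for illustration):

    from statistics import NormalDist

    # Deviation IQ is modeled as a normal (bell) curve with mean 100 and SD 15.
    iq_curve = NormalDist(mu=100, sigma=15)

    def iq_from_z(z):
        # Each standard deviation is worth 15 points above or below the mean of 100.
        return 100 + 15 * z

    for z in (-1, 0, 1, 2):
        iq = iq_from_z(z)
        percentile = iq_curve.cdf(iq) * 100
        print(f"z = {z:+d}: IQ {iq:.0f}, above roughly {percentile:.0f}% of test takers")

    # Prints approximately: IQ 85 (16%), IQ 100 (50%), IQ 115 (84%), IQ 130 (98%),
    # matching the one- and two-standard-deviation scores discussed above.

The percentile line simply reads off the bell curve: a score of 130 sits above roughly 98% of test takers, which is why it is treated as the conventional cutoff for superior intelligence.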
Arthur Jensen's controversial research suggested that intelligence has genetic influences and consists of two types of ability, Level I (associative learning) and Level II (conceptual reasoning), with Level II showing differences among ethnic groups in his findings. This stirred significant debate and claims of racial bias in his work. Modern IQ tests, including those developed by Wechsler, are widely used and periodically recalibrated to maintain their accuracy and standardization. Wechsler's tests, such as the WAIS-IV and WISC-V, measure a variety of verbal and nonverbal skills, reflecting his definition of intelligence as a global capacity to act purposefully and think rationally.