Final answer:
The statement is false: norm-referenced measures compare a student's performance to a norm group, not to a predetermined proficiency cut score. Proficiency standards vary across states, while norm-referenced tests establish their benchmarks from a large, representative sample.
Step-by-step explanation:
The statement that norm-referenced measures determine proficiency by comparing students' scores to a predetermined proficient score is false. Norm-referenced tests assess a student's performance relative to a representative group of peers (the norm group). Scores are usually reported as percentile ranks or standard scores, which indicate how well a student did compared to the norm group. Criterion-referenced measures, by contrast, assess whether a student has reached a certain level of mastery in a skill or set of skills, based on a predetermined standard or benchmark.
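The distinction can be sketched in a few lines of Python. The scores, norm group, and cutoff below are all hypothetical, chosen only to show that the same raw score yields two very different kinds of interpretation:

```python
def criterion_referenced(score, cutoff):
    """Proficiency judged against a predetermined benchmark."""
    return "proficient" if score >= cutoff else "not proficient"

def norm_referenced(score, norm_group):
    """Percentile rank: share of the norm group scoring below this score."""
    below = sum(1 for s in norm_group if s < score)
    return round(100 * below / len(norm_group))

# Hypothetical peer scores and cutoff
norm_group = [52, 58, 61, 64, 67, 70, 73, 76, 81, 88]
score = 70

print(criterion_referenced(score, cutoff=75))  # fixed standard -> "not proficient"
print(norm_referenced(score, norm_group))      # relative standing -> 50th percentile
```

The same raw score of 70 fails a criterion-referenced cutoff of 75 yet sits at the 50th percentile of this norm group, which is exactly why the two measures answer different questions.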
There's an ongoing debate about measuring student proficiency, because states use different tests with varying standards. This inconsistency produces discrepancies in which students are rated proficient on a state test yet fall short on a federal assessment of the same subject. The lack of uniformity in scoring and interpretation across states calls into question the reliability and fairness of these assessments and muddies the overall picture of student achievement.
Norming a test involves administering it to a large, representative sample and using that data to establish benchmarks. Test-takers' scores are then compared to these benchmarks rather than to an absolute standard of proficiency. The Stanford-Binet Intelligence Scale is a well-known norm-referenced test. Universities often rely on percentile ranks for comparisons, such as requiring an SAT score at or above a certain percentile for admission.
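The norming process above can be sketched as a two-step procedure: sort the standardization sample once, then place each new test-taker against it. The sample scores and the 75th-percentile admissions rule below are hypothetical illustrations, not real test data:

```python
from bisect import bisect_left

def build_norms(sample):
    """Norming: sort the standardization sample so later
    test-takers can be placed against it."""
    return sorted(sample)

def percentile_rank(norms, score):
    """Percent of the norm sample scoring strictly below `score`."""
    return round(100 * bisect_left(norms, score) / len(norms))

# Hypothetical standardization sample
norms = build_norms([48, 55, 59, 60, 63, 66, 68, 71, 74, 77, 80, 85])

# An admissions rule like "at or above the 75th percentile" is a
# comparison to the norm group, not to an absolute proficiency standard.
applicant = 77
print(percentile_rank(norms, applicant))  # 75
```

Note that the benchmark itself comes entirely from the sample: if the norm group's scores shift, every test-taker's percentile shifts with them, even though no one's raw score changed.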