Final answer:
Cohen's Kappa is the statistical procedure that measures consistency among raters while accounting for chance agreement. It is more robust than a simple percent agreement calculation.
Step-by-step explanation:
The popular statistical procedure that quantifies the degree of consistency among raters is Cohen's Kappa. It is a measure of inter-rater reliability, i.e., how much agreement there is in the ratings given by judges. Unlike simple percentage agreement, Cohen's Kappa takes into account the agreement expected to occur by chance. The formula for Cohen's Kappa is:
κ = (Po - Pe) / (1 - Pe)
where Po is the relative observed agreement among raters, and Pe is the hypothetical probability of chance agreement. Because it corrects for agreement that would occur by chance alone, this measure is more robust than a simple percent agreement calculation.
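As a minimal sketch, assuming two raters with a small made-up set of yes/no ratings and that scikit-learn is installed, the kappa can be computed with scikit-learn's cohen_kappa_score and checked against the formula above:

```python
from sklearn.metrics import cohen_kappa_score

# Made-up example ratings from two hypothetical raters (8 items, yes/no labels)
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]

# Library computation of Cohen's Kappa
kappa = cohen_kappa_score(rater_a, rater_b)

# Manual computation mirroring kappa = (Po - Pe) / (1 - Pe)
labels = sorted(set(rater_a) | set(rater_b))
n = len(rater_a)
po = sum(a == b for a, b in zip(rater_a, rater_b)) / n        # observed agreement
pe = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)      # chance agreement
         for l in labels)
kappa_manual = (po - pe) / (1 - pe)

print(f"sklearn kappa: {kappa:.3f}, manual kappa: {kappa_manual:.3f}")
# Both print 0.500 for this example: Po = 0.75, Pe = 0.50
```

For these ratings the observed agreement is 0.75, the chance agreement is 0.50, so kappa = (0.75 - 0.50) / (1 - 0.50) = 0.50, illustrating how the statistic discounts agreement expected by chance.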
The other options serve different purposes and do not measure inter-rater agreement. Spearman rank correlation measures the strength of a monotonic relationship between two ranked variables, the Mann-Whitney U test compares two independent groups, and the Kruskal-Wallis test is a non-parametric method for testing whether two or more independent samples originate from the same distribution.