Final answer:
Researchers use Cohen's kappa to calculate inter-rater reliability, which is a measure of the consistency of judgments made by different raters.
Step-by-step explanation:
The formula researchers use to calculate inter-rater reliability is Cohen's kappa. This statistic is specifically designed to assess the level of agreement between two raters who are categorizing, rating, or making decisions about the same items, while correcting for agreement that would occur by chance: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the proportion of agreement expected by chance (extensions such as Fleiss' kappa handle more than two raters). Inter-rater reliability is important because it indicates how much consistency there is in the ratings given by different judges.
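As a rough illustration, kappa can be computed directly from two raters' labels. The ratings below are made up for the example; in practice, libraries such as scikit-learn provide a ready-made cohen_kappa_score function.

```python
import numpy as np

# Hypothetical labels assigned by two raters to the same 10 items
rater1 = np.array(["yes", "no", "yes", "yes", "no", "no", "yes", "no", "yes", "yes"])
rater2 = np.array(["yes", "no", "no",  "yes", "no", "yes", "yes", "no", "yes", "no"])

# Observed agreement: proportion of items the raters labeled identically
p_o = np.mean(rater1 == rater2)

# Chance agreement: sum over categories of the product of each rater's
# marginal proportion for that category
categories = np.union1d(rater1, rater2)
p_e = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)

kappa = (p_o - p_e) / (1 - p_e)
print(f"p_o = {p_o:.2f}, p_e = {p_e:.2f}, kappa = {kappa:.2f}")  # 0.70, 0.50, 0.40
```

A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and negative values indicate agreement worse than chance.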
Other measures mentioned, such as the Pearson correlation coefficient and Cronbach's alpha, are also used in statistics but serve different purposes. The Pearson correlation coefficient measures the linear relationship between two continuous variables; two raters can be perfectly correlated yet never give the same rating, so correlation alone does not establish agreement (the sketch below illustrates this). Cronbach's alpha assesses internal consistency, that is, how closely related a set of test items are as a group. Standard deviation measures the amount of variation or dispersion in a set of values, but it is not a measure of inter-rater reliability.
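A minimal sketch of that distinction, using made-up ratings in which one rater is consistently two points higher than the other: the Pearson correlation is perfect, yet the raters never assign the same score.

```python
import numpy as np

# Hypothetical 1-5 ratings where rater B is systematically 2 points above rater A
rater_a = np.array([1, 2, 3, 1, 2, 3])
rater_b = rater_a + 2  # perfect linear relationship, but no exact agreement

r = np.corrcoef(rater_a, rater_b)[0, 1]        # Pearson correlation: 1.0
exact_agreement = np.mean(rater_a == rater_b)  # proportion of identical ratings: 0.0

print(f"Pearson r = {r:.2f}, exact agreement = {exact_agreement:.2f}")
```

This is why agreement-based statistics like Cohen's kappa, rather than correlation, are the standard choice for inter-rater reliability with categorical judgments.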