Answer: B. Inter-rater reliability.
Step-by-step explanation:
Inter-rater reliability is a way of assessing the degree of consistency between measurements obtained by different raters.
Because of human error arising from individual differences, environmental factors, lapses in concentration, or inattention to detail, inconsistencies can appear when two people are asked to rate the same things.
Inter-rater reliability can be checked by:
1. Identifying how closely the ratings compiled by the two observers agree, and
2. Measuring the percentage of items the two raters placed in the same category. Of the 50 children, if the two raters placed 10 children in the same category, the agreement would be 10/50 = 20%.
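The percent-agreement calculation above can be sketched in a few lines of Python. The rating lists below are hypothetical illustrations constructed so that the two raters agree on exactly 10 of 50 children, matching the example.

```python
def percent_agreement(ratings_a, ratings_b):
    """Percentage of items that both raters placed in the same category."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("Both raters must rate the same set of items.")
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return 100 * matches / len(ratings_a)

# Hypothetical data: 50 children, agreement on exactly the first 10.
rater_1 = ["high"] * 10 + ["low"] * 40
rater_2 = ["high"] * 50  # agrees with rater_1 only on the first 10 children

print(percent_agreement(rater_1, rater_2))  # 20.0
```

Note that simple percent agreement does not correct for agreement expected by chance; statistics such as Cohen's kappa are often used when that correction matters.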