A _________ model can still be unfair even though it won't explicitly know which groups are being inputted into the system

A: Single attribute
B: Biased training
C: Blind attribute
D: False-negative optimized

by User Altons (6.0k points)

1 Answer

Final answer:

C: Blind attribute. A blind attribute model can still be unfair, because biases inherent in the training data and proxy attributes that correlate with protected characteristics can produce discriminatory outcomes even when the model is never told which groups are being input.

Step-by-step explanation:

A blind attribute model can still be unfair even though it does not explicitly know which groups are being input into the system. Leaving group identity out of the inputs does not remove bias: the bias can be inherent in the training data itself, and proxy attributes that correlate with protected characteristics (for example, zip code correlating with race) can carry group information into the model, leading to discriminatory outcomes. For instance, if a model is trained on historic loan data that contains bias against certain groups, the model may inadvertently reproduce and reinforce that bias without ever seeing a group label.
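A minimal sketch of that proxy effect, using entirely synthetic, invented data: the model is trained only on a zip-code-style feature and income, never on the group label, yet it learns different approval rates for the two groups because the zip code correlates with group membership. All feature names and numbers here are hypothetical, not taken from any real dataset.

```python
# Hypothetical sketch: an attribute-blind model can still discriminate
# when a proxy feature correlates with the protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute: group 0 or 1 (never shown to the model).
group = rng.integers(0, 2, size=n)

# Proxy feature: 90% of the time the zip region matches the group.
zip_region = np.where(rng.random(n) < 0.9, group, 1 - group)

# Income is distributed the same way in both groups.
income = rng.normal(50, 10, size=n)

# Historic labels encode past bias: group 1 applicants were approved
# less often at the same income level.
logit = 0.1 * (income - 50) - 1.5 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train only on the attribute-blind features: zip region and income.
X = np.column_stack([zip_region, income])
model = LogisticRegression().fit(X, approved)

# The model's decisions still differ sharply by group, because the
# zip-code proxy carries the group information into the model.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```

Running this prints a noticeably lower approval rate for group 1 than for group 0, even though the group label is never a model input.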

by User Yzmir Ramirez (7.4k points)