The article focuses on three aspects. "The first happens when a minority population is poorly represented in a data set used to train an algorithm." The second arises "when features in data are closely correlated to one another," which "makes it impossible to overcome bias by simply removing information like gender or race from the equation." The third is "when human judgment and bias are encoded into the training data itself. Remember that supervised-learning algorithms learn by iterating variations of a mathematical function until it does a good enough job representing the relationship between data." The sketch below illustrates the second and third points.
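To make those points concrete, here is a minimal sketch in Python. It is my own illustration, not code from the article: it trains a small supervised model by iterating a mathematical function, exactly in the sense described above, on invented data where a hypothetical "neutral" feature acts as a proxy for a protected attribute. Every feature name and number in it is an assumption for demonstration purposes.

```python
# Illustrative sketch (not from the article): dropping a protected attribute
# is not enough to remove bias when a correlated proxy feature remains.
# All data, names, and numbers below are invented for the demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: a protected attribute and a "neutral" feature
# (think zip code) that is strongly correlated with it.
protected = rng.integers(0, 2, n)              # group membership, 0 or 1
proxy = protected + rng.normal(0, 0.3, n)      # feature correlated with group
noise_feature = rng.normal(0, 1, n)            # genuinely unrelated feature

# Biased labels: historical decisions that depended on the protected group,
# i.e. human judgment encoded into the training data itself.
y = (0.8 * protected + 0.2 * noise_feature
     + rng.normal(0, 0.3, n) > 0.5).astype(float)

def train_logistic(X, y, lr=0.1, steps=2000):
    """Supervised learning as the article describes it: iterate variations
    of a function (here, logistic-regression weights) until it represents
    the relationship in the data well enough."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))    # current predictions
        w -= lr * X.T @ (p - y) / len(y)    # gradient step on the log loss
    return w

# Train with the protected attribute removed, but the proxy left in.
X = np.column_stack([proxy, noise_feature])
w = train_logistic(X, y)
preds = 1.0 / (1.0 + np.exp(-X @ w)) > 0.5

# The model's decisions still track the protected group via the proxy.
for g in (0, 1):
    print(f"positive rate for group {g}: {preds[protected == g].mean():.2f}")
```

Running it prints noticeably different positive rates for the two groups even though the protected attribute was never given to the model, because the correlated proxy carries the same information.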
Existing efforts to overcome statistical bias will hopefully produce robust, open-source solutions in 2018. But identifying and removing bias baked into the way these algorithms are trained can be a delicate and difficult task. We may ultimately need to reframe how we think about the role many AI tools play in society, viewing them not as neutral instruments but as convex mirrors that quantify our own biases, inequalities, and prejudices. We can choose to adjust the mathematics behind AI algorithms to achieve the future we want, but only if we first have the courage to face the hard truths of our present.