Final answer:
In artificial neural networks, biases are added to neurons to provide them with an additional degree of freedom, allowing a neuron to activate even when all of its inputs are zero. They are adjusted along with the weights during training and are essential for fitting patterns whose decision boundaries do not pass through the origin. Therefore, the correct answer is option b)
Step-by-step explanation:
In an artificial neural network, weights are parameters that determine the strength of the connection between units, or neurons, while a bias term is added to each neuron to give it an additional degree of freedom. The network requires biases because they allow adjustments that do not depend purely on the input values.
Think of the bias as the offset or intercept in a linear equation, a term that is not influenced by the inputs at all. This means that even if every input has a value of zero, the neuron can still fire, provided the bias is set appropriately. The bias is not a reaction to adjustments in the connection weights, nor is it true that biases cannot be adjusted through supervised learning.
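A minimal sketch in Python can make this concrete. It assumes a sigmoid activation, and the names x, w, and b are purely illustrative:

```python
import numpy as np

# A single neuron: output = activation(w . x + b).
def neuron(x, w, b):
    z = np.dot(w, x) + b             # weighted sum plus bias (the "intercept")
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid activation

x = np.zeros(3)                  # all inputs are zero
w = np.array([0.5, -0.2, 0.8])   # weights contribute nothing here, since x = 0
b = 2.0                          # a sufficiently large bias

print(neuron(x, w, b))           # ~0.88: the neuron still fires despite zero input
```

With b = 0 the same neuron would output exactly 0.5 regardless of the weights, which illustrates why the bias, not the weights, controls the neuron's behavior at zero input.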
In fact, during the training process, both weights and biases are adjusted by the learning algorithm to minimize the loss function. Biases are crucial because they let the network shift its activation thresholds and decision boundaries away from the origin (the non-linearity itself comes from the activation function), making the model capable of fitting non-trivial problems.
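To illustrate how the bias gets its own update during training, here is a minimal gradient-descent sketch on a squared-error loss; the data and learning rate are made up for the example:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])      # target rule: y = 2x + 1, so b should approach 1

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    pred = w * x + b
    err = pred - y
    grad_w = 2 * np.mean(err * x)  # dL/dw for mean squared error
    grad_b = 2 * np.mean(err)      # dL/db: the bias has its own gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))    # ~2.0 and ~1.0
```

Note that without the bias term this model could only fit lines through the origin and would never recover the intercept of 1, which is exactly the extra degree of freedom described above.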