4 votes
A dog vs. cat dataset consisting of 10,000 photos is used to train a two-layer fully connected neural network that classifies photos as containing a dog or a cat. If each photo is 16 × 16 pixels with 3 color channels, then there are ___ units in the input layer. Suppose that there are 10 hidden neurons. Then there are ___ biases and ___ weights in the neural network. Which activation function is preferred for the output layer?

1 Answer

5 votes

Final answer:

The input layer has 768 units; the connections from the input layer to the hidden layer account for 7,680 weights and 10 biases. The preferred activation function for the output layer is the sigmoid function (or, equivalently for two classes, the softmax function).

Step-by-step explanation:

The input layer of the neural network has one unit per pixel value, i.e., the number of pixels in each photo multiplied by the number of color channels: 16 × 16 × 3 = 768 units.

Each hidden neuron has one bias and one weight per input unit. Since there are 10 hidden neurons, there are 10 biases and 768 × 10 = 7,680 weights connecting the input layer to the hidden layer. (Strictly speaking, the output layer contributes a few additional parameters: a single sigmoid output unit would add 10 more weights and 1 more bias.)
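The counting above can be checked with a few lines of Python. This is a minimal sketch using the sizes from the problem statement; the single output unit is an assumption corresponding to the sigmoid variant of the output layer.

```python
# Parameter count for the two-layer network described in the question.
input_units = 16 * 16 * 3    # pixels x color channels = 768
hidden_units = 10
output_units = 1             # assumed: one sigmoid unit for dog vs. cat

# Input -> hidden layer
hidden_weights = input_units * hidden_units   # 7680
hidden_biases = hidden_units                  # 10

# Hidden -> output layer (the extra parameters noted above)
output_weights = hidden_units * output_units  # 10
output_biases = output_units                  # 1

print(input_units)     # 768
print(hidden_weights)  # 7680
print(hidden_biases)   # 10
```

A two-unit softmax output would instead add 10 × 2 = 20 weights and 2 biases, but the hidden-layer totals quoted in the answer are unchanged either way.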

The preferred activation function for the output layer in this classification task is the sigmoid function or the softmax function. The sigmoid function maps the output to a value between 0 and 1, which can be read as the probability of the input belonging to one class (say, "dog"). The softmax function produces a probability distribution over all classes and is the standard choice when there are more than two; for a binary problem like this one, a two-unit softmax is mathematically equivalent to a single sigmoid unit.
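Both output activations can be sketched in a few lines. This is an illustrative implementation, not tied to any particular framework; the function names are my own.

```python
import math

def sigmoid(z):
    # Maps any real logit z to (0, 1), read as P(class = "dog").
    return 1.0 / (1.0 + math.exp(-z))

def softmax(logits):
    # Maps a list of logits to a probability distribution over classes.
    # Subtracting the max logit first improves numerical stability.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

z = 1.3  # an example logit from the output layer
p_dog = sigmoid(z)
# A two-unit softmax with logits [z, 0] gives the same probability:
p_dog_softmax = softmax([z, 0.0])[0]
print(abs(p_dog - p_dog_softmax) < 1e-12)  # True
```

The equivalence shown at the end is why either answer is accepted for a binary classifier: softmax([z, 0]) reduces algebraically to sigmoid(z).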

Answered by Miwin (8.3k points)