Which of the following functions can be used as the activation function in the output layer if we wish to predict the probabilities of n classes (p1, p2, ..., pn) such that the sum of p over all n classes equals 1?
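The property described (non-negative outputs that sum to 1 over all classes) is exactly what the softmax function provides. A minimal sketch using only the standard library:

```python
import math

def softmax(logits):
    """Exponentiate each logit and normalize so the outputs sum to 1."""
    m = max(logits)                                 # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)        # three non-negative values
print(sum(probs))   # sums to 1.0
```

Note that sigmoid also outputs values in (0, 1), but applied independently per class its outputs do not sum to 1, which is why softmax is the standard choice for mutually exclusive classes.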
Consider a simple MLP model with 8 neurons in the input layer, 5 neurons in the hidden layer, and 1 neuron in the output layer. What are the sizes of the weight matrices between the input and hidden layers and between the hidden and output layers?
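The shapes follow directly from the layer widths: 8×5 between input and hidden, 5×1 between hidden and output (or the transposed 5×8 and 1×5, depending on convention). A quick sketch with NumPy, using the (out_features, in_features) convention:

```python
import numpy as np

n_in, n_hidden, n_out = 8, 5, 1

# One weight matrix per pair of adjacent layers.
W1 = np.random.randn(n_hidden, n_in)   # input -> hidden: 5 x 8
W2 = np.random.randn(n_out, n_hidden)  # hidden -> output: 1 x 5

x = np.random.randn(n_in)
h = W1 @ x    # shape (5,)
y = W2 @ h    # shape (1,)
print(W1.shape, W2.shape, y.shape)
```

Either orientation stores the same 8·5 = 40 and 5·1 = 5 weights; only the matrix-multiplication order changes.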
In a neural network, finding the weight and bias of each neuron is the crucial step. If you can somehow obtain the correct values of the weights and biases for every neuron, you can approximate any function. What would be the best way to approach this?
Assign random values and pray to God they are correct
Search every possible combination of weights and biases until you find the best values
Iteratively check how far the current values are from the best values, and slightly adjust them to make them better
The input image has been converted into a matrix of size 28×28, and a kernel/filter of size 7×7 with a stride of 1 is applied. What will be the size of the convolved matrix?
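The output size follows the standard formula floor((n + 2p − k) / s) + 1; with n = 28, k = 7, stride 1, and no padding that gives (28 − 7)/1 + 1 = 22, i.e. a 22×22 matrix. A one-function sketch:

```python
def conv_output_size(n, k, stride=1, padding=0):
    """Output dimension of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * padding - k) // stride + 1

print(conv_output_size(28, 7, stride=1))  # 22
```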
The number of nodes in the input layer is 10, and in the hidden layer it is 5. The maximum number of connections from the input layer to the hidden layer is:
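In a fully connected (dense) layer, every input node links to every hidden node, so the maximum is simply the product of the two layer widths:

```python
n_input, n_hidden = 10, 5
# Fully connected: each of the 10 input nodes connects to each of the 5 hidden nodes.
max_connections = n_input * n_hidden
print(max_connections)  # 50
```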
Assume a simple MLP model with 3 input neurons and inputs = 1, 2, 3. The weights of the input neurons are 4, 5, and 6 respectively. Assume the activation function is linear with constant 3 (i.e., it multiplies its input by 3). What will be the output?
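Reading "linear with constant 3" as f(x) = 3x (the usual intent of this question), the output is 3 times the weighted sum: 3 × (1·4 + 2·5 + 3·6) = 3 × 32 = 96. The arithmetic as a sketch:

```python
inputs = [1, 2, 3]
weights = [4, 5, 6]

# Weighted sum of inputs: 1*4 + 2*5 + 3*6 = 32
weighted_sum = sum(i * w for i, w in zip(inputs, weights))

# Linear activation f(x) = 3x applied to the weighted sum
output = 3 * weighted_sum
print(output)  # 96
```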