Question - 1
What is representation in deep learning?
It is a different way to look at data, i.e., to represent or encode it
It gets closer to the expected output
RGB and HSV are two different examples of representations
All of the above
Show Answer
Solutions
Answer- D
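To illustrate option C, the same pixel can be encoded in two different representations. A minimal sketch using only the standard library's `colorsys` module (the pure-red pixel is just an example value):

```python
import colorsys

# The same pure-red pixel in two representations of color data:
rgb = (1.0, 0.0, 0.0)               # red, green, blue in [0, 1]
hsv = colorsys.rgb_to_hsv(*rgb)     # hue, saturation, value

print(rgb)  # (1.0, 0.0, 0.0)
print(hsv)  # (0.0, 1.0, 1.0) -> hue 0 (red), full saturation, full value
```

Neither encoding adds or removes information; each just makes different properties (channel intensities vs. hue) easy to read off.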
Question - 2
What is shallow learning in deep learning?
Machine learning tends to focus on learning only one or two layers of representations of the data
Machine learning tends to focus on learning 10 layers of representations of the data
Machine learning tends to focus on learning 512 layers of representations of the data
Machine learning tends to focus on learning 64 layers of representations of the data
Show Answer
Solutions
Answer- A
Question - 3
Which of the following best describes deep learning?
Deep learning is a biological framework for learning representations from the brain
Deep learning is an analogue framework for learning representations from data
Deep learning is a mathematical framework for learning representations from data
Deep learning is a digital framework for learning representations from data
Show Answer
Solutions
Answer- C
Question - 4
What is a loss function in deep learning?
To calculate loss in banks
To control the output of a neural network, you need to be able to measure how far this output is from what you expected
These are true targets of data
These are the predicted values only
Show Answer
Solutions
Answer- B
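The measurement described in option B can be sketched with mean squared error, one common loss function (the prediction and target values below are made up for illustration):

```python
def mse_loss(predictions, targets):
    """Mean squared error: average squared distance between output and target."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# The further the network's output drifts from the target, the larger the loss.
print(mse_loss([0.9, 0.1], [1.0, 0.0]))  # 0.01 -> close to the target
print(mse_loss([0.5, 0.5], [1.0, 0.0]))  # 0.25 -> further away
```

The loss score is the feedback signal the optimizer uses to adjust the weights.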
Question - 5
What is the full form of SVM?
Support Vector Machine
Support Vector Mechanism
Super Visual Machine
Support Vector Model
Show Answer
Solutions
Answer- A
Question - 6
The two key ideas of deep learning for computer vision:
Deep neural networks and kernel functions
Support Vector Machines and loss functions
Convolutional neural networks and backpropagation
None of these
Show Answer
Solutions
Answer- C
Question - 7
Three technical forces are driving advances in machine learning:
Super computers only
Pen and a piece of paper
Hardware, Datasets & benchmarks, and Algorithmic advances
All of the above
Show Answer
Solutions
Answer- C
Question - 8
Which of the following is a subset of machine learning?
Numpy
Scipy
Deep Learning
All of the above
Show Answer
Solutions
Answer- C
Question - 9
Which of the following statements is true when you use 1×1 convolutions in a CNN?
It suffers less overfitting due to small kernel size
It can help in dimensionality reduction
It can be used for feature pooling
All of the above
Show Answer
Solutions
Answer- D
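The dimensionality-reduction claim in option B is easy to demonstrate: a 1×1 convolution is a per-pixel linear map over channels. A minimal numpy sketch (the shapes are arbitrary example values, and random weights stand in for learned ones):

```python
import numpy as np

# A 1x1 convolution mixes channels at each pixel independently:
# it can shrink a 4x4x64 feature map to 4x4x16 without touching spatial structure.
h, w, c_in, c_out = 4, 4, 64, 16
feature_map = np.random.rand(h, w, c_in)
kernels = np.random.rand(c_in, c_out)   # one weight per (input, output) channel pair

reduced = feature_map @ kernels          # matmul broadcasts over the spatial dims
print(reduced.shape)  # (4, 4, 16)
```

The spatial dimensions are untouched; only the channel count changes, which is why 1×1 convolutions are used for feature pooling across channels.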
Question - 10
In which neural net architecture does weight sharing occur?
Convolutional neural Network
Recurrent Neural Network
Fully Connected Neural Network
Both A and B
Show Answer
Solutions
Answer- D
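Weight sharing in a convolutional layer means the same small kernel is reused at every position of the input. A minimal numpy sketch (the signal and kernel values are arbitrary):

```python
import numpy as np

# The same 3-tap kernel slides over every position of a length-8 signal,
# so the layer needs only 3 shared weights instead of a dense weight per position.
signal = np.arange(8, dtype=float)
kernel = np.array([1.0, 0.0, -1.0])      # shared across all positions

feature = np.convolve(signal, kernel, mode="valid")
print(feature.shape)   # (6,) -> one output per valid kernel position
```

An RNN shares weights in the same spirit, but across time steps rather than spatial positions.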
Question - 11
Which of the following methods DOES NOT prevent a model from overfitting to the training set?
Early stopping
Dropout
Data augmentation
Pooling
Show Answer
Solutions
Answer- D
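Dropout, one of the regularizers listed above, randomly zeroes activations during training. A minimal numpy sketch of the common "inverted dropout" variant (the rate and input are example values):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, rate=0.5):
    """Inverted dropout: randomly zero units and rescale the survivors."""
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

x = np.ones(10)
print(dropout(x))  # roughly half the entries zeroed, the rest scaled to 2.0
```

Pooling, by contrast, is a downsampling operation for building spatial invariance; it is not a regularization technique, which is why it is the correct answer here.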
Question - 12
Assume that your machine has a large enough RAM dedicated to training neural networks. Compared to using stochastic gradient descent for your optimization, choosing a batch size that fits your RAM will lead to:
a more precise but slower update.
a more precise and faster update.
a less precise and slower update.
a less precise but faster update.
Show Answer
Solutions
Answer- A
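The trade-off behind answer A can be seen on a toy objective: single-sample (SGD-style) gradient estimates are noisy, while the full-batch gradient is precise but costs a pass over every example. A sketch with made-up data (minimizing the mean of (w - x_i)^2 at w = 0, whose true gradient is -2 * mean(x)):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data centered on 3.0; at w = 0 the exact gradient is about -6.0.
data = rng.normal(loc=3.0, scale=1.0, size=10_000)
w = 0.0

def grad_estimate(batch):
    return np.mean(2 * (w - batch))

single = [grad_estimate(rng.choice(data, 1)) for _ in range(200)]
full = grad_estimate(data)

# One-sample estimates scatter widely around the true value; the full batch
# is precise, but each update touches all 10_000 examples, hence slower.
print(np.std(single))   # large spread for SGD-style estimates
print(full)             # close to -6.0
```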
Question - 13
Batch Normalization is helpful because:
It normalizes (rescales) the inputs before sending them to the next layer
It returns back the normalized mean and standard deviation of weights
It is a very efficient backpropagation technique
None of these
Show Answer
Solutions
Answer- A
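The normalization in answer A can be sketched in a few lines of numpy (without the learned scale and shift parameters that a full BatchNorm layer adds; the batch values are made up):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize a batch to zero mean and unit variance per feature."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

# Two features on very different scales...
batch = np.array([[1.0, 100.0],
                  [3.0, 300.0],
                  [5.0, 500.0]])
normed = batch_norm(batch)
print(normed.mean(axis=0))  # ~[0, 0]
print(normed.std(axis=0))   # ~[1, 1]
```

After normalization both features live on the same scale, which stabilizes the inputs seen by the next layer.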
Question - 14
What is a dead unit in a neural network?
A unit which does not respond at all to any of the training patterns
A unit which is not updated during training by any of its neighbours
The unit which produces the biggest sum-squared error
None of the above
Show Answer
Solutions
Answer- B
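The classic example of a dead unit is a ReLU neuron whose pre-activation is negative for every input: its output, and therefore its gradient, is always zero, so no update ever reaches it. A minimal numpy sketch (the weight, bias, and inputs are made-up values chosen to kill the unit):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# This unit's parameters drive the pre-activation below zero for every input,
# so its output and gradient are always zero: it never updates again.
inputs = np.array([0.2, 0.8, 1.5, 3.0])
weight, bias = -1.0, -0.5

z = weight * inputs + bias
print(relu(z))   # [0. 0. 0. 0.] -> no signal, no gradient, no learning
```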
Question - 15
Which of the following statements is the best description of early stopping?
A faster version of backpropagation, such as the 'Quickprop' algorithm
Train the network until a local minimum in the error function is reached
Add a momentum term to the weight update in the Generalized Delta Rule, so that training converges more quickly
Simulate the network on a test dataset after every epoch of training. Stop training when the generalization error starts to increase
Show Answer
Solutions
Answer- D
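The rule in answer D reduces to a small bookkeeping loop. A minimal sketch with a hypothetical per-epoch validation-loss history (the numbers are invented; a real loop would compute each loss by evaluating the model after the epoch):

```python
# Hypothetical validation losses, one per epoch: they fall, then start rising.
val_losses = [0.90, 0.70, 0.55, 0.50, 0.52, 0.58]

patience = 1                          # how many worse epochs we tolerate
best, best_epoch, bad_epochs = float("inf"), 0, 0
for epoch, loss in enumerate(val_losses):
    if loss < best:
        best, best_epoch, bad_epochs = loss, epoch, 0
    else:
        bad_epochs += 1
        if bad_epochs > patience:     # generalization error keeps climbing
            break                     # stop training, keep the best weights

print(best_epoch, best)  # the epoch to roll back to
```

Training stops at epoch 5, and the weights from epoch 3 (validation loss 0.50) would be restored.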
Question - 16
For a classification task, instead of random weight initializations in a neural network, we set all the weights to zero. Which of the following statements is true?
There will not be any problem and the neural network will train properly
The neural network will not train as there is no net gradient change
The neural network will train but all the neurons will end up recognizing the same thing
None of these
Show Answer
Solutions
Answer- C
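The symmetry problem behind answer C shows up immediately in a forward pass. A minimal numpy sketch (the input and layer sizes are arbitrary):

```python
import numpy as np

# With all-zero initialization, every hidden unit computes the same output and
# will receive the same gradient, so the units stay identical forever:
# the network trains, but all neurons end up learning the same feature.
x = np.array([1.0, 2.0])
w = np.zeros((2, 3))                  # 3 hidden units, identical (zero) weights

hidden = np.tanh(x @ w)
print(hidden)                         # [0. 0. 0.]
print(np.all(hidden == hidden[0]))    # True: perfect symmetry among units
```

Random initialization breaks this symmetry, which is why it is the default.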
Question - 17
For an image recognition problem (recognizing a cat in a photo), which architecture of neural network would be better suited to solve the problem?
Multi Layer Perceptron
Convolutional Neural Network
Recurrent Neural Network
Perceptron
Show Answer
Solutions
Answer- B
Question - 18
What is the hypothesis space in deep learning?
ML/DL algorithms merely search through a predefined set of operations, called a hypothesis space
Searching for useful representations of some input data, within a predefined space of possibilities, using guidance from a feedback signal
Both A and B
Only A
Show Answer
Solutions
Answer- C
Question - 19
What are neural networks in deep learning?
In deep learning, layered representations are (almost always) learned via models called neural networks, structured in literal layers stacked on top of each other
A neural network is a cell in the brain
These networks are the brain neurons studied in neurobiology
These are models of the human brain
Show Answer
Solutions
Answer- A
Question - 20
What is the training loop in deep learning?
Repeated a sufficient number of times (typically tens of iterations over thousands of examples), it yields weight values that minimize the loss function
With every step of the loop, the weights are adjusted a little in the correct direction, and the loss score decreases
All of the above are true
Can not say
Show Answer
Solutions
Answer- C
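Both statements above can be seen in a bare-bones training loop. A sketch for a one-parameter model y = w * x fitted by gradient descent (the data, learning rate, and step count are made-up toy values):

```python
import numpy as np

# Toy targets generated by the "true" weight w = 2.0.
xs = np.array([1.0, 2.0, 3.0])
ys = 2.0 * xs

w, lr = 0.0, 0.05
for step in range(200):
    preds = w * xs
    loss = np.mean((preds - ys) ** 2)        # loss score for this step
    grad = np.mean(2 * (preds - ys) * xs)    # dLoss/dw
    w -= lr * grad                           # small adjustment in the right direction

print(w)  # converges to ~2.0, the weight that minimizes the loss
```

Each pass adjusts the weight slightly downhill on the loss; repeated enough times, the loop settles on the loss-minimizing weight, exactly as the answer describes.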
Practice more question sets