Introduction
Back in 2009, deep learning was only an emerging field. Only a few people recognized it as a fruitful area of research. Today, it is being used to develop applications that were once considered difficult or impossible to build.
Speech recognition, image recognition, finding patterns in a dataset, object classification in photographs, character text generation, self-driving cars and many more are just a few examples. Hence it is important to be familiar with deep learning and its concepts.
In this skilltest, we tested our community on basic concepts of Deep Learning. A total of 1070 people participated in this skill test.
If you missed taking the test, here is your opportunity to look at the questions and check your skill level. If you are just getting started with Deep Learning, here is a course to assist you in your journey to Master Deep Learning:
- Certified AI & ML Blackbelt+ Program
Overall Results
Below is the distribution of scores; this will help you evaluate your performance:
You can access your performance here. More than 200 people participated in the skill test and the highest score was 35. Here are a few statistics about the distribution.
Overall distribution
Mean Score: 16.45
Median Score: 20
Mode Score: 0
It seems like a lot of people started the competition very late or didn't take it beyond a few questions. I am not completely sure why, but it may be because the subject is advanced for a lot of the audience.
If you have any insight on why this is so, do let us know.
Helpful Resources
Fundamentals of Deep Learning – Starting with Artificial Neural Network
Practical Guide to implementing Neural Networks in Python (using Theano)
A Complete Guide on Getting Started with Deep Learning in Python
Tutorial: Optimizing Neural Networks using Keras (with Image recognition case study)
An Introduction to Implementing Neural Networks using TensorFlow
Questions and Answers
Q1. A neural network model is said to be inspired by the human brain.
The neural network consists of many neurons, each neuron takes an input, processes it and gives an output. Here's a diagrammatic representation of a real neuron.
Which of the following statement(s) correctly represents a real neuron?
A. A neuron has a single input and a single output only
B. A neuron has multiple inputs but a single output only
C. A neuron has a single input but multiple outputs
D. A neuron has multiple inputs and multiple outputs
E. All of the above statements are valid
Solution: (E)
A neuron can have a single input/output or multiple inputs/outputs.
Q2. Below is a mathematical representation of a neuron.
The different components of the neuron are denoted as:
- x1, x2,…, xN: These are inputs to the neuron. These can either be the actual observations from the input layer or an intermediate value from one of the hidden layers.
- w1, w2,…,wN: The weight of each input.
- bi: Termed the bias unit. These are constant values added to the input of the activation function corresponding to each weight. It works similarly to an intercept term.
- a: Termed the activation of the neuron, which can be represented as
- and y: is the output of the neuron
Considering the above notations, will a line equation (y = mx + c) fall under the category of a neuron?
A. Yes
B. No
Solution: (A)
A single neuron with no non-linearity can be considered as a linear regression function.
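To make this concrete, here is a minimal sketch (the function names are my own, not from the article) showing that a single neuron with an identity activation computes exactly the line y = mx + c:

```python
import numpy as np

def neuron(x, w, b, activation=lambda z: z):
    # Weighted sum of inputs plus bias, passed through an activation.
    # The default identity activation means no non-linearity at all.
    return activation(np.dot(w, x) + b)

# A single-input neuron with weight m and bias c reproduces y = mx + c.
m, c = 2.0, 1.0
y = neuron(np.array([3.0]), np.array([m]), c)  # 2*3 + 1
```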
Q3. Let us assume we implement an AND function with a single neuron. Below is a tabular representation of an AND function:
| X1 | X2 | X1 AND X2 |
|----|----|-----------|
| 0  | 0  | 0         |
| 0  | 1  | 0         |
| 1  | 0  | 0         |
| 1  | 1  | 1         |
The activation function of our neuron is denoted as:
What would be the weights and bias?
(Hint: For which values of w1, w2 and b does our neuron follow up an AND function?)
A. Bias = -1.5, w1 = 1, w2 = 1
B. Bias = 1.5, w1 = 2, w2 = 2
C. Bias = 1, w1 = 1.5, w2 = 1.5
D. None of these
Solution: (A)
A.
- f(-1.5*1 + 1*0 + 1*0) = f(-1.5) = 0
- f(-1.5*1 + 1*0 + 1*1) = f(-0.5) = 0
- f(-1.5*1 + 1*1 + 1*0) = f(-0.5) = 0
- f(-1.5*1 + 1*1+ 1*1) = f(0.5) = 1
Therefore, option A is correct.
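The four checks above can be run directly as code; a small sketch (my own naming) with a unit-step activation and option A's parameters:

```python
def f(z):
    # Unit step activation: fires when the weighted sum is non-negative.
    return 1 if z >= 0 else 0

def and_neuron(x1, x2, b=-1.5, w1=1.0, w2=1.0):
    # Option A's parameters: bias -1.5 and both weights equal to 1.
    return f(b * 1 + w1 * x1 + w2 * x2)

# The neuron reproduces the AND truth table.
outputs = {(x1, x2): and_neuron(x1, x2) for x1 in (0, 1) for x2 in (0, 1)}
```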
Q4. A network is created when we stack multiple neurons together. Let us take an example of a neural network simulating an XNOR function.
You can see that the last neuron takes input from the two neurons before it. The activation function for all the neurons is given by:
Suppose X1 is 0 and X2 is 1, what will be the output for the above neural network?
A. 0
B. 1
Solution: (A)
Output of a1: f(0.5*1 + -1*0 + -1*1) = f(-0.5) = 0
Output of a2: f(-1.5*1 + 1*0 + 1*1) = f(-0.5) = 0
Output of a3: f(-0.5*1 + 1*0 + 1*0) = f(-0.5) = 0
So the correct answer is A
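Reading the three activations as a NOR gate, an AND gate, and an OR gate, the whole XNOR network can be sketched as follows (a small illustration under my reading of the weights, not code from the article):

```python
def f(z):
    # Unit step activation used by every neuron in the network.
    return 1 if z >= 0 else 0

def xnor_net(x1, x2):
    a1 = f(0.5 - x1 - x2)      # fires only for (0, 0): a NOR gate
    a2 = f(-1.5 + x1 + x2)     # fires only for (1, 1): an AND gate
    return f(-0.5 + a1 + a2)   # OR of a1 and a2, i.e. XNOR of x1, x2
```

For X1 = 0 and X2 = 1, both hidden neurons stay off, so the output is 0.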
Q5. In a neural network, knowing the weight and bias of each neuron is the most important step. If you can somehow get the correct values of weight and bias for each neuron, you can approximate any function. What would be the best way to approach this?
A. Assign random values and pray to God they are correct
B. Search every possible combination of weights and biases till you get the best value
C. Iteratively check how far you are from the best values after assigning a value, and slightly change the assigned values to make them better
D. None of these
Solution: (C)
Option C is the description of gradient descent.
Q6. What are the steps for using a gradient descent algorithm?
1. Calculate the error between the actual value and the predicted value
2. Repeat until you find the best weights of the network
3. Pass an input through the network and get values from the output layer
4. Initialize random weights and biases
5. Go to each neuron which contributes to the error and change its respective values to reduce the error
A. 1, 2, 3, 4, 5
B. 5, 4, 3, 2, 1
C. 3, 2, 1, 5, 4
D. 4, 3, 1, 5, 2
Solution: (D)
Option D is correct
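The ordering 4, 3, 1, 5, 2 maps onto a training loop like the following sketch, which fits a line by gradient descent (the toy data and learning rate are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)
y = 3.0 * X + 1.0                    # ground truth: y = 3x + 1

w, b = rng.normal(), rng.normal()    # step 4: initialize random weight/bias
lr = 0.1
for _ in range(500):                 # step 2: repeat until weights are good
    pred = w * X + b                 # step 3: pass inputs through the network
    err = pred - y                   # step 1: error vs the actual value
    w -= lr * np.mean(err * X)       # step 5: move each parameter against
    b -= lr * np.mean(err)           #         its contribution to the error
```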
Q7. Suppose you have inputs x, y, and z with values -2, 5, and -4 respectively. You have a neuron 'q' and a neuron 'f' with functions:
q = x + y
f = q * z
Graphical representation of the functions is as follows:
What is the gradient of F with respect to x, y, and z?
(HINT: To calculate the gradient, you must find (df/dx), (df/dy) and (df/dz))
A. (-3,4,4)
B. (4,4,3)
C. (-4,-4,3)
D. (3,-4,-4)
Solution: (C)
Option C is correct: with q = x + y = 3, we have df/dz = q = 3, and by the chain rule df/dx = df/dy = (df/dq)(dq/dx) = z = -4, giving the gradient (-4, -4, 3).
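A quick numerical cross-check of these derivatives (a sketch, not part of the original solution):

```python
# f = q * z with q = x + y, evaluated at x = -2, y = 5, z = -4.
x, y, z = -2.0, 5.0, -4.0
q = x + y                      # q = 3

# Chain rule: df/dz = q, and df/dx = df/dy = (df/dq) * 1 = z.
grad = (z, z, q)               # (-4, -4, 3), i.e. option C

# Finite-difference check on df/dx.
def f(x, y, z):
    return (x + y) * z

h = 1e-6
num_dx = (f(x + h, y, z) - f(x, y, z)) / h
```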
Q8. Now let's revise the previous slides. We have learned that:
- A neural network is a (crude) mathematical representation of a brain, which consists of small components called neurons.
- Each neuron has an input, a processing function, and an output.
- These neurons are stacked together to form a network, which can be used to approximate any function.
- To get the best possible neural network, we can use techniques like gradient descent to update our neural network model.
Given above is a description of a neural network. When does a neural network model become a deep learning model?
A. When you add more hidden layers and increase the depth of the neural network
B. When there is higher dimensionality of data
C. When the problem is an image recognition problem
D. None of these
Solution: (A)
More depth means the network is deeper. There is no strict rule on how many layers are necessary to make a model deep, but if there are more than 2 hidden layers, the model is generally said to be deep.
Q9. A neural network can be considered as multiple simple equations stacked together. Suppose we want to replicate the function for the below mentioned decision boundary.
Using two simple inputs h1 and h2
What will be the final equation?
A. (h1 AND NOT h2) OR (NOT h1 AND h2)
B. (h1 OR NOT h2) AND (NOT h1 OR h2)
C. (h1 AND h2) OR (h1 OR h2)
D. None of these
Solution: (A)
As you can see, combining h1 and h2 in an intelligent way can get you a complex equation easily. Refer to Chapter 9 of this book.
Q10. "Convolutional Neural Networks can perform various types of transformation (rotations or scaling) on an input". Is the statement True or False?
A. True
B. False
Solution: (B)
Data preprocessing steps (viz. rotation, scaling) are necessary before you feed the data to the neural network, because a neural network cannot do it itself.
Q11. Which of the following techniques performs similar operations to dropout in a neural network?
A. Bagging
B. Boosting
C. Stacking
D. None of these
Solution: (A)
Dropout can be seen as an extreme form of bagging in which each model is trained on a single case and each parameter of the model is very strongly regularized by sharing it with the corresponding parameter in all the other models. Refer here.
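For comparison with bagging, here is a minimal sketch of (inverted) dropout itself; the names and scaling convention are the common ones, not from the article:

```python
import numpy as np

def dropout(a, p=0.5, train=True, rng=np.random.default_rng(0)):
    # During training, zero each activation with probability p and rescale
    # the survivors so the expected activation value is unchanged.
    if not train:
        return a
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)

acts = np.ones(10_000)
dropped = dropout(acts, p=0.5)
```

Each forward pass with a fresh mask effectively trains a different sub-network, which is where the analogy to bagging an ensemble comes from.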
Q12. Which of the following gives non-linearity to a neural network?
A. Stochastic Gradient Descent
B. Rectified Linear Unit
C. Convolution function
D. None of the above
Solution: (B)
Rectified Linear Unit is a non-linear activation function.
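ReLU is a one-liner; the kink at zero is what breaks linearity (a small sketch):

```python
import numpy as np

def relu(z):
    # Identity for positive inputs, zero otherwise.
    return np.maximum(0.0, z)

# Non-linearity check: relu(a + b) need not equal relu(a) + relu(b).
a, b = -1.0, 2.0
lhs = relu(a + b)          # relu(1) = 1
rhs = relu(a) + relu(b)    # 0 + 2   = 2
```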
Q13. In training a neural network, you notice that the loss does not decrease in the first few starting epochs.
The reasons for this could be:
1. The learning rate is low
2. The regularization parameter is high
3. Stuck at a local minimum
Which, according to you, are the probable reasons?
A. 1 and 2
B. 2 and 3
C. 1 and 3
D. Any of these
Solution: (D)
The problem can occur due to any of the reasons mentioned.
Q14. Which of the following is true about model capacity (where model capacity means the ability of a neural network to approximate complex functions)?
A. As the number of hidden layers increases, model capacity increases
B. As dropout ratio increases, model capacity increases
C. As learning rate increases, model capacity increases
D. None of these
Solution: (A)
Only option A is correct.
Q15. If you increase the number of hidden layers in a Multi Layer Perceptron, the classification error of test data always decreases. True or False?
A. True
B. False
Solution: (B)
This is not always true. Overfitting may cause the error to increase.
Q16. You are building a neural network which gets input from the previous layer as well as from itself.
Which of the following architectures has feedback connections?
A. Recurrent Neural Network
B. Convolutional Neural Network
C. Restricted Boltzmann Machine
D. None of these
Solution: (A)
Option A is correct.
Q17. What is the sequence of the following tasks in a perceptron?
1. Initialize the weights of the perceptron randomly
2. Go to the next batch of the dataset
3. If the prediction does not match the output, change the weights
4. For a sample input, compute an output
A. 1, 2, 3, 4
B. 4, 3, 2, 1
C. 3, 1, 2, 4
D. 1, 4, 3, 2
Solution: (D)
Sequence D is correct.
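Sequence 1, 4, 3, 2 corresponds to the classic perceptron learning rule; a sketch on a tiny linearly separable dataset (the data and constants are my own):

```python
import random

random.seed(0)
# AND-gate data: linearly separable, so the perceptron rule converges.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]    # step 1: random init
b = random.uniform(-1, 1)
for _ in range(50):                                   # step 2: next pass
    for (x1, x2), target in data:
        out = 1 if w[0]*x1 + w[1]*x2 + b >= 0 else 0  # step 4: compute output
        if out != target:                             # step 3: on a mismatch,
            w[0] += (target - out) * x1               # nudge the weights
            w[1] += (target - out) * x2               # toward the correct
            b += (target - out)                       # answer

preds = [1 if w[0]*x1 + w[1]*x2 + b >= 0 else 0 for (x1, x2), _ in data]
```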
Q18. Suppose that you have to minimize the cost function by changing the parameters. Which of the following techniques could be used for this?
A. Exhaustive Search
B. Random Search
C. Bayesian Optimization
D. Any of these
Solution: (D)
Any of the above mentioned techniques can be used to change parameters.
Q19. First-order gradient descent would not work correctly (i.e. may get stuck) in which of the following graphs?
A.
B.
C.
D. None of these
Solution: (B)
This is a classic example of the saddle point problem of gradient descent.
Q20. The below graph shows the accuracy of a trained 3-layer convolutional neural network vs the number of parameters (i.e. number of feature kernels).
The trend suggests that as you increase the width of a neural network, the accuracy increases till a certain threshold value, and then starts decreasing.
What could be the possible reason for this decrease?
A. Even if the number of kernels increases, only a few of them are used for prediction
B. As the number of kernels increases, the predictive power of the neural network decreases
C. As the number of kernels increases, they start to correlate with each other, which in turn helps overfitting
D. None of these
Solution: (C)
As mentioned in option C, the possible reason could be kernel correlation.
Q21. Suppose we have a one-hidden-layer neural network as shown above. The hidden layer in this network works as a dimensionality reducer. Now instead of using this hidden layer, we replace it with a dimensionality reduction technique such as PCA.
Would the network that uses a dimensionality reduction technique always give the same output as the network with a hidden layer?
A. Yes
B. No
Solution: (B)
Because PCA works on correlated features, whereas hidden layers work on the predictive capacity of features.
Q22. Can a neural network model the mathematical function (y=1/x)?
A. Yes
B. No
Solution: (A)
Option A is true, because the activation function can be a reciprocal function.
Q23. In which neural network architecture does weight sharing occur?
A. Convolutional Neural Network
B. Recurrent Neural Network
C. Fully Connected Neural Network
D. Both A and B
Solution: (D)
Option D is correct.
Q24. Batch Normalization is helpful because
A. It normalizes (changes) all the inputs before sending them to the next layer
B. It returns the normalized mean and standard deviation of the weights
C. It is a very efficient backpropagation technique
D. None of these
Solution: (A)
To read more about batch normalization, refer to this video.
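For reference, the core of the batch normalization transform can be sketched in a few lines (gamma and beta would be learnable in a real network; this minimal version is my own):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature of the batch to zero mean and unit variance,
    # then apply the learnable scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Features on wildly different scales end up comparably scaled.
batch = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
normed = batch_norm(batch)
```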
Q25. Instead of trying to achieve absolute zero error, we set a metric called Bayes error, which is the error we hope to achieve. What could be the reason for using Bayes error?
A. Input variables may not contain complete information about the output variable
B. The system (that creates the input-output mapping) may be stochastic
C. Limited training data
D. All the above
Solution: (D)
In reality, achieving perfectly accurate prediction is a myth. So we should hope to achieve an "achievable result".
Q26. The number of neurons in the output layer should match the number of classes (where the number of classes is greater than 2) in a supervised learning task. True or False?
A. True
B. False
Solution: (B)
It depends on the output encoding. If it is one-hot encoding, then it is true. But you can have two outputs for four classes, and take the binary values as four classes (00, 01, 10, 11).
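The binary-encoding argument in this solution can be spelled out in a couple of lines (an illustration, not from the article):

```python
# Two output neurons, read as two bits, can distinguish four classes.
codes = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}

def decode(bits):
    # Interpret the pair of binary outputs as a base-2 class label.
    return bits[0] * 2 + bits[1]
```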
Q27. In a neural network, which of the following techniques is used to deal with overfitting?
A. Dropout
B. Regularization
C. Batch Normalization
D. All of these
Solution: (D)
All of the techniques can be used to deal with overfitting.
Q28. y = ax^2 + bx + c (polynomial equation of degree 2)
Can this equation be represented by a neural network of a single hidden layer with linear threshold?
A. Yes
B. No
Solution: (B)
The answer is no, because having a linear threshold restricts your neural network and, in simple terms, makes it a simple linear transformation function.
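The collapse to a linear map is easy to verify numerically: stacking two layers with no non-linearity between them is equivalent to a single linear layer, so the network can never bend to fit y = ax^2 + bx + c. (The dimensions and weights below are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)  # "hidden" layer
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)  # output layer

def two_layer_no_activation(x):
    # h = W1 x + b1 followed by y = W2 h + b2, with no non-linearity.
    return W2 @ (W1 @ x + b1) + b2

# Algebraically this is one linear map: y = (W2 W1) x + (W2 b1 + b2).
W = W2 @ W1
b = W2 @ b1 + b2
x = rng.normal(size=2)
```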
Q29. What is a dead unit in a neural network?
A. A unit which doesn't update during training by any of its neighbours
B. A unit which does not respond completely to any of the training patterns
C. The unit which produces the biggest sum-squared error
D. None of these
Solution: (A)
Option A is correct.
Q30. Which of the following statements is the best description of early stopping?
A. Train the network until a local minimum in the error function is reached
B. Simulate the network on a test dataset after every epoch of training. Stop training when the generalization error starts to increase
C. Add a momentum term to the weight update in the Generalized Delta Rule, so that training converges more quickly
D. A faster version of backpropagation, such as the `Quickprop' algorithm
Solution: (B)
Option B is correct.
Q31. What if we use a learning rate that's too large?
A. The network will converge
B. The network will not converge
C. Can't say
Solution: (B)
Option B is correct because the error rate would become erratic and explode.
Q32. The network shown in Figure 1 is trained to recognize the characters H and T as shown below:
What would be the output of the network?
A.
B.
C.
D. Could be A or B depending on the weights of the neural network
Solution: (D)
Without knowing the weights and biases of a neural network, we cannot comment on what output it would give.
Q33. Suppose a convolutional neural network is trained on the ImageNet dataset (an object recognition dataset). This trained model is then given a completely white image as input. The output probabilities for this input would be equal for all classes. True or False?
A. True
B. False
Solution: (B)
There would be some neurons which do not activate for white pixels as input. So the class outputs would not be equal.
Q34. When a pooling layer is added in a convolutional neural network, translation invariance is preserved. True or False?
A. True
B. False
Solution: (A)
Translation invariance is induced when you use pooling.
Q35. Which gradient technique is more advantageous when the data is too big to handle in RAM simultaneously?
A. Full Batch Gradient Descent
B. Stochastic Gradient Descent
Solution: (B)
Option B is correct.
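The memory argument is that stochastic (mini-batch) gradient descent only needs the current batch in RAM; a sketch on toy data (the batch size and constants are my own):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=10_000)
y = 2.0 * X - 0.5                         # ground truth: y = 2x - 0.5

w, b, lr = 0.0, 0.0, 0.05
for epoch in range(5):
    order = rng.permutation(len(X))
    for start in range(0, len(X), 32):    # one mini-batch at a time; in a
        idx = order[start:start + 32]     # real pipeline only these 32 rows
        xb, yb = X[idx], y[idx]           # would need to be loaded into RAM
        err = w * xb + b - yb
        w -= lr * np.mean(err * xb)
        b -= lr * np.mean(err)
```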
Q36. The graph represents the gradient flow of a four-hidden-layer neural network which is trained using the sigmoid activation function per epoch of training. The neural network suffers from the vanishing gradient problem.
Which of the following statements is true?
A. Hidden layer 1 corresponds to D, Hidden layer 2 corresponds to C, Hidden layer 3 corresponds to B and Hidden layer 4 corresponds to A
B. Hidden layer 1 corresponds to A, Hidden layer 2 corresponds to B, Hidden layer 3 corresponds to C and Hidden layer 4 corresponds to D
Solution: (A)
This is a description of the vanishing gradient problem. As the backprop algorithm moves toward the starting layers, learning decreases.
Q37. For a classification task, instead of random weight initialization in a neural network, we set all the weights to zero. Which of the following statements is true?
A. There will not be any problem and the neural network will train properly
B. The neural network will train, but all the neurons will end up recognizing the same thing
C. The neural network will not train, as there is no net gradient change
D. None of these
Solution: (B)
Option B is correct.
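The symmetry argument behind option B can be demonstrated on a tiny zero-initialized network: after any number of gradient steps, every hidden unit still has identical weights, so they all compute the same feature (a sketch with my own dimensions and learning rate):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A 2-3-1 network with every weight and bias initialized to zero.
W1, b1 = np.zeros((3, 2)), np.zeros(3)
W2, b2 = np.zeros((1, 3)), np.zeros(1)
x, target, lr = np.array([0.5, -1.0]), 1.0, 0.5

for _ in range(100):
    h = sigmoid(W1 @ x + b1)                 # hidden activations (all equal)
    out = sigmoid(W2 @ h + b2)[0]
    d_out = (out - target) * out * (1 - out)  # squared-error output delta
    d_hid = (W2[0] * d_out) * h * (1 - h)     # identical for every hidden unit
    W2 -= lr * d_out * h
    b2 -= lr * d_out
    W1 -= lr * np.outer(d_hid, x)            # identical row updates
    b1 -= lr * d_hid
```

Every row of W1 stays equal to every other row, so the three hidden neurons remain interchangeable: training proceeds, but they all learn the same thing.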
Q38. In that location is a plateau at first. This is happening because the neural electronic network gets stuck at topical anaestheti minima ahead going along to spheric minima.
To avoid this, which of the following strategy should work?
A. Increase the number of parameters, as the mesh would not obtain stuck at local minima
B. Diminish the scholarship rate by 10 times at the start and and so use momentum
C. Jitter the learning plac, i.e. commute the learning rate for a couple of epochs
D. None of these
Solution: (C)
Option C can make up wont to take a neural network come out of the closet of local minima in which it is stuck.
Q39. For an image recognition problem (recognizing a cat in a photo), which architecture of neural network would be better suited to solve the problem?
A. Multi Layer Perceptron
B. Convolutional Neural Network
C. Recurrent Neural Network
D. Perceptron
Solution: (B)
A Convolutional Neural Network would be better suited for image related problems because of its inherent nature of taking into account changes in nearby locations of an image.
Q40. Suppose while training, you encounter this issue. The error suddenly increases after a couple of iterations.
You determine that there must be a problem with the data. You plot the data and find the insight that the original data is somewhat skewed, and that may be causing the problem.
What will you do to deal with this challenge?
A. Normalize
B. Apply PCA and then Normalize
C. Take the log transform of the data
D. None of these
Solution: (B)
First you would remove the correlations of the data and then zero center it.
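A sketch of the suggested recipe with NumPy: PCA to remove the correlations, then normalization to zero mean and unit variance (the skewed toy data is my own construction):

```python
import numpy as np

rng = np.random.default_rng(2)
# Skewed, strongly correlated toy features standing in for the raw data.
base = rng.exponential(size=(500, 1))
X = np.hstack([base, 2.0 * base + rng.normal(scale=0.1, size=(500, 1))])

# Step 1: PCA. Project the centered data onto its principal directions,
# which leaves the components uncorrelated.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt.T

# Step 2: normalize each component to zero mean and unit variance.
X_norm = X_pca / X_pca.std(axis=0)
```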
Q41. Which of the following is a decision boundary of a Neural Network?
A) B
B) A
C) D
D) C
E) All of these
Solution: (E)
A neural network is said to be a universal function approximator, so it can theoretically represent any decision boundary.
Q42. In the graph below, we observe that the error has many "ups and downs".
Should we be worried?
A. Yes, because this means there is a problem with the learning rate of the neural network.
B. No, as long as there is a cumulative decrease in both training and validation error, we don't need to worry.
Solution: (B)
Option B is correct. In order to decrease these "ups and downs", try increasing the batch size.
Q43. What are the factors to select the depth of a neural network?
1. Type of neural network (e.g. MLP, CNN etc)
2. Input data
3. Computation power, i.e. hardware capabilities and software capabilities
4. Learning rate
5. The output function to map
A. 1, 2, 4, 5
B. 2, 3, 4, 5
C. 1, 3, 4, 5
D. All of these
Solution: (D)
All of the above factors are important to select the depth of a neural network.
Q44. Consider the scenario. The problem you are trying to solve has a small amount of data. Fortunately, you have a pre-trained neural network that was trained on a similar problem. Which of the following methodologies would you choose to make use of this pre-trained network?
A. Re-train the model for the new dataset
B. Assess on every layer how the model performs and only select a few of them
C. Fine-tune the last couple of layers only
D. Freeze all the layers except the last, re-train the last layer
Solution: (D)
If the dataset is mostly similar, the best method would be to train only the last layer, as the previous layers work as feature extractors.
Q45. An increase in the size of a convolutional kernel would necessarily increase the performance of a convolutional network. True or False?
A. True
B. False
Solution: (B)
Increasing kernel size would not necessarily increase performance. This depends heavily on the dataset.
End Notes
I hope you enjoyed taking the test and found the solutions helpful. The test focused on conceptual knowledge of Deep Learning.
We tried to clear all your doubts through this article, but if we have missed out on something, then let me know in the comments below. If you have any suggestions or improvements you think we should make in the next skilltest, let us know by dropping your feedback in the comments section.
Learn, compete, hack and get hired!
Source: https://www.analyticsvidhya.com/blog/2017/01/must-know-questions-deep-learning/