
Feature Extraction by Using Deep Learning


A multilayer perceptron (MLP) is a class of feed-forward artificial neural network. An MLP consists of at least three layers of nodes. Except for the input nodes, each node is a neuron that uses a nonlinear activation function. An MLP utilizes a supervised learning technique called back-propagation for training.[39] Its multiple layers and nonlinear activations distinguish an MLP from a linear perceptron and allow it to distinguish data that is not linearly separable.[40]

The MLP consists of three or more layers (an input layer, one or more hidden layers of nonlinearly activating nodes, and an output layer), making it a deep neural network. Since MLPs are fully connected, each node in one layer connects with a certain weight $w_{ij}$ to every node in the following layer.
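As a concrete illustration (a minimal sketch, not taken from the essay; the layer sizes, random weights, and choice of tanh activation are all assumptions), the fully connected forward pass can be written in a few lines of NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: input, hidden, output.
layer_sizes = [4, 8, 3]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Propagate x through the network: every node in one layer feeds
    every node in the next through a weight w_ij."""
    for W, b in zip(weights, biases):
        x = np.tanh(x @ W + b)  # weighted sum, then nonlinear activation
    return x

print(forward(rng.standard_normal(4)))
```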

If a multilayer perceptron has a linear activation function in all neurons (that is, a linear function that maps the weighted inputs to the output of each neuron), then linear algebra shows that any number of layers can be reduced to a two-layer input-output model. This is why, in MLPs, some neurons use a nonlinear activation function, originally developed to model the frequency of action potentials, or firing, of biological neurons.
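This collapse is easy to check numerically. The following sketch (the matrices here are made-up examples) shows that two stacked linear layers compute exactly the same function as the single product matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((5, 4))  # first linear layer
W2 = rng.standard_normal((3, 5))  # second linear layer
x = rng.standard_normal(4)

two_layer = W2 @ (W1 @ x)          # layer by layer, identity activation
one_layer = (W2 @ W1) @ x          # the collapsed two-layer equivalent
print(np.allclose(two_layer, one_layer))  # True
```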

The two common activation functions are both sigmoids, and are described by

$$y(v_i) = \tanh(v_i) \quad \text{and} \quad y(v_i) = (1 + e^{-v_i})^{-1}$$

The first is a hyperbolic tangent that ranges from -1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here $y_i$ is the output of the $i$th node (neuron) and $v_i$ is the weighted sum of the input connections.
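For reference, both activations, along with their derivatives (which the back-propagation steps below will need), can be sketched in NumPy as follows; the function names are our own:

```python
import numpy as np

def tanh_act(v):
    return np.tanh(v)                # range (-1, 1)

def logistic(v):
    return 1.0 / (1.0 + np.exp(-v))  # range (0, 1)

def tanh_prime(v):
    return 1.0 - np.tanh(v) ** 2     # derivative of tanh

def logistic_prime(v):
    y = logistic(v)
    return y * (1.0 - y)             # derivative of the logistic function

v = np.linspace(-3, 3, 7)
print(tanh_act(v), logistic(v), sep="\n")
```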

Learning occurs in the perceptron by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and it is carried out through back-propagation, a generalization of the least-mean-squares algorithm of the linear perceptron.

We represent the error in output node $j$ for the $n$th data point by $e_j(n) = d_j(n) - y_j(n)$, where $d$ is the target value and $y$ is the value produced by the perceptron. The node weights are adjusted based on corrections that minimize the error in the entire output, given by

$$\mathcal{E}(n) = \frac{1}{2} \sum_j e_j^2(n).$$
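A tiny worked example (the target and output vectors are invented for illustration) computes both quantities directly:

```python
import numpy as np

d = np.array([1.0, 0.0, 0.0])  # target values d_j(n), made up
y = np.array([0.8, 0.1, 0.3])  # perceptron outputs y_j(n), made up

e = d - y                      # error e_j(n) at each output node
E = 0.5 * np.sum(e ** 2)       # total error E(n) for this data point
print(e, E)
```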

Using gradient descent, the change in each weight is

$$\Delta w_{ji}(n) = -\eta \frac{\partial \mathcal{E}(n)}{\partial v_j(n)}\, y_i(n)$$

where $y_i$ is the output of the previous neuron and $\eta$ is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations.

The derivative to be calculated depends on the induced local field $v_j$, which itself varies. It is easy to prove that for an output node this derivative can be simplified to

$$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = e_j(n)\,\phi'(v_j(n))$$

where $\phi'$ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

$$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = \phi'(v_j(n)) \sum_k -\frac{\partial \mathcal{E}(n)}{\partial v_k(n)}\, w_{kj}(n)$$

This depends on the change in weights of the $k$th nodes, which represent the output layer. So to change the hidden-layer weights, the output-layer weights change according to the derivative of the activation function, and so this algorithm represents a back-propagation of the activation function.[41]
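The derivation above translates almost line for line into code. The following is a minimal sketch for a single hidden layer, assuming logistic activations, made-up layer sizes, and one fixed training example; it implements the output-node delta $e_j(n)\,\phi'(v_j(n))$, the hidden-node delta $\phi'(v_j(n))\sum_k \delta_k(n)\, w_{kj}(n)$, and the gradient-descent update $\Delta w_{ji}(n) = \eta\,\delta_j(n)\,y_i(n)$:

```python
import numpy as np

rng = np.random.default_rng(2)

def phi(v):
    """Logistic activation."""
    return 1.0 / (1.0 + np.exp(-v))

def phi_prime(v):
    """Derivative of the logistic activation."""
    y = phi(v)
    return y * (1.0 - y)

# Hypothetical layer sizes and a single made-up training example.
n_in, n_hid, n_out = 4, 6, 2
W1 = rng.standard_normal((n_in, n_hid)) * 0.5   # w_ij, input -> hidden
W2 = rng.standard_normal((n_hid, n_out)) * 0.5  # w_kj, hidden -> output
eta = 0.1                                       # learning rate
x = rng.standard_normal(n_in)                   # one data point
d = np.array([1.0, 0.0])                        # its target values

for step in range(1000):
    # Forward pass: induced local fields v, then activations y.
    v1 = x @ W1;  y1 = phi(v1)
    v2 = y1 @ W2; y2 = phi(v2)

    # Output layer: e_j(n) = d_j(n) - y_j(n), delta_j = e_j * phi'(v_j).
    e = d - y2
    delta2 = e * phi_prime(v2)

    # Hidden layer: delta_j = phi'(v_j) * sum_k delta_k * w_kj.
    delta1 = phi_prime(v1) * (W2 @ delta2)

    # Gradient-descent updates: delta_w_ji = eta * delta_j * y_i.
    W2 += eta * np.outer(y1, delta2)
    W1 += eta * np.outer(x, delta1)

print("final output:", y2, "target:", d)
```

Run for enough steps, the output converges toward the target, which is exactly the error-minimization behaviour the derivation describes.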
