
Feature Extraction by Using Deep Learning


A multilayer perceptron (MLP) is a class of feed-forward artificial neural network. An MLP consists of at least three layers of nodes. Except for the input nodes, each node is a neuron that uses a nonlinear activation function. An MLP is trained with a supervised learning technique called back-propagation. Its multiple layers and nonlinear activations distinguish it from a linear perceptron: it can classify data that is not linearly separable.

The MLP consists of three or more layers of nonlinearly activating nodes (an input layer, an output layer, and one or more hidden layers), making it a deep neural network. Since MLPs are fully connected, each node in one layer connects with a certain weight $w_{ij}$ to every node in the following layer.
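To make the fully connected, layer-by-layer structure concrete, the following is a minimal sketch of a forward pass in NumPy; the layer sizes, random initialization, and choice of tanh are illustrative assumptions rather than details from the essay.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative layer sizes: 4 inputs, 5 hidden nodes, 3 outputs.
    n_in, n_hidden, n_out = 4, 5, 3

    # Fully connected: entry w[j, i] weights node i in one layer
    # to node j in the following layer.
    W1 = rng.normal(size=(n_hidden, n_in))   # input  -> hidden
    W2 = rng.normal(size=(n_out, n_hidden))  # hidden -> output

    def forward(x):
        """One forward pass through both weight layers."""
        v_hidden = W1 @ x             # weighted sums (induced local fields)
        y_hidden = np.tanh(v_hidden)  # nonlinear activation
        v_out = W2 @ y_hidden
        return np.tanh(v_out)

    print(forward(rng.normal(size=n_in)))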

If a multilayer perceptron has a linear activation function in all neurons, that is, a linear function that maps the weighted inputs to the output of each neuron, then linear algebra shows that any number of layers can be reduced to a two-layer input-output model. In MLPs, some neurons instead use a nonlinear activation function that was developed to model the frequency of action potentials, or firing, of biological neurons.
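A short NumPy check illustrates this collapse; the matrix shapes here are arbitrary assumptions chosen only for the demonstration.

    import numpy as np

    rng = np.random.default_rng(1)
    W1 = rng.normal(size=(5, 4))   # first linear layer
    W2 = rng.normal(size=(3, 5))   # second linear layer
    x = rng.normal(size=4)

    # With linear (identity) activations, two stacked layers...
    two_layer = W2 @ (W1 @ x)
    # ...equal a single layer whose weight matrix is the product W2 @ W1.
    one_layer = (W2 @ W1) @ x

    print(np.allclose(two_layer, one_layer))  # True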

The two common activation functions are both sigmoids, and are described by

$y(v_i) = \tanh(v_i) \quad \text{and} \quad y(v_i) = (1 + e^{-v_i})^{-1}$

The first is a hyperbolic tangent that ranges from -1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here $y_i$ is the output of the $i$th node (neuron) and $v_i$ is the weighted sum of the input connections.
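Both functions are one-liners in NumPy; the sample inputs below are there only to show the two output ranges.

    import numpy as np

    def tanh_activation(v):
        """Hyperbolic tangent: outputs range over (-1, 1)."""
        return np.tanh(v)

    def logistic_activation(v):
        """Logistic function: same S-shape, outputs range over (0, 1)."""
        return 1.0 / (1.0 + np.exp(-v))

    v = np.linspace(-4, 4, 5)          # [-4, -2, 0, 2, 4]
    print(tanh_activation(v))          # approaches -1 and 1 at the extremes
    print(logistic_activation(v))      # approaches  0 and 1 at the extremes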

Learning occurs in the perceptron by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through back-propagation, a generalization of the least mean squares algorithm in the linear perceptron.

We represent the error in output node $j$ in the $n$th data point by $e_j(n) = d_j(n) - y_j(n)$, where $d$ is the target value and $y$ is the value produced by the perceptron. The node weights are adjusted based on corrections that minimize the error in the entire output, given by

$\mathcal{E}(n) = \frac{1}{2} \sum_j e_j^2(n).$
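In code, the error and its energy for one data point are a direct transcription of these two formulas; the target and output vectors below are made-up values for illustration.

    import numpy as np

    def error_energy(d, y):
        """E(n) = 1/2 * sum_j e_j(n)^2 for a single data point n."""
        e = d - y                   # e_j(n) = d_j(n) - y_j(n)
        return 0.5 * np.sum(e ** 2)

    d = np.array([1.0, 0.0, 0.0])   # target values d_j(n)
    y = np.array([0.8, 0.1, 0.2])   # perceptron outputs y_j(n)
    print(error_energy(d, y))       # 0.5 * (0.04 + 0.01 + 0.04) = 0.045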

Using gradient descent, the change in each weight is

$\Delta w_{ji}(n) = -\eta \, \frac{\partial \mathcal{E}(n)}{\partial v_j(n)} \, y_i(n),$

where $y_i$ is the output of the previous neuron and $\eta$ is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations.

The derivative to be calculated depends on the induced local field $v_j$, which itself varies. It is easy to prove that for an output node this derivative can be simplified to

$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = e_j(n) \, \phi'(v_j(n)),$

where $\phi'$ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = \phi'(v_j(n)) \sum_k -\frac{\partial \mathcal{E}(n)}{\partial v_k(n)} \, w_{kj}(n).$


This depends on the change in weights of the $k$th nodes, which represent the output layer. So to change the hidden-layer weights, the output-layer weights change according to the derivative of the activation function, and so this algorithm represents a back-propagation of the activation function.
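Putting the pieces together, here is a minimal one-step back-propagation sketch for a single-hidden-layer MLP with tanh activations; the network shape, learning rate, and data point are illustrative assumptions, not the essay's own experiment.

    import numpy as np

    rng = np.random.default_rng(2)
    eta = 0.1                              # learning rate

    W1 = rng.normal(size=(5, 4))           # input  -> hidden
    W2 = rng.normal(size=(3, 5))           # hidden -> output

    x = rng.normal(size=4)                 # one data point n
    d = np.array([1.0, 0.0, 0.0])          # its target values d_j(n)

    phi = np.tanh
    def phi_prime(v):
        """Derivative of tanh: 1 - tanh(v)^2."""
        return 1.0 - np.tanh(v) ** 2

    # Forward pass, keeping the induced local fields v for the backward pass.
    v1 = W1 @ x
    y1 = phi(v1)
    v2 = W2 @ y1
    y2 = phi(v2)

    # Output nodes: -dE/dv_j = e_j * phi'(v_j)
    e = d - y2
    delta2 = e * phi_prime(v2)

    # Hidden nodes: -dE/dv_j = phi'(v_j) * sum_k delta_k * w_kj
    delta1 = phi_prime(v1) * (W2.T @ delta2)

    # Gradient-descent update: delta_w_ji = eta * delta_j * y_i
    W2 += eta * np.outer(delta2, y1)
    W1 += eta * np.outer(delta1, x)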
