About this sample
Words: 746 | Pages: 2 | 4 min read
Published: Jan 4, 2019
A multilayer perceptron (MLP) is a class of feed-forward artificial neural network. An MLP consists of at least three layers of nodes. Except for the input nodes, each node is a neuron that uses a nonlinear activation function. MLP utilizes a supervised learning technique called back-propagation for training. Its multiple layers and non-linear activation distinguish the MLP from a linear perceptron. It can distinguish data that is not linearly separable.
The MLP consists of three or more layers (an input layer, an output layer, and one or more hidden layers) of nonlinearly activating nodes, making it a deep neural network. Since MLPs are fully connected, each node in one layer connects with a certain weight $w_{ij}$ to every node in the following layer.
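As an illustration of this fully connected structure, a minimal sketch of a forward pass through such a network might look as follows; the layer sizes, random example weights, and tanh activation are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example sizes: 3 input nodes, 4 hidden nodes, 2 output nodes (arbitrary).
n_in, n_hidden, n_out = 3, 4, 2

# Fully connected: W1[i, j] is the weight w_ij from input node j to hidden node i.
W1 = rng.normal(size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))
b2 = np.zeros(n_out)

def forward(x):
    """Forward pass through one hidden layer with a nonlinear activation."""
    v1 = W1 @ x + b1   # weighted sums (induced local fields) of the hidden layer
    y1 = np.tanh(v1)   # nonlinear activation
    v2 = W2 @ y1 + b2  # weighted sums of the output layer
    return np.tanh(v2)

print(forward(np.array([0.5, -1.0, 2.0])))
```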
If a multilayer perceptron has a linear activation function in all neurons, that is, a linear function that maps the weighted inputs to the output of each neuron, then linear algebra shows that any number of layers can be reduced to a two-layer input-output model. In MLPs some neurons use a nonlinear activation function that was developed to model the frequency of action potentials, or firing, of biological neurons.
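This reduction can be checked numerically. Assuming identity activations and no bias terms, composing two weight matrices is itself a single matrix, so the stacked linear layers are equivalent to one linear layer:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))   # first linear layer
W2 = rng.normal(size=(2, 4))   # second linear layer
x = rng.normal(size=3)

two_layer = W2 @ (W1 @ x)      # two linear layers applied in sequence
one_layer = (W2 @ W1) @ x      # the equivalent single linear layer

print(np.allclose(two_layer, one_layer))  # True: the layers collapse
```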
The two common activation functions are both sigmoids, and are described by
$$y(v_i) = \tanh(v_i) \quad \text{and} \quad y(v_i) = (1 + e^{-v_i})^{-1}$$
The first is a hyperbolic tangent that ranges from -1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here $y_i$ is the output of the $i$th node (neuron) and $v_i$ is the weighted sum of the input connections.
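Both activations are one-liners in code; the following sketch simply evaluates them on a few sample points (the function names are illustrative):

```python
import numpy as np

def tanh_activation(v):
    """Hyperbolic tangent: output in (-1, 1)."""
    return np.tanh(v)

def logistic(v):
    """Logistic sigmoid: output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-v))

v = np.linspace(-3, 3, 7)
print(tanh_activation(v))
print(logistic(v))
```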
Learning occurs in the perceptron by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through back-propagation, a generalization of the least mean squares algorithm in the linear perceptron.
We represent the error in output node $j$ in the $n$th data point by $e_j(n) = d_j(n) - y_j(n)$, where $d$ is the target value and $y$ is the value produced by the perceptron. The node weights are adjusted based on corrections that minimize the error in the entire output, given by
$$\mathcal{E}(n) = \frac{1}{2} \sum_j e_j^2(n).$$
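For a single data point $n$, the per-node errors and the total error can be computed directly; the target vector `d` and output vector `y` below are placeholder values chosen for illustration:

```python
import numpy as np

d = np.array([1.0, 0.0])   # target values d_j(n) for the output nodes
y = np.array([0.8, 0.3])   # values y_j(n) produced by the perceptron

e = d - y                  # per-node errors e_j(n) = d_j(n) - y_j(n)
E = 0.5 * np.sum(e ** 2)   # total error E(n) = 1/2 * sum_j e_j(n)^2
print(e, E)
```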
Using gradient descent, the change in each weight is
$$\Delta w_{ji}(n) = -\eta \frac{\partial \mathcal{E}(n)}{\partial v_j(n)} y_i(n)$$
where $y_i$ is the output of the previous neuron and $\eta$ is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations.
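As a sketch of this update rule, if `delta` holds the local gradients $-\partial \mathcal{E}(n)/\partial v_j(n)$ (computed as described next) and `y_prev` holds the outputs $y_i$ of the previous layer, the weight change is an outer product scaled by $\eta$; all numbers below are illustrative:

```python
import numpy as np

eta = 0.1                            # learning rate (arbitrary example value)
delta = np.array([0.05, -0.02])      # local gradients -dE(n)/dv_j(n), one per node j
y_prev = np.array([0.9, 0.1, 0.4])   # outputs y_i(n) of the previous layer

# delta_w[j, i] = eta * delta_j * y_i, i.e. the update Δw_ji(n) above
delta_w = eta * np.outer(delta, y_prev)
print(delta_w)
```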
The derivative to be calculated depends on the induced local field $v_j$, which itself varies. It is easy to prove that for an output node this derivative can be simplified to
$$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = e_j(n)\,\phi'(v_j(n))$$
where $\phi'$ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is
$$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = \phi'(v_j(n)) \sum_k -\frac{\partial \mathcal{E}(n)}{\partial v_k(n)}\, w_{kj}(n)$$
This depends on the change in weights of the $k$th nodes, which represent the output layer. So to change the hidden layer weights, the output layer weights change according to the derivative of the activation function, and so this algorithm represents a back-propagation of the activation function.
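Putting the output-node and hidden-node formulas together, a minimal single-example back-propagation step for a small two-layer network might look like the sketch below; the logistic activation, layer sizes, learning rate, and example values are all assumptions made for illustration.

```python
import numpy as np

def logistic(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(2)
W1 = rng.normal(size=(4, 3))      # hidden-layer weights w_ji
W2 = rng.normal(size=(2, 4))      # output-layer weights w_kj
x = np.array([0.5, -1.0, 2.0])    # one input pattern
d = np.array([1.0, 0.0])          # target output d(n)
eta = 0.1                         # learning rate

# Forward pass: induced local fields v and outputs y at each layer.
v1 = W1 @ x
y1 = logistic(v1)
v2 = W2 @ y1
y2 = logistic(v2)

# Output layer: delta_k = e_k * phi'(v_k), with phi'(v) = y*(1-y) for the logistic.
e = d - y2
delta2 = e * y2 * (1.0 - y2)

# Hidden layer: delta_j = phi'(v_j) * sum_k delta_k * w_kj (the back-propagated term).
delta1 = y1 * (1.0 - y1) * (W2.T @ delta2)

# Weight updates: Δw_ji = eta * delta_j * y_i.
W2 += eta * np.outer(delta2, y1)
W1 += eta * np.outer(delta1, x)
print(0.5 * np.sum(e ** 2))       # error E(n) before the update
```

Repeating this step over many data points is the supervised training loop described above.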