Delta rule

Gradient descent learning rule in machine learning

In machine learning, the delta rule is a gradient descent learning rule for updating the weights of the inputs to artificial neurons in a single-layer neural network.[1] It can be derived as the backpropagation algorithm for a single-layer neural network with a mean-square error loss function.

For a neuron $j$ with activation function $g(x)$, the delta rule for neuron $j$'s $i$-th weight $w_{ji}$ is given by

$$\Delta w_{ji} = \alpha (t_j - y_j)\, g'(h_j)\, x_i,$$

where

- $\alpha$ is a small constant called the learning rate,
- $g(x)$ is the neuron's activation function,
- $g'$ is the derivative of $g$,
- $t_j$ is the target output,
- $h_j$ is the weighted sum of the neuron's inputs,
- $y_j$ is the actual output, and
- $x_i$ is the $i$-th input.

It holds that $h_j = \sum_i x_i w_{ji}$ and $y_j = g(h_j)$.
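Read as a single vectorised update, the rule is straightforward to implement. The following is a minimal Python/NumPy sketch, assuming a logistic (sigmoid) activation; the function names and the choice of activation are illustrative and not taken from the cited source.

```python
import numpy as np

def sigmoid(h):
    """Logistic activation g(h) = 1 / (1 + exp(-h))."""
    return 1.0 / (1.0 + np.exp(-h))

def sigmoid_prime(h):
    """Derivative g'(h) of the logistic activation."""
    s = sigmoid(h)
    return s * (1.0 - s)

def delta_rule_step(w_j, x, t_j, alpha):
    """One delta-rule update for a single neuron j.

    w_j   -- weight vector of neuron j (one entry per input)
    x     -- input vector
    t_j   -- target output for neuron j
    alpha -- learning rate
    """
    h_j = np.dot(x, w_j)          # weighted sum h_j = sum_i x_i w_ji
    y_j = sigmoid(h_j)            # actual output y_j = g(h_j)
    delta_w = alpha * (t_j - y_j) * sigmoid_prime(h_j) * x
    return w_j + delta_w
```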

The delta rule is commonly stated in simplified form for a neuron with a linear activation function as

$$\Delta w_{ji} = \alpha \left(t_j - y_j\right) x_i$$
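For a linear neuron the factor $g'(h_j)$ is identically 1, so the sketch above collapses to a single line (again an illustrative sketch, reusing the NumPy import from the previous example):

```python
def delta_rule_step_linear(w_j, x, t_j, alpha):
    """Delta-rule update for a linear neuron, where g(h) = h and g'(h) = 1."""
    y_j = np.dot(x, w_j)                   # output of the linear neuron
    return w_j + alpha * (t_j - y_j) * x   # least-mean-squares (Widrow-Hoff) form
```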

While the delta rule is similar to the perceptron's update rule, the derivation is different. The perceptron uses the Heaviside step function as the activation function $g(h)$, which means that $g'(h)$ does not exist at zero and is equal to zero elsewhere, making a direct application of the delta rule impossible.
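A short sketch of why the step function blocks the delta rule (illustrative, continuing the NumPy example above):

```python
def heaviside(h):
    """Heaviside step activation used by the perceptron."""
    return np.where(h >= 0.0, 1.0, 0.0)

# The derivative of heaviside() is 0 wherever it exists (and undefined at 0),
# so the factor g'(h_j) in the delta rule would zero out every weight update.
# The perceptron therefore uses its own update, alpha * (t_j - y_j) * x, which
# matches the linear form above but is arrived at by a different derivation.
```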

Derivation of the delta rule

The delta rule is derived by attempting to minimize the error in the output of the neural network through gradient descent. The error for a neural network with $j$ outputs can be measured as

$$E = \sum_j \tfrac{1}{2}\left(t_j - y_j\right)^2.$$
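As a small illustration (the function name is hypothetical, continuing the NumPy sketch above), this error can be computed for one training example as:

```python
def sum_of_squares_error(t, y):
    """E = sum_j 1/2 * (t_j - y_j)^2 over all output neurons."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    return 0.5 * np.sum((t - y) ** 2)
```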

In this case, we wish to move through "weight space" of the neuron (the space of all possible values of all of the neuron's weights) in proportion to the gradient of the error function with respect to each weight. In order to do that, we calculate the partial derivative of the error with respect to each weight. For the $i$-th weight, this derivative can be written as

$$\frac{\partial E}{\partial w_{ji}}.$$

Because we are only concerning ourselves with the $j$-th neuron, we can substitute the error formula above while omitting the summation:

$$\frac{\partial E}{\partial w_{ji}} = \frac{\partial}{\partial w_{ji}}\left[\frac{1}{2}\left(t_j - y_j\right)^2\right]$$

Next we use the chain rule to split this into two derivatives:

$$\frac{\partial E}{\partial w_{ji}} = \frac{\partial \left(\frac{1}{2}\left(t_j - y_j\right)^2\right)}{\partial y_j}\,\frac{\partial y_j}{\partial w_{ji}}$$

To find the left derivative, we simply apply the power rule and the chain rule:

$$\frac{\partial E}{\partial w_{ji}} = -\left(t_j - y_j\right)\frac{\partial y_j}{\partial w_{ji}}$$

To find the right derivative, we again apply the chain rule, this time differentiating with respect to the total input to neuron $j$, $h_j$:

$$\frac{\partial E}{\partial w_{ji}} = -\left(t_j - y_j\right)\frac{\partial y_j}{\partial h_j}\,\frac{\partial h_j}{\partial w_{ji}}$$

Note that the output of the $j$-th neuron, $y_j$, is just the neuron's activation function $g$ applied to the neuron's input $h_j$. We can therefore write the derivative of $y_j$ with respect to $h_j$ simply as $g$'s first derivative:

$$\frac{\partial E}{\partial w_{ji}} = -\left(t_j - y_j\right)g'(h_j)\,\frac{\partial h_j}{\partial w_{ji}}$$

Next we rewrite $h_j$ in the last term as the sum over all $k$ weights of each weight $w_{jk}$ times its corresponding input $x_k$:

$$\frac{\partial E}{\partial w_{ji}} = -\left(t_j - y_j\right)g'(h_j)\,\frac{\partial}{\partial w_{ji}}\left[\sum_k x_k w_{jk}\right]$$

Because we are only concerned with the $i$-th weight, the only term of the summation that is relevant is $x_i w_{ji}$. Clearly,

$$\frac{\partial (x_i w_{ji})}{\partial w_{ji}} = x_i,$$

giving us our final equation for the gradient:

$$\frac{\partial E}{\partial w_{ji}} = -\left(t_j - y_j\right)g'(h_j)\,x_i$$

As noted above, gradient descent tells us that our change for each weight should be proportional to the gradient. Choosing a proportionality constant $\alpha$ and eliminating the minus sign to enable us to move the weight in the negative direction of the gradient to minimize error, we arrive at our target equation:

$$\Delta w_{ji} = \alpha (t_j - y_j) g'(h_j) x_i.$$
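One way to sanity-check the derivation is to compare the derived gradient $-\left(t_j - y_j\right)g'(h_j)\,x_i$ against a numerical finite-difference estimate of $\partial E/\partial w_{ji}$. The sketch below does this for the sigmoid neuron defined earlier; the function name, tolerances, and example values are illustrative.

```python
def gradient_check(w_j, x, t_j, eps=1e-6):
    """Compare the derived gradient with a central finite difference of E."""
    h_j = np.dot(x, w_j)
    analytic = -(t_j - sigmoid(h_j)) * sigmoid_prime(h_j) * x

    numeric = np.zeros_like(w_j)
    for i in range(len(w_j)):
        w_plus, w_minus = w_j.copy(), w_j.copy()
        w_plus[i] += eps
        w_minus[i] -= eps
        e_plus = 0.5 * (t_j - sigmoid(np.dot(x, w_plus))) ** 2
        e_minus = 0.5 * (t_j - sigmoid(np.dot(x, w_minus))) ** 2
        numeric[i] = (e_plus - e_minus) / (2.0 * eps)

    return np.allclose(analytic, numeric, rtol=1e-4, atol=1e-7)

# Example usage (hypothetical values):
# w = np.array([0.1, -0.2, 0.3]); x = np.array([1.0, 0.5, -1.0])
# gradient_check(w, x, t_j=1.0)   # expected to return True
```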


References

  1. ^ Russell, Ingrid. "The Delta Rule". University of Hartford. Archived from the original on 4 March 2016. Retrieved 5 November 2012.