Backpropagation

Optimization algorithm for artificial neural networks


Backpropagation is a method of computing the gradient of a loss function with respect to the weights of an artificial neural network, which is then used to train the network to perform tasks more accurately.[1] The algorithm was first described for this purpose by Werbos in 1974 and was later popularized in a 1986 paper by Rumelhart, Hinton, and Williams. The term backpropagation is short for "backward propagation of errors".

It works especially well for feedforward neural networks (networks without loops) and for problems that require supervised learning.


How it works

The idea is to measure how wrong the neural network's answers are and then adjust the network so that it is less wrong. This is repeated many times.

In a little more detail:

  1. You create a loss function (also known as a cost function), which measures how far the neural network's answers are from the correct answers. (This is usually computed for many training examples, and the results are averaged.)
  2. You calculate how to adjust the parameters (the weights and biases) inside the neural network by taking the partial derivative of the loss with respect to each parameter. Specifically, the chain rule is used to find these derivatives, working backward from the output layer (see the worked example after this list).
  3. You adjust the parameters. Exactly how you adjust them depends on your training method; one of the simplest, yet still commonly used, methods is stochastic gradient descent.
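A small worked example may help with step 2. Assume, purely for illustration, a network with one input x, a single hidden unit h with activation function σ, weights w_1 and w_2, and a squared-error loss (this tiny setup is an assumption made here, not something from the article itself):

\[
h = \sigma(w_1 x), \qquad \hat{y} = w_2 h, \qquad L = \tfrac{1}{2}(\hat{y} - y)^2 .
\]

Applying the chain rule gives

\[
\frac{\partial L}{\partial w_2} = (\hat{y} - y)\, h, \qquad
\frac{\partial L}{\partial w_1} = (\hat{y} - y)\, w_2\, \sigma'(w_1 x)\, x .
\]

The error term (ŷ − y) is carried backward from the output toward the input, which is the "backward propagation of errors" the name refers to.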

This is repeated until the neural network is good enough at its job, which happens when the error measured by the loss function is low. The code sketch below walks through these three steps.
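The three steps can also be sketched as a short program. The sketch below is a minimal illustration using NumPy: it assumes a toy network with one sigmoid hidden layer, a mean-squared-error loss, and plain full-batch gradient descent (stochastic gradient descent would instead update on small random batches). All of the names and numbers in it (W1, b1, W2, b2, lr, the made-up training data, and so on) are illustrative choices, not part of the article or of any standard API.

```python
# Minimal backpropagation sketch: input -> sigmoid hidden layer -> linear output.
# Illustrative only; real frameworks (e.g. PyTorch, TensorFlow) compute these
# gradients automatically.
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised-learning data: learn y = sum of the three inputs.
X = rng.normal(size=(100, 3))
y = X.sum(axis=1, keepdims=True)

# Parameters (weights and biases) of a 3 -> 4 -> 1 network.
W1 = rng.normal(scale=0.5, size=(3, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1  # learning rate
for step in range(2000):
    # Step 1: forward pass and loss (mean squared error over the batch).
    h = sigmoid(X @ W1 + b1)        # hidden activations
    y_hat = h @ W2 + b2             # network outputs
    loss = np.mean((y_hat - y) ** 2)

    # Step 2: backward pass -- chain rule, layer by layer, from the output back.
    n = X.shape[0]
    d_yhat = 2.0 * (y_hat - y) / n  # dLoss/dy_hat
    dW2 = h.T @ d_yhat
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T             # error propagated back to the hidden layer
    d_z1 = d_h * h * (1.0 - h)      # times sigmoid'(z) = h * (1 - h)
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0)

    # Step 3: gradient-descent update of every parameter.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```

The loop mirrors the list above: compute the loss, compute the gradients with the chain rule, nudge every parameter a small step against its gradient, and repeat until the loss is low.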


References
