Backpropagation for dummies

Sachin Joglekar's blog

Backpropagation is a method of training an Artificial Neural Network. If you are reading this post, you already have an idea of what an ANN is. However, let's take a look at the fundamental component of an ANN: the artificial neuron.

[Figure: the working of a single artificial neuron and its activation]

The figure shows the working of the i-th neuron (let's call it $latex N_i$) in an ANN. Its output (also called its activation) is $latex a_i$. The j-th neuron $latex N_j$ provides one of the inputs to $latex N_i$.

How does $latex N_i$ produce its own activation?

1. $latex N_i$ stores a weight for each of its inputs. Let's say that the weight $latex N_i$ assigns to the input from $latex N_j$ is $latex w_{i, j}$. As the first step, $latex N_i$ computes a weighted sum of all its inputs. Let's call it $latex in_i$. Therefore,

$latex in_i = \sum_{k \in Inputs(N_i)}…
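The sum above is the standard weighted-sum (pre-activation) step the excerpt describes. Here is a minimal sketch of that step in Python, assuming the incoming activations and their weights are stored as plain lists; the names `weights` and `input_activations` are illustrative, not from the original post.

```python
# Sketch of the weighted-sum step for one neuron N_i.
# input_activations[k] plays the role of a_k for each input neuron k,
# and weights[k] plays the role of w_{i,k}.

def weighted_sum(weights, input_activations):
    """Compute in_i = sum over inputs k of w_{i,k} * a_k."""
    return sum(w * a for w, a in zip(weights, input_activations))

# Example: a neuron with three inputs.
weights = [0.4, -0.2, 0.7]
input_activations = [1.0, 0.5, 0.3]
print(weighted_sum(weights, input_activations))
# 0.4*1.0 - 0.2*0.5 + 0.7*0.3 = 0.51 (up to floating-point rounding)
```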

View original post 2,267 more words

