Neuron → A function that outputs a number between 0 and 1. That number is called its activation.

Neural Network → An enormous function composed of smaller functions.

Layers

Hidden Layers → The middle layers that start to recognize patterns.

Certain neurons firing causes other neurons to fire.

Recognition

Each connection between a neuron in the previous layer and a neuron in the next layer has a weight, which is just a number. To compute the value of a node in the second layer, we multiply each pixel's activation by the weight of its connection and sum the results: a weighted sum over the pixels we are evaluating. We can also assign negative weights to the pixels surrounding the ones we want, so the neuron responds more sharply to exactly the pattern we are looking for.
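A minimal sketch of this weighted sum, using a made-up 4-pixel patch and hypothetical weight values:

```python
# Weighted sum of pixel activations feeding one second-layer neuron.
# The pixel values and weights here are illustrative, not learned.
activations = [0.0, 0.9, 0.8, 0.1]  # pixel brightness, each between 0 and 1
weights = [-1.0, 1.0, 1.0, -1.0]    # negative weights penalize surrounding pixels

weighted_sum = sum(w * a for w, a in zip(weights, activations))
print(weighted_sum)  # -0.0 + 0.9 + 0.8 - 0.1 = 1.6
```

The two middle pixels push the sum up, while any brightness in the surrounding pixels pulls it back down.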

Since we want numbers between 0 and 1, we pass the weighted sum through a sigmoid function: very negative inputs give outputs close to 0, and very positive inputs give outputs close to 1.
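The squashing behavior can be sketched directly from the standard sigmoid formula:

```python
import math

def sigmoid(x: float) -> float:
    # Maps any real number into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(-5.0))  # close to 0
print(sigmoid(0.0))   # exactly 0.5
print(sigmoid(5.0))   # close to 1
```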

We may also want the neuron to fire only when we have a certain degree of certainty that the image is what we want. For that we add a bias to the input of the sigmoid, so the neuron stays close to inactive unless the weighted sum clears a threshold.

Better notation

For cleaner notation, we can pack the weights into a matrix and the activations and biases into vectors, then compute the whole layer at once with a matrix-vector product.


$$ a^{(1)} = \sigma \left( W a^{(0)} + b \right) $$
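This formula can be sketched with NumPy; the layer sizes (4 inputs, 3 outputs) and random values are arbitrary stand-ins for learned parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
a0 = rng.random(4)               # a^(0): activations of the previous layer
W = rng.standard_normal((3, 4))  # one row of weights per neuron in the next layer
b = rng.standard_normal(3)       # one bias per neuron in the next layer

a1 = sigmoid(W @ a0 + b)         # a^(1) = sigma(W a^(0) + b)
print(a1.shape)  # (3,)
```

Each row of `W` holds the weights of one next-layer neuron, so the whole layer's weighted sums come out of a single `W @ a0`.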

Better Function

The sigmoid turned out to be slow to learn with, so it was replaced by ReLU, a function that outputs 0 for negative inputs and the identity for positive ones.
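ReLU is short enough to sketch in one line:

```python
import numpy as np

def relu(x):
    # 0 for negative inputs, identity for positive inputs.
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))  # [0.  0.  0.  0.5 2. ]
```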

Learning

Learning → Finding the right weights and biases so that the network solves the problem at hand.