Yisus777

Posted on

Explain Neural Networks Like I'm Five

Explain neural networks to me like I'm five.

Latest comments (18)

Yisus777 • Edited

So basically, neural networks are lots of interconnected algorithms that solve complex problems by 'learning' from failures or incorrect outputs to improve their solutions?

PNS11 • Edited

They come in many varieties, but the most basic one is called a perceptron. It is based on an old idea about nerve cells and how they work: it accepts data at one end, does some computation with it, and then delivers an interpretation at the other end. First you teach them, then you use them, sort of. The better the training, the better the result 'in production'.

To begin with, it has some random ideas, usually expressed as numeric values. These are what are called weights: some numbers signifying something. When it encounters data in the training phase, it multiplies or otherwise combines that data with its weights, and then checks whether the result matches the answer given in the training data set.

If it fits, it silently applauds itself and waits for more data. If it doesn't, it adjusts its weights in the appropriate direction and then waits for more data.

A single perceptron isn't very exciting. But suppose you translate images into numbers representing the colour, brightness or some other property of each pixel, and then train a matrix of these perceptrons on such translated images whose meaning you have already decided. Then you can use the net to detect those same things, and similar ones, in new images, as long as you translate the new images into the data format your perceptron flock knows how to make guesses about.

This works because you control the thresholds that decide whether an answer from one of your perceptrons counts as OK for the data you supply. You could be somewhat lenient and see if that is enough for your use case; if it isn't, you retrain your net with a more demanding, exact threshold.

The devil is in the details: how you design and train your neural networks depends on what you want to do. The idea, however, is to produce lots and lots of 'nerve cells' that are basically the same but encapsulate different data values, which are then used for interpretations based on what you told them about their guesses in the training phase.

In PicoLisp, a perceptron object could look something like this:

(class +Perceptron +Entity)
(rel w1 (+Number))                        # First weight
(rel w2 (+Number))                        # Second weight
(rel res (+Number))                       # Latest result
(rel set (+Joint) percs (+PercParent))    # Link back to the parent net

(dm train> (Data Truth)
    (let D (dostuff Data (: w1) (: w2))   # Combine the data with the weights
        (if (= D Truth)
            (applaud This)                # Correct guess: leave the weights alone
            (if (> D Truth)               # Guessed too high: nudge both weights down
                (and (=: w1 (adjustdown (: w1))) (=: w2 (adjustdown (: w2))) )
                (and (=: w1 (adjustup (: w1))) (=: w2 (adjustup (: w2))) ) ]

And then you'd probably write an interpret> method as well, which wouldn't adjust these values; instead it would just leave its result in a field for later collection, or pass it on for further processing. How these details look depends heavily on the application, and I'm fairly certain the above won't work very well, besides being quite chatty pseudocode.
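
For comparison, here's roughly the same idea as a runnable toy in plain Python. The names (Perceptron, train, predict) and the AND example are my own invention, not anything standard; it's a sketch, not a serious implementation:

# Toy perceptron with two weights and a bias, mirroring the sketch above.
class Perceptron:
    def __init__(self):
        self.w1, self.w2, self.bias = 0, 0, 0

    def predict(self, x1, x2):
        # The 'interpret>' step: compute a guess without touching the weights
        return 1 if self.w1 * x1 + self.w2 * x2 + self.bias > 0 else 0

    def train(self, x1, x2, truth):
        # The 'train>' step: nudge the weights toward the right answer
        error = truth - self.predict(x1, x2)
        self.w1 += error * x1
        self.w2 += error * x2
        self.bias += error

# Teach it logical AND from a tiny 'training data set'
p = Perceptron()
data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
for _ in range(20):
    for x1, x2, truth in data:
        p.train(x1, x2, truth)

print([p.predict(x1, x2) for x1, x2, _ in data])  # [0, 0, 0, 1]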

Ikem Krueger

I have a hard time getting a grip on the Lisp syntax.

PNS11

It's actually not so much a syntax as a data structure notation.

If you point out where it trips you up, I'll try to explain in more detail.

Piotr Słupski • Edited

Suppose you have a stream.

The stream has certain divergences and certain rotations.

By placing the rotations and divergences accordingly, based on how strongly the data is moving the walls of the riverbed (erosion and accumulation), incoming data streams can get diverged and rotated to reveal patterns within that stream.

So if you send in a stream of pixels, the set of divergences and rotations will find faces, or dogs, or cars. You can even aid the river by forcefully putting in points that you think are good for the divergence and rotation, based on intuition :)

I don't know if it's 5-year-old level, but that's how I would explain it to my 5-year-old if I had one ;)


Nishimiya

Thanks for the explanation. You deserve some likes.

IsaacLeimgruber

Grandma forgot her legendary soup recipe, and you have 10 attempts to make your best approximation of her soup. For each attempt, grandma, your mom and your dad each taste the resulting soup, which you made from different combinations of the feedback they gave on your previous soups. By the last attempt you will have a soup quite close to grandma's.
The inputs are the ingredients, the weights are the quantities of each ingredient, and the neurons are your different soup attempts. The feedback is the loss.
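
If you like, the analogy even runs as code. A minimal sketch in Python (all quantities invented; grandma's recipe is hidden from the loop, just as the 'right' weights are hidden from a real network):

grandmas_recipe = {"salt": 2.0, "garlic": 3.0, "carrots": 5.0}  # the unknown target
my_recipe = {"salt": 1.0, "garlic": 1.0, "carrots": 1.0}        # first rough guess

def taste_feedback(mine, target):
    # Each taster reports how far off each ingredient seems: the 'loss'
    return {k: target[k] - mine[k] for k in mine}

for attempt in range(10):
    feedback = taste_feedback(my_recipe, grandmas_recipe)
    for ingredient, error in feedback.items():
        my_recipe[ingredient] += 0.5 * error   # adjust quantities by half the error

print(my_recipe)  # after 10 attempts, very close to grandma's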

rhymes

a-w-e-s-o-m-e!

Nested Software

I think this is the best answer in the spirit of "explain like I'm five." I always think it's too hard to do it for something like this, but you did a great job here!

IsaacLeimgruber • Edited

Haha, I did not expect grandma's soup to have such success; I had some doubts about the analogies :)

Dheeraj.P.B

Isn't 5 a bit too young? Can we make it 10? 🙂

Abhishek Sharma

Consider a light bulb which lights up (activates) only when a threshold amount of current flows through it. Call this light bulb a neuron and the amount of electricity passing through it a weight.
Now provide input electricity (data) through an arrangement of layers of differently coloured light bulbs, which light up at different amounts of electricity (initialized at random). The amount of current passing through each layer of bulbs changes the hue and colour of the bulbs at the end. This collection of light bulbs (neurons) is an example of a neural network.
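
The light bulb is essentially a threshold activation. A tiny Python sketch of one such 'bulb' (the function name and numbers are mine, purely for illustration):

# A "light bulb" neuron: it only lights up (outputs 1) when the
# incoming current (the weighted sum of inputs) crosses its threshold.
def light_bulb(inputs, weights, threshold):
    current = sum(i * w for i, w in zip(inputs, weights))
    return 1 if current >= threshold else 0  # lit or dark, nothing in between

print(light_bulb([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.5))  # 1 (0.7 crosses 0.5)
print(light_bulb([0, 1, 0], [0.4, 0.9, 0.3], threshold=1.0))  # 0 (0.9 stays below 1.0)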

Nested Software • Edited

Neural networks are based on some ideas about how neurons in the brain work, although I think there are some significant differences between computer neural networks and the neural networks in living organisms.

In machine learning, a neural network is initialized to have certain neurons connected in a particular way. Each neuron performs a given calculation on its input and produces an output, which can feed into more neurons.

You run an input through the network and get a result at the very end. Depending on whether that result is judged to be good or bad, the weights of the connections between the neurons are adjusted (this is done by a process called 'backpropagation'). You run a large number of inputs, and the adjustments made after each run affect what will happen on the next one.

It's not trivial, but if you can set up the network properly, the result is that the network can effectively learn over time how to produce results that are considered good for a given input.
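
As a toy illustration of that adjust-the-weights loop, here is a single 'neuron' learning by plain gradient descent. This isn't real backpropagation, which chains this idea through many layers, and all the numbers are invented:

# One neuron learning the rule y = 2x by gradient descent on squared error.
w = 0.0                              # start with a not-very-good weight
data = [(1, 2), (2, 4), (3, 6)]      # inputs paired with known good outputs

for step in range(50):
    for x, target in data:
        output = w * x               # run the input through the 'network'
        error = output - target      # judge how bad the result was
        w -= 0.05 * error * x        # nudge the weight in the better direction

print(w)  # ~2.0: the 'network' has learned the rule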

Modern successes in machine learning, like DeepMind's AlphaGo, rely on the ability to use extremely large distributed computing resources to train the neural network.

I don't know much about neural networks myself, but I found this article very helpful in learning some basics: Hacker's guide to neural networks

Jochem Stoel

He said like I am five.

Idan Arye

https://xkcd.com/1364/

Isaac Lyman • Edited

'explainlikeimfive' doesn't need to be taken literally. It just means "a simple explanation." People reading this page will benefit from many different levels of explanation, so please don't disparage Paritosh. I think his response is very clear and useful.

Jochem Stoel

No it has to be taken absolutely literally.

Andrew Lucker

A bunch of numbers go in. The numbers get combined, using weight constants, into more numbers. This step is repeated a few times; each combination step is called a layer. The final layer produces more numbers as output. All weights start out random, so the first set of weights isn't very good, but using math the weights get improved gradually. The result is a trained neural network.
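
In Python, that 'numbers in, weighted combinations, numbers out' story is only a few lines. A bare-bones sketch with made-up sizes and no training step:

import random

# Each output of a layer is a weighted combination of all that layer's inputs.
def layer(inputs, weights):
    return [sum(i * w for i, w in zip(inputs, row)) for row in weights]

inputs = [0.5, -1.0, 2.0]                                          # numbers go in
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)] # random weights
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

hidden = layer(inputs, w1)    # first combination step (layer 1)
output = layer(hidden, w2)    # second combination step (layer 2)
print(output)                 # numbers come out; untrained weights, so not good yet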