What is a Neural Network?
Before we get started with the how of building a Neural Network, we need to understand the what first.
Nice, but it never seems to converge on array([[ 0.92, 0.86, 0.89]]). What's a good learning rate for the W update step? It should probably get smaller as the error diminishes.
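One simple option is a decay schedule. This is a sketch of my own, not from the article: it assumes the weight updates in backward() are scaled by a self.lr attribute (which the tutorial's code doesn't have), and that NN, X, and y come from the tutorial.

lr_start = 0.5
for i in range(100000):
    # shrink the step size as training progresses
    NN.lr = lr_start / (1.0 + i / 10000.0)
    NN.train(X, y)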
Actually, there is a bug in sigmoidPrime(), your derivative is wrong. It should return self.sigmoid(s) * (1 - self.sigmoid(s))
Hey Max,
I looked into this and with some help from my friend, I understood what was happening.
Your derivative is indeed correct. However, see how we return o in the forward propagation function (with the sigmoid function already applied to it). Then, in the backward propagation function we pass o into the sigmoidPrime() function, which, if you look back, is equal to self.sigmoid(self.z3). So, the code is correct.

Hey! I'm not very well-versed in calculus, but are you sure that would be the derivative? As I understand it, self.sigmoid(s) * (1 - self.sigmoid(s)) takes the input s, runs it through the sigmoid function, gets the output, and then uses that output as the input to the derivative. I tested it out and it works, but if I run the code the way it is right now (using the derivative in the article), I get a super low loss and it's more or less accurate after training ~100k times.
I'd really love to know what's actually wrong. Could you explain why the derivative is wrong, perhaps from a calculus perspective?
The derivation for the sigmoid prime function can be found here.
There is nothing wrong with your derivative. Max is talking about the textbook derivative definition, but he's forgetting that you already calculated sigmoid(s) and stored it in the layers, so there's no need to calculate it again when applying the derivative.
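To see the equivalence numerically, here is a small standalone check (my own snippet, plain numpy, not from the article):

import numpy as np

def sigmoid(s):
    return 1 / (1 + np.exp(-s))

# textbook form: expects the raw, pre-activation input
def sigmoidPrime_raw(s):
    return sigmoid(s) * (1 - sigmoid(s))

# the article's form: expects a value that has already been through sigmoid
def sigmoidPrime_activated(o):
    return o * (1 - o)

z = np.array([0.5, -1.2, 3.0])
o = sigmoid(z)                    # what forward() returns
print(sigmoidPrime_raw(z))        # derivative computed from the raw input
print(sigmoidPrime_activated(o))  # same numbers, computed from the output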
Awesome tutorial, many thanks.
But I have one doubt, can you help me?
What do those T's mean? self.w2.T, self.z2.T, etc...
.T transposes a matrix in numpy.
docs.scipy.org/doc/numpy-1.14.0/re...
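A quick standalone illustration of what .T does (my own example, not the article's code):

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])  # shape (2, 3)
print(A.T)                 # shape (3, 2): rows become columns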
Hello, I'm a noob at machine learning, so I wanna ask: is there any requirement for how many hidden layers you need in a neural network? The hidden layer size in this project is 3; is that because of input layer + output layer, or is it completely arbitrary?
Hi, this is a fantastic tutorial, thank you. I'm currently trying to build on this to take four inputs rather than two, but am struggling to get it to work. Do you have any guidance on scaling this up from two inputs?
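In case it's useful, only the input dimension and the first weight matrix should need to change. A sketch against the tutorial's class, assuming the attribute names from the article:

# in __init__: widen the input layer from 2 to 4
self.inputSize = 4
self.hiddenSize = 3
self.outputSize = 1
# W1 now maps 4 inputs to the hidden layer
self.W1 = np.random.randn(self.inputSize, self.hiddenSize)   # (4x3) weight matrix
self.W2 = np.random.randn(self.hiddenSize, self.outputSize)  # (3x1) weight matrix
# X must then have four values per row, e.g. np.array(([2, 9, 1, 5], ...), dtype=float)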
OK, I believe I'm missing something. Where are the new inputs (4, 8) for hours studied and slept? And the predicted value for the output "Score"?
Thanks for the great tutorial, but how exactly can we use it to predict the result for the next input? I tried adding 4,8 to the input and it caused this error:
input:
[[0.5 1. ]
 [0.25 0.55555556]
 [0.75 0.66666667]
 [1. 0.88888889]]
Actual Output:
[[0.92]
 [0.86]
 [0.89]]
Predicted Output:
[[0.17124108]
 [0.17259949]
 [0.20243644]
 [0.20958544]]
Traceback (most recent call last):
  File "D:/try.py", line 58, in <module>
    print ("Loss: \n" + str(np.mean(np.square(y - NN.forward(X))))) # mean sum squared loss
ValueError: operands could not be broadcast together with shapes (3,1) (4,1)
Process finished with exit code 1
After training is done, you can do something like:
Q = np.array(([4, 8]), dtype=float)
Q = Q / np.amax(Q, axis=0)  # scale the new input the same way the training data was scaled
print("Input: \n" + str(Q))
print("Predicted Output: \n" + str(NN.forward(Q)))
Samay, this has been great to read.
Assume I wanted to add another layer to the NN.
Would I update the backprop to something like:
def backward(self, X, y, o):
    # backward propagate through the network
    self.o_error = y - o
    self.o_delta = self.o_error * self.sigmoidPrime(o)
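Roughly, yes. Assuming forward() stores the two hidden activations (say as self.a1 and self.a2; those names are invented here) and there's a third weight matrix self.W3, the rest follows the same pattern. A sketch, not the author's code:

def backward(self, X, y, o):
    # backward propagate through a network with two hidden layers (sketch)
    self.o_error = y - o
    self.o_delta = self.o_error * self.sigmoidPrime(o)
    # how much the second hidden layer contributed to the output error
    self.a2_error = self.o_delta.dot(self.W3.T)
    self.a2_delta = self.a2_error * self.sigmoidPrime(self.a2)
    # how much the first hidden layer contributed
    self.a1_error = self.a2_delta.dot(self.W2.T)
    self.a1_delta = self.a1_error * self.sigmoidPrime(self.a1)
    # weight updates, innermost to outermost
    self.W1 += X.T.dot(self.a1_delta)
    self.W2 += self.a1.T.dot(self.a2_delta)
    self.W3 += self.a2.T.dot(self.o_delta)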
Nice guide. I have one question:
Shouldn't the input to the NN be a vector? Right now the NN is receiving the whole training matrix as its input. The network has two input neurons, so I can't see why we wouldn't pass it some vector of the training data.
Tried googling this but couldn't find anything useful, so I'd really appreciate your response!
I am not a Python expert, but it's probably the use of numpy's famous vectorized operations ;)
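Exactly: numpy applies the matrix product to every row at once, so passing the whole training matrix computes all outputs in one shot. A standalone illustration (my own, not the article's code):

import numpy as np

X = np.array([[2, 9],
              [1, 5],
              [3, 6]], dtype=float)  # 3 samples, 2 features each
W1 = np.random.randn(2, 3)           # input-to-hidden weights, shaped as in the tutorial

print(X[0].dot(W1))  # one sample at a time: shape (3,)
print(X.dot(W1))     # all samples in one matrix multiplication: shape (3, 3)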
What is the 'input later'?
Pretty sure the author meant 'input layer'.
Great article!
Yep! Just fixed it :)
Great Tutorial!
I translated this tutorial to Rust with my own matrix operation implementation, which is terribly inefficient compared to numpy, but it still produces results similar to this tutorial's. Here are the docs: docs.rs/artha/0.1.0/artha/ and the code: gitlab.com/nrayamajhee/artha
Excellent article for a beginner, but I just noticed bias is missing from your neural network. Isn't it required even for simple neural networks?
Also, you haven't applied any learning rate. Won't that make gradient descent miss the minimum?
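For what it's worth, both can be bolted on. A rough sketch, not the author's code (lr, b1, and b2 are names I've made up):

# in __init__: zero-initialised biases and a small constant learning rate
self.lr = 0.1
self.b1 = np.zeros((1, self.hiddenSize))
self.b2 = np.zeros((1, self.outputSize))

# in forward(): add the bias before each activation
self.z = np.dot(X, self.W1) + self.b1
self.z2 = self.sigmoid(self.z)
self.z3 = np.dot(self.z2, self.W2) + self.b2
o = self.sigmoid(self.z3)

# in backward(): scale every update by the learning rate
self.W1 += self.lr * X.T.dot(self.z2_delta)
self.W2 += self.lr * self.z2.T.dot(self.o_delta)
self.b1 += self.lr * np.sum(self.z2_delta, axis=0, keepdims=True)
self.b2 += self.lr * np.sum(self.o_delta, axis=0, keepdims=True)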
Great introduction! I have used it to implement this:
github.com/ayeo/letter_recognizer
Great tutorial, explained everything so clearly!!
Nice!
(2 * .6) + (9 * .3) = 7.5 is wrong.
It is 1.2 + 2.7 = 3.9.
Good catch! That is definitely my mistake. If one replaces it with 3.9, the final score would only be changed by one hundredth (.857 --> .858)!
Great article for beginners like me! Thank you very much!
Great article, it actually helped me understand how a neural network works.
Hi, in this line:
for i in xrange(1000):
it told me that 'xrange' is not defined. Could you please explain how to fix it?
With newer Python versions, the function is renamed to range.
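So on Python 3 the training loop becomes, for example (assuming NN, X, and y from the tutorial):

# Python 3: xrange is gone; range is lazy, like Python 2's xrange was
for i in range(1000):
    NN.train(X, y)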