Awesome tutorial, many thanks. I have one question, though; can you help me?
self.z2_error = self.o_delta.dot(self.W2.T) # z2 error: how much our hidden layer weights contributed to output error
self.z2_delta = self.z2_error*self.sigmoidPrime(self.z2) # applying derivative of sigmoid to z2 error
self.W1 += X.T.dot(self.z2_delta) # adjusting first set (input --> hidden) weights
self.W2 += self.z2.T.dot(self.o_delta) # adjusting second set (hidden --> output) weights
What do those T's mean? self.W2.T, self.z2.T, etc...
.T is the transpose of a matrix in NumPy. docs.scipy.org/doc/numpy-1.14.0/re...
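A quick sketch of what .T does (the array values here are made up for illustration; only the .T attribute itself is from NumPy). Transposing swaps rows and columns, which is what lets the dot products in the backprop code above line up: you multiply by W2.T to propagate the output error back through the hidden-to-output weights.

```python
import numpy as np

# a hypothetical (3, 2) weight matrix, just for demonstration
W2 = np.array([[1, 2],
               [3, 4],
               [5, 6]])

print(W2.shape)    # (3, 2)
print(W2.T.shape)  # (2, 3) -- rows and columns swapped
print(W2.T)
# [[1 3 5]
#  [2 4 6]]
```

Note that .T is just a view, not a copy; it doesn't move any data, it only swaps the axes.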