Hello, I'm Shrijith. I'm building git-lrc, an AI code reviewer that runs on every commit. It is free, unlimited, and source-available on GitHub. Star us to help devs discover the project. Do give it a try and share your feedback to help improve the product.
## Breaking down tanh into its constituent operations
We have the definition of tanh as follows:

`tanh(x) = (e^(2x) - 1) / (e^(2x) + 1)`

We can see that the above formula has:
- exponentiation
- subtraction
- addition
- division
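Before wiring these operations into the `Value` class, the identity itself can be sanity-checked with plain floats (a quick standalone check, independent of `Value`):

```python
import math

# Check tanh(x) == (e^(2x) - 1) / (e^(2x) + 1) at a few sample points.
for x in [-2.0, -0.5, 0.0, 0.881, 3.0]:
    via_exp = (math.exp(2 * x) - 1) / (math.exp(2 * x) + 1)
    assert abs(math.tanh(x) - via_exp) < 1e-12, (x, via_exp)
print("identity holds")
```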
## What the Value class cannot do now
```python
a = Value(2.0)
a + 1
```
The above doesn't work, because `a` is a `Value` whereas `1` is an `int`, and `__add__` expects another `Value`.
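To see the failure concretely, here is a minimal data-only sketch of `Value` without the conversion (gradients omitted; the attribute lookup is what breaks):

```python
# Minimal sketch: __add__ assumes `other` is a Value, so an int breaks it.
class Value:
    def __init__(self, data):
        self.data = data

    def __add__(self, other):
        return Value(self.data + other.data)  # int has no .data attribute

a = Value(2.0)
try:
    a + 1
    failed = False
except AttributeError:
    failed = True
print("a + 1 failed:", failed)
```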
We can fix this by automatically converting `1` into a `Value` inside the `__add__` method:
```python
class Value:
    ...
    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)  # convert non-Value operand to Value
        out = Value(self.data + other.data, (self, other), '+')
        ...
        return out
```
Now, the following code works:
```python
a = Value(3.0)
a + 1  # gives Value(data=4.0)
```
The same line is added to `__mul__` as well to provide automatic type conversion:
```python
other = other if isinstance(other, Value) else Value(other)
```
Now, the following will work:
```python
a = Value(3.0)
a * 2  # gives Value(data=6.0)
```
## A (Potentially) Surprising Bug
How about the following code - will this work?
```python
a = Value(3.0)
2 * a
```
The answer is no: Python first tries `int.__mul__(2, a)`, which returns `NotImplemented`, then looks for `Value.__rmul__`, which doesn't exist yet, so the expression raises a `TypeError`.
So, to solve this ordering problem, in Python we must define `__rmul__` (right multiply):
```python
class Value:
    ...
    def __rmul__(self, other):  # other * self
        return self * other
```
Now `2 * a` works and gives `Value(data=6.0)`.
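Putting the pieces together, here is a minimal data-only sketch (gradients omitted) showing the `__rmul__` fallback in action:

```python
# When Python sees 2 * a, int.__mul__ returns NotImplemented,
# so Python falls back to Value.__rmul__(a, 2).
class Value:
    def __init__(self, data):
        self.data = data

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data)

    def __rmul__(self, other):  # other * self
        return self * other

a = Value(3.0)
print((2 * a).data)  # 6.0
```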
## Implementing The Exponential Function
We add the following method to the `Value` class for calculating `e**x`:
```python
def exp(self):
    x = self.data
    out = Value(math.exp(x), (self,), 'exp')

    def _backward():
        self.grad += out.data * out.grad  # d(e^x)/dx = e^x, then apply the chain rule
    out._backward = _backward

    return out
```
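The local derivative used in `_backward` can be verified numerically with a central difference, using plain floats (a standalone sanity check, not part of the class):

```python
import math

# Verify d(e^x)/dx = e^x at a sample point via central difference.
x, h = 0.7, 1e-6
numeric = (math.exp(x + h) - math.exp(x - h)) / (2 * h)
analytic = math.exp(x)
print(abs(numeric - analytic) < 1e-6)  # True
```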
## Adding Support for a / b
We want to support division of `Value` objects. Conveniently, we can reformulate a / b in terms of operations we can differentiate:
```
a / b
= a * (1 / b)
= a * (b**-1)
```
To implement this scheme we need a `__pow__` (power) method:
```python
def __pow__(self, other):
    assert isinstance(other, (int, float)), "only supporting int/float powers for now"
    out = Value(self.data**other, (self,), f'**{other}')

    def _backward():
        self.grad += (other * self.data**(other - 1)) * out.grad
    out._backward = _backward

    return out
```
The above `_backward` implements the power rule, `d(x**n)/dx = n * x**(n-1)`, to calculate the derivative of a power expression.
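The power rule at `n = -1`, the exact exponent division will rely on, can be sanity-checked numerically (a standalone check with plain floats):

```python
# Verify d(x^n)/dx = n * x^(n-1) at n = -1 via central difference.
x, n, h = 4.0, -1, 1e-6
numeric = ((x + h)**n - (x - h)**n) / (2 * h)
analytic = n * x**(n - 1)  # -1 * 4**-2 == -0.0625
print(abs(numeric - analytic) < 1e-6)  # True
```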
We also need subtraction, which we build from addition and negation:
```python
def __neg__(self):  # -self
    return self * -1

def __sub__(self, other):  # self - other
    return self + (-other)
```
## The Test - Replace old tanh with its constituent formula
The code:
```python
# inputs x1, x2
x1 = Value(2.0, label='x1')
x2 = Value(0.0, label='x2')
# weights w1, w2
w1 = Value(-3.0, label='w1')
w2 = Value(1.0, label='w2')
# bias of the neuron
b = Value(6.8813735870195432, label='b')
# x1*w1 + x2*w2 + b
x1w1 = x1 * w1; x1w1.label = 'x1*w1'
x2w2 = x2 * w2; x2w2.label = 'x2*w2'
x1w1x2w2 = x1w1 + x2w2; x1w1x2w2.label = 'x1*w1 + x2*w2'
n = x1w1x2w2 + b; n.label = 'n'
# tanh(n), spelled out via exp
e = (2*n).exp()
o = (e - 1) / (e + 1)
o.label = 'o'
o.backward()
draw_dot(o)
```
The result:
You can check via the last post that the output of the tanh operation was 0.7071. Even after the change it is the same, so we were able to break tanh down into more fundamental operations such as exp, pow, subtract, etc.
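The equivalence can also be confirmed with plain floats, recomputing the example's pre-activation `n = x1*w1 + x2*w2 + b` outside the `Value` class:

```python
import math

# Recompute n for the inputs above and compare the exp-based formula to tanh.
n = 2.0 * -3.0 + 0.0 * 1.0 + 6.8813735870195432
e = math.exp(2 * n)
o = (e - 1) / (e + 1)
print(round(o, 4), round(math.tanh(n), 4))  # 0.7071 0.7071
```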
## Reference
The spelled-out intro to neural networks and backpropagation: building micrograd - YouTube
*AI agents write code fast. They also silently remove logic, change behavior, and introduce bugs, without telling you. You often find out in production. git-lrc fixes this. It hooks into `git commit` and reviews every diff before it lands. 60-second setup. Completely free.*

Any feedback or contributors are welcome! It's online, source-available, and ready for anyone to use. ⭐ Star it on GitHub: HexmosTech/git-lrc