
Amit

Posted on • Edited on

Dev Log (D-01 to D-04). Implementing neural networks from scratch.

Disclaimer: I am a noob, so don't rely on this post if you are learning.

Getting the MNIST data (I found this code in https://github.com/tinygrad/tinygrad).

import numpy as np
import gzip
import requests

def fetch(url):
    response = requests.get(url)
    return response.content

def parse(file):
    return np.frombuffer(gzip.decompress(file), dtype=np.uint8).copy()
# Thanks to the https://github.com/tinygrad/tinygrad MNIST example
BASE_URL = "https://storage.googleapis.com/cvdf-datasets/mnist/"
X_train = parse(fetch(f"{BASE_URL}train-images-idx3-ubyte.gz"))[0x10:].reshape((-1, 28*28)).astype(np.float64)
Y_train = parse(fetch(f"{BASE_URL}train-labels-idx1-ubyte.gz"))[8:].astype(np.int8)
X_test = parse(fetch(f"{BASE_URL}t10k-images-idx3-ubyte.gz"))[0x10:].reshape((-1, 28*28)).astype(np.float64)
Y_test = parse(fetch(f"{BASE_URL}t10k-labels-idx1-ubyte.gz"))[8:].astype(np.int8)

# Print the shapes of the loaded data
print("Shape of X_train:", X_train.shape)
print("Shape of Y_train:", Y_train.shape)
print("Shape of X_test:", X_test.shape)
print("Shape of Y_test:", Y_test.shape)
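One thing worth noting (this step is not in the original code, it is a common preprocessing assumption): the raw MNIST pixels are bytes in [0, 255], so dividing by 255 maps them to [0, 1], which keeps the later dot products with the weights from blowing up:

```python
import numpy as np

# Hypothetical normalization step (not in the original post).
# Raw MNIST pixels are in [0, 255]; dividing by 255.0 maps them into [0, 1].
X_raw = np.array([[0, 128, 255]], dtype=np.float64)  # stand-in for X_train
X_norm = X_raw / 255.0
print(X_norm.min(), X_norm.max())  # all values now lie in [0, 1]
```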

Implementing the NN, with help from https://developer.ibm.com/articles/l-neural/.

class Neuron:
    def __init__(self):
        # The weight is a vector: one weight per input pixel.
        self.W = np.random.uniform(-1., 1., size=(784,)).astype(np.float64)
    def forward(self,X):
        self.wX = X.dot(self.W)
        return self.wX

    def sigmoid(self):
        # Numerically stable: np.exp is only ever called on a non-positive argument.
        if self.wX > 0:
            z = np.exp(-self.wX)
            return 1/(1+z)
        else:
            z = np.exp(self.wX)
            return z/(1+z)
    def sigmoid2(self):
        # Naive form: np.exp(self.wX) overflows when wX is large.
        return np.exp(self.wX) / (1 + np.exp(self.wX))

    def tanH(self):
        # np.exp(2*wX) also overflows for large wX; np.tanh(self.wX) is the stable alternative.
        z = np.exp(2*self.wX)
        return (z-1)/(z+1)

# neuron = Neuron()
# print(neuron.forward(X_train[4]))
# print(neuron.sigmoid())

I am stuck at my activation function because the dot product of the input and the weights is too large.

class NLayer:
    def __init__(self, number_of_neurons):
        # Must be an instance attribute: a class-level list would be
        # shared by every NLayer and grow on each construction.
        self.neurons = []
        for n in range(number_of_neurons):
            self.neurons.append(Neuron())

    def forward(self, X):
        for n in self.neurons:
            n.forward(X)
            print(n.tanH())

layer_1 = NLayer(784)
layer_1.forward(X_train[0])
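For what it's worth, the same layer can be sketched as one matrix multiply instead of a loop over 784 `Neuron` objects (this is my own vectorized variant, not the post's code); NumPy's `np.tanh` also handles large inputs without overflowing, and scaling the initial weights by 1/sqrt(784) keeps the dot products small:

```python
import numpy as np

# Vectorized sketch of a fully-connected layer (assumption: 784 inputs, 784 neurons).
# One (784, 784) weight matrix replaces 784 separate Neuron objects.
W = np.random.uniform(-1., 1., size=(784, 784)) / np.sqrt(784)  # scaled init tames wX

def layer_forward(X):
    # X: (784,) input vector -> (784,) activations; np.tanh never overflows.
    return np.tanh(X.dot(W))

out = layer_forward(np.random.uniform(0., 1., size=(784,)))
print(out.shape)  # (784,)
```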
class NNetwork:
    def __init__(self):
        # Instance attribute, for the same shared-state reason as in NLayer.
        self.neuronsLayers = []
    def train(self):
        # Train and update weights. 
        pass
    def test(self):
        # Don't update weights. check weights.
        pass
    def predict(self):
        # Run Model. 
        pass

I am still stuck on the error when executing the activation function code: an overflow, because np.exp receives a number that is too large.
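One likely fix (a standard trick, not something from this post): evaluate `np.exp` only on non-positive arguments, the same idea as the scalar `sigmoid` branch above, but vectorized so it works on whole arrays without ever overflowing:

```python
import numpy as np

def stable_sigmoid(x):
    # np.exp is only applied to non-positive values, so it can never overflow;
    # at worst it underflows harmlessly to 0.
    x = np.asarray(x, dtype=np.float64)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    z = np.exp(x[~pos])
    out[~pos] = z / (1.0 + z)
    return out

print(stable_sigmoid(np.array([-1000.0, 0.0, 1000.0])))  # ~ [0, 0.5, 1], no overflow warning
```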
