Pranjal Sharma

Implementation of Perceptron...

Hi! Today, I will implement the fundamental building block of neural networks: the Perceptron.


The perceptron is a foundational unit in neural networks and serves as a basic building block for more complex architectures. Understanding the perceptron is essential for grasping how neural networks function.

1. Introduction to Perceptrons

A perceptron is a simple model of a biological neuron. Introduced by Frank Rosenblatt in 1958, it is one of the earliest models of artificial neural networks. The perceptron takes several input signals, processes them, and produces an output signal.


2. Structure of a Perceptron

A perceptron consists of:

  • Input Nodes: Represent the input features. Each input node corresponds to a feature in the dataset.
  • Weights: Associated with each input node. These weights determine the importance of each input in making the decision.
  • Bias: Added to the weighted sum of the inputs to allow the activation function to shift.
  • Activation Function: Processes the weighted sum of the inputs and the bias. Common activation functions include the step function, sigmoid, and ReLU (Rectified Linear Unit).

3. Mathematical Representation

Mathematically, a perceptron can be represented as follows:

y = f(∑ᵢ wᵢxᵢ + b)

Where:

  • y is the output of the perceptron.
  • xᵢ are the input features.
  • wᵢ are the weights associated with the inputs.
  • b is the bias term.
  • f is the activation function.
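
For a concrete sense of how these pieces combine, here is a minimal sketch of a single forward pass, with made-up weights and bias used purely for illustration:

import numpy as np

# Hypothetical example values: two inputs, two weights, one bias
x = np.array([1.0, 0.0])   # input features x_i
w = np.array([0.75, 0.5])  # weights w_i
b = -0.5                   # bias term

z = np.dot(w, x) + b       # weighted sum of the inputs plus the bias
y = 1 if z > 0 else 0      # step activation f
print(z, y)                # 0.25 1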

Implementation from Scratch

We'll start with a basic implementation in plain Python and NumPy. We'll create a Perceptron class with a weight vector (the bias is folded in as an extra weight fed by a constant input of 1) and a simple step activation function.

import numpy as np

class Perceptron:
    def __init__(self, input_size, learning_rate=0.001, epochs=1000):
        # weights[0] acts as the bias weight; the remaining entries map to the inputs
        self.weights = np.zeros(input_size + 1)
        self.learning_rate = learning_rate
        self.epochs = epochs

    def activation(self, x):
        # Step function: output 1 for positive input, otherwise 0
        return 1 if x > 0 else 0

    def predict(self, x):
        # x is expected to already include a leading 1 for the bias term
        z = np.dot(self.weights, x)
        return self.activation(z)

    def train(self, training_inputs, labels):
        for _ in range(self.epochs):
            for x, y in zip(training_inputs, labels):
                x = np.insert(x, 0, 1)  # Adding bias term
                prediction = self.predict(x)
                self.weights += self.learning_rate * (y - prediction) * x

    def accuracy(self, test_inputs, test_labels):
        correct_predictions = 0
        for x, y in zip(test_inputs, test_labels):
            x = np.insert(x, 0, 1)  # Adding bias term
            if self.predict(x) == y:
                correct_predictions += 1
        return correct_predictions / len(test_inputs)

# Sample data for training
training_inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
labels = np.array([0, 0, 0, 1])

# Create and train the perceptron
perceptron = Perceptron(input_size=2)
perceptron.train(training_inputs, labels)

# Test the perceptron
test_inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
test_labels = np.array([0, 0, 0, 1])

for test_input in test_inputs:
    test_input_with_bias = np.insert(test_input, 0, 1)  # Adding bias term
    print(f"Input: {test_input}, Predicted Output: {perceptron.predict(test_input_with_bias)}")

# Calculate and print the accuracy
accuracy = perceptron.accuracy(test_inputs, test_labels)
print(f"Accuracy: {accuracy * 100:.2f}%")

This from-scratch implementation is straightforward and handles linearly separable problems such as the AND gate above, but a single perceptron with a step activation cannot learn non-linearly separable functions like XOR.
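
To see this limitation concretely, you can reuse the Perceptron class above on XOR labels; no single linear decision boundary separates XOR, so the sketch below cannot reach 100% accuracy (75% is the best a lone perceptron can do on this data):

# Reusing the Perceptron class above on the XOR truth table
xor_inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
xor_labels = np.array([0, 1, 1, 0])

xor_perceptron = Perceptron(input_size=2)
xor_perceptron.train(xor_inputs, xor_labels)

# Accuracy stays below 100% no matter how long we train
print(f"XOR accuracy: {xor_perceptron.accuracy(xor_inputs, xor_labels) * 100:.2f}%")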

TensorFlow Implementation

Next, let's implement a perceptron with TensorFlow's Keras API, which offers a more concise, beginner-friendly syntax. This time the model is trained on the OR gate.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Sample data
X = tf.constant([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]], dtype=tf.float32)
y = tf.constant([[0.0], [1.0], [1.0], [1.0]], dtype=tf.float32)

# Define the Perceptron model
model = Sequential([Dense(1, input_dim=2, activation='sigmoid')])

# Compile the model
model.compile(optimizer='sgd', loss='binary_crossentropy')

# Train the model
epochs = 1000
history = model.fit(X, y, epochs=epochs, verbose=0)

# Print the final loss
final_loss = history.history['loss'][-1]
print(f'Final Loss: {final_loss:.4f}')

# Test the model
predictions = model.predict(X).round()
print(f'Predictions:\n{predictions}')
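
If you want to peek at what the single Dense unit has learned, Keras lets you read the kernel and bias back from the layer. A small sketch, assuming the model trained above:

# Inspect the learned weights and bias of the single Dense unit
kernel, bias = model.layers[0].get_weights()
print(f'Learned weights: {kernel.ravel()}, bias: {bias}')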

PyTorch Implementation

Now let's implement the same perceptron in PyTorch, which gives developers more explicit control over the model definition and the training loop.

import torch
import torch.nn as nn
import torch.optim as optim

class Perceptron(nn.Module):
    def __init__(self, input_dim):
        super(Perceptron, self).__init__()
        self.fc = nn.Linear(input_dim, 1)

    def forward(self, x):
        x = self.fc(x)
        return torch.sigmoid(x)

# Sample data
X = torch.tensor([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]], dtype=torch.float32)
y = torch.tensor([[0.0], [1.0], [1.0], [1.0]], dtype=torch.float32)

# Define model, loss function, and optimizer
input_dim = X.shape[1]
model = Perceptron(input_dim)
criterion = nn.BCELoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

# Train the model
epochs = 10000
for epoch in range(epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(X)
    loss = criterion(outputs, y)
    loss.backward()
    optimizer.step()

    if (epoch + 1) % 100 == 0:
        print(f'Epoch [{epoch + 1}/{epochs}], Loss: {loss.item():.4f}')

# Test the model
model.eval()
with torch.no_grad():
    test_output = model(X)
    predictions = test_output.round()
    print(f'Predictions:\n{predictions}')
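
To run the trained model on a single new input, wrap the call in torch.no_grad(); the sample input below is just an illustration:

# Probability that the output is 1 for a hypothetical new input
with torch.no_grad():
    sample = torch.tensor([[1.0, 0.0]])
    prob = model(sample).item()
    print(f'P(output = 1 | input = [1, 0]) = {prob:.3f}')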

Stay tuned for the next blog where we'll delve into Multi-Layer Perceptrons (MLP).

Stay connected! Visit my GitHub.
Code

Join our Telegram Channel and let the adventure begin! See you there, Data Explorer! 🌐🚀
