Rijul Rajesh

Inside a Neural Network: How Hidden Layers, Weights, and Biases Work

In my previous article, we discussed activation functions and gradients.

Now let’s see how they are actually used.

For that, we first need to understand a neuron.


What is a neuron

A neuron is the smallest unit in a neural network.

It simply does the following:

w · x + b

Here:

  • w is the weight
  • x is the input
  • b is the bias
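
For example, here is a quick sketch of what a single neuron computes (the values for w, x, and b are arbitrary, just for illustration):

# A single neuron: output = w * x + b
w = 2.0   # weight (illustrative value)
x = 3.0   # input
b = -1.0  # bias

output = w * x + b
print(output)  # 2.0 * 3.0 - 1.0 = 5.0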

Effect of weight and bias

Consider two neurons, each producing a straight line:

  • w = 1, b = 0 → gentle slope, passes through the origin
  • w = 2, b = -1 → steeper slope, shifted downward

From this, we know that:

  • Increasing the weight makes the line steeper
  • Changing the bias moves the line up or down

Python visualization

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 100)

# Two neurons
y1 = 1 * x + 0      # w=1, b=0
y2 = 2 * x - 1      # w=2, b=-1

plt.figure()
plt.plot(x, y1, label="w=1, b=0")
plt.plot(x, y2, label="w=2, b=-1")
plt.xlabel("Input (x)")
plt.ylabel("Output")
plt.title("Effect of Weight and Bias")
plt.legend()
plt.show()


But this setup can only produce straight lines, and stacking linear neurons just yields another straight line.
This is where activation functions come in.


Why activation functions exist

Suppose we apply the ReLU activation function.

  • Before activation → straight line
  • After ReLU → negative values are cut to 0, positive values remain unchanged

So, activation functions introduce non-linearity.
Without them, neural networks are just fancy linear equations.
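
To see why, here is a minimal sketch (with made-up weights) showing that two stacked linear neurons with no activation in between collapse into a single linear neuron:

import numpy as np

x = np.linspace(-5, 5, 100)

# Two stacked linear neurons, no activation in between
w1, b1 = 2.0, -1.0
w2, b2 = 0.5, 3.0

h = w1 * x + b1   # first neuron
y = w2 * h + b2   # second neuron

# The composition is just another line: w = w2*w1, b = w2*b1 + b2
y_collapsed = (w2 * w1) * x + (w2 * b1 + b2)
print(np.allclose(y, y_collapsed))  # True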

Python visualization

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 100)

def relu(z):
    return np.maximum(0, z)

z = 2 * x - 1       # linear output (w=2, b=-1)
a = relu(z)         # activated output

plt.figure()
plt.plot(x, z, label="Before activation")
plt.plot(x, a, label="After ReLU")
plt.xlabel("Input (x)")
plt.ylabel("Value")
plt.title("ReLU Activation Function")
plt.legend()
plt.show()


What is a hidden layer

A hidden layer is basically multiple neurons, each with its own weight and bias.
All neurons see the same input, and their outputs are then combined.

This allows the network to learn more complex patterns than a single neuron.

Python visualization (simple hidden layer)

We will set up two hidden neurons, each with its own weight and bias:

Neuron     Weight   Bias
Neuron 1   1.0      0.5
Neuron 2   -1.0     0.5

import numpy as np
import matplotlib.pyplot as plt

# Input
x = np.linspace(-5, 5, 100)

# -------- Hidden Layer --------

# Neuron 1
w1 = 1.0
b1 = 0.5
z1 = w1 * x + b1        # linear output
a1 = np.maximum(0, z1) # ReLU activation

# Neuron 2
w2 = -1.0
b2 = 0.5
z2 = w2 * x + b2
a2 = np.maximum(0, z2) # ReLU activation

# -------- Output Layer --------

# Output neuron simply combines hidden neurons
y_output = a1 + a2

# -------- Visualization --------

plt.figure()
plt.plot(x, a1, label="Hidden neuron 1")
plt.plot(x, a2, label="Hidden neuron 2")
plt.plot(x, y_output, label="Final output", linewidth=2)
plt.xlabel("Input (x)")
plt.ylabel("Value")
plt.title("Network with One Hidden Layer")
plt.legend()
plt.show()


By adding the hidden neuron outputs, the network produces a more flexible and expressive output.
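
The output neuron above simply sums the hidden activations, which is the special case of output weights 1.0 and bias 0. As a minimal sketch (with illustrative values, not learned ones), the same network can also be written in vectorized matrix form:

import numpy as np

x = np.linspace(-5, 5, 100)

# Hidden layer: one weight and bias per neuron
W_hidden = np.array([1.0, -1.0])
b_hidden = np.array([0.5, 0.5])

# Broadcasting gives z shape (100, 2): one column per hidden neuron
z = x[:, None] * W_hidden + b_hidden
a = np.maximum(0, z)            # ReLU

# Output neuron with its own weights and bias
W_out = np.array([1.0, 1.0])    # summing the activations, as above
b_out = 0.0
y_output = a @ W_out + b_out    # shape (100,)

Written this way, the layer scales naturally: adding more hidden neurons just means adding entries to W_hidden and b_hidden.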

Wrapping up

Hopefully you now have a better idea of how activation functions are used, along with an understanding of weights, biases, and hidden layers.

We will be exploring more advanced concepts soon, now that these fundamentals are covered.

You can try the examples out via the Colab notebook.

If you’ve ever struggled with repetitive tasks, obscure commands, or debugging headaches, FreeDevTools is here to make your life easier. It’s free, open-source, and built with developers in mind.

👉 Explore the tools: FreeDevTools

👉 Star the repo: freedevtools
