Krishna Kumar
Pruning in Deep Learning: Structured vs Unstructured

Deep learning models are becoming larger and more powerful every year. From mobile vision systems to large language models, the number of parameters has exploded. But do we really need all those parameters?

This is where model pruning comes in.

Pruning is a model compression technique that removes unnecessary parameters from neural networks while maintaining performance. It reduces model size, speeds up inference, and lowers computational cost.

In this blog, we’ll explore:

What is pruning?

Why is pruning needed?

Structured vs Unstructured pruning

Practical trade-offs

🚀 Why Do We Need Pruning?

Modern neural networks:

Require large amounts of memory

Consume significant power

Run slowly on edge devices

Are expensive to deploy

For example:

Mobile apps need lightweight models

Embedded systems have limited RAM

Edge AI requires fast inference

Pruning solves these issues by removing redundant weights.

🌳 What is Model Pruning?

Model pruning is the process of removing parameters (weights, neurons, filters, or even layers) from a trained neural network to make it smaller and faster.

The idea is simple:

Many weights in a trained neural network contribute very little to the final prediction.

So we remove them.

Pruning generally follows this workflow (a minimal sketch follows the steps):

Train the full model

Remove less important weights

Fine-tune the pruned model
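Here is a minimal sketch of this loop using PyTorch's built-in pruning utilities. The model is a hypothetical placeholder, and the training and fine-tuning loops are omitted:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical small model standing in for a trained network
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Step 1: train the full model (training loop omitted)

# Step 2: zero out the 50% of weights with the smallest L1 magnitude,
# considered globally across both linear layers
parameters_to_prune = [(model[0], "weight"), (model[2], "weight")]
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.5,
)

# Step 3: fine-tune (the masks keep pruned weights at zero during
# training), then make the pruning permanent
for module, name in parameters_to_prune:
    prune.remove(module, name)
```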

🔹 1. Unstructured Pruning

📌 What is Unstructured Pruning?

Unstructured pruning removes individual weights from the network based on an importance criterion (usually weight magnitude: the smallest weights go first).

It creates sparse matrices — meaning many weights become zero.

How It Works

Calculate the magnitude of each weight
Select weights below a threshold
Set them to zero
Fine-tune the model (see the sketch below)
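A minimal sketch of those steps on a single layer, using PyTorch's prune.l1_unstructured (the layer sizes are arbitrary):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(100, 100)

# Zero out the 70% of weights with the smallest L1 magnitude.
# PyTorch keeps the original tensor as `weight_orig` and applies a
# binary mask, so the layer's shape is unchanged.
prune.l1_unstructured(layer, name="weight", amount=0.7)

sparsity = float((layer.weight == 0).sum()) / layer.weight.nelement()
print(f"Sparsity: {sparsity:.0%}")  # ~70%
```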

Advantages

Can achieve very high compression rates
Minimal accuracy drop
Fine-grained: any individual weight can be removed

Disadvantages

Sparse matrices are not always hardware-friendly
Needs sparse-aware libraries or hardware to realize speed-ups
Irregular memory access patterns

Example

If a layer has 1,000 weights and 70% are pruned:
Only 300 nonzero weights remain
But the layer's shape stays the same

🔹 2. Structured Pruning

What is Structured Pruning?

Structured pruning removes entire neurons, channels, filters, or layers instead of individual weights.

Instead of making matrices sparse, it changes the architecture itself.

How It Works

Evaluate the importance of each filter or neuron
Remove the least important ones
Rebuild the network
Fine-tune the pruned network (see the sketch below)
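A minimal sketch using PyTorch's prune.ln_structured, which zeroes entire filters of a convolutional layer. Note that this masks filters in place rather than physically shrinking the tensor:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3)

# Zero out the 30% of filters (slices along dim 0 of the weight
# tensor) with the smallest L2 norm. The tensor keeps its shape;
# rebuilding a physically smaller layer is a separate step.
prune.ln_structured(conv, name="weight", amount=0.3, n=2, dim=0)
```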

Advantages

Hardware-friendly
Faster inference
Easy deployment
No need for sparse computation libraries

Disadvantages

Somewhat larger accuracy drop when pruned aggressively
Less granular control compared to unstructured pruning

Example

If a CNN layer has 64 filters and 20 are removed:
The new layer has 44 filters
The model becomes physically smaller, as sketched below
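A minimal sketch of that example: rank the 64 filters by L1 norm, keep the top 44, and copy them into a physically smaller layer. (Downstream layers would also need their input channels adjusted; that bookkeeping is omitted here.)

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3)

# Score each filter by the L1 norm of its weights
importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))

# Keep the 44 most important filters, preserving their original order
keep = importance.argsort(descending=True)[:44].sort().values

# Build a smaller layer and copy the surviving filters into it
pruned = nn.Conv2d(in_channels=3, out_channels=44, kernel_size=3)
with torch.no_grad():
    pruned.weight.copy_(conv.weight[keep])
    pruned.bias.copy_(conv.bias[keep])

print(conv.weight.shape, "->", pruned.weight.shape)
# torch.Size([64, 3, 3, 3]) -> torch.Size([44, 3, 3, 3])
```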

When to Use Which?

Use Unstructured Pruning When:
Maximum compression is needed
You have sparse acceleration support
You're experimenting in research

Use Structured Pruning When:
You're deploying to real devices
You're targeting mobile or edge AI
You need actual inference speed-ups

Real-World Applications

MobileNet optimization
Edge AI devices
Autonomous vehicles
NLP model compression
LLM efficiency improvements

Large-scale models often combine:
Pruning
Quantization
Knowledge distillation

Together, they create efficient AI systems.
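A minimal sketch of chaining two of these techniques in PyTorch: pruning followed by post-training dynamic quantization. (Distillation needs a full training setup, so it is omitted here.)

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Prune: zero half the weights in each linear layer, then bake the
# masks into the weight tensors
for module in (model[0], model[2]):
    prune.l1_unstructured(module, name="weight", amount=0.5)
    prune.remove(module, "weight")

# Quantize: convert the linear layers to dynamic int8 for a smaller,
# faster model at inference time
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```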

Final Thoughts

Pruning is not just about reducing size — it's about making AI practical.

As models grow larger, efficiency techniques like pruning become essential. Structured pruning is practical for deployment, while unstructured pruning offers maximum compression.

The future of AI is not just bigger models — but smarter, leaner models.
