I've been dabbling with deep learning for quite some time by doing coursework, revising fundamentals, and building projects whenever possible.
But there was always this feeling of not being confident enough to say that I actually knew deep learning.
After struggling with consistency and a bunch of other issues, I finally decided to properly follow a course from start to finish.
I am following the Deep Learning with PyTorch course by CampusX on YouTube.
It has been a great learning experience because I am finally understanding many technical nuances and gaining real experience by building models from scratch and improving them step by step.
I kept thinking of writing a blog someday or sharing updates on X, but I kept procrastinating. I used to feel that if I spent time writing or posting, I would not be able to manage my learning goals.
Nevertheless, I finally broke that procrastination loop today and decided to share my learnings with you all.
This post is just a brief overview of what I have covered so far. More technical and detailed blogs will follow soon.
What I Have Learned So Far
1. PyTorch Basics
We started with the fundamentals of PyTorch, how it compares with TensorFlow, and the key features that make it so widely used in modern deep learning. One major highlight was the dynamic computation graph, which makes experimentation and debugging much easier.
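The dynamic computation graph is easy to see in a tiny sketch (my own toy function, not from the course): because the graph is built as operations run, plain Python control flow can change what gets differentiated.

```python
import torch

# Dynamic computation graph: the graph is built as operations execute,
# so ordinary Python control flow (if/for) can change it per input.
def f(x):
    if x.sum() > 0:          # branch decided at runtime
        return (x * 2).sum()
    return (x ** 2).sum()

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = f(x)                     # takes the x * 2 branch since x.sum() > 0
y.backward()                 # gradients flow through the branch taken
print(x.grad)                # tensor([2., 2.])
```

In a static-graph framework you would need special graph-level conditionals for this; in PyTorch it is just an `if`.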
2. Tensors in PyTorch
Next, we went deep into tensors. We practiced various tensor operations and understood how tensors, which are multi-dimensional arrays, are used to represent real-world data and perform efficient computations.
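A few of the basic operations look like this (a minimal sketch with a made-up 2x3 matrix):

```python
import torch

# A tensor is a multi-dimensional array; here, a 2x3 matrix of floats.
a = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

print(a.shape)        # torch.Size([2, 3])
print(a.T.shape)      # transpose -> torch.Size([3, 2])
print(a * 2)          # elementwise scaling
print(a @ a.T)        # matrix multiplication -> 2x2 result
print(a.view(3, 2))   # reshape the same underlying data
```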
3. Autograd
Then came one of the most powerful tools in PyTorch: Autograd.
It is the automatic differentiation engine that lets us compute gradients without manually writing long mathematical derivatives for every function. This was a big conceptual win.
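Here is a minimal example of that win (my own toy polynomial, not from the course): one `backward()` call gives the derivative that you would otherwise compute by hand.

```python
import torch

# Autograd records operations on tensors with requires_grad=True
# and computes gradients with a single backward() call.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x + 1   # y = (x + 1)^2

y.backward()             # dy/dx = 2x + 2
print(x.grad)            # tensor(8.) at x = 3
```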
4. Building a Training Pipeline from Scratch
We worked on the Breast Cancer Detection dataset and built a basic neural network with a few layers. For learning purposes, we did not use PyTorch’s built-in loss functions or optimizers at first. We implemented them manually to understand what really happens under the hood.
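To give a flavour of the "manual" style, here is one training step of logistic regression where both the binary cross-entropy loss and the SGD update are written by hand. The data is a tiny made-up toy batch, not the actual Breast Cancer dataset, and this is my own sketch rather than the course's exact code.

```python
import torch

# One manual training step: loss and parameter update written by hand
# instead of using nn.BCELoss / torch.optim. Toy data, not the real set.
torch.manual_seed(0)
X = torch.randn(8, 5)                    # 8 samples, 5 features
y = torch.randint(0, 2, (8, 1)).float()  # binary labels

w = torch.zeros(5, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
lr = 0.1

y_hat = torch.sigmoid(X @ w + b)
# Binary cross-entropy written out manually
loss = -(y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat)).mean()
loss.backward()

with torch.no_grad():                    # manual SGD update
    w -= lr * w.grad
    b -= lr * b.grad
    w.grad.zero_()                       # reset gradients for the next step
    b.grad.zero_()

print(loss.item())                       # log(2) ≈ 0.6931 with zero weights
```

With all-zero weights every prediction starts at 0.5, so the first loss is exactly log 2, which is a nice sanity check.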
5. Using the nn Module
After that, we shifted to PyTorch’s nn module, used built-in loss functions and optimizers, and improved the model’s performance. This showed how PyTorch simplifies real-world model development.
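The same idea becomes much shorter with the built-ins. A minimal sketch (toy data and an arbitrary small architecture of my own, just to show the pattern):

```python
import torch
import torch.nn as nn

# Built-in versions of everything: nn.Sequential for the layers,
# nn.BCELoss for the loss, torch.optim.SGD for the update.
torch.manual_seed(0)
X = torch.randn(8, 5)
y = torch.randint(0, 2, (8, 1)).float()

model = nn.Sequential(
    nn.Linear(5, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
    nn.Sigmoid(),
)
loss_fn = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(5):                 # a few training steps
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(loss.item())
```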
6. Dataset and DataLoader
I learned about the Dataset and DataLoader classes, which are extremely important in the training pipeline. They help manage data loading, batching, and shuffling efficiently.
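A custom Dataset only needs `__len__` and `__getitem__`; DataLoader does the rest. A minimal sketch with made-up random data:

```python
import torch
from torch.utils.data import Dataset, DataLoader

# A minimal custom Dataset: __len__ and __getitem__ are all it needs.
class ToyDataset(Dataset):
    def __init__(self, n=100):
        self.X = torch.randn(n, 5)
        self.y = torch.randint(0, 2, (n,))

    def __len__(self):
        return len(self.X)

    def __getitem__(self, idx):
        return self.X[idx], self.y[idx]

# DataLoader handles batching and shuffling for the training loop.
loader = DataLoader(ToyDataset(), batch_size=16, shuffle=True)
xb, yb = next(iter(loader))
print(xb.shape, yb.shape)   # torch.Size([16, 5]) torch.Size([16])
```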
7. Fashion MNIST Project
Then we worked on the Fashion MNIST dataset. I built an ANN and improved it step by step using:
- GPU acceleration
- Hyperparameter tuning with Optuna
- Better training practices
This part felt very practical and close to real-world experimentation.
I still have a few lectures left in the playlist, so the complete blog will take a little more time!
It has been a really solid learning and building experience.
Huge thanks to Nitish Sir for such an amazing course.
If you want a deep, conceptual, and hands-on experience with deep learning using PyTorch, you should definitely check out this playlist.
I will be back soon with more detailed blogs for individual lessons.
Till then, keep learning, keep building, and keep becoming the best version of yourself.
Peace out 💖✌🏻🧘🏻‍♂️