What I Learned While Working on My First CNN-Based Project
As a final-year E&TC engineering student, I recently worked on my first Convolutional Neural Network (CNN) based project focused on brain tumor detection using MRI images. This post is not a tutorial; it’s a reflection on what I learned and what initially confused me.
Why I chose this project
I wanted to work on a project that was:
- Academically meaningful
- Related to deep learning
- Practical enough to show how theory translates into code

Medical image analysis using CNNs seemed challenging but interesting, especially because it connects machine learning with real-world impact.
Dataset and problem understanding
The dataset included brain MRI images labeled as tumor and non-tumor cases. Before writing any code, I realized that understanding the problem statement and dataset is more important than jumping right into model building.
Some early questions I had:
- What does each label actually represent?
- Are images already preprocessed?
- Is the dataset balanced?

Ignoring these questions at first caused confusion later.
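The balance check in particular is cheap to automate. A minimal sketch, using made-up labels rather than the actual dataset, might look like this:

```python
# Sketch of a class-balance check before training.
# The `labels` list is illustrative, not the project's real data.
from collections import Counter

labels = ["tumor", "no_tumor", "tumor", "tumor", "no_tumor", "tumor"]

counts = Counter(labels)
total = sum(counts.values())
for cls, n in counts.items():
    # Report each class as a count and a fraction of the dataset.
    print(f"{cls}: {n} ({n / total:.0%})")
```

If one class dominates, plain accuracy becomes misleading, which is why this check is worth doing before any model is built.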
Understanding CNNs beyond theory
I had studied CNNs in theory, but implementing them was different.
Things that became clearer only after coding include:
- Convolution layers are feature extractors, not magic blocks.
- Pooling layers mainly help reduce spatial dimensions and overfitting.
- More layers do not automatically mean better accuracy.

I also learned that small changes in structure can significantly affect results.
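The "feature extractor" point becomes concrete once you compute a convolution by hand. Below is a toy NumPy sketch (not the project's code) of a single convolution with a vertical-edge kernel, followed by max pooling; the edge between the dark and bright halves of the image produces the strongest responses, and pooling halves the spatial dimensions:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, as used in CNN conv layers."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """2x2 max pooling: keeps the strongest response per block."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy "image": dark left half, bright right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Vertical-edge kernel: responds strongly at the dark/bright boundary.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])

features = conv2d(image, kernel)  # 4x4 feature map, peaks at the edge
pooled = max_pool(features)       # 2x2 after pooling
```

Nothing magic: the kernel is just a small weight pattern, and training a CNN means learning such patterns from data instead of hand-picking them.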
Challenges I faced
Some difficulties I encountered were:
- Overfitting on training data
- Confusion between validation accuracy and actual performance
- Choosing hyperparameters without just copying values from others

These issues forced me to slow down and understand why things were happening, rather than simply fixing errors.
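One standard response to overfitting is early stopping: halt training once validation loss stops improving. Here is a minimal sketch of the stopping logic in plain Python, with made-up loss values (this is the general technique, not necessarily what my project used):

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the best epoch, once validation loss has failed to
    improve for `patience` consecutive epochs."""
    best_epoch, best_loss, waited = 0, float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_epoch, best_loss, waited = epoch, loss, 0
        else:
            waited += 1
            if waited >= patience:
                break  # stop training; keep the best checkpoint
    return best_epoch

# Validation loss falls, then rises as the model starts overfitting.
history = [0.90, 0.62, 0.48, 0.41, 0.44, 0.47, 0.53]
best = early_stop_epoch(history)  # epoch 3, the minimum
```

Frameworks ship this as a callback (e.g. Keras `EarlyStopping`), but writing it out once made the train/validation distinction much clearer to me.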
Importance of explainability
While working on this project, I realized that accuracy alone is not enough, especially in medical applications. This made me explore concepts of explainability to better understand model predictions and decision-making.
Even simple visualization techniques helped me trust the model more.
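One of the simplest such techniques is occlusion sensitivity: cover patches of the input and measure how much the model's score drops. The sketch below uses a toy stand-in `score` function instead of a trained network, purely to show the mechanics:

```python
import numpy as np

def score(image):
    # Toy "model": responds to brightness in the top-left 3x3 region.
    # A real model's predicted tumor probability would go here.
    return image[:3, :3].mean()

def occlusion_map(image, patch=3):
    """Heatmap of the score drop when each patch is zeroed out."""
    base = score(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0
            heat[i // patch, j // patch] = base - score(masked)
    return heat

image = np.ones((6, 6))
heat = occlusion_map(image)
# Only the top-left patch matters to this toy model, so only
# heat[0, 0] is nonzero: the heatmap points at the "evidence".
```

For a brain MRI classifier, a heatmap like this (or a gradient-based method such as Grad-CAM) shows whether the model is actually looking at the tumor region.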
What I learned overall
This project taught me that:
- Machine learning is iterative, not linear.
- Debugging ML models requires patience and careful observation.
- Reading code and results is as important as writing code.

Most importantly, I learned how much I still need to improve, which is a positive realization.
What I plan to improve next
- Better model evaluation techniques
- Cleaner project structure
- A deeper understanding of explainable AI methods

I plan to keep refining this project and documenting my learning along the way.
Final note
I’m sharing this to document my learning, not to claim expertise. If you’re also starting with CNNs or academic ML projects, I’d be happy to exchange ideas and learn together.
Thanks for reading!