Tanmay P. Tawade


🚀 I Built My First CNN for Brain Tumor Detection - Here’s What Actually Confused Me

🧠 From Theory to Reality

As a final-year E&TC engineering student, I recently built my first Convolutional Neural Network (CNN) project for brain tumor detection using MRI images.

This is not a tutorial.

It’s a breakdown of:

  • what I thought I understood
  • what actually confused me
  • what changed after I implemented everything

🤔 Why I Chose This Project

I wanted something that was:

  • Academically meaningful
  • Related to deep learning
  • Practical enough to connect theory with real-world use

Medical image analysis stood out because it’s not just technical - it has real-world impact.


📊 Understanding the Dataset (Where I Initially Went Wrong)

Before writing any code, I should have asked:

  • What exactly do the labels represent?
  • Are the images already preprocessed?
  • Is the dataset balanced?

I didn’t take these seriously at first - and it caused confusion later.

👉 Lesson:

Understanding your dataset is more important than building the model.
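For example, checking class balance takes only a few lines. The sketch below assumes the common Kaggle-style layout of one sub-folder per label (e.g. `yes/` and `no/`); the folder names are my assumption, not necessarily how this dataset is organised:

```python
from collections import Counter
from pathlib import Path

def class_counts(data_dir: str) -> Counter:
    """Count images per class, assuming one sub-folder per label."""
    counts = Counter()
    for class_dir in Path(data_dir).iterdir():
        if class_dir.is_dir():
            counts[class_dir.name] = sum(
                1 for f in class_dir.iterdir()
                if f.suffix.lower() in {".jpg", ".jpeg", ".png"}
            )
    return counts
```

If one class dominates, a model can score high accuracy by always predicting that class, which is exactly the kind of confusion this question would have caught early.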


🖼️ Sample MRI Data

[Image: sample MRI scans from the dataset]

Even a quick visual inspection of data would have helped me understand patterns early.
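A quick inspection grid is one way to do that. This is only a sketch: the random arrays stand in for real MRI slices, and the `labels` list is hypothetical — swap in images and labels loaded from the actual dataset:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs in scripts/CI
import matplotlib.pyplot as plt
import numpy as np

# Stand-in data: replace with real MRI slices and their labels.
images = [np.random.rand(128, 128) for _ in range(9)]
labels = ["tumor" if i % 2 else "no tumor" for i in range(9)]  # hypothetical

fig, axes = plt.subplots(3, 3, figsize=(6, 6))
for ax, img, label in zip(axes.flat, images, labels):
    ax.imshow(img, cmap="gray")
    ax.set_title(label, fontsize=8)
    ax.axis("off")
fig.tight_layout()
fig.savefig("sample_grid.png")
```

Even ten minutes with a grid like this reveals things the label file never will: inconsistent image sizes, odd contrast, or mislabeled samples.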


⚙️ CNNs: What Changed After Implementation

I had studied CNNs before, but coding them changed everything.

Here’s what became clear:

  • Convolution layers are feature extractors, not magic
  • Pooling reduces dimensions and overfitting, not just “data size”
  • More layers ≠ better performance

👉 Biggest realization:

Small architectural changes can significantly impact results.


🧩 A Simple CNN Structure

[Image: block diagram of the CNN architecture]

This helped me finally visualize how data flows through the network.
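In Keras, a structure like the diagram can be written down in a few lines. The layer counts and sizes below are my own minimal choices for illustration, not the exact architecture from this project:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(128, 128, 1)):
    """Small CNN for binary tumor / no-tumor classification."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),  # feature extraction
        layers.MaxPooling2D(),                    # halves spatial dimensions
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),    # tumor probability
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Calling `build_cnn().summary()` prints the shape after each layer, which makes the conv-then-pool data flow in the diagram concrete.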


⚠️ Challenges I Faced

This is where things got real.

  • Overfitting on training data
  • Confusing validation accuracy with real performance
  • Randomly choosing hyperparameters

At one point, I genuinely thought:

“If accuracy is high, the model must be good.”

That assumption was wrong.
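Two standard countermeasures for the overfitting above are dropout and early stopping. A minimal Keras sketch (the layer sizes and `patience` value are illustrative assumptions, and `x_train`/`x_val` are hypothetical arrays you would load yourself):

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import EarlyStopping

# Dropout randomly zeroes activations during training,
# discouraging the network from memorising the training set.
model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Stop when validation loss stops improving, and keep the best weights.
early_stop = EarlyStopping(monitor="val_loss", patience=3,
                           restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=50, callbacks=[early_stop])
```

Watching `val_loss` instead of training accuracy is also the cure for the "high accuracy means good model" trap: a widening gap between the two curves is overfitting made visible.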


🔍 Why Explainability Became Important

In medical applications, accuracy alone isn’t enough.

I started exploring model explainability to answer:

  • Why is the model predicting a tumor?
  • Which part of the image matters most?

Even simple visualization methods helped me trust the model more.


🧠 Model Interpretation Example

[Image: Grad-CAM output for a no-tumor MRI]

[Image: Grad-CAM output for a tumor MRI]

Seeing highlighted regions made predictions more meaningful.
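For reference, the core of Grad-CAM fits in one small function. This is a generic sketch of the technique, not this project's exact code — it assumes a Keras functional model with a single sigmoid output and takes the target conv layer's name as a parameter:

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name):
    """Heatmap of how much each spatial location in a conv layer's
    output contributes to the model's prediction."""
    conv_layer = model.get_layer(conv_layer_name)
    grad_model = tf.keras.Model(model.inputs,
                                [conv_layer.output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, 0]  # tumor-probability output
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # channel importance
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)  # weighted feature maps
    cam = tf.nn.relu(cam)                                # keep positive evidence
    cam = cam / (tf.reduce_max(cam) + 1e-8)              # normalise to [0, 1]
    return cam.numpy()
```

Upsampling the returned heatmap to the input size and overlaying it on the MRI produces the highlighted-region images shown above.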


📈 What This Project Taught Me

  • Machine learning is iterative, not linear
  • Debugging requires patience and observation
  • Reading results is as important as writing code

👉 Most important:

Copying solutions is easy. Understanding them is not.


🔧 What I Plan to Improve Next

  • Better evaluation techniques (beyond accuracy)
  • Cleaner project structure
  • Deeper understanding of explainable AI
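On the first point, precision and recall matter far more than raw accuracy in medical screening, where a missed tumor (a false negative) is much costlier than a false alarm. A small sketch with scikit-learn, using made-up predictions (1 = tumor, 0 = no tumor):

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, f1_score)

# Hypothetical labels and predictions for ten scans.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1, 0, 0])

print(confusion_matrix(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))  # of predicted tumors, how many are real
print("recall:   ", recall_score(y_true, y_pred))     # of real tumors, how many were found
print("f1:       ", f1_score(y_true, y_pred))
```

Here the model is 80% accurate, yet it still misses one of only three tumors — exactly the kind of failure an accuracy-only evaluation hides.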

🔗 Project Reference

You can check the full implementation here:

👉 GitHub Demo

Includes:

  • CNN model implementation
  • Data preprocessing
  • Experimentation and results

💬 Final Thought

I’m not an expert - just someone learning by building.

If you’ve worked on a CNN project:
👉 What confused you the most in the beginning?

Let’s learn together.


👨🏻‍💻 Author

Tanmay Tawade
