The PhD Student Who Didn't Know What She Was Missing
I am a PhD student at VUB working on the H+c analysis for the CMS experiment at CERN. My days are filled with ROOT histograms, machine learning models for jet tagging, endless debugging, and lots of chats with Gemini, ChatGPT, and other AI models. I have trained neural networks and transformers; I thought I knew AI.
Then I took the 5-Day AI Agents Intensive Course and realized I had been using AI tools without understanding AI systems.
This was my first real AI course, and it changed everything.
Day 1: The Question That Broke My Brain
"What's the difference between an LLM and an agent?"
My honest first thought: "Aren't they the same thing?"
Then they showed the architecture: Planner, Executor, Memory, Evaluator.
And I saw myself. That's literally my research workflow:
Plan today's analysis strategy
Execute the code
Remember what failed yesterday
Evaluate if I'm making progress
I'm doing the agent loop manually: Every Single Day.
The realization: What if the system could do this reasoning autonomously?
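That loop can be sketched as a tiny program. Everything below is a toy illustration I'm inventing for this post, not code from the course: the step names and the simulated failure are placeholders, and in a real agent the planner and executor would be LLM calls and tool invocations.

```python
# Toy sketch of the Planner -> Executor -> Evaluator loop over shared Memory.
# All steps and outcomes are hard-coded stand-ins, purely for illustration.

def plan(memory):
    # Planner: pick the next untried step, skipping known failures.
    for step in ["baseline fit", "retrain tagger", "rerun selection"]:
        if step not in memory["failed"] and step not in memory["done"]:
            return step
    return None

def execute(step):
    # Executor: run the step; here we just simulate one failure.
    return step != "baseline fit"  # pretend the baseline fit fails

def evaluate(step, success, memory):
    # Evaluator: record the outcome so the next plan improves.
    memory["done" if success else "failed"].append(step)

memory = {"done": [], "failed": []}
for _ in range(3):  # three "days" of the loop
    step = plan(memory)
    if step is None:
        break
    evaluate(step, execute(step), memory)

print(memory)  # the failed baseline fit is remembered, not retried
```

The point of the sketch is the feedback wiring, not the steps themselves: what the evaluator writes into memory changes what the planner does next, which is exactly the part I had been doing by hand.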
Day 4: The Validation Part
Later on Day 4, the course showed how to validate agent decisions.
To an experimental physicist, this felt familiar. We validate everything: multiple checks, cross-references, and so on.
The insight: Agents need the same rigor we apply to detector data. Not blind trust but systematic validation.
Day 5: Building My Advisory Panel
For my capstone project, I built an AI Advisory Panel: a multi-agent system where specialized agents collaborate to solve problems.
The concept: Instead of one agent trying to do everything, create a panel of experts:
One agent analyzes the problem
Another suggests solutions
A third evaluates trade-offs
They debate and reach consensus
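A minimal version of that panel can be sketched in a few functions. This is my own illustrative stand-in, not the capstone code: each role here is a deterministic function, where a real system would make a separate LLM call per role, and the "consensus" is reduced to a single scoring pass rather than an actual debate.

```python
# Toy three-role advisory panel: analyst -> proposer -> evaluator -> consensus.
# All role behaviors and scores are hard-coded placeholders for illustration.

def analyst(problem):
    # Role 1: restate the core issue.
    return f"Core issue in '{problem}': the model has too many free parameters."

def proposer(analysis):
    # Role 2: suggest candidate solutions for the analyzed problem.
    return ["fix priors", "reduce model complexity", "collect more data"]

def evaluator(proposals):
    # Role 3: score trade-offs on a made-up cost/benefit scale.
    scores = {"fix priors": 2, "reduce model complexity": 3, "collect more data": 1}
    return {p: scores.get(p, 0) for p in proposals}

def panel(problem):
    analysis = analyst(problem)
    ranked = evaluator(proposer(analysis))
    # Consensus step: the highest-scoring proposal wins.
    return analysis, max(ranked, key=ranked.get)

analysis, decision = panel("overfitting in my jet tagger")
print(decision)
```

Even in this toy form, the design choice is visible: no single role sees the whole problem, and the final answer comes from combining their outputs rather than from any one of them.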
I thought about my research group meetings: that's exactly what we do. Different people, different expertise, working toward the best answer.
Why I built it this way: I wanted to create something broadly useful, something that could help everyone regardless of their field. Everyone benefits from multiple perspectives.
The demo can be found here: https://youtu.be/hKYb8bk01AI
What Actually Changed
Before: I saw AI as passive tools waiting for my commands.
After: I see AI as systems I can design to reason, collaborate, and act autonomously.
The practical difference: I'm now thinking about building a full research partner agent: one that doesn't just execute tasks but actually collaborates on the physics. Something that can:
Suggest alternative analysis approaches I haven't considered.
Point out potential systematic issues before they become problems.
Help brainstorm solutions when I'm stuck.
Learn from my reasoning patterns and complement my thinking.
Not replacing me, but partnering with me.
What I'm Taking Forward
Now that I have built something general-purpose, I want to create my specialized research partner agent: something deeply integrated with my analysis that knows my files, understands the CMS detector, and can have actual scientific discussions with me. Still learning, still building.
Thank You
To Google and Kaggle: For making this accessible and intense. The daily structure worked perfectly.
To the Discord community: For the debugging help and the reminder that we are all figuring this out together.