From Prompts to Pipelines: What I Learned About Building Real AI Agents
When I joined the AI Agents Intensive, I thought “agents” were just smart chatbots. By the end, I was building ProdigyFlow — a full multi-agent pipeline that loads raw data, cleans it, analyzes it, creates charts, and finally produces an automated report.
This course completely changed how I think about AI.
The biggest shift for me was understanding that agents are not one big model — they’re a team. Each agent has a job, its own tools, and a clear role in the pipeline.
In ProdigyFlow, we built:
- a Cleaning Agent to fix messy data
- an Analysis Agent to derive insights
- a Visualization Agent to generate graphs
- and a Main File to make them work together smoothly
Seeing these parts collaborate felt like building a small company inside Python.
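To make that structure concrete, here is a minimal sketch of how such a pipeline can be wired together. The class names, file paths, and logic below are illustrative assumptions, not the actual ProdigyFlow code — just the same load → clean → analyze → visualize handoff pattern in miniature:

```python
# Illustrative sketch of a multi-agent data pipeline.
# Class names, the CSV path, and the cleaning/analysis logic are hypothetical,
# not the real ProdigyFlow implementation.
import pandas as pd
import matplotlib.pyplot as plt


class CleaningAgent:
    """Fixes messy data: drops duplicates and fills missing numeric values."""

    def run(self, df: pd.DataFrame) -> pd.DataFrame:
        df = df.drop_duplicates()
        numeric_cols = df.select_dtypes("number").columns
        df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())
        return df


class AnalysisAgent:
    """Derives simple insights — here just summary statistics."""

    def run(self, df: pd.DataFrame) -> pd.DataFrame:
        return df.describe()


class VisualizationAgent:
    """Generates a chart from the cleaned data and saves it to disk."""

    def run(self, df: pd.DataFrame, path: str = "report_chart.png") -> str:
        df.select_dtypes("number").plot(kind="hist", alpha=0.5)
        plt.title("Distribution of numeric columns")
        plt.savefig(path)
        plt.close()
        return path


def main(csv_path: str) -> None:
    # The "main file": loads raw data and passes it through each agent in turn.
    raw = pd.read_csv(csv_path)
    clean = CleaningAgent().run(raw)
    insights = AnalysisAgent().run(clean)
    chart_path = VisualizationAgent().run(clean)
    print(insights)
    print(f"Chart saved to {chart_path}")


if __name__ == "__main__":
    main("data.csv")  # hypothetical input file
```

The real agents do much more than this, but the shape is the same: each agent owns one job, and the main file just passes data from one to the next.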
There were a lot of behind-the-scenes struggles too:
- API keys broke so many times we memorized our .env file 😭
- GitHub workflows failed at 2 AM for reasons we did not understand
- We learned how important logs, error handling, and reproducible runs really are (see the sketch below)
- Python libraries (Pandas, Matplotlib, tqdm) became our superpowers
But every challenge made the final system more real.
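The logging and error-handling lesson boiled down to a pattern like this — a hypothetical sketch, not our exact code: wrap every pipeline step, log what happened, and fail loudly instead of silently.

```python
# Hypothetical sketch of the "log everything, fail loudly" pattern.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
logger = logging.getLogger("pipeline")


def run_step(name, func, data):
    """Run one pipeline step, logging its start, success, or failure."""
    logger.info("Starting step: %s", name)
    try:
        result = func(data)
    except Exception:
        # Log the full traceback so 2 AM failures are at least explainable.
        logger.exception("Step failed: %s", name)
        raise
    logger.info("Finished step: %s", name)
    return result


if __name__ == "__main__":
    # Tiny demo: one step that succeeds, one that fails.
    run_step("double", lambda x: x * 2, 21)
    try:
        run_step("explode", lambda x: 1 / 0, 21)
    except ZeroDivisionError:
        pass  # the failure has already been logged with its traceback
```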
This project also became a true team win. My teammate Priyamvadha and I figured things out together — fixing bugs, testing agents, and redesigning the pipeline more than once.
Here are the project links if you want to explore:
🔗 GitHub: https://github.com/komalharshita/prodigyflow
🔗 Kaggle Submission: https://www.kaggle.com/competitions/agents-intensive-capstone-project/writeups/ProdigyFlow
🔗 My LinkedIn: https://www.linkedin.com/in/komalharshita/
Huge thanks to Google, Kaggle, and the mentors for creating this opportunity. I started this course writing simple prompts, and finished by building an end-to-end intelligent pipeline.