Deeksha Garg

5 Days to Clarity: Demystifying AI Agents

Before I enrolled in the 5-day AI agents intensive course, I barely knew more than the definition of an agent, which is about what you can expect from someone in her second year of college who had never taken a course on AI before. While signing up, I thought, okay, this course will teach me the basics of agents. Instead, throughout the course I found myself not just reading theory but doing hands-on labs where I actually saw the code come into action and the concepts being implemented. By the end of the fifth day, I was studying best practices for deploying an agent.

DAY 1

The whitepaper felt as if a teacher were explaining the concepts to us. The simple analogy that describes the model as the agent's brain, tools as its hands, the orchestration layer as its nervous system, and deployment as its body and legs helped me visualize things right from day 1. Until now, I was used to giving a prompt to ChatGPT and receiving a result. The whitepaper enlightened me about the 'Think, Act, Observe' loop that happens behind the scenes. What struck me was how we are progressing towards self-evolving systems, i.e., agentic systems that can expand their resources by creating new tools or even new agents at runtime.
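
To make the loop concrete for myself, here is a minimal sketch of a Think, Act, Observe cycle in plain Python. This is my own simplification with made-up `think` and `act` stubs, not code from the course; a real agent would call an LLM in the think step and real tools in the act step.

```python
# A toy Think, Act, Observe loop (my own simplification, not course code).

def think(goal, observations):
    # Decide the next action from the goal and everything observed so far.
    if not observations:
        return ("search", goal)          # nothing known yet: gather information
    return ("answer", observations[-1])  # enough context: produce the answer

def act(action, argument):
    # Execute the chosen tool. 'search' is a fake tool for illustration.
    if action == "search":
        return f"fact found about '{argument}'"
    return argument

def run_agent(goal, max_steps=5):
    observations = []
    for _ in range(max_steps):
        action, arg = think(goal, observations)   # Think
        result = act(action, arg)                 # Act
        observations.append(result)               # Observe
        if action == "answer":
            return result
    return "gave up"

print(run_agent("world population trends"))
```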

DAY 2

Who would have thought that agents themselves could be used as tools? The delegation of tasks felt like the Indian government system, where the central government delegates tasks to several ministries. Before diving deep into the course, I thought building AI agents was all about technical know-how. I never gave importance to documentation, and I did not know that anything like best practices for tool use existed in the AI world. Day 2 changed my views on that. I began to see how the entangled "N x M" integration problem can make a mess, and why we need a protocol such as MCP (the Model Context Protocol).
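
Here is a toy sketch of the agent-as-a-tool idea, with hypothetical `research_agent` and `writer_agent` functions of my own invention, not anything from the course material. The point is only the shape: a parent agent delegates to specialists the same way it would call any other tool.

```python
# Toy illustration of "agents as tools" (hypothetical names).
# A parent agent delegates sub-tasks to specialist agents, exactly like
# it would call any other tool.

def research_agent(query):
    # Specialist agent #1: pretend to look something up.
    return f"notes on {query}"

def writer_agent(notes):
    # Specialist agent #2: pretend to draft prose from notes.
    return f"article based on: {notes}"

TOOLS = {"research": research_agent, "write": writer_agent}

def root_agent(task):
    # The "central government" plans, then delegates to its "ministries".
    notes = TOOLS["research"](task)
    return TOOLS["write"](notes)

print(root_agent("world economy"))
```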

DAY 3

What if I tell the agent, "My favourite colour is blue," and later ask it, "What is my preferred hue?" What if I tell the agent that I have stopped liking blue; will it know that it needs to update its database? Does the agent need to remember that I said "Good morning" at the beginning of our conversation? These were the kinds of questions answered on day 3. An agent without memory is like an assistant with amnesia. No matter which field your agent specializes in, its usefulness is limited without sessions and memory, the building blocks of context engineering.
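
To picture the difference between a session and long-term memory, I like this bare-bones sketch (my own illustration, with an invented `AgentState` class): the session is the running transcript of one conversation, while memory holds facts that persist across conversations and get overwritten when a preference changes.

```python
# A bare-bones sketch of session vs. long-term memory (my own illustration).

class AgentState:
    def __init__(self):
        self.session = []  # short-term: turns in the current chat
        self.memory = {}   # long-term: key -> remembered fact

    def add_turn(self, role, text):
        self.session.append((role, text))

    def remember(self, key, value):
        # Overwriting handles "I stopped liking blue" style updates.
        self.memory[key] = value

state = AgentState()
state.add_turn("user", "Good morning")
state.remember("favourite_colour", "blue")
state.remember("favourite_colour", "green")  # preference changed: update, don't append
print(state.memory["favourite_colour"])      # -> green
```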

DAY 4

If I ask a calculator what 2 + 3 is, I know it should give me 5. On the other hand, if I ask a writer agent to write an article on the world economy, there is no single correct answer. In that case, how do we check the correctness of our agent? Also, if an agent I built made a mistake, I would have to go through its entire thinking process. Did it call the correct tools? Did the tools it called give it the right information? Day 4 made me believe that debugging an agent is actually a more mind-boggling, never-ending process than building the agent itself. What stayed with me was how we can automate the process by implementing LLM-as-a-judge, and make it more reliable by introducing a human in the loop.
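
Here is a minimal LLM-as-a-judge sketch, assuming a hypothetical `call_llm` function standing in for a real model call. The idea is simple: a second model grades the agent's answer, and low scores get escalated to a human.

```python
# A minimal LLM-as-a-judge sketch (hypothetical call_llm function; in
# practice this would be an API call to whichever model acts as the judge).

JUDGE_PROMPT = """You are grading an AI agent's answer.
Question: {question}
Answer: {answer}
Reply with a score from 1 to 5 and a one-line justification."""

def call_llm(prompt):
    # Stub standing in for a real model call.
    return "4 - mostly accurate, slightly verbose"

def judge(question, answer):
    verdict = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    score = int(verdict.split()[0])
    # Human-in-the-loop: low scores get routed to a person for review.
    needs_human_review = score <= 2
    return score, needs_human_review

print(judge("Summarize the world economy", "A draft article..."))
```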

DAY 5

Day 5 focused on deployment and the A2A protocol, a framework that allows different agents to 'talk' to each other. What the course was trying to teach us was not just how to build agents, but how to build agents that real-world businesses could depend on. The continuous evaluation an agent requires is proof that we still cannot build fully trustworthy agents; the remote control is still in human hands.
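
Here is a toy mental model of that idea, not the actual A2A specification: one agent publishes what it can do, and another discovers it and hands over a task it cannot handle itself. All the names here are mine, invented for illustration.

```python
# Toy mental model of agent-to-agent communication (NOT the real A2A spec).
# One agent advertises its skills; another discovers it and delegates a task.

REGISTRY = {}  # stand-in for real network discovery

def publish(name, skills, handler):
    REGISTRY[name] = {"skills": skills, "handler": handler}

def delegate(skill, task):
    # Find any remote agent advertising the needed skill and send the task.
    for agent in REGISTRY.values():
        if skill in agent["skills"]:
            return agent["handler"](task)
    raise LookupError(f"no agent offers skill '{skill}'")

publish("translator", ["translate"], lambda t: f"[translated] {t}")
print(delegate("translate", "hello world"))
```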

During these 5 days, I journeyed through the world of AI agents, moving from the concept of a single agent to multi-agent systems, and from evaluating an agent to deploying it. I spent the 15 days after completing the course building a project to put all of these learnings into practice.

Introducing my project: SketchSensei

This is for everyone who has ever tried to draw a realistic human head and got stuck. Figuring out head orientation and proportions is challenging for a beginner. Loomis guidelines make things easier, but drawing them manually is difficult and slow. SketchSensei comes to the rescue here. It is a Loomis-style drawing assistant that not only overlays Loomis guidelines on your input image, but also generates step-by-step instructions for drawing the head. All you have to do is grab a pencil and start drawing, learning art the Loomis way!

Conclusion

In closing, I would like to thank Google x Kaggle not just for providing this course, but for making it so engaging and for equipping it with all the material needed to get these concepts into a beginner's head.
