DEV Community

Ghazala Arfeen


My Road to AI Agents: A Google & Kaggle Intensive Course Writing Challenge

This is a submission for the Google AI Agents Writing Challenge: [Learning Reflections OR Capstone Showcase]

My Learning Journey / Project Overview

Project Overview

The Ultimate Career Coach: An Agentic AI for Full-Spectrum Job Search

The Ultimate Career Coach is a next-generation, agentic AI career platform designed to support job seekers throughout the entire employment lifecycle—from self-assessment and skill discovery to job search execution, interview preparation, and continuous career growth.

Unlike traditional job boards or static recommendation systems, this platform leverages autonomous AI agents that collaborate to perform specialized tasks. These agents analyze a user’s skills, experience, and career goals, dynamically explore job market data, generate personalized job-matching strategies, and adapt recommendations as the user’s profile or market conditions change.

The system integrates skills intelligence, job market analysis, and task orchestration to deliver a holistic and personalized career coaching experience. By combining reasoning, planning, and tool-use capabilities, the platform functions as an intelligent career partner—capable of guiding users through resume optimization, role targeting, interview readiness, and long-term career planning.

This project demonstrates how agentic AI architectures can move beyond single-prompt interactions to create adaptive, goal-driven systems that operate continuously and contextually in real-world scenarios.

Key Concepts / Technical Deep Dive

Key Takeaways from the AI Agents Intensive Course

This course significantly deepened my understanding of agentic AI systems and how they differ from traditional prompt-based applications. One of the most important insights was learning how agents can decompose complex objectives into actionable steps and execute them autonomously while remaining aligned with a broader goal.

Key learnings include:

How agent specialization improves clarity, scalability, and system design.

The importance of orchestration and planning in enabling agents to work collaboratively.

How tool usage and structured workflows allow agents to interact meaningfully with external data and systems.

Developing this project helped me translate abstract concepts—such as autonomy, memory, and coordination—into a concrete application. The course reshaped my perspective on AI development, highlighting how intelligent agents can function as adaptive systems that evolve rather than static models responding to isolated prompts.

Overall, the experience strengthened my ability to design and reason about multi-agent systems and reinforced the value of agentic architectures for solving complex, real-world problems.

Architecture / System Design
At the base of the system is a Gemini model configuration (for example, gemini-2.5-pro) created with an explicit retry policy. The configuration specifies maximum attempts, an initial delay, a backoff factor, and the HTTP status codes, such as 429 or 503, that should trigger automatic retries. Every LlmAgent shares this configuration, so transient model errors are handled before they ever reach the user.
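The exact values live in the shared model configuration; as a rough, framework-free illustration of the backoff arithmetic such a policy implies (the attempt count, delays, and status codes below are placeholders, not the notebook's actual settings):

```python
# Sketch of the retry arithmetic behind a shared model configuration.
# Attempt counts, delays, and status codes are illustrative placeholders.
RETRYABLE_STATUS = {429, 503}

def backoff_delays(max_attempts: int = 4,
                   initial_delay: float = 1.0,
                   backoff_factor: float = 2.0) -> list[float]:
    """Delay (in seconds) to wait before each retry attempt."""
    return [initial_delay * backoff_factor ** i for i in range(max_attempts - 1)]

def should_retry(status_code: int, attempt: int, max_attempts: int = 4) -> bool:
    """Retry only on transient HTTP errors, and only within the attempt budget."""
    return status_code in RETRYABLE_STATUS and attempt < max_attempts
```

With a factor of 2, the waits grow geometrically (1s, 2s, 4s), so a brief 429 burst is absorbed without hammering the API.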
On top of the model, the notebook defines several specialised agents with focused descriptions such as “you critique resumes” or “you compare skills to target roles.” These agents are wrapped as AgentTool instances so they can be called by the planner agent like ordinary tools. The planner is another LlmAgent whose job is to read user intent, select the appropriate specialist tools, and collate their outputs into a final reply.
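The wiring can be sketched with ADK's LlmAgent and AgentTool classes. This is a configuration sketch, not the notebook's code: the agent names and instruction strings below are illustrative, and the real planner carries fuller instructions and more specialists.

```python
# Configuration sketch of the planner/specialist wiring, using google-adk.
# Agent names and instructions are illustrative, not the notebook's exact text.
from google.adk.agents import LlmAgent
from google.adk.tools.agent_tool import AgentTool

resume_critic = LlmAgent(
    model="gemini-2.5-pro",
    name="resume_critic",
    instruction="You critique resumes and suggest concrete improvements.",
)

skill_assessor = LlmAgent(
    model="gemini-2.5-pro",
    name="skill_assessor",
    instruction="You compare a user's skills to target roles and name the gaps.",
)

# The planner sees each specialist as an ordinary callable tool.
planner = LlmAgent(
    model="gemini-2.5-pro",
    name="career_planner",
    instruction=(
        "Read the user's intent, call the specialist tools that apply, "
        "and combine their outputs into one coherent reply."
    ),
    tools=[AgentTool(agent=resume_critic), AgentTool(agent=skill_assessor)],
)
```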
State and orchestration are handled by ADK’s Runner and DatabaseSessionService. Each interaction is tagged with an application name, a user ID, and a session ID. The session service points at a SQLite database such as sqlite:///career_agent_memory.db, which stores user messages, agent decisions, tool calls, and responses. When a user returns with the same session ID, the system reloads that history and resumes the conversation.
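DatabaseSessionService manages its own schema, so the sketch below is only a stripped-down, raw-sqlite3 illustration of the idea: events keyed by (app_name, user_id, session_id) so a returning user's history can be reloaded.

```python
# Toy illustration of session persistence keyed by (app, user, session).
# ADK's DatabaseSessionService owns its real schema; this only mimics the idea.
import sqlite3

def open_store(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS events (
               app_name TEXT, user_id TEXT, session_id TEXT,
               role TEXT, content TEXT)"""
    )
    return conn

def append_event(conn, app_name, user_id, session_id, role, content):
    conn.execute("INSERT INTO events VALUES (?, ?, ?, ?, ?)",
                 (app_name, user_id, session_id, role, content))

def load_history(conn, app_name, user_id, session_id):
    return conn.execute(
        "SELECT role, content FROM events "
        "WHERE app_name=? AND user_id=? AND session_id=?",
        (app_name, user_id, session_id)).fetchall()
```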
To keep context efficient over long sessions, the Runner can be wrapped in an App with an EventsCompactionConfig. This summarises older events into a single compaction event while keeping a configurable number of recent turns verbatim, so the model receives a condensed history that preserves the coaching narrative without exceeding context limits.
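A toy version of that compaction step: everything older than the last few turns collapses into one synthetic event. ADK's EventsCompactionConfig produces model-written summaries; here the "summary" is just a count, to keep the sketch self-contained.

```python
# Toy event compaction: summarise everything older than the last
# `keep_recent` turns into a single synthetic event. The real mechanism
# writes an LLM summary; this placeholder just records how much was folded.
def compact(events: list[str], keep_recent: int = 4) -> list[str]:
    if len(events) <= keep_recent:
        return list(events)
    older, recent = events[:-keep_recent], events[-keep_recent:]
    summary = f"[compacted: {len(older)} earlier events]"
    return [summary] + recent
```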
Finally, a small web UI is exposed through ADK’s web server on port 8000. Because the environment is Kaggle, a helper inspects list_running_servers(), extracts the base URL and token, and constructs a Kaggle-proxy URL that points at the ADK server. The notebook renders this as an HTML block with a button labelled “Open ADK Web UI,” so the agent system can be explored from a browser.
Features / Components
The domain logic is implemented as a few typed Python tools. A parse_resume function accepts free-form resume text and returns a structured object with fields such as skills, years of experience, and education entries. An analyze_skill_gap function compares that structure with target role definitions and computes missing or underdeveloped skills. A match_jobs function takes a list of skills and returns suggested job titles with approximate fit scores. Each function is registered as an ADK tool with TypedDict schemas describing its arguments and return types, so tool calls are validated and the LLM receives structured data rather than unstructured text.
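Minimal sketches of the three tools convey the shape of the contract, even though the notebook's actual field names, parsing, and scoring heuristic may differ:

```python
# Illustrative sketches of the three typed tools. Field names and the
# fit-score heuristic are assumptions, not the notebook's exact schemas.
from typing import TypedDict

class ResumeProfile(TypedDict):
    skills: list[str]
    years_experience: int

class SkillGap(TypedDict):
    missing: list[str]

def parse_resume(text: str) -> ResumeProfile:
    """Naive parser: treat comma-separated tokens after 'Skills:' as skills."""
    skills: list[str] = []
    for line in text.splitlines():
        if line.lower().startswith("skills:"):
            skills = [s.strip() for s in line.split(":", 1)[1].split(",") if s.strip()]
    return {"skills": skills, "years_experience": 0}

def analyze_skill_gap(profile: ResumeProfile, role_skills: list[str]) -> SkillGap:
    have = {s.lower() for s in profile["skills"]}
    return {"missing": [s for s in role_skills if s.lower() not in have]}

def match_jobs(skills: list[str], roles: dict[str, list[str]]) -> list[tuple[str, float]]:
    """Score each role by the fraction of its required skills the user already has."""
    have = {s.lower() for s in skills}
    scored = [(title, sum(r.lower() in have for r in req) / len(req))
              for title, req in roles.items() if req]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Because each function returns a typed structure rather than prose, the planner can hand the results to another agent without re-parsing free text.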
On top of these tools, the notebook defines specialist agents for resume critique, skill assessment, mock interviewing, and progress tracking. Each agent is an LlmAgent with focused instructions that tell it when and how to use the tools. The planner orchestrates them; for a prompt like “Here is my resume; help me become a data analyst and give me an interview question,” it can call the resume agent, then the skill assessment agent, then the mock interview agent, and finally compose their outputs into one answer.
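The dispatch decision itself can be sketched without any framework. In the real system the LLM planner chooses which AgentTools to call; the keyword rules and canned outputs below are stand-ins for that judgement.

```python
# Framework-free sketch of the planner's dispatch: pick specialists that
# match the intent, run each, and join the outputs. The keyword routing
# and canned replies are illustrative; the real planner lets the LLM decide.
SPECIALISTS = {
    "resume": lambda text: "resume_critic: tightened bullet points",
    "interview": lambda text: "mock_interviewer: asked one SQL question",
    "skill": lambda text: "skill_assessor: flagged Tableau as a gap",
}

def plan_and_run(message: str) -> str:
    lower = message.lower()
    outputs = [fn(message) for key, fn in SPECIALISTS.items() if key in lower]
    return "\n".join(outputs) if outputs else "planner: answered directly"
```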

Tools & Technologies
The implementation uses Python in a Kaggle or Jupyter environment. Gemini provides the language model, accessed via ADK’s LlmAgent interface. Orchestration relies on ADK components such as Runner, AgentTool for turning agents into callable tools, App for higher-level configuration, and EventsCompactionConfig for history summarisation.
Persistent memory is handled by DatabaseSessionService pointing at a SQLite database. The ADK web UI runs as an HTTP service and is exposed through Kaggle’s proxy. Observability comes from two plugins: a custom CountInvocationPlugin that increments counters in before_agent_callback and before_model_callback, and ADK’s LoggingPlugin, which records structured traces of each run.
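The counting idea behind CountInvocationPlugin fits in a few lines. ADK's real plugin subclasses BasePlugin with async callbacks; this framework-agnostic sketch keeps only the counter logic.

```python
# Framework-agnostic sketch of an invocation counter: bump a counter
# whenever the agent or model callback fires. The real ADK plugin
# subclasses BasePlugin and uses async callbacks.
class InvocationCounter:
    def __init__(self):
        self.agent_runs = 0
        self.model_calls = 0

    def before_agent_callback(self, **kwargs):
        self.agent_runs += 1

    def before_model_callback(self, **kwargs):
        self.model_calls += 1
```

Pairing a cheap counter like this with LoggingPlugin's structured traces gives both a quick health signal (how many model calls per user turn?) and the detail needed to debug a specific run.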

Workflow / Process Description
A typical user flow begins when the ADK web server is started from the notebook and the helper renders the proxy-URL button. The user clicks the button to open the planner interface and starts a conversation. The Runner creates or loads a session for the given application name, user ID, and session ID.
When the user submits a message, the planner agent receives the text along with any existing session history. It interprets the intent, chooses which AgentTool instances to call, and issues tool calls via ADK. Each specialist agent may then call Python tools like parse_resume or match_jobs, receive structured outputs, and turn them into natural-language explanations. The planner aggregates the specialist responses and sends the final message back through the web UI.
Each step—model call, tool invocation, agent decision—is recorded in the SQLite database. If the conversation becomes long, the App’s compaction configuration summarises earlier events. When the user comes back another day with the same session ID, the system restores the compacted history so the coach can remember their previous goals and recommendations.

Results / Outcomes
The result is a working career coach system that shows how to turn a collection of ADK and Gemini snippets into a coherent, stateful application. Instead of isolated prompts, there is a transparent flow: resume text is parsed into structure, skills are compared to role definitions, roles and gaps are proposed, and interactive coaching is layered on top. For students, the coach can refer back to earlier details, propose a sequence of next steps, and provide mock interviews that reflect the roles under discussion. For developers and reviewers, the use of tools, agents, a planner, and plugins makes behaviour observable and debuggable.

Reflections & Takeaways

Challenges, Limitations, and Future Improvements:
The current version is deliberately scoped and simplified. Role profiles and fit scores are based on predefined data rather than live labour-market signals, and the UI is a basic planner interface rather than a polished student dashboard. All agents share a single Gemini configuration, and there is no systematic evaluation against human advisors or other tools.
However, the architecture is designed to grow. Future work could involve connecting the system to job and skills APIs, adding tools for portfolio planning and application tracking, and migrating storage and serving to production-grade infrastructure. Memory strategies could be tuned so that compaction emphasises long-term goals and key decisions rather than treating all history uniformly.
