Hey everyone,
I recently completed the Google × Kaggle AI Agents Intensive (5-Day course) — and this experience genuinely changed how I look at AI systems.
I joined because I kept seeing the term “AI Agents” everywhere, but honestly… I didn’t fully get it.
Before:
“Agents are basically chatbots with extra steps.”
Now:
Agents are systems that can think, plan, take actions, and collaborate with other agents to solve a task.
Here’s what I learned and how I built CyberGuard — a cybersecurity-focused AI agent system.
Day 1 — My Mental Shift
This was the biggest realization:
- The LLM is the brain
- Tools and services are the hands
- The agent decides how to solve a task using those tools
It’s like working with a teammate who can:
- Search
- Run models
- Make decisions
- Remember what happened earlier
Once that clicked, I knew what I wanted to build.
The Problem I Wanted to Solve
Today, anyone can be tricked by:
- Fake password reset emails
- Apps that request dangerous permissions
- Links pretending to be legitimate websites
Real attackers combine these:
An email scam leads to installing a malicious app → total account access.
Traditional security tools usually detect only one of these at a time.
That gap scared me enough to build something meaningful.
Introducing CyberGuard
CyberGuard is a multi-agent system built from three components:
| Component | Role | Detection Scope | Logic / Mechanism |
|---|---|---|---|
| Email Agent | Phishing Analyzer | Content & Links: Scans email bodies, sender metadata, and embedded URLs. | Analyzes semantic patterns for social engineering cues and cross-references URLs with reputation blacklists to identify phishing vectors. |
| Android Agent | Malware Scanner | Permissions & Runtime: Inspects application behavior and access rights. | Evaluates AndroidManifest.xml for over-privileged requests (e.g., SMS/Contact access) that indicate potential malware or spyware behavior. |
| Root Agent | Risk Orchestrator | Decision Engine: Aggregates and correlates signals. | Synthesizes alerts from worker agents to determine a Combined Risk severity level, reducing false positives through cross-validation. |
It returns a clear verdict:
- SAFE
- HIGH RISK
- CRITICAL (extreme danger)
With an explanation someone can actually understand.
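Under the hood, the combined-risk decision is essentially a small mapping from the two worker-agent scores to one label. Here is a minimal sketch in Python; the thresholds and flag logic are my own illustration, not the exact project code:

```python
def combined_verdict(phishing_risk: float, malware_risk: float) -> str:
    """Map the two worker-agent scores (0..1) to a single severity label."""
    phishing_flag = phishing_risk >= 0.5
    malware_flag = malware_risk >= 0.5
    if phishing_flag and malware_flag:
        return "CRITICAL"   # correlated attack across email and app
    if phishing_flag or malware_flag:
        return "HIGH RISK"  # one strong signal on its own
    return "SAFE"
```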
How I Built It (Simplified)
Phishing Detection Model
- TF-IDF analysis + URL heuristic feature
- Logistic Regression
- Test performance: F1 Score ≈ 0.96
- Served via FastAPI on port 8001
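To make that concrete, here is a minimal sketch of how a model like this could be trained and served. It is not the exact project code: the dataset columns (`emails.csv` with `text` and `label`), the URL heuristic, and the `/analyze` endpoint path are assumptions.

```python
# Hypothetical sketch of the phishing service; not the exact project code.
import re

import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split


def url_features(texts):
    """URL heuristic: number of links and whether any link uses a raw IP address."""
    rows = []
    for text in texts:
        urls = re.findall(r"https?://\S+", text)
        has_ip_url = int(any(re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", u) for u in urls))
        rows.append([len(urls), has_ip_url])
    return csr_matrix(rows)


# --- Training (assumes emails.csv with 'text' and binary 'label' columns) ---
df = pd.read_csv("emails.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42
)
vectorizer = TfidfVectorizer(max_features=5000, stop_words="english")
X_tr = hstack([vectorizer.fit_transform(X_train), url_features(X_train)])
X_te = hstack([vectorizer.transform(X_test), url_features(X_test)])
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_train)
print("Test F1:", f1_score(y_test, clf.predict(X_te)))

# --- Serving ---
app = FastAPI()


class EmailIn(BaseModel):
    text: str


@app.post("/analyze")
def analyze(email: EmailIn):
    # Combine TF-IDF text features with the URL heuristic, then score the email.
    features = hstack([vectorizer.transform([email.text]), url_features([email.text])])
    prob = float(clf.predict_proba(features)[0, 1])
    return {"phishing_probability": prob, "is_phishing": prob > 0.5}

# Run with: uvicorn phishing_service:app --port 8001
```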
Malware Behavior Model
- Random Forest Classifier on Android permission sets
- Test performance: F1 Score ≈ 0.95
- Served via FastAPI on port 8002
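The malware service follows the same pattern. Again, a hedged sketch: the dataset format (one 0/1 column per permission plus a label), the permission names, and the `/scan` endpoint are assumptions on my part.

```python
# Hypothetical sketch of the malware-behavior service.
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.ensemble import RandomForestClassifier

# Assumed dataset: one 0/1 column per Android permission plus a 'label' column.
df = pd.read_csv("android_permissions.csv")
PERMISSIONS = [c for c in df.columns if c != "label"]

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(df[PERMISSIONS].values, df["label"].values)

app = FastAPI()


class AppIn(BaseModel):
    permissions: list[str]  # e.g. ["READ_SMS", "READ_CONTACTS"]


@app.post("/scan")
def scan(app_info: AppIn):
    # One-hot encode the requested permissions against the training columns.
    row = [[int(p in app_info.permissions) for p in PERMISSIONS]]
    prob = float(clf.predict_proba(row)[0, 1])
    return {"malware_probability": prob, "is_malicious": prob > 0.5}

# Run with: uvicorn malware_service:app --port 8002
```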
Both connect to a Root Agent running Gemini through the Agent Development Kit (ADK).
The Root Agent:
- Calls models only when necessary
- Correlates results from both services
- Generates a Markdown “Security Audit” report with recommended actions
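Roughly, the ADK wiring looks like the sketch below. The tool names, localhost URLs, model string, and instruction text are simplified stand-ins rather than the project's actual configuration.

```python
# Hypothetical sketch of the Root Agent wiring with the Agent Development Kit.
import requests
from google.adk.agents import Agent


def analyze_email(email_text: str) -> dict:
    """Send the email body to the phishing service and return its verdict."""
    resp = requests.post("http://localhost:8001/analyze", json={"text": email_text})
    return resp.json()


def scan_app(permissions: list[str]) -> dict:
    """Send an app's requested permissions to the malware service."""
    resp = requests.post("http://localhost:8002/scan", json={"permissions": permissions})
    return resp.json()


root_agent = Agent(
    name="cyberguard_root",
    model="gemini-2.0-flash",
    description="Correlates email and app analysis into one security verdict.",
    instruction=(
        "Call analyze_email and scan_app only when the user's request needs them. "
        "Combine their results into a severity level (SAFE, HIGH RISK, CRITICAL) "
        "and write a short Markdown security audit with recommended actions."
    ),
    tools=[analyze_email, scan_app],
)
```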
Example (shortened):
- Severity: HIGH RISK
- Reason: Suspicious link in email + risky app permissions
- Recommendation: Delete the email and uninstall the app
Testing the Full System
I tested four scenarios:
| Scenario | Expected | Result |
|---|---|---|
| Normal email + safe app | Safe | Safe |
| Phishing email only | High | High |
| Malware-like app only | High | High |
| Combination attack | Critical | Critical |
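Each scenario boiled down to calling the two services and checking the combined outcome. A simplified check for the combination attack might look like this (the sample inputs are made up, and it reuses the `combined_verdict` helper sketched earlier; in the real run the Root Agent produced the final report):

```python
# Simplified end-to-end check for the "combination attack" scenario.
import requests

phishing_email = "Your account is locked. Verify now: http://192.168.1.7/reset"
risky_permissions = ["READ_SMS", "READ_CONTACTS", "SYSTEM_ALERT_WINDOW"]

email_result = requests.post(
    "http://localhost:8001/analyze", json={"text": phishing_email}
).json()
app_result = requests.post(
    "http://localhost:8002/scan", json={"permissions": risky_permissions}
).json()

verdict = combined_verdict(
    email_result["phishing_probability"], app_result["malware_probability"]
)
assert verdict == "CRITICAL", verdict
print("Combination attack scenario:", verdict)
```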
Everything worked end-to-end.
It finally felt like a product, not just a notebook.
Challenges I Faced
- Making different microservices communicate reliably
- Debugging agents when their reasoning drifted
- Matching permissions properly across datasets
- Styling the UI, which took longer than expected
But every time a correct “CRITICAL” verdict appeared, it reminded me why I built this.
What’s Next
I’d like to:
- Improve permission datasets for real Android apps
- Integrate live threat-intelligence sources
- Add a browser extension to catch phishing instantly
- Strengthen prompt security (agent safety is a real concern)
The system works, but there’s so much room to grow.
Final Thoughts
I started this course thinking that AI agents were too advanced or complex for me.
By Day 5, I had created a working cybersecurity assistant.
This wasn’t just about learning — it made me feel capable of building real, impactful AI systems.
If you’re curious about AI beyond chatbots:
Try building an agent.
It changes how you think.
Thank you for reading.
Happy to share the project or answer questions anytime.