The Motivation: Solving Alert Fatigue 🛡️
Security teams are drowning in logs. I built SOC-AI solo during the AI Agents Assemble Hackathon (hosted by @wemakedevs) to prove that AI can handle the "boring" triage while humans keep control.
The Architecture
- Triage: Groq (Llama-3.3) for 500ms log analysis.
- Orchestration: Kestra for semi-autonomous remediation.
- Frontend: Next.js 15 (Live on Vercel!).
Technical Deep Dive – Reliable AI & Orchestration
One of the biggest challenges in AI agents is hallucination. In a security context, a "hallucinated" IP address or action could be a disaster. I solved this using two core patterns:
1. Forcing Structured Output with Zod & Groq
I didn't just ask the AI to "analyze the log." I defined a strict contract with Zod, so every response from the high-speed Llama-3.3 model on Groq gets validated into a precise JSON object my backend can trust.
import { z } from "zod";

// Define the Triage Schema the model's output must satisfy
const SecurityTriageSchema = z.object({
  severity: z.enum(["low", "medium", "high", "critical"]),
  threat_type: z.string().describe("Categorization like Brute Force, SQLi, etc."),
  action_suggested: z.enum(["block_ip", "disable_user", "monitor"]),
  reasoning: z.string()
});
import Groq from "groq-sdk";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

// Forcing Groq to adhere to the schema
const chatCompletion = await groq.chat.completions.create({
  model: "llama-3.3-70b-versatile",
  response_format: { type: "json_object" }, // Crucial for reliable JSON
  messages: [
    { role: "system", content: "You are a SOC Triage Agent. Output ONLY JSON." },
    { role: "user", content: `Analyze this log: ${rawLogData}` }
  ],
});
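The response_format flag guarantees syntactically valid JSON, but not the exact shape defined above; that's the Zod schema's job. Here's a minimal sketch of how the response can be validated before anything downstream touches it (the error handling is illustrative, not copied from the repo):

// Parse the model's reply, then validate it against the contract
const raw = JSON.parse(chatCompletion.choices[0].message.content ?? "{}");
const triage = SecurityTriageSchema.safeParse(raw);

if (!triage.success) {
  // A hallucinated or malformed payload never reaches the dashboard
  throw new Error(`Triage output failed validation: ${triage.error.message}`);
}

// triage.data is now fully typed: severity, threat_type, action_suggested, reasoning
console.log(triage.data.action_suggested); // e.g. "block_ip"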
2. Semi-Autonomous Remediation via Kestra
Trust is everything in security. Instead of letting the AI run wild, I built a Human-in-the-Loop system.
When the AI suggests an action (like block_ip), it appears on my Next.js Dashboard. Only after I click "Approve" does the backend trigger a Kestra Workflow.
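For illustration, here's roughly what that approval step can look like as a Next.js route handler that calls Kestra's execution-creation API. The route path, namespace, flow ID, and input names are placeholders for this sketch, and the actual repo may wire this differently:

// app/api/remediate/route.ts -- hypothetical route name
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  // The approved action forwarded from the dashboard
  const { action, target_ip } = await req.json();

  // Kestra creates executions via POST /api/v1/executions/{namespace}/{flowId};
  // the "soc" namespace and "remediate_threat" flow ID are placeholders
  const form = new FormData();
  form.append("action", action);
  form.append("target_ip", target_ip);

  const res = await fetch(
    `${process.env.KESTRA_URL}/api/v1/executions/soc/remediate_threat`,
    { method: "POST", body: form }
  );

  if (!res.ok) {
    return NextResponse.json({ error: "Kestra trigger failed" }, { status: 502 });
  }

  // Kestra returns the execution metadata (id, state, etc.) for the audit trail
  return NextResponse.json(await res.json());
}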
Why Kestra?
- Retries: If the Firewall API is down, Kestra handles the retry logic.
- Audit Trail: Every action taken is logged visually in the Kestra UI.
- Separation of Concerns: My Next.js app handles the UI, while Kestra handles the heavy infrastructure automation.
The Result
By combining Groq's speed with Kestra's reliability, SOC-AI can triage a log and present a remediation plan to an analyst in less than a second.
This project was a solo build, and it taught me that the future of AI isn't just about the "chat" - it's about orchestration.
Links & Demo
- GitHub: https://github.com/kaushik0010/soc-ai
- Video Walkthrough: https://youtu.be/LbuHXiPznJE
What are your thoughts on semi-autonomous security? Would you trust an AI to suggest your firewall rules? Let's talk in the comments!