Introduction
In this project, I built a production-style AI application that provides relationship advice using a multi-agent system. Instead of relying on a single LLM response, the system simulates three distinct “advisors” with different perspectives, then aggregates their opinions into a final recommendation.
The goal was not just to build an AI app, but to explore a full modern development workflow including:
- Agent orchestration
- Persistent memory
- Authentication
- CI/CD pipelines
- Security considerations
- AI-assisted development using Claude Code
Live app: https://cs-7180-project3.vercel.app/
Repo: https://github.com/NingxiY/CS7180_project3.git
System Overview
The application allows users to describe a personal situation. The system then generates advice from three different agents:
- Cosmic Reader — symbolic / abstract interpretation
- Behavior Analyst — pattern-based reasoning
- Experienced Advisor — practical human advice
These outputs are then passed to a judge agent, which synthesizes them into a final answer.
This design improves reliability by:
- Reducing single-model bias
- Encouraging diverse reasoning
- Producing more structured outputs
Architecture
The system is implemented using a modern full-stack architecture:
- Frontend & API: Next.js (App Router)
- Authentication: Clerk
- Database: Neon (PostgreSQL)
- Deployment: Vercel
- CI/CD: GitHub Actions
Key Components
1. Agent Orchestrator
A central orchestrator coordinates:
- Parallel agent execution
- Response normalization
- Final aggregation via judge agent
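The coordination described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the `callAgent` stub stands in for a real LLM call, and the agent names are taken from the System Overview.

```typescript
// Hypothetical orchestrator sketch: run all advisor agents in parallel,
// then pass their combined outputs to a judge agent for aggregation.

type AgentResult = { agent: string; output: string };

// Stand-in for a real LLM call; the real app would hit an LLM API here.
async function callAgent(agent: string, situation: string): Promise<AgentResult> {
  return { agent, output: `advice from ${agent} about: ${situation}` };
}

async function orchestrate(situation: string): Promise<string> {
  const agents = ["Cosmic Reader", "Behavior Analyst", "Experienced Advisor"];

  // Parallel agent execution
  const results = await Promise.all(agents.map((a) => callAgent(a, situation)));

  // Final aggregation: the judge sees all three opinions at once
  const judgeInput = results.map((r) => `${r.agent}: ${r.output}`).join("\n");
  const judged = await callAgent("Judge", judgeInput);
  return judged.output;
}
```

Running the agents with `Promise.all` keeps total latency close to the slowest single agent rather than the sum of all three.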
Each agent returns structured output:
ADVICE:
REASONING:
CONFIDENCE:
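A labeled format like this is easy to parse deterministically. Below is a minimal sketch of such a parser; it assumes each label starts a line, which the prompts enforce, and it is illustrative rather than the project's actual parsing code.

```typescript
// Parse the ADVICE / REASONING / CONFIDENCE labels into a typed object.
type ParsedAdvice = { advice: string; reasoning: string; confidence: string };

function parseAgentOutput(raw: string): ParsedAdvice {
  const grab = (label: string): string => {
    // Capture text after "LABEL:" up to the next known label or end of input
    const re = new RegExp(
      `${label}:\\s*([\\s\\S]*?)(?=\\n(?:ADVICE|REASONING|CONFIDENCE):|$)`
    );
    const m = raw.match(re);
    return m ? m[1].trim() : "";
  };
  return {
    advice: grab("ADVICE"),
    reasoning: grab("REASONING"),
    confidence: grab("CONFIDENCE"),
  };
}
```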
2. Persistent Memory
User interactions are stored in a database and reused in future sessions.
For each user:
- Past sessions are saved
- Agent-specific memory is retrieved
- Context is injected into future prompts
This allows the system to:
- Maintain continuity
- Improve personalization
- Avoid repeated generic responses
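Context injection can be as simple as prepending retrieved summaries to the new prompt. The sketch below assumes a hypothetical `MemoryRecord` shape; in the real app these records live in the Neon Postgres database.

```typescript
// Inject agent-specific memory into a new prompt.
type MemoryRecord = { agent: string; summary: string };

function buildPrompt(
  memories: MemoryRecord[],
  agent: string,
  situation: string
): string {
  // Only inject memory that belongs to this agent (agent-specific memory)
  const relevant = memories.filter((m) => m.agent === agent);
  const context = relevant.length
    ? `Previous sessions:\n${relevant.map((m) => `- ${m.summary}`).join("\n")}\n\n`
    : "";
  return `${context}Current situation: ${situation}`;
}
```

Filtering by agent keeps each advisor's perspective consistent across sessions without leaking one agent's framing into another's prompt.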
3. Authentication
Authentication is handled using Clerk.
Key design decision:
- All API routes validate identity using auth()
- User identity is derived from the session, not the request payload
This avoids:
- Identity spoofing
- Trusting client-side input
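The principle can be distilled into a small function. `resolveUserId` is a hypothetical helper, not the app's actual code; in the real routes the session ID would come from Clerk's `auth()`.

```typescript
// Derive identity from the authenticated session; ignore any ID the
// client sends in the request body.
function resolveUserId(sessionUserId: string | null, payloadUserId?: string): string {
  if (!sessionUserId) {
    // No valid session: reject rather than fall back to client input
    throw new Error("Unauthorized");
  }
  // Deliberately ignore payloadUserId: trusting it would allow one user
  // to read or write another user's data simply by editing the request.
  return sessionUserId;
}
```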
AI Development Workflow (Claude Code)
A major part of this project was using Claude Code as a development partner.
Skills
Custom skills were used to:
- Generate structured prompts
- Improve agent consistency
- Maintain response schemas
Hooks
Automated hooks were configured to:
- Detect code changes
- Suggest testing and build checks
- Trigger documentation reminders
AI PR Review
Each pull request included:
- AI-generated summaries
- Structured change descriptions
- Testing and risk analysis
This significantly improved:
- Development speed
- Code clarity
- Documentation quality
CI/CD Pipeline
A GitHub Actions pipeline was implemented with the following stages:
- Install dependencies
- Unit and integration tests
- Coverage reporting
- Build verification
- Playwright E2E tests
- Security audit
This ensures that:
- Every commit is validated
- The app builds correctly
- Core flows remain functional
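A workflow implementing these stages might look roughly like the sketch below. The script names (`test`, `build`, `test:e2e`) and secret names are assumptions, not the project's actual configuration.

```yaml
name: CI
on: [push, pull_request]

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci                       # install dependencies
      - run: npm test -- --coverage       # unit/integration tests + coverage
      - run: npm run build                # build verification
        env:
          CLERK_SECRET_KEY: ${{ secrets.CLERK_SECRET_KEY }}
      - run: npx playwright install --with-deps && npm run test:e2e
      - run: npm audit --audit-level=high # security audit
```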
Security Considerations
A lightweight security review was conducted using a custom agent.
Key Findings
- Authentication is enforced in API routes
- Database queries are parameterized (no SQL injection risk)
- No server secrets are exposed to the frontend
However:
- No rate limiting on API endpoints
- User ID was originally trusted from the request payload
- npm audit does not fail CI
Improvements
- Derive user identity from Clerk session
- Add per-user rate limiting
- Enforce middleware protection for API routes
- Strengthen CI security gates
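The proposed per-user rate limiting could be sketched as a fixed-window counter keyed by user ID. The limits are illustrative, and in production this state would live in a shared store (e.g. Redis or the database) rather than process memory.

```typescript
// Fixed-window rate limiter keyed by user ID (in-memory sketch).
const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 10;  // per user per window (illustrative limit)

const windows = new Map<string, { start: number; count: number }>();

function allowRequest(userId: string, now: number = Date.now()): boolean {
  const w = windows.get(userId);
  if (!w || now - w.start >= WINDOW_MS) {
    // Start a fresh window for this user
    windows.set(userId, { start: now, count: 1 });
    return true;
  }
  w.count += 1;
  return w.count <= MAX_REQUESTS;
}
```

An API route would call `allowRequest(userId)` after resolving the session and return HTTP 429 when it comes back false.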
Challenges
1. CI/CD Issues
- Lockfile mismatch caused install failures
- Environment variables missing in CI
- Next.js build failed due to missing Clerk keys
These were resolved by:
- Synchronizing package-lock.json
- Adding GitHub secrets
- Adjusting dependency install strategy
2. Async Execution Bugs
A recurring issue was:
RuntimeError: event loop is closed
Fix:
- Ensured safe async invocation
- Avoided reusing closed loops
- Added safeguards for repeated agent calls
3. Multi-Agent Prompt Design
Ensuring consistent output required:
- Strict formatting instructions
- Explicit labels (ADVICE, REASONING, etc.)
- Controlled variability between agents
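The kind of formatting instruction this required might look like the sketch below. The wording is illustrative; each agent receives the same output contract with a different persona description.

```typescript
// Build a persona prompt with a strict, labeled output contract.
function buildAgentPrompt(persona: string, situation: string): string {
  return [
    `You are the ${persona}.`,
    `Situation: ${situation}`,
    "Respond using EXACTLY this format, with each label on its own line:",
    "ADVICE: <one-sentence recommendation>",
    "REASONING: <why you recommend it>",
    "CONFIDENCE: <a number between 0 and 1>",
  ].join("\n");
}
```

Pinning the labels in the prompt is what makes the downstream parsing and judging steps reliable across all three personas.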
Reflection
This project changed how I think about software development.
Instead of writing everything manually, I worked with AI as a collaborator:
- Generating code
- Debugging issues
- Structuring systems
- Writing documentation
However, AI is not a replacement for understanding:
- Most debugging still required reasoning
- System design decisions remained critical
- Misconfigurations (CI, env variables) required manual fixes
The most important takeaway:
AI accelerates development, but engineering judgment still matters.
Conclusion
This project demonstrates that combining:
- Multi-agent AI design
- Modern web frameworks
- CI/CD practices
- AI-assisted development
can produce a robust, production-style application.
It also highlights a new development paradigm:
Building software is no longer just coding — it is orchestrating systems, tools, and AI.
Demo
- Live app: https://cs-7180-project3.vercel.app/
- Repository: https://github.com/NingxiY/CS7180_project3.git
Final Thoughts
If I had more time, I would:
- Improve memory retrieval strategies
- Add rate limiting and abuse protection
- Experiment with more specialized agents
- Explore real-world deployment constraints
But even in its current form, this system demonstrates a strong foundation for AI-powered applications.