This is a submission for the Agentic Postgres Challenge with Tiger Data
What I Built
We've all been there: weeks of meetings, whiteboard sessions, and endless debates just to land on a high-level design for a new software project. It's a slow, expensive, and high-stakes process where a single early mistake can lead to months of wasted effort.
We were inspired by a simple question: What if we could use AI agents not just to write code, but to automate the crucial architectural decisions that come before the code?
That's why we built The Genesis Engine, an agent-first platform that transforms a single product idea into a complete, expert-vetted microservices architecture in minutes.
The engine is powered by a swarm of three specialized AI agents that collaborate to produce a comprehensive technical plan:
- 🏗️ The Architect Agent: Designs a complete, cloud-native microservices blueprint with detailed technical specifications.
- 🔧 The Systems Analyst: Acts as the "red team," stress-testing the blueprint for technical risks like performance bottlenecks, security vulnerabilities, and scalability limits.
- 💼 The BizOps Analyst: Analyzes the design from a business perspective, evaluating operational complexity, cloud hosting costs, and team structure requirements.
The result is not just a diagram, but a rich, multi-faceted, and fully searchable knowledge base that gives development teams a massive head start and helps them build the right thing, faster.
Demo
- GitHub Repository: https://github.com/CheravGoyalShorthillsAI/Software_Architecture_Generator
- Screenshots: https://shorthillstech-my.sharepoint.com/:f:/g/personal/amrit_shorthills_ai/EmrhxYPiaWZGuZzgfpH8Oq0B8mYIjRfnkzBwrHLN3ou6xQ?e=XOEJTb
- Live Demo Video:
Here’s a quick walkthrough of The Genesis Engine in action, using the same prompt from our demo video:
Sample Prompt:
I want to make an agentic application using Google ADK with the following capabilities:
An agent to read a document and summarize it.
An agent to take a user query as input and ask the 5 most relevant follow-up questions based on it.
Do a web search based on the user query, the context retrieved from the document, and the responses to the follow-up questions.
- Example Generated SVG Diagram For Above Prompt:
https://shorthillstech-my.sharepoint.com/:u:/g/personal/amrit_shorthills_ai/EWOFGEJbkQNLnuIWdmfCmf8BWLqk_RHuce6StvNrIUU61w?e=kl14xR
1. Production-Ready Architecture
In under two minutes, the agent swarm completes its work, and the full blueprint is revealed. This includes a fully-generated architecture diagram, a detailed breakdown of every microservice, and the critical findings from the analyst agents.
2. Interactive Discovery: Hybrid Search
The entire report is an interactive knowledge base. Users can interrogate the AI's findings using hybrid search, which combines keyword precision with semantic understanding to find the exact insights they need.
How I Used Agentic Postgres
Agentic Postgres was the perfect foundation for this project. The unique features provided by Tiger Data, like the Tiger CLI and fast forks, allowed us to build a true multi-agent system, not just a simple AI application.
Fast, Zero-Copy Forks & Tiger CLI: This was the star of the show. The moment the Architect Agent designs a blueprint, the backend programmatically calls the Tiger CLI to create an instantaneous, zero-copy fork of the database. The Systems Analyst and BizOps Analyst are then deployed into this isolated fork to perform their analysis in parallel without any interference. This is the core of our multi-agent collaboration. We even built in a resilient fallback to the primary database, ensuring the application completes its run even if a fork command fails.
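As a rough illustration of the fork-then-fallback flow described above, here is a minimal Python sketch. The exact `tiger` subcommand, flags, and stdout behavior are assumptions for illustration, as is the `DATABASE_URL` environment variable; the real backend may differ.

```python
import os
import subprocess

# Connection string for the primary database (env var name is an assumption).
PRIMARY_DSN = os.environ.get("DATABASE_URL", "postgresql://localhost/genesis")

def fork_database(service_id: str, fork_name: str, cli: str = "tiger") -> str:
    """Create a zero-copy fork via the Tiger CLI and return a DSN for it.

    Falls back to the primary database's DSN if the CLI call fails for any
    reason, so the analyst agents always have somewhere to run.
    """
    try:
        result = subprocess.run(
            [cli, "service", "fork", service_id, "--name", fork_name],
            capture_output=True, text=True, timeout=120, check=True,
        )
        # Assumption: the CLI prints the fork's connection string on stdout.
        return result.stdout.strip() or PRIMARY_DSN
    except (subprocess.SubprocessError, OSError):
        # Resilient fallback: run the analysts against the primary database.
        return PRIMARY_DSN
```

The broad `except` is deliberate: a missing binary, a timeout, and a non-zero exit code all degrade to the same safe behavior, which is what keeps the pipeline functional when a fork command fails.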
Multi-Agent Collaboration & Tiger MCP: Our project uses the database itself as the communication protocol, which is the core philosophy of Tiger MCP. The Architect agent doesn't send a direct message; it writes a state change (the new blueprint) to the database. This state change is the "message" that triggers the entire downstream workflow of forking and analysis. It’s a robust, event-driven approach to agent communication.
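The "state change as message" idea can be sketched with standard Postgres LISTEN/NOTIFY: an INSERT of a new blueprint fires a trigger, and any downstream worker listening on the channel wakes up. The table, channel, and function names below are illustrative, not the project's actual schema.

```python
def blueprint_event_ddl(table: str = "blueprints",
                        channel: str = "blueprint_ready") -> str:
    """Return DDL that turns a blueprint INSERT into a NOTIFY event.

    Any worker LISTENing on the channel (e.g. the fork-and-analyze
    workflow) is triggered the moment the Architect commits its state change.
    """
    return f"""
CREATE OR REPLACE FUNCTION notify_new_blueprint() RETURNS trigger AS $$
BEGIN
    -- The row id is the whole message: consumers read the rest from the table.
    PERFORM pg_notify('{channel}', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER {table}_ready_notify
AFTER INSERT ON {table}
FOR EACH ROW EXECUTE FUNCTION notify_new_blueprint();
""".strip()
```

Because the payload is just a row id, consumers re-read the authoritative state from the table, which keeps the database (not the notification) as the source of truth.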
Novel Hybrid Search (pg_text + pgvector): This feature transformed the project from a generator into an interactive discovery tool. For every finding the analyst agents produce, we used a Gemini embedding model to generate a vector and stored it in a pgvector column. The search endpoint then executes a powerful query that uses pg_text for precise keyword filtering and pgvector's <=> operator for semantic ranking. This allows users to ask complex questions and get the exact insights they need instantly.
Agent-First Application & Developer Productivity: The Genesis Engine is the definition of an agent-first application. The human provides a single sentence; the agents perform 100% of the complex cognitive work. The output (a complete technical plan, downloadable diagrams, and Mermaid code) is a massive developer productivity boost that automates weeks of manual work.
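A sketch of what that combined query can look like, using standard Postgres full-text functions for the keyword leg and pgvector's <=> distance operator for ranking. Table and column names (`findings`, `content`, `embedding`) and the psycopg-style `%(name)s` placeholders are assumptions for illustration.

```python
def hybrid_search_sql(table: str = "findings") -> str:
    """Build a parameterized hybrid-search query.

    The WHERE clause does precise keyword filtering with Postgres full-text
    search; the ORDER BY ranks the survivors by pgvector cosine distance
    (<=>) against the embedding of the user's question.
    """
    return f"""
SELECT id, agent, content,
       embedding <=> %(query_vec)s::vector AS distance
FROM {table}
WHERE to_tsvector('english', content) @@ plainto_tsquery('english', %(query)s)
ORDER BY distance
LIMIT %(top_k)s;
""".strip()
```

At request time the endpoint embeds the question with the same Gemini model used at write time, then executes the query with `{"query_vec": vec, "query": text, "top_k": 5}`.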
Fluid Storage: While not called directly, Fluid Storage is the high-performance engine that makes this all possible. The ability for multiple agents to simultaneously write large amounts of analysis into their own database forks without performance degradation is a direct demonstration of Fluid Storage's power.
Overall Experience
Building with Agentic Postgres was a fantastic experience that truly opened our eyes to the future of AI application development.
What worked well? The speed of zero-copy forks is a genuine game-changer. The ability to spin up an entire isolated database state in seconds is the key to enabling complex, parallel agent workflows. Implementing the hybrid search with pgvector and pg_text was also surprisingly straightforward and incredibly powerful.
What surprised you? The biggest surprise was the mental shift from seeing a database as passive storage to seeing it as the central nervous system for an agent swarm. Using database state changes as the medium for agent communication (the MCP pattern) felt like a much more robust and scalable approach than a traditional messaging queue.
Challenges or learnings? The main challenge was debugging the Tiger CLI integration from within the application, which is a natural part of working with powerful new tools. It taught us the importance of building resilient systems, leading to the fallback mechanism that ensures the application is always functional. The process also highlighted how critical effective prompt engineering is to guiding the agents to produce structured, reliable output.
Overall, this challenge was an incredible opportunity to build something truly experimental and push the boundaries of what's possible with AI and Postgres.
Team Submission:
- Amrit Kumar (@amritkumar06)
- Bharti Saini (@bhartisaini10)
- Cherav Goyal (@chgoyal01)
Thanks for the amazing challenge!