This is a submission for the Agentic Postgres Challenge with Tiger Data
What I Built
Overview
GitResume is an AI-powered platform that analyzes GitHub repositories to provide coders with professional insights & career guidance. This application uses 4 specialized AI agents running in parallel to evaluate code quality, technology choices, career readiness, & innovation across selected repositories.
Built on Tiger Cloud's Agentic Postgres, GitResume transforms what was previously a 1-2 min sequential analysis process into a sub-10 sec real-time experience. Coders can select their best 4-6 repositories & receive comprehensive feedback, including individual repository breakdowns, career trajectory detection, & actionable recommendations for professional growth.
The system integrates the GitHub API for repository data, implements multi-agent coordination through Tiger Cloud's database forks, & provides a clean web interface for GitHub portfolio analysis & career-planning assessment.
The Core Problem
Many of us coders struggle to effectively communicate our technical abilities. Traditional resumes list technologies & job titles, but they don't capture what really matters: how we actually code, solve problems, & build solutions.
For developers especially, our GitHub repositories are our real portfolio - they contain the evidence of our skills, growth, & technical decision-making. Yet translating that code into career opportunities remains a challenge.
The Solution
GitResume analyzes our selected repositories (typically 4-6 of our best projects) and provides:
• Multi-agent analysis across 4 key dimensions: code architecture, technology choices, career readiness, & innovation.
• Individual repository insights with specific feedback on each project.
• Career trajectory detection based on our actual coding patterns.
• Actionable recommendations for professional development.
Why It Matters
GitResume addresses a real need in the developer community: turning our actual work into career advancement opportunities. By analyzing the code we've already written, it provides insights that help us understand our strengths, identify growth areas, & position ourselves more effectively for the roles we want.
It demonstrates how modern database architecture can enable new categories of developer productivity tools that provide immediate, actionable value.
Demo
🔗 Live Application
Experience GitResume in action - analyze your GitHub repositories & receive professional insights in under 10 secs.
Check it out here: GitResumeAssessment
📂 GitHub Repository
Check out my source code here:
Divya4879/GitResume: Transform your GitHub into a professional resume with multi-agent AI analysis.
🐅 GitResume: TigerData-Powered GitHub Resume Analyzer
Transform your GitHub repositories into professional developer insights with AI-powered multi-agent analysis
GitResume leverages Tiger Cloud's Agentic Postgres architecture to provide comprehensive analysis of GitHub repositories through 4 specialized AI agents. The platform integrates Tiger CLI for service management and implements a multi-agent system that analyzes real repository code, providing actionable career guidance and professional development recommendations.
🎥 Live Demo
🔗 Check it out here: GitResumeAssessment
🚀 Key Features
🤖 Multi-Agent AI Analysis System
- 4 Specialized AI Agents working in parallel:
- Code Architect: Analyzes code structure, design patterns, and architectural quality.
- Tech Scout: Evaluates technology stack, framework usage, and modern practices.
- Career Advisor: Assesses professional readiness and portfolio quality.
- Innovation Detector: Identifies cutting-edge technologies and problem-solving approaches.
🐅 Advanced Tiger Cloud Integration
- pg_text Search: Semantic pattern detection across repositories.
- Agent Learning Evolution: AI agents improve accuracy over…
🎥 Project Demo
A complete walkthrough of GitResumeAssessment's features, from entering a GitHub username and selecting repositories to the final professional-level assessment & guidance.
📸 Project Snapshots
How I Used Agentic Postgres
1. Tiger CLI: Service Orchestration
Agentic Postgres Feature: Tiger CLI provides a command-line interface for managing Tiger Cloud services, enabling programmatic DB operations & service lifecycle management.
How I Used It: Automated the creation and management of Tiger services for multi-agent coordination. The CLI integration allows GitResume to dynamically provision database infrastructure for each analysis session.
Why It's Better: Eliminates manual database setup, enables on-demand scaling, & provides programmatic control over DB resources. This transforms GitResume from a static application to a dynamic, infrastructure-aware system.
// Automated Tiger service creation for multi-agent system
// Requires: import { execSync } from 'node:child_process';
async initializeMultiAgentSystem(username: string): Promise<void> {
  try {
    // Create a Tiger service programmatically via the Tiger CLI
    const serviceResult = execSync('./bin/tiger service create --name advanced-gitresume', {
      encoding: 'utf-8',
      cwd: process.cwd()
    });
    // Keep the service ID (last token of the CLI output) for later fork creation
    this.tigerServiceId = serviceResult.trim().split(' ').pop() || '';
    console.log(`🎯 Tiger Service Created: ${this.tigerServiceId}`);
  } catch (error) {
    console.log('⚠️ Tiger service creation failed, try again');
  }
}
2. Fast Zero-Copy Forks: Agent Isolation
Agentic Postgres Feature: Zero-copy database forks create instant, isolated database instances without data duplication, enabling parallel processing with complete data isolation.
How I Used It: Each of the 4 AI agents (code-architect, tech-scout, career-advisor, innovation-detector) gets its own dedicated database fork, allowing true parallel analysis without data conflicts.
Why It's Revolutionary: Traditional databases require expensive data replication for isolation. Tiger's zero-copy forks enable instant agent workspaces, reducing setup time from mins to secs & enabling real-time multi-agent collaboration.
// Create isolated workspaces for each AI agent
const agents = ['code-architect', 'tech-scout', 'career-advisor', 'innovation-detector'];
for (const agent of agents) {
try {
const forkResult = execSync(`./bin/tiger fork create --service ${this.tigerServiceId} --name ${agent}-workspace`, {
encoding: 'utf-8',
cwd: process.cwd()
});
const forkId = forkResult.trim().split(' ').pop() || '';
this.agentForks.set(agent, forkId);
// Initialize agent-specific schema
await this.initializeAgentWorkspace(agent, forkId);
} catch (error) {
console.log(`⚠️ Fork creation failed for ${agent}, using shared workspace`);
}
}
3. Agent Workspace Schema Design
Agentic Postgres Feature: Full PostgreSQL compatibility with agent-specific table structures & indexing for optimized AI workloads.
How I Used It: Each agent fork contains specialized tables for insights, learnings, & pattern detection, enabling agents to build knowledge over time & share insights across analysis sessions.
Why It's Powerful: Transforms AI agents from stateless functions to learning entities with persistent memory, enabling continuous improvement & cross-session knowledge retention.
// Agent-specific schema for learning and insights
private async initializeAgentWorkspace(agent: string, forkId: string): Promise<void> {
  const schema = `
    CREATE TABLE IF NOT EXISTS ${agent}_insights (
      id SERIAL PRIMARY KEY,
      repository TEXT,
      pattern TEXT,
      insight TEXT,
      confidence FLOAT,
      created_at TIMESTAMP DEFAULT NOW()
    );
    CREATE TABLE IF NOT EXISTS ${agent}_learnings (
      id SERIAL PRIMARY KEY,
      pattern_type TEXT,
      learning TEXT,
      success_rate FLOAT,
      updated_at TIMESTAMP DEFAULT NOW()
    );
  `;
  // The schema runs against the agent's fork identified by forkId;
  // runOnFork is a placeholder for the project's DB helper, shown so the snippet reads end to end.
  await this.runOnFork(forkId, schema);
  console.log(`📊 Initialized workspace for ${agent}`);
}
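To make the workspace concrete: once the tables exist, an agent can persist what it finds into its own fork. Below is a minimal sketch using the standard pg client; the recordInsight helper, the connection handling, & the AgentInsight shape (mirroring the table columns) are assumptions for illustration, not the exact code in the repo.
// Sketch: persisting one finding into an agent's insights table (illustrative only)
import { Pool } from 'pg';

interface AgentInsight {
  repository: string;
  pattern: string;
  insight: string;
  confidence: number;
}

async function recordInsight(pool: Pool, agent: string, row: AgentInsight): Promise<void> {
  // `pool` is assumed to be connected to this agent's fork
  await pool.query(
    `INSERT INTO ${agent}_insights (repository, pattern, insight, confidence)
     VALUES ($1, $2, $3, $4)`,
    [row.repository, row.pattern, row.insight, row.confidence]
  );
}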
4. Parallel Agent Coordination
Agentic Postgres Feature: Multi-database coordination enabling simultaneous operations across multiple isolated environments with eventual consistency.
How I Used It: Orchestrated 4 specialized agents to analyze repositories simultaneously, with each agent contributing unique insights that are aggregated into comprehensive career profiles.
Why It's Game-Changing: Reduced analysis time from 1-2 mins (sequential) to under 10 secs (parallel), while maintaining data integrity & enabling sophisticated cross-agent pattern detection.
// Parallel agent execution with real-time coordination
async analyzeWithAdvancedAgents(username: string, repositories: string[]) {
  // Initialize multi-agent system with Tiger forks
  await this.initializeMultiAgentSystem(username);

  // Run all agents in parallel across repositories
  const agentPromises = repositories.map(async (repo) => {
    return await this.runParallelAgentAnalysis(username, repo);
  });

  // Aggregate results from all agents
  const repoAnalyses = await Promise.all(agentPromises);
  const allInsights = repoAnalyses.flat(); // flatten per-repo insights into one list

  // Cross-repository pattern detection
  const crossRepoPatterns = await this.detectCrossRepoPatterns(allInsights);

  return {
    insights: allInsights,
    careerProfile: await this.generateCareerProfile(allInsights, crossRepoPatterns),
    crossRepoPatterns,
    learningEvolution: await this.updateAgentLearnings(allInsights)
  };
}
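The orchestrator above delegates per-repository work to runParallelAgentAnalysis, which isn't shown in full. Here's a minimal sketch of that step, assuming a hypothetical runAgent helper that runs one agent against the repository data inside that agent's fork & returns its insights:
// Sketch of runParallelAgentAnalysis (the full version lives in the repo)
private async runParallelAgentAnalysis(username: string, repo: string): Promise<AgentInsight[]> {
  const repoData = await this.fetchRepositoryData(username, repo);
  const agents = ['code-architect', 'tech-scout', 'career-advisor', 'innovation-detector'];

  // Every agent analyzes the same repository data in parallel, each in its own fork
  const perAgentInsights = await Promise.all(
    agents.map((agent) => this.runAgent(agent, repoData)) // runAgent: hypothetical helper
  );

  return perAgentInsights.flat();
}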
5. pg_text Search: Semantic Pattern Detection
Agentic Postgres Feature: PostgreSQL's full-text search capabilities with to_tsvector and plainto_tsquery functions, enabling semantic analysis & pattern matching across large text datasets.
How I Used It: Implemented cross-repository semantic analysis to detect technology patterns, coding approaches, & architectural decisions across a developer's entire portfolio. The system searches for semantic relationships between repositories using natural language processing.
Why It's Revolutionary: Traditional keyword matching misses related phrasing. PostgreSQL's full-text search adds stemming & relevance ranking, so GitResume can match variants like "authentication" & "authenticate" and rank how strongly concepts such as auth, JWT, & OAuth show up in the agents' insight text, providing deeper insight into a developer's expertise patterns across projects.
// pg_text search implementation for semantic pattern detection
private async pgTextSearchPatterns(insights: AgentInsight[]): Promise<any[]> {
  const searchTerms = ['react', 'typescript', 'api', 'authentication', 'testing', 'deployment'];
  const patterns: any[] = [];

  for (const term of searchTerms) {
    // Real PostgreSQL full-text search query
    const query = `
      SELECT repository, pattern, insight,
             ts_rank(to_tsvector('english', insight), plainto_tsquery($1)) AS relevance
      FROM agent_insights
      WHERE to_tsvector('english', insight) @@ plainto_tsquery($1)
      ORDER BY relevance DESC
      LIMIT 10;
    `;

    // runQuery is a placeholder for the project's DB helper; it returns rows shaped
    // like { repository, pattern, insight, relevance }
    const semanticMatches = await this.runQuery(query, [term]);

    if (semanticMatches.length > 1) {
      patterns.push({
        pattern: `semantic-${term}`,
        searchMethod: 'pg_text_search',
        relevanceScore:
          semanticMatches.reduce((sum: number, row: any) => sum + row.relevance, 0) /
          semanticMatches.length
      });
    }
  }
  return patterns;
}
6. Fluid Storage: Dynamic Resource Scaling
Agentic Postgres Feature: Intelligent storage management that dynamically scales resources based on workload complexity, enabling efficient processing of varying data sizes without manual configuration.
How I Used It: Implemented adaptive repository analysis where large or complex repositories (10MB+ or high star count) automatically trigger distributed processing across multiple agent forks, while smaller repositories use optimized single-fork processing.
Why It's Game-Changing: Eliminates the "one-size-fits-all" limitation of traditional databases. GitResume automatically adapts its processing strategy based on repository complexity, ensuring optimal performance whether analyzing a simple script or a massive enterprise codebase.
// Fluid Storage: Dynamic scaling based on repository complexity
private async fetchRepositoryData(username: string, repo: string): Promise<any> {
  const token = process.env.GITHUB_TOKEN || ''; // GitHub token assumed to come from the environment

  // Assess repository complexity for intelligent scaling
  const repoComplexity = await this.assessRepositoryComplexity(username, repo, token);

  if (repoComplexity.isLarge) {
    console.log(`Using Fluid Storage for large repository: ${repo}`);
    return await this.fluidStorageFetch(username, repo, token);
  } else {
    console.log(`Using standard fetch for repository: ${repo}`);
    return await this.standardRepositoryFetch(username, repo, token);
  }
}

private async fluidStorageFetch(username: string, repo: string, token: string): Promise<any> {
  // Distributed fetching across multiple agent forks for large repositories
  const agents = Array.from(this.agentForks.keys());

  // fetchRawRepositoryData is a placeholder for the GitHub API calls (repo info, git tree, README)
  // that are omitted here for brevity
  const { repoInfo, tree, readme } = await this.fetchRawRepositoryData(username, repo, token);

  // Fluid Storage: Distribute file analysis across agent forks
  const importantFiles = (tree.tree || []).filter((file: any) =>
    file.type === 'blob' && this.isAnalysisWorthy(file)
  ).slice(0, 20); // Intelligent file limiting

  return {
    info: repoInfo,
    tree: tree.tree || [],
    readme,
    fluidStorage: {
      used: true,
      agentsUsed: agents.length,
      filesDistributed: importantFiles.length,
      distributionStrategy: 'agent-fork-based'
    }
  };
}
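The routing above depends on assessRepositoryComplexity, which isn't shown. A minimal sketch against GitHub's repository endpoint follows; the 10 MB & 500-star thresholds are illustrative assumptions, not the exact values used in GitResume.
// Sketch of the complexity check used above (thresholds are illustrative)
private async assessRepositoryComplexity(
  username: string,
  repo: string,
  token: string
): Promise<{ isLarge: boolean; sizeKB: number; stars: number }> {
  const res = await fetch(`https://api.github.com/repos/${username}/${repo}`, {
    headers: { Authorization: `Bearer ${token}`, Accept: 'application/vnd.github+json' }
  });
  const info = await res.json();

  const sizeKB = info.size ?? 0;           // GitHub reports repository size in KB
  const stars = info.stargazers_count ?? 0;

  // Treat the repo as "large" above ~10 MB or a high star count
  return { isLarge: sizeKB > 10 * 1024 || stars > 500, sizeKB, stars };
}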
Project Architecture
┌──────────────────────────────────────────────────────────────────────────────┐
│ USER WORKFLOW │
└──────────────────────────────────────────────────────────────────────────────┘
│
▼
┌───────────────────────────┐ ┌───────────────────────────┐ ┌───────────────────────────┐
│ Enter GitHub │──▶│ Select Top 3–6 │──▶│ Initiate Analysis │
│ Username │ │ Repositories │ │ Process │
└───────────────────────────┘ └───────────────────────────┘ └───────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────────────────┐
│ TIGER CLOUD LAYER │
└──────────────────────────────────────────────────────────────────────────────┘
│
▼
┌───────────────────────────┐ ┌───────────────────────────┐ ┌───────────────────────────┐
│ Tiger Service │──▶│ Database Forks │──▶│ Agent Workspaces │
│ Creation │ │ (4 Agent Instances) │ │ Initialization │
└───────────────────────────┘ └───────────────────────────┘ └───────────────────────────┘
│ │ │
▼ ▼ ▼
┌───────────────────────────┐ ┌───────────────────────────┐ ┌───────────────────────────┐
│ ./bin/tiger create │ │ code-architect │ │ tech-scout │
│ (base service) │ │ workspace │ │ workspace │
└───────────────────────────┘ └───────────────────────────┘ └───────────────────────────┘
│
▼
┌───────────────────────────┐ ┌───────────────────────────┐
│ career-advisor │ │ innovation-detector │
│ workspace │ │ workspace │
└───────────────────────────┘ └───────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────────────────┐
│ PARALLEL AGENT PROCESSING │
└──────────────────────────────────────────────────────────────────────────────┘
│
▼
┌───────────────────────────┐ ┌───────────────────────────┐ ┌───────────────────────────┐
│ GitHub API │──▶│ Repository Data │──▶│ File Analysis │
│ Integration │ │ Fetch │ │ Engine │
└───────────────────────────┘ └───────────────────────────┘ └───────────────────────────┘
│ │ │
▼ ▼ ▼
┌───────────────────────────┐ ┌───────────────────────────┐ ┌───────────────────────────┐
│ Code Architect │ │ Tech Scout │ │ Career Advisor │
│ • Structure & Patterns │ │ • Frameworks & Tools │ │ • Readiness & Portfolio │
│ • Code Quality Insights │ │ • Languages & Modernity │ │ • Professional Gaps │
└───────────────────────────┘ └───────────────────────────┘ └───────────────────────────┘
│ │ │
▼ ▼ ▼
┌───────────────────────────┐ ┌───────────────────────────┐ ┌───────────────────────────┐
│ Innovation Detector │──▶│ Cross-Repo Analysis │──▶│ Pattern Detection │
│ • Creativity & Problem │ │ • Consistency & Evolution │ │ • Learning & Insights │
│ Solving Evaluation │ │ │ │ │
└───────────────────────────┘ └───────────────────────────┘ └───────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────────────────┐
│ INTELLIGENT SYNTHESIS │
└──────────────────────────────────────────────────────────────────────────────┘
│
▼
┌───────────────────────────┐ ┌───────────────────────────┐ ┌───────────────────────────┐
│ Agent Results │──▶│ Career Profile │──▶│ Final Report │
│ Aggregation │ │ Generation │ │ Assessment │
└───────────────────────────┘ └───────────────────────────┘ └───────────────────────────┘
│ │ │
▼ ▼ ▼
┌───────────────────────────┐ ┌───────────────────────────┐ ┌───────────────────────────┐
│ Repo Insights │ │ Role Detection │ │ Hiring Path │
│ • Score: 1–10/10 │ │ • Full-Stack, Senior, etc │ │ • Next Projects, Gaps │
│ • Actionable Feedback │ │ • Confidence Analysis │ │ • Conceptual Readiness │
└───────────────────────────┘ └───────────────────────────┘ └───────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────────────────┐
│ USER RESULTS │
└──────────────────────────────────────────────────────────────────────────────┘
│
▼
┌───────────────────────────┐ ┌───────────────────────────┐ ┌───────────────────────────┐
│ Professional Dashboard│──▶│ Actionable Insights │──▶│ Career Roadmap │
│ • Summary Visualization │ │ • Personalized Guidance │ │ • Long-Term Planning │
└───────────────────────────┘ └───────────────────────────┘ └───────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────────────────┐
│ PERFORMANCE METRICS │
│ • Analysis Time: <10s (↓ from 1–2 mins) │
│ • GitHub API Calls: <100 (↓ from 5000+) │
│ • Parallel Agent Execution: 4x │
│ • Real-Time Updates: Live Progress Tracking │
└──────────────────────────────────────────────────────────────────────────────┘
Overall Experience
What Worked Well
Tiger Cloud's architecture enables genuine innovation in developer tooling. The database fork concept is transformative - giving each AI agent its own isolated workspace while maintaining data consistency is exactly what this multi-agent system needed. The documentation is surprisingly comprehensive for a cutting-edge platform, making the learning curve smoother than expected for my first-time experience with Agentic Postgres.
What Surprised Me
The performance improvement was staggering. Moving from my previous non-Tiger implementation (1-2 mins) to Tiger Cloud's Agentic Postgres (5-10 secs) wasn't just an optimization; it fundamentally changed the entire user experience from "submit and wait" to "watch real-time analysis." The efficiency & speed of Tiger Cloud services exceeded my expectations.
Key Challenges & Solutions
Challenge 1: Free Tier Service Limitations
• Problem: The free tier allows only 2 services, but I initially wanted 4+ dedicated agent workspaces.
• Reality Check: Hit this limit immediately during development.
• Solution: Redesigned the architecture with an intelligent fallback: agents share workspaces when fork creation fails (see the sketch below).
• Learning: Always design for graceful degradation, especially with cloud resource constraints.
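A minimal sketch of that graceful-degradation path, assuming a hypothetical getAgentWorkspace helper; the names mirror the snippets above (agentForks, tigerServiceId), but this is an illustration, not the exact code.
// Sketch: prefer an agent's dedicated fork, fall back to the shared base service
private getAgentWorkspace(agent: string): string {
  const forkId = this.agentForks.get(agent);
  if (forkId) {
    return forkId; // dedicated fork was created successfully
  }

  // Fallback: all agents share the base Tiger service workspace
  console.log(`⚠️ No dedicated fork for ${agent}, using shared workspace`);
  return this.tigerServiceId;
}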
Challenge 2: GitHub API Rate Limits Crisis
• Problem: First local test consumed 5023+ requests in one analysis run, hitting the 5000/hour limit.
• Impact: Had to wait 1 hour before I could test again - full panic mode!
• Solution: Complete optimization overhaul using Tiger Cloud's caching capabilities.
• Result: Reduced to <100 requests per analysis through intelligent file filtering and Tiger storage (a caching sketch follows the snippet below).
// Emergency optimization that saved the project
// (`tree` is the response of GitHub's git/trees API for the repository)
const importantFiles = tree.tree?.filter((file: any) =>
  file.type === 'blob' && (
    file.path.includes('README') ||
    file.path.endsWith('.js') ||
    file.path.endsWith('.ts') ||
    file.path === 'package.json'
  )
).slice(0, 10); // Ruthless limiting to essential files only
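File filtering was only half of the fix; the other half was caching GitHub responses in Postgres so repeat analyses don't re-hit the API. A minimal sketch of that idea, assuming a hypothetical github_cache table with a unique constraint on url (not the exact schema GitResume uses):
// Illustrative Postgres-backed cache for GitHub API responses
import { Pool } from 'pg';

async function cachedGithubFetch(pool: Pool, url: string, token: string): Promise<any> {
  // Serve from cache if this URL was fetched within the last hour
  const cached = await pool.query(
    `SELECT payload FROM github_cache
     WHERE url = $1 AND fetched_at > NOW() - INTERVAL '1 hour'`,
    [url]
  );
  if (cached.rows.length > 0) {
    return cached.rows[0].payload;
  }

  // Otherwise hit the GitHub API once and store the JSON response
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  const payload = await res.json();

  await pool.query(
    `INSERT INTO github_cache (url, payload, fetched_at)
     VALUES ($1, $2, NOW())
     ON CONFLICT (url) DO UPDATE SET payload = EXCLUDED.payload, fetched_at = NOW()`,
    [url, payload]
  );
  return payload;
}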
Challenge 3: Tiger Cloud Service Outages
• Problem: Encountered Tiger Cloud outages during development.
• Reality: Had to build robust fallback systems for production reliability.
• Solution: Implemented fallback paths that keep the analysis working even when Tiger services are unavailable (a minimal sketch follows).
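A minimal sketch of the fallback pattern, assuming a generic wrapper that tries the Tiger-backed path first & falls back to an in-memory path when the service is unreachable (the two helpers in the usage note are hypothetical):
// Sketch: try the Tiger-backed path, degrade gracefully if the service is down
async function withTigerFallback<T>(
  tigerPath: () => Promise<T>,
  localFallback: () => Promise<T>
): Promise<T> {
  try {
    return await tigerPath();
  } catch (error) {
    console.log('⚠️ Tiger Cloud unavailable, falling back to local in-memory analysis');
    return await localFallback();
  }
}

// Usage (hypothetical helpers): persist insights to the agent fork when possible,
// otherwise keep them in memory for the current session:
// await withTigerFallback(
//   () => storeInsightInFork(agent, insight),
//   () => storeInsightInMemory(agent, insight)
// );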
Development Reality Check
This was my first experience with Tiger Cloud, Tiger CLI, & Agentic Postgres, so I was essentially learning everything from scratch. Despite being new to the platform, I managed to build a working multi-agent system in over 20 hrs of development. The fact that a newcomer could achieve this level of integration speaks volumes about Tiger Cloud's developer experience.
Additional complexity: This was also my first Next.js project, adding another learning curve, but the combination worked seamlessly.
Key Learnings
- Agentic Postgres isn't just a database, it's a platform for building intelligent, collaborative systems.
- Zero-copy forks enable architectural patterns that simply weren't possible with traditional databases.
- Resource constraints drive innovation - the free tier limitations forced better design decisions, for me at least.
- Performance optimization through intelligent caching can be more impactful than code optimization.
- Always plan for service outages - robust fallbacks are essential for production applications.
My Experience Building with Agentic Postgres
Tiger Cloud transformed what could have been a slow, batch-processing tool into a real-time, interactive developer assistant. The 750MB storage limit on the free tier proved more than adequate, & the service creation limitations actually led to a more efficient architecture.
Bottom line: Tiger Cloud didn't just improve my application, it enabled an entirely new category of developer productivity tool that provides immediate, actionable value.
Thank You
Building GitResume has been an incredible journey for me. Tiger Cloud didn't just provide a database - it provided a new way of thinking about AI Agents & Applications. The ability to give each AI agent its own workspace through zero-copy forks opened up architectural possibilities I'd never imagined.
To the Tiger Data team: Thank you for creating a technology that enables developers like me to build things that seemed impossible just months ago. The seamless integration between Tiger CLI, database forks, & Agentic Postgres features made this hackathon project feel less like wrestling with infrastructure and more like pure innovation.
To the developer community: GitResume exists because we all know that our code tells our story better than any traditional resume ever could. I hope this platform helps fellow developers showcase their true capabilities & land the opportunities they deserve ✨.
The future of developer tools is collaborative AI systems, & Tiger Cloud has given us the foundation to build that future. GitResume is just the beginning.
And thank you, dear reader, for reading till the end 😊










