The Problem: Doom-Scrolling for Jobs
I graduated in July 2024 (B.Tech CS), and I quickly realized a painful truth: Job hunting is 90% manual data entry and 10% actual interviewing.
I was spending hours scrolling through portals, filtering by keywords, and trying to guess if a job description actually matched my resume. I wanted a tool that didn't just look for "Python" or "React," but actually understood the context of my experience.
So, I stopped applying for a week and built TackleIt.
It’s an AI-powered platform that scrapes real-time job listings and uses Google Gemini to match them against your specific profile.
The High-Level Architecture 🏗️
I didn't want to build a simple CRUD app. I wanted a production-grade, scalable system that cost me $0 to run while I looked for work.
Here is the full system architecture diagram I designed:
🔍 How to Read This Architecture
I designed the system in logical clusters to separate concerns. Here is how the data flows through the diagram above:
1. The Client Layer (Top Row)
Everything starts with Next.js deployed on AWS Amplify. I chose Amplify because it handles the CI/CD pipeline automatically. When a user lands on tackleit.xyz, the request routes through Hostinger DNS to the Amplify edge network.
2. The Application Layer (Middle Left)
This is the engine room.
API Gateway: Acts as the front door for the backend.
AWS Lambda + Docker: Instead of renting a server (EC2) that costs money 24/7, I packaged my FastAPI backend into a Docker container. It sits in AWS ECR and is deployed to Lambda. It only spins up when a user actually makes a request.
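To make that concrete, here is a minimal sketch of what a Lambda-ready container image for a FastAPI app can look like. The repo isn't shown here, so the paths, module names, and the use of Mangum (a common ASGI-to-Lambda adapter) are my assumptions, not necessarily what TackleIt ships:

```dockerfile
# Sketch: FastAPI in a Lambda container image (paths/module names are assumptions)
FROM public.ecr.aws/lambda/python:3.12

# Install dependencies into the Lambda task root
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Copy the FastAPI app; Mangum adapts ASGI to the Lambda event format,
# assuming app/main.py contains `handler = Mangum(app)`
COPY app/ ${LAMBDA_TASK_ROOT}/app/

CMD ["app.main.handler"]
```

The image is pushed to ECR and referenced by the Lambda function, so you pay per invocation instead of per hour.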
3. The Intelligence Layer (Middle Right - The "Brain")
You’ll see Google Gemini AI tagged in the diagram. This is where the logic differs from a standard job board.
The Lambda function sends the parsed resume data + job description to Gemini 2.5 Flash.
Gemini returns a Match Score and reasoning, which is sent back to the user.
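One practical detail with LLM responses: the model often wraps its JSON verdict in prose or markdown fences, so the Lambda has to parse it defensively before returning it to the user. This is a sketch of how that step could look; the field names (`match_score`, `reasoning`) are assumptions about the response schema, not confirmed from the repo:

```python
import json
import re


def parse_match_response(raw: str) -> dict:
    """Extract the JSON match verdict from a model reply.

    Pulls out the first {...} block instead of json.loads()-ing the
    raw text, since models tend to add prose around the JSON.
    """
    found = re.search(r"\{.*\}", raw, re.DOTALL)
    if not found:
        raise ValueError("No JSON object found in model output")
    data = json.loads(found.group(0))
    # Clamp the score so a hallucinated "110" can't break the UI
    data["match_score"] = max(0, min(100, int(data.get("match_score", 0))))
    return data


# Example: a typical chatty reply from the model
reply = 'Sure! {"match_score": 82, "reasoning": "Strong Python overlap."}'
verdict = parse_match_response(reply)
```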
4. The Automation Cluster (Bottom Right)
This is my favorite part. I use GitHub Actions not just for testing, but as a "Cron Job."
Every week, a workflow triggers a set of Python Scrapers.
These scrapers fetch fresh jobs and hydrate the MongoDB Atlas database, so the app always has new content without me lifting a finger.
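The "GitHub Actions as cron" pattern boils down to a `schedule` trigger in the workflow file. A sketch of what that workflow could look like (the file names, script paths, and schedule are my assumptions; only the `on.schedule.cron` mechanism itself is standard GitHub Actions syntax):

```yaml
# .github/workflows/scrape.yml — weekly scraper run (paths are assumptions)
name: weekly-job-scrape

on:
  schedule:
    - cron: "0 3 * * 1"   # every Monday at 03:00 UTC
  workflow_dispatch:        # allow manual runs from the Actions tab

jobs:
  scrape:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r scrapers/requirements.txt
      - run: python scrapers/run_all.py
        env:
          MONGODB_URI: ${{ secrets.MONGODB_URI }}
```

The MongoDB connection string lives in repository secrets, so the scrapers can hydrate Atlas without any credentials in the repo.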
🛠️ The Tech Stack
Frontend: Next.js, TypeScript, Tailwind CSS, Framer Motion
Backend: FastAPI (Python), Docker
Database: MongoDB Atlas
AI: Google Gemini 2.5 Flash
Infrastructure: AWS (Lambda, API Gateway, ECR, Amplify)
Why Gemini? 🧠
I tested a few models, but Gemini 2.5 Flash hit the sweet spot for this architecture: it has a massive context window, which is great for reading messy, unstructured resumes.
In the code, I don't just ask "Is this a match?" I provide the model with a specific persona: “You are a strict technical recruiter. Analyze this candidate's resume against this job description...”
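As a rough sketch, the prompt assembly could look like the function below. The exact wording and the JSON response contract are my assumptions built around the persona quoted above; the real prompt in the repo may differ:

```python
def build_match_prompt(resume_text: str, job_description: str) -> str:
    """Assemble the recruiter-persona prompt sent to the model.

    The persona line pushes the model toward critical, structured
    output instead of generic encouragement.
    """
    return (
        "You are a strict technical recruiter. "
        "Analyze this candidate's resume against this job description. "
        'Respond with JSON: {"match_score": 0-100, "reasoning": "..."}.\n\n'
        f"--- RESUME ---\n{resume_text}\n\n"
        f"--- JOB DESCRIPTION ---\n{job_description}"
    )


prompt = build_match_prompt("B.Tech CS, Python, FastAPI", "Backend engineer, Python")
```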
Try It Out
The project is fully live and operational. You can use it right now to find jobs or just to poke around the dashboard.
👉 Live Site: https://tackleit.xyz
👉 GitHub Repo: https://github.com/praneeth552/Jobfinder.git
I’m actively looking for feedback on the Data Layer optimization and the Lambda Cold Start times. If you have experience with serverless FastAPI, let me know in the comments!