When developers think of AI tools, we usually think of GitHub Copilot or ChatGPT. But while optimizing my professional brand and navigating the job market, I realized there was a massive gap: Why don't we have an AI Copilot for LinkedIn?
As a Computer Science student at the University of Loralai and a Python developer, I wanted an AI assistant that could:
Help me analyze job descriptions directly against my profile.
Draft engaging, context-aware comments on posts to build my network.
Automatically audit and improve my LinkedIn profile based on modern recruiter expectations.
Instead of paying for expensive, closed-source SaaS tools, I decided to build exactly what I needed.
Say hello to LinkedIn AI Copilot — an open-source Chrome extension backed by FastAPI and Groq (Llama 3) that integrates right into the LinkedIn interface.
🔗 Check out the repository on GitHub here! https://github.com/buzdaryasir06/linkedin-ai-copilot

(If you find it useful, I’d love a star! ⭐)
🏗️ The Architecture: How It Works Under the Hood
To keep the tool fast, secure, and maintainable, I split the architecture into a lightweight Chrome Extension frontend and a robust Python backend.
- The Chrome Extension (Vanilla JS + Manifest V3)
I wanted the extension to feel native to LinkedIn.
Content Scripts: These extract the post context or job description text directly from the DOM using Vanilla JavaScript.
Job Scanner Orchestration: One of the coolest features is the Job Scanner. The extension automatically scans LinkedIn job search pages and injects color-coded badges directly onto the job cards based on how well my profile matches the requirements.
Background Worker: It proxies requests to the backend, so the content scripts never hold API keys or talk to external services directly.
- The Backend (FastAPI + Groq)
The extension acts as a thin client, passing the payload to a local Python backend.
FastAPI: I chose FastAPI because it is incredibly fast, easy to type-check via Pydantic schemas, and naturally handles asynchronous requests.
Groq & Llama 3: Speed is critical for a browser extension. Waiting 5-10 seconds for an LLM response ruins the flow. By routing the prompts (via app/services.py) to the Groq API running Llama 3, comment generations and job-profile audits come back almost instantly.
Local Persistence: Data such as the user's base profile and tracked jobs is stored in an async local SQLite database, so the same data never has to be re-fetched unnecessarily.
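To illustrate the caching idea, here is a minimal sketch of a local persistence layer. The real project sits behind FastAPI with an async SQLite driver; plain `sqlite3` stands in here, and the table and column names are my own illustration, not the project's actual schema.

```python
import sqlite3

def init_db(path=":memory:"):
    # Open (or create) the local cache database.
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS tracked_jobs (
               job_id TEXT PRIMARY KEY,
               title TEXT,
               match_score INTEGER
           )"""
    )
    return conn

def cache_job(conn, job_id, title, score):
    # Upsert so re-scanning the same job page never duplicates rows.
    conn.execute(
        "INSERT INTO tracked_jobs (job_id, title, match_score) VALUES (?, ?, ?) "
        "ON CONFLICT(job_id) DO UPDATE SET match_score = excluded.match_score",
        (job_id, title, score),
    )
    conn.commit()

def get_cached_score(conn, job_id):
    # Returns the cached score, or None if this job was never scanned.
    row = conn.execute(
        "SELECT match_score FROM tracked_jobs WHERE job_id = ?", (job_id,)
    ).fetchone()
    return row[0] if row else None
```

The upsert matters in practice: LinkedIn job cards reappear across searches, and you want one row per job, updated in place.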
✨ Core Features Designed to Save Time
Feature 1: The Context-Aware Comment Generator
"Great post!" is a terrible comment. But drafting a highly strategic, engaging technical comment takes mental energy. The Copilot generates three distinct options based on the post text:
Strategic: Focused on networking.
Authority: Adding technical insight or thought-leadership.
Questioning: Designed to spark genuine engagement.
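The three styles above map naturally to three prompt templates. This is a hypothetical sketch of how they could be expressed; the project's real prompts live in prompts.py and will differ.

```python
# Illustrative style briefs -- not the project's actual prompt text.
STYLE_BRIEFS = {
    "strategic": "Write a warm, networking-oriented comment that invites further conversation.",
    "authority": "Write a comment that adds a concrete technical insight or data point.",
    "questioning": "Write a comment that ends with one genuine, open-ended question.",
}

def build_comment_prompt(style: str, post_text: str) -> str:
    # Combine the post context scraped by the content script with a style brief.
    if style not in STYLE_BRIEFS:
        raise ValueError(f"unknown style: {style}")
    return (
        "You are commenting on a LinkedIn post.\n"
        f"Post:\n{post_text}\n\n"
        f"Instruction: {STYLE_BRIEFS[style]}\n"
        "Keep it under 60 words and avoid generic praise."
    )
```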
Feature 2: Job Match Analysis & Batch Scoring
When you are looking for jobs, you often open 15 tabs at once. The Job Scanner scores jobs in batches against your local profile. If a job is a 95% match, it injects a green badge to let you know it's worth your time. If it's a 30% match, you can skip it without wasting 10 minutes reading the description. When you click into a specific job, it generates customized application notes and resume adjustment recommendations.
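The shape of that batch-scoring pipeline can be sketched with a naive keyword-overlap heuristic standing in for the LLM call; the thresholds and badge colours here are illustrative, not the extension's actual values.

```python
def match_score(profile_skills: set[str], job_text: str) -> int:
    # Percentage of profile skills that appear in the job description.
    if not profile_skills:
        return 0
    words = set(job_text.lower().split())
    hits = sum(1 for skill in profile_skills if skill.lower() in words)
    return round(100 * hits / len(profile_skills))

def badge_colour(score: int) -> str:
    # Colour of the badge injected onto the job card.
    if score >= 75:
        return "green"
    if score >= 40:
        return "amber"
    return "red"

def score_batch(profile_skills, jobs):
    # Score every visible job card in one pass: {job_id: (score, colour)}.
    return {
        jid: (s := match_score(profile_skills, text), badge_colour(s))
        for jid, text in jobs.items()
    }
```

In the real extension the scoring step is an LLM prompt rather than set intersection, but the batching and the score-to-badge mapping work the same way.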
Feature 3: Full Profile Enhancement Audit
Building a great profile is hard. The Copilot analyzes six core sections: Headline, About, Experience, Skills, Featured, and Context. It doesn't just grade you; it provides actionable, ranked recommendations and rewrites (like shifting an experience bullet into an impact-driven CAR format: Challenge, Action, Result).
🚀 What I Learned Building This
DOM Traversal is messy: LinkedIn updates its DOM classes constantly, so relying heavily on specific class names is fragile. Leaning on semantic HTML structure and more forgiving query selectors instead proved far more robust, and was a key learning.
Speed is a Feature: Swapping from standard OpenAI models to Groq for inference completely transformed the user experience. The latency drop makes the tool feel like an integrated feature rather than a heavy API call.
Prompt Engineering in Production is Hard: It took serious iteration in prompts.py to get the AI to output consistently parsable JSON that the frontend could confidently inject into the DOM without breaking.
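One defensive pattern that helps with the "consistently parsable JSON" problem is to strip whatever prose or markdown fences the model wraps around its answer and parse only the outermost JSON object. This is a sketch of that approach, not the project's exact implementation.

```python
import json

def extract_json(raw: str) -> dict:
    # Models sometimes wrap JSON in prose or ```json fences; grab the
    # outermost {...} span and parse only that.
    start = raw.find("{")
    end = raw.rfind("}")
    if start == -1 or end == -1 or end < start:
        raise ValueError("no JSON object found in model output")
    return json.loads(raw[start:end + 1])
```

Combined with a prompt that demands a fixed schema, this keeps one chatty model response from breaking the DOM injection on the frontend.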
Final Thoughts & Next Steps
This project started as a tool to solve my own problems—automating the repetitive parts of networking and job hunting—but it quickly grew into the most complex and rewarding full-stack application I’ve ever built.
It fuses my Python backend skills with creative extension development, and I’m incredibly proud to open-source it to the community.
If you are a developer looking for an edge in the job market, or you're just curious about how Chrome Extensions can leverage local FastAPI backends, I invite you to try it out!
👉 Take a look at the code or try it locally: https://github.com/buzdaryasir06/linkedin-ai-copilot
I'm completely open to pull requests, feedback, or any questions about the architecture in the comments below! Have you ever built a Chrome extension that interacts with an external AI API? Let me know your experience!