SkillGap: Building an AI-Powered Career Assistant

Jing Ng & Liuyi · Spring 2026


Introduction: The Vision of SkillGap

The Problem

Job descriptions are hard to read. They are long, full of buzzwords, and it is genuinely difficult to know if you qualify. Most people either talk themselves out of jobs they could get, or apply to roles they are not ready for.

There is no quick, honest way to compare your skills to what a job actually needs. That is the problem we wanted to solve.

The Solution

SkillGap does the comparison for you. Paste a job description into the app. It checks it against your saved skill profile and gives you a match score, a breakdown of your skills into three groups (Have / Missing / Bonus), and a personalized AI learning roadmap.

You get a clear, actionable plan instead of a guess. As you learn and update your profile, your results improve too.

A Two-Person, Two-Tier Approach

We divided the work so both of us could build at the same time without conflicts.

Jing built the foundation: JWT authentication, skill profile CRUD, the keyword extraction engine, and the animated match ring.

Liuyi built the smarter layer: Claude API integration for the learning roadmap, the TDD testing framework, framer-motion UI animations, Skeleton UI loading states, and the AI evaluation suite.

Two Ways We Used AI

We used two AI tools in different roles throughout the project.

Claude Web handled planning and artifacts. Jing used it to turn early ideas into a prioritized plan. Liuyi used it to convert a hand-drawn wireframe into a proper UI reference, and later to generate the evaluation results webpage from real test data.

Antigravity handled coding inside the IDE. Both of us used it to write, refactor, and debug code across the frontend and backend.

The split was simple: Claude Web for planning, Antigravity for building.


Part I: The Foundation: Building the Core

JWT Authentication and User Accounts

Everything in SkillGap depends on knowing who the user is. We built JWT-based authentication first: signup, login, logout, and protected routes that reject unauthenticated requests at the API boundary.

Passwords are salted and hashed before they reach the database. JWT tokens are validated on every protected request. We built this first because every other feature depends on it.
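The token mechanics can be sketched with only the standard library (a real implementation would use a maintained library such as PyJWT; the secret, claim names, and TTL below are illustrative, not our production values):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"change-me"  # illustrative; real secrets come from environment variables


def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_token(user_id: int, ttl_seconds: int = 3600) -> str:
    # Payload carries the user id ("sub") and an expiry timestamp ("exp").
    claims = {"sub": user_id, "exp": time.time() + ttl_seconds}
    payload = _b64(json.dumps(claims).encode())
    sig = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{sig}"


def verify_token(token: str):
    payload, sig = token.split(".")
    expected = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: token was tampered with
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        return None  # token expired
    return claims["sub"]
```

Every protected route runs the equivalent of `verify_token` before touching any user data.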

On top of auth, we built full CRUD operations for the skill profile. Users can add, edit, and delete skills at any time. Changes persist immediately, so every new analysis uses the latest version of the profile.

Data Modeling with SQLAlchemy

We designed the SQLAlchemy schema to support the full app from the start. The core tables cover users, skill profiles, and analysis history. The history table lets users look back at past analyses and track how their match scores improve over time.

We used Alembic for migrations. Any schema change is a migration file that can be reviewed, tested, and rolled back cleanly instead of a risky manual database edit.
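As a rough sketch of the three-table shape (shown here with stdlib sqlite3 rather than the real SQLAlchemy + PostgreSQL setup; column names are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id            INTEGER PRIMARY KEY,
    email         TEXT UNIQUE NOT NULL,
    password_hash TEXT NOT NULL
);
CREATE TABLE skills (
    id      INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id),
    name    TEXT NOT NULL
);
CREATE TABLE analyses (
    id          INTEGER PRIMARY KEY,
    user_id     INTEGER NOT NULL REFERENCES users(id),
    job_title   TEXT,
    match_score REAL NOT NULL,
    created_at  TEXT DEFAULT CURRENT_TIMESTAMP
);
""")

conn.execute("INSERT INTO users (email, password_hash) VALUES (?, ?)",
             ("demo@example.com", "hash"))
conn.execute("INSERT INTO analyses (user_id, job_title, match_score) "
             "VALUES (1, 'Backend Engineer', 72.0)")
# The history query behind score tracking: one user's analyses over time.
row = conn.execute(
    "SELECT match_score FROM analyses WHERE user_id = 1 ORDER BY created_at"
).fetchone()
```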

The Keyword Extraction Engine

This is the core algorithm of the product. The engine takes a job description and a user's skill profile, compares both against a curated list of tech skills, and produces three outputs: skills the user has that the job wants, skills the job wants that the user does not have, and bonus skills the user brings that the job did not ask for.

The matching uses keyword normalization so common variations map to the same skill. "React," "ReactJS," and "React.js" all resolve to the same entry. The vocabulary covers frontend, backend, infrastructure, and data tools.
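A minimal sketch of the normalization and comparison logic (the alias table is a tiny illustrative slice of the real curated vocabulary, and the function names are assumptions):

```python
import re

# Alias table: spelling variants resolve to one canonical skill.
ALIASES = {
    "react": "react", "reactjs": "react", "react.js": "react",
    "postgres": "postgresql", "postgresql": "postgresql",
    "node": "node.js", "nodejs": "node.js", "node.js": "node.js",
}


def extract_skills(text: str) -> set[str]:
    # Split on anything that is not a word character, dot, plus, or hash so
    # tokens like "react.js" and "c++" survive tokenization.
    tokens = re.split(r"[^\w.+#]+", text.lower())
    found = set()
    for tok in tokens:
        tok = tok.strip(".")  # drop sentence punctuation, keep inner dots
        if tok in ALIASES:
            found.add(ALIASES[tok])
    return found


def compare(job_skills: set[str], profile_skills: set[str]) -> dict:
    # The three-column output: Have / Missing / Bonus.
    return {
        "have": sorted(job_skills & profile_skills),
        "missing": sorted(job_skills - profile_skills),
        "bonus": sorted(profile_skills - job_skills),
    }
```

Because the canonical form is computed on both sides, a profile listing "ReactJS" still matches a job asking for "React.js".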

The engine lives in its own module, separate from the auth and profile modules. That made it straightforward to unit test in isolation and easy to extend later.

The Animated Match Ring

A raw percentage is hard to read at a glance. We built an animated SVG ring that shows the match score visually: green for a strong match, yellow for partial, red for a significant gap. The ring draws progressively on page load so the result feels like it is being calculated rather than just appearing.

Below the ring, the three-column view shows the exact skills behind the score so users can see what drove the number.

Claude Web for Early Product Definition

Before writing any code, Jing used Claude Web to work through the project design in plain language. The conversations helped pin down the core user flow, figure out which features to build first, and surface questions that the PRD did not answer yet. For example, what should the app do when a job description mentions a skill that is not in the curated list? What should a new user see before they have set up a profile?

Getting those decisions made before touching the code saved significant debugging time later.


Part II: The Brain: AI and Engineering

Claude API Integration for Learning Roadmaps

The extraction engine tells you what skills you are missing. The Claude API tells you what to do about it.

After the engine produces a list of missing skills, the FastAPI backend sends those skills and the original job description to the Claude API. Claude returns a structured JSON roadmap. Each step in the roadmap has a skill focus, recommended resources, estimated time, and a short explanation of why that skill appears at that point in the sequence.

The React frontend parses the JSON and renders it as a row of cards, one per topic, so the plan is easy to read and act on. It is not a generic list of links. It is a personalized, ordered plan built around what the specific job needs.
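Since the response schema is enforced by our prompt rather than by the API itself, the backend should validate the JSON before the frontend ever renders it. A sketch, with illustrative field names:

```python
import json

REQUIRED_STEP_FIELDS = {"skill", "resources", "estimated_time", "why"}


def parse_roadmap(raw: str) -> list:
    # Reject malformed model output instead of letting it reach the UI.
    data = json.loads(raw)
    steps = data.get("steps")
    if not isinstance(steps, list) or not steps:
        raise ValueError("roadmap must contain a non-empty 'steps' list")
    for i, step in enumerate(steps):
        missing = REQUIRED_STEP_FIELDS - step.keys()
        if missing:
            raise ValueError(f"step {i} missing fields: {sorted(missing)}")
    return steps


sample = json.dumps({"steps": [{
    "skill": "Docker",
    "resources": ["Official Docker getting-started tutorial"],
    "estimated_time": "1 week",
    "why": "Listed in the JD and absent from the profile",
}]})
steps = parse_roadmap(sample)
```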

TDD for the Engine, AI Evaluation for the Roadmap

AI features are harder to test than deterministic code because the output changes every time. We handled the two parts with different approaches.

For the extraction engine, we used strict TDD. Tests were written before any implementation code. Every backend module has a corresponding test_<module_name>.py file in server/tests/, and the CI pipeline blocks any merge that drops pytest coverage below 80%.

For the Claude roadmap, we built an AI Assessment Test Suite that scores each response on three dimensions: Relevance (do the resources match the missing skill?), Specificity (does it give concrete next steps rather than vague advice?), and Completeness (does it cover every missing skill?). These scores feed into the eval dashboard.
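A sketch of how the per-response scores can be aggregated for the dashboard (the three dimensions come from our rubric; the 1-5 scale and equal weighting are assumptions):

```python
DIMENSIONS = ("relevance", "specificity", "completeness")


def aggregate(scores: dict) -> float:
    # Validate each rubric dimension, then average to a single quality score.
    for dim in DIMENSIONS:
        if not 1 <= scores[dim] <= 5:
            raise ValueError(f"{dim} must be on the 1-5 scale")
    return round(sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS), 2)


overall = aggregate({"relevance": 5, "specificity": 4, "completeness": 4})
```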

Backend Security

Two security issues came up during development that were not in the original plan.

The first was client-side score spoofing. The match score was being calculated on the React frontend and sent to the FastAPI backend for storage, which meant a user could manipulate the value before it was saved. We moved the calculation entirely to the server so any client-provided score is ignored.

The second was unhandled SQLAlchemy exceptions. Database errors were returning 500 responses that included full stack traces, which exposes internal implementation details. We added a global error handler that logs the full detail server-side and returns a clean, safe error message to the client.
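In FastAPI this would hang off an `@app.exception_handler` registered for SQLAlchemy errors; the helper below shows the sanitizing logic with only the standard library (the response shape and logger name are illustrative):

```python
import logging
import uuid

log = logging.getLogger("skillgap")


def safe_db_error(exc: Exception) -> dict:
    # Log full detail server-side; return only a generic message plus a
    # correlation id the user can report.
    error_id = str(uuid.uuid4())
    log.error("database error %s: %r", error_id, exc)
    return {"detail": "An internal error occurred.", "error_id": error_id}


body = safe_db_error(RuntimeError("duplicate key value violates unique constraint"))
```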

Both issues were caught in code review before they reached production.

UI Polish with framer-motion and Skeleton UI

We did a focused UI pass to make the app feel responsive, especially during the Claude API call, which takes a few seconds.

We used framer-motion to add staggered animations to the skill columns and roadmap cards. Results animate in one after another rather than all appearing at once, which makes the page feel like it is presenting information rather than dumping it.

While the Claude API call is in flight, the results panel shows a Skeleton UI placeholder that matches the shape of the real content. Users see the layout of what is coming instead of a blank screen, which makes the wait feel much shorter.

Wireframe to Production: Liuyi's UI Workflow

Liuyi started with a hand-drawn sketch of the dashboard layout. That sketch went into Claude Web, which generated a detailed wireframe with component suggestions and layout reasoning. The wireframe became the design spec for the production implementation, which was then built with Antigravity inside the IDE.

The workflow was: sketch to Claude Web wireframe to production build. Having a concrete reference before writing any React code meant fewer design guesses and fewer rewrites.


Part III: Quality and Evaluation

CI/CD with GitHub Actions

Every pull request goes through a multi-stage GitHub Actions pipeline before it can merge: lint (ruff and ESLint), security scan, pytest, and build. The coverage check is a hard gate. If backend coverage drops below 80%, the pipeline fails and the PR is blocked.

No secrets live in the codebase. API keys, JWT secrets, and database connection strings are stored in GitHub Secrets and injected at runtime. The app deploys to Render automatically when a PR merges to main.

The pipeline enforces the quality standards that are easy to skip when they only live in a document. A rule wired into CI runs every single time.

AI-as-a-Judge Evaluation

To evaluate the roadmaps Claude generates, we used a second AI call as the judge. After Claude produces a roadmap, a batch evaluation job sends that roadmap plus the original missing skills and job description to an LLM evaluator. The evaluator scores the response on Relevance, Specificity, and Completeness using a fixed rubric.

Using AI to evaluate AI output is the practical approach for testing non-deterministic results at scale. It does not replace human review, but it makes it possible to run evaluations across many test cases and get consistent, structured scores that can be tracked over time.

The Evaluation Webpage Artifact

Liuyi ran the full evaluation suite on real test data and passed the results into Claude Web. Claude Web generated an evaluation webpage artifact: a clean, readable summary of all the scores with notes explaining what drove the high and low marks.

The page served two purposes. It gave us a concrete quality signal on the Claude API integration. And it produced shareable documentation that shows exactly how the AI output was measured and what the results were.


Conclusion

Efficiency Gains from an AI-First Workflow

Using Claude Web for planning and Antigravity for implementation made the whole development process significantly faster. The biggest gains came from two places: early planning conversations that cleared up ambiguity before any code was written, and in-IDE acceleration that handled boilerplate and caught errors during implementation rather than after.

Why the Two AI Roles Needed to Stay Separate

Claude Web is a conversation. It works well for open-ended thinking, generating artifacts, and working through decisions that are not well-defined yet. Antigravity operates directly on your files. It works well for writing, refactoring, and debugging real code.

Mixing the two roles would have been slower. Copying code from a chat window into a file adds friction. Trying to do open-ended planning inside the coding tool loses the iterative conversational loop that surfaces better decisions. Keeping each tool in its role made both more effective.

Future Roadmap

The most valuable technical upgrade from here would be replacing keyword extraction with sentence-transformer embeddings for semantic skill matching. The current engine only matches exact keywords. It cannot recognize that "component-based UI development" means React. Semantic matching would make the app useful for a much wider range of job descriptions, including ones written in plain English rather than keyword lists.

Other ideas include resume parsing so users can build their skill profile from an uploaded document, and a reverse analysis mode where an employer checks a candidate's profile against a job posting.

Final Thoughts

SkillGap started as a class project and became something we are proud to show. The modular architecture meant both of us could work independently without stepping on each other's code. A shared .antigravityrules file meant we agreed on conventions before they became conflicts. And splitting AI responsibilities cleanly between planning and building gave us speed without losing control of the output quality.

The biggest thing we learned: AI tools make a good engineering process faster. They do not replace the need for one.




SkillGap — Tech Stack Skill Gap Analyzer

A web app that helps job seekers identify skill gaps from job descriptions. Paste a JD, see your match score, and get an AI-generated learning roadmap.

Team 7: Jing Ng · Liuyi Yang

Tech Stack

  • Frontend: React 18, TypeScript, Vite, Tailwind CSS, Zustand
  • Backend: FastAPI, Python 3.11, SQLAlchemy, PostgreSQL
  • AI: Claude API (claude-sonnet-4-20250514)

Features


Screenshots

Login page

Sign up page

Profile Setup Page


Main Dashboard - Skill Match Analysis


System Architecture

Request flow when user submits a job description

The user pastes a job description, our server compares it against their saved skill profile, calculates a match score, and asks Claude AI to generate a learning plan for the gap.

Frontend + Backend + Database + Claude API


Project Structure

SkillGap/
├── client/           # React frontend (Vite, Tailwind, Zustand)
├── server/           # FastAPI backend
│   ├── tests/        # pytest test suite
│   ├──
