<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: SRI VILLIAM SAI</title>
    <description>The latest articles on DEV Community by SRI VILLIAM SAI (@srivilliamsai).</description>
    <link>https://dev.to/srivilliamsai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3844075%2Fb0298722-dc68-489c-b5ff-e613f8f9d89a.jpg</url>
      <title>DEV Community: SRI VILLIAM SAI</title>
      <link>https://dev.to/srivilliamsai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/srivilliamsai"/>
    <language>en</language>
    <item>
      <title>How I Built an Auto-Generating Resume System with Node.js Microservices and Kafka</title>
      <dc:creator>SRI VILLIAM SAI</dc:creator>
      <pubDate>Thu, 26 Mar 2026 05:20:41 +0000</pubDate>
      <link>https://dev.to/srivilliamsai/how-i-built-an-auto-generating-resume-system-with-nodejs-microservices-and-kafka-3l4o</link>
      <guid>https://dev.to/srivilliamsai/how-i-built-an-auto-generating-resume-system-with-nodejs-microservices-and-kafka-3l4o</guid>
      <description>&lt;p&gt;Have you ever forgotten to update your resume after completing a course or finishing an internship? I have, many times. So I built a system that does it automatically.&lt;/p&gt;

&lt;p&gt;Resume Ecosystem is an open-source project that auto-generates a verified, living resume from your real achievements. Every internship, course, hackathon, or project you log is streamed through Kafka, verified, scored, and reflected in your resume without you ever touching a PDF.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/srivilliamsai/resume-ecosystem-node" rel="noopener noreferrer"&gt;https://github.com/srivilliamsai/resume-ecosystem-node&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The Problem With Static Resumes&lt;/h2&gt;

&lt;p&gt;Most developers maintain a Word document or Canva template they update once every few months. The result is always outdated, always missing something, and always a pain to format.&lt;/p&gt;

&lt;p&gt;I wanted a system where your resume updates itself as you grow. Log an achievement, get it verified, watch your resume rebuild automatically.&lt;/p&gt;

&lt;h2&gt;Architecture Overview&lt;/h2&gt;

&lt;p&gt;The system is built as 8 microservices communicating through Kafka topics:&lt;/p&gt;

&lt;p&gt;API Gateway: handles all incoming requests and JWT verification (port 4000)&lt;br&gt;
Auth Service: manages registration, login, and token issuance (port 4010)&lt;br&gt;
Activity Service: handles CRUD for achievements, with Jaccard-similarity deduplication (port 4020)&lt;br&gt;
Verification Service: verifies achievement hashes, backed by an LRU cache (port 4030)&lt;br&gt;
Resume Service: rebuilds, ranks, and versions your resume (port 4040)&lt;br&gt;
Integration Service: ingests webhooks from external platforms (port 4050)&lt;br&gt;
Notification Service: fans out resume events via email and WebSocket (port 4060)&lt;br&gt;
File Service: renders the final PDF resume using pdfkit (port 4070)&lt;/p&gt;
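
&lt;p&gt;The Jaccard-similarity deduplication in the Activity Service can be sketched in a few lines. This is an illustrative version, not the repo's actual code; the function names and the 0.8 threshold are assumptions:&lt;/p&gt;

```javascript
// Jaccard dedup sketch: two achievement titles count as duplicates when the
// overlap of their word sets, divided by the size of their union, crosses a
// threshold. Names and threshold are illustrative, not taken from the repo.
function tokenize(title) {
  return new Set(title.toLowerCase().split(/\W+/).filter(Boolean));
}

function jaccard(a, b) {
  const setA = tokenize(a);
  const setB = tokenize(b);
  let intersection = 0;
  for (const word of setA) {
    if (setB.has(word)) intersection += 1;
  }
  const union = setA.size + setB.size - intersection;
  return union === 0 ? 0 : intersection / union;
}

function isDuplicate(newTitle, existingTitles, threshold = 0.8) {
  return existingTitles.some((t) => jaccard(newTitle, t) >= threshold);
}
```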

&lt;p&gt;The event flow looks like this:&lt;/p&gt;

&lt;p&gt;User logs activity → activity.created fires on Kafka → Verification service picks it up → activity.verified fires → Resume service rebuilds → resume.version.published fires → Notification service alerts the user&lt;/p&gt;
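
&lt;p&gt;To make that chain concrete, here is a toy in-memory version of the topic flow. The real system wires this through KafkaJS producers and consumers; this sketch only shows how each service reacts to the previous event:&lt;/p&gt;

```javascript
// Toy in-memory event bus standing in for Kafka (illustrative only).
const handlers = {};
const publishedVersions = [];

const bus = {
  subscribe(topic, fn) { (handlers[topic] = handlers[topic] ?? []).push(fn); },
  publish(topic, event) { (handlers[topic] ?? []).forEach((fn) => fn(event)); },
};

// Verification service: reacts to activity.created, emits activity.verified.
bus.subscribe('activity.created', (e) => {
  bus.publish('activity.verified', { ...e, verified: true });
});

// Resume service: reacts to activity.verified, publishes a new resume version.
bus.subscribe('activity.verified', (e) => {
  bus.publish('resume.version.published', { userId: e.userId, version: 2 });
});

// Notification service: end of the chain.
bus.subscribe('resume.version.published', (e) => publishedVersions.push(e));

bus.publish('activity.created', { userId: 'u1', title: 'Backend Internship' });
```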

&lt;h2&gt;Key Technical Decisions&lt;/h2&gt;

&lt;p&gt;I chose Kafka over direct REST calls between services because it keeps everything decoupled. The resume service does not care who verified an activity or when. It just listens for the verified event and rebuilds.&lt;/p&gt;

&lt;p&gt;I used Fastify instead of Express because of its schema-based validation and significantly better benchmark performance. Every route has a defined request and response schema.&lt;/p&gt;
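
&lt;p&gt;A Fastify route schema looks roughly like this. The route, field names, and constraints below are hypothetical, not taken from the repo; they just show the shape Fastify validates against:&lt;/p&gt;

```javascript
// Hypothetical JSON Schema for a "create activity" route. Fastify validates
// the request body against `body` and serializes replies against `response`.
const createActivitySchema = {
  body: {
    type: 'object',
    required: ['title', 'type'],
    properties: {
      title: { type: 'string', minLength: 3 },
      type: { type: 'string', enum: ['internship', 'course', 'hackathon', 'project'] },
      certificateUrl: { type: 'string', format: 'uri' },
    },
  },
  response: {
    201: {
      type: 'object',
      properties: { id: { type: 'string' }, status: { type: 'string' } },
    },
  },
};

// With Fastify this would be registered roughly as:
// app.post('/activities', { schema: createActivitySchema }, handler);
```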

&lt;p&gt;Prisma was the right ORM choice here because the monorepo has multiple services, each with its own schema, sharing the same PostgreSQL instance. Prisma handles the per-service client generation cleanly.&lt;/p&gt;
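
&lt;p&gt;The per-service setup means each service carries its own schema.prisma pointing at the shared Postgres. An illustrative fragment (model, env var name, and output path are hypothetical, not the repo's actual files):&lt;/p&gt;

```prisma
// Illustrative per-service Prisma schema: own models, own generated client,
// same PostgreSQL instance underneath.
datasource db {
  provider = "postgresql"
  url      = env("ACTIVITY_DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
  output   = "./generated/activity-client"
}

model Activity {
  id        String   @id @default(cuid())
  userId    String
  title     String
  type      String
  verified  Boolean  @default(false)
  createdAt DateTime @default(now())
}
```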

&lt;p&gt;Redis handles two things: caching verification lookups in the verification service and rate limiting at the API gateway level.&lt;/p&gt;
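
&lt;p&gt;The rate-limiting side is a fixed-window counter. In the real gateway the counter lives in Redis (an INCR plus EXPIRE on the same key) so all gateway instances share it; this in-memory sketch with a Map is only for illustration:&lt;/p&gt;

```javascript
// Fixed-window rate limiter sketch. One counter per client per time window;
// a Map stands in for Redis here, so counts are per-process only.
const requestWindows = new Map();

function allowRequest(clientId, limit = 100, windowMs = 60000, now = Date.now()) {
  const windowKey = `${clientId}:${Math.floor(now / windowMs)}`;
  const count = (requestWindows.get(windowKey) ?? 0) + 1;
  requestWindows.set(windowKey, count);
  return limit >= count; // true while the client is inside its quota
}
```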

&lt;h2&gt;How Verification Works&lt;/h2&gt;

&lt;p&gt;When you submit an activity with a certificate URL or credential hash, the verification service checks it against trusted issuers. Results are cached in an LRU cache so repeated lookups are instant. Once verified, the Kafka event fires and your resume rebuilds within seconds.&lt;/p&gt;
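
&lt;p&gt;An LRU cache is small enough to sketch inline. This is a generic minimal version, not necessarily what the verification service uses; it relies on the fact that a JavaScript Map remembers insertion order, so re-inserting a key on every read keeps the most recently used entries at the end:&lt;/p&gt;

```javascript
// Minimal LRU cache: evicts the least recently used entry once capacity is
// exceeded. Illustrative sketch only.
class LruCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);       // move the key to the end (most recent)
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry: the first key in iteration order.
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```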

&lt;h2&gt;Resume Scoring Algorithm&lt;/h2&gt;

&lt;p&gt;Activities are not treated equally. The resume service assigns impact scores based on activity type, issuer reputation, recency, and verification status. Internships at known companies score higher than self-reported projects. Verified credentials score higher than unverified ones. The final resume score is a weighted average that gives recruiters a quick signal.&lt;/p&gt;
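
&lt;p&gt;In code, that idea looks something like the sketch below. The weights, the exponential recency decay, and the 0-100 scale are my own illustrative choices, not the repo's actual numbers:&lt;/p&gt;

```javascript
// Impact-scoring sketch: type weight x issuer reputation x verification
// multiplier x recency decay, scaled to 0-100. All constants are illustrative.
const TYPE_WEIGHTS = { internship: 1.0, hackathon: 0.8, course: 0.6, project: 0.5 };

function impactScore(activity, now = Date.now()) {
  const typeWeight = TYPE_WEIGHTS[activity.type] ?? 0.4;
  const issuerWeight = activity.issuerReputation ?? 0.5;  // 0..1
  const verifiedWeight = activity.verified ? 1.0 : 0.5;   // unverified counts half
  const ageDays = (now - activity.completedAt) / 86400000;
  const recencyWeight = Math.exp(-ageDays / 365);         // decays over ~a year
  return 100 * typeWeight * issuerWeight * verifiedWeight * recencyWeight;
}

function resumeScore(activities, now = Date.now()) {
  if (activities.length === 0) return 0;
  const total = activities.reduce((sum, a) => sum + impactScore(a, now), 0);
  return total / activities.length;
}
```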

&lt;h2&gt;Tech Stack&lt;/h2&gt;

&lt;p&gt;Node.js 20 with TypeScript 5&lt;br&gt;
Fastify for all HTTP services&lt;br&gt;
KafkaJS for event streaming&lt;br&gt;
Prisma ORM with PostgreSQL&lt;br&gt;
Redis 7 for caching&lt;br&gt;
React 18 with Vite and TailwindCSS&lt;br&gt;
Zustand for frontend state&lt;br&gt;
pdfkit for PDF generation&lt;br&gt;
Docker Compose for local dev&lt;br&gt;
Kubernetes manifests for deployment&lt;/p&gt;

&lt;h2&gt;What I Learned&lt;/h2&gt;

&lt;p&gt;Building event-driven systems is harder than it looks. The biggest challenge was not Kafka itself but handling failures gracefully. What happens if the verification service is down when an activity.created event fires? You need retry logic, dead-letter queues, and idempotent consumers.&lt;/p&gt;
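
&lt;p&gt;The shape of that pattern, stripped to its essentials, is below. It is a generic sketch of an idempotent consumer with bounded retries, not the repo's actual code; in production the dead-letter list would be a Kafka dead-letter topic:&lt;/p&gt;

```javascript
// Idempotent consumer with retry and a dead-letter fallback (pattern sketch).
const processedIds = new Set();
const deadLetters = [];

function handleWithRetry(event, process, maxAttempts = 3) {
  if (processedIds.has(event.id)) return 'skipped'; // already handled: do nothing
  for (let attempt = 1; maxAttempts >= attempt; attempt += 1) {
    try {
      process(event);
      processedIds.add(event.id); // mark done only after a successful run
      return 'processed';
    } catch (err) {
      if (attempt === maxAttempts) {
        // Out of retries: park the event instead of losing it.
        deadLetters.push({ event, error: String(err) });
        return 'dead-lettered';
      }
      // Otherwise fall through and retry.
    }
  }
}
```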

&lt;p&gt;I also learned that monorepo tooling with npm workspaces is surprisingly good. Sharing TypeScript configs and common utilities across 8 services without a build tool like Nx or Turborepo is very manageable for a project this size.&lt;/p&gt;

&lt;h2&gt;What Is Next&lt;/h2&gt;

&lt;p&gt;OAuth integration with LinkedIn and GitHub to auto-import activities&lt;br&gt;
Swagger UI at /docs for interactive API documentation&lt;br&gt;
Full test coverage with Jest and Supertest&lt;br&gt;
Puppeteer based HTML to PDF for better resume templates&lt;/p&gt;

&lt;h2&gt;Try It Yourself&lt;/h2&gt;

&lt;p&gt;Clone the repo and run the demo in 60 seconds:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;git clone https://github.com/srivilliamsai/resume-ecosystem-node
cd resume-ecosystem-node
npm install
npm run docker:up
npm run db:push
npm run seed
npm run dev&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then open &lt;a href="http://localhost:5173" rel="noopener noreferrer"&gt;http://localhost:5173&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you find this useful or have architecture feedback, a star on GitHub goes a long way. Contributions and issues are very welcome!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/srivilliamsai/resume-ecosystem-node" rel="noopener noreferrer"&gt;https://github.com/srivilliamsai/resume-ecosystem-node&lt;/a&gt;&lt;/p&gt;

</description>
      <category>node</category>
      <category>typescript</category>
      <category>kafka</category>
      <category>microservices</category>
    </item>
  </channel>
</rss>
