If you have been applying for jobs, you know how time-consuming and repetitive writing cover letters can be. On top of that, making sure each letter addresses the job requirements and effectively highlights your relevant skills takes real effort. But what if we could automate this process?
That is what inspired me to build an AI-powered cover letter generator that automates the process while following the guidelines that make cover letters effective. In this article, I will walk you through how I built this application.
Tech Stack
- Django handles the heavy lifting on the backend.
- HTMX provides smooth interactivity without the complexity of a full JavaScript framework.
- OpenAI's GPT models understand job requirements and generate tailored content.
- Vector embeddings help find patterns in successful cover letters.
- Alpine.js manages minimal client-side state.
- Tailwind CSS provides a rapid way of building user interfaces.
Application Flow and Demo
Before diving into the technical details, let's see how the flow works. The first step is to upload a resume. The application automatically extracts and processes your experience, skills, and achievements. You can upload multiple resumes and mark one as primary - a handy feature if you maintain different versions for various roles.
From there, the process is simple:
- Enter job details.
- Generate a cover letter.
- If needed, refine the letter by providing feedback (e.g., "Make it more formal" or "Emphasize my leadership experience").
- Mark cover letters as favorites so they influence future generated letters.
Resume Management
Uploading and managing resumes is central to this application. Since people often maintain different resume versions for various roles, the application allows users to upload multiple resumes and select one as their primary. Below is the Django model for handling resume uploads:
class Resume(BaseModel):
    """
    Model representing a user's resume.
    """

    user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="resumes")
    name = models.CharField(
        max_length=255,
        help_text="Indicates the name of the resume to better track multiple resumes.",
    )
    file = models.FileField(
        upload_to="resumes/",
        validators=[
            FileExtensionValidator(allowed_extensions=["pdf"]),
            validate_file_size,
        ],
        help_text="Indicates resume in PDF format. Size should not exceed 2 MB.",
    )
    extracted_content = models.TextField(
        help_text="Indicates extracted content from the resume. This is used for generating cover letters.",
    )
    is_primary = models.BooleanField(default=False, help_text="Indicates if this is the primary resume.")

    def __str__(self):
        return f"{self.name} - {self.user}"
The following ResumeService extracts the text content from a PDF using pypdf:
from __future__ import annotations

from pypdf import PdfReader


class ResumeService:
    """
    Service class for handling resume-related operations.
    """

    def extract_resume_content(self, file_path: str) -> str:
        """
        Extracts text content from a PDF file.
        """
        reader = PdfReader(file_path)
        return "\n".join(page.extract_text() for page in reader.pages if page.extract_text())


resume_service = ResumeService()
You can also extend this service to use other parsing tools like llama-parse.
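For context, here is roughly where the extraction fits into the upload flow. This is a simplified sketch - the handle_resume_upload helper is hypothetical, and file.path assumes the default local file storage:
def handle_resume_upload(user, name, uploaded_file):
    """
    Hypothetical upload handler: save the resume, then extract and store its text.
    """
    resume = Resume.objects.create(user=user, name=name, file=uploaded_file, extracted_content="")
    # `file.path` works with the default local storage backend; remote storage
    # backends would need the file read differently.
    resume.extracted_content = resume_service.extract_resume_content(resume.file.path)
    resume.save(update_fields=["extracted_content"])
    return resume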
Vector Embeddings and pgvector
The key to generating good cover letters is understanding both the job requirements and the person’s experience. This is where vector embeddings come into play.
Vector embeddings represent data as arrays of numbers (vectors) that capture semantic relationships and similarities between data points. Once data is represented this way, semantically related items end up close to each other in the vector space, and that closeness can be measured with techniques like cosine similarity.
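As a quick illustration of the idea (not code from the project), cosine similarity between two vectors can be computed in a few lines:
import numpy as np


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Returns values near 1.0 for similar directions and near 0.0 for unrelated ones."""
    a, b = np.array(a), np.array(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Toy vectors standing in for two related phrases: the score is close to 1.
print(cosine_similarity([0.2, 0.9, 0.1], [0.25, 0.85, 0.15]))  # ~0.996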
When working with embeddings, you need a way to efficiently store and query these numerical representations. While there are several approaches - from simple file storage to specialized vector databases like Pinecone or Weaviate - I wanted a solution that wouldn't complicate the infrastructure. Enter pgvector - an extension that adds vector similarity search directly to PostgreSQL. Two reasons to use pgvector in a Django project:
- There is no need for a separate vector database - everything stays in PostgreSQL.
- Simple setup with Docker and seamless integration with Django.
Setting it up is straightforward with Docker:
version: '3.8'

services:
  postgres:
    image: ankane/pgvector
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=cover_letter_generator
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
Run the service with:
docker-compose up -d
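With the container running, Django just needs to point at it. Here is a minimal sketch of the DATABASES setting, assuming the credentials from the compose file above and that Django runs on the host machine:
# settings.py - connection details matching the docker-compose environment above.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "cover_letter_generator",
        "USER": "postgres",
        "PASSWORD": "postgres",
        "HOST": "localhost",  # the container publishes port 5432 to the host
        "PORT": "5432",
    }
}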
Then install the pgvector Python package, which provides pgvector support for Python:
uv add pgvector
This package supports different database libraries, including Django and SQLAlchemy. To enable the extension in Django, we first have to create a migration:
from django.db import migrations

from pgvector.django import VectorExtension


class Migration(migrations.Migration):
    dependencies = []

    operations = [VectorExtension()]
Then run Django's migrate command and that is it:
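python manage.py migrate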
Once the extension is enabled, you can use a VectorField (from pgvector.django) in your models to store vector embeddings:
class Job(BaseModel):
    """
    Represents a job posting that a user wants to apply for.
    """

    user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="jobs")
    resume = models.ForeignKey(Resume, on_delete=models.CASCADE, related_name="jobs")
    job_title = models.CharField(max_length=255, help_text="The title of the job position.")
    job_description = models.TextField(help_text="The full job description and requirements.")
    job_embedding = VectorField(
        dimensions=1536,
        help_text="Vector embedding of the job title and description.",
    )

    def __str__(self):
        return f"{self.user}'s Job: {self.job_title}"
In the code snippet above, the job_embedding field stores vector representations of job details. But how do we generate these embeddings? While there are multiple methods - such as using Sentence Transformers or Hugging Face's Transformers library - I chose OpenAI's text-embedding-ada-002 model for this project due to its simplicity:
import logging

import backoff
from django.conf import settings
from openai import OpenAI, RateLimitError

# OpenAIServiceError is the project's own exception class, defined elsewhere.
logger = logging.getLogger(__name__)


class CoverLetterService:
    """
    Service class for handling cover letter generation.
    """

    def __init__(self):
        self.client = OpenAI(api_key=settings.OPENAI_API_KEY)
        self.embedding_model = settings.OPENAI_API_EMBEDDING_MODEL
        self.generation_model = settings.OPENAI_API_GENERATION_MODEL

    @backoff.on_exception(backoff.expo, RateLimitError)
    def _create_embedding(self, text: str) -> list[float]:
        """
        Creates embeddings with retry logic.
        """
        try:
            response = self.client.embeddings.create(
                model=self.embedding_model,
                input=text,
            )
            return response.data[0].embedding
        except Exception as e:
            logger.error("Embedding creation failed: %s", str(e))
            raise OpenAIServiceError("Failed to create embedding") from e
Generating Cover Letters with OpenAI
Once we have our embeddings and similar examples, it's time to generate the actual cover letter. I am using OpenAI's gpt-4o-mini model, but you can easily configure a different model if you wish:
class CoverLetterService:
    """
    Service class for handling cover letter generation.
    """

    def __init__(self):
        self.client = OpenAI(api_key=settings.OPENAI_API_KEY)
        self.embedding_model = settings.OPENAI_API_EMBEDDING_MODEL
        self.generation_model = settings.OPENAI_API_GENERATION_MODEL

    @backoff.on_exception(backoff.expo, RateLimitError)
    def _create_chat_completion(
        self,
        messages: list[dict[str, str]],
        temperature: float = 0.7,
    ) -> str:
        """
        Creates chat completions with retry logic.
        """
        try:
            response = self.client.chat.completions.create(
                model=self.generation_model,
                messages=messages,
                temperature=temperature,
            )
            return response.choices[0].message.content.strip()
        except Exception as e:
            logger.error("Chat completion failed: %s", str(e))
            raise OpenAIServiceError("Failed to generate content") from e

    def _find_similar_examples(self, job: Job, limit: int = 3) -> list[CoverLetter]:
        """
        Retrieves favorite cover letters that are semantically similar to a given job.
        """
        similar_letters = (
            CoverLetter.objects.filter(user=job.user, is_favorite=True)
            .annotate(similarity=CosineDistance("content_embedding", job.job_embedding))
            .filter(similarity__lte=0.5)
            .order_by("similarity")
        )
        return similar_letters[:limit] if similar_letters else []

    def generate_cover_letter(self, resume: Resume, job: Job) -> str:
        """
        Generates a cover letter using gpt-4o-mini.
        """
        similar_letters = self._find_similar_examples(job)

        examples_context = ""
        if similar_letters:
            examples_context = "\n\nReference these effective cover letters for tone and style:\n"
            for letter in similar_letters:
                examples_context += f"\n{letter.generated_cover_letter}\n"

        messages = [
            {
                "role": "system",
                "content": system_prompt.create_template(),
            },
            {
                "role": "user",
                "content": generation_prompt.create_template(
                    job_title=job.job_title,
                    job_description=job.job_description,
                    resume_content=resume.extracted_content,
                    similar_examples=examples_context,
                ),
            },
        ]

        return self._create_chat_completion(messages)


cover_letter_service = CoverLetterService()
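Putting it all together, generating a letter for a stored resume and job comes down to a single call. Here is a usage sketch, where user and job_id are placeholders for values coming from the request:
# Generate a cover letter for the user's primary resume and a saved job.
resume = Resume.objects.get(user=user, is_primary=True)
job = Job.objects.get(pk=job_id, user=user)

letter_text = cover_letter_service.generate_cover_letter(resume=resume, job=job)
print(letter_text)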
Future Improvements
While the current version gets the job done, there's always room for improvement. Some ideas for the future:
- A knowledge base of external documents and resources on industry guidelines.
- Integration with job application platforms to automatically extract job details.
Try It Yourself
The complete source code is available on GitHub. Feel free to use it directly, contribute improvements, or check the implementation in detail.