Today, we focused on implementing the AI-driven resume parsing feature. We used spaCy for named entity recognition (NER) to extract key information from resumes.
First, we installed spaCy and downloaded the English language model:
```bash
pip install spacy
python -m spacy download en_core_web_sm
```
Then, we created a new service for resume parsing:
```python
# app/services/resume_parser.py
import re

import spacy

nlp = spacy.load("en_core_web_sm")

# en_core_web_sm has no EMAIL entity label, so emails are extracted
# with a simple regex rather than through NER.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def parse_resume(text):
    doc = nlp(text)
    parsed_data = {
        "name": [],
        "email": [],
        "phone": [],
        "skills": [],
        "education": [],
        "experience": [],
    }

    for ent in doc.ents:
        if ent.label_ == "PERSON":
            parsed_data["name"].append(ent.text)
        # Add more entity extractions here

    parsed_data["email"] = EMAIL_RE.findall(text)

    # Custom skill extraction (simplified keyword match)
    skill_keywords = ["Python", "JavaScript", "Machine Learning", "Data Analysis"]
    parsed_data["skills"] = [keyword for keyword in skill_keywords
                             if keyword.lower() in text.lower()]

    return parsed_data
```
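The plain substring match for skills is fragile (it matches inside unrelated words and ignores tokenization). One likely improvement, sketched here with spaCy's `PhraseMatcher` over a blank English pipeline so no model download is needed, is case-insensitive phrase matching on token boundaries. The `extract_skills` helper and its keyword list are illustrative, not part of the project yet:

```python
import spacy
from spacy.matcher import PhraseMatcher

# Blank pipeline: tokenizer only, no trained model required.
nlp = spacy.blank("en")

# Match skill phrases case-insensitively via the LOWER token attribute.
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
skill_keywords = ["Python", "JavaScript", "Machine Learning", "Data Analysis"]
matcher.add("SKILL", [nlp.make_doc(kw) for kw in skill_keywords])

def extract_skills(text):
    doc = nlp(text)
    # Each match is (match_id, start_token, end_token); keep the span text.
    return sorted({doc[start:end].text for _, start, end in matcher(doc)})
```

Because matching happens on tokens, "Python" no longer matches inside a word like "Pythonic", and multi-word skills such as "machine learning" are found regardless of casing.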
We also created a new route to handle resume uploads:
```python
# app/routes/resume.py
from flask import Blueprint, request, jsonify

from app.services.resume_parser import parse_resume

resume_bp = Blueprint('resume', __name__)

@resume_bp.route('/upload', methods=['POST'])
def upload_resume():
    if 'file' not in request.files:
        return jsonify({"error": "No file part"}), 400
    file = request.files['file']
    if file.filename == '':
        return jsonify({"error": "No selected file"}), 400

    # Read the file content (assumes a plain-text upload for now)
    resume_text = file.read().decode('utf-8', errors='replace')

    # Parse the resume
    parsed_data = parse_resume(resume_text)
    return jsonify(parsed_data), 200
```
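The route can be sanity-checked without running a server by posting a fake upload through Flask's test client. To keep the snippet self-contained, `parse_resume` is stubbed inline below; in the project it would be imported from `app.services.resume_parser`, and the route body mirrors the one above:

```python
import io

from flask import Flask, request, jsonify

def parse_resume(text):
    # Stand-in for the real service, just enough to exercise the route.
    keywords = ["Python", "JavaScript", "Machine Learning", "Data Analysis"]
    return {"skills": [kw for kw in keywords if kw.lower() in text.lower()]}

app = Flask(__name__)

@app.route('/upload', methods=['POST'])
def upload_resume():
    if 'file' not in request.files:
        return jsonify({"error": "No file part"}), 400
    file = request.files['file']
    if file.filename == '':
        return jsonify({"error": "No selected file"}), 400
    resume_text = file.read().decode('utf-8', errors='replace')
    return jsonify(parse_resume(resume_text)), 200

# Simulate a multipart upload; the test client builds the request for us.
client = app.test_client()
resp = client.post('/upload',
                   data={'file': (io.BytesIO(b"I know Python."), 'resume.txt')})
```

A missing or unnamed file part comes back as a 400 with an error payload, and a valid text upload returns the parsed JSON with status 200.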
This is a basic implementation and will need further refinement, but it's a good starting point. Tomorrow, we'll work on improving the parsing accuracy and handling different resume formats.