Micheal Angelo
Automating LeetCode Documentation with a Local LLM + GitHub Workflow

DEV Weekend Challenge: Community

This is a submission for the DEV Weekend Challenge: Community


The Community

This project is built for developers who actively solve LeetCode problems and share their learning publicly. Many developers aim to stay consistent, build in public, and grow within the coding community — but the documentation process often becomes repetitive and time-consuming.

I built this tool to reduce friction in that workflow and help developers focus more on problem-solving rather than manual documentation tasks.


What I Built

I built LeetCode AutoSync, a CLI automation tool that:

  • Adds LeetCode solutions locally with structured metadata
  • Automatically updates and sorts the README by difficulty
  • Generates high-quality structured solution write-ups using a locally hosted LLM (via Ollama)
  • Logs token usage and performance metrics into an Excel sheet
  • Supports multiple programming languages (Python, SQL, C++, Java)
  • Uses a background worker queue to generate AI content asynchronously
  • Gracefully shuts down and manages model lifecycle
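One of the features above is keeping the README sorted by difficulty. A minimal sketch of that idea, assuming entries are tracked as (title, difficulty, link) tuples — names and table layout here are illustrative, not the actual `repo_manager.py` implementation:

```python
# Hypothetical sketch of the README sorting step: solution entries are
# re-sorted by difficulty (Easy < Medium < Hard), then by title, and
# re-emitted as markdown table rows.
DIFFICULTY_ORDER = {"Easy": 0, "Medium": 1, "Hard": 2}

def sort_entries(entries):
    """Sort (title, difficulty, link) tuples by difficulty, then title."""
    return sorted(entries, key=lambda e: (DIFFICULTY_ORDER[e[1]], e[0]))

entries = [
    ("LRU Cache", "Medium", "solutions/lru_cache.py"),
    ("Median of Two Sorted Arrays", "Hard", "solutions/median.py"),
    ("Two Sum", "Easy", "solutions/two_sum.py"),
    ("Valid Parentheses", "Easy", "solutions/valid_parentheses.py"),
]

for title, difficulty, path in sort_entries(entries):
    print(f"| [{title}]({path}) | {difficulty} |")
```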

Instead of manually:

  • Writing the same markdown sections
  • Updating README entries
  • Formatting code blocks
  • Copying structured explanations

This tool automates the entire process.


Demo

Video link:
👉 https://www.loom.com/share/589028a173444af191f4788ff7f25a42

GitHub Repository:

👉 https://github.com/micheal000010000-hub/LEETCODE-AUTOSYNC

The tool runs locally as a CLI and is built on:

  • Python threading + queue system
  • Ollama (local LLM inference)
  • GitHub workflow
  • Structured Markdown generation
  • Token analytics logging

Code

You can view the full source code here:

👉 https://github.com/micheal000010000-hub/LEETCODE-AUTOSYNC

The architecture includes:

  • autosync.py — CLI controller and queue manager
  • repo_manager.py — file handling and README sorting logic
  • llm_generator.py — LLM integration and token telemetry logging
  • git_manager.py — Git automation
  • config.py — environment configuration

How I Built It

🧠 Architecture

I designed the tool using a producer-consumer model:

  • The main CLI thread enqueues solution-generation tasks
  • A background worker processes them using a queue
  • Thread-safe state tracking is handled using locks
  • Notifications are buffered to prevent console interference
  • Graceful shutdown ensures no tasks are lost
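The shape of that producer-consumer loop can be sketched as follows. This is a stripped-down illustration, not the actual `autosync.py` (which adds locks, buffered notifications, and the LLM calls); it shows the queue plus a sentinel-based graceful shutdown:

```python
# Minimal producer-consumer sketch: the main thread enqueues tasks,
# a background worker drains them, and a sentinel value signals shutdown
# so no queued task is ever lost.
import queue
import threading

tasks = queue.Queue()
results = []
SENTINEL = None  # enqueued last so the worker exits cleanly

def worker():
    while True:
        task = tasks.get()
        if task is SENTINEL:
            tasks.task_done()
            break
        # Stand-in for the real LLM generation step.
        results.append(f"generated write-up for {task}")
        tasks.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

# Main CLI thread enqueues work without blocking on generation.
for problem in ("two-sum", "lru-cache"):
    tasks.put(problem)

# Graceful shutdown: signal the worker, then wait for the queue to drain.
tasks.put(SENTINEL)
tasks.join()
t.join()
```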

⚙️ Technologies Used

  • Python
  • Threading + Queue
  • Ollama (Local LLM inference)
  • Mistral model
  • Requests (HTTP calls)
  • OpenPyXL (Excel logging)
  • Git automation via subprocess
  • Structured Markdown generation
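The "Git automation via subprocess" item can be sketched like this. The function names are hypothetical (the actual `git_manager.py` may differ); building the command lists separately from running them keeps the logic testable:

```python
# Illustrative sketch of git automation via subprocess: stage a new
# solution file, commit it, and optionally push.
import subprocess

def git_commands(path: str, message: str, push: bool = False):
    """Build the git invocations for syncing one solution."""
    cmds = [
        ["git", "add", path],
        ["git", "commit", "-m", message],
    ]
    if push:
        cmds.append(["git", "push"])
    return cmds

def sync(path: str, message: str, push: bool = False):
    """Run each git command, raising if any step fails."""
    for cmd in git_commands(path, message, push):
        subprocess.run(cmd, check=True)
```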

🚀 LLM Integration

Instead of using a cloud API, I used a locally hosted LLM via Ollama.

This allowed me to:

  • Avoid API rate limits
  • Control resource usage
  • Capture inference telemetry
  • Log prompt tokens, response tokens, and generation latency
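A minimal sketch of that local call, assuming Ollama is serving on its default port with the mistral model pulled. The project itself uses the Requests library; this sketch sticks to the standard library so it runs with no dependencies. The telemetry fields (`prompt_eval_count`, `eval_count`, `load_duration`, `eval_duration`) come from Ollama's non-streaming `/api/generate` response, which reports durations in nanoseconds:

```python
# Non-streaming Ollama call plus telemetry extraction.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def parse_telemetry(data: dict) -> dict:
    """Pull token counts and timings out of a non-streaming response."""
    gen_ns = data.get("eval_duration", 0)
    tokens = data.get("eval_count", 0)
    return {
        "prompt_tokens": data.get("prompt_eval_count", 0),
        "response_tokens": tokens,
        "total_tokens": data.get("prompt_eval_count", 0) + tokens,
        "load_ms": data.get("load_duration", 0) / 1e6,  # ns -> ms
        "gen_ms": gen_ns / 1e6,
        "tokens_per_sec": tokens / (gen_ns / 1e9) if gen_ns else 0.0,
    }

def generate(prompt: str, model: str = "mistral"):
    """Send one prompt to the local model and return (text, telemetry)."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        data = json.load(resp)
    return data["response"], parse_telemetry(data)
```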

📊 Observability

Each generation logs:

  • Prompt tokens
  • Response tokens
  • Total tokens
  • Load duration
  • Generation duration
  • Tokens per second

This turns the tool into a small LLM observability layer.
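Appending those metrics to an Excel sheet with OpenPyXL can be sketched as below. Column names and sheet layout are illustrative assumptions, not the actual `llm_generator.py` schema:

```python
# Per-generation metrics logged as one row per LLM call via OpenPyXL.
from openpyxl import Workbook

COLUMNS = ["problem", "prompt_tokens", "response_tokens",
           "total_tokens", "load_ms", "gen_ms", "tokens_per_sec"]

def new_log() -> Workbook:
    """Create a workbook with a header row for telemetry."""
    wb = Workbook()
    ws = wb.active
    ws.title = "LLM Telemetry"
    ws.append(COLUMNS)
    return wb

def log_generation(wb: Workbook, problem: str, telemetry: dict) -> None:
    """Append one generation's metrics as a row."""
    wb.active.append([problem] + [telemetry[c] for c in COLUMNS[1:]])

wb = new_log()
log_generation(wb, "two-sum", {
    "prompt_tokens": 512, "response_tokens": 384, "total_tokens": 896,
    "load_ms": 120.5, "gen_ms": 9800.0, "tokens_per_sec": 39.2,
})
wb.save("llm_telemetry.xlsx")
```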


What I Learned

Building this project helped me deeply understand:

  • Designing concurrent CLI applications
  • Avoiding blocking operations with background workers
  • Managing shared state safely with locks
  • Graceful shutdown patterns
  • Model lifecycle management
  • Performance telemetry for LLM inference
  • Designing tools that solve real developer workflow problems

More importantly, I learned how small workflow automation can significantly improve consistency when building in public.


Why This Matters

Consistency is everything in community-driven growth.

By automating documentation, formatting, and GitHub syncing, this tool removes friction from the process of sharing knowledge.

Less friction → More sharing → Stronger developer community.


Thanks for reading! 🚀
