Building fast with AI is easy, but building something you can actually run, inspect, and trust is harder. This project is my attempt to teach builders core engineering intuitions while they build real projects.
This is the second and closing post in my mini build-in-public series around a coding course generator. The goal was not to polish a hosted SaaS. It was to validate the concept quickly, release the code, and see whether a local-first learning product built with AI assistance could help others.
Coding setup
- VS Code
- ChatGPT 5.4 for ideation and planning
- Codex 5.3 for scaffolding and broader architectural changes
- Codex 5.3 Spark for faster live coding and pair-programming-style iteration
Because this was a side experiment and I wanted fast feedback, I used ChatGPT and Codex heavily throughout the build. That definitely helped me move faster. It also made one thing very clear: AI can accelerate implementation, but it does not remove the need for architectural steering, debugging discipline, and careful review.
I stopped thinking about this as "AI writes the code" fairly quickly. The better framing for me was: I am still the engineer, but I now have a very fast collaborator that needs clear direction.
What I learned building with my AI assistant
A few practices made the biggest difference for me:
- Spend time in planning mode before implementation. Longer planning conversations up front saved me from a lot of drift later.
- Be explicit about architecture. If I did not define boundaries early, the assistant would fill in the blanks in ways I did not actually want.
- State the expected end result clearly. Prompts worked much better when I described success criteria, constraints, and what “done” should look like.
- Add observability early. Admin pages, job logs, and debug surfaces helped a lot once the system had multiple moving parts.
- Debug collaboratively, not passively. The best prompts were the ones where I pasted the full error, the relevant code path, and the current hypothesis.
- Work in smaller slices. Smaller prompts with checkpoints were much easier to validate than one large mega-prompt.
- Test between slices. Typechecks, linting, and automated tests were essential to keep momentum without accumulating silent breakage.
- Ask for pre-mortems and final audits. Some of the most useful prompts were “what am I missing?” and “review this repo for architectural risk.”
For Codex specifically, a few habits helped a lot: use plan mode for architecture, keep a reusable prompt contract, isolate bigger changes in branches or worktrees when useful, and run a final repo-wide audit once the main implementation is in place.
One other thing I noticed: model choice matters less than matching the model and reasoning depth to the task. When I wanted speed, I went lighter. When I was stuck on architecture or a harder bug, I switched to a deeper pass instead of trying to force a fast model to solve everything.
For example, here is a high-level comparison of Codex 5.3 and Codex 5.3 Spark, the two models I switched between:

This is the sequence of prompts I ran with Codex. Notice how they are split into smaller building blocks to maintain frequent checkpoints and control while building the project:
Thoughts on the project itself
The hardest part of this project was not generating code. It was deciding how to structure the learning experience for a beginner.
I wanted AI to be optional, not the foundation of the product. That mattered a lot to me. The default experience should still work without any AI setup, and the product should still teach healthy habits rather than turn into a black box.
That pushed me toward a local-first design. The app is meant to run on your machine. It lists guided project-based courses, scaffolds a starter project into your own GitHub repository, and tracks progression through pull requests and validation checks. If you configure an OpenAI or Anthropic key, it can also post optional advisory AI reviews on PRs.
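To make the progression-tracking idea concrete, here is a minimal sketch of how a milestone could be marked complete from a pull request's validation check results. The types and function names are illustrative, not the repo's actual API:

```typescript
// Hypothetical shapes; the real repo's types may differ.
type CheckResult = { name: string; passed: boolean };

interface MilestoneStatus {
  milestoneId: string;
  complete: boolean;
  failedChecks: string[];
}

// A milestone is complete only when every required validation check
// attached to the learner's pull request has passed.
function evaluateMilestone(
  milestoneId: string,
  requiredChecks: string[],
  results: CheckResult[]
): MilestoneStatus {
  const failedChecks = requiredChecks.filter(
    (name) => !results.some((r) => r.name === name && r.passed)
  );
  return { milestoneId, complete: failedChecks.length === 0, failedChecks };
}
```

The point of the shape is that completion is decided purely by check results, which keeps the learning loop inspectable.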
I also experimented with skill-focused course generation around intents like debugging and error_handling. That idea is still promising to me, but I want to be honest about its current state: the custom course pipeline is still experimental, and the UI is currently marked as temporarily disabled while I harden it.
The deeper challenge here is educational design. How do you teach fundamentals in a way that is practical, motivating, and not overly abstract? The repo reflects that tension quite a bit. The course structure includes step-zero onboarding, glossary terms, review checklists, concept primers, reflection prompts, and milestone validation. The goal is not just “finish tasks,” but build better engineering intuition along the way.
Tech stack
- TypeScript
- Next.js 15
- React 19
- Node.js 22
- pnpm monorepo
- Prisma
- PostgreSQL
- a local background worker using tsx
- GitHub API integration with outbound sync and optional webhook ingestion
- OpenAI / Anthropic integrations for optional AI features
- Vitest, ESLint, and Prettier
Architecture decisions I made early
A few design decisions shaped the project quite a bit.
Implemented today:
- Local-first web app plus background worker
- Queue-based job processing with retries
- Optional AI layer on top of deterministic milestone validation and progression
- Structured logging for jobs, events, and LLM activity
- Admin pages for monitoring job and LLM behavior
- An LLM gateway with token limits, request hashing, cache support, and secret scanning on diff input
- AI PR review based on trimmed diff context rather than dumping the full repo into the model
- Template and course versioning
- Course-plan validation against a concept dependency graph, so the learning progression stays more coherent
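The queue-based job processing with retries mentioned above can be sketched roughly like this. The names are hypothetical, not the repo's actual worker API:

```typescript
// Minimal sketch of a job attempt with a bounded retry budget.
type Job = { id: string; attempts: number; maxAttempts: number };

type JobOutcome = "done" | "retry" | "failed";

async function processJob(
  job: Job,
  handler: (job: Job) => Promise<void>
): Promise<JobOutcome> {
  try {
    await handler(job);
    return "done";
  } catch {
    job.attempts += 1;
    // Retry until the attempt budget is exhausted, then mark the job
    // failed so admin pages can surface it instead of failing silently.
    return job.attempts < job.maxAttempts ? "retry" : "failed";
  }
}
```

A real worker would also persist the outcome and apply backoff between retries, but the core decision is just this small.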
One detail I particularly like is that milestone progression is deterministic. AI can help as an advisory layer, but it does not decide the underlying progression state. That felt like the right tradeoff for a learning product.
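In code terms, that tradeoff can be expressed as: progression is a pure function of deterministic validation results, and the AI review is only attached as advice. A sketch with illustrative types, not the repo's real ones:

```typescript
type Validation = { milestone: number; passed: boolean };
type AiReview = { summary: string } | null;

interface Progression {
  currentMilestone: number; // first milestone not yet passed
  advisory: AiReview;       // shown to the learner, never gates progress
}

// Derive progression purely from validation results; the optional AI
// review is carried along but has no influence on the state itself.
function deriveProgression(validations: Validation[], review: AiReview): Progression {
  const sorted = [...validations].sort((a, b) => a.milestone - b.milestone);
  let current = 0;
  for (const v of sorted) {
    if (v.milestone === current && v.passed) current += 1;
    else break;
  }
  return { currentMilestone: current, advisory: review };
}
```

Because the AI input cannot change the returned milestone, a flaky or misconfigured model can never corrupt a learner's progress.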
Another useful decision was adding structured logs and worker heartbeats early. Once you have a web app, a worker, GitHub sync, and optional AI behavior, silent failure becomes expensive. I wanted the system to be inspectable.
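A minimal sketch of what structured logging plus a worker heartbeat can look like. The field names here are my own, not the repo's:

```typescript
// Emit one JSON line per event so logs stay machine-filterable
// by scope and event name.
function logEvent(
  scope: string,
  event: string,
  fields: Record<string, unknown> = {}
): string {
  const line = JSON.stringify({
    ts: new Date().toISOString(),
    scope,
    event,
    ...fields,
  });
  console.log(line);
  return line;
}

// Emit a heartbeat on an interval; a monitor that stops seeing these
// knows the worker is stuck or dead rather than merely idle.
function startHeartbeat(workerId: string, intervalMs = 15_000) {
  return setInterval(() => logEvent("worker", "heartbeat", { workerId }), intervalMs);
}
```

The payoff is that "the worker died quietly an hour ago" becomes a visible gap in the heartbeat stream instead of a mystery.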
Still future work:
A few ideas are still more directional than finished:
- better cost controls as usage scales
- a stronger and more reliable custom-course generation pipeline
- better reuse and sharing of generated courses
- richer reflection and learning artifacts around the core course flow
So while I am happy with the foundation, I do not want to oversell the current state. The core local-first flow is the real shipped value today.
Open-source release
The repo is now public and MIT-licensed:
JulienAvezou/ai-course-generator: Build custom code fundamentals courses with GitHub and AI-integrated workflows
Coding Course Generator
Local-first, open-source software for learning healthy coding fundamentals in the age of AI.
This project is a build-in-public social experiment. The goal is not to ship a hosted product. The goal is to help learners run the app locally, scaffold a real project into their own GitHub repository, and learn fundamentals, debugging habits, and engineering discipline through guided milestones.
Open Source Status
- The source code in this repository is licensed under MIT.
- The repo is intentionally private at the package-manager level in package.json to prevent accidental npm publishing.
- This repository is publicly visible on GitHub and is intended to be cloned and run locally.
- Treat the codebase, docs, screenshots, and git history as public-facing project material.
Mission
- Teach programming fundamentals through real projects
- Teach professional engineering habits, not just syntax
- Help AI-assisted builders develop better judgment, debugging ability, and maintainable code instincts
- Keep the whole system…
The intended usage is simple: clone it, run it locally, inspect everything, and fork it if you want to push the idea further. That is also why I kept AI optional. You do not need AI features to use the core product.
What I am releasing today is not a polished production platform. It is a local-first open-source project with a real architecture, a clear learning idea, and a working core flow. The custom course generation pipeline is still too experimental; I will try to ship a working version when I have some extra time to dedicate to this project.
Screenshots
Closing thoughts
This project was a useful reminder that AI can accelerate building, but it does not replace product judgment or engineering clarity. If anything, it makes those qualities more important.
If you want to try it, fork it, or build on top of it, I would love to see what you do with it. I am especially interested in hearing what you would change in the learning flow, the validation model, or the AI workflow.
How would you design a beginner coding product today: should AI be a guide, a reviewer, or mostly stay out of the way until the learner gets stuck?
As a beginner, what resources are you using today to learn how to code?
For experienced coders, what resources would you recommend to beginners learning to code today?
Top comments (2)
Wow, that's impressive Julien! Releasing a whole coding course generator as open-source takes a lot of guts. I love that you're focusing on core engineering intuitions, that's exactly the kind of foundational knowledge we need to build on. Observability is so important when working with AI, it's great to see you emphasizing that.
Thanks for the support Aryan! The hardest part of this project was creating building blocks that felt intuitive for someone with zero prior coding experience.