A lot of people ask me: “What exactly will I learn in your 4-month program?”
So here’s a clear, honest breakdown of what you’ll walk away with after 16 weeks:
🔹 Weeks 1–4: Building & Running LLM Applications
You begin by learning how LLM-powered systems are built and executed.
You’ll work on:
✔️ LangChain – building LLM applications using prompts, chains, memory, and tools
✔️ Hugging Face – models, tokenizers, datasets, and inference APIs
✔️ Prompt & Context Engineering – structuring prompts, managing context windows, and reducing hallucinations
✔️ Ollama & vLLM – running models locally and serving them efficiently in production-like setups
👉 Focus: Using models correctly and understanding how they run under the hood.
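To give a flavor of the context-engineering side of this phase, here is a minimal sketch of context-window management: trimming conversation history to fit a token budget. The whitespace-based token count and the `trim_history` helper are illustrative assumptions; in the program you'd use the model's real tokenizer.

```python
def trim_history(messages, max_tokens, count_tokens=lambda m: len(m.split())):
    """Keep the most recent messages that fit within a token budget.

    Toy stand-in for context-window management: counting tokens by
    whitespace splitting here; production code would use the model's
    actual tokenizer.
    """
    kept, used = [], 0
    for msg in reversed(messages):      # walk newest-first
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break                       # budget exhausted: drop older messages
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order

history = ["hello there", "how are you today", "fine thanks and you"]
print(trim_history(history, max_tokens=8))
# → ['how are you today', 'fine thanks and you']
```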
🔹 Weeks 5–8: RAG, Fine-Tuning, Evaluation & Optimization
Once you can run models, you move into making them useful, reliable, and efficient.
You’ll learn:
✔️ Retrieval-Augmented Generation (RAG) – grounding LLMs with external knowledge
✔️ Fine-Tuning Language Models – when to fine-tune vs when not to
✔️ Model Evaluation – structured test cases and scoring strategies
✔️ MCP & Quantization – connecting models to external tools and data via the Model Context Protocol, and shrinking models to reduce memory usage and improve performance
👉 Focus: Turning LLMs into production-ready systems.
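The core RAG idea from this phase fits in a few lines: retrieve the most relevant document, then ground the prompt in it. This sketch uses toy bag-of-words cosine similarity as the retriever; a real pipeline would use dense embeddings and a vector store.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = [
    "Paris is the capital of France",
    "The Transformer architecture relies on attention",
]
context = retrieve("what is the capital of France", docs)
# Ground the LLM call in the retrieved context instead of its parametric memory:
prompt = f"Answer using only this context: {context}\n\nQuestion: what is the capital of France"
```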
🔹 Weeks 9–12: AI Agents & Orchestration
This phase moves beyond single LLM calls into agentic systems.
You’ll work on:
✔️ AI Agents with LangChain – planning, tool usage, and memory
✔️ n8n & CrewAI – workflow automation and multi-agent collaboration
✔️ LangGraph & LlamaIndex – graph-based and index-driven agent workflows
✔️ SmolAgents & Agent Evaluation – lightweight agents and reliability evaluation
👉 Focus: Designing systems that think, plan, and act.
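The agentic loop this phase builds on can be sketched in miniature: a planner decides which tool to call, the tool runs, and the observation becomes the answer. The keyword-based `fake_llm` planner and the two tools are hypothetical stand-ins; frameworks like LangChain and CrewAI let an actual LLM do the planning.

```python
# Minimal tool-using agent loop. The "planner" is a stub that picks a
# tool by inspecting the task; a real agent lets the LLM choose the tool
# and fill in its arguments.
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # toy only: never eval untrusted input
    "echo": lambda text: text,
}

def fake_llm(task: str) -> tuple[str, str]:
    """Stand-in planner: returns (tool_name, tool_input)."""
    if any(ch.isdigit() for ch in task):
        return "calculator", task
    return "echo", task

def run_agent(task: str) -> str:
    tool, arg = fake_llm(task)         # 1. plan: decide which tool to call
    observation = TOOLS[tool](arg)     # 2. act: execute the tool
    return f"[{tool}] {observation}"   # 3. respond with the observation

print(run_agent("2 + 3 * 4"))  # → [calculator] 14
```

Real agent frameworks repeat this plan/act/observe cycle until the model decides the task is done, which is exactly the loop evaluated in the agent-reliability work of this phase.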
🔹 Weeks 13–16: LLM Internals & Building from Scratch
The final phase is where everything comes together.
You’ll learn:
✔️ PyTorch & Neural Networks – tensors, forward/backward passes, training loops
✔️ Tokenizers & Positional Encoding – how raw text becomes model input
✔️ Attention Mechanisms & KV Cache – how modern LLMs optimize inference
✔️ Building a Small Language Model (SLM) from Scratch – architecture, training workflow, and evaluation
👉 Focus: Understanding LLMs deeply by building one yourself.
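As a taste of the internals covered here, this is scaled dot-product attention, the core computation inside every Transformer block, written on plain Python lists so the math is visible. Real implementations batch this with PyTorch tensors (and a KV cache reuses the K and V rows across decoding steps), but the formula is the same.

```python
from math import exp, sqrt

def softmax(xs: list[float]) -> list[float]:
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q: list[list[float]],
              K: list[list[float]],
              V: list[list[float]]) -> list[list[float]]:
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V.

    Each query scores every key, the scores become weights via softmax,
    and the weights mix the value vectors into one output per query.
    """
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# A query aligned with the first key attends almost entirely to the first value:
print(attention([[10.0, 0.0]], [[10.0, 0.0], [0.0, 10.0]], [[1.0, 0.0], [0.0, 1.0]]))
```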
🎯 What You Walk Away With
After 16 weeks, you don't just use GenAI tools: you understand, build, optimize, and evaluate them.
This program is designed for:
✔️ Software Engineers
✔️ DevOps / Platform Engineers
✔️ Engineers transitioning into GenAI
If 2026 is the year you move from using GenAI → engineering GenAI, this journey was designed for you.
🎓 From Software & DevOps Engineer → Generative AI Engineer
📚 50% off my book: Building a Small Language Model from Scratch
