I've spent the past 13 years in software engineering—and, call me a masochist, but my favorite part is still running face-first into unfamiliar tech and coming out the other side. If you're a jack-of-all-trades like me, you know the drill: you can dance through AWS, sprinkle some YAML, chat about machine learning infra, and even nerd out about k8s clusters, but you rarely wake up feeling like "The One" for any particular topic. You know, the person the buck stops with.
To get there, you’d have to live and breathe the material. Sweat the APIs. Debug the dark corners. Ship the damn thing yourself. But here’s the secret: you can't fake expertise. You need a foundation built on honest effort. For me, books and courses have always been the gateway.
But here’s the rub: it’s too easy to fool yourself into thinking you’ve “mastered” a book just by reading it. Absorbing core technical material is a multi-step grind — you’ll need to let your brain chew on the information, test yourself, apply it, and loop back for more. Passive reading? It’s like a cheat day for your neurons.
Recently, I picked up *Architecting Data and Machine Learning Platforms*. I wanted to inhale the knowledge—not just skim it. So, I dusted off three tried-and-true tactics:
1. Read a section, highlight mercilessly, and jot down the essentials for rapid review.
2. Summarize those highlights at chapter's end—like your own private CliffsNotes.
3. Find a study buddy who's willing to nerd out, challenge my takes, and call out the chapters where the author gets delightfully vague.
But this time, I upped the ante and recruited a new tutor: a large language model (LLM).
Here’s where it gets spicy:
Imagine you’re enrolled in a graduate seminar. You don’t just read—the professor drills you on concepts, throws pop quizzes, and points out where you sound like a hallucinating AI. That's what I wanted. Why not let an actual LLM do the heavy lifting on the quizzes, feedback, and meta-level nitpicking?
So, I gave it a shot. After each chapter, I fed my best notes to two different AI windows (let's call it "double-barreled learning"): one ChatGPT session for speed, one for depth (shoutout to o3 pro deep research mode). Yes, I ponied up $200 for early access—because if you're not burning money on AI subscriptions, do you really love learning?
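If you'd rather script this than paste into a chat window, here's a minimal sketch of the "quiz me on my notes" step. It assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in your environment; the model name, the notes path, and the prompt wording are all my stand-ins, not a prescription.

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_quiz(notes: str, n: int = 5) -> str:
    """Ask for n quiz questions grounded ONLY in the supplied notes."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in; any recent chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a strict graduate-seminar tutor. Using ONLY the "
                    f"notes provided, write {n} quiz questions, one per line. "
                    "Mix recall, multiple choice, and one scenario question. "
                    "Never introduce facts that are not in the notes."
                ),
            },
            {"role": "user", "content": notes},
        ],
    )
    return resp.choices[0].message.content

notes = Path("notes/chapter_03.md").read_text()  # hypothetical notes file
print(generate_quiz(notes))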
The results? Ridiculously useful quizzes:
- Varied formats.
- Deep recall questions.
- Surprising nuance.
- Instant feedback without that "please see me after class" embarrassment.
Best of all, it felt low-stakes. Getting something wrong was honestly great, since the LLM would break down exactly where I missed the mark.
(I know, I know: “LLMs hallucinate! You’re just getting tricked, Masud.” Go ahead, be skeptical—I dropped the quizzes and notes for you to judge below.)
Why did this work? The secret: grounding. By feeding it only my curated chapter notes, the LLM couldn't stray into mainframe poetry or invent quantum acronyms. Want less BS? Feed it better data.
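To make the grounding concrete, here's a sketch of the grading step under the same assumptions as the snippet above. The `grade` helper is my own hypothetical name, and the key move is that the notes travel with every request, so the model has something to quote instead of something to invent.

```python
from openai import OpenAI

client = OpenAI()  # same setup as the sketch above

def grade(question: str, answer: str, notes: str) -> str:
    """Grade an answer strictly against the notes, quoting the source."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # same stand-in model as above
        messages=[
            {
                "role": "system",
                "content": (
                    "Grade the student's answer using ONLY the notes. Say "
                    "what was right, what was missed, and quote the passage "
                    "from the notes that settles it. If the notes don't "
                    "cover the question, say so instead of guessing."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"NOTES:\n{notes}\n\n"
                    f"QUESTION:\n{question}\n\n"
                    f"ANSWER:\n{answer}"
                ),
            },
        ],
    )
    return resp.choices[0].message.content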
I ran this experiment across three platforms:
- GPT-4.5 and o3 pro deep research (early days).
- Later, Perplexity Pro (because I'm a completionist and apparently love paying those premium AI rates).

Sometimes I even overlapped subscriptions to see which AI UI made me sweat harder on the quizzes.
Spoiler: ground the prompts in real notes and any leading LLM performs great. Minor differences, maybe deeper question angles, but the fundamentals are solid.
Pro tip: do this with any recent model—ChatGPT, GPT-5, Claude Opus/Sonnet—the tool matters less than your process.
Give it a try!
If you're prepping for a job interview, building your knowledge for a killer work project, or just trying to pass as "The One" in your Slack channel, this workflow is gold.
Feed your notes. Build your own quizzes. Let AI drill you until the material is second nature.
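To tie it together, a toy drill loop might look like the following. It assumes the `generate_quiz` and `grade` helpers from the sketches above plus the same `notes` string; the parsing is naive (it trusts the model to return one question per line, as prompted), so treat it as a starting point rather than a finished tool.

```python
# Toy drill loop: quiz yourself in the terminal, one question at a time.
for question in generate_quiz(notes).splitlines():
    if not question.strip():
        continue  # skip blank lines between questions
    answer = input(f"\n{question}\n> ")
    print(grade(question, answer, notes))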
If you try this, drop a comment—I'd love to hear how it goes, what methods you invent, and which questions stump you most.
Ready to see the raw experiment?
Notes and Quizzes: GitHub Repository
Don’t let anyone tell you AI can’t teach you something new—they’re just hallucinating.