Let me be honest: most people do not have an intelligence problem, they have a learning problem.
They still try to learn by buying 90-hour courses, reading 300-page books, or binge-watching YouTube videos like it's 2015.
Sure, it feels productive at first, but a few days in, the brain gets overloaded, attention drops, and almost nothing sticks. You feel busy, maybe even smart, but a week later you struggle to explain what you learned.
That is not learning, that is information hoarding.
This is why most people fail to learn AI, coding, finance, or any so-called complex skill. Not because the subject is hard, but because the process is broken.
Random blog posts, long videos watched end-to-end, bookmarks you never open again.
I was doing the exact same thing while working with clients: researching topics, learning AI concepts to create a digital product, and building AI workflows and systems to write posts like this.
And that's when it clicked for me: I was just consuming information and spending too much time overloading my brain, which was hurting my output.
But real learning only happens when you compress, connect, and test ideas.
That realization led me to build a simple system using Perplexity and NotebookLM that now works like a thinking partner.
Below is the exact workflow I use to learn faster, go deeper, and actually remember what I learn.
Note: This post was originally published in my newsletter, AI Made Simple. It's basically where I document what actually works for me with AI, in real workflows.
Step 1: Stop searching randomly. Force AI to curate for you.
Think about it: our default search process is completely broken.
We open Google or YouTube, type in a topic, and click the top ranked posts or videos.
Then what happens? You watch a couple of videos and read a few posts. The process quickly becomes exhausting, and you end up going through at least three or four different pieces of content.
Even after all that, you often still do not learn what you were actually looking for.
So instead of opening ten tabs and guessing which source is worth my time, I force Perplexity to filter the noise for me and suggest the best resources upfront.
I also avoid vague prompts like "Explain X" or "What is Y". I use curation-first prompts instead.
For example:
I want to learn [topic] from scratch to an intermediate level.
Give me:
- The 5 to 7 best learning resources, including blogs, videos, or papers
- Only practical and example driven sources
- A short reason why each resource is worth reading
Avoid generic beginner content and theory heavy material
This single prompt saves hours and gives me exactly what I need.
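If you want to script this step instead of pasting the prompt by hand, here's a minimal Python sketch. The template is the exact prompt above; the API call is optional and based on Perplexity's public chat-completions endpoint and `sonar` model (check their docs for current names), and the `PPLX_API_KEY` environment variable is just my own convention, not something the post depends on.

```python
import json
import os
import urllib.request

# The curation-first template from above, with a slot for the topic.
CURATION_TEMPLATE = (
    "I want to learn {topic} from scratch to an intermediate level.\n"
    "Give me:\n"
    "- The 5 to 7 best learning resources, including blogs, videos, or papers\n"
    "- Only practical and example driven sources\n"
    "- A short reason why each resource is worth reading\n"
    "Avoid generic beginner content and theory heavy material"
)

def curation_prompt(topic: str) -> str:
    """Fill the curation-first template for any topic."""
    return CURATION_TEMPLATE.format(topic=topic)

if __name__ == "__main__":
    prompt = curation_prompt("vector databases")
    print(prompt)

    # Optional: send the prompt through Perplexity's API.
    # Endpoint and model name are taken from Perplexity's public docs;
    # guarded behind an env var so the script also runs offline.
    if os.environ.get("PPLX_API_KEY"):
        req = urllib.request.Request(
            "https://api.perplexity.ai/chat/completions",
            data=json.dumps({
                "model": "sonar",
                "messages": [{"role": "user", "content": prompt}],
            }).encode(),
            headers={
                "Authorization": f"Bearer {os.environ['PPLX_API_KEY']}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
```

Swap the topic string for whatever you're learning; the point is that the curation instructions stay fixed while only the topic changes.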
You see, now I am not trying to learn everything. I am learning from the right inputs.
And that matters because bad inputs lead to bad understanding, no matter how smart the model is.
Just so you know:
Everything I've shared here is something I actually use.
If this post changed how you use AI even a little, that didn't happen in isolation. It came from a much bigger shift in how I use AI overall.
That's why I packed that entire system into "The (Unfair) AI Workflow Playbook" with everything you need.
It's the exact set of workflows I use daily to run my work faster than feels normal, and if you apply even a few of them, you'll save hundreds of hours.
You can spend months figuring this out on your own, or you can steal my entire playbook right now.
Step 2: Convert scattered links into one private knowledge base
Here's where most people still mess up.
They read the resources recommended by Perplexity one by one and hope their brain absorbs everything and magically connects the dots.
It does not work that way.
Instead, I dump all those links, PDFs, or transcripts into NotebookLM and treat them as a single knowledge source.
Now I am no longer asking questions to the internet. I am asking questions to my own curated dataset.
And that changes everything.
When I ask:
- Summarize the core ideas
- What concepts repeat across multiple sources?
- What assumptions do these authors agree on?
I get pattern level answers that connect insights across all the sources, not surface level explanations from a single one.
This is how you move from "I have read about this" to "I actually understand this".
Step 3: Ask questions that force compression and reverse-engineering
To be honest, whenever I read something, my brain fills up with questions.
Most people then read the entire post again or search on Google to resolve them.
That is the opposite of what you want when learning.
I intentionally ask questions in NotebookLM that shrink information instead of expanding it.
Here are a few prompts I use almost daily:
- If I had to explain this concept to a smart 12-year-old, what would I say?
- What are the three mistakes beginners make with this topic?
- What would break if I misunderstood this one idea?
- What is the simplest mental model that explains most of this?
These prompts force the model to compress complexity into clarity.
And here is the key insight most people miss: "If you cannot compress it, you do not understand it".
Step 4: Convert learning into an output immediately
This is the step that multiplies retention.
I never end a learning session without creating something:
- a short audio or video explanation
- a mind-map or framework
- a report
- a visual outline
- or even a rough tweet sized insight
Not for social media, but for my own clarity.
The best part? Inside NotebookLM, I can create audio overviews, video overviews, mind maps, quizzes, infographics, slide decks, and more with ease.
That is what I use most often to reinforce concepts.
Even while jogging or exercising, I listen to the audio overview or watch the video overview to solidify my understanding.
Step 5: Turn passive knowledge into forced recall
Here's the truth: reading summaries in different formats is still not enough, because you do not get a deep understanding of the topic.
If you want something to actually stick, you need retrieval, not more reading or watching.
So I always end my learning session by asking NotebookLM to test me.
For example:
- Create five scenario based questions from this topic
- Give me a short quiz that checks conceptual understanding, not definitions
- Ask questions where a wrong answer clearly reveals a misunderstanding
When I struggle to answer, I know exactly what I need to revisit.
This is how learning turns into a feedback loop instead of a one way dump of information.
The "30-Minute Sprint": How to use this today
Now, let's stop talking about the system and start using it.
If you have a topic you've been "meaning to learn", don't book a 4-hour window on Saturday.
You'll never do it.
Instead, run this 30-Minute Sprint. This is exactly how I tackle a new AI topic or learn something new.
a) 0–5 Minutes: The Curation Filter (Perplexity)
Don't go to Google or YouTube and start searching for the topic you want to learn.
Simply copy and paste this into Perplexity right now:
I need to understand [Topic] for [Specific Goal].
Find me the 3 most data-backed articles and 1 high-quality video transcript. Focus on 'how-it-works' rather than 'why-it-matters'.
b) 5–10 Minutes: Build the Brain (NotebookLM)
Download the PDFs or grab the URLs from Perplexity, and dump them into a new NotebookLM notebook.
Do not read them yet since you are building a silo, not a library.
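If Perplexity hands you several PDF links, a small script can pull them into one folder before you upload them to NotebookLM. This is just a convenience sketch using Python's standard library; the example URLs and the `sources` folder name are placeholders, not part of the workflow itself.

```python
import pathlib
import urllib.parse
import urllib.request

def pdf_links(urls):
    """Keep only the links whose URL path points at a PDF file."""
    return [u for u in urls if urllib.parse.urlparse(u).path.lower().endswith(".pdf")]

def download_all(urls, dest="sources"):
    """Save each PDF locally so the files can be added to a NotebookLM notebook."""
    folder = pathlib.Path(dest)
    folder.mkdir(exist_ok=True)
    for url in pdf_links(urls):
        name = pathlib.Path(urllib.parse.urlparse(url).path).name
        urllib.request.urlretrieve(url, folder / name)  # network call

if __name__ == "__main__":
    # Placeholder links standing in for whatever Perplexity suggested.
    links = [
        "https://example.com/paper.pdf",
        "https://example.com/blog-post",
    ]
    print(pdf_links(links))  # only the first link is a PDF
```

Non-PDF links (blog posts, videos) can be pasted into NotebookLM directly as URL sources, so the script only bothers with the files.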
c) 10–20 Minutes: The Aggressive Inquiry
Now, this is where the magic happens. Instead of reading top-to-bottom, "interview" your sources.
Ask NotebookLM:
- Based on these sources, what are the 3 non-obvious variables that make this work?
- Create a table comparing [Concept A] and [Concept B] based strictly on these files.
- What is the most common point of failure mentioned across all these documents?
d) 20–30 Minutes: The "Proof of Work"
Generate your output.
Click the "Audio Overview" in NotebookLM and listen to it at 1.5x speed while you grab a coffee. Then, write down one sentence that explains the topic to a total novice.
If you can't write that sentence, go back to the chat and ask: "Explain this to me like I'm a distracted executive who only has 30 seconds".
Hope this helps. That's it, thanks for reading.
Also, don't forget to check out "The (Unfair) AI Workflow Playbook", where I share the exact set of AI workflows I use daily to run my work faster than feels normal.