Kaike
Is AI Really Making Us Dumber?

How to Use ChatGPT to Boost Your Learning Efficiency through UltraLearning


Over the past two months, I’ve been deeply immersed in the book UltraLearning by Scott Young, following a recommendation from one of my university professors. I had asked him for guidance on how to improve my learning approach to enhance both my academic and professional performance.

The insights I gained from this reading inspired me to write this post, where I share how I’m applying the book’s principles in my own journey as a Computer Science student. My goal is to help anyone looking to learn a new skill or deepen their knowledge by using the modern tools available to achieve their desired outcomes more efficiently.

This post isn't about prompt engineering techniques. Instead, it’s about how to frame your interaction with Large Language Models (LLMs) according to the principles of UltraLearning, in order to achieve a specific learning goal. This kind of contextual “fine-tuning” before engaging with the tool can significantly improve your results.

Based on the nine principles explored in the book, I’ll focus on three where I see strong synergy with Generative AI: Meta-Learning, Direct Practice, and Feedback.

AI and Meta-Learning

Chapter 4 introduces the principle of Meta-Learning — the process of first mapping out how a subject is structured so you can focus your efforts on the foundational concepts that matter most.

In this context, I’ve found that AI can help identify what to focus on to build a specific skill or understand a particular topic more effectively.

By providing the right context, we can prompt the LLM to outline the core elements of the subject we want to study. This creates a more focused and goal-oriented learning experience.

Personally, I always strive to identify the crucial 20% of concepts that will help me understand 80% of the material, following the Pareto Principle.

AI and Direct Practice

Chapter 6 discusses the principle of Direct Practice — actively practicing the skill or knowledge you want to acquire in a way that closely mirrors how you’ll use it in the real world.

Once I’ve mapped the key concepts of a topic, I use AI to generate practice questions related to each one. To align with the Direct Practice principle, I ask for questions that resemble those I expect to see on my actual university exams. This helps me overcome what Scott Young calls the “Transfer Problem” — making sure that practice matches performance.

Although my examples focus on theoretical academic content, this process works for any skill. When I’m learning a programming technology, for instance, I prompt the AI to give me challenges that involve building an algorithm or a mini-app using that tool — again, making the practice as close as possible to the intended application.
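As an illustration, the Direct Practice request described above can be assembled programmatically before being pasted into the chat. Below is a minimal Python sketch; the function name and prompt wording are my own, not from the book:

```python
def direct_practice_prompt(skill: str, context: str, n_challenges: int = 3) -> str:
    """Build a Direct Practice prompt: ask for exercises that mirror
    the real-world setting where the skill will actually be used."""
    return (
        f"I am learning {skill}.\n"
        f"Real-world context where I will apply it: {context}.\n"
        f"Give me {n_challenges} practice challenges that mirror that context "
        f"as closely as possible, ordered from easiest to hardest."
    )

# Example: practicing a programming technology the way it will be used
print(direct_practice_prompt("FastAPI", "building REST APIs for a university project"))
```

The point of parameterizing the context is to keep the practice aligned with the intended application, which is exactly what the Transfer Problem warns about.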

AI and Feedback

Chapter 9 introduces the principle of Feedback — the process of getting input on your performance in order to adjust and improve over time.

The book outlines three levels of feedback, with Corrective Feedback being the most valuable. This type not only tells you what you got right or wrong, but also how to fix it so you improve. The problem? Corrective feedback typically requires an expert — a teacher or mentor — who can accurately assess your work and guide you.

Generative AI helps bridge this gap. LLMs can now provide feedback on your answers to study questions, or review your code and explain how to fix your errors — essentially mimicking corrective feedback in real time.


Below is an example of the prompt structure I use, incorporating the principles discussed in this post.
Context: You are my professor for {Subject}.
Goal: You are responsible for answering any questions I have. I will provide the question along with the reference material I’m using.

Instructions:
- Structure your answer into clear sections (e.g., "Opportunities", "Challenges", "Practical Cases").
- Use the Pareto Principle (80/20) to highlight the most critical information for understanding the topic.

Reference materials to be used: {List of materials}

Problem to be solved:
My study must cover the key topics of the specified subject.

Action to be taken: Use the references to map out the crucial content of each topic, and create review questions at the end of each one.
Throughout my study, you must also answer any doubts I have.

Upon my request, you should correct my answers and provide feedback, including observations and corrections.
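To reuse the structure above across subjects, the placeholders can be filled in programmatically before the result is pasted into the chat. A minimal Python sketch, assuming the helper name and example inputs are my own illustration:

```python
# Template mirroring the prompt structure from this post,
# with {subject} and {materials} left as placeholders.
PROMPT_TEMPLATE = """Context: You are my professor for {subject}.
Goal: You are responsible for answering any questions I have. I will provide the question along with the reference material I'm using.

Instructions:
- Structure your answer into clear sections (e.g., "Opportunities", "Challenges", "Practical Cases").
- Use the Pareto Principle (80/20) to highlight the most critical information for understanding the topic.

Reference materials to be used: {materials}

Problem to be solved:
My study must cover the key topics of the specified subject.

Action to be taken: Use the references to map out the crucial content of each topic, and create review questions at the end of each one.
Upon my request, correct my answers and provide feedback, including observations and corrections."""


def build_study_prompt(subject: str, materials: list[str]) -> str:
    """Fill the template with a subject and a list of reference materials."""
    return PROMPT_TEMPLATE.format(subject=subject, materials=", ".join(materials))


# Hypothetical example inputs
print(build_study_prompt("Operating Systems", ["Tanenbaum, Modern Operating Systems", "lecture slides"]))
```

Keeping the template in one place makes it easy to tweak the instructions once and apply them to every subject you study.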


The Thin Line Between Feedback and Dependency

As Generative AI becomes a regular part of our workflows, we’re seeing a growing trend of outsourcing more and more tasks to these tools. The real risk lies in delegating critical thinking, which raises the question:
Is AI making us dumber?

That’s why it’s our responsibility to strike a balance — to use AI for feedback, but not to let it do the thinking for us. After all, how can we assess the correctness of AI feedback if we don’t understand the topic ourselves?

That’s why I believe, above all else, we must remain curious and problem-oriented. This mindset keeps us actively engaged with our learning process. It drives us not just to find answers, but to understand the reasoning behind them. With this posture, we don’t become dependent on AI outputs — instead, we become independent thinkers who build our own answers.
