Let's be honest: as engineering students, we all use AI, whether it's for debugging a C++ pointer error that has been driving us crazy for four hours or for brainstorming ideas for that one elective essay we forgot about.
But for a long time, I had a nagging feeling. I felt like a fraud.
I was treating Copilot and ChatGPT like a magic answer machine. I would paste my prompt, cross my fingers, and hope the result was correct. If it hallucinated or gave me a generic answer, I would just get frustrated and rewrite the prompt at random.
I wasn't an engineer using a tool; I was a student relying on a lottery.
The "Imposter Syndrome" Moment
It hit me during a group project. We were trying to build a simple chatbot, and it kept making up facts about our dataset. My teammate asked, "Why is it doing that? Is the temperature too high? Or is it a context window issue?"
I had no idea what he was talking about. I realized that despite using these tools every day, I didn't actually know how they worked.
I decided to stop copy-pasting and start learning.
What I found (The "Aha!" moment)
I went looking for resources that weren't just "Top 10 Prompts" videos. I found a Microsoft Learn module specifically about the Fundamentals of Generative AI.
It didn't teach me "hacks." It taught me the architecture. And suddenly, everything made sense.
Here is what changed my perspective:
1. It's not Magic, it's Math 🧮
I finally understood what LLMs (Large Language Models) actually do. They aren't "thinking"; they are predicting the most probable next token, one token at a time, based on patterns learned during training.
- University application: Now, when I ask for code, I verify the logic, because I know the model is just predicting the most likely syntax, not "solving" the problem. The sketch below makes this concrete.
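If you want to see the "next-token prediction" idea with your own eyes, here is a minimal sketch using Hugging Face's transformers library with the small GPT-2 model (my choice for the example; any causal LM would do, and the prompt is made up). It prints the model's top five guesses for the next token:

```python
# pip install transformers torch
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The time complexity of binary search is O("
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # logits[0, -1] = the model's raw score for every possible NEXT token
    next_token_logits = model(input_ids).logits[0, -1]

probs = torch.softmax(next_token_logits, dim=-1)  # scores -> probabilities
top = torch.topk(probs, k=5)                      # five most likely next tokens

for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={p.item():.3f}")
```

Run it and you will likely see "log" near the top, not because the model understands complexity, but because that continuation is statistically likely. Temperature, the thing my teammate asked about, just rescales these probabilities before a token is sampled.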
2. Hallucinations aren't bugs, they are features 👻
I learned that the model is built to always produce fluent output, not to check facts. If you don't ground it in real sources, it will invent plausible-sounding details just to complete the pattern.
- University application: I stopped asking it for thesis citations directly. Instead, I use RAG (Retrieval-Augmented Generation) techniques or supply the source text myself (a toy version of this is sketched below).
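To make "grounding" concrete, here is a minimal, self-contained sketch of the retrieval half of RAG. The notes list and the keyword-overlap scoring are made up for illustration; a real setup would use embeddings and a vector store, but the idea is the same: fetch relevant text first, then force the model to answer from it.

```python
# Toy RAG: retrieve the most relevant note, then ground the prompt in it.
notes = [
    "Binary search runs in O(log n) time on a sorted array.",
    "Quicksort has O(n log n) average time but O(n^2) worst case.",
    "Dijkstra's algorithm finds shortest paths with non-negative weights.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the doc sharing the most words with the question (toy scoring)."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

question = "What is the worst-case time of quicksort?"
source = retrieve(question, notes)

# The grounded prompt: the model is told to answer ONLY from the source text.
prompt = (
    "Answer using ONLY this source. If the answer isn't there, say so.\n"
    f"Source: {source}\n"
    f"Question: {question}"
)
print(prompt)
```

The last instruction ("if the answer isn't there, say so") is the part that actually fights hallucination: it gives the model a valid way to complete the pattern without inventing anything.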
3. "Prompt Engineering" is actually Logic 🧠
Before, I would beg the AI: "Please write better code."
Now, I structure my requests: "Act as a Python Senior Dev. Analyze this function for time complexity. Explain your reasoning step-by-step."
Understanding how the model distributes attention across the words in my prompt (the attention mechanism at the heart of the Transformer architecture) changed how I talk to it.
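To show what "structured" means in practice, here is roughly how I lay that prompt out as code, using the common system/user chat-message format. The function under review is a made-up example, and the actual API call is omitted on purpose:

```python
# Structured prompting: role, task, and constraints as separate pieces,
# expressed in the widely used system/user chat-message format.
code_to_review = """def find_pair(nums, target):
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
"""

messages = [
    # Role: who the model should act as.
    {"role": "system", "content": "You are a senior Python developer."},
    # Task + constraints: what to do, and how to show its work.
    {
        "role": "user",
        "content": (
            "Analyze this function for time complexity. "
            "Explain your reasoning step by step, then suggest a faster version.\n\n"
            + code_to_review
        ),
    },
]

# Sending `messages` to a chat-completion API (OpenAI, Azure OpenAI, etc.)
# is left out here; the point is the structure, not the vendor.
print(messages)
```

Separating the role, the task, and the output constraints means every important word gets its own clear slot for the model to attend to, instead of everything being buried in one vague plea.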
Why you should care
If you are a student, you are probably worried about AI taking entry-level jobs.
The truth is, companies don't want people who can just use ChatGPT. They want people who understand how to build with it, how to debug it, and how to fix it when it goes wrong.
Taking an hour to understand the theory behind the chatbot puts you ahead of 90% of your peers who just use it as a homework machine.
📚 Where to start
If you want to move from "User" to "Engineer," this is the resource that helped me connect the dots. It covers LLMs, Tokenization, and Responsible AI in a way that is actually digestible (even during exam session).
👇 Check it out here:
👉 Fundamentals of Generative AI (Microsoft Learn)
Are you using AI for your studies? Do you trust it blindly or do you double-check everything? Let's discuss in the comments! 👇