So there I was, staring at a fully working Rubik's Cube simulator on my screen.
Colors, grid, faces — everything exactly where it should be. It ran first try.
And I had absolutely no idea what any of the code did.
The Problem With "It Works"
I'm an IB student trying to build a CS portfolio for university applications. Everyone says the same thing: build projects, put them on GitHub, show you can code.
So I did what most students do. I described what I wanted to an AI, it gave me the code, I ran it, it worked. Done. Next project.
Except — when I looked at the code, it was completely foreign to me. I knew what I asked for. I had no idea what the AI had actually built. I couldn't explain a single function if someone asked me to.
That's not a portfolio. That's a magic trick I didn't understand.
The Rebuild
I decided to start over. Same project, same AI tools — but this time with one rule: I don't move to the next line until I understand the current one.
The difference was in how I prompted. Instead of just saying "build me a Rubik's cube simulator in pygame," I wrote something like this:
"Add a comment above every function explaining what it does. Add inline comments on any line that isn't obvious. After each stage, pause and ask me if I understand before continuing."
That one change made everything different. Suddenly the AI wasn't just handing me a finished product — it was teaching me as it built.
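To give a concrete sense of what that kind of output looks like, here's a short snippet in the same heavily commented style. It's my illustration rather than the project's actual code, written in plain Python since the simulator is built on pygame but this piece needs no graphics:

```python
# Rotate one face of the cube 90 degrees clockwise.
# A face is stored as a 3x3 grid of sticker color labels.
def rotate_face_clockwise(face):
    # After a clockwise turn, the cell at (row, col) is the one
    # that sat at (2 - col, row) in the old grid, so the old left
    # column (read bottom to top) becomes the new top row.
    return [[face[2 - col][row] for col in range(3)] for row in range(3)]

# The G stickers on the right column end up on the bottom row.
front = [["W", "W", "G"],
         ["W", "W", "G"],
         ["W", "W", "G"]]
print(rotate_face_clockwise(front))
```

Reading a line like the index formula above and not moving on until I could explain *why* it's `face[2 - col][row]` is exactly the rule the prompt enforces.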
What I Actually Learned
This isn't really about Rubik's Cubes or pygame. It's about context engineering: the quality of what an AI gives you is largely determined by the quality of the context you give it, how you ask, what constraints you set, and what you ask it to explain along the way.
What's Next
The cube now has a full 2D net, live notation input, and a Kociemba two-phase solver in progress.
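For anyone curious what "notation input" means: cube moves are written in Singmaster notation, where R is a clockwise quarter turn of the right face, R' is counter-clockwise, and R2 is a half turn. A minimal sketch of a parser, my illustration rather than the project's actual code, might look like this:

```python
# Parse a Singmaster-notation string like "R U R' U2" into a list of
# (face, quarter_turns) pairs, where quarter_turns is 1 (clockwise),
# 3 (counter-clockwise, i.e. three clockwise turns), or 2 (half turn).
def parse_moves(notation):
    moves = []
    for token in notation.split():
        face = token[0]
        if face not in "URFDLB":
            raise ValueError(f"unknown face: {face}")
        if token.endswith("'"):
            moves.append((face, 3))
        elif token.endswith("2"):
            moves.append((face, 2))
        else:
            moves.append((face, 1))
    return moves

print(parse_moves("R U R' U2"))  # [('R', 1), ('U', 1), ('R', 3), ('U', 2)]
```

Expressing counter-clockwise as three clockwise quarter turns keeps the simulator simple: every move reduces to repeated calls of one clockwise-rotation routine.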
I'm an IB student documenting my CS journey while applying to university. Drop a comment if you're doing the same.