Pop quiz. What's wrong with this code?
```javascript
router.post('/login', async (req, res) => {
  const { email, password } = req.body;
  logger.info('Login attempt', {
    email,
    password, // LINE 5
    ip: req.ip
  });
  const user = await db.query(
    'SELECT * FROM users WHERE email = $1',
    [email]
  );
  const u = user.rows[0];
  // ... auth check ...
  analytics.track('user_login', {
    email: u.email,
    ssn: u.ssn_last4, // LINE 22
    creditScore: u.credit_score,
  });
  return res.json({
    token,
    user: {
      email: u.email,
      passwordHash: u.password_hash, // LINE 30
      ssn: u.ssn_last4,
      creditScore: u.credit_score,
    }
  });
});
```
There are 7 issues in there. How many did you spot? Did you catch that line 5 writes plaintext passwords to your log aggregator? That line 22 sends SSN data to a third-party analytics service, violating GDPR Article 28? That line 30 returns the password hash in the API response?
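The logging half of this has a simple defensive pattern. Here's a minimal sketch (the `SENSITIVE_KEYS` set and `redact` helper are my own illustration, not the platform's code): scrub known-sensitive fields before any payload reaches a logger or analytics call.

```javascript
// Hypothetical redaction helper: strip sensitive fields before they
// reach the log aggregator or any third-party analytics service.
const SENSITIVE_KEYS = new Set([
  'password', 'password_hash', 'ssn', 'ssn_last4', 'credit_score', 'creditScore',
]);

function redact(payload) {
  // Replace the value of every sensitive key; pass everything else through.
  return Object.fromEntries(
    Object.entries(payload).map(([key, value]) =>
      SENSITIVE_KEYS.has(key) ? [key, '[REDACTED]'] : [key, value]
    )
  );
}

// Usage (assuming the `logger` from the snippet above):
// logger.info('Login attempt', redact({ email, password, ip: req.ip }));
```

An allowlist (log only fields you've explicitly approved) is even safer than this denylist, since new sensitive columns won't slip through by default.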
This is a real scenario from LearningTo.co - a training platform I built for the dev skills that CS programs don't teach.
## The problem
I've been hiring and mentoring developers for years and I keep seeing the same pattern. Junior devs come in knowing algorithms and data structures but have never:
- Reviewed a pull request and caught a real bug
- Identified PII leaking through logs or API responses
- Recovered from a force-push that wiped a teammate's work
- Looked at AI-generated code and said "this looks right but it's wrong"
These aren't edge cases. This is Tuesday on a real engineering team. And almost nobody teaches them.
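Take the force-push example: the commits aren't actually gone on any machine that still has them, because the reflog keeps the old branch tip. A rough recovery sketch (here `git reset` stands in for the rewritten history; the repo and commit names are made up for the demo):

```shell
# Simulate "losing" a commit, then recover it from the reflog.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=demo -c user.email=d@example.com commit -q --allow-empty -m "base"
git -c user.name=demo -c user.email=d@example.com commit -q --allow-empty -m "teammates work"
lost=$(git rev-parse HEAD)
git reset -q --hard HEAD~1   # the "wipe": the commit vanishes from the branch
git reflog                   # ...but the old tip is still listed here
git branch rescue "$lost"    # restore it on a new branch
git log --oneline rescue     # "teammates work" is back
```

In a real incident you'd read the hash out of `git reflog` instead of saving it in a variable first, but the mechanism is the same.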
## What I built
LearningTo.co is scenario-based training. You get dropped into a realistic situation - a PR review, a failing pipeline, a suspicious config - and you have to investigate, flag issues, and explain your reasoning.
Each scenario gives you context like a real team would: "You've just joined a fintech startup. Your tech lead drops a PR review in your queue." Then you're evaluated on three things: what you find, how you explain it, and whether you can prioritize severity.
## Here's what gameplay looks like
You see the code. A timer's running. You click lines to flag them. Then you write your analysis explaining what's wrong and why it matters.
The key: reasoning quality is scored, not just finding the line numbers. Saying "line 5 has a bug" gets you nothing. Saying "line 5 logs the plaintext password to the log aggregator, which means anyone with log access can see user credentials" gets you points.
After you submit, you get a full debrief: what you found, what you missed, and why each issue matters in production. A score, a grade, and a ship/block verdict.
## The AI Code Review course (free, 8 chapters)
This is the part I'm most proud of. GitHub teaches you how to use Copilot. Nobody teaches you how to review what it produces.
Eight hands-on chapters, each a real scenario:
- Copilot's First CRUD - it compiles, it runs, but should it ship? (SQL injection, no auth, swallowed errors)
- The Hallucinated Import - half the imports point to packages that don't exist on npm
- The 100% Coverage Lie - AI-generated tests show perfect coverage but test nothing meaningful
- The Confident Refactor - AI "improved readability" but silently removed business rules and compliance checks
- Audit the Agent - an AI support agent makes tool calls to external APIs. What could go wrong?
- MCP Server Lockdown - an MCP server gives an AI assistant access to your filesystem and database
- The Destructive Migration - AI-generated schema migration. Your DBA is on vacation. You're the last line of defense.
- CI Pipeline from Copilot - it builds, tests, and deploys. Find the supply chain and injection risks.
That screenshot above is Chapter 2 - "The Hallucinated Import." Can you spot which of those imports point to real npm packages and which were completely fabricated by Copilot? (@react-toolkit/data-grid doesn't exist. Neither does use-smart-fetch. And next/analytics is not a thing.)
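One cheap heuristic for this class of problem (a toy sketch of my own, not the course's checker): flag any import whose base package isn't declared in `package.json`. It catches packages that aren't in the project at all; fabricated subpaths of real packages, like `next/analytics`, need a deeper check.

```javascript
// Toy hallucinated-import check. The package.json contents here are a
// made-up example for illustration.
const pkg = { dependencies: { react: '^18.2.0', next: '^14.1.0' } };

function basePackage(specifier) {
  // '@scope/name/sub' -> '@scope/name'; 'name/sub' -> 'name'
  const parts = specifier.split('/');
  return specifier.startsWith('@') ? parts.slice(0, 2).join('/') : parts[0];
}

function undeclaredImports(specifiers) {
  const declared = new Set(Object.keys(pkg.dependencies));
  return specifiers.filter(s => !declared.has(basePackage(s)));
}

// undeclaredImports(['react', '@react-toolkit/data-grid', 'use-smart-fetch'])
// flags the two fabricated packages but not 'react'.
```

A real version would read the actual `package.json` and also verify the specifier resolves on npm.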
The entire course is free. No paywall, no credit card.
## What's live
4 categories, 17 scenarios:
| Category | Scenarios | What you practice |
|---|---|---|
| PII & Security (/ntrol) | 4 | API key exposure, login endpoint bugs, webhook handling, user profile data leaks |
| Git & Gitflow (/mmit) | 3 | Merge conflicts, force-push recovery, conventional commits |
| Code Reasoning (/de) | 2 | Auditing AI-generated services, spotting subtle bugs |
| AI Code Review (/view) | 8 | The full course above |
The URL taxonomy is a nerdy Easter egg - every slug completes the phrase "learning to..." (learning to /mmit, learning to /ntrol, learning to /de, learning to /view).
7 more categories are coming: Observability, Debugging, Code Review, CI/CD, Performance, Incident Ownership, and Testing.
## Who it's for
- CS students about to start their first internship or job
- Bootcamp grads who can build features but haven't done team-based dev work
- Junior devs who want to level up on the skills that get noticed in code reviews
- Senior devs - try the AI Code Review course. I guarantee Copilot has snuck something past you.
## Try it
Head to learningto.co and start with The Login Endpoint - it has 7 PII issues hidden in a fintech auth route. See how many you can find in 10 minutes.
Or jump straight to the AI Code Review course if you want to see how good you really are at catching Copilot's mistakes.
Everything live is free. Sign up takes 10 seconds with GitHub or Google.
I'd love to hear: what dev skills do you wish someone had taught you before your first job? Drop a comment - it'll probably become a scenario.






