Frameworks are good for more than just boilerplate. They encode decisions: how to structure a project, where logic belongs, how to handle requests. A developer picking up Laravel or Spring for the first time isn't just getting free code — they're inheriting years of hard-won conventions. That's valuable. It means a junior and a senior on the same team are solving the same problem in almost the same shape.
But "frameworks are useful" doesn't mean "always use a framework." Knowing when not to reach for one is as important as knowing how to use one.
When you're still learning the language
This is the one that gets skipped most often, and causes the most damage later.
When the only mental model is "Laravel does it this way", it's not really programming — it's copying at a higher level. Instead of copying Stack Overflow snippets, the developer copies framework patterns. The abstraction is more sophisticated, but the understanding underneath is the same. When a bug appears outside the framework's happy path, or something the framework doesn't support cleanly is needed, there's nothing to fall back on.
A concrete example: webhook signature verification.
// ❌ What you might write if you only know framework routing
$expected = 'sha256=' . hash_hmac('sha256', $rawBody, $secret);
return $expected === $received; // Vulnerable to timing attacks
// ✅ What learning the language teaches you
$expected = 'sha256=' . hash_hmac('sha256', $rawBody, $secret);
return hash_equals($expected, $received);
The difference is hash_equals() instead of ===. hash_equals() compares strings in constant time — a language-level security detail that prevents timing attacks. No framework teaches this; it's just PHP. A developer who learns PHP only through a framework might write === and never know it was wrong.
Learn the language first. Write raw SQL before using an ORM. Handle routing yourself before adding a router. Not forever — just long enough to see what the abstraction is actually doing for you.
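As a minimal sketch of "raw SQL before an ORM": PDO with prepared statements shows what an ORM is doing under the hood. The in-memory SQLite database and the `users` table here are hypothetical, for illustration only.

```php
<?php
// Hypothetical example: raw SQL via PDO, no ORM.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)');

// Placeholders, not string interpolation: the driver keeps data
// separate from the query text. This is the safety an ORM wraps.
$stmt = $pdo->prepare('INSERT INTO users (email) VALUES (:email)');
$stmt->execute([':email' => 'a@example.com']);

$stmt = $pdo->prepare('SELECT email FROM users WHERE id = :id');
$stmt->execute([':id' => 1]);
echo $stmt->fetchColumn(); // a@example.com
```

Once you've written this a few times, an ORM stops being magic — it's this, plus object mapping and query building.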
When the project is small enough to not need one
A framework has a cost beyond file size or boot time. The mental overhead of fitting your problem into its model is real.
For a 200-line script, a standalone endpoint, or a cron job that reads a file and sends an email — that cost doesn't pay off. A script that runs once a day and calls one external service doesn't need routing, DI containers, or a migration system. It needs to work.
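A standalone endpoint at this scale can be sketched in a few lines of plain PHP — the dispatch table and route names below are hypothetical, not from any real project:

```php
<?php
// Sketch: a standalone endpoint with no router library.
// One function maps a path to a response body.
function dispatch(string $path): string
{
    switch ($path) {
        case '/health':
            header('Content-Type: application/json');
            return json_encode(['ok' => true]);
        default:
            http_response_code(404);
            return 'Not found';
    }
}

$path = parse_url($_SERVER['REQUEST_URI'] ?? '/', PHP_URL_PATH);
echo dispatch($path);
```

No DI container, no middleware stack — and nothing here that a framework would make meaningfully shorter at this size.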
The same principle applies to pulling in packages. PointArt's self-updater downloads release zips from GitHub with no HTTP client library:
// ❌ Reaching for a package by default
$client = new GuzzleHttp\Client();
$zip = $client->get($zipUrl)->getBody();
// ✅ PHP already handles this natively
$ctx = stream_context_create(['http' => ['timeout' => 30]]);
$zip = file_get_contents($zipUrl, false, $ctx);
Knowing the language means knowing when the standard library is enough — and not adding a dependency graph to solve a problem that was already solved.
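One thing a sketch like the above still needs is failure handling, since file_get_contents() returns false on error. A hedged version, wrapped in a hypothetical helper (the function name and default timeout are illustrative, not PointArt's actual code):

```php
<?php
// Sketch: native download with explicit failure handling.
// Returns the body on success, null on any failure.
function download(string $url, int $timeout = 30): ?string
{
    $ctx = stream_context_create(['http' => ['timeout' => $timeout]]);
    // Suppress the warning; the false return is the signal we act on.
    $data = @file_get_contents($url, false, $ctx);
    return $data === false ? null : $data;
}
```

Still no dependency graph — just the standard library plus the error check the one-liner glossed over.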
The question isn't could I use a framework here? It's does this problem have enough surface area that shared conventions help me manage it?
When the framework's model doesn't fit your problem
Frameworks are designed around specific problem shapes. A web framework expects HTTP request/response cycles. An MVC framework expects controllers, models, views.
If your project doesn't fit that shape — a long-running daemon, a CLI tool, a data pipeline, a batch processor — you spend as much effort fighting the framework as building the thing.
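For a shape like a CLI tool, the "routing" is just argument parsing, and plain PHP covers it. A minimal sketch — the flags and defaults are hypothetical, chosen only to show the pattern:

```php
<?php
// Sketch: CLI argument parsing with no framework.
// Hypothetical flags: --input=FILE and --dry-run.
function parseArgs(array $argv): array
{
    $opts = ['input' => 'data.csv', 'dry-run' => false];
    foreach ($argv as $arg) {
        if (str_starts_with($arg, '--input=')) {
            $opts['input'] = substr($arg, strlen('--input='));
        } elseif ($arg === '--dry-run') {
            $opts['dry-run'] = true;
        }
    }
    return $opts;
}

$opts = parseArgs(array_slice($argv ?? [], 1));
```

There's no request/response cycle here for a web framework to model — forcing one in is where the fighting starts.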
When I built PointArt, I had to make this call explicitly: no middleware system, no async, single-process, designed for shared hosting. Not oversights — deliberate limits because the target problem is the web request/response cycle on constrained hosting.
The AI angle
There's a new version of the "framework without language knowledge" problem: using AI to generate framework code without understanding either.
With enough prompting you can build something that looks like a working application. Frameworks help AI here — it's seen a lot of Laravel and Rails, so it generates plausible-looking code. But when something goes wrong, and it will, you have no model of what correct looks like. You can't debug what you can't read. You can't maintain what you don't understand.
AI is a strong tool for developers who already know what they're doing. It fills in boilerplate fast. But the understanding it skips is exactly what you'll need when the generated code misbehaves.
The practical signals
Use a framework when:
- Multiple developers need shared conventions across a long-lived codebase
- The problem shape fits the framework's model well
Skip it when:
- You're still learning the language fundamentals
- The project is small enough that the model costs more than it saves
- The deploy target rules it out
- Your problem shape doesn't match the framework's assumptions
The goal is not to avoid frameworks. It's to know what they're doing — so you can choose when to use them, and know what to do when they stop working.
I've been writing about building PointArt — a zero-dependency PHP micro-framework — from scratch. If you're curious about what it looks like to make these decisions at the framework level, you can take a look at PointArt Devlog Series.
Top comments (2)
In an era where AI can scaffold complex framework architectures in seconds and bridge the gap between intent and implementation, is deep language-level mastery shifting from a 'professional requirement' to mere 'technical nostalgia'? If AI eventually reaches a point where it can flawlessly debug edge cases outside the framework’s 'happy path,' does the manual understanding of the underlying language risk becoming nothing more than an academic hobby rather than a core engineering skill?
That's a genuinely difficult question to anticipate — but worth unpacking carefully.
First, consider whether AI itself would need frameworks. If AI ever reaches the point of reliably building a project from start to finish, the question becomes: are framework conventions still the optimal path, or does AI find better patterns on its own? The framework rules we follow today are conventions shaped by human constraints — team coordination, readability, shared mental models. An AI that no longer shares those constraints may not benefit from the same abstractions.
Second, without foundational understanding, evaluating AI output remains unreliable. AI is trained on existing human-written projects and attempts to mimic how humans write code. But just as humans are poor at auditing their own reasoning, AI is not a reliable judge of its own output.
Third, there's a practical feasibility question. Will AI reach this level of reliability? Will it become hallucination-proof? And even if it does, we're still dependent on the platform, the API, and the company maintaining it. Treating AI as a complete replacement for language-level knowledge means accepting all of those dependencies as a foundation — which is a significant bet.
So the concern isn't that AI will never improve. It's that the understanding being skipped is exactly what you'd need to evaluate, correct, and extend whatever AI produces — now and for the foreseeable future.