DEV Community

Adam - The Developer
Premature Optimization Is Bad, But Your App Is Just Slow Because You're Lazy

"Premature optimization is the root of all evil."

Donald Knuth wrote that in 1974. It is one of the most cited lines in all of software engineering, and it has been used to justify more genuinely terrible code than almost any other idea in the field.

The quote is correct. The way most developers apply it is not.

There is a difference between premature optimization and basic engineering competence. Somewhere along the way, the industry collapsed that distinction, and the result is production systems that make users wait for things that should be instant.


What Knuth Actually Said

Here is the full sentence, which almost nobody quotes:

"We should forget about small efficiencies, about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

He was talking about micro-optimizations. Loop unrolling. Manual register allocation. Squeezing cycles out of hot paths before you know which paths are hot. He was not giving you permission to write N+1 queries, load 400KB of JavaScript on a login page, or fetch entire database tables into memory to filter them in your application layer.

The "premature optimization" shield has been stretched so far beyond its original meaning that developers now invoke it to defend code that is simply slow by design.


The Difference Between Optimization and Competence

There are two entirely different things that get lumped together under "performance":

Premature optimization is spending three days hand-tuning a sorting algorithm before you know if sorting is even on the critical path. It is rewriting a function in assembly before you have profiled anything. It is trading code clarity for speed gains that may not matter.

Basic competence is not making obviously expensive choices when obviously cheaper ones exist at the same level of effort.

These are not the same thing. One requires you to know the future. The other just requires you to know your tools.

Writing a loop that queries the database on every iteration is not a performance decision you deferred for later. It is a mistake you made right now. Selecting every column with SELECT * when you need two fields is not an optimization you skipped. It is unnecessary work you added.

Nobody calls it premature optimization when a carpenter pre-drills a hole before driving a screw. That is just knowing what you are doing.


The Patterns That Are Just Laziness

Let us be specific. These are not edge cases or nuanced tradeoffs. These are patterns that slow applications down and have no corresponding benefit.

The N+1 Query

```javascript
// One query for the posts...
const posts = await db.query('SELECT * FROM posts');

// ...then one more round trip per post: N+1 queries in total
for (const post of posts) {
  post.author = await db.query(
    'SELECT * FROM users WHERE id = ?', [post.author_id]
  );
}
```

If you have 200 posts, this runs 201 queries. If your database round trip takes 2ms, that is 402ms of pure waiting added to every single request, for the lifetime of the application, for every user, forever.

The fix is not an optimization. It is a JOIN, which is what relational databases were designed to do in 1970.

```sql
SELECT posts.*, users.name, users.avatar
FROM posts
JOIN users ON users.id = posts.author_id;
```

One query. Done. This is not a performance tradeoff. There is no version of the world where 201 queries is better than 1.
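And when a JOIN genuinely is not available, say the authors live behind a separate service, the same principle still holds: batch the lookups into one round trip instead of one per post. A minimal sketch, assuming a hypothetical `db.query` interface that returns rows; `attachAuthors` is an illustrative name, not from any library:

```javascript
// Hedged sketch: collect the distinct author IDs, fetch them all in a
// single IN query, then stitch the results back onto the posts in memory.
async function attachAuthors(db, posts) {
  const authorIds = [...new Set(posts.map((p) => p.author_id))];
  const users = await db.query(
    'SELECT id, name, avatar FROM users WHERE id IN (?)', [authorIds]
  );
  const byId = new Map(users.map((u) => [u.id, u]));
  for (const post of posts) {
    post.author = byId.get(post.author_id);
  }
  return posts; // one lookup query, no matter how many posts
}
```

Two queries total instead of N+1, and the shape of the code barely changes.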

Selecting Everything

```javascript
// Fetches every column in the row, then uses exactly two of them
const user = await db.query('SELECT * FROM users WHERE id = ?', [id]);
return { name: user.name, email: user.email };
```

You fetched every column including the password hash, the encrypted recovery codes, the full address, the preferences blob, and the thirty other fields your schema has accumulated over three years. You used two of them.

Every extra column is bytes over the network, memory allocated, time spent serializing and deserializing. More importantly, SELECT * means your application will silently break or leak data if someone adds a sensitive column to that table later.

Select what you need. Always.
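One way to make that discipline hard to forget is to keep the column list explicit in code. A minimal sketch, again assuming a hypothetical `db.query` interface; `PUBLIC_USER_FIELDS` and `getPublicUser` are illustrative names:

```javascript
// An explicit whitelist of columns: a sensitive field added to the table
// later can never leak through this code path by accident.
const PUBLIC_USER_FIELDS = ['name', 'email'];

async function getPublicUser(db, id) {
  const rows = await db.query(
    `SELECT ${PUBLIC_USER_FIELDS.join(', ')} FROM users WHERE id = ?`, [id]
  );
  return rows[0] ?? null;
}
```

The query now documents exactly what leaves the database, and adding a field is a deliberate one-line change.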

Synchronous Work That Does Not Need To Be Synchronous

```javascript
// These two things don't depend on each other
const user = await getUser(userId);
const settings = await getSettings(userId);
const permissions = await getPermissions(userId);
```

Each await waits for the previous call to finish before starting the next. If each takes 50ms, you have spent 150ms doing work that could have been done in 50ms.

```javascript
const [user, settings, permissions] = await Promise.all([
  getUser(userId),
  getSettings(userId),
  getPermissions(userId)
]);
```

Three concurrent requests, one round of waiting. This is not a micro-optimization. It is understanding how asynchronous code works, which is a baseline expectation for anyone writing it.
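If the difference feels abstract, it is cheap to see directly. A self-contained demo with fake 50ms calls; all the names here are illustrative:

```javascript
// Three independent fake fetches, run one after another and then
// concurrently, timing both. Sequential awaits pay for every call;
// Promise.all pays only for the slowest one.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const fakeFetch = async (value) => { await sleep(50); return value; };

async function compare() {
  let start = Date.now();
  await fakeFetch('user');
  await fakeFetch('settings');
  await fakeFetch('permissions');
  const sequential = Date.now() - start;

  start = Date.now();
  await Promise.all([
    fakeFetch('user'), fakeFetch('settings'), fakeFetch('permissions')
  ]);
  const concurrent = Date.now() - start;

  return { sequential, concurrent }; // roughly 150ms vs roughly 50ms
}
```

One caveat worth knowing: `Promise.all` rejects as soon as any input rejects, so if you need every result regardless of failures, reach for `Promise.allSettled`.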

Rendering Thousands of DOM Nodes Because It Is Easy

A dropdown with 8,000 options. A table with 50,000 rows. A chat window that mounts every message since 2019 into the DOM.

The browser has to create, style, layout, and paint every one of those nodes. Then it has to keep them in memory. Scrolling becomes janky. Interactions stutter. The user experience becomes noticeably bad.

Virtualization, pagination, and windowing exist. They are not heroic performance engineering. They are the correct default for lists of unbounded size.
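The core of windowing is a small calculation, not a framework. A hedged sketch for fixed-height rows; the parameter names are illustrative, not from any particular library:

```javascript
// Given the scroll position, compute which rows are actually on screen
// (plus a small overscan buffer) so only those get mounted in the DOM.
function visibleRange(scrollTop, { rowHeight, viewportHeight, total, overscan = 3 }) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const count = Math.ceil(viewportHeight / rowHeight) + overscan * 2;
  const last = Math.min(total - 1, first + count);
  return { first, last }; // render rows[first..last], offset the container
}
```

Libraries like react-window wrap this same idea; for a 50,000-row table it means mounting a few dozen nodes instead of all of them.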

Not Caching Things That Never Change

```javascript
// Called on every request
const countries = await db.query('SELECT * FROM countries ORDER BY name');
```

There are 195 countries. The list has not changed meaningfully in decades. You are hitting the database for it on every single page load.

A cache with a 24-hour TTL, or even just an in-memory constant loaded at startup, costs essentially nothing and eliminates the query entirely. This is not premature. This is reading the data and making an obvious decision about it.
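A minimal sketch of that idea; `cached` is an illustrative helper written here, not a library API:

```javascript
// Wrap a loader in an in-memory TTL cache: the first call (and the first
// call after expiry) hits the loader, every other call returns the copy.
function cached(loader, ttlMs) {
  let value;
  let expiresAt = 0;
  return async () => {
    if (Date.now() >= expiresAt) {
      value = await loader();
      expiresAt = Date.now() + ttlMs;
    }
    return value;
  };
}

// Usage (hypothetical query): one database hit per day instead of per request.
// const getCountries = cached(
//   () => db.query('SELECT name, code FROM countries ORDER BY name'),
//   24 * 60 * 60 * 1000
// );
```

A dozen lines, no dependencies, and the query disappears from the hot path. If you run multiple processes, a shared cache like Redis does the same job across all of them.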


Why This Keeps Happening

The honest answer is that slow code usually still works. The user experiences a delay. The developer does not feel the delay because they are testing on localhost against a database with 50 rows. The feature ships. The slowness becomes someone else's problem later.

There is also a subtler force at play. Modern frameworks and ORMs make it extremely easy to write slow code. ActiveRecord's lazy loading, GraphQL resolvers that each hit the database, React components that fetch independently of their siblings. These tools are excellent. They also make it trivially easy to produce N+1 queries without ever writing a single explicit loop.

The tools do not save you from understanding what they are doing on your behalf. That is still your job.


The Standard Worth Holding

"We'll optimize it later" is a reasonable thing to say about caching strategies, query tuning, and infrastructure scaling. It is not a reasonable thing to say about selecting fewer columns, batching database calls, or running independent tasks in parallel.

The bar is not "did the feature ship." The bar is "does the feature ship without obvious waste."

Profiling before optimizing is correct. Knowing what your code does before you write it is also correct. These are not in conflict. You do not need a profiler to know that 200 database queries is more than 1.


A Practical Filter

When you are writing code and wondering whether something is premature optimization or basic competence, ask one question:

Do I need to measure anything to know this is slower?

If the answer is yes, finish the feature and measure later. That is the Knuth principle in action.

If the answer is no, if the slower choice is obviously slower by construction and the better choice takes the same amount of time to write, then shipping the slow version is not a principled stance on optimization. It is just not doing the work.

Your users feel the difference. The profiler just helps you find it on a map.

Top comments (11)

Claudiu Cimpoies

This is a really good point and something that gets misunderstood a lot.

I’ve also seen the “premature optimization” quote used as an excuse for things that are clearly inefficient by design. There’s a big difference between micro-optimizing something early and simply avoiding obvious waste.

While working on a small personal blog project recently I ran into a similar situation when experimenting with rich-text editors. Some solutions made development faster at first, but they also introduced complexity and performance trade-offs that became obvious once the project started growing.

That experience made me appreciate the idea of “basic engineering competence” you mentioned here — understanding what the code and the tools actually do under the hood.

In your experience, do you think modern frameworks make this problem worse by hiding too much complexity from developers?

Victor Okefie

The distinction you're drawing is exactly right: lazy patterns aren't optimizations, they're just not thinking. But the deeper problem isn't technical, it's that slow code ships because the person writing it never has to wait for it. Localhost with 50 rows feels fast. Production with 50,000 users does not. The feedback loop is broken.

klement Gunndu

The N+1 example is spot on, but I'd push back slightly on eager loading as the default fix — in systems with deep relationship graphs, it can quietly balloon memory usage worse than the query count it solves. Profiling both sides matters.

Adam - The Developer

yup totally, eager loading can get hairy with deep relationships, memory can blow up fast. Still, N+1s are basically free points lost by default 😅.

Profiling both ways does matter.

Twisted-Code'r

*Honestly the N+1 query part hit a bit too close*

I’ve seen (and written) code exactly like that early on, where it looks clean in the loop but you forget every iteration is another database call. On localhost with like 20 rows everything feels instant, so nobody notices. Then later with real data it suddenly becomes “why is this endpoint 400-500ms??”.

I like how you framed it as competence vs optimization. Using a JOIN instead of 200 queries isn’t really “optimizing”, it’s just… using the database the way it was meant to be used.

Also the point about frameworks making this easy is real. ORMs and resolvers kinda hide what’s happening underneath, so it’s easy to accidentally create these patterns without realizing it.

Apogee Watcher

At some point in your career, you will need to accept that, in each project, you should start small and simple and will have to refactor a couple of times as you progress.

Adam - The Developer

preach. it never fails to amaze me that some people don't like the fact that code will eventually need to be refactored.

Robert Cizmas

I know a lot of people that suffer from Premature Optimization

Adam - The Developer

haha yes, it’s like a rite of passage for devs. all of that just to save a few microseconds

Mykola Kondratiuk

the Knuth quote has been doing so much damage for so long. i have seen developers use it to justify N+1 queries, loading entire tables into memory, and rebuilding indexes on every request - all with a straight face. "dont optimize prematurely" got twisted into "never think about performance until users are screaming". there is a middle ground between micro-optimizing string concatenation and just... writing code that is not obviously broken

Fernando Fornieles

Avoiding premature optimization is not a matter of "laziness", it is about maintainability and cost.

There are situations where performance comes directly from applying best practices and judgement (like the countries example). Totally agree with you here.

But there are many situations where we try to optimize for a situation that we don't know will happen. In these cases I prefer to optimize when the code really becomes a bottleneck.