
Do you have any bad “sunk cost fallacy” stories?


I recently found myself faced with a decision where the best outcome may be to abandon prior work and start over more simply.

This was a hard call, because my brain wants me to reap the rewards of prior labor, even if the result is less than ideal.

Do you have any stories where you or your team went too far down the sunk cost rabbit hole?


Repair Client: I have anti-virus software, but I got a bunch of viruses anyway.

Me: That's because the anti-virus software you're using has a reputation for missing most viruses. You need to use this other one (free), which is proven by independent lab tests to catch almost all malware.

Repair Client: I'd rather finish using the current anti-virus software. (About a year left on the subscription.) I paid a lot of money for it.

Me: Yes, but it doesn't work.

Repair Client: I don't care. I just want to get my money's worth.


There are so many people who are exactly like that; I don't understand it! It's hilarious and frustrating at the same time.


Sounds more like the viruses are the ones getting his money's worth.


I rewrite all critical solutions from scratch. That is, when I face a relatively complicated issue, I do a quick MVP and then completely throw the code out and rewrite it. No, I don't allow myself to copy-paste even the pieces that look good.

Only that way can I be confident that the version that goes to prod is relatively mature and robust. At the very least, it lacks all the congenital problems the first version had.

So, yeah, answering your question: it happens every single time I face a problem more complicated than bubble sort, and it happens on purpose.


Do you have a consistent approach in terms of git branches etc?

Yes. I don't git the first attempt at all, to avoid the temptation to just merge it and move ahead toward new, alluring horizons :)


Wow. That's not an answer I was expecting to see, and is (if I may say so) far more brutal an approach than would have entered my mind.

Not saying that's bad, by the way.

Did that approach take special training?


Training? Hell no. You just approach the next serious issue without a preceding `git checkout -b`, and do `git reset --hard HEAD` once it works.
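A minimal sketch of that throwaway-prototype workflow, played out in a scratch repository (the directory setup, file names, and commit messages here are all illustrative, not from the thread):

```shell
#!/bin/sh
set -e

# Scratch repository for demonstration purposes only.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "baseline"

# Prototype phase: hack directly on the default branch, with no
# feature branch, so there is nothing convenient to merge later.
echo "quick and dirty spike" > solution.txt
git add solution.txt
git commit -q -m "spike: prove the approach works"

# Once the approach is proven, discard the prototype entirely...
git reset -q --hard HEAD~1

# ...and start the real implementation from a clean slate.
test ! -f solution.txt && echo "prototype discarded"
```

The point of the sketch is that `git reset --hard` makes discarding the spike a one-liner, and because the spike never lived on its own branch, there is nothing left lying around to tempt you into merging it.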


Edit: I suppose this is the opposite of the Sunk Cost Fallacy. :S

I do. I previously posted my dev history, which contained this:

When I first came back on board 4 years ago, the plan was to do a big-bang rewrite of the system. In the back of my mind I knew it was a bad idea, but there was general agreement that it was the best plan. It was also a siren song to start the architecture with a clean slate.

However, since I had been gone, the system had become pretty sprawling, with a lot of apps integrated through a database. As I developed the new system, it became harder and harder to understand how we could possibly migrate to it from what we had.

Eventually, the executives got tired of not getting business value from my expensive salary and directed me to fix some really painful issues in the legacy app for a while. It was after that point that my manager and I agreed to abandon the rewrite. Instead, we started implementing the good things we had learned from the rewrite into the existing system. We made a lot of positive changes to ease pain for various users. (And we still have a lot of work to do, but we can at least see a path forward.)

I have successfully done rewrites before (on the previous project, in fact), but the only successful rewrites have been progressive, not big bang. It goes right along with this quote:

A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work.

  • John Gall

Our legacy system still has numerous known issues. It's like a hallway with lots of immovable obstacles and things sticking out of the walls. You eventually learn how to navigate the hallway, and it becomes normal. But it limits what you can carry through it. So I have been approved for, and have started, a progressive rewrite. This time I am building a parallel system and propagating the data the legacy system needs to keep functioning. Then I'll migrate one piece at a time to the new system and turn that piece off in the old system. That's the plan, anyway.

When to rewrite

It is rarely worth rewriting simply to re-express the same solution on top of a different platform (framework, language). In that situation, all you really see are the upsides of the new platform; it isn't until you get into it that you begin to see the new challenges. You often end up with an equal number of issues, just in different places.

I usually find a rewrite worthwhile when the current code base / system constraints both restrict new kinds of features and resist refactoring. That ends up making the overhead of a new feature many times the cost of the feature implementation itself. But the only way rewriting yields an improvement is to first figure out a system/organization strategy (design, architecture) that is a better fit for your situation.


Well said, and I completely agree: rewriting exactly the same thing on top of different tech "just because" is begging for trouble.


The biggest sunk cost fallacy comes from lock-in with a vendor or platform. Switching costs can be incredibly high, we understand that, but we're also terrible at estimating the ongoing cost of not switching.


I think the biggest lock-ins are with languages, frameworks, and databases: so big that teams stay stuck with bad decisions for decades and refuse to admit that the requirements have changed.

The vendors and platforms can be easily changed, most public services are very similar in behavior.


"Similar" is only superficial. For example, PayPal and Stripe offer similar features, but the way they work in practice is completely different. If you build your business on top of one it's not necessarily easy to switch. PayPal does things Stripe doesn't do, and vice-versa.

The key difference here is that programming languages and many frameworks are run by organizations committed to improving that product for its user base. Their priorities are centred around making the platform better.

Any "software as a service" company is focused on building their business and they may make dramatic changes that aren't in your best interest. They may shut down their API for no reason. They may pivot to a new business model that makes their service prohibitively expensive. They may get acquired and run into the ground.

As one example, PHP will no doubt be around in ten years, and while the popular frameworks will no doubt change between now and then, any framework you pick now will still exist then even if it has a much smaller user base. You can always patch and update it yourself if necessary.

The same is not true for that SaaS that looks so amazing today. It might be completely gone in ten years.

It is true, but ...

most API integrations I've seen can be switched out with a few weeks of dev hours.

For example, PayPal and Stripe can easily be replaced on most websites I know, sometimes with a single click (install a different add-on). We moved from Parse to a custom API in about two weeks, I think (when it shut down).

What I meant is:

  • the downsides of a language/framework in the long run are far more devastating and limiting (see Instagram/FB with their PHP, for example)
  • a SaaS is usually limited to one feature of the project and in most cases can't affect the entire project; you can switch it out in small steps, soft releases, or A/B tests

Ah, interesting. I have the completely opposite experience: most languages and platforms offer similar results one way or another, so picking any one won't make a difference in the long run. It's the internal business logic that becomes the issue and can't easily be changed, partly due to code structure, but more so because of the existing business logic itself.

Completely agree about 3rd-party integrated services: these companies can close down or be bought and change their business model completely, so that's always a risk.

But in the end I think it's more important to be able to adjust and jump on new business opportunities than to worry about tech stack.


"This was a hard call, because my brain wants me to reap the rewards of prior labor, even if the result is less than ideal."
The reward is usually your new level of skills and understanding, which is exactly what will let you write better code.

But, in my experience, it is better not to do complete rewrites; they always take at least four times longer than you hope. Refactoring and extending, yes. Rewrites are good in the first stages of a project, when you are doing MVPs and prototypes and figuring things out.


I'm in the middle of one. Won't give many details because, well, I'm in the middle of it.

I'm working as an infosec consultant, and our current client is a major bank subject to really big, important regulations that basically mandate keeping records of every time someone gains effective access to one of the many applications they use in-house. In addition, they need to investigate any instance where a person gains access without a recorded request for that access and subsequent approval from both their manager and the application owner.

We're currently working on a very big software deployment for them that automates a lot of access-related things and can also completely handle those regulations I just mentioned. However, when we pitched this to them, they decided they wanted an external application, built from scratch, to reconcile new access. The reason? In the past, they paid a different consulting firm to build an external application for that. It didn't work, and they internalized the idea that "we need specially made software to do this."

We weren't able to convince them to just roll it into the main project, so I spent a few months making the new app from scratch. It works, but now that we're finally moving out of the requirements phase of the new project, they're realizing that they made a huge mistake and are paying us twice for the same thing.


The sunk cost fallacy is analogous to "that's the way we've always done it."
So freaking what!

Working for the federal government, there is a near inability to say, "hey, the money already spent doesn't matter; it's a sunk cost." Changing course on a big scale can also lead to bad press about wasted money. That mindset is completely foreign there and nearly impossible to work against, especially since the supporting structures for performing the work also don't react well to change. Hey, mid-contract we found a better way to do X? Well, too bad, because a contract mod will take until beyond the end of the contract to get done, so plug away on the lesser solution.
Generally, if it's purely a dollar thing, the economics of sunk costs can be worked around, but the sunk cost of people's time and passion is another matter. We might get more dollars; getting back people's time ain't gonna happen. People are funny like that, and so is time.

As a fan of economics and software engineering: nope, no story. I intentionally ignore all sunk cost arguments. P*ss*d off a boss or two along the way, and made many a non-fan among the people who say "because that's the way we've always done it."


"This is the way we've always done it" and "we don't need automated tests. We've used the same scripts for 20 years!" are my two pet peeves in doing government work...


Like most comments - opposite situation for me.

If I get the tiniest whiff of something off in my architecture, File -> New -> Project... here we goooooooo!

And that's why I never finish my personal projects.


I love deleting code. The way I see it, the faster you can delete code and start over (whether it's a line or a project), the more hours you save by not continuing bad work.


I completely agree :-)

(added as a comment to a post called 'What little things make you happy while coding?')

Deleting code.

(The context here being: working with problematic legacy code and getting to the point where you have new code (paths) that do(es) the same thing but without the issues, so that the old code, and its issues, can just be discarded)

@ben: so I went to look up the syntax for a comment in the Editor guide, with my comment id ready for cut-and-paste. Turns out there was no need. Coincidence to the rescue! :)


A company I used to work for started using QlikView for all their data dashboarding and actively resisted pleas from my manager to even investigate other stacks. Cut to several years later: they had me writing a "big data" application in QlikView, a tool that holds all of its data in RAM and forces all active dashboards to share that RAM, and it Did Not Work.


So... Aside from the sharing issue, if the power goes out, data goes bye bye, right?


The data was usually pulled from somewhere else, and you couldn't add data on the fly, so that wasn't a problem so much.

But if one app went down, it took all the rest on that machine down with it.

...There was (pretty much) one machine.


I once worked on some code that imported product definitions from central office to local factories. I was assigned to fix the bugs on this program since they needed to be fixed fast, otherwise product definitions could not be imported and the factory would not run.

I received bug reports once or twice a week, but I quickly saw a pattern: the program was set up poorly. It had been created when there was one factory and two products, but by the time I worked on it there were 20+ factories and hundreds of products. The way the program had been expanded was by copy/paste. Of course.

So I asked if I could rewrite it. Negative. It would take about a week to rewrite, while fixing a bug took a few hours, so do the math. But I grew sick of fixing the same sh*t over and over again and started to rewrite it bit by bit. Management must have noticed that my fixes took significantly longer, but if they did, they didn't say anything. Over the course of a few months I completed the rewrite as a separate project during the hours booked for fixing bugs, and when I finally put it in place I really crossed my fingers. But it worked out fine; it came back only once or twice for some small things, and after that I never had to fix it again.

Unfortunately, management still thinks they made the right decision...


Yes I have been burned by one.

The short version

A company developed an in-house software solution which didn't work out. The typical pattern of quality vs. speed (»speed now, quality later«).
Lessons learned? None.

Coming up: the next in-house solution. "A new hope," so to say.
Though it was better than the first try, a new big mess was created.
Some time later I joined the company and was thrown into the shark tank.

The result:
The project was over a year late, not to mention over budget.
It was literally a death march.
Two people left the company before the project ended.

Lessons learned? None.
The solution wasn't thrown out the window, as it should have been.
Instead, more money, time, and people are thrown at it. At some magical point in time everything will be unicorns, or so the belief goes. A textbook example of the sunk cost fallacy.

Good luck with that.
