I recently found myself faced with a decision where the best outcome may be to abandon prior work and start over more simply.
This decision was hard to reach because my brain wants me to reap the rewards of prior labor, even when the result is less than ideal.
Do you have any stories where you or your team went too far down the sunk cost rabbit hole?
XKCD always has the answer
Repair Client: I have anti-virus software, but I got a bunch of viruses anyway.
Me: That's because the anti-virus software you're using has a reputation for missing most viruses. You need to use this other one (free), which is proven by independent lab tests to catch almost all malware.
Repair Client: I'd rather finish using the current anti-virus software. (About a year left on the subscription.) I paid a lot of money for it.
Me: Yes, but it doesn't work.
Repair Client: I don't care. I just want to get my money's worth.
There are so many people who are exactly like this, and I don't understand it! It's hilarious and frustrating at the same time.
Sounds more like the viruses are the ones getting his money's worth.
Edit: I suppose this is the opposite of the Sunk Cost Fallacy. :S
I do. I previously posted my dev history, which contained this:
However, I have successfully done rewrites before (on the previous project, in fact). But the only successful rewrites have been progressive rewrites, not big bang. It goes right along with this quote.
Our legacy system still has numerous known issues. It's like a hallway with lots of immovable obstacles and things sticking out of the wall. You eventually learn how to navigate the hallway and it becomes normal. But it limits what you can carry through it. So I have been approved and have started a progressive rewrite. This time I am doing a parallel system, and propagating data to the legacy system which is necessary to keep it functioning. Then migrating piece at a time to the new system and turning off that piece in the old system. That's the plan anyway.
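The parallel-system approach above can be sketched in code. This is a minimal, purely illustrative sketch (the class and feature names are my own, not from the post): a small router sends each feature to the new system once it has been migrated, and writes through to the legacy system so it keeps functioning until its pieces can be turned off.

```python
# Illustrative sketch of a progressive rewrite via a parallel system.
# All names here (LegacySystem, NewSystem, MigrationRouter) are hypothetical.

class LegacySystem:
    def __init__(self):
        self.data = {}

    def handle(self, feature, payload):
        self.data[feature] = payload
        return f"legacy:{feature}"


class NewSystem:
    def __init__(self):
        self.data = {}

    def handle(self, feature, payload):
        self.data[feature] = payload
        return f"new:{feature}"


class MigrationRouter:
    """Route each feature to whichever system owns it, one piece at a time."""

    def __init__(self, legacy, new):
        self.legacy = legacy
        self.new = new
        self.migrated = set()

    def migrate(self, feature):
        # Flip one piece over to the new system; the rest stay on legacy.
        self.migrated.add(feature)

    def handle(self, feature, payload):
        if feature in self.migrated:
            result = self.new.handle(feature, payload)
            # Write-through: propagate data the legacy system still needs,
            # so it keeps functioning during the migration.
            self.legacy.data[feature] = payload
            return result
        return self.legacy.handle(feature, payload)
```

The point of the sketch is that migration happens per feature, not big bang: each `migrate()` call moves one piece, and the legacy system stays consistent until the last piece is turned off.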
When to rewrite
It is rarely worth rewriting simply to re-express the same solution on top of a different platform (framework, language). In that situation, all you really see are the upsides of the new platform. It isn't until you get into it that you begin to see the new challenges. It often ends up with an equal number of issues, just in different places.
I usually find it worth a rewrite when the current code base / system constraints both restrict new kinds of features and resist being refactored. This ends up increasing the overhead of a new feature to be many times the cost of the feature implementation code itself. But the only way to make an improvement by rewriting is to first figure out a system/organization strategy (design, architecture) which is a better fit for your situation.
Well said, and I completely agree: rewriting exactly the same thing on top of different tech "just because" is begging for trouble.
The biggest sunk cost fallacy comes from lock-in with a vendor or platform. Switching costs can be incredibly high, we understand that, but we're also terrible at estimating the on-going cost of not switching.
I think the biggest lock-ins come from languages, frameworks, and databases, so entrenched that teams stick with bad decisions for decades and refuse to admit that the requirements changed.
The vendors and platforms can be easily changed, most public services are very similar in behavior.
"Similar" is only superficial. For example, PayPal and Stripe offer similar features, but the way they work in practice is completely different. If you build your business on top of one it's not necessarily easy to switch. PayPal does things Stripe doesn't do, and vice-versa.
The key difference here is that programming languages and many frameworks are run by organizations committed to improving that product for its user base. Their priorities are centred around making the platform better.
Any "software as a service" company is focused on building their business and they may make dramatic changes that aren't in your best interest. They may shut down their API for no reason. They may pivot to a new business model that makes their service prohibitively expensive. They may get acquired and run into the ground.
As one example, PHP will no doubt be around in ten years, and while the popular frameworks will no doubt change between now and then, any framework you pick now will still exist then even if it has a much smaller user base. You can always patch and update it yourself if necessary.
The same is not true for that SaaS that looks so amazing today. It might be completely gone in ten years.
It is true, but ...
most projects I've seen that are built on third-party APIs can be switched over with a few weeks of dev hours.
For example, PayPal and Stripe can easily be replaced on most websites I know, sometimes with a single click (install a different addon). We moved from Parse to a custom API in about two weeks, I think (when it failed).
What I meant is:
Ah, interesting. I have the completely opposite experience: most languages and platforms offer similar results one way or another, so picking any one won't make a difference in the long run. It's the internal business logic that becomes the issue and can't easily be changed, partly due to code structure, but more so because of the existing business logic itself.
Completely agree about 3rd-party integrated services: these companies can close down, or be bought and change their business model completely, so that's always a risk.
But in the end I think it's more important to be able to adjust and jump on new business opportunities than to worry about tech stack.
A company I used to work for started using QlikView for all their data dashboarding and actively resisted pleas from my manager to even investigate different stacks. Cut to several years later, they have me writing a "big data" application in QlikView, which is a tool that holds all of its data in RAM and forces all active dashboards to share that RAM, and it Did Not Work.
So... Aside from the sharing issue, if the power goes out, data goes bye bye, right?
The data was usually pulled from somewhere else, and you couldn't add data on the fly, so that wasn't a problem so much.
But if one app went down, it took all the rest on that machine down with it.
...There was (pretty much) one machine.
Unlike most commenters here, I have the opposite problem.
If I get the tiniest whiff of something off in my architecture, File -> New -> Project... here we goooooooo!
And that's why I never finish my personal projects.
But, in my experience, it's better not to do complete rewrites; they always take at least four times longer than you hope. Refactoring and extending, yes. Rewrites are good in the first stages of a project, when you're doing MVPs and prototypes and figuring things out.
"Sunk costs fallacy is analogous to "that's the way we've always done it"
So freaking what!
Working for the federal government, there is a near-total inability to say, "hey, the money spent doesn't matter, it's a sunk cost." Changing course on a big scale can also lead to bad press about money wasted. This makes sunk costs completely foreign and nearly impossible to deal with, especially since the supporting structures that perform the work also don't react well to change. Hey, mid-contract we found a better way to do X? Well, too bad, because a contract mod will take until beyond the end of the contract to get done, so plug away on the lesser solution.
Generally, if it's purely a dollar thing, the economics of sunk costs can be worked around, but sunk costs of people's time and passion are another matter. We might get more dollars; getting back people's time ain't gonna happen. People are funny like that, and time too.
As a fan of economics and software engineering: nope, no story. I intentionally ignore all sunk cost arguments. P*ss*d off a boss or two along the way, and made many a non-fan among people who say "because that's the way we've always done it."
"This is the way we've always done it" and "we don't need automated tests. We've used the same scripts for 20 years!" are my two pet peeves in doing government work...
I love deleting code. The way I see it, the faster you can delete code and start over (whether it be a line or a project), the more countless hours you save by not continuing bad work.
I completely agree :-)
(added as a comment to a post called 'What little things make you happy while coding?')
Deleting code.
(The context here being: working with problematic legacy code and getting to the point where you have new code (paths) that do(es) the same thing but without the issues, so that the old code, and its issues, can just be discarded)
@ben: so I went to look up the syntax for a comment in the Editor guide, with my comment id ready for cut-and-paste. Turns out there was no need. Coincidence to the rescue! :)
Yes I have been burned by one.
The short version
A company developed an in-house software solution which didn't work out. The typical pattern of quality vs. speed ("speed now, quality later").
Lessons learned? None.
Coming up: the next in-house solution. "A new hope," so to say.
Though it was better than the first try, a new big mess was created.
Some time later I entered the company and was thrown into the shark tank.
The result:
The project was over a year late, not to mention over budget.
This was literally a death march.
Two people left the company before project ended.
Lessons learned? None.
This solution hasn't been thrown out the window as it should have been.
More money, time, and people are thrown at it. At some magical point in time everything will be unicorns, or so the belief goes. A classic textbook example of the sunk cost fallacy.
Good luck with that.