If you're always playing poker for very high stakes, sometimes you're gonna get burned for very high stakes.
Especially when you're trying something new (like launching new code), doesn't it make sense to play for lower stakes?
If the worst-case scenario for your code is "no customers can check out all day," then you can use modern tools like A/B test groups to release that code to only 1% of customers, drastically reducing your risk. Now your worst-case scenario is "1% of users couldn't check out all day."
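To make the idea concrete, here's a minimal sketch of how a deterministic percentage rollout can work. The function name and bucketing scheme are my own illustration, not any particular library's API; real feature-flag tools handle this for you, but the core trick is usually the same: hash the user into a stable bucket so each user consistently sees one path.

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically decide whether a user is in a feature's rollout.

    Hashing (feature, user_id) gives each user a stable bucket per feature,
    so the same user sees the same code path on every request.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000   # buckets 0..9999 (0.01% granularity)
    return bucket < percent * 100       # percent=1.0 -> buckets 0..99

# Release the new checkout to 1% of users:
if in_rollout("user-42", "new_checkout", 1.0):
    ...  # new code path
else:
    ...  # existing code path
```

Because the bucketing is deterministic, a user who hits the broken path is at least consistently on it, which also makes the eventual bug reports easier to correlate.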
But how can we get alerted earlier, so our outage isn't all day long? With dashboard tools like AppOptics or Amplitude, and automated alerts from the likes of PagerDuty or Pingdom, we can detect anomalous behavior sooner, as long as we define our metrics in advance.
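Under the hood, the check these tools run is often no fancier than "did a key metric fall too far below its baseline?" A sketch of that shape, with an illustrative metric and threshold (the function name and numbers are mine, not any vendor's API):

```python
def checkout_anomaly(baseline_rate: float, current_rate: float,
                     tolerance: float = 0.5) -> bool:
    """Flag an anomaly when the checkout success rate drops more than
    `tolerance` (a fraction of baseline) below its historical value."""
    return current_rate < baseline_rate * (1 - tolerance)

# Baseline: 95% of checkout attempts normally succeed.
assert not checkout_anomaly(0.95, 0.93)  # normal jitter: no alert
assert checkout_anomaly(0.95, 0.40)      # big drop: page someone
```

The hard part isn't the arithmetic, it's deciding in advance which metric proxies for "customers can check out" and what drop is worth waking someone up for.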
When an anomaly is detected, we can rely on our feature flipper library of choice to turn off the code path for everyone. Maybe we've reduced our outage from "No one could check out all day" to "1% of users couldn't check out for 1 minute."
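A toy sketch of that kill switch, assuming an in-memory registry (my own illustration; real feature flippers back this with a shared store such as Redis so the flip propagates to every server without a deploy):

```python
import hashlib

class FeatureFlags:
    """A minimal in-memory feature flipper. The request path consults
    the registry on every call, so flipping a flag off takes effect on
    the very next request."""

    def __init__(self):
        self._percent = {}  # feature name -> rollout percentage

    def enable_percentage(self, feature: str, percent: float) -> None:
        self._percent[feature] = percent

    def kill(self, feature: str) -> None:
        """Emergency shut-off: route everyone back to the old path."""
        self._percent[feature] = 0.0

    def enabled(self, feature: str, user_id: str) -> bool:
        percent = self._percent.get(feature, 0.0)  # unknown flags are off
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 10_000 < percent * 100

flags = FeatureFlags()
flags.enable_percentage("new_checkout", 1.0)  # the 1% rollout
flags.kill("new_checkout")                    # anomaly detected: everyone off
```

The design choice that matters here is that "off" is a data change, not a code change, which is what shrinks the outage from "until the next deploy" to "until someone flips the flag."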
A much better scenario.
Testing is just one of the ways we can feel confident about our code. The more we can decrease our maximum risk, the less we stand to lose. Think of it like portfolio diversification.
I think it's important that we start out by giving ourselves the lowest possible stakes, the least-bad worst-case scenario, before we even begin evaluating the value of testing. Why would you want every deploy to potentially risk losing the company a bunch of money?
Before you even begin testing, remember to weigh what's at stake, and consider what alternative approaches you can use to mitigate your risk. When testing is essential for code confidence, make sure your tests aren't providing a false sense of security.