You should always make a big fuss about mistakes that happen in production. Of course mistakes happen, but why didn't this mistake happen in Test or Acceptance? What do you need to fix in the way you go to production so that it does not happen again next time?
Quite often production issues are the result of a "good enough" mentality. The quality should be good; it does not have to be perfect. Something is good when it works, and you have proof that it works. It might not be the best in performance or scalability, but you know where its limitations are. And this is where something that is good can still fail in production. For example, in the case of the Amazon issue, they probably had an incorrect estimate of the surge of new devices. And that's fine. But if they did not even consider it, and test for it, then they deserve all the fuss that should be made about it.
In production there should be only two kinds of issues that are (kind of) acceptable.
Neither of these issues is really solvable. You can only reduce how often they occur. That is what defines your software/process maturity.
You can attempt to expose these problems by employing things like chaos engineering and fuzz testing. But that only gets you so far. Fuzz testing generally only finds the edge cases of a single unit, while a "That's interesting" failure usually needs a whole series of edge cases to line up at once.
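To make that concrete, here is a minimal, stdlib-only sketch of what unit-level fuzzing looks like. The function `parse_port` and the harness `fuzz` are hypothetical names invented for this example, not from any real codebase: the harness throws random strings at a single unit and records any failure that is not the expected rejection.

```python
import random

def parse_port(s: str) -> int:
    """Hypothetical unit under test: parse a TCP port number."""
    n = int(s)  # garbage input raises ValueError here, which we expect
    if not 0 < n < 65536:
        raise ValueError(f"port out of range: {n}")
    return n

def fuzz(trials: int = 1000, seed: int = 42) -> list:
    """Throw random printable strings at the unit and collect any
    input that fails with something other than the expected ValueError."""
    rng = random.Random(seed)  # seeded so a found failure is reproducible
    surprises = []
    for _ in range(trials):
        s = "".join(chr(rng.randrange(32, 127))
                    for _ in range(rng.randrange(0, 8)))
        try:
            parse_port(s)
        except ValueError:
            pass  # expected: the unit rejects garbage cleanly
        except Exception:
            surprises.append(s)  # unexpected crash: an edge case worth a bug report
    return surprises
```

Note what this does and does not cover: it hammers one function in isolation, so it can surface an "Oh Fuck" bug inside that unit, but it will never reproduce the multi-component chain of edge cases behind a "That's interesting" incident. That is the gap chaos engineering tries to probe at the system level.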
100% this. ^
I'm now changing incident report reasons to "Oh Fuck" and "That's interesting...".
So. Much. Yes.