Why isn't software secure by default?


I'm wondering why developers don't make their software secure by default.
MongoDB (just as an example) doesn't even ship with authentication enabled.

I understand that this makes it easier to set up and try out. But it is even more complicated to make the installation secure afterwards.

Do you have any examples of other software which is insecure by default?
Where do you get information about how to make your installed software safe and protect it against attackers?

DISCUSS (22)
 

I think the question is in the same ballpark as "Why isn't software bug-free by default?"

Security is a feature. It is an expensive feature to implement, and even more expensive to try to add in after-the-fact.

Consider Microsoft Windows, which is a special kind of software: it is an operating system.

Is Microsoft Windows secure?

I would argue, vehemently, that Microsoft Windows is the most secure operating system in the world. But... it is also the most attacked operating system in the world, because it is the most successful operating system in the world.

Yet despite the enormous effort that Microsoft has put forth to make Windows so very, very secure, vulnerabilities are discovered. Frequently.

Why? Because Windows is a very large piece of software, with many components.

Security is hard. And expensive.

 

I think it's more complicated than that.

If you're NASA you may be able to mathematically prove that each and every command does exactly what it needs to do, but that is unrealistic for everyday apps. There, security is first and foremost a mentality.

You can't develop a realistic real-life application without security in mind and then just create an EPIC in JIRA to make it secure afterwards. I mean, you can do that, but you won't end up with secure software at the end. It's pretty much guaranteed that you'll end up rewriting everything from scratch to make each and every layer safe. Even then, software security will remain relative; it's not a feature that your software either provides or doesn't.

 

You might also say that there are not a whole lot of actual consequences for neglecting security. It's pretty clear that some corporations with win-at-all-costs attitudes tend to operate in less-than-ethical ways. They leak passwords, make a press release, deal with a bit of bad PR, then move on. Security is hard, but I guarantee there are a lot of business leaders negligently putting their users' security at risk to move faster and save a buck.

I'd have to agree with that, but I'd say that good engineers should also have the integrity to stop such malicious practices. Plus, business leaders would make more of an effort if customers valued secure software more. This further leads me to believe that everyone involved is at fault, and as engineers we should focus on fixing our part first and foremost.

Maybe we, as engineers, need to create a framework by which customers can observe and judge the quality of security in the products they use. If only customers could see which sites / products / etc. are cutting corners and which ones are engineering with quality, they could vote with their feet and their pocketbooks. That would create the pressure to stop cutting corners, and financially incentivize good engineering. Imagine if there were a Consumer Reports type of independent review of software security!

I like this line of thinking. Google and others have made the web a lot more secure by really freaking out when things are not https in key places and it's only going more that way.

It's definitely something that could be done at a grass roots level too. An independent review paired with great PR push could get the info diffused pretty well. I bet there are a lot of news orgs that would report on a well-made periodic report on who's cutting corners with your security. It wouldn't catch people preemptively per se, but it could go a long way in encouraging the right behavior.

A review like this should ideally be balanced and understanding of mistakes, but be really hard on negligent responses/cover-ups or negligence in general, as well as perhaps discussing trends which could lead to future problems, like the well-predicted security issues with IoT.

When discussing this, one thought is "this must exist already in some form, why bother" — but if you and I haven't heard of it then, by definition, it hasn't reached a wide enough audience and may have strategic flaws.

If anyone feels motivated to create something like this, feel free to ping me at ben@dev.to and my team and I could help out in some way.

I am glad that someone other than me has the will to take this on, because to tell the truth, I myself have neither the energy nor the expertise to do it. (Sorry!) But I want to say, go go go!

Sue

 

You're absolutely right! It is not the easiest thing to write secure software.

But I actually meant that many server applications already have security features documented, yet they are not enabled by default.

I'm just taking MongoDB as an example and I don't want to blame MongoDB (actually I really like it), but you have to enable authentication yourself. Why isn't authentication enabled by default, so that you'd have to specifically disable it instead?
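To be fair, MongoDB does document how to turn this on. A minimal `mongod.conf` sketch (using option names from MongoDB's documented YAML configuration format; treat this as an illustration, not a complete hardening guide) might look like:

```yaml
# mongod.conf -- minimal sketch: require authentication and
# only listen on the loopback interface.
net:
  bindIp: 127.0.0.1
security:
  authorization: enabled
```

With `authorization: enabled`, clients must authenticate as a database user before reading or writing; the question is why this isn't the shipped default.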

 

A product that is easy to install and just works performs, business-wise, far better than one that requires configuration.

Simplicity sells.

I think you'd have a hard time naming one product that is actually shipped as secure, or is easy to secure, in the eyes of a security expert.

 

Indeed it does.
Maybe the developers should provide a quickstart version (which is useful for demos or trying things out) and one for the production environment where e.g. authentication is required.

But we all know that's not a decision devs will take ;)

 

There is far more to securing something than enabling authentication, and authentication isn't always needed to make something secure. If you have a database that runs on a local server and doesn't listen over the network you're really not adding any meaningful security to it by adding a username and password.

If you accept connections over the network you're also not necessarily secure unless you have a very good plan in place for key management, which is a pretty hard problem that people often flub.

 

Here's a classic conversation about something like this years ago. Ruby on Rails has lots of configuration and defaults and some of those defaults are extremely insecure.

Someone noticed that one of these defaults was not only insecure; he also found that a significant chunk of websites weren't choosing a secure alternative or adding other protections to prevent the issue. The side effect was that you could inject content into websites just by adding HTTP parameters, bypassing important checks like "does this user have permission to do this?"

The consensus was that the convenience of the feature outweighed the "bad devs" out in the wild (to paraphrase).

To drive home the point that it's not just dumb people doing dumb things, he submitted an issue dated over 1,000 years into the future and even made a commit to the Rails master branch, because even the smarties at GitHub had left the vulnerability in.
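The mass-assignment mechanism described above can be sketched in a few lines of Python (a toy illustration with made-up names; Rails' actual API differs):

```python
# Toy sketch of a "mass assignment" vulnerability: copying request
# parameters onto a model object wholesale lets a client set fields
# the developer never meant to expose.

class User:
    def __init__(self):
        self.name = ""
        self.is_admin = False  # should only be set by the app itself


def update_unsafe(user, params):
    # Vulnerable: every key the client sends becomes an attribute.
    for key, value in params.items():
        setattr(user, key, value)


def update_safe(user, params, allowed=("name",)):
    # Safer: only explicitly whitelisted fields may be set.
    for key in allowed:
        if key in params:
            setattr(user, key, params[key])


attacker_params = {"name": "mallory", "is_admin": True}

victim = User()
update_unsafe(victim, attacker_params)
print(victim.is_admin)  # True -- the client escalated its own privileges

careful = User()
update_safe(careful, attacker_params)
print(careful.is_admin)  # False -- the extra parameter was ignored
```

The whitelisting approach is essentially what Rails later adopted as "strong parameters": the insecure behavior stopped being the default.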

 

I think the answer can be given through an analogy: why aren't all houses nuclear-bomb-proof by default?

 

I think this analogy is out of place. If you had asked "why don't all houses have locks that are relatively difficult to break," then the answer would be "most houses do, except where the owner was too lazy to consider it important."

 

No, it isn't. And that's because security (that's worth anything at all) is hard. Not just 'a typical front door comes with a reasonably secure lock' hard, but rather 'adding a bomb shelter to a normal house takes quite some time and effort' hard.

The post and your comment rely heavily on the 'just' in "can't you just ...", which is the bane of any hard-working software developer.

I think you are misunderstanding both the original post and my response to your comment. Leaving aside what is the bane of a hard-working software developer: one's house getting nuclear-bombed is a rare phenomenon, and no one builds protection against rare phenomena. But a thief breaking into your house is a regular occurrence, hence you must put a good lock on your house. I think that is security 101. Now, neither I nor the original poster is talking about rare phenomena. We are talking about simple, everyday hacks against which a lot of websites do not have any protection.

Perhaps an important difference in interpretation here is that some seem to understand the above as referring to 'turn the existing security on by default', while others see it as meaning 'adding security as a feature'. The former, I can definitely agree with, the latter not so much (assuming typical time and effort constraints).

You have touched upon an important but subtle difference between the two. And yes, I agree that the former is something every developer should make an effort not to get wrong. The latter depends on your appetite for how nuclear-bomb-proof you want to be.

 

Because people want to run software fast. Speed trumps accuracy and security. People demand fast games, fast mining, and fast computing in general. Software developers are expected to cut corners. Security is just another feature.

... And that's why the entire computer industry is so vulnerable to security attacks. Security doesn't sell well.

 

IMO security issues often stem from starting with 1) invalid assumptions or 2) valid assumptions that are later invalidated/broken. Often, such assumptions stem from not considering all possibilities or the arrival of new possibilities (that break valid assumptions).

 

It depends on the use case. Sometimes authentication is not a hard necessity. For example, if you have a database with only test data on a local VM that only runs during business hours and accepts no connections from the outside world (just from your host machine), there's no real need to set up authentication. Why would you bother to protect test/dummy data used for development?

Other times, services are by default set to bind to the loopback interface (127.0.0.1), also disallowing any "outside" connections. This can sometimes be enough "authentication" (because if an unauthorized person could then connect to your service, it would mean they are already "inside" your system, which is a whole other problem). But shipping something with no authentication options at all sounds like very bad practice.
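The loopback-binding idea can be sketched with Python's standard `socket` module (a toy illustration, not specific to any particular database or service):

```python
import socket

# A server socket bound to the loopback address (127.0.0.1) is
# reachable only from processes on the same machine; binding to
# 0.0.0.0 would accept connections from any network interface.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
host, port = srv.getsockname()

# A client on the same machine can connect...
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.close()
srv.close()

print(host)  # 127.0.0.1 -- remote hosts never see this listener
```

This is defense by network exposure rather than by credentials, which is exactly why it only helps until something else on the box is compromised.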

 

Insecure hardware example:
any cheap router's default username and password: admin:admin :D
