DEV Community

Blaine Osepchuk

Posted on • Originally published at smallbusinessprogramming.com

Software security is hopelessly broken

As software developers, we are doing a terrible job of protecting the data we collect from our users because software security is hopelessly broken. This is a huge topic so I'll restrict my comments to coding, encryption/hashing, web server configurations, regulation, and what we can do about the security of the software we create and maintain.

Programming needs to be significantly safer by default

We're failing on the easy stuff. We hard-code API keys and passwords into our code bases and then push them to GitHub, store user passwords insecurely, write code that's vulnerable to injection and overflow attacks, fail to properly validate data before using it, leave our backups unprotected, keep data we no longer need, and on and on.
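The hard-coded secrets problem, at least, has a cheap fix: read credentials from the environment so they never land in version control. Here's a minimal sketch in Python (used purely as an illustration; the variable name `PAYMENT_API_KEY` is made up):

```python
import os

def get_api_key():
    # The secret lives in the environment (or a secrets manager),
    # not in the source tree, so it can't be pushed to GitHub.
    key = os.environ.get("PAYMENT_API_KEY")
    if key is None:
        # Fail loudly at startup instead of limping along without credentials.
        raise RuntimeError("PAYMENT_API_KEY is not set")
    return key
```

The same pattern works in any language; the point is that the repository contains only the *name* of the secret, never its value.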

I bought a book on secure programming around 2002, and all the risks identified in that book are still very much with us today. We've barely moved the needle in the last 15 years!

The only way we are going to make significant progress on software security issues is to make programming safer by default. Trying harder hasn't worked and is unlikely to work for the vast majority of projects in the future.

Sure, it would be great if every developer had security training and lived by the programmer's oath. It would definitely help. I wrote quite a popular post about software professionalism and security if you're interested.

The problem is that we already have a thousand things to remember every time we write a line of code, and it's naive to think that humans (with our pitiful working memory of 7 ± 2 items) will ever remember to do everything right all the time. (Or that your boss will let you take another month to review your code for security problems before you release it.)

Secure programming in C is basically impossible

Have you ever looked at the security guidelines for properly range checking an array index in C? Not fun. Who's going to get that 100% correct every time? If you write a significant project in C, you are going to have trouble ensuring that you never index an array out of bounds, overflow an integer, dereference a null pointer, and so on.

You can staple copies of the MISRA C standard to your developers' foreheads and review every commit five times, but you'll still miss piles of errors.
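Contrast that with a bounds-checked language, where an out-of-range index fails loudly instead of silently reading adjacent memory the way C can. A tiny Python illustration (Python stands in here for any bounds-checked language):

```python
def read_element(buf, i):
    """Return buf[i], or None if i is out of range."""
    try:
        return buf[i]
    except IndexError:
        # The runtime catches the bad index for us: the error is
        # contained, and no out-of-bounds memory is ever disclosed.
        return None
```

(One caveat: Python treats negative indices as counting from the end, so a real validator would also reject `i < 0`. The point stands: the *language* enforces the bounds, not the programmer's memory.)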

What does safer look like?

  • computer languages that have automatic bounds checking
  • database abstraction layers that automatically escape inputs to prevent SQL injection attacks
  • templating engines that automatically escape output by default to prevent cross-site scripting
  • form builders that automatically add and check for a unique token on every submission to prevent cross-site request forgery
  • data validators that make it easy to prevent injection attacks
  • web frameworks that have well designed and tested authentication and authorization capabilities
  • tools that allow software developers to statically and dynamically examine their code and report a prioritized list of problems
  • security scanners that are easy to use

These things work because you get the security benefits for free (but only if you actually use them). Secure coding has to be automatic and free if we expect it to work.
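Two of the items on that list can be seen in miniature in Python (used only as an illustration; the table and variable names are made up). Parameterized queries and automatic output escaping are exactly the kind of "free" security the list describes:

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice",))

# Parameterized query: the driver quotes the value for us, so hostile
# input can't break out of the SQL statement.
hostile = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (hostile,)
).fetchall()
# The injection attempt matches nothing; rows is empty.

# Output escaping: a good templating engine does this automatically,
# so script tags in user data render as inert text.
safe = html.escape("<script>alert(1)</script>")
```

In both cases the developer writes the obvious code and gets the security benefit without having to remember anything.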

Password hashing and encryption need to be idiot-proof

We need simple ways to get the benefits of the most up-to-date security practices without becoming experts. PHP developers have actually done some impressive work in this area.

Secure password hashing

For example, password hashing in PHP is now simple to use and strong by default. PHP has three core functions (password_hash(), password_verify(), and password_needs_rehash()) that do everything you need to securely store and verify passwords. We upgraded one of our websites in a couple of hours. PHP now takes care of salting and securely hashing our passwords, and our code will even upgrade our hashes automatically when something better comes along.

Here's the best part: people using PHP's secure hashing functionality don't need to understand security best practices, salting, rainbow tables, or the difference between MD5 and SHA-256. And that's the way it should be.
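The same hash-and-verify idea can be sketched in a few lines of Python's standard library (this is an illustration of the concept, not a recommendation over your platform's built-in facility; the iteration count here is deliberately low to keep the example fast):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; use your platform's recommended value

def hash_password(password: str) -> bytes:
    # A fresh random salt per password defeats rainbow tables.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, digest)
```

Even this short sketch shows why a built-in API is better: the salt handling, the comparison, and the parameter choices are all things a busy developer can get subtly wrong.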

Secure application level encryption

Application level encryption should be dirt-simple to use. Anybody should be able to call encrypt($cleartext, $key) and decrypt($ciphertext, $key) and know that it's secure without understanding anything about how encryption works.

If you're an expert, go ahead and use the lower-level functions. But most of us just need to encrypt a string and store it safely so we can decrypt it later. So just give us something safe to use and we'll use it. Encryption isn't quite as easy to use as password hashing in PHP, but it's getting close. Check out this implementation (scroll down for example code). I imagine simpleEncrypt() and simpleDecrypt(), or something similar, will eventually make it into the PHP core.

Servers need to be easier to configure and more secure by default

Have you ever tried to set up a web server and make it secure? I have, and it's not fun on Windows or Linux. The level of knowledge you need to do this well is insane. But even if you do manage to create what you believe is a "secure" configuration, you have no guarantees that your server will remain secure tomorrow or next week.

What would be better? Imagine if Apple developed the GUI for a web server OS that was built to the security standards of the OpenBSD project. This is out of my wheelhouse so forgive me if I say something silly.

Here are some features I'd like to see in a secure web server OS:

  • it's easy to see the configuration of the system and how it has changed over time (and who changed it)
  • the server monitors the behavior of logged-in users and reports anything suspicious (along with a recording of everything they did and saw during their session)
  • it's easy to see if someone is attacking your system, how they are attacking it, and what the OS is doing to stop the attack from succeeding
  • the server should contact the admin if it needs help defending itself from an attack (and suggest actions the human should take)
  • the OS should only allow software it thinks is safe to be executed (I know this is very challenging in practice but I can dream)
  • configuration changes are made through wizards (or scripts) and the system won't allow you to make silly configuration mistakes (like creating an ssh account with an easily guessed password)
  • the OS should monitor how it is used and suggest or automatically turn off unneeded functionality
  • the OS should automatically install signed updates without requiring a reboot but allow rollback if necessary (or have a configurable update policy)
  • built-in encrypted full backups with one click restores
  • the OS should be wary of new hardware and anything plugged into its USB ports
  • the file system is encrypted by default
  • the OS uses address space layout randomization by default
  • multiple servers can be managed from a single interface with ease
  • the server should fail safely (never reveal sensitive information about itself or its data)
  • the OS should be able to run a self-test and tell you all the places it can be accessed/exploited
  • the OS should learn from the successes and failures of other systems to improve its security and performance (like anti-virus software does today)
  • all firmware is cryptographically signed

I know this stuff is easier said than done but you can't dispute the fact that there's lots of room for improvement here. There's also no shortage of controversy around making computing safer. In many ways freedom and flexibility are at odds with security.

New regulations are going to force us to change the way we design and construct software

I'm interested to see what is going to happen to the software world when the EU's new data protection regulations come into effect on May 25, 2018. These regulations are specific and the penalties for not complying with them are steep but the details of how it's going to be enforced are still unclear. I'd be surprised if 2% of the software in the wild that contains user data complies with these regulations. And making your existing software compliant is going to be expensive.

Plus, this is just the beginning of the regulation of non-safety critical software. I predict more and more regulation will be thrown at us as people get tired of data breaches and the damage caused by our crappy software. People will seek government protection.

I also wonder when insurance companies are going to routinely set premiums for businesses based on what kind of software they develop and how carefully they develop it.

It should be interesting to see how it all turns out.

Okay, software security is hopelessly broken. What happens next?

I believe we'll get slightly better at writing secure software in the coming years. But the bad guys will continue to steal our data with ease.

We'll use safer languages, better tools, and incrementally better software engineering practices (like testing, static analysis, and code reviews) to create software that offers our users slightly more protection. Big companies like Google, Microsoft, and Facebook will do a better job of writing secure software than small companies. Apps and IoT devices will remain an absolute disaster area, but almost all software will remain vulnerable because, as I've said before, software security is hopelessly broken.

There are just too many ways to make a programming or configuration mistake, to trick you into defeating your own security, or to attack your system at another level (network, router, OS, hardware, firmware, physical, etc.).

Plus there are billions of lines of code out there that will never and can never be upgraded to be secure because:

  • the software has been abandoned
  • the expense of modifying existing software is prohibitive
  • it's basically impossible to add effective security controls to an existing insecure system
  • we don't have enough security experts to go around
  • there's no money in fixing it

Conclusion

Here's the thing: our entire modern computing apparatus is held together with duct tape. There is no bedrock layer in computing that we could drop down to and say "everything below this point is safe so we're just going to rebuild from this level."

Nothing like that exists. We could employ security experts to redesign and rewrite everything from scratch (hardware, firmware, OS, applications, protocols, etc.) with insane attention to detail. But we don't yet know how to make complex software without errors, certainly not at the scale we are talking about here.

Plus, what are you going to do about the people? They're part of the system too. Remember, the bad guys can just drug you and hit you with a wrench until you give up all your passwords.

You also have to worry about physical security, because someone could slip a key logger between your keyboard and your computer. Or remotely read the EMF off your keyboard (it works with wired keyboards too). Or just install a small camera in your room near your computer and take videos of your screen. Or activate your webcam and read your screen in the reflection in your glasses. Or any of a million other things.

Nope. The truth is that software security is hopelessly broken.

What can you do?

  • keep your software up to date--security updates are the best defense you have
  • comply with all applicable laws and regulations such as GDPR, HIPAA, PCI-DSS, etc.
  • educate yourself about security best practices, the tools, and the languages available to you to help you reduce the cost of writing secure software
  • use a risk mitigation strategy to protect your most sensitive data first because you can't fix everything all at once
  • allocate time to fix lower priority security issues because they are never going to fix themselves
  • raise awareness about your security issues by talking about them with your coworkers (both programmers and non-programmers)

What do you think? Do you believe software security is hopelessly broken? I’d love to hear your thoughts.

Top comments (8)

hepisec

Just a few thoughts as your post is quite pessimistic.

  • If you don't have the knowledge to configure a webserver, consider using a PaaS, e.g. Google App Engine. This way you hand over all the hassle to an experienced team of system engineers who work 24/7 to keep your app online.

  • Or you can use a server management software. From my own experience Plesk is really good at this. However, the default configuration can still be improved.

  • Before you reinvent the wheel (e.g. building the next eCommerce software), check for available Open Source solutions in the field and their developer documentation. You'll benefit from the efforts of the community to build a solid software.

Blaine Osepchuk

Thanks. These are good tips to help people outsource some of the problems I mentioned, which is a viable strategy.

However, they don't address the underlying issues with software security. Your code is/was still vulnerable to Meltdown and Spectre no matter how you serve it.

hepisec

I don't think that "code" can be vulnerable to Meltdown and Spectre. These are information leakage vulnerabilities which require to run code on your machine. If you're running your web application on bare metal (no shared host), you won't be affected much as long as you apply normal security best practices.

In cloud environments these vulnerabilities are critical, but I expect all major cloud platforms to apply the patches quickly.

Vulnerable clients should also apply normal security best practices, including ad blocking and patching.

Blaine Osepchuk

Yes. Where I said "code" it would have been more accurate to say "the security of the information contained in your app" is/was still vulnerable...

Andrew Sackville-West

Disclaimer, I haven't read your whole piece yet...

The principal problem I see is that commercial development is done with too much time pressure and not enough focus on security. In my experience, security is always considered last when building new software, as in "we'll come back and add security after we get the product working." And then, business is always eager to deprioritize security. It takes too long, is too finicky, too restrictive, and doesn't "add value". I recently patched a bug in my employer's auth that had been in place for at least two years and shipped in several versions. It wasn't a priority because our customers never actually enable auth....

The bottom line is, despite all the words to the contrary, business only cares about security to the level it impacts the bottom line. We, as employees, just don't have much impact on that. Thus, this problem will continue despite software developers' best intentions.

Or maybe I'm cynical.

Blaine Osepchuk • Edited

No, I think you have a valid point of view and your experience matches mine (and that of many others).

However, the way you've framed the problem takes most/all the responsibility off your shoulders as a software developer. But you are far from helpless. You can:

  • recommend safer languages over less-safe languages for new projects
  • use frameworks and other tools and libraries to "go faster" and not talk about the security benefits
  • educate yourself and your team about security and follow best practices for new code
  • report ineffective or non-existent data validation as a defect in your bug tracker (not a security issue)
  • fail code reviews for defects (including security related defects)
  • use a static analysis tool to increase your team's productivity and reduce mistakes (but don't mention the security benefits to management)
  • make sure you keep your software up to date

For example, we sold our product owner on https everywhere for the bump we'll get in our search engine rankings, not the security benefits.

Quality and speed are not opposites. That's backed by research, which I wrote about near the end of this post. That's why most companies that try automated testing, design reviews, code reviews, etc. get so many benefits that they can't imagine producing software any other way.

You can go a long way with the strategies I've described above without ever having to have explicit permission to work on "security". You can appeal to management's desire for improved quality or productivity or efficiency and get the security benefits for free on the side.

Adrian B.G.

I was surprised to see that database injection is still #1 in the OWASP Top 10 for 2017, but then again the industry has a big influx of newcomers, and the intro learning resources lack security chapters.

A good thing is that containers and managed services took many issues from our hands into the proper ones, security experts that work for datacenters and service providers.

Blaine Osepchuk

I know, right? And it's not just small projects that still have SQL injection vulnerabilities; big companies are still making headlines with them. Here are some recent examples.

Managed services are a good thing overall, but I wonder how many teams actually understand the strengths and weaknesses of outsourcing. Are they still thinking about security, or just throwing it over the wall and assuming that their provider is doing whatever is required to keep their project safe?