Secure Software Design

As part of the Executive Order on Improving the Nation's Cybersecurity, there is a provision in Section 4, subsection (s), that says:

The Secretary of Commerce acting through the Director of NIST, in coordination with representatives of other agencies as the Director of NIST deems appropriate, shall initiate pilot programs informed by existing consumer product labeling programs to educate the public on the security capabilities of Internet-of-Things (IoT) devices and software development practices, and shall consider ways to incentivize manufacturers and developers to participate in these programs.

When I first read this, I got excited. I think that as a whole, we in the development community don't naturally build security into our applications because we've never been taught what that means. This is an opportunity to improve that.

I've looked into it and haven't seen much published yet, but I knew The Linux Foundation played a major role in various parts of the Executive Order, so I started to explore what they had available. And I found it - training on secure software development through the Open Source Security Foundation (OpenSSF).

https://openssf.org/training/courses/

As it should, the training starts off with how to design secure software. Here are a few of the high-level points you can take away without actually taking the training:

Requirements

Security should be an integral part of the requirements of any software application. Specifically, you should think about the following:

  1. Confidentiality

    Which information should not be publicly revealed? Who is allowed to see the data? Can we avoid having that information at all? What about passwords - how are they stored? (A hashing sketch follows this list.)

  2. Integrity

    Which information should only certain people be allowed to modify? Who are those certain people?

  3. Availability

    This is a hard one as availability is rarely an absolute. How can we develop our software so it isn't easy to overwhelm? Can we build our applications to scale up with load? To protect against corruption, make sure the data is backed up to cold storage.

  4. Non-repudiation

    Is there an action that we want to prove someone took?

  5. Identity & Authentication

    How do people prove who they are? We should implement two-factor authentication whenever possible.

  6. Authorization

    Who is allowed to do what? Implement role-based access control (see the sketch after this list).

  7. Auditing/Logging

    What events are important to record? Logins, logouts, user creation, and user deletion are a must. Each record should include when it happened, what happened, which system component did it, and who caused it to happen (a minimal example follows this list).
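
For the password question under confidentiality, here is a minimal sketch of one common approach: never store the password itself, only a salted, slow hash of it. It uses Python's standard-library scrypt; the function names and parameters are illustrative, not something prescribed by the OpenSSF training.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the plaintext password is never stored."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(digest, stored_digest)
```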
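
And for the authorization and auditing items, here is a minimal sketch assuming a simple role table and Python's standard logging module. The roles, actions, and field names are made up for illustration; the point is the default-deny role check and an audit record that captures when, what, which component, and who.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Hypothetical role table: which roles may perform which actions.
ROLE_PERMISSIONS = {
    "admin": {"create_user", "delete_user"},
    "auditor": {"read_report"},
}

def record_event(action: str, component: str, actor: str) -> None:
    """Write an audit record: when it happened, what happened, which component, and who."""
    audit_log.info(json.dumps({
        "when": datetime.now(timezone.utc).isoformat(),
        "what": action,
        "component": component,
        "who": actor,
    }))

def authorize(actor: str, role: str, action: str) -> bool:
    """Allow the action only if the actor's role explicitly permits it; log the decision."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    record_event(f"{action}:{'allowed' if allowed else 'denied'}", "auth", actor)
    return allowed

# Example: an auditor may read reports but not delete users.
print(authorize("alice", "auditor", "read_report"))   # True
print(authorize("alice", "auditor", "delete_user"))   # False
```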

Privacy

Privacy is the right to be left alone, or freedom from interference or intrusion. Information privacy is the right to have some control over how your personal information is collected and used. Various countries and cultures have wildly differing views on what a person's rights are with regard to privacy.

When setting privacy requirements, often the best thing to do is to not collect the information at all. When you don't collect information, you don't have to tell people how you use it, and you don't have to figure out a strategy for preventing its misuse. If you do need to collect personal information, you must provide protections for it.

The European Union has a comprehensive privacy regulation - General Data Protection Regulation (GDPR). The Linux Foundation has a summary worth reviewing for additional information.

Managing Risk

If people start using the software you develop, expect that intelligent adversaries will try to attack it. Don't wait to address risks. If you ignore them until something happens, they become problems. Addressing risks now is easier and cheaper than addressing problems later. It is also better for professional and organizational reputations to address risks before they become problems.

Identifying risk is something a lot of people are not very good at. Most of us try to use things the way they were meant to be used. As an example, when you go to vote, have you ever tried to vote twice? That is the mindset we need in order to start addressing risk, and the best way to get there is to start doing it now and continuously improve. Software we designed five years ago with security in mind is likely not secure anymore.

Security is rarely once and done

Someone once told me, "we have to be right 100% of the time, but attackers only have to be right once." I don't know anyone who is right 100% of the time, and NIST agrees with me. They've put together the Cybersecurity Framework, which describes how organizations should manage cybersecurity risk before, during, and after an incident. Here are its high-level functions:

  1. Identify

    Develop an organizational understanding to manage cybersecurity risk to systems, people, assets, data, and capabilities

  2. Protect

    Develop and implement appropriate safeguards to ensure delivery of critical services

  3. Detect

    Develop and implement appropriate activities to identify the occurrence of a cybersecurity event

  4. Respond

    Develop and implement appropriate activities to take action regarding a detected cybersecurity incident

  5. Recover

    Develop and implement appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity incident

Secure Design Principles

These are just rules of thumb for building in security, but they should never replace thinking and doing the right thing. They are the same secure design principles Saltzer and Schroeder put together in 1975 in The Protection of Information in Computer Systems.

  1. Least Privilege

    Each user and program should operate using the fewest privileges possible. This principle limits the damage from an accident, error, or attack. It also reduces the number of potential interactions among privileged programs, so unintentional, unwanted, or improper uses of privilege are less likely to occur.

  2. Complete Mediation

    Also known as non-bypassability. Every access attempt must be checked; position the mechanism so it cannot be subverted.

  3. Economy of Mechanism

    Ironically also known as simplicity. The system, in particular the part that security depends on, should be as simple and small as possible. Easier to review, harder to get wrong.

  4. Open Design

    The protection mechanism must not depend on attacker ignorance. Instead, you should act as if the mechanism is publicly known, and depend on the secrecy of relatively few, easily changeable items like passwords or private keys. An attacker should not be able to break into a system just because the attacker knows how it works. Security through obscurity generally doesn't work.

  5. Fail-safe Defaults

    The default installation should be the secure installation. If it is not certain that something should be allowed, don't allow it (see the default-deny sketch after this list).

  6. Separation of Privilege

    Access to objects should depend on more than one condition (such as having a password). That way, if an attacker manages to break one condition (e.g., by stealing a key), the system remains secure.

  7. Least Common Mechanism

    Minimize the amount and use of shared mechanisms. Avoid sharing files, directories, operating system kernel execution, or computers with something you do not trust, because attackers might exploit them.

  8. Psychological Acceptability

    Also known as easy to use. The human interface must be designed for ease of use, so users will routinely and automatically use the protection mechanisms correctly.
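
To make a couple of these principles concrete, here is a minimal sketch of fail-safe defaults and complete mediation: every request goes through a single check, and anything not explicitly allowed is denied. The resource names and the allow list are hypothetical.

```python
# Hypothetical allow list: (user, resource) pairs that are explicitly permitted.
ALLOWED = {
    ("alice", "/reports/summary"),
}

def can_access(user: str, resource: str) -> bool:
    """Complete mediation: every access goes through this one check.
    Fail-safe default: anything not explicitly allowed is denied."""
    return (user, resource) in ALLOWED

def fetch(user: str, resource: str) -> str:
    if not can_access(user, resource):
        raise PermissionError(f"{user} may not access {resource}")
    return f"contents of {resource}"

print(fetch("alice", "/reports/summary"))  # explicitly allowed

try:
    fetch("bob", "/reports/summary")
except PermissionError as err:
    print(err)  # denied: bob is not in the allow list, so the default applies
```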

Reusing Software

Looking at the composition of modern applications, the large majority of the code is reused. It could be proprietary, third-party, or open source software. When we reuse software, regardless of the source, we need to consider a few things.

  1. Is the software easy to use securely? If it isn't easy, then it likely won't be used securely.

  2. Can evidence be found that the authors of the software are working to make it more secure?

  3. Is the software maintained at all?

  4. Are there other people using the software?

  5. What license is applied to the software? The lack of a license is a big red flag - in most countries, copyright law gives you no right to use or redistribute code that has no license.

And even with all of those questions answered, don't be afraid to do your own review. Run a static analysis tool against it, check for TODO comments, and see whether it has tests.
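
As a sketch of what that quick review might look like for a Python dependency, the snippet below counts TODO/FIXME markers and runs the bandit static analysis tool over a local copy of the code. The directory path is hypothetical, and bandit is just one of many static analysis tools you could pick.

```python
import pathlib
import subprocess

package_dir = pathlib.Path("vendor/somepackage")  # hypothetical local copy of the dependency

# Count TODO/FIXME markers as a rough signal of unfinished work.
markers = 0
for source_file in package_dir.rglob("*.py"):
    text = source_file.read_text(errors="ignore")
    markers += text.count("TODO") + text.count("FIXME")
print(f"TODO/FIXME markers found: {markers}")

# Run a static analysis tool (bandit) recursively over the package.
subprocess.run(["bandit", "-r", str(package_dir)], check=False)
```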

If you are using open source software, you should never fork it just to make your own changes. Contribute your updates back to the community instead. Maintaining a fork makes receiving upstream updates very difficult, and if Google had a hard time with it, I can't imagine you will do better.

And lastly, if you are using software from a third party, you should try to keep it as up to date as possible. The further out of date you are, the harder it will be to upgrade.
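
One low-effort way to see how far behind you are, assuming a Python project, is to ask pip which installed packages have newer releases. The JSON output comes from pip list --outdated --format=json; the rest is just a sketch of how you might surface it.

```python
import json
import subprocess

# Ask pip which installed packages have newer releases available.
result = subprocess.run(
    ["pip", "list", "--outdated", "--format=json"],
    capture_output=True, text=True, check=True,
)

for package in json.loads(result.stdout):
    print(f"{package['name']}: {package['version']} -> {package['latest_version']}")
```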

Conclusion

That was a long post, but only a brief introduction to the training available through the OpenSSF. If any of this interests you, I highly recommend going through the training material they offer.

https://openssf.org/training/courses/
