Part of being a developer - part of being a human being, really - is lifelong learning: the idea that there's always more to learn, and that you'll become a better developer (and a better person) if you take the time to learn new things. In that spirit, I've been trying to take half an hour out of every workday for professional development, focusing on something I want to learn that isn't directly related to my current work priorities. At first, this professional development time was spent learning more about things I touched in my work: TypeScript, Redux (which I've been using for three years and still struggle with), and our app's session management. While I appreciated the opportunity to learn more about these topics, I wasn't entirely sure how studying the same things I was working on every day was contributing to my overall professional growth. Then my manager shared our department's competency framework with me, and after reviewing that framework, I noted a few areas where I felt I had a lot to learn.
One of the gaps I identified in my knowledge was web security. Security is important, and while many companies (including mine) hire dedicated cybersecurity engineers, as a developer I knew it was important for me to learn the basics of web security to get a better sense of where I could run up against security vulnerabilities (and when to reach out to my company's security team to review my options with them). To help me learn more about this area, I decided to watch a course on LinkedIn Learning called Programming Foundations: Web Security.
I learned a lot from this course, and in the spirit of sharing my knowledge and helping others find areas where they want to grow, I decided to write a series of articles recapping what I learned through that course.
Web security means being aware of potential threats and having adequate protection against them. As a web developer, this means detecting potential vulnerabilities in code and addressing them. The process of identifying these threats is called developing a threat model, and it's an important part not only of launching a new web application but also of maintaining an existing app.
Total security is unachievable, but it's important to use your threat model to discover the most important vulnerabilities and protect against those. Effective security is not a one-time thing - it requires constantly reassessing the threats against your website and protecting against new threats.
After introducing the concept of web security, the course covered some general security principles. It described each of the principles and how to apply them. Some of these principles were things I had heard of before, but many of them were new to me, and I learned a great deal from this section.
The principle of least privilege says that every user of the system should operate with the least amount of privilege necessary to complete the job. In a system that applies this principle, a user only has access to the resources that they absolutely must be able to access. In such a system, a user also cannot edit their own level of privilege or give themselves access to something to which they do not need access.
This principle should also be applied to code. Code should only be accessible to other code that needs to use it. One example of this is private and public functions. If a function will not be used outside of the scope where it is declared, it should be private (and therefore not accessible outside that scope). Only when you need to reuse a function outside the scope where it is declared should that function be made public.
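To make the code side of this concrete, here's a minimal sketch in TypeScript (the class and method names are my own invention, not from the course): the expiry check stays private because no outside code needs it, and only one public method is exposed.

```typescript
// Hypothetical example of least privilege applied to code: only `isValid`
// is public; the expiry check is an internal detail, so it stays private.
class SessionChecker {
  // Private: callers never need direct access to the expiry logic.
  private isExpired(expiresAt: number): boolean {
    return Date.now() >= expiresAt;
  }

  // Public: the one question outside code is allowed to ask.
  isValid(session: { token: string; expiresAt: number }): boolean {
    return session.token.length > 0 && !this.isExpired(session.expiresAt);
  }
}

const checker = new SessionChecker();
console.log(checker.isValid({ token: "abc123", expiresAt: Date.now() + 60_000 })); // true
console.log(checker.isValid({ token: "", expiresAt: Date.now() + 60_000 }));       // false
```

If a future feature genuinely needs the expiry logic elsewhere, that's the moment to make it public - not before.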
The simpler a system is, the more likely it is to be secure. Adding complexity to a system increases the likelihood of introducing bugs or mistakes that could lead to a security vulnerability.
The video on this topic offered several suggestions for reducing complexity in an app, and I'd like to mention two of them in particular. One is to leave code comments, particularly pertaining to security concerns. When a code decision is made with security in mind, it's important to ensure that all other devs who see that code are aware of the security concern and don't refactor the code in a way that could create a new vulnerability. The other suggestion is to use functions built into the programming language rather than custom solutions. As devs, we sometimes feel that we can create a custom solution that fits our needs better than a language's built-in functionality. But these built-in functions tend to be better tested and have often already taken security concerns into account, which means that in writing custom code, we lose the security that would have come with the built-in functionality. If we choose to use custom code instead of a built-in solution, there should be a good reason for it, and security concerns should be part of the development and review process for that custom code.
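As an illustration of preferring built-ins, here's a hedged sketch (my own example, not from the course) of validating a redirect target with the platform's built-in URL parser instead of a hand-rolled string check. The function name and the example.com host are assumptions for the sake of the demo.

```typescript
// A naive check like target.startsWith("https://example.com") can be fooled
// by "https://example.com.evil.net". The built-in URL parser extracts the
// real hostname, so we lean on it instead of writing custom parsing.
function isSafeRedirect(target: string): boolean {
  try {
    const url = new URL(target);
    // Allow only our own host, and only over HTTPS.
    return url.protocol === "https:" && url.hostname === "example.com";
  } catch {
    // Not a parseable absolute URL: reject by default.
    return false;
  }
}

console.log(isSafeRedirect("https://example.com/account"));        // true
console.log(isSafeRedirect("https://example.com.evil.net/phish")); // false
```

The custom string check would have been shorter to write, but the built-in parser has already handled the edge cases a quick regex or startsWith misses.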
Trust should only extend as far as absolutely necessary. While you may want to believe that all of your users have good intentions, this likely won't be true (after all, insider threats, where a user with legitimate access abuses that access, are a real concern). To ensure good security for your system, it's a good idea to be a little paranoid and treat every user as if they are a potential hacker.
One example of an attack that exploited legitimate user access was a recent hack involving several verified Twitter accounts. The affected accounts posted a message promising to send back twice as much Bitcoin as users sent to a particular address - but nothing was ever sent back. The attackers were able to post from these verified accounts because they had gained access to a Twitter admin account - an account with legitimate access to user information was used for a malicious attack. Even if you know your user won't be launching an attack, you never know who else may be able to gain access to their account (more on that later).
Security should be proactive, not reactive. Assume that you will be hacked (or at least that attempts will be made), and figure out how it's most likely to happen so that you can work to prevent it. A good place to look for vulnerabilities is in edge cases, which are often overlooked and can contain unexpected security vulnerabilities.
Defense in depth means having multiple defenses in place. This decreases your reliance on any one defense, while at the same time increasing the difficulty of getting into your system.
The three main categories of defenses to consider are physical, technical, and administrative. Physical defenses are defenses applied to your servers and hardware to prevent unauthorized access, which can include limiting building access, having proper security protocols, and locking up important hardware. Technical defenses, which include firewalls, antivirus, logging attempted hacking events, and data backups, provide defense for the software and network. Encrypting data, forcing multi-factor authentication for user login, and applying the principle of least privilege to your code are all part of creating technical defenses. Administrative defenses are policies and procedures put in place to enforce security and can include writing a formal security policy, training and security reviews, and penetration testing.
The less information you give out, the better. Information is valuable, and giving it out only on a need-to-know basis helps keep your app secure. As a web developer, you want to be careful what information you make visible - hide any file extensions or version information that the app does not need to run. The more information a hacker can find about what languages and software you use, the easier it is for them to focus their attack. The instructor gave the example of a PHP app, which creates pages with a .php extension. If you have such an application, you should configure your server to remove that extension before displaying the page, so that hackers do not know you are using PHP (and therefore may not know to focus their attack on known PHP vulnerabilities).
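As a sketch of what that server configuration might look like - this is my own illustration, assuming an Apache server with mod_rewrite enabled, and is not taken from the course - you can serve extensionless URLs and map them back to the real .php files internally:

```apache
# Illustrative sketch (assumes Apache + mod_rewrite): serve /about
# instead of /about.php so the URL no longer reveals that PHP is in use.
RewriteEngine On
# If the requested path plus ".php" exists as a real file...
RewriteCond %{REQUEST_FILENAME}.php -f
# ...internally rewrite the request to that file.
RewriteRule ^(.+)$ $1.php [L]
```

Other servers (nginx, IIS) have their own equivalents; the point is simply that the visible URL shouldn't advertise your stack.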
Security through obscurity is not itself the best defense, but it works well as part of defense in depth.
"Deny lists" and "allow lists" are exactly what they sound like - lists of objects or actions that are either forbidden or allowed. Allow lists are a better choice for maintaining security because they assume by default that actions are forbidden; actions are only permitted if they are explicitly listed. Deny lists assume that everything is permitted unless it is on the deny list. When you add a new option or object that should be restricted, a deny list will assume the new thing is permitted, whereas an allow list will assume it is forbidden. Allow lists provide better default security for the same reason that "trust no one" is a good policy - the best security assumes that all intentions are malicious and all actions are bad unless explicitly told otherwise.
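An allow list can be as simple as a set lookup. Here's a minimal sketch (the action names and function are my own, hypothetical, not from the course) showing the deny-by-default behavior:

```typescript
// Hypothetical allow list for user actions. Anything not explicitly
// listed is denied by default, so a newly added action stays forbidden
// until someone consciously decides to allow it.
const ALLOWED_ACTIONS = new Set(["read", "comment"]);

function isActionAllowed(action: string): boolean {
  return ALLOWED_ACTIONS.has(action);
}

console.log(isActionAllowed("read"));   // true
console.log(isActionAllowed("delete")); // false - denied by default
```

Notice that a newly introduced action like "delete" is automatically denied; with a deny list, forgetting to register it would have silently permitted it.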
Mapping the movement of data and where it could be exposed will help increase awareness of the areas where your data is vulnerable, which then helps you protect against those vulnerabilities.
Data security goals can be summarized with the acronym CIA, which stands for confidentiality, integrity, and availability. Confidentiality means that our data is only available to privileged users. Integrity means that data is correct and can be trusted. Availability means that data is available when needed.
Understanding the general principles of web security is important to developing a secure app, but it's only the first step. Stay tuned for future articles in this series as I share what I've learned about filtering input, regulating output, and some common attacks (and defenses against them).