This article was first published on Medium.
Cloud computing has been a popular buzzword in recent years, leading some to be skeptical of its benefits. The benefits are considerable, but most discussions focus on cost effectiveness and speed; people rarely mention security as a benefit of moving to the cloud. The reality is that the cloud can be as secure or insecure as you make it. However, if architected properly, it is possible to have a highly resilient, scalable, secure and compliant application in the cloud.
The first benefit of moving to the cloud is that the responsibility for securing the environment is shared between the customer and the cloud vendor, often called the shared responsibility model. In an on-premises environment, the customer handles all of the security (Figure 1). In a cloud environment, the customer is only responsible for security from the operating system up (the light blue shaded sections in Figure 2). Moving to the cloud lets customers focus their energy on building a robust and secure application.
Scalability & Resiliency
Hosting an application in the cloud enables you to take advantage of on-demand scalability. The lack of scalability in an application presents a customer experience issue as well as a security threat. As an application increases in popularity, it is harder to predict what time of day customers will be accessing the site. One server cannot handle millions of requests, but having hundreds of servers lie idle during low-demand parts of the day is not an ideal solution either. Instead of buying more hardware and software resources as the application grows, you can provision resources on demand, paying only for what you use.
Let’s imagine a situation where you have an e-commerce site and one of your items is suddenly in high demand. While the internet loves your product, the operations team sees a huge spike in network traffic and suddenly the server is at capacity. Orders are not being processed, downloads are incredibly slow and customers are not happy. In a cloud environment, we can use autoscaling and elastic load balancing to ensure that this situation never becomes reality. When the load balancer experiences too much load, it can trigger an autoscaling policy to spin up new servers. When the demand diminishes, we can scale our servers back down, ensuring that we are not paying for unused resources. This way all orders are completed and download times are not affected, yielding happy customers.
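The scale-out/scale-in decision described above can be sketched as a simple threshold policy. This is a toy illustration, not a real cloud provider's autoscaling API; the per-server capacity, the fleet bounds and the `desired_servers` function are all assumptions made for the example.

```python
# Toy autoscaling policy: pick a target server count from observed load.
# All thresholds and capacity figures are illustrative assumptions.

REQUESTS_PER_SERVER = 1000   # assumed capacity of a single server
MIN_SERVERS = 2              # keep a small baseline for redundancy
MAX_SERVERS = 100            # cap spend even under extreme load

def desired_servers(requests_per_second: int) -> int:
    """Return how many servers the load balancer should target."""
    needed = -(-requests_per_second // REQUESTS_PER_SERVER)  # ceiling division
    return max(MIN_SERVERS, min(MAX_SERVERS, needed))
```

During a spike of 50,000 requests per second this policy asks for 50 servers; when traffic falls back to a few hundred requests per second, it scales back down to the two-server baseline.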
Scaling is not only a cost-effective method, it also makes the application resilient. If you only have one physical server and that server ever experiences some type of hardware failure, it will take time to replace the server and have the application back up and running. In the cloud, if there’s a problem with one server, it can be easily terminated and a new one can be created in less than 5 minutes. It can even be an automated process making your life easier.
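The automated replace-on-failure process can be sketched as a small self-healing loop. The `Server` record and `launch_server` call below are stand-ins for real cloud API calls, not an actual provider SDK.

```python
# Toy self-healing sketch: terminate failed servers and launch
# replacements automatically. Server/launch_server are stand-ins
# for real cloud API calls.

from dataclasses import dataclass
import itertools

_ids = itertools.count(1)

@dataclass
class Server:
    server_id: int
    healthy: bool = True

def launch_server() -> Server:
    """Stand-in for a cloud API call that provisions a fresh server."""
    return Server(server_id=next(_ids))

def heal_fleet(fleet: list[Server]) -> list[Server]:
    """Replace every unhealthy server with a newly launched one."""
    return [s if s.healthy else launch_server() for s in fleet]
```

In a real environment the same logic runs as load-balancer health checks tied to an autoscaling group, with no human in the loop.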
Similarly, if an on-premises application suffers a DDoS attack, there is little hope of thwarting the attempt. In a cloud environment, however, we can scale up and absorb the load of the attack. The key strategy behind a DDoS attack is to bring infrastructure to a breaking point. The strategy assumes that you cannot scale to meet the attack; its success depends on this assumption. Thus, the easiest way to defeat it is to design the infrastructure to scale horizontally and vertically when needed. There are four benefits of scaling that we can take advantage of in mitigating a DDoS attack:
- The attack is spread over a larger area.
- The attackers have to scale their attack to match, consuming more of their resources.
- Scaling buys us time to analyze the attack and respond with appropriate countermeasures.
- Scaling provides us with additional levels of redundancy.
Scaling on-demand in the cloud provides resiliency and a means to protect an application from increased network traffic, hardware failures and DDoS attacks in a cost-effective manner. Next, we will discuss how a cloud environment can enable better identity and access management processes.
Identity and Access Management
The purpose of Identity and Access Management (IAM) is to provision, manage and de-provision identities that have access to your cloud environment’s infrastructure. With IAM, you can centrally manage users, security credentials, access keys and permissions policies that control which services and resources users can access. This is important because without an account permission strategy, anyone would have the ability to run privileged commands. Whether unintentional or malicious, someone could wreak havoc on the system using privileged commands. Setting up IAM in a cloud environment helps ensure that this does not happen.
The goal is to never have to log in as the root user. There are four components of IAM which enable secure, least-privilege access to the infrastructure and application.
- Central User Repository — This stores and delivers identity information to other services.
- Authentication — This establishes an identity by asking who you are and verifying the identity claim with one or more authentication factors.
- Authorization — This evaluates, after authentication, whether you have permission to access whatever it is you are trying to access.
- User Management — This manages the user lifecycle (onboarding, offboarding, role changes, identity/password changes).
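The authorization component above boils down to explicit allow, default deny. Here is a minimal sketch of that check; the policy format is a simplified assumption, not a real cloud IAM policy schema.

```python
# Minimal authorization check: explicit allow, default deny.
# The identity names, actions and policy shape are illustrative
# assumptions, not a real cloud provider's IAM schema.

Policy = dict[str, set[str]]   # identity -> set of allowed actions

policies: Policy = {
    "alice": {"db:read", "db:write"},   # application service account
    "bob": {"db:read"},                 # read-only analyst
}

def is_authorized(identity: str, action: str) -> bool:
    """Grant access only when the action is explicitly allowed."""
    return action in policies.get(identity, set())
```

Note that an unknown identity falls through to an empty permission set, so the default answer is always "deny" — the heart of least privilege.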
Authentication is made simple in the cloud with the use of federated identity management (FIM). When using FIM, the application doesn’t need to handle identification and authentication, just authorization. There are several federation standards and protocols that can be used, including SAML, OAuth, OpenID Connect and WS-Federation, typically surfaced to users as single sign-on (SSO). These can be used to make sure that users, developers and admins only have the access they need, enforcing the principle of least privilege.
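The division of labor in federation — the identity provider asserts who you are, the application merely verifies the assertion and reads the claims — can be illustrated with a toy HMAC-signed token. Real federation uses SAML or OpenID Connect, not this scheme; the shared secret and claim names are assumptions for the sketch.

```python
# Toy signed identity assertion: the identity provider signs claims,
# the application only verifies the signature and reads them.
# Real federation uses SAML or OpenID Connect, not this toy scheme.

import base64, hashlib, hmac, json

SECRET = b"shared-idp-secret"   # assumed key shared with the identity provider

def issue_token(claims: dict) -> str:
    """What the identity provider does after authenticating the user."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token: str):
    """What the application does: verify, then trust the claims."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None               # reject forged or tampered assertions
    return json.loads(base64.urlsafe_b64decode(body))
```

The application never sees a password; it only decides what the already-authenticated identity is allowed to do.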
IAM helps us regulate who has access to the data, but we also need to be mindful of how we are protecting data in the cloud.
Using the same example of an e-commerce application, in the event of a hardware failure that causes data loss, you would lose more than just application code. Chances are, your application stores sensitive customer information such as personally identifiable information (PII) and credit card numbers, which would also be lost.
Creating backups via snapshots is easy in the cloud. We can create snapshots for the database and storage volumes and restore data from these snapshots if necessary. Many cloud providers also have storage options, including archival storage, with high durability that can be part of our backup strategy.
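A backup strategy also needs a retention rule so snapshots don't accumulate forever. Here is a minimal sketch of keeping only the most recent snapshots; the snapshot records and the retention count of seven are assumptions for illustration.

```python
# Toy snapshot retention: keep only the N most recent backups.
# The snapshot tuples and keep-count are illustrative assumptions.

from datetime import date

def prune_snapshots(snapshots: list[tuple[date, str]], keep: int = 7):
    """Return (retained, expired) snapshots, newest first."""
    ordered = sorted(snapshots, key=lambda s: s[0], reverse=True)
    return ordered[:keep], ordered[keep:]
```

Expired snapshots could then be deleted outright or moved to cheaper archival storage as part of the backup strategy.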
We also want to make sure that we are securing any sensitive information that is either stored or processed through the application. Thus, we should encrypt data in transit and at rest. To encrypt data at rest, we should encrypt the whole disk or volume where the data is stored. While data is in transit, we should use TLS or VPNs to encrypt the data.
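For data in transit, Python's standard library shows how little it takes to insist on modern TLS. The hostname in the comment is a placeholder, not a real endpoint.

```python
# Enforcing TLS for data in transit with Python's standard library.
# A client context like this verifies certificates and refuses
# older protocol versions.

import ssl

context = ssl.create_default_context()            # verifies server certificates
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL/early TLS
# context.wrap_socket(sock, server_hostname="shop.example.com") would
# then encrypt the connection; the hostname here is a placeholder.
```

Encryption at rest is usually a one-line setting in cloud storage and database services (provider-managed keys or your own), so there is little excuse to store sensitive data in plaintext.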
We also want to protect the application from common web exploits, such as SQL injection and cross-site scripting, that could compromise security or affect availability. We want to filter out known bad IP addresses and monitor HTTP and HTTPS requests. We can use a web application firewall (WAF) to do this. Typically, firewalls built into the cloud environment default to denying traffic, which gives our application and data an extra layer of protection.
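A WAF rule is conceptually just a filter over incoming requests. The sketch below is a toy in that spirit — a real WAF is far more sophisticated, and the blocklisted address (from a documentation-reserved range) and injection patterns are assumptions for the example.

```python
# Toy request filter in the spirit of a WAF rule: drop requests from
# blocklisted IPs and obvious SQL-injection probes. The blocklist and
# patterns are illustrative assumptions, not a real rule set.

import re

BLOCKED_IPS = {"203.0.113.7"}   # example address from a reserved doc range
SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bOR\s+1=1\b)", re.IGNORECASE)

def allow_request(source_ip: str, query_string: str) -> bool:
    """Return False for requests a WAF-style rule would reject."""
    if source_ip in BLOCKED_IPS:
        return False
    if SQLI_PATTERN.search(query_string):
        return False
    return True
```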
Protecting our data is not enough, we need to also ensure that we are compliant with any laws and regulations.
Assuming our sample application processes credit card information, we need to make sure that we are PCI-DSS compliant. We can choose a cloud provider that is itself PCI-DSS compliant, such as AWS, and ensure that the way we store, process and transmit cardholder data complies with the standard. We should also make sure that we are in compliance with any data retention policies that exist. Lifecycle rules can be used in certain storage solutions to meet any data retention policies.
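A lifecycle rule reduces to mapping an object's age to an action. This is a toy sketch of that mapping; the 90-day archive and 365-day expiry thresholds are assumptions, not values from any particular regulation.

```python
# Toy lifecycle rule: transition aging records to archival storage and
# expire them past the retention period. Thresholds are assumptions.

ARCHIVE_AFTER_DAYS = 90     # assumed: move cold data to cheaper storage
EXPIRE_AFTER_DAYS = 365     # assumed retention policy limit

def lifecycle_action(age_days: int) -> str:
    """Decide what a storage lifecycle rule should do with an object."""
    if age_days >= EXPIRE_AFTER_DAYS:
        return "expire"     # delete once retention obligations end
    if age_days >= ARCHIVE_AFTER_DAYS:
        return "archive"    # e.g. move to glacier-class storage
    return "keep"
```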
If the application ever goes through an audit, you won’t have to spend hours preparing, as asset inventory and auditing tools are built into cloud services. Since every action in a cloud environment is an API call, there is extensive API call logging. Logs can capture console and API logins, high rates of API activity, new kinds of API activity and new IP addresses accessing the database. These logs can be useful in the event of a data breach or cyber-attack as well.
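Two of those signals — unusually high API call rates and previously unseen IP addresses — are easy to flag once logs are structured. This is a toy log review; the log record shape and rate threshold are assumptions for the sketch.

```python
# Toy log review: flag identities with an unusually high API call rate
# and source IPs never seen before. Log shape and the rate threshold
# are illustrative assumptions.

from collections import Counter

def flag_activity(log: list[dict], known_ips: set[str], rate_limit: int = 100):
    """Return (noisy identities, previously unseen source IPs)."""
    calls = Counter(entry["user"] for entry in log)
    noisy_users = {u for u, n in calls.items() if n > rate_limit}
    new_ips = {entry["ip"] for entry in log} - known_ips
    return noisy_users, new_ips
```

In practice the same idea runs continuously against the provider's API audit trail, alerting long before a human would read the raw logs.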
Setting up secure infrastructure, protecting our data and ensuring compliance will only be useful if the actual application code is secure. With the shift to the cloud, developers can start to embrace DevSecOps and its principles related to secure coding practices.
Secure Coding Practices
Following DevSecOps principles can lead to a robust patching strategy as well as secure code. One DevSecOps principle states, “Automate security updates.” In practice, this means using automated tools to patch the OS, core services and the application itself. Developers can use tools such as Puppet and Chef to enable continuous patching in the cloud environment.
Another DevSecOps principle states, “Integrate and automate security scanning from the start.” To establish secure coding practices, we can embrace code analysis through both automated tools and manual code review. We should review the code every time there is a meaningful change in the code base. Additionally, we should run static and dynamic security tests against our code to find vulnerabilities and mitigate them before releasing the code into production.
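One concrete example of an automated check a CI pipeline could run on every change is scanning source text for hardcoded credentials. The patterns below are a small illustrative sample, not a complete secret-detection rule set.

```python
# Toy static check of the kind a CI pipeline could run on every change:
# scan source text for hardcoded credentials. The patterns are a small
# illustrative sample, not a complete secret-detection rule set.

import re

SECRET_PATTERNS = [
    re.compile(r"password\s*=\s*['\"].+['\"]", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*=\s*['\"].+['\"]", re.IGNORECASE),
]

def find_secrets(source: str) -> list[str]:
    """Return offending lines so the build can fail before release."""
    return [line for line in source.splitlines()
            if any(p.search(line) for p in SECRET_PATTERNS)]
```

Wiring a check like this into the pipeline makes the "from the start" part of the principle real: insecure code never reaches production because the build refuses to ship it.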
As you can see, many characteristics of the cloud lend themselves nicely to security. We can use autoscaling to provide scalability and resiliency, IAM to regulate user and resource access, cloud services for data protection and compliance, and DevSecOps for secure coding practices. Thus it is possible to have secure applications in a cloud environment.
This is my twelfth post in my "What is" tech blog series. I'll be writing more every week here and on my blog!