David Melamed for Jit - Minimum Viable Security for Developers

Originally published at jit.io

Bootstrapping a Secure AWS as-Code Environment - Your MVS Checklist

Infrastructure as Code (IaC) has changed the way we manage our cloud operations by making it far easier and faster to roll out infrastructure on demand, often from a single configuration file.

In this article, we’ll delve into the benefits of adopting an AWS anything-as-code model, as well as the security challenges it introduces to the underlying stack. We’ll also introduce the minimum viable security (MVS) approach, which delivers baseline security controls for any stack.

Embracing an Everything-as-Code Model

In line with the principles of IaC, organizations are increasingly adopting as-code frameworks for different components of a tech stack, including security, policy, compliance, configuration, and operations. AWS supports various as-code frameworks, including its own CloudFormation as well as Terraform and Pulumi, and has recently rolled out its next-generation IaC offering in the form of the AWS CDK. By providing an API to provision and manage resources, these frameworks make it possible to spin up a complex cloud architecture by defining simple, code-based templates.

With environment-as-code pipelines, organizations can then manage and extend their deployment environment across multiple regions and accounts through a single workflow, leveraging the same code.
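
For readers less familiar with IaC, a minimal, purely illustrative Terraform template looks something like the sketch below; the region and bucket name are placeholder assumptions, not values tied to any particular setup.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder region
}

# A single declarative block is enough to provision, and later reproduce, a resource.
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts-bucket" # placeholder bucket name
}
```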

Adopting Minimum Viable Security for Baseline Security Controls

While IaC frameworks come with the benefits of automation across the entire stack for more rapid delivery and tighter controls, securing each environment comes with its own unique set of challenges. On top of this, with all of the noise and panic constantly generated around security and exploits, it is hard for organizations that are just starting up to understand which controls are critical and which should be out of scope. The end result is that emerging companies striving to launch the first version of their product have little understanding of the baseline security they actually need to implement to get ramped up.

To solve this, the MVS approach offers a vendor-neutral security baseline that reduces the complexity and overhead of deploying infrastructure, and specifically cloud (native) environments. Similar to Agile methods, MVS focuses on a minimal shortlist of critical security controls that gives the launched product an initial layer of protection against the most common threats. This approach helps organizations establish a sufficient security posture while integrating seamlessly into the existing automation tooling and pipelines used to configure today’s complex cloud-based environments.

To demonstrate this in practice, we’ll show how the automated MVS approach applies to securing AWS environments, AWS being the most popular and widely adopted cloud provider.

Best Practices for Securing AWS Environments as Code

At Jit, we have identified a few layers on which we focus our security controls for AWS environments. Together they provide the required security baseline, and they can all be expressed as code to automate the bootstrapping of your AWS environments without compromising velocity.

These include:

  • Account Structure
  • Identity and Access Management
  • User Creation and Secret Management
  • Hierarchies, Governance, and Policies
  • Access Controls

Below, we’ll dive into each of these individually and show how they can eventually be automated within your existing IaC and automated pipelines.

Build a Secure AWS Account Structure

Many of the practices we will list below are tried and true, and applied at Jit for our own security controls. First off, we split our AWS accounts into three primary organizational units (OUs): users, a sandbox, and workloads.

The users OU lets us host a dedicated account in which all users are set up.

The sandbox unit is for developing or testing new code changes. This OU can also host accounts for experimenting with as-code templates and CI/CD pipelines.

Workload units include staging/production environments and contain various accounts that run external-facing services.

As cloud workloads grow, DevOps teams are inclined to set up multiple accounts for rapid innovation and flexible controls. The use of multiple AWS accounts helps DevOps teams achieve isolation and independence by providing natural boundaries for billing, security, and access to resources.

While building AWS accounts, the following practices are recommended (a brief as-code sketch follows the list):

  • Use organizational units (OUs) to group accounts into logical, hierarchical structures. Accounts should be organized based on similar and related functions rather than an organization’s reporting hierarchy. Although AWS supports an OU depth of up to five levels, it is best to keep the structure as shallow as possible to avoid complexity.
  • Maintain a dedicated master (management) account for administering all organizational units and consolidated billing, for cost control and ease of maintenance.
  • Assign as few cloud resources, data, or workloads as possible to the organization’s management account, since the organization’s service control policies (SCPs) do not apply to it.
  • Isolate production and non-production workload environments from each other. AWS workloads are typically contained in accounts, where each account can host more than one workload. Production accounts should contain either one workload or a few closely related ones. By separating workload environments, administrators can protect production from unauthorized access.
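
To illustrate the account structure described above, here is a minimal Terraform sketch. It assumes AWS Organizations is managed from your management account, and the OU names (including the nested production OU) are placeholders rather than a prescribed layout.

```hcl
# Illustrative OU layout: users, sandbox, and workloads under the organization root.
resource "aws_organizations_organization" "this" {
  feature_set = "ALL" # "ALL" is needed to attach service control policies later
}

resource "aws_organizations_organizational_unit" "users" {
  name      = "users"
  parent_id = aws_organizations_organization.this.roots[0].id
}

resource "aws_organizations_organizational_unit" "sandbox" {
  name      = "sandbox"
  parent_id = aws_organizations_organization.this.roots[0].id
}

resource "aws_organizations_organizational_unit" "workloads" {
  name      = "workloads"
  parent_id = aws_organizations_organization.this.roots[0].id
}

# Keeping production in its own OU under workloads isolates it from non-production accounts.
resource "aws_organizations_organizational_unit" "production" {
  name      = "production"
  parent_id = aws_organizations_organizational_unit.workloads.id
}
```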

Follow Identity and Access Management Practices

For managing access to and permissions for AWS resources, Identity and Access Management (IAM) offers a first line of defense by streamlining the creation of users, roles, and groups. When provisioning an AWS environment through automation, organizations should leverage existing modules to manage IAM users, roles, and permissions.

Administering robust security through IAM typically relies on a set of common practices that we also apply internally at Jit:

  • Make sure IAM policies for a user, group, or role grant only the permissions needed to accomplish a given task. This approach is known as “least privilege,” and there is plenty of excellent material about it. Permissions should initially contain only the smallest set of privileges required; these can be increased later if necessary (see the sketch after this list).
  • Create separate roles for different tasks for each IAM user.
  • Use session tokens as temporary credentials for authorization. You should additionally configure a session token to have a short lifetime to prevent misuse in the event of a compromise.
  • Do not use a root user’s access key to perform regular activities or any programmatic task, as the root access key grants full access to all AWS services for any resource. You should also rotate the access key of the root user regularly to prevent misuse.
  • If account users are allowed to select their own password, make sure there is a strong baseline password policy and the requirement to periodically change it.
  • Implement multi-factor authentication (MFA) for additional security. MFA adds an additional layer of authentication on top of the user credentials and will continue to secure a resource in the event that credentials are compromised.
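
To make a couple of these practices concrete, here is a hedged Terraform sketch of a least-privilege policy and a baseline password policy. The bucket name, policy name, and numeric thresholds are illustrative assumptions, not prescriptive values.

```hcl
# Least privilege: grant read-only access to a single, specific bucket (hypothetical name).
data "aws_iam_policy_document" "reports_read_only" {
  statement {
    effect  = "Allow"
    actions = ["s3:GetObject", "s3:ListBucket"]
    resources = [
      "arn:aws:s3:::example-reports",
      "arn:aws:s3:::example-reports/*",
    ]
  }
}

resource "aws_iam_policy" "reports_read_only" {
  name   = "reports-read-only"
  policy = data.aws_iam_policy_document.reports_read_only.json
}

# A strong baseline password policy for users who choose their own passwords.
resource "aws_iam_account_password_policy" "baseline" {
  minimum_password_length        = 14
  require_uppercase_characters   = true
  require_lowercase_characters   = true
  require_numbers                = true
  require_symbols                = true
  allow_users_to_change_password = true
  max_password_age               = 90 # force periodic rotation
  password_reuse_prevention      = 24
}
```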

Automate User Creation with Encrypted Secrets

To cut down the risks associated with manual efforts, it is strongly recommended that organizations embrace automation for user creation. This ensures that all stages of the process flow, including account creation, configuration, and assignment to an OU, require minimal manual intervention.

Automation also helps streamline the user experience by integrating with onboarding and offboarding workflows. The mechanism strikes a fine balance between agility and control by permitting automated configuration and validation of IAM policies across multiple environments (dev, staging, or production).

Figure 1: A typical user creation process flow (Source: Amazon)

Apart from user creation, you should also automate identity federation and secret provisioning to ensure a comprehensive user creation cycle. A typical workflow resembles the process flow above and can leverage tools such as Keybase for automatic encryption of credentials and key pairs, a pattern supported by IaC frameworks like Terraform.
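
As a concrete sketch of this pattern, the Terraform example below creates a user whose console password and access key secret are encrypted with a Keybase PGP key. The user name and the keybase:example_user identity are placeholders.

```hcl
resource "aws_iam_user" "new_hire" {
  name = "new.hire" # placeholder user name
}

# Console password, encrypted at creation time with the user's Keybase PGP key.
resource "aws_iam_user_login_profile" "new_hire" {
  user    = aws_iam_user.new_hire.name
  pgp_key = "keybase:example_user"
}

# Programmatic credentials, with the secret encrypted the same way.
resource "aws_iam_access_key" "new_hire" {
  user    = aws_iam_user.new_hire.name
  pgp_key = "keybase:example_user"
}

# The outputs expose only the encrypted values; the user decrypts them locally
# with their Keybase key.
output "encrypted_password" {
  value = aws_iam_user_login_profile.new_hire.encrypted_password
}

output "encrypted_secret_key" {
  value = aws_iam_access_key.new_hire.encrypted_secret
}
```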

Create Hierarchical Structure & Policies with AWS Organizations

AWS Organizations helps you implement granular controls to structure accounts in a manageable way. The service offers enhanced flexibility and a hierarchical structure for AWS resources based on organizational units (OUs). For any AWS organization, it is recommended to start with a basic OU structure containing core OUs such as infrastructure and security.

You should also create a policy inheritance framework that allows maximum access to OUs at the foundation level and then gradually limits access with each layer of the OU. This layering of policies can further continue to the account and instance levels.

Organizations should also apply service control policies (SCPs) at the OU level rather than on individual accounts. SCPs add another layer to access management: they act as guardrails that cap the permissions available in member accounts, regardless of what IAM policies grant.

As a best practice, it is recommended to use trusted access for authorizing services across your organization. This mechanism helps to grant permissions to only designated services without affecting the overall permissions of users or roles. As workloads grow, you can include other organizational units based on common themes, such as: policy staging, suspended accounts, individual users, deployments, and transitional accounts.
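
For instance, a hedged Terraform sketch of an SCP attached at the OU level could look like the following. The policy name, the denied actions, and the reference to a sandbox OU are illustrative assumptions.

```hcl
# Guardrails for accounts in the (hypothetical) sandbox OU: block leaving the
# organization and tampering with CloudTrail.
resource "aws_organizations_policy" "sandbox_guardrails" {
  name = "sandbox-guardrails"
  type = "SERVICE_CONTROL_POLICY"

  content = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Deny"
      Action = [
        "organizations:LeaveOrganization",
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail",
      ]
      Resource = "*"
    }]
  })
}

# Attach the SCP to the OU rather than to individual accounts, so every account
# placed under the OU inherits it.
resource "aws_organizations_policy_attachment" "sandbox_guardrails" {
  policy_id = aws_organizations_policy.sandbox_guardrails.id
  target_id = aws_organizations_organizational_unit.sandbox.id
}
```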

Secure Remote Access to the AWS Console

Securing remote access to the AWS console is one of the easiest yet most crucial parts of maintaining security in an AWS as-code environment. A minimal approach here can be achieved by leveraging the AWS Management Console and AWS Directory Service to enforce IAM policies on account switching. Once logged in, individual users can switch accounts from within the console based on their role (read-only or read-write access).

Additionally, you can also enforce MFA through a trust policy between the user’s account and the target account to ensure only users with MFA enabled can access the target account.
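
As an illustration of such a trust policy, here is a hedged Terraform sketch of a read-only cross-account role that can only be assumed from the users account when MFA is present. The 111111111111 account ID and the role name are placeholders.

```hcl
# Trust policy: allow principals from the users account to assume this role,
# but only if they authenticated with MFA.
data "aws_iam_policy_document" "assume_with_mfa" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::111111111111:root"] # placeholder users-account ID
    }

    condition {
      test     = "Bool"
      variable = "aws:MultiFactorAuthPresent"
      values   = ["true"]
    }
  }
}

resource "aws_iam_role" "console_read_only" {
  name               = "console-read-only"
  assume_role_policy = data.aws_iam_policy_document.assume_with_mfa.json
}

# Read-only console access; a read-write variant would attach a broader policy.
resource "aws_iam_role_policy_attachment" "console_read_only" {
  role       = aws_iam_role.console_read_only.name
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
```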

Enforce Secure Access of AWS APIs

Since the majority of API endpoints are public-facing, it is crucial to secure them. It is always recommended to limit unauthenticated API routes by enforcing a robust authentication and authorization mechanism for accessing the APIs. Apart from leveraging various AWS built-in mechanisms to safeguard both public and private API endpoints, you should also adopt minimal security controls such as enabling MFA for use of the AWS CLI or using AWS Vault to secure key pairs.

Apart from this, there are several approaches to achieve controlled access to APIs (a sketch of one of them follows the list). These include:

  • IAM-based role and policy permissions
  • Lambda authorizers
  • Client-side SSL certificates
  • Robust web application firewall (WAF) rules
  • Throttling targets
  • JWT authorizers
  • Creating resource-based policies to allow access from specific IPs or VPCs
  • API keys
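
As one example from the list above, a hedged Terraform sketch of a JWT authorizer on an HTTP API might look like this. The API name, route, audience, and issuer URL are placeholders for your own service and identity provider.

```hcl
resource "aws_apigatewayv2_api" "example" {
  name          = "example-http-api" # placeholder API name
  protocol_type = "HTTP"
}

# Validate bearer tokens issued by your identity provider before requests reach the backend.
resource "aws_apigatewayv2_authorizer" "jwt" {
  api_id           = aws_apigatewayv2_api.example.id
  name             = "jwt-authorizer"
  authorizer_type  = "JWT"
  identity_sources = ["$request.header.Authorization"]

  jwt_configuration {
    audience = ["example-audience"]
    issuer   = "https://auth.example.com/" # placeholder issuer URL
  }
}

# Routes that reference the authorizer reject requests without a valid token.
resource "aws_apigatewayv2_route" "get_items" {
  api_id             = aws_apigatewayv2_api.example.id
  route_key          = "GET /items"
  authorization_type = "JWT"
  authorizer_id      = aws_apigatewayv2_authorizer.jwt.id
}
```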

AWS Security - the TL;DR

The as-code model for various computing components allows you to automatically, consistently, and predictably spin up deployment environments using manifest files. While the everything-as-code approach simplifies the deployment and management of resources on AWS, security can’t be ignored as part of this process and should also benefit from the guardrails automation can provide.

This article delved into the MVS approach and how it can be applied as code. In the next article of this series, we will give concrete code examples of how to bootstrap a secure AWS environment using Terraform in practice.
