Under the Shared Responsibility Model, AWS is responsible for security 'of the Cloud', while the customer is responsible for security of what is 'in the Cloud'.
This means that while AWS takes responsibility for the physical security of its data centres and, for managed services, tasks like database patching and firewall configuration, the customer is responsible for controlling who has access to their content, for access rights, and for authentication.
This post will take you through the services that AWS offers and best practices they recommend to keep what's in the Cloud safe and secure.
Identity and Access Management helps you to securely control who has access to your resources and how they access them.
- Follow best practice: enable MFA on the root account, delete the root account's access keys, and create a role (or user) with administrator permissions for day-to-day administrative tasks.
- Create Users, Groups and Roles to grant least-privilege access and assign permissions to your users.
- Use Roles to grant services like EC2 access to other AWS resources, rather than storing individual users' credentials on instances.
- Set a password policy so that users must create strong passwords and rotate them regularly.
- IAM Access Analyzer helps you identify resources in your organisation and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity.
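As a minimal sketch of the least-privilege idea above, an identity-based IAM policy can grant exactly the actions a user needs and nothing more. The bucket name here is a hypothetical example, not something from this post:

```python
import json

# Least-privilege IAM policy sketch: read-only access to a single bucket.
# "example-reports-bucket" is a hypothetical name used for illustration.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

# The JSON document is what you would attach to a user, group, or role.
print(json.dumps(read_only_policy, indent=2))
```

Because there is no `Allow` for any other action or resource, everything else is implicitly denied.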
- Use Network Access Control Lists (NACLs) to control inbound and outbound traffic at the subnet level. NACLs support both allow and deny rules and are stateless, meaning that return traffic must be explicitly allowed.
- Use Security Groups to act as a firewall at the instance level to control inbound and outbound traffic. Security Groups are stateful, meaning that return traffic is allowed automatically regardless of the rules.
- Set up VPC Flow Logs to capture information about the traffic flowing to and from network interfaces in your VPC.
- GuardDuty analyses data from VPC Flow Logs and profiles it for anomaly detection. The service can detect a brute-force attack against an EC2 instance, suspicious API calls, and other malicious or unauthorised behaviour.
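The stateless behaviour of NACLs is easy to trip over, so here is a toy model of it (a sketch only: real NACLs also match protocol and CIDR, which this ignores). Each direction is evaluated independently, lowest rule number first, with an implicit deny at the end:

```python
# Toy model of stateless NACL evaluation: rules are (number, port_range,
# action); the lowest-numbered matching rule wins, otherwise implicit deny.
def evaluate_nacl(rules, port):
    for number, (low, high), action in sorted(rules):
        if low <= port <= high:
            return action
    return "deny"  # the implicit default "*" deny rule

inbound = [(100, (443, 443), "allow")]          # allow HTTPS in
outbound = [(100, (1024, 65535), "allow")]      # allow ephemeral ports out

# The HTTPS request is allowed in...
print(evaluate_nacl(inbound, 443))      # allow
# ...but the reply only leaves because outbound ephemeral ports are open.
print(evaluate_nacl(outbound, 40000))   # allow
# With no matching outbound rule, return traffic on port 80 is dropped.
print(evaluate_nacl(outbound, 80))      # deny
```

A stateful Security Group needs no such outbound rule: replies to allowed inbound traffic are permitted automatically.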
- Create IAM policies to control access to S3, and use bucket policies to make sure buckets are kept private.
- Enable MFA Delete and use Versioning to protect objects against accidental deletion and allow earlier versions to be recovered; use Cross-Region Replication to keep copies of objects in another region.
- Consider using Amazon S3 Object Lock to prevent objects from being deleted, either for a fixed retention period or indefinitely.
- Use KMS-managed keys (SSE-KMS) or S3-managed keys (SSE-S3) for server-side encryption.
- Consider using Macie to recognise the type of data stored in S3. Macie can identify personally identifiable information, API keys, and credentials.
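One common bucket-policy pattern for keeping S3 data private in transit is to deny any request that does not arrive over TLS, using the `aws:SecureTransport` condition key. A sketch, with a hypothetical bucket name:

```python
import json

# Bucket policy sketch: deny all S3 actions on requests made without TLS.
# "example-private-bucket" is a hypothetical name used for illustration.
deny_insecure_transport = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-private-bucket",
                "arn:aws:s3:::example-private-bucket/*",
            ],
            # aws:SecureTransport is "false" for plain-HTTP requests.
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(deny_insecure_transport, indent=2))
```

An explicit `Deny` always overrides any `Allow`, so this rule holds even if another policy grants access.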
- Limit access by creating Security Groups and rules to control the inbound and outbound traffic to instances.
- Configure route tables with the minimal required network routes. For example, place only EC2 instances that need direct Internet access into subnets with routes to an Internet Gateway.
- Encrypt data stored in Elastic Block Store (EBS) as an extra layer of security.
- Create a baseline server configuration and assess each server against the baseline to identify and flag any deviations.
- Enable Amazon Inspector to check for instances that are reachable from the internet, remote root login being enabled, or vulnerable software versions installed.
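The baseline check described above can be sketched as a simple comparison. The settings and values here are hypothetical examples of a hardening baseline; in practice it might come from AWS Config rules or a CIS benchmark:

```python
# Hypothetical hardening baseline; the keys and values are illustrative.
baseline = {
    "ssh_root_login": "disabled",
    "openssl_version": "3.0.13",
    "port_22_open_to_world": False,
}

def find_deviations(server_config, baseline):
    """Return each setting where a server differs from the baseline,
    mapped to an (actual, expected) pair."""
    return {
        key: (server_config.get(key), expected)
        for key, expected in baseline.items()
        if server_config.get(key) != expected
    }

server = {
    "ssh_root_login": "enabled",   # deviation: baseline says disabled
    "openssl_version": "3.0.13",
    "port_22_open_to_world": False,
}

print(find_deviations(server, baseline))
# {'ssh_root_login': ('enabled', 'disabled')}
```

Any non-empty result flags a server that has drifted from the approved configuration.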
- Encrypt data at rest using AES-256 encryption.
- Encrypt data in transit using SSL/TLS; for RDS, a certificate is created and installed when the instance is provisioned.
- When using Redshift, enable Cluster Encryption to encrypt user-created tables.
- Enable CloudTrail to provide a history of API calls made across your account.
- Integrate CloudTrail with CloudWatch and SNS to support compliance and monitoring by setting up logs, metrics and alarms.
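As a sketch of that integration, these are the kinds of parameters you might pass to CloudWatch's `PutMetricAlarm` API to alarm on a metric that a CloudTrail metric filter populates. The namespace, metric name, and SNS topic ARN are hypothetical:

```python
# Hypothetical alarm parameters; the namespace, metric name, and topic
# ARN are illustrative, not taken from a real account.
unauthorized_calls_alarm = {
    "AlarmName": "unauthorized-api-calls",
    "Namespace": "CloudTrailMetrics",        # assumed custom namespace
    "MetricName": "UnauthorizedAPICalls",    # assumed metric filter output
    "Statistic": "Sum",
    "Period": 300,                           # evaluate 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    # SNS topic that notifies the security team; ARN is a placeholder.
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:security-alerts"],
}

# With boto3 this dict could be passed as:
#   boto3.client("cloudwatch").put_metric_alarm(**unauthorized_calls_alarm)
print(unauthorized_calls_alarm["AlarmName"])
```

The alarm fires as soon as a single unauthorised API call appears in a five-minute window and publishes to the SNS topic.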
By using the best practices and tools available, organisations can build scalable applications that are also secure and meet data protection requirements.
This post originally appeared on helenanderson.co.nz