Yaroslav Yarmoshyk

AWS Cloud Platform for highly loaded WordPress website

Today I'd like to share a simple (or not so simple), highly available, and secure AWS Cloud design for hosting a highly loaded WordPress website. It should apply equally well to other popular CMSs such as Joomla and Drupal.

For simplicity, the diagram doesn't cover the multi-OU setup.

[Diagram: AWS Cloud Platform for WordPress]

Explanations

Networking

  1. Public - hosts the load balancer and the NAT gateways (one per Availability Zone).
  2. Private - all EC2 instances live here. I skipped routing tables to simplify the schema; of course, you still need to define routes for the private subnets (see the sketch below).
  3. Isolated - home to the multi-AZ Aurora RDS cluster and the EFS mount targets.

Internet Gateway is used to communicate with the world.
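To make the private-subnet routing concrete, here is a minimal Terraform sketch of one AZ's three tiers and the default route through the NAT gateway. Everything here is illustrative: `aws_vpc.main`, `aws_eip.nat_a`, the CIDRs, and the AZ name are assumptions, not part of the original diagram.

```hcl
# One AZ's subnet tiers; CIDRs and AZ are placeholders.
resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.0.0/24"
  availability_zone       = "eu-central-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.10.0/24"
  availability_zone = "eu-central-1a"
}

resource "aws_subnet" "isolated_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.20.0/24"
  availability_zone = "eu-central-1a"
}

# One NAT gateway per AZ, living in the public subnet.
resource "aws_nat_gateway" "a" {
  allocation_id = aws_eip.nat_a.id
  subnet_id     = aws_subnet.public_a.id
}

# The routing rule the private subnets need: default route via the NAT gateway.
resource "aws_route_table" "private_a" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.a.id
  }
}

resource "aws_route_table_association" "private_a" {
  subnet_id      = aws_subnet.private_a.id
  route_table_id = aws_route_table.private_a.id
}
```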

File Storage

Elastic File System (EFS) will host the website code. It has to be mounted at every web server's document root (e.g. /var/www/html).
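A hedged sketch of the file system and one mount target (repeat the mount target per AZ); `aws_subnet.isolated_a` matches the networking sketch above, and the `efs` security group is defined in the Security section:

```hcl
# Encrypted EFS file system holding the WordPress code.
resource "aws_efs_file_system" "wordpress" {
  encrypted = true

  # Move files that haven't been read for 30 days to the cheaper IA class.
  lifecycle_policy {
    transition_to_ia = "AFTER_30_DAYS"
  }
}

# One mount target per AZ, in the isolated subnet tier.
resource "aws_efs_mount_target" "a" {
  file_system_id  = aws_efs_file_system.wordpress.id
  subnet_id       = aws_subnet.isolated_a.id
  security_groups = [aws_security_group.efs.id]
}
```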

AWS Backup is to be used to create scheduled backups.

S3 is intended to be used for static files. To be honest, I didn't test this approach, but I found a WordPress plugin that can offload uploads to S3. I'd appreciate it if whoever decides to implement this setup could drop a comment on whether it works or not.

If it doesn't work, then we'll need to add S3 File Gateway to the schema and mount the S3 bucket at the $ROOT/wp-content/uploads folder.

S3 Intelligent-Tiering should be enabled to cut storage costs: it automatically moves data to the most cost-effective access tier as access patterns change.
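A minimal sketch of the uploads bucket with a lifecycle rule that puts new objects straight into the INTELLIGENT_TIERING storage class; the bucket name is an illustrative assumption:

```hcl
resource "aws_s3_bucket" "uploads" {
  bucket = "example-wordpress-uploads" # placeholder, must be globally unique
}

# Move every object into Intelligent-Tiering right after upload.
resource "aws_s3_bucket_lifecycle_configuration" "uploads" {
  bucket = aws_s3_bucket.uploads.id

  rule {
    id     = "intelligent-tiering"
    status = "Enabled"

    filter {} # apply to the whole bucket

    transition {
      days          = 0
      storage_class = "INTELLIGENT_TIERING"
    }
  }
}
```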

Data Storage

RDS Aurora with the MySQL engine is to be used. It doesn't require much configuration apart from right-sizing. It is recommended to store the password and the read/write endpoint address in SSM Parameter Store or AWS Secrets Manager, read them on EC2 boot via userdata, and expose them as environment variables to be read by WordPress.

Automatic daily snapshots should be enabled; sometimes you'll need to restore from a snapshot instead of using point-in-time recovery.
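A minimal sketch of the cluster and these parameters, assuming a `random_password.db` resource, a DB subnet group spanning the isolated subnets, and the `rds` security group from the Security section (the Aurora instances themselves are created separately with `aws_rds_cluster_instance`):

```hcl
resource "aws_rds_cluster" "wordpress" {
  cluster_identifier      = "wordpress"
  engine                  = "aurora-mysql"
  master_username         = "wp_admin"
  master_password         = random_password.db.result
  db_subnet_group_name    = aws_db_subnet_group.isolated.name
  vpc_security_group_ids  = [aws_security_group.rds.id]
  backup_retention_period = 7 # automatic daily snapshots, kept for a week
  preferred_backup_window = "03:00-04:00"
  storage_encrypted       = true
}

# Expose the endpoint and password to the instances via SSM Parameter Store.
resource "aws_ssm_parameter" "db_endpoint" {
  name  = "/wordpress/db/endpoint"
  type  = "String"
  value = aws_rds_cluster.wordpress.endpoint
}

resource "aws_ssm_parameter" "db_password" {
  name  = "/wordpress/db/password"
  type  = "SecureString"
  value = random_password.db.result
}
```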

Public Access and Content Delivery

In order to expose the website to the internet, a combination of the following resources will be used:

  1. Application Load Balancer. The load balancer will distribute the load across the EC2 instances in the autoscaling group. Under really high traffic it can be swapped for a Network Load Balancer, which operates at layer 4 and copes better with sudden connection spikes.
  2. CloudFront is a CDN that will deliver traffic to users from the closest edge location and provide front-end caching to reduce the load on the compute layer. It will also be configured to deliver static files directly from the S3 bucket (URI: /wp-content/uploads/*); see the sketch after this list.
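A hedged sketch of such a distribution with two origins: the ALB as the default and the uploads bucket for /wp-content/uploads/*. `aws_lb.web`, the alias domain, and `aws_acm_certificate.site` (which has to live in us-east-1 for CloudFront) are assumptions:

```hcl
resource "aws_cloudfront_distribution" "site" {
  enabled = true
  aliases = ["www.example.com"] # placeholder domain

  # Default origin: the ALB in front of the web servers.
  origin {
    domain_name = aws_lb.web.dns_name
    origin_id   = "alb"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only" # SSL terminates at CloudFront
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  # Second origin: the S3 bucket with the static uploads.
  origin {
    domain_name = aws_s3_bucket.uploads.bucket_regional_domain_name
    origin_id   = "uploads"

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.uploads.cloudfront_access_identity_path
    }
  }

  default_cache_behavior {
    target_origin_id       = "alb"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
    cached_methods         = ["GET", "HEAD"]

    forwarded_values {
      query_string = true
      cookies { forward = "all" } # WordPress needs cookies for logged-in users
    }
  }

  # Serve the uploads path straight from S3.
  ordered_cache_behavior {
    path_pattern           = "/wp-content/uploads/*"
    target_origin_id       = "uploads"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]

    forwarded_values {
      query_string = false
      cookies { forward = "none" }
    }
  }

  restrictions {
    geo_restriction { restriction_type = "none" }
  }

  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate.site.arn
    ssl_support_method  = "sni-only"
  }
}
```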

Security

This area includes multiple items:

  1. CloudFront comes with AWS Shield Standard enabled. No configuration is needed at this point. However, we need to create a proper S3 bucket policy to allow reading objects only with a certain CloudFront origin access identity (see the sketch after this list). Here is the manual.
  2. AWS Web Application Firewall (WAF) with a set of rules is to be used to protect the ALB from L7 (application) attacks. A good practice is to create an IP rule set and a custom ACL linked to it, to have the ability to blacklist IPs manually (e.g. during DDoS attacks). Here is the manual.
  3. KMS was skipped to simplify the schema, but it is recommended to create a customer-managed key for every resource and use those keys to encrypt data. Resources to be covered are the following:
    1. S3 buckets
    2. EBS volumes of EC2 instances
    3. RDS volumes and snapshots
  4. Macie is a great ML-powered solution to audit the files in S3 buckets and detect any sensitive data published there (e.g. PII, passwords).
  5. AWS Certificate Manager (ACM) is probably one of the oldest AWS services; it lets you create free SSL/TLS certificates to enable traffic encryption. The certificates are to be wired to the CloudFront distribution.
  6. Security groups were skipped in the diagram to avoid visual complexity. Four security groups are to be created (see the Terraform sketch after this list):
    1. ALB - allows access to the load balancer. Port 80 is to be open to 0.0.0.0/0. Since we do SSL termination at the CloudFront level, there is no need to listen on port 443; in-transit encryption is optional inside AWS networks.
    2. Compute - allows access from the load balancer to EC2. Port 80 is open to alb-security-group-id.
    3. EFS - port 2049 (TCP) is to be opened to compute-security-group-id; EFS speaks NFSv4.1, which needs only TCP 2049 (no port 111, no UDP).
    4. RDS - port 3306 (TCP) is to be opened to compute-security-group-id
  7. The IAM role and instance profile attached to the EC2 instances must have sufficient permissions to find and mount EFS volumes. They also need read/write permissions on the S3 bucket for static files. Use the IAM policy generator to create the IAM policy.
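For item 1, a minimal sketch of the origin access identity and the bucket policy that restricts reads to it (the `uploads` bucket comes from the File Storage section):

```hcl
resource "aws_cloudfront_origin_access_identity" "uploads" {
  comment = "Read access to the WordPress uploads bucket"
}

resource "aws_s3_bucket_policy" "uploads" {
  bucket = aws_s3_bucket.uploads.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowCloudFrontRead"
      Effect    = "Allow"
      Principal = { AWS = aws_cloudfront_origin_access_identity.uploads.iam_arn }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.uploads.arn}/*"
    }]
  })
}
```

For item 6, a sketch of the four security groups with exactly the rules listed above; `aws_vpc.main` is assumed:

```hcl
resource "aws_security_group" "alb" {
  name   = "alb"
  vpc_id = aws_vpc.main.id

  ingress {
    description = "HTTP from anywhere (TLS terminates at CloudFront)"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "compute" {
  name   = "compute"
  vpc_id = aws_vpc.main.id

  ingress {
    description     = "HTTP from the load balancer only"
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "efs" {
  name   = "efs"
  vpc_id = aws_vpc.main.id

  ingress {
    description     = "NFSv4.1 from the web servers"
    from_port       = 2049
    to_port         = 2049
    protocol        = "tcp"
    security_groups = [aws_security_group.compute.id]
  }
}

resource "aws_security_group" "rds" {
  name   = "rds"
  vpc_id = aws_vpc.main.id

  ingress {
    description     = "MySQL from the web servers"
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.compute.id]
  }
}
```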

Compute

I use an autoscaling group to start instances in multiple Availability Zones, with a scaling policy based on the built-in CPU utilization metric that EC2 sends to CloudWatch. A standard approach: if the load goes up, we add more servers to handle it.
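A minimal sketch of the group and a target-tracking policy that keeps average CPU around 60%. `aws_subnet.private_b` (the second AZ's twin of the earlier networking sketch) is assumed, and the launch template is sketched further below:

```hcl
resource "aws_autoscaling_group" "web" {
  min_size            = 2
  max_size            = 10
  vpc_zone_identifier = [aws_subnet.private_a.id, aws_subnet.private_b.id]

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}

# Target tracking: the ASG adds/removes instances to hold average CPU at 60%.
resource "aws_autoscaling_policy" "cpu" {
  name                   = "keep-cpu-at-60"
  autoscaling_group_name = aws_autoscaling_group.web.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60
  }
}
```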

The missing piece of the puzzle is the AMI "golden image" that will be used to start the instances in the autoscaling group. The AMI has to have NGINX and PHP installed with the list of required modules enabled. A great tool to brew one is HashiCorp Packer.
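A rough Packer (HCL2) template as a starting point; the base-AMI filter, PHP version, and package names are assumptions that depend on your distro:

```hcl
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0.0"
    }
  }
}

# Build on top of the latest Amazon Linux 2 AMI.
source "amazon-ebs" "nginx_php" {
  ami_name      = "wordpress-web-{{timestamp}}"
  instance_type = "t3.small"
  region        = "eu-central-1"
  ssh_username  = "ec2-user"

  source_ami_filter {
    filters = {
      name                = "amzn2-ami-hvm-*-x86_64-gp2"
      virtualization-type = "hvm"
    }
    owners      = ["amazon"]
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.nginx_php"]

  # Bake in NGINX, PHP-FPM with common WordPress modules, and the EFS helper.
  provisioner "shell" {
    inline = [
      "sudo amazon-linux-extras enable nginx1 php8.0",
      "sudo yum install -y nginx php-fpm php-mysqlnd php-gd php-xml php-mbstring amazon-efs-utils",
      "sudo systemctl enable nginx php-fpm",
    ]
  }
}
```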

Additionally, the userdata script should find the EFS mount target in the current Availability Zone and mount it at the NGINX web server root, as I mentioned earlier.
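A hedged sketch of the launch template with such a userdata script. `var.golden_ami_id` (the Packer-built AMI) and `aws_iam_instance_profile.web` are assumptions; the instance profile must carry the EFS/SSM/S3 permissions from the Security section, and wp-config.php is expected to read the exported variables via getenv(). With amazon-efs-utils, mounting by file-system ID automatically resolves to the mount target in the instance's own AZ:

```hcl
resource "aws_launch_template" "web" {
  name_prefix            = "wordpress-web-"
  image_id               = var.golden_ami_id # the Packer-built golden image
  instance_type          = "t3.medium"
  vpc_security_group_ids = [aws_security_group.compute.id]

  iam_instance_profile {
    name = aws_iam_instance_profile.web.name
  }

  user_data = base64encode(<<-EOF
    #!/bin/bash
    set -euo pipefail

    # amazon-efs-utils (baked into the AMI) resolves the file-system ID to the
    # mount target in this instance's own Availability Zone.
    mount -t efs -o tls ${aws_efs_file_system.wordpress.id}:/ /var/www/html

    # Pull DB settings from SSM Parameter Store and expose them to PHP-FPM.
    REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)
    DB_HOST=$(aws ssm get-parameter --region "$REGION" --name /wordpress/db/endpoint --query Parameter.Value --output text)
    DB_PASS=$(aws ssm get-parameter --region "$REGION" --name /wordpress/db/password --with-decryption --query Parameter.Value --output text)
    echo "env[WORDPRESS_DB_HOST] = $DB_HOST" >> /etc/php-fpm.d/www.conf
    echo "env[WORDPRESS_DB_PASSWORD] = $DB_PASS" >> /etc/php-fpm.d/www.conf

    systemctl restart php-fpm nginx
  EOF
  )
}
```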

Automation possibilities

I'm not only a big fan of HashiCorp Terraform; I'm also one of its early adopters, so it is my main go-to Infrastructure as Code tool. However, all the resources I use are supported by other IaC solutions such as AWS CloudFormation and AWS CDK. You definitely should use one to avoid losing track of the resources you create.

Summary

This solution satisfies the following requirements to safely run your website in AWS Cloud:

  1. High availability (multi-AZ provisioning)
  2. On-demand capacity (autoscaling)
  3. Encryption at rest and in transit (KMS + SSL)
  4. Least-privilege access (IAM, security groups with explicit rules)
  5. Data safety (AWS Backup + Automatic snapshotting of RDS)
  6. Security audit (Macie)
  7. Secure communications (AWS Shield + WAF + 3-tier networking)

It is relatively easy to deploy code updates: all you need to do is copy the updated files over to the EFS share. No restarts are required.

The bottleneck here can be the network limitations of NFS. Under really high traffic you might need to adjust the EFS throughput settings (e.g. switch from bursting to provisioned throughput), but be aware of the additional costs.
