Master the Art of AWS: Deploy Shopware like a Pro with AWS CDK!

Sebastian Wahn for Cubesoft GmbH

In my previous blog post, I discussed the essential AWS building blocks needed for hosting a Shopware 6 shop. Now, in this follow-up post, I will delve into the process of defining these building blocks using AWS CDK (TypeScript).

To recap, the minimal necessary stack of AWS services for hosting a Shopware 6 shop includes the following building blocks:

  • OpenSearch Service
  • RDS Aurora Serverless V2
  • Elastic Filesystem
  • CloudWatch
  • Fargate
  • Application Load Balancer

In this segment, I will provide you with the step-by-step AWS CDK code required to set up these services and share some insights and considerations about the code itself. Let's get started!

Preconditions

  • ECR Registry with a pre-built Shopware 6.4 Container

Groundwork

Before we dive into setting up the individual components, the first requirement is to create a Virtual Private Cloud (VPC). The VPC will serve as the networking foundation for hosting all the necessary services.
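
All snippets below assume a recent AWS CDK v2 (aws-cdk-lib) and omit imports for brevity. As a rough sketch (the exact set depends on your CDK version), the constructs used throughout this post come from the following modules:

import { Duration, RemovalPolicy } from 'aws-cdk-lib';
import {
    Vpc, IpAddresses, SubnetType, SecurityGroup, Port,
    InstanceType, EbsDeviceVolumeType
} from 'aws-cdk-lib/aws-ec2';
import { Domain, EngineVersion } from 'aws-cdk-lib/aws-opensearchservice';
import {
    DatabaseCluster, DatabaseClusterEngine, AuroraMysqlEngineVersion,
    AuroraCapacityUnit, CfnDBCluster
} from 'aws-cdk-lib/aws-rds';
import { FileSystem } from 'aws-cdk-lib/aws-efs';
import { LogGroup, RetentionDays } from 'aws-cdk-lib/aws-logs';
import { Repository } from 'aws-cdk-lib/aws-ecr';
import {
    Cluster, Compatibility, ContainerImage, FargateService, LogDriver,
    NetworkMode, Protocol, Secret, TaskDefinition
} from 'aws-cdk-lib/aws-ecs';
import { ISecret } from 'aws-cdk-lib/aws-secretsmanager';
import {
    ApplicationListener, ApplicationListenerRule, ApplicationLoadBalancer,
    ApplicationProtocol, ApplicationTargetGroup, ListenerAction,
    ListenerCertificate, ListenerCondition, TargetType
} from 'aws-cdk-lib/aws-elasticloadbalancingv2';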

To define the VPC in AWS CDK, we can use the following code snippet:

const vpc = new Vpc(this, 'store-vpc', {
    vpcName: 'store-vpc',
    enableDnsHostnames: true,
    enableDnsSupport: true,
    ipAddresses: IpAddresses.cidr('10.0.0.0/16'),
    subnetConfiguration: [
        { name: 'public', subnetType: SubnetType.PUBLIC, cidrMask: 24 }
    ]
});

In the code above, we create a VPC named 'store-vpc' with DNS hostnames and DNS support enabled. The IpAddresses.cidr function specifies the IP address range for the VPC (here, '10.0.0.0/16'). We also define a single subnet group named 'public' with a CIDR mask of /24, which gives each subnet 256 addresses (251 usable, since AWS reserves five per subnet), plenty for this setup.

Setting up the VPC is a crucial first step, as it provides isolation and network connectivity for the subsequent components we'll be deploying. With the VPC in place, we can now proceed to configure the other essential building blocks for hosting our Shopware 6 shop.
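
A note on the single public subnet group: it keeps this example (and its NAT costs) minimal, but in a production setup you would typically add a private subnet group and place the database, EFS, and containers there. A possible variation, sketched here as an assumption rather than part of the original setup, would look like this:

subnetConfiguration: [
    { name: 'public', subnetType: SubnetType.PUBLIC, cidrMask: 24 },
    // private subnets route outbound traffic through a NAT gateway
    { name: 'private', subnetType: SubnetType.PRIVATE_WITH_EGRESS, cidrMask: 24 }
]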

OpenSearch Service

To improve the search experience for users, Shopware can use an OpenSearch cluster. It stores a search-optimized copy of shop data, which speeds up user queries. While not strictly necessary for a minimal setup, an OpenSearch service is highly recommended, as it takes the search load off the MySQL database.

Let's define the OpenSearch service using the AWS CDK code below:

const openSearchDomain = new Domain(this, 'open-search', {
    version: EngineVersion.OPENSEARCH_1_3,
    vpc: vpc,
    vpcSubnets: [
        vpc.selectSubnets({
            subnetType: SubnetType.PUBLIC,
            availabilityZones: [vpc.availabilityZones[0]]
        })
    ],
    ebs: {
        volumeSize: 10,
        volumeType: EbsDeviceVolumeType.GENERAL_PURPOSE_SSD
    },
    capacity: {
        masterNodeInstanceType: 't3.small.search',
        masterNodes: 0,
        dataNodeInstanceType: 't3.small.search',
        dataNodes: 1
    },
    removalPolicy: RemovalPolicy.DESTROY
});

In the code snippet above, we create an OpenSearch domain named 'open-search' running OpenSearch version 1.3. We associate the domain with the VPC we created earlier via the vpc property. Since we run only a single data node, exactly one subnet must be provided, so we select the public subnet in the first availability zone.

To configure storage for the OpenSearch domain, we set the ebs property with a volume size of 10 GB and the volume type as GENERAL_PURPOSE_SSD.

For the capacity configuration, we define the instance types for the master and data nodes. In this case, we use 't3.small.search' for both and run a single data node with no dedicated master nodes (masterNodes: 0). That is sufficient for development, but production workloads should use dedicated master nodes and multiple data nodes.

Lastly, we set the removalPolicy to RemovalPolicy.DESTROY, so the OpenSearch domain is deleted together with the stack. Choose RemovalPolicy.RETAIN instead if you don't want the domain (and its data) to be destroyed.

Setting up the OpenSearch service enhances the search capabilities of your Shopware system, resulting in faster and more efficient user queries. Now that we have the OpenSearch service in place, let's move on to configuring the next building block.

RDS Aurora Serverless V2

The MySQL database serves as the single source of truth for Shopware. In this example, we will use an RDS Aurora Serverless v2 cluster as our database.

First, let's set up the necessary security group for the database:

const databaseSecurityGroup = new SecurityGroup(this, 'database-security-group', {
    securityGroupName: 'database-security-group',
    vpc: vpc
});

In the code above, we create a security group named 'database-security-group' associated with our VPC.

Next, we'll define the Aurora Serverless V2 database cluster. Since there is no high-level construct available for Serverless V2 yet, we need to use a bit of custom code:

// Serverless V2 is not available as higher level construct yet
// https://github.com/aws/aws-cdk/issues/20197
// code snippet from: https://github.com/aws/aws-cdk/issues/20197#issuecomment-1360639346
const databaseCluster = new DatabaseCluster(this, 'aurora-serverless-v2', {
    clusterIdentifier: 'store-aurora-database',
    engine: DatabaseClusterEngine.auroraMysql({ version: AuroraMysqlEngineVersion.VER_3_03_0 }),
    instances: 1,
    instanceProps: {
        publiclyAccessible: true,
        instanceType: new InstanceType('serverless'),
        vpc: vpc,
        vpcSubnets: {
            subnetType: SubnetType.PUBLIC // Set subnet type
        },
        securityGroups: [
        databaseSecurityGroup
        ],
        deleteAutomatedBackups: true // Enable deletion of automated backups
    },
    removalPolicy: RemovalPolicy.SNAPSHOT,
    backup: {
        retention: Duration.days(30)
    },
    defaultDatabaseName: 'shopware'
});

In the above code snippet, we create a DatabaseCluster named 'aurora-serverless-v2' with the cluster identifier 'store-aurora-database'. We specify the engine as Aurora MySQL with the desired version.

For the instance properties, we set publiclyAccessible to true, indicating that the database can be accessed publicly. We use the 'serverless' instance type and associate it with the VPC and public subnets. The security group for the database is set to the previously created databaseSecurityGroup, ensuring secure access.

We enable the deletion of automated backups and set the retention period to 30 days. The default database name is set to 'shopware'.

Since AWS CDK does not yet expose the Serverless V2 scaling configuration as a high-level property, we use an escape hatch and set it directly on the underlying CloudFormation resource:

(databaseCluster.node.findChild('Resource') as CfnDBCluster).serverlessV2ScalingConfiguration = {
    minCapacity: 0.5,
    maxCapacity: AuroraCapacityUnit.ACU_2
};

In the code snippet above, we set the minimum capacity to 0.5 ACU and the maximum capacity to 2 ACU.

Setting up the RDS Aurora Serverless V2 provides a scalable and efficient MySQL database for your Shopware system. With the database in place, we can move on to configuring the next building block.
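
As an optional addition (not part of the original setup), you can export the cluster endpoint and the name of the generated secret as stack outputs, which is handy when connecting manually after deployment. CfnOutput is imported from 'aws-cdk-lib':

new CfnOutput(this, 'database-endpoint', {
    value: databaseCluster.clusterEndpoint.hostname
});
new CfnOutput(this, 'database-secret-name', {
    // the generated secret holds username, password, host, port and dbname
    value: databaseCluster.secret?.secretName ?? 'no secret generated'
});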

Elastic Filesystem

In order to have a persistent filesystem that can be shared among containers and survive container restarts, we need to set up an Elastic Filesystem (EFS). This allows multiple Shopware containers to access the required files and ensures consistency across instances. The list of directories that need to be shared can be found in the Shopware documentation at Shopware Storage Docs.
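
For illustration only (the authoritative list lives in the linked Shopware documentation and may change between versions), the shared directories typically look something like this:

// Hypothetical list: verify against the Shopware storage docs before relying on it
const sharedDirectories = [
    'files',            // generated documents, e.g. invoices
    'theme',            // compiled theme assets
    'public/media',     // product images and other media
    'public/thumbnail', // generated thumbnails
    'public/sitemap'    // generated sitemaps
];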

Let's define the EFS setup using the following code:

// This user ID has to match the user ID created in the Dockerfile
const shopwareUserId = '1000';
const shopwareDirectoryRoot = '/shopware';

const efs = new FileSystem(this, 'store-efs', { vpc: vpc, fileSystemName: 'store-efs' });
const accessPoint = efs.addAccessPoint('store-efs-access-point', {
    createAcl: {
        ownerGid: shopwareUserId,
        ownerUid: shopwareUserId,
        permissions: '750'
    },
    path: shopwareDirectoryRoot,
    posixUser: {
        gid: shopwareUserId,
        uid: shopwareUserId
    }
});

In the code snippet above, we create an EFS file system named 'store-efs' associated with our VPC. We define an access point for the EFS file system, named 'store-efs-access-point', to manage permissions and file access.

To ensure that Shopware has the necessary permissions to manage files in EFS, we set the createAcl property with the owner's group ID (shopwareUserId) and owner's user ID (shopwareUserId), along with the desired permissions (750).

We also specify the path of the shared root directory (shopwareDirectoryRoot) and set the posixUser property with the group ID and user ID matching the ones defined in the Dockerfile (shopwareUserId). If your container runs as the root user, make sure to use the correct ID (0) here!

By configuring the EFS access point, we ensure that Shopware has the required permissions to manage files in the EFS file system. This allows for seamless sharing of files among containers and ensures data consistency.

With the Elastic Filesystem set up, we are now ready to proceed with configuring the next building block.

CloudWatch

While not mandatory for the setup, CloudWatch provides valuable insights when diagnosing issues with active containers. It can store error logs, access logs, debug logs, and crash reports, among other things.

Let's configure CloudWatch logging using the following code:

const logGroup = new LogGroup(this, 'shopware-loggroup', {
    logGroupName: `/org/backend/shopware`,
    retention: RetentionDays.ONE_MONTH,
    removalPolicy: RemovalPolicy.DESTROY
});

In the code snippet above, we create a CloudWatch LogGroup named 'shopware-loggroup' with the log group name '/org/backend/shopware'. We set the retention period to one month, after which log data is automatically expired. The removalPolicy is set to RemovalPolicy.DESTROY so that the log group is deleted together with the stack.

It's important to note that Shopware logs are not automatically sent to the CloudWatch LogGroup. To redirect the logs to stdout/stderr and make them available in the CloudWatch log group, some additional configuration is required within your container.

For example, if your Shopware container runs via supervisord, you can add the following snippet to tail all dev.log entries to /dev/stdout:

[program:dev_logs]
command=tail -f /sw6/var/log/dev.log
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=false
startretries=0

By configuring your container to redirect logs to stdout/stderr, the logs will be captured by CloudWatch and made available in the specified log group.

Utilizing CloudWatch provides enhanced monitoring capabilities, allowing you to diagnose issues and gain valuable insights into the behavior of your Shopware system.
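
If you want to go one step further, you can build simple alerting on top of the log group. In the sketch below, the namespace, metric name, and threshold are placeholders of my choosing, not part of the original setup; a metric filter counts error lines and feeds an alarm:

import { FilterPattern, MetricFilter } from 'aws-cdk-lib/aws-logs';
import { Alarm, ComparisonOperator } from 'aws-cdk-lib/aws-cloudwatch';

// count log lines containing "ERROR" or "CRITICAL"
const errorFilter = new MetricFilter(this, 'shopware-error-filter', {
    logGroup: logGroup,
    metricNamespace: 'Shopware',   // placeholder namespace
    metricName: 'ErrorCount',      // placeholder metric name
    filterPattern: FilterPattern.anyTerm('ERROR', 'CRITICAL')
});

new Alarm(this, 'shopware-error-alarm', {
    metric: errorFilter.metric({ period: Duration.minutes(5) }),
    threshold: 5,                  // placeholder threshold
    evaluationPeriods: 1,
    comparisonOperator: ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD
});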

Fargate

Fargate is the main building block used to schedule and run the Shopware container.

Let's go through the code step by step:

const ecsSecurityGroup = new SecurityGroup(this, 'ecs-security-group', {
    vpc: vpc,
    securityGroupName: 'ecs-security-group',
    allowAllOutbound: true
});
databaseSecurityGroup.addIngressRule(
    ecsSecurityGroup,
    Port.tcp(databaseCluster.clusterEndpoint.port)
);

In this code snippet, we create an ECS security group named 'ecs-security-group' and allow all outbound traffic. We also add an ingress rule to allow inbound traffic from the ECS security group to the database cluster's port.

const volumeName = `shopware-efs`;
const cpu = 1024;
const memory = 2048;
const shopwareContainerRepository = Repository.fromRepositoryName(
    this,
    'shopware-repository',
    'shopware'
);
const ecsCluster = new Cluster(this, 'ecs-cluster', {
    vpc: vpc,
    clusterName: `shopware-cluster`,
    containerInsights: true
});
const shopwareTaskDefinition = new TaskDefinition(this, 'shopware-taskdefinition', {
    compatibility: Compatibility.FARGATE,
    cpu: cpu.toString(),
    memoryMiB: memory.toString(),
    networkMode: NetworkMode.AWS_VPC,
    family: 'shopware',
    volumes: [
        {
            name: volumeName,
            efsVolumeConfiguration: {
                fileSystemId: accessPoint.fileSystem.fileSystemId,
                rootDirectory: '/',
                transitEncryption: 'ENABLED',
                authorizationConfig: {
                    accessPointId: accessPoint.accessPointId,
                    iam: 'ENABLED'
                }
            }
        }
    ]
});
shopwareContainerRepository.grantPull(shopwareTaskDefinition.obtainExecutionRole());

In this section, we define the necessary configurations for running the Shopware container using Fargate. We specify the CPU and memory requirements, create a repository object for the container image, create an ECS cluster, and define the task definition for the Shopware container. We also configure the EFS volume for sharing files between containers.

const containerApplicationPort = 8000;
const containerHealthPort = 8001;
const applicationDomain = 'myshop.localhost';
shopwareTaskDefinition
    .addContainer('shopware-container', {
        image: ContainerImage.fromEcrRepository(shopwareContainerRepository, 'dev-tag'),
        cpu: cpu,
        memoryLimitMiB: memory,
        containerName: 'shopware',
        essential: true,
        entryPoint: ['./bin/custom-entrypoint.sh'],
        portMappings: [
            {
                containerPort: containerApplicationPort,
                hostPort: containerApplicationPort,
                protocol: Protocol.TCP
            },
            {
                containerPort: containerHealthPort,
                hostPort: containerHealthPort,
                protocol: Protocol.TCP
            }
        ],
        environment: {
            APP_SECRET: '1234', // use a real, generated secret outside of local development
            APP_URL: `https://${applicationDomain}`,
            APP_ENV: 'dev',
            TRUSTED_PROXIES: 'REMOTE_ADDR',
            SHOPWARE_ES_ENABLED: '1',
            SHOPWARE_ES_HOSTS: openSearchDomain.domainEndpoint,
            SHOPWARE_ES_INDEXING_ENABLED: '1',
            SHOPWARE_ES_INDEX_PREFIX: 'sw',
            SHOPWARE_ES_PORT: '443',
            SHOPWARE_ES_PROTOCOL: 'https',
            SHOPWARE_CDN_STRATEGY_DEFAULT: 'id',
            SHOPWARE_ES_THROW_EXCEPTION: '1',
            MAILER_URL: 'smtp://localhost:1025'
        },
        secrets: {
            DATABASE_PASSWORD: Secret.fromSecretsManager(
                databaseCluster.secret as ISecret,
                'password'
            ),
            DATABASE_USER: Secret.fromSecretsManager(
                databaseCluster.secret as ISecret,
                'username'
            ),
            DATABASE_NAME: Secret.fromSecretsManager(
                databaseCluster.secret as ISecret,
                'dbname'
            ),
            DATABASE_HOST: Secret.fromSecretsManager(
                databaseCluster.secret as ISecret,
                'host'
            ),
            DATABASE_PORT: Secret.fromSecretsManager(
                databaseCluster.secret as ISecret,
                'port'
            )
        },
        logging: LogDriver.awsLogs({
            streamPrefix: `shopware`,
            logGroup: logGroup
        })
    })
    .addMountPoints({
        sourceVolume: volumeName,
        containerPath: '/mnt/efs',
        readOnly: false
    });

In this part, we add the Shopware container to the task definition. We specify the container image, CPU and memory limits, container name, entry point (custom-entrypoint.sh script), port mappings, environment variables, secrets, and logging configuration. We also add a mount point for the EFS volume.

const shopwareFargateService = new FargateService(this, 'shopware-fargate-service', {
    cluster: ecsCluster,
    serviceName: 'shopware',
    taskDefinition: shopwareTaskDefinition,
    assignPublicIp: true,
    vpcSubnets: vpc.selectSubnets({
        subnetType: SubnetType.PUBLIC
    }),
    securityGroups: [ecsSecurityGroup]
});

databaseCluster.connections.allowDefaultPortFrom(shopwareFargateService);
efs.connections.allowDefaultPortFrom(shopwareFargateService);
// grant data-plane access to the task role, which the application uses at runtime
openSearchDomain.grantReadWrite(shopwareTaskDefinition.taskRole);
openSearchDomain.connections.allowFrom(shopwareFargateService, Port.tcp(443));

Finally, we create the Fargate service for running the Shopware container. We specify the ECS cluster, service name, task definition, whether to assign a public IP, VPC subnets, and security groups.

Note that the entry point references a script that performs some initialization work required before the shop is started and available.

For example, Shopware requires the database connection string in the form mysql://${DATABASE_USER}:${DATABASE_PASSWORD}@${DATABASE_HOST}:${DATABASE_PORT}/${DATABASE_NAME}. With the setup above, however, the database credentials are only injected from Secrets Manager when the container starts, so custom-entrypoint.sh assembles the URL at runtime and runs a setup of the form:

bin/console system:setup -n -f \
      --database-url=mysql://${DATABASE_USER}:${DATABASE_PASSWORD}@${DATABASE_HOST}:${DATABASE_PORT}/${DATABASE_NAME} \
      --app-env=${APP_ENV} \
      --app-url=${APP_URL} \
      --blue-green=0 \
      --http-cache-enabled=1 \
      --http-cache-ttl=7200 \
      --cdn-strategy=${SHOPWARE_CDN_STRATEGY_DEFAULT} \
      --mailer-url=${MAILER_URL} \
      --es-enabled=${SHOPWARE_ES_ENABLED} \
      --es-hosts=${SHOPWARE_ES_HOSTS} \
      --es-indexing-enabled=${SHOPWARE_ES_INDEXING_ENABLED} \
      --es-index-prefix=${SHOPWARE_ES_INDEX_PREFIX}

This writes the correct database URL to the .env file, along with the other properties whose values come from environment variables.

Using the DatabaseCluster's generated Secrets Manager secret is the proper way to resolve the credentials. An alternative is to resolve them while the CDK app runs, i.e. at deployment time. This has the downside that the credentials end up exposed in the CloudFormation state, and it is therefore not recommended.
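
To make the downside concrete, a deploy-time variant would look roughly like the following sketch, shown only as an illustration of the anti-pattern and not as a recommendation:

// Anti-pattern sketch: rendering the secret value outside of the container
// secrets mechanism. The resolved password becomes readable in the ECS task
// definition and the CloudFormation state instead of staying in Secrets Manager.
const insecurePassword = databaseCluster.secret
    ?.secretValueFromJson('password')
    .unsafeUnwrap(); // the method name makes the risk explicit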

Application Load Balancer

In this segment, an Application Load Balancer (ALB) is used to expose the Shopware service to the internet. Let's break down the code:

const alb = new ApplicationLoadBalancer(this, 'application-load-balancer', {
    vpc: vpc,
    deletionProtection: false,
    internetFacing: true
});

Here, an Application Load Balancer named 'application-load-balancer' is created, associated with the specified VPC. The deletionProtection property is set to false, and internetFacing is set to true to allow external internet traffic.

const httpsListener = new ApplicationListener(this, 'https-listener', {
    loadBalancer: alb,
    port: 443,
    protocol: ApplicationProtocol.HTTPS,
    certificates: [
        // TODO: set the Certificate ARN or references your Certificate here
        ListenerCertificate.fromArn('...')
    ],
    defaultAction: ListenerAction.redirect({
        host: 'myshop.localhost',
        protocol: ApplicationProtocol.HTTPS,
        port: '443',
        query: '',
        path: '/'
    })
});
const httpListener = new ApplicationListener(this, 'http-listener', {
    loadBalancer: alb,
    port: 80,
    protocol: ApplicationProtocol.HTTP,
    // redirect all HTTP to the Shops HTTPS listener
    defaultAction: ListenerAction.redirect({
        host: 'myshop.localhost',
        protocol: ApplicationProtocol.HTTPS,
        port: '443',
        query: '',
        path: '/'
    })
});

Two listeners are created: httpsListener for HTTPS traffic on port 443 and httpListener for HTTP traffic on port 80, both attached to the ALB. The HTTPS listener requires a valid SSL certificate in the certificates property. Both listeners define a default action that redirects any request not matched by a listener rule to https://myshop.localhost/.
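
The certificate ARN left as a TODO above can come from AWS Certificate Manager. The following sketch assumes your domain is hosted in Route 53; zone and domain names are placeholders:

import { Certificate, CertificateValidation } from 'aws-cdk-lib/aws-certificatemanager';
import { HostedZone } from 'aws-cdk-lib/aws-route53';

// placeholder zone and domain: replace with your own
const hostedZone = HostedZone.fromLookup(this, 'shop-zone', { domainName: 'example.com' });
const certificate = new Certificate(this, 'shop-certificate', {
    domainName: 'myshop.example.com',
    validation: CertificateValidation.fromDns(hostedZone) // DNS-validated certificate
});
// then, in the listener: certificates: [ListenerCertificate.fromCertificateManager(certificate)]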

const shopwareTargetGroup = new ApplicationTargetGroup(this, 'shopware-target-group', {
    vpc: vpc,
    port: containerApplicationPort,
    targetType: TargetType.IP,
    healthCheck: {
        enabled: true,
        path: '/ping',
        port: containerHealthPort.toString(),
        interval: Duration.seconds(300),
        timeout: Duration.seconds(120),
        healthyThresholdCount: 3,
        unhealthyThresholdCount: 10,
        healthyHttpCodes: '200'
    }
});

An Application Target Group named 'shopware-target-group' is created. It is associated with the specified VPC and listens on the containerApplicationPort for incoming traffic. The target type is set to IP, indicating that it targets individual IP addresses of the Fargate containers. The health check configuration defines the path, port, interval, timeout, and threshold settings to determine the health of the targets.

shopwareFargateService.attachToApplicationTargetGroup(shopwareTargetGroup);

The Shopware Fargate service is attached to the shopwareTargetGroup to direct traffic to the Fargate containers.

new ApplicationListenerRule(this, 'shopware-target-group-listener-rule', {
    listener: httpsListener,
    priority: 1,
    conditions: [ListenerCondition.hostHeaders([applicationDomain])],
    targetGroups: [shopwareTargetGroup]
});
shopwareFargateService.connections.allowFrom(alb, Port.tcp(containerApplicationPort));
shopwareFargateService.connections.allowFrom(alb, Port.tcp(containerHealthPort));

An Application Listener Rule is created to associate the httpsListener with the shopwareTargetGroup. It specifies a priority and condition based on the host header applicationDomain to route traffic to the target group.

Lastly, permissions are granted to allow traffic from the ALB to the Fargate service on both the container application port and health check port.

With the Application Load Balancer, listeners, and target group set up, the Shopware service is now exposed to the internet, enabling traffic routing and load balancing to the Fargate containers.

Conclusion

In this blog post, we explored the process of deploying Shopware, a popular e-commerce platform, using AWS CDK (Cloud Development Kit) and various AWS services. We went through the setup steps, including creating a VPC, deploying an Aurora Serverless v2 database, configuring Amazon OpenSearch for search functionality, setting up an Elastic File System for shared storage, utilizing CloudWatch for logging and monitoring, deploying the Shopware container with AWS Fargate, and finally, exposing the service to the internet using an Application Load Balancer.

By leveraging the power of AWS CDK and the comprehensive range of AWS services, we were able to create a scalable and resilient infrastructure for running Shopware. This setup enables seamless deployment, management, and scaling of the e-commerce platform, while ensuring high availability, security, and performance.

With the provided code snippets and explanations, developers can follow along and adapt the solution to their specific requirements, extending and customizing it further as needed. AWS CDK's infrastructure-as-code approach simplifies the deployment process and promotes repeatability and automation.

By adopting this architecture, e-commerce businesses can leverage the benefits of AWS's robust cloud infrastructure, enabling them to focus on their core business logic and deliver a seamless shopping experience to their customers.

We hope this blog post serves as a helpful guide for deploying Shopware on AWS using AWS CDK, enabling businesses to harness the power of AWS to build scalable and efficient e-commerce solutions.

Hey there, dear readers! Just a quick heads-up: we're code whisperers, not Shakespearean poets, so we've enlisted the help of a snazzy AI buddy to jazz up our written word a bit. Don't fret, the information is top-notch, but if any phrases seem to twinkle with literary brilliance, credit our bot. Remember, behind every great blog post is a sleep-deprived developer and their trusty AI sidekick.
