Max Ritter

Serverless on AWS with CDK #1: App Runner with VPC Integration

This is part one of a four-part series on deploying serverless applications on AWS with the Cloud Development Kit (CDK); it covers App Runner with VPC integration. Each post goes into detail about one particular AWS service for running the application, as well as a sample application that uses a particular architectural pattern or framework.

The next posts of this series will be about:

  • #2: AWS Lambda with Hexagonal Service Architecture using Golang and DynamoDB
  • #3: AWS ECS on Fargate with Blue/Green Deployments
  • #4: AWS EKS on Fargate with cdk8s

Content of the four-part series

AWS App Runner

Introduced in May 2021, AWS App Runner is another managed container service for the cloud. Its principal use cases are web applications and APIs. Like its cousins DigitalOcean App Platform, Heroku, and Google Cloud Run, it is designed so you don't have to worry about scaling or infrastructure while using the service.

Until February 2022, the service was barely usable for production scenarios, as it had no VPC support until then. Luckily this has changed in the meantime, and there is also CDK and Terraform support, so we can set it up using Infrastructure as Code. If you are not familiar with those tools, another and definitely easier option is AWS Copilot.

How does App Runner compare to Lambda, ECS, or EKS? The point is clearly keeping management overhead low. While ECS and EKS are nice, they require a lot of components to set up. EKS is even more complex, as Kubernetes is a topic of its own and requires a lot of effort to reach a stable and secure setup. Lambda might be a good solution for many use cases, but it has limits on the maximum number of concurrent executions, and the cold start issue makes it unsuitable for some scenarios.

Behind the scenes, App Runner uses an Amazon ECS cluster and Fargate to execute your containers. Compared to using those services directly, App Runner is a lot easier to get into, but you lose some of the more fine-grained configuration options:

App Runner Building Blocks

Also, cost estimation for App Runner is far simpler: AWS charges fixed CPU and memory fees per second. With $0.007 per GB-hour for provisioned container instances, plus $0.064 per vCPU-hour and $0.007 per GB-hour for active instances, a high-volume production app (80 requests per active container instance) with 1 vCPU and 2 GB comes out at roughly $102 per month. For a development or test app (traffic of 2 requests per second for 2 hours each day), monthly costs can go down to about $4.80.
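
Since the fee model is just a linear combination of those rates, a back-of-the-envelope estimate is easy to script. Here is a minimal TypeScript sketch; all usage numbers are hypothetical assumptions, only the rates come from the pricing above:

// All usage numbers below are hypothetical assumptions; only the rates come from the pricing above.
const memoryGb = 2; // memory per container instance
const vCpu = 1; // vCPU per container instance
const provisionedHours = 730; // one instance kept warm for the whole month
const activeHours = 200; // hours per month the instance actively serves requests

const provisionedGbHourRate = 0.007; // $ per GB-hour for provisioned (warm) memory
const activeVCpuHourRate = 0.064; // $ per vCPU-hour while active
const activeGbHourRate = 0.007; // $ per GB-hour while active

const monthlyCost =
  memoryGb * provisionedHours * provisionedGbHourRate +
  activeHours * (vCpu * activeVCpuHourRate + memoryGb * activeGbHourRate);

console.log(`Estimated monthly cost: $${monthlyCost.toFixed(2)}`); // roughly $25.82 for these assumptions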

App Runner Key Features

The key features of AWS App Runner are:

  • Simple autoscaling: instances are started and stopped as demand changes, between configurable min and max limits (a CDK sketch of such a scaling configuration follows this list).
  • Load balancing: the service includes a transparent, non-configurable load balancer.
  • SSL enabled: you get HTTPS endpoints for all your applications with AWS-managed certificates. You don’t need to issue or renew certificates.
  • Build service: you can push your own images or let AWS build them for you from code.
  • Persistent URLs: the service assigns randomly-generated URLs for each environment. You can optionally map them to domains of your own.
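
As referenced in the autoscaling bullet above, the scaling limits can themselves be managed as code. A minimal CDK sketch, assuming a recent aws-cdk-lib that ships the L1 CfnAutoScalingConfiguration construct; names and numbers are placeholder assumptions:

import { aws_apprunner } from "aws-cdk-lib";

// Placeholder limits; assumes a recent aws-cdk-lib that ships this L1 construct.
const scalingConfig = new aws_apprunner.CfnAutoScalingConfiguration(this, "AppRunnerScaling", {
  autoScalingConfigurationName: "demo-scaling",
  minSize: 1, // instances kept provisioned (warm)
  maxSize: 5, // upper limit of running instances
  maxConcurrency: 80, // concurrent requests per instance before scaling out
});

// The service references it via its ARN, e.g.:
// autoScalingConfigurationArn: scalingConfig.attrAutoScalingConfigurationArn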

Sample Application

Overview

Let's get our hands dirty and see how App Runner works in reality. For the sake of this demo, we are going to use the following technologies:

  • AWS CDK 2.x (For infrastructure and application deployment)
  • AWS App Runner (Executing our container stored in ECR with ECS on Fargate)
  • AWS Aurora Serverless (PostgreSQL Database)
  • AWS ECR (Container storage)
  • Golang (Perfect for backend services and cloud native apps)
  • Gin Web Framework (Awesome web framework for Go with focus on performance and productivity)
  • GORM ORM Library (ORM wrapper that works with many databases; we use it for the PostgreSQL connection)
  • Projen (Sets up our CDK project)

You can find all the code in my repository. Our architecture will look like this:

Architecture of our demo application

Application

To get things started, let's review the application. It is quite simple and is designed to show off the App Runner VPC integration feature by implementing a to-do list API with Gin and GORM for Postgres.

You can hit the following endpoints:

Method   Route        Body
GET      /tasks
POST     /tasks       {"title": "task title"}
DELETE   /tasks/:id
PUT      /tasks/:id   {"title": "task title", "completed": true}
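
Once the service is deployed, these endpoints can be exercised with any HTTP client. A small TypeScript sketch using the built-in fetch of Node 18+; the base URL is a placeholder for the URL App Runner assigns to the service:

// Base URL is a placeholder for the URL App Runner assigns to the service.
const baseUrl = "https://<your-service>.eu-west-1.awsapprunner.com/api/v1";

async function demo(): Promise<void> {
  // Create a new task
  await fetch(`${baseUrl}/tasks/`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ title: "task title" }),
  });

  // List all tasks
  const tasks = await (await fetch(`${baseUrl}/tasks/`)).json();
  console.log(tasks);
}

demo().catch(console.error);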

The main function fetches the credentials for the Serverless Aurora PostgreSQL DB from AWS Secrets Manager, then initializes the DB connection using GORM, and finally launches the Gin web server for the REST API and HTTP health checks:

func main() {
    log.Println("Getting secrets..")
    rdsSecret := utils.GetSecret()

    log.Println("Init DB connection..")
    db.Init(rdsSecret)

    log.Println("Starting server..")
    r := gin.Default()

    //HTTP Health Check
    r.GET("/", func(c *gin.Context) {
        c.String(http.StatusOK, "OK")
    })

    //REST API
    v1 := r.Group("/api/v1")
    {
        tasks := v1.Group("/tasks")
        {
            tasks.GET("/", controllers.GetTasks)
            tasks.POST("/", controllers.CreateTask)
            tasks.PUT("/:id", controllers.UpdateTask)
            tasks.DELETE("/:id", controllers.DeleteTask)
        }
    }

    r.Run(":8080") // listen and serve on 0.0.0.0:8080
}

In the v1 group, you see the definition of the controllers for the CRUD functionality of the to-do list API. They are implemented in the src/app/controllers/tasks.go file. The connection to the database is defined in src/app/db/db.go, and the task model for the to-dos is in src/app/models/task.go. There are also some common utils and types for the DB secret in src/app/types/secrets.go and src/app/utils/secrets.go.

The Dockerfile to build the application using Go 1.18 and including SSL certificates is defined like this:

# Build environment
FROM golang:1.18.0-alpine3.15 as builder
RUN apk update
RUN apk add -U --no-cache ca-certificates && update-ca-certificates
WORKDIR /app
COPY . .
ENV GO111MODULE=on
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o server .

# Execution environment
FROM scratch
EXPOSE 8080
WORKDIR /app
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /app/server /usr/bin/
ENTRYPOINT ["server"]

The image is built automatically by the AWS CDK and has a total size of only 5.42 MB! That's one of the big advantages of Go: the application is compiled for the target architecture and can therefore run without any dependencies. A similar Python application with FastAPI would easily exceed 100 MB.

Infrastructure

As AWS App Runner is only available in a limited number of regions so far, we will go for Ireland (eu-west-1) instead of the default for Germany, which would be Frankfurt (eu-central-1).
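
Pinning the stack to that region is simply a matter of passing an env to the stack when instantiating it. A minimal sketch, where the stack class, import path, and account ID are placeholders:

import { App } from "aws-cdk-lib";
// Placeholder import path and stack class name.
import { AppRunnerStack } from "./apprunner-stack";

const app = new App();

// Placeholder account ID; the region pins the deployment to Ireland.
new AppRunnerStack(app, "apprunner-demo", {
  env: { account: "123456789012", region: "eu-west-1" },
});

app.synth();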

Projen was used to create the base files for the AWS CDK 2.x IaC code. Apart from CDK, it can also generate projects for CDK8s and CDKTF, which are used to deploy Kubernetes Manifests and Terraform Code using modern programming languages supported by the jsii compiler.
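
For reference, the Projen setup boils down to a single .projenrc.ts file. A minimal sketch with hypothetical name and version values:

import { awscdk } from "projen";

// Hypothetical project name and version values; projen generates package.json,
// tsconfig, tasks, and the CDK app skeleton from this single file.
const project = new awscdk.AwsCdkTypeScriptApp({
  name: "cdk-apprunner-demo",
  defaultReleaseBranch: "main",
  cdkVersion: "2.20.0",
});

project.synth();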

We need a VPC to host the Serverless Aurora Database in a private subnet. To create our database cluster in RDS, we're going to use the ServerlessCluster construct. Once again this construct is going to create many resources on our behalf, with only a few lines of code on our end defining our requirements:

const vpc = new ec2.Vpc(this, "AppRunnerVPC");

const dbCluster = new rds.ServerlessCluster(this, "AppRunnerDatabase", {
  engine: rds.DatabaseClusterEngine.auroraPostgres({
    version: rds.AuroraPostgresEngineVersion.VER_10_14,
  }),
  vpc: vpc,
  enableDataApi: true,
  removalPolicy: RemovalPolicy.DESTROY,
  scaling: {
    autoPause: Duration.seconds(0),
  },
});

const databaseName = "tasks";
const createDatabase = new cr.AwsCustomResource(this, "RDSCreateDatabase", {
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
    resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
  logRetention: RetentionDays.ONE_WEEK,
  onCreate: {
    service: "RDSDataService",
    action: "executeStatement",
    physicalResourceId: cr.PhysicalResourceId.of(
      dbCluster.clusterIdentifier
    ),
    parameters: {
      resourceArn: dbCluster.clusterArn,
      secretArn: dbCluster.secret?.secretArn,
      sql: `CREATE DATABASE ${databaseName} OWNER postgres;`,
    },
  },
});

At the time of writing this walkthrough, the App Runner constructs available are L1, which means they map directly to CloudFormation. There are a few things to discuss here, so I want to start with how the App Runner service connects to the VPC. When connecting your service to an existing VPC, you need to create a VPC Connector in App Runner. The connector is what allows your App Runner service to egress traffic into your VPC.

For the connection to be established, a couple of parameters are required. First, you need to define the subnet IDs that you want to egress into. Next, you attach security groups to control network access into the VPC. You can attach multiple security groups and multiple subnets to a VPC connector, depending on the use case:

// Create an App Runner Service with a VPC Connector
const appRunnerVpcConnector = new aws_apprunner.CfnVpcConnector(
  this,
  "AppRunnerVPCCon",
  {
    subnets: vpc.selectSubnets({
      subnetType: SubnetType.PRIVATE_WITH_NAT,
    }).subnetIds,
    securityGroups: [
      dbCluster.connections.securityGroups[0].securityGroupId,
    ],
    vpcConnectorName: "AppRunnerVPCConnector",
  }
);

Next, we need to talk about the permission model: what permissions App Runner itself needs (not our code, but the AWS App Runner control plane) versus what permissions our service (our code/container) needs while running. This is a bit confusing, so let's break it down:

The App Runner service role: as the name says, this is the role the App Runner service itself (not your application) uses to make AWS API calls on our behalf. In this case, we are building an image-based service, which requires App Runner to pull container images from Amazon ECR.

const appRunnerServiceRole = new iam.Role(this, "AppRunnerServiceRole", {
  assumedBy: new iam.ServicePrincipal("build.apprunner.amazonaws.com"),
});

appRunnerServiceRole.addManagedPolicy(
  iam.ManagedPolicy.fromAwsManagedPolicyName(
    "service-role/AWSAppRunnerServicePolicyForECRAccess"
  )
);

The App Runner instance role: this is the role for OUR code, meaning that any AWS API calls made from our application need the corresponding IAM policies attached to this role. In our code example, we interact with AWS Secrets Manager, so we've added that permission to the IAM policy attached to this role:

const appRunnerInstanceRole = new iam.Role(this, "AppRunnerInstanceRole", {
  assumedBy: new iam.ServicePrincipal("tasks.apprunner.amazonaws.com"),
  inlinePolicies: {
    secretsManager: new iam.PolicyDocument({
      statements: [
        new iam.PolicyStatement({
          actions: ["secretsmanager:GetSecretValue"],
          resources: [dbSecrets.secretArn],
        }),
      ],
    }),
  },
});

Finally we have our App Runner service. This is where we define the configuration of our service which includes how to build our service (source code or from a container image), which VPC connector to use (if any), auto scaling, service and instance roles, and so on:

const appRunnerService = new aws_apprunner.CfnService(
  this,
  "AppRunnerService",
  {
    sourceConfiguration: {
      autoDeploymentsEnabled: false,
      imageRepository: {
        imageRepositoryType: "ECR",
        imageIdentifier: appRunnerContainerImage.imageUri,
        imageConfiguration: {
          port: "8080",
          runtimeEnvironmentVariables: [
            {
              name: "AWS_SECRET_NAME",
              value: dbSecrets.secretName
            },
            {
              name: "AWS_REGION",
              value: props.env?.region,
            },
            {
              name: "DATABASE_NAME",
              value: databaseName,
            },
          ],
        },
      },
      authenticationConfiguration: {
        accessRoleArn: appRunnerServiceRole.roleArn,
      },
    },
    healthCheckConfiguration: {
      protocol: "HTTP",
      interval: 5,
      healthyThreshold: 1,
      path: "/",
      timeout: 5,
      unhealthyThreshold: 3,
    },
    networkConfiguration: {
      egressConfiguration: {
        egressType: "VPC",
        vpcConnectorArn: appRunnerVpcConnector.attrVpcConnectorArn,
      },
    },
    serviceName: Stack.of(this).stackName,
    instanceConfiguration: {
      instanceRoleArn: appRunnerInstanceRole.roleArn,
    },
  }
);

We have some environment variables set in the configuration, and you may notice that we pass the name of the secret we created for the database and stored in Secrets Manager. As mentioned above, we need it to know which secret to reference when making the call in our code. Last but not least, the ECR image is built automatically by CDK using the Dockerfile in the src/app directory and pushed to the registry, so App Runner can fetch it:

// Build a container image and push to ECR
const appRunnerContainerImage = new ecrAssets.DockerImageAsset(
  this,
  "ECRImage",
  {
    directory: "src/app",
  }
);

Deployment

The whole application stack including infrastructure components can be deployed with a single command: cdk deploy --require-approval never
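
If you don't want to look up the endpoint in the console afterwards, the generated service URL can also be exported as a stack output via the attrServiceUrl attribute of the L1 construct:

import { CfnOutput } from "aws-cdk-lib";

// Prints the generated App Runner endpoint after cdk deploy
new CfnOutput(this, "AppRunnerServiceUrl", {
  value: `https://${appRunnerService.attrServiceUrl}`,
});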

It will take a couple of minutes for CloudFormation to finish the stack creation. Afterwards, you can go into the App Runner UI and see the running service:

Main page of App Runner

There are some more pages where you can see application logs and metrics. You can also view and edit the configuration that was created by the AWS CDK:

Source, Deployment and Service Settings

Auto Scaling, Health Checks, Security and Networking

As you can see, the options provided are far fewer than what you can configure with comparable services like ECS or EKS. So there is a tradeoff between simplicity and available features, and you have to decide what matches your requirements.

Summary

That's it! In this walkthrough we built an App Runner service that connects into a VPC with resources running in private subnets. Remember that we are not limited to RDS here; I simply chose it to provide a functional working example. The use cases are endless, whether that's talking to ECS containers registered in AWS Cloud Map, an ElastiCache cluster, a Kubernetes service, or any other resource that resides in the VPC.

In general, I like the App Runner approach and think it can greatly simplify the creation of serverless, containerized services on AWS Fargate. Running a load test with 500 servers in parallel showed the performance is pretty neat and autoscaling works as expected:

Benchmark on Loadster.app

What do you think of AWS App Runner? Let me know in the comments and see you in part 2 of the series with Hexagonal Architecture on Lambda (coming soon)!
