Introduction
Deploying a Go Echo application to AWS Lambda allows you to leverage a serverless architecture: no server management, auto-scaling, and pay-per-use pricing. Traditionally, Echo apps run as HTTP servers on their own ports, but Lambda functions are invoked by events (like API Gateway HTTP requests). In this guide, we'll integrate the Echo framework with Lambda's API Gateway event trigger, enabling your existing routes to run on Lambda with minimal code changes. We assume you already have a Go Echo app (or at least basic familiarity with Echo) and focus on adapting it for serverless deployment.
Running Echo on Lambda brings the benefits of quick scaling and minimal idle cost. Thanks to Go's fast startup and the efficiency of Echo, the performance overhead is small; Go is among the fastest AWS Lambda runtimes in terms of cold starts and execution speed. We will, however, highlight some cold start considerations and tips to keep your function responsive. Let's dive into the steps: modifying the application code, containerizing the app for Lambda, deploying to AWS, and testing the serverless Echo application.
Adapting an Echo Application for AWS Lambda
The key to running Echo on Lambda is to translate incoming Lambda events (from API Gateway) into HTTP requests that Echo can understand, and vice versa for responses. AWS provides a convenient library for Go, aws-lambda-go-api-proxy, which includes an adapter for the Echo framework. This adapter handles the conversion between API Gateway events and Echo's request/response objects. Using this library, you can preserve your existing Echo routes and middleware; the adapter will funnel API Gateway calls through your Echo router.
1. Installing the Lambda Echo adapter
Begin by adding the AWS Lambda libraries to your Go module. In your project, run:
```bash
go get github.com/aws/aws-lambda-go/lambda
go get github.com/awslabs/aws-lambda-go-api-proxy/echo
```
The first package (aws-lambda-go/lambda) is essential: it implements the Lambda runtime interface for Go. Including this package (and calling lambda.Start in your code, as we'll do) makes your Go binary capable of receiving events from the Lambda service. The second package is the Echo adapter provided by AWS Labs, which we'll use to bridge API Gateway events to the Echo framework.
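To see what the runtime package does by itself, here is a minimal sketch of a self-contained Go Lambda function; the handler body and its return value are purely illustrative:

```go
package main

import "github.com/aws/aws-lambda-go/lambda"

func main() {
	// lambda.Start blocks for the lifetime of the process: it polls the
	// Lambda runtime API for invocations and dispatches each event to the
	// supplied handler function.
	lambda.Start(func() (string, error) {
		return "hello from Lambda", nil
	})
}
```

In the next step we pass a real handler to lambda.Start, one that forwards API Gateway events into Echo.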
2. Minimal code changes in main.go
With the libraries in place, you only need to add a small amount of code to initialize the adapter and start the Lambda function. Below is an example of how to modify your main.go:
```go
package main

import (
	"context"
	"os"

	"github.com/labstack/echo/v4"

	// Lambda event and adapter packages
	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	echoadapter "github.com/awslabs/aws-lambda-go-api-proxy/echo"
)

var echoLambda *echoadapter.EchoLambdaV2 // adapter for API Gateway HTTP API events

func main() {
	// 1. Initialize your Echo instance and routes as usual
	e := echo.New()
	// ... (register routes, middleware, etc.)
	// e.GET("/hello", handlerFunc) for example

	// 2. Detect if running in AWS Lambda environment
	if os.Getenv("AWS_LAMBDA_FUNCTION_NAME") != "" {
		// We are in Lambda, so do not start Echo in the usual way.
		// Initialize the Echo adapter for API Gateway V2 (HTTP API)
		echoLambda = echoadapter.NewV2(e)
		// Start the Lambda event processing loop with our handler function
		lambda.Start(handler)
	} else {
		// Not in Lambda (running locally or in another environment), start Echo normally
		e.Logger.Fatal(e.Start(":8080"))
	}
}

// 3. Lambda handler function for API Gateway HTTP API (v2) events
func handler(ctx context.Context, req events.APIGatewayV2HTTPRequest) (events.APIGatewayV2HTTPResponse, error) {
	// Proxy the incoming API Gateway request to the Echo instance and return the response
	return echoLambda.ProxyWithContext(ctx, req)
}
```
Let's break down the changes:
- We declare a package-level echoLambda variable of type *echoadapter.EchoLambdaV2. This will hold the adapter that connects API Gateway HTTP API events to our Echo *echo.Echo router.
- In main(), after setting up the Echo instance e and defining all your routes and middleware, we check for an environment variable AWS_LAMBDA_FUNCTION_NAME. AWS sets this variable (the function name) for processes running on Lambda, so it's a reliable way to detect the Lambda runtime. If this variable is present, it means the code is running as a Lambda function.
- When on Lambda, we initialize the adapter: echoadapter.NewV2(e) wraps our Echo instance. This adapter knows how to convert API Gateway V2 events into standard HTTP requests that Echo can handle. We then call lambda.Start(handler). This instructs the AWS Lambda Go runtime to start receiving events and pass them to the specified handler function.
- The handler function we provide matches the expected signature for API Gateway HTTP API events (events.APIGatewayV2HTTPRequest -> events.APIGatewayV2HTTPResponse). Inside, it simply delegates to echoLambda.ProxyWithContext(ctx, req), which does the work of converting the event to an http.Request, routing it through Echo, and capturing the Echo response to convert back to an API Gateway response.
- If the environment variable is not set (meaning we are running the app locally or in a non-Lambda context), we fall back to the normal Echo startup: e.Start(":8080") to run an HTTP server on port 8080. This dual-mode setup is very useful for testing and for gradually migrating existing applications; you can still run the app normally, and when deployed to Lambda it will automatically switch to the event handling mode.
How it works: When the function runs on Lambda, the call to lambda.Start(handler) never returns; it will continuously loop, waiting for incoming events (invocations) from the Lambda service and passing them to your handler. The echoLambda adapter, having been initialized with your Echo router, will handle each request. All your defined routes, middleware, and handlers will work as usual; for example, an HTTP GET /hello request from API Gateway will be translated to an Echo context and trigger the same "/hello" route handler as it would on a normal server.
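To make this concrete, here is a hedged sketch of exercising the adapter locally with a synthetic HTTP API event; the /hello route, the test file, and the event fields shown are illustrative additions, not part of the app above:

```go
package main

import (
	"context"
	"net/http"
	"testing"

	"github.com/aws/aws-lambda-go/events"
	"github.com/labstack/echo/v4"
	echoadapter "github.com/awslabs/aws-lambda-go-api-proxy/echo"
)

func TestHelloViaAdapter(t *testing.T) {
	// Build the same kind of router the app builds, plus a sample route.
	e := echo.New()
	e.GET("/hello", func(c echo.Context) error {
		return c.String(http.StatusOK, "hello")
	})
	adapter := echoadapter.NewV2(e)

	// A minimal synthetic API Gateway HTTP API (v2) event.
	req := events.APIGatewayV2HTTPRequest{
		RawPath: "/hello",
		RequestContext: events.APIGatewayV2HTTPRequestContext{
			HTTP: events.APIGatewayV2HTTPRequestContextHTTPDescription{
				Method: "GET",
				Path:   "/hello",
			},
		},
	}

	resp, err := adapter.ProxyWithContext(context.Background(), req)
	if err != nil || resp.StatusCode != http.StatusOK || resp.Body != "hello" {
		t.Fatalf("unexpected response: %+v (err: %v)", resp, err)
	}
}
```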
Choosing API Gateway V1 or V2: In our code, we used EchoLambdaV2 and the APIGatewayV2HTTPRequest event type, which correspond to API Gateway's HTTP API (the newer, simpler, lower-latency version of API Gateway introduced by AWS). If instead you plan to use the older REST API (API Gateway v1), the adapter library also provides EchoLambda (without V2) for the v1 events. You'd import and use events.APIGatewayProxyRequest/Response and call echoadapter.New(e) accordingly. However, we recommend using HTTP APIs in most cases, as they are cheaper and have lower overhead. Just ensure that when creating your API Gateway, you choose HTTP API so that the events match the types expected by our handler. (The code above is for HTTP APIs.)
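For completeness, a minimal sketch of the v1 (REST API) wiring, following the same structure as our main.go:

```go
package main

import (
	"context"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/labstack/echo/v4"
	echoadapter "github.com/awslabs/aws-lambda-go-api-proxy/echo"
)

var echoLambdaV1 *echoadapter.EchoLambda // adapter for REST API (v1) events

func handlerV1(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	// Same pattern as before, but with the v1 event and response types.
	return echoLambdaV1.ProxyWithContext(ctx, req)
}

func main() {
	e := echo.New()
	// ... register routes and middleware as usual ...
	echoLambdaV1 = echoadapter.New(e)
	lambda.Start(handlerV1)
}
```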
Initialization at cold start: It's worth noting that we set up the Echo instance (e := echo.New(), route definitions, etc.) outside of the handler, in the main() function (and, by extension, within the Lambda environment before calling lambda.Start). This means all the initialization (setting up routes, connecting to databases if any, etc.) happens during the Lambda function's cold start, and the initialized Echo router is reused for subsequent invocations. The global echoLambda holds the state between calls. This is important for performance: you wouldn't want to rebuild your entire router on every invocation. By doing it once, subsequent events can be handled faster (just routing to the already-defined handlers). In fact, you could even move the Echo initialization to a global init() function to ensure it runs at import time. The point is to perform expensive setup only once per execution environment.
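As a sketch of that init() variant (dropping the local dual-mode branch for brevity), the one-time setup could look like this:

```go
package main

import (
	"context"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/labstack/echo/v4"
	echoadapter "github.com/awslabs/aws-lambda-go-api-proxy/echo"
)

var echoLambda *echoadapter.EchoLambdaV2

// init runs exactly once per execution environment, before main(),
// so the router and adapter are built a single time per cold start.
func init() {
	e := echo.New()
	// ... register routes, middleware, database clients, etc. ...
	echoLambda = echoadapter.NewV2(e)
}

func main() {
	lambda.Start(func(ctx context.Context, req events.APIGatewayV2HTTPRequest) (events.APIGatewayV2HTTPResponse, error) {
		return echoLambda.ProxyWithContext(ctx, req)
	})
}
```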
At this stage, we have modified our application to be Lambda-compatible without disturbing its normal operation. If you run this binary on your local machine, it will start a web server on 8080 as always. If you run it in AWS Lambda, it will not open a socket; instead it will use the Lambda API to handle incoming events. With code changes done, let's move on to packaging this app into a container image for deployment.
Containerizing with a Multi-Stage Dockerfile
AWS Lambda supports deploying functions as container images up to 10 GB in size. We will use a multi-stage Docker build to compile our Go Echo application and package it in a lean image based on AWS's official Lambda base. Using a multi-stage Dockerfile ensures that the final image only contains the compiled binary and the minimal runtime environment needed, keeping the image small and efficient. Below is the Dockerfile, broken into two stages:
```dockerfile
# Stage 1 - Build the Go binary
FROM golang:1.24-alpine AS build

# Install any needed packages (tzdata for timezone support, git for fetching modules)
RUN apk --no-cache add tzdata git

WORKDIR /var/www/src

# Bring in the Go module files and source code
COPY . ./

# (Optional) fetch any direct dependencies (if not already in go.mod)
RUN go get
# Download Go module dependencies
RUN go mod download

# Build the binary:
# CGO_ENABLED=0 yields a statically linked binary that runs on the Amazon Linux base image.
# Use -ldflags to inject the Git commit as version (GIT_COMMIT is passed during build).
ARG GIT_COMMIT
RUN CGO_ENABLED=0 go build -o ./exec -ldflags="-X 'main.Version=$GIT_COMMIT'"

# Stage 2 - Create the deployment image for Lambda
FROM public.ecr.aws/lambda/provided:al2023

# Copy the binary from the build stage
COPY --from=build /var/www/src/exec ./exec

# Set the Lambda entry point to our binary executable
ENTRYPOINT [ "./exec" ]
```
Let's explain what's happening here:
- Stage 1 (Build Stage): We use the official Go 1.24 Alpine image as the build environment. Alpine is lightweight and includes the Go compiler. We add tzdata (if our app deals with timezones) and git (often required for go get or fetching private modules). We set a working directory and copy our source code into the image. Then we run go mod download to fetch dependencies. Finally, we compile the Go program with go build, setting CGO_ENABLED=0 so the binary is statically linked and runs on the Amazon Linux base image even though it was built on Alpine. We name the output binary exec for convenience. The -ldflags="-X 'main.Version=$GIT_COMMIT'" part is optional: it injects a Version variable (defined in our Go code's main package) with the Git commit hash, which is a nice way to embed version info into the binary (see the sketch after this list). The actual $GIT_COMMIT value can be passed during the docker build command (using --build-arg GIT_COMMIT=$(git rev-parse HEAD), for example). If you don't need that, you can simplify to RUN go build -o exec . or similar.
- Stage 2 (Runtime Stage): We start from public.ecr.aws/lambda/provided:al2023, which is Amazon's Amazon Linux 2023 Lambda base image. This base image includes the necessary components for Lambda's custom runtime API. In other words, it's an empty Lambda environment that will run our binary as the function. We copy the compiled exec binary from the build stage into the root of this image. We then set the ENTRYPOINT to ["./exec"]. This means when the Lambda service invokes our container, it will execute our exec binary. Because our binary was built with the aws-lambda-go/lambda package and calls lambda.Start (as we wrote in main.go), it will act as a Lambda-compatible executable. The AWS base image ensures the Runtime Interface Client (RIC) is present to coordinate between the function and the Lambda platform. (For reference, the provided.al2023 image expects your app to either have its own custom runtime interface or use the Lambda Go library, which handles it for you. We did the latter by using the Lambda Go SDK.)
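For the ldflags trick to work, the main package needs a matching package-level string variable; a minimal sketch (the log line is just for illustration):

```go
package main

import "log"

// Version is overwritten at build time via
//   go build -ldflags="-X 'main.Version=<commit>'"
// and falls back to "dev" for local builds.
var Version = "dev"

func main() {
	log.Printf("starting Echo Lambda, build %s", Version)
	// ... the rest of the startup from main.go above ...
}
```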
A few notes on this Docker setup:
- We used the AWS-provided base image for Go (provided:al2023). This is an "OS-only" image that doesn't include a language runtime (since our Go app is a self-contained binary) but has the Lambda API interface. Using this base is recommended by AWS for Go functions. (Alternatively, AWS offers similar base images for specific languages, and even provided.al2 for Amazon Linux 2. AL2023 is fine and supported until 2029.)
- By using multi-stage builds, our final image only contains the single binary and the minimal OS libraries needed. The Go compiler and source files from Stage 1 are not included in the final image, which keeps it small. This helps reduce the image download time at cold start and improves security (less attack surface).
- We didn't explicitly include a CMD in the Dockerfile; it's not needed here because the entrypoint alone suffices (Lambda will invoke the entrypoint). We also didn't set a WORKDIR in the runtime stage; by default, the working directory is root (/) and we copy the binary there. The ENTRYPOINT ["./exec"] will execute the binary from that location. (The Lambda base image might set LAMBDA_TASK_ROOT or similar, but in our simple case we can run from root.)
- If your Echo app requires any static files or templates, ensure they are either embedded in the binary (e.g., using Go embed; see the sketch after this list) or copied into the image. Typically, API-only apps won't need extra files. If you do have assets, copy them in Stage 2 and adjust your working directory as needed.
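As a hedged illustration of the embed option, assuming your repository has a templates/ directory (the directory name and renderer type are illustrative):

```go
package main

import (
	"embed"
	"html/template"
	"io"

	"github.com/labstack/echo/v4"
)

// Compile every file under templates/ into the binary, so the Lambda
// image needs no extra COPY step for them.
//
//go:embed templates/*
var templateFS embed.FS

// embedRenderer adapts html/template to Echo's Renderer interface.
type embedRenderer struct{ t *template.Template }

func (r *embedRenderer) Render(w io.Writer, name string, data interface{}, c echo.Context) error {
	return r.t.ExecuteTemplate(w, name, data)
}

func newRenderer() *embedRenderer {
	return &embedRenderer{t: template.Must(template.ParseFS(templateFS, "templates/*"))}
}
```

During startup you would then set e.Renderer = newRenderer() before registering routes.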
Performance and Cold Start Considerations
When moving a web app to AWS Lambda, it's important to understand the cold start behavior and overall performance implications. Here are key points and best practices:
- Cold starts: A cold start occurs when a new Lambda instance (container) is spun up to handle a request. This involves provisioning the runtime, initializing your code (running main() and global inits), etc. Go-based Lambdas are generally very fast to cold start compared to languages like Java or .NET, usually on the order of tens of milliseconds up to a few hundred milliseconds, depending on the binary size and initialization work. In fact, Go and Rust are often cited as having the smallest cold start times among common runtimes. Echo itself is a lightweight framework, so it doesn't add much overhead on startup beyond registering routes.
- What contributes to cold start time? For our Echo app, cold start includes the time to load the container and execute the main() function: creating the Echo instance, setting up routes, and any other startup logic (e.g., establishing a database connection, reading config). The Docker image size can also play a role: a larger image might take slightly longer for AWS to download and start. Our image is fairly small (the Alpine-built binary plus the AWS base, likely under ~50 MB depending on your app), which is good. Using the multi-stage build and minimal base helps keep this small.
- Reuse of instances (warm starts): After a cold start, the Lambda may keep the instance alive for subsequent requests (a "warm" invocation). In warm invocations, the Echo app is already initialized, so handling a request is just running through your handlers. This should be as fast as running on a regular server for the most part. AWS typically reuses instances for a while (several minutes) before retiring them, if traffic is steady. According to AWS, in practice under 1% of invocations are cold starts for typical workloads, though if your traffic is very sporadic you might see more cold starts.
- Concurrent requests: One difference from running Echo on your own server is that a single Lambda instance handles one request at a time (per concurrent execution). With an Echo HTTP server, you could serve many requests concurrently in one process using goroutines. In Lambda, if 10 requests come in simultaneously, AWS will spin up 10 separate instances (assuming you've allowed that concurrency). Each instance will handle one request at that moment. This means your app scales automatically, but it also means you might encounter multiple cold starts if many instances need to spin up at once. The benefit is each instance has the full CPU for one request, often leading to fast processing for each request. The downside is if your app isn't busy enough to keep those instances around, you pay the penalty of cold starts for bursts of traffic.
- Provisioned Concurrency: If you have a latency-sensitive endpoint that must always respond quickly, you might consider Lambda's Provisioned Concurrency feature. This keeps a number of instances warm and ready to handle requests, eliminating cold start latency at the cost of a constant hourly charge. For example, keeping 1 or 2 instances provisioned during business hours can ensure low latency for a public API. With Go and Echo, you likely won't need this unless you have strict sub-100 ms latency requirements on the first request.
- Tuning memory size: AWS Lambda's performance is tied to the memory setting: higher memory gives more CPU. Our function might run fine in 128 MB, but if your Echo app does CPU-intensive work, you can allocate more memory (which linearly increases available CPU). This can also reduce cold start duration, because more CPU speed means faster initialization. There's a trade-off with cost, as more memory means a higher cost per millisecond. A common approach is to benchmark your function at different memory levels to see where the best price-performance lies.
- Optimize initialization: Since cold start includes running your global initialization, make sure to optimize that path. Lazy-load things if possible or use lightweight clients. For instance, if you connect to a database in main(), that will add to cold start time. Sometimes it's better to initialize clients on first use rather than at start, depending on your use case (see the lazy-initialization sketch after this list). In our example, we set up routes and such, which is usually very fast (microseconds); Echo's startup is not heavy. If you have any heavy computations (like loading large config files, ML models, etc.), consider moving those to on-demand or using something like Lambda Layers (for large assets) to reduce init time.
- aws-lambda-go-api-proxy overhead: The adapter we use does add a slight overhead for translating the event to an HTTP request. It constructs an http.Request from the JSON event, and after Echo handles it, it constructs the response event. This overhead is usually quite small (microseconds to a few milliseconds). If ultimate performance is needed, one could write a custom handler that avoids this translation and uses the events structs directly, but then you'd lose the advantage of reusing the Echo framework. In most cases, the convenience is well worth the tiny overhead. In fact, the approach of using this proxy is common and recommended for porting existing apps. The alternative AWS provides (discussed next) uses a similar principle but at the container level.
- AWS Lambda Web Adapter (alternative approach): AWS also provides an extension called the Lambda Web Adapter, which can simplify running web frameworks on Lambda. With it, you don't even need to modify your code to call lambda.Start or use the proxy library: you can run Echo as if it were just listening on a port (8080), and the adapter (running as a Lambda extension) will capture requests and route them to your web server automatically. In our case, we've already done the integration manually, but it's good to know such an option exists. The web adapter might slightly increase cold start (since it's an additional layer to load), and currently you'd need to include the adapter binary in your image, but it eliminates the need to write a custom handler. If you were starting from scratch or integrating a very complex web app, the adapter is worth looking into. (It supports Echo, Gin, Chi, etc., and works with API Gateway, Function URLs, or ALB similarly.) In summary, the awslabs proxy library and the AWS web adapter achieve similar goals; one is in-process (a code library) and the other is an external extension.
- Cost considerations: Running on Lambda means you're billed per execution time and memory. Go is efficient and typically will handle requests quickly. If your Echo app was running on an EC2 instance or Kubernetes constantly, you might save costs by going serverless, especially if requests are infrequent. However, if you have consistently high load, at some point a constantly warm Lambda could cost more than a stable server; you'd have to analyze your traffic. The benefit is you get automatic scaling and zero management. Also remember that API Gateway (if used) has its own cost per request (though HTTP API is cheaper than REST API). If cost is a concern for high volume, you could consider an ALB (Application Load Balancer) as a trigger, since ALB's pricing model is different (per LCU-hour, etc.) and may work out cheaper at high throughput. But for most moderate uses, the difference is minor.
- Logging and monitoring: Echo logs to stdout by default, which ends up in CloudWatch Logs for each Lambda invocation. Be mindful of logging too much (as it can slow things down and incur costs). You can use structured logging and log only essential info, as needed. AWS X-Ray can be used if you need tracing; you'd have to integrate the X-Ray SDK with Echo (or at least capture the handler execution). This is an advanced topic beyond this guide, but it's possible to enable X-Ray for deeper performance analysis.
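Picking up the lazy-initialization point from the list above, here is a minimal sketch using sync.Once so an expensive client is created on first use instead of at cold start; the driver name and DSN are hypothetical placeholders:

```go
package main

import (
	"database/sql"
	"sync"
	// A real app must also blank-import a driver, e.g. _ "github.com/lib/pq".
)

var (
	dbOnce sync.Once
	db     *sql.DB
	dbErr  error
)

// getDB opens the connection on the first request that needs it and
// reuses it for every later invocation in this execution environment.
func getDB() (*sql.DB, error) {
	dbOnce.Do(func() {
		db, dbErr = sql.Open("postgres", "postgres://user:pass@host/db") // placeholder DSN
	})
	return db, dbErr
}
```

Handlers call getDB() instead of relying on a connection dialed in main(), so cold starts stay fast even when the database is rarely needed.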
In summary, the performance of Go Echo on Lambda is generally excellent. Cold starts are minimal (especially compared to heavier runtimes); as one independent benchmark noted, "All languages (except Java and .NET) have a pretty small cold start," with Go being a top performer. Warm execution of requests should be on par with running on a dedicated server with the same CPU power. The main things to watch are occasional cold start spikes and ensuring your app is stateless (which it should be; Echo by itself doesn't store global state per request, so it's naturally stateless across invocations).
Conclusion
We've shown how you can deploy a Go Echo framework application to AWS Lambda with only minor adjustments. By using the AWS Lambda Go API Proxy adapter, you can preserve your existing Echo routes and middleware, making the move to serverless relatively straightforward. We covered writing a small Lambda bootstrap in main.go to initialize the adapter (only a few lines of code) and using a multi-stage Dockerfile to produce a lean container image for Lambda. After pushing the image to ECR and creating a Lambda function, your Echo app runs in the cloud without servers; each request triggers your code via API Gateway or a Lambda URL.
This approach lets you consolidate what might have been multiple microservice endpoints into a single Lambda function (since Echo can route different paths internally), which can simplify deployment. (In one case study, developers combined routes under one Echo Lambda and avoided deploying many separate functions.) Keep in mind the trade-offs: a single Lambda handling many routes means all routes scale together, which is usually fine for a coherent API.
We also discussed how to optimize for performance. With Go, cold starts are fast, but it's still wise to minimize cold start impact by doing one-time initialization and possibly leveraging provisioned concurrency for mission-critical low-latency needs. Monitor your function's memory usage and execution time with CloudWatch; you might find that increasing memory reduces runtime enough to be cost-effective (since you pay for time, a faster execution might offset the higher memory cost).
With your Echo app successfully running on Lambda, you can enjoy the benefits of a serverless architecture: automatic scaling, high availability, and no servers to maintain or patch. Development remains almost the same as writing a normal Echo web service, which means you retain productivity and familiarity. For further enhancements, you could integrate other AWS services (e.g., DynamoDB, S3, etc.) by calling their SDKs within your handlers; the Lambda environment will allow outbound calls to AWS services or the internet as configured by your function's role and VPC settings.
Finally, remember to handle things like timeouts and retries appropriately (API Gateway might retry on errors, etc.), and return proper HTTP responses through Echo for various conditions. Since Echo is fully in charge of HTTP-level behavior, you have flexibility to use its middleware (like authentication, CORS middleware, etc.) as you would normally. Those will all work in the Lambda context as they would on a standalone server.
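For example, a hedged sketch of wiring Echo's stock logger and CORS middleware (the allowed origin is illustrative); these behave identically behind API Gateway:

```go
package main

import (
	"net/http"

	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

func newApp() *echo.Echo {
	e := echo.New()

	// Standard Echo middleware works unchanged under the Lambda adapter.
	e.Use(middleware.Logger())
	e.Use(middleware.CORSWithConfig(middleware.CORSConfig{
		AllowOrigins: []string{"https://example.com"}, // illustrative origin
		AllowMethods: []string{http.MethodGet, http.MethodPost},
	}))

	return e
}
```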
By following this guide, you've containerized a Go Echo application for AWS Lambda with minimal friction, enabling a scalable, serverless deployment. Happy coding, and enjoy your new serverless Echo setup!
References:
AWS Official Docs: Deploy Go Lambda functions with container images
aws-lambda-go-api-proxy (GitHub): Library used to adapt Echo (and other Go frameworks like Gin, Chi) to Lambda events.