After four years as Product Manager for Jetpack Compose at Google, taking it from its first public announcement in 2019 through Early Access to general availability, I've learned that great developer tools are about removing friction and cognitive load. Now building serverless applications, I find the same principles apply: developers need tools that work for them, not against them.
TL;DR: When building serverless applications, we often struggle with maintaining consistency between our Lambda functions (application code) and CDK (infrastructure code). This post shows how to use TypeScript's type system to bridge these two worlds, creating a single source of truth for configuration and permissions. We'll explore practical patterns that help you catch errors at compile time, improve IDE support, and reduce the mental overhead of context-switching between infrastructure and application code.
Header image: "Ponte della Costituzione" by Ethan Rera on Flickr - CC BY-SA 2.0
Serverless development is a tale of two worlds. One minute you're deep in application code, crafting business logic in your Lambda functions, feeling productive and in the zone. The next minute you're jolted into infrastructure land, diving into CDK code to figure out why your permissions or environment variables aren't quite right.
I've spent the last three years building serverless applications, and I've felt this context-switching pain firsthand. That's why I want to share some patterns I've developed that help bridge these two worlds. In this post, we'll explore how to create a development experience where your tools work for you, not against you - particularly focusing on how to maintain type safety and confidence across both your application and infrastructure code.
The Challenge: Living in Two Worlds
Picture this: you're implementing a new feature in your Lambda function. The code is flowing, TypeScript is catching potential bugs, your IDE is helping with intelligent autocomplete. Life is good. Then you realize you need to add a new environment variable and grant access to a new AWS service.
Suddenly, you're in a completely different context. You're navigating your CDK infrastructure code, which lives by different rules. It runs at deployment time, not runtime. It's still TypeScript, but it's speaking a different language - one of Constructs, Props, and CloudFormation templates under the hood. Your IDE doesn't help much because the two codebases are completely separate, and you're back in a world of "find all".
In a typical serverless application, these two worlds need to stay synchronized:
- The Infrastructure World (CDK):
  - Lives in your infra/ directory
  - Defines the shape of your AWS resources
  - Controls permissions, environment variables, and configuration
  - Only comes alive during deployment
  - Mistakes here often only surface at runtime
- The Application World (Lambda):
  - Lives in your src/ directory
  - Contains your actual business logic
  - Runs in response to events in AWS
  - Needs to know about decisions made in the infrastructure world
  - Has to trust that the environment variables and permissions are correctly set
The gap between these worlds is where many of our daily frustrations come from. Type safety works great within each world, but breaks down at the boundaries. Your IDE can't tell you if an environment variable is missing from your CDK code when you're writing your Lambda function. It can't warn you if you've forgotten to grant permissions that your code needs.
Key Development Challenges
The disconnect between these worlds creates several critical challenges:
Configuration and Resource Access
- Environment variables must be defined in CDK but aren't type-checked in Lambda code
- Permissions are set in infrastructure but only validated during runtime
- Resource names and ARNs defined in CDK need to match what the Lambda expects
- Configuration errors only surface when the code actually runs
Development Workflow Impact
- Every infrastructure change requires mental context switching
- Simple typos in environment variable names can slip through to production
- Changes in CDK code can have non-obvious impacts on Lambda functionality
- IDE features like "Find References" stop working at the boundary between worlds
- More time spent verifying configurations than writing business logic
- Reduced confidence when making infrastructure changes
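To make these pitfalls concrete, here's a hypothetical snippet (the names ordersLambda, ordersTable, and the variable names are illustrative, not from a real project) showing the kind of bug that slips through today: the CDK code sets an environment variable under one name, the Lambda reads it under a slightly different one, and TypeScript is happy with both because process.env is just a string map.
// infra/orders.ts (hypothetical CDK snippet)
ordersLambda.addEnvironment("ORDERS_TABLE_NAME", ordersTable.tableName);

// src/orders.ts (hypothetical Lambda snippet)
const tableName = process.env.ORDER_TABLE_NAME; // typo ("ORDER" vs "ORDERS") compiles fine, but is undefined at runtime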
The Solution: Building a Bridge Between Worlds
The key to solving these issues is to create a single source of truth that both worlds can understand. Let's take the simple case where a lambda needs to open a connection to a database. Both the database and the lambda are created in CDK.
Infrastructure World (CDK)
First, we create a construct for the Database, with a grantAccess method that knows how to configure the lambda to access the database:
// infra/database.ts (CDK construct)
// Imports shown for completeness - adjust the paths to your project layout
import { Construct } from "constructs";
import { Function } from "aws-cdk-lib/aws-lambda";
import { config } from "../config";
import { EnvVarNames } from "../backend/envvars";

export class Database extends Construct {
  public grantAccess(target: Function) {
    // Set up environment variables for the Lambda
    target.addEnvironment(EnvVarNames.DB_NAME, `${config.stackPrefix}data`);
  }
}
Then when we create a lambda that needs this access:
// infra/mylambda.ts (CDK construct)
export class MyLambda extends Construct {
  private createLambda() {
    // ... create lambda and retrieve the database construct
    database.grantAccess(lambda);
  }
}
Application World (Lambda)
This code runs in AWS:
// src/mylambda.ts
import { DB_NAME } from "../../../backend/src/environment";
// Early validation of required environment variables
if (!DB_NAME) {
  throw new Error("DB_NAME is not defined in the environment");
}
// Now we can safely use the variable
const dbConnection = connectToDatabase(DB_NAME);
Shared Definitions
Create an enum for all the environment variables in your application:
// backend/envvars.ts (shared between CDK and Lambda)
export enum EnvVarNames {
  DB_NAME = "DB_NAME", // Name of the database to connect to
}
And a module that loads all the runtime values and exports them under the same names:
// backend/environment.ts (runtime values needed only by Lambda)
export const DB_NAME = process.env[EnvVarNames.DB_NAME];
You will need to keep these two files consistent, and this is where unit testing helps. Yes, it's an extra context switch into yet a third mental space, but you only have to do it once:
// backend/environment.test.ts
import * as environment from "./environment";
import { EnvVarNames } from "./envvars";
describe("Environment constants", () => {
Object.values(EnvVarNames).forEach((envVarName) => {
test(`should have a defined constant for ${envVarName}`, () => {
expect(environment).toHaveProperty(envVarName);
});
});
});
Even though these pieces of code run at completely different times and in different contexts, they remain in sync because they're referencing enums and constants that are in sync. This creates several benefits:
- Type safety across both worlds
- IDE support for finding references and refactoring
- Clear documentation of environment variable requirements
- Early detection of configuration issues
Note that we could avoid the environment.ts file entirely and just write const DB_NAME = process.env[EnvVarNames.DB_NAME]; inside the Lambda, but I find that this makes the code less testable and harder to read. Importing constants from environment.ts also ensures that we always use the same constant name wherever the value appears, which improves consistency and readability across the codebase.
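On the testability point, routing all configuration through environment.ts gives tests a single seam to mock. A minimal sketch, assuming Jest (the mocked value "test-db" is illustrative):
// src/mylambda.test.ts (minimal sketch, assuming Jest)
// jest.mock calls are hoisted, so the module under test sees the mocked values
jest.mock("../../../backend/src/environment", () => ({
  DB_NAME: "test-db",
}));

import { DB_NAME } from "../../../backend/src/environment";

test("configuration is read through environment.ts, so tests can override it in one place", () => {
  expect(DB_NAME).toBe("test-db");
});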
Real World Example: Atlas MongoDB Integration
Let's see how this pattern works in a real-world scenario: a MongoDB Atlas integration. We follow security best practices for database access: the Lambda authenticates with IAM, and the database is reachable only from our VPC's public IPs. We need to ensure a few things: that the Lambda can assume the correct IAM role, that it runs inside a VPC (the application has only one), and that all the required environment variables are set.
This example shows how to maintain type safety and clarity across both infrastructure and application code.
Project setup
Ensure that both your infrastructure code and your Lambda code can access the common definitions by referencing them:
// tsconfig.json, both for your lambda environment and cdk
"references": [{ "path": "../../backend" }]
Infrastructure World: The Atlas Construct
This CDK construct manages database access permissions and environment variables:
import { CfnOutput, Stack } from "aws-cdk-lib";
import {
  AccountPrincipal,
  Effect,
  PolicyStatement,
  Role,
} from "aws-cdk-lib/aws-iam";
import { Function } from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";
import { config } from "../config";
import { EnvVarNames } from "../../backend/src/model";

export class Atlas extends Construct {
  readWriteAccessRole: Role;

  constructor(scope: Construct, id: string) {
    super(scope, id);
    this.createRWAccessRole();
    new CfnOutput(this, "MongoDBAccessRole", {
      value: this.readWriteAccessRole.roleArn,
    });
  }

  createRWAccessRole() {
    this.readWriteAccessRole = new Role(this, "ReadWriteAccessRole", {
      assumedBy: new AccountPrincipal(Stack.of(this).account),
    });
  }

  public grantWrite(target: Function) {
    if (!target.isBoundToVpc) {
      throw new Error("function won't be able to access Atlas if not in a VPC");
    }

    console.log(`Granting assume role to ${target.grantPrincipal}`);
    this.readWriteAccessRole.grantAssumeRole(target.grantPrincipal);
    this.readWriteAccessRole.assumeRolePolicy?.addStatements(
      new PolicyStatement({
        actions: ["sts:AssumeRole"],
        effect: Effect.ALLOW,
        principals: [target.grantPrincipal],
      })
    );

    console.log(`Setting database access variables for ${target.grantPrincipal}`);
    target.addEnvironment(EnvVarNames.ATLAS_CLUSTER_NAME, config.atlasCluster);
    target.addEnvironment(
      EnvVarNames.ATLAS_ACCESS_ROLE,
      this.readWriteAccessRole.roleArn
    );
    target.addEnvironment(EnvVarNames.DB_NAME, `${config.stackPrefix}data`);
  }
}
Infrastructure World: Using the Atlas Construct
Here's how an Apollo Lambda construct gets its database access configured:
export class Apollo extends Construct {
  lambda: lambda.Function;
  endpoint: lambda.FunctionUrl;

  constructor(scope: Construct, id: string) {
    super(scope, id);
    const stack = Stack.of(this) as RibbitStack;
    const { atlas } = stack; // Get the Atlas construct from the stack

    const { apollo, apolloEndpoint } = this.createApolloLambda();

    // Grant database access to the Lambda
    atlas.grantWrite(apollo);

    this.lambda = apollo;
    this.endpoint = apolloEndpoint;

    new CfnOutput(this, "Apollo Endpoint", {
      value: this.endpoint.url ?? "",
    });
  }
}
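The createApolloLambda helper isn't shown here; the important detail is that the function must be placed in the VPC, otherwise grantWrite throws at synth time. Here's a rough sketch of what such a helper could look like. It's illustrative only: it assumes NodejsFunction from aws-cdk-lib/aws-lambda-nodejs, FunctionUrlAuthType from aws-cdk-lib/aws-lambda, a vpc property exposed on RibbitStack, and a src/apollo.ts entry point, none of which appear in the original code.
// infra/apollo.ts - illustrative sketch of createApolloLambda (not the actual implementation)
private createApolloLambda() {
  const stack = Stack.of(this) as RibbitStack;
  const apollo = new NodejsFunction(this, "ApolloLambda", {
    entry: "src/apollo.ts", // hypothetical entry point
    vpc: stack.vpc,         // assumed VPC on the stack; required, or grantWrite will throw
  });
  const apolloEndpoint = apollo.addFunctionUrl({
    authType: FunctionUrlAuthType.NONE,
  });
  return { apollo, apolloEndpoint };
}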
Application World: Lambda Runtime Code
And here's how the Lambda code uses these environment variables to connect to Atlas, assuming the correct IAM role and refreshing credentials as needed with the STSClient from @aws-sdk/client-sts:
import { STSClient, AssumeRoleCommand } from '@aws-sdk/client-sts';
import { MongoClient } from 'mongodb';
import { ATLAS_ACCESS_ROLE, ATLAS_CLUSTER_NAME } from "./environment";

// Early validation of required environment variables
if (!ATLAS_CLUSTER_NAME) {
  throw new Error("ATLAS_CLUSTER_NAME is not defined in the environment");
}
if (!ATLAS_ACCESS_ROLE) {
  throw new Error("ATLAS_ACCESS_ROLE is not defined in the environment");
}

let mongoClient: MongoClient | null = null;
let iamExpireMillis: Date = new Date();
const sts = new STSClient({});

async function getMongoClient() {
  // Check if we have a valid connection
  if (mongoClient && iamExpireMillis > new Date()) {
    return mongoClient;
  }

  try {
    // Step 1: Assume the IAM role for Atlas access
    console.log("Connecting to Atlas with IAM Auth");
    const { Credentials } = await sts.send(
      new AssumeRoleCommand({
        RoleArn: ATLAS_ACCESS_ROLE,
        RoleSessionName: "AccessAtlas",
      })
    );
    if (!Credentials || !Credentials.SecretAccessKey) {
      throw new Error("Failed to assume db access IAM role");
    }
    console.log(`Credentials expire at ${Credentials.Expiration}`);
    iamExpireMillis = Credentials.Expiration ?? new Date();

    // Step 2: Construct MongoDB connection URL with IAM credentials
    const { AccessKeyId, SessionToken, SecretAccessKey } = Credentials;
    const encodedSecretKey = encodeURIComponent(SecretAccessKey);
    const url = new URL(
      `mongodb+srv://${AccessKeyId}:${encodedSecretKey}@${ATLAS_CLUSTER_NAME}.mongodb.net`
    );

    // Step 3: Configure MongoDB connection parameters
    url.searchParams.set("authSource", "$external");
    url.searchParams.set(
      "authMechanismProperties",
      `AWS_SESSION_TOKEN:${SessionToken}`
    );
    url.searchParams.set("w", "majority");
    url.searchParams.set("retryWrites", "true");
    url.searchParams.set("authMechanism", "MONGODB-AWS");

    // Step 4: Establish connection
    console.log("Establishing connection to Atlas");
    const client = new MongoClient(url.toString());
    await client.connect();
    console.log("Successfully connected to Atlas");

    mongoClient = client;
    return mongoClient;
  } catch (error) {
    console.error("Failed to connect to Atlas:", error);
    throw error;
  }
}
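To close the loop, here's a hedged sketch of how a handler in the same module might use getMongoClient. The frogs collection and the response shape are illustrative, not part of the original code, and it assumes DB_NAME is imported from "./environment" alongside the other variables.
// Illustrative handler in the same module (collection name is hypothetical)
export const handler = async () => {
  const client = await getMongoClient();
  // DB_NAME comes from the same shared EnvVarNames enum the CDK construct populated
  const frogs = await client.db(DB_NAME).collection("frogs").find().toArray();
  return { statusCode: 200, body: JSON.stringify(frogs) };
};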
This real-world example demonstrates how to maintain consistency across the infrastructure/application divide:
- Early Validation in Both Worlds
  - CDK code validates VPC configuration at deploy time
  - Lambda code validates environment variables at startup
  - Type safety ensures consistency between both contexts
- Security Best Practices
  - IAM roles with minimal required permissions
  - VPC requirement enforced at infrastructure level
  - Proper role assumption setup in application code
  - Secure credential handling
- Developer Experience
  - Type-safe environment variable access
  - Clear error messages in both contexts
  - Centralized configuration
  - Connection reuse and expiration handling
Conclusion
At its core, this pattern isn't about type safety or configuration management - it's about making your development experience smoother and more productive. When we talk about developer experience in serverless applications, we're really talking about reducing the friction of working across these two worlds.
Think about your daily development flow. How much mental energy do you spend:
- Tracking down which environment variables are needed for a new Lambda?
- Double-checking if you've configured all the permissions correctly?
- Wondering if your infrastructure changes might break something in production?
- Context-switching between your CDK and Lambda code?
The patterns we've explored directly address these daily challenges. By creating a single source of truth and leveraging TypeScript's type system, we:
- Reduce Mental Load: Focus on solving business problems instead of keeping track of configuration details. The type system remembers for you.
- Enable Flow: Spend more time in flow state and less time context-switching between different parts of your application.
- Maintain Control: Having strong typing across both worlds means you can move fast without breaking things.
Yes, we get technical benefits like reduced runtime errors and better maintainability. But the real measure of success is in your daily development experience - those moments when you can make changes confidently, when your IDE helps you discover what you need to know, when you can focus on solving interesting problems instead of fighting with configuration.
After all, infrastructure code is more than just YAML and TypeScript - it's a critical part of your application that you interact with every day. Making it more intuitive and safer to work with isn't just about preventing errors; it's about creating an environment where you can do your best work.
This post is part of an ongoing exploration of AWS serverless development best practices, with a particular focus on enhancing the developer experience. I believe that building serverless applications should be both powerful and enjoyable, and I'm passionate about sharing techniques that make that possible. Follow me for more articles on infrastructure design, developer workflow improvements, and other aspects of creating effective serverless solutions.
Do you have any tips on improving Serverless DX? Please share them below!