Hey everyone! 👋
In this article, I’ll show how to easily create and run a NodeJS backend on AWS Lambda using the Architect (arc.codes) framework.
A few words about Architect
To be honest, my first impression of this framework was quite skeptical. I had never heard of it before, and it seemed to me that there were already plenty of more popular solutions out there. Another one — especially with such low npm installation stats — just didn’t seem necessary.
But after working with it for a while, I was genuinely surprised. Honestly, it’s one of the most convenient tools I’ve come across in my 11 years of development.
That said, a bit of skepticism still remains. I did encounter a few bugs that were clearly caused by a simple lack of maintenance, and that’s a red flag for me. Because of this, I wouldn’t recommend using it for huge projects. I’ll share one of these issues at the end, along with a simple workaround.
So..
Architect calls itself a framework (which I completely disagree with — it feels more like a library to me). It lets you define your AWS infrastructure as code while providing an excellent developer experience. It supports NodeJS, Python, and Deno, but I’ve only used it with NodeJS so far.
Alright, let’s get to the fun part.
I’ll show you how to build simple CRUD operations for a User entity in NodeJS, run the backend locally, and deploy it to AWS Lambda together with all the necessary infrastructure — including DynamoDB.
The beginning is pretty standard, so I won’t go into too much detail.
After installing Architect and initializing the project, we’ll install all dependencies needed for DynamoDB, AWS Lambda, and type definitions.
yarn init
yarn add -D @architect/architect
yarn add -D @architect/plugin-typescript
yarn add -D @aws-lite/dynamodb-types
yarn add -D @types/node
npx arc init
yarn add @architect/functions
yarn add @aws-sdk/client-dynamodb
yarn add @aws-sdk/lib-dynamodb
Let’s add all the necessary commands for running and deploying right away.
I want to highlight the yarn dev command here — it runs the sandbox script, which starts the local environment (including all dependencies) and can even populate the database with test data. I’ll show how that works a bit later.
{
  "scripts": {
    "dev": "npx arc sandbox",
    "deploy:production": "npx arc deploy --production",
    "deploy:staging": "npx arc deploy --staging"
  }
}
To start, I want to create a contract between the Architect API and my small application’s code.
For that, I only need a single type — Response — which defines the format of how the main application code communicates responses back to the framework.
Response:
export type Response<T> =
  | {
      code: 200,
      payload: T,
    }
  | {
      code: 500 | 404,
      payload: string,
    };
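Because Response<T> is a discriminated union on code, TypeScript narrows the payload type automatically once you check the status. A quick self-contained sketch (the describe helper is just my illustration, not part of the app):

```typescript
// Same Response<T> shape as above
type Response<T> =
  | { code: 200; payload: T }
  | { code: 500 | 404; payload: string };

// Checking `code` narrows `payload` in each branch
function describe<T>(res: Response<T>): string {
  if (res.code === 200) {
    // here payload is T
    return `ok: ${JSON.stringify(res.payload)}`;
  }
  // here payload is string (the error message)
  return `error ${res.code}: ${res.payload}`;
}
```

This is the whole point of the contract: controllers never throw across the boundary, they always hand the framework layer something it can pattern-match on.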
Since I prefer modular architecture and layering, I’ll create a folder modules/user with three subfolders to separate business logic from infrastructure.
domain — contains services and entities
User entity:
export type User = {
  id: string;
  name: string;
  email: string;
  createdAt: string;
  updatedAt: string;
};
UserService:
import type { User } from '../entity';

export interface IUserRepository {
  findById: (id: string) => Promise<User | undefined>;
  deleteById: (id: string) => Promise<void>;
  create: (userData: Omit<User, 'id' | 'createdAt' | 'updatedAt'>) => Promise<User>;
  update: (userData: Omit<User, 'createdAt' | 'updatedAt'>) => Promise<void>;
}

export class UserService {
  private repository: IUserRepository;

  constructor(repository: IUserRepository) {
    this.repository = repository;
  }

  public async getUserById(id: string): Promise<User | undefined> {
    return this.repository.findById(id);
  }

  public async createUser(userData: Omit<User, 'id' | 'createdAt' | 'updatedAt'>): Promise<User> {
    return this.repository.create(userData);
  }

  public async updateUser(userData: Omit<User, 'createdAt' | 'updatedAt'>): Promise<void> {
    return this.repository.update(userData);
  }

  public async deleteUserById(id: string): Promise<void> {
    return this.repository.deleteById(id);
  }
}
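A nice side effect of injecting IUserRepository is testability: the service can be exercised against an in-memory repository without touching DynamoDB at all. A rough, self-contained sketch under that assumption (InMemoryUserRepository is my own test double, not part of Architect; the types are trimmed copies of the ones above):

```typescript
import crypto from 'node:crypto';

// Trimmed copies of the article's types, so the sketch is self-contained
type User = { id: string; name: string; email: string; createdAt: string; updatedAt: string };

interface IUserRepository {
  findById: (id: string) => Promise<User | undefined>;
  deleteById: (id: string) => Promise<void>;
  create: (userData: Omit<User, 'id' | 'createdAt' | 'updatedAt'>) => Promise<User>;
  update: (userData: Omit<User, 'createdAt' | 'updatedAt'>) => Promise<void>;
}

class UserService {
  constructor(private repository: IUserRepository) {}
  getUserById(id: string) { return this.repository.findById(id); }
  createUser(data: Omit<User, 'id' | 'createdAt' | 'updatedAt'>) { return this.repository.create(data); }
  deleteUserById(id: string) { return this.repository.deleteById(id); }
}

// Test double: keeps users in a Map instead of DynamoDB
class InMemoryUserRepository implements IUserRepository {
  private users = new Map<string, User>();

  async findById(id: string) { return this.users.get(id); }
  async deleteById(id: string) { this.users.delete(id); }
  async create(userData: Omit<User, 'id' | 'createdAt' | 'updatedAt'>) {
    const now = new Date().toISOString();
    const user: User = { id: crypto.randomUUID(), ...userData, createdAt: now, updatedAt: now };
    this.users.set(user.id, user);
    return user;
  }
  async update(userData: Omit<User, 'createdAt' | 'updatedAt'>) {
    const existing = this.users.get(userData.id);
    if (existing) this.users.set(userData.id, { ...existing, ...userData, updatedAt: new Date().toISOString() });
  }
}
```

With this in place, a unit test is just `new UserService(new InMemoryUserRepository())` and a few assertions; the real DynamoDB repository only ever shows up in the composition root.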
infrastructure — contains repositories for DynamoDB access
User Repository (this is where we connect to AWS DynamoDB; note the little “kung-fu” trick of a private constructor plus an asynchronous factory method for creating the class, since the database connection is also established asynchronously):
import crypto from 'node:crypto';
import arc from '@architect/functions';
import type { ArcDB } from '@architect/functions/types/tables';
import { User } from '../../domain/entity';
import { IUserRepository } from '../../domain/service';

type UserTable = {
  users: User;
};

export class UserDynamoDBRepository implements IUserRepository {
  private db: ArcDB<UserTable>;

  private constructor(db: ArcDB<UserTable>) {
    this.db = db;
  }

  public static async create() {
    const db = await arc.tables<UserTable>();
    return new UserDynamoDBRepository(db);
  }

  public async findById(id: string): Promise<User | undefined> {
    return this.db.users.get({ id });
  }

  public async deleteById(id: string): Promise<void> {
    await this.db.users.delete({ id });
  }

  public async create(userData: Omit<User, 'id' | 'createdAt' | 'updatedAt'>): Promise<User> {
    const id = crypto.randomUUID();
    const createdAt = new Date().toISOString();
    return this.db.users.put({
      id,
      ...userData,
      createdAt,
      updatedAt: createdAt,
    });
  }

  public async update(userData: Omit<User, 'createdAt' | 'updatedAt'>): Promise<void> {
    const { id, ...data } = userData;
    const updatedAt = new Date().toISOString();
    const payload = { ...data, updatedAt };
    const updateExpression = Object.keys(payload).map((key) => `#${key} = :${key}`).join(', ');
    const expressionAttributeNames = Object.fromEntries(
      Object.keys(payload).map((key) => [`#${key}`, key])
    );
    const expressionAttributeValues = Object.fromEntries(
      Object.entries(payload).map(([key, value]) => [`:${key}`, value])
    );
    await this.db.users.update({
      Key: { id },
      UpdateExpression: `set ${updateExpression}`,
      ExpressionAttributeNames: expressionAttributeNames,
      ExpressionAttributeValues: expressionAttributeValues,
    });
  }
}
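The update method builds the DynamoDB UpdateExpression dynamically from the payload keys, using #name/:value placeholders so that attributes like name (a DynamoDB reserved word) don’t break the expression. Pulled out as a standalone helper (my own refactoring, purely for illustration), the construction looks like this:

```typescript
// Builds the three pieces a DynamoDB update call expects from a flat payload.
// The #/: placeholders sidestep clashes with reserved words such as "name".
function buildUpdate(payload: Record<string, unknown>) {
  const keys = Object.keys(payload);
  return {
    UpdateExpression: `set ${keys.map((k) => `#${k} = :${k}`).join(', ')}`,
    ExpressionAttributeNames: Object.fromEntries(keys.map((k) => [`#${k}`, k])),
    ExpressionAttributeValues: Object.fromEntries(keys.map((k) => [`:${k}`, payload[k]])),
  };
}
```

For a payload like { name: 'John', updatedAt: '…' } this produces set #name = :name, #updatedAt = :updatedAt plus the matching name/value maps, which is exactly what the repository hands to this.db.users.update.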
presentation — contains controllers, which will later be invoked by AWS Lambda
User Controller:
import { Response } from '../../../common/presentation';
import { User } from '../../domain/entity';
import { UserService } from '../../domain/service';
import { UserDynamoDBRepository } from '../../infrastructure/repository/UserDynamoDBRepository';

export class UserController {
  private service: UserService;

  private constructor(service: UserService) {
    this.service = service;
  }

  public static async create() {
    const repository = await UserDynamoDBRepository.create();
    const service = new UserService(repository);
    return new UserController(service);
  }

  public async getUserById(id: string): Promise<Response<User>> {
    try {
      const user = await this.service.getUserById(id);
      if (!user) {
        return {
          code: 404,
          payload: 'User not found',
        };
      }
      return {
        code: 200,
        payload: user,
      };
    } catch (error) {
      return {
        code: 500,
        payload: 'Internal server error',
      };
    }
  }

  public async deleteUserById(id: string): Promise<Response<void>> {
    try {
      await this.service.deleteUserById(id);
      return {
        code: 200,
        payload: undefined,
      };
    } catch (error) {
      return {
        code: 500,
        payload: 'Internal server error',
      };
    }
  }

  public async createUser(userData: Omit<User, 'id' | 'createdAt' | 'updatedAt'>): Promise<Response<User>> {
    try {
      const user = await this.service.createUser(userData);
      return {
        code: 200,
        payload: user,
      };
    } catch (error) {
      return {
        code: 500,
        payload: 'Internal server error',
      };
    }
  }

  public async updateUser(userData: Omit<User, 'createdAt' | 'updatedAt'>): Promise<Response<void>> {
    try {
      await this.service.updateUser(userData);
      return {
        code: 200,
        payload: undefined,
      };
    } catch (error) {
      return {
        code: 500,
        payload: 'Internal server error',
      };
    }
  }
}
That’s it! We’ve finished the application code — now let’s integrate it into our framework.
Let’s create separate directories outside the modules for each CRUD operation: src/http/get-user, src/http/put-user, src/http/post-user, and src/http/delete-user.
The exact location of these functions doesn’t really matter — I just chose these folder names for my own organization.
src/http/get-user
import arc from '@architect/functions';
import type { HttpRequest, HttpResponse } from '@architect/functions/types/http';
import { UserController } from '../../modules/user/presentation/controller/UserController';

export const handler = arc.http.async(async (request: HttpRequest): Promise<HttpResponse> => {
  const { userId } = request.pathParameters;
  const controller = await UserController.create();
  const response = await controller.getUserById(userId);
  return {
    statusCode: response.code,
    json: {
      payload: response.payload,
    },
  };
});
src/http/post-user
import arc from '@architect/functions';
import type { HttpRequest, HttpResponse } from '@architect/functions/types/http';
import { UserController } from '../../modules/user/presentation/controller/UserController';

export const handler = arc.http.async(async (request: HttpRequest): Promise<HttpResponse> => {
  const { name, email } = request.body;
  const controller = await UserController.create();
  const response = await controller.createUser({ name, email });
  return {
    statusCode: response.code,
    json: {
      payload: response.payload,
    },
  };
});
I won’t include the code for the other functions, as they follow the same pattern.
So, here’s the most interesting part — the main configuration file is called app.arc. It can come in different formats, like YAML, if that’s more convenient for you, but I kept the default format.
In this file, we define the entire infrastructure configuration, including all routes (later, a separate AWS Lambda will be created for each), tables, the AWS region, and so on. You can find more details in the Architect docs, but this basic config is pretty simple.
Additionally, we specify plugins and my little esbuild.js hack, which I’ll explain a bit later.
@app
# App name
devto

@aws
# App region
region us-west-2
# This param is necessary for the architect/plugin-typescript plugin
runtime typescript

@plugins
# TypeScript support
architect/plugin-typescript

@typescript
# Additional build configuration
esbuild-config esbuild.js

# Routes
@http
/api/user/:userId
  method get
  src src/http/get-user
/api/user/:userId
  method put
  src src/http/put-user
/api/user/:userId
  method delete
  src src/http/delete-user
/api/user
  method post
  src src/http/post-user

# DynamoDB tables
@tables
users
  id *String
And that’s it! With such a simple configuration, we get a fully functioning infrastructure on AWS Lambda.
Let's try to run it locally
But first, let’s pre-populate our local database with some test users. To do this, we can simply create a JSON file named sandbox-seed.json in the project root, containing data for all the tables we need.
{
  "users": [
    {
      "id": "7e90d8f0-2c39-4d0c-8e5e-403a03ccda93",
      "email": "text@example.com",
      "name": "John Jonson",
      "createdAt": "2023-01-01T00:00:00.000Z",
      "updatedAt": "2023-01-01T00:00:00.000Z"
    }
  ]
}
...and...
yarn dev
✓ Sandbox @tables created in local database
✓ Sandbox @http (HTTP API mode / Lambda proxy v2.0 format / live reload) routes
get /api/user/:userId ................. src/http/get-user
get /* ................................ public/
post /api/user ......................... src/http/post-user
put /api/user/:userId ................. src/http/put-user
delete /api/user/:userId ................. src/http/delete-user
http://localhost:3333
Here it is — our test user from the seed file:
GET http://localhost:3333/api/user/7e90d8f0-2c39-4d0c-8e5e-403a03ccda93
HTTP/1.1 200 OK
{
  "payload": {
    "id": "7e90d8f0-2c39-4d0c-8e5e-403a03ccda93",
    "email": "text@example.com",
    "name": "John Jonson",
    "createdAt": "2023-01-01T00:00:00.000Z",
    "updatedAt": "2023-01-01T00:00:00.000Z"
  }
}
Now we can easily create, update, and delete users locally. It’s quite simple and convenient. By default, this mode comes with livereload, but that and many other settings are easy to configure.
But even more interesting is how easily we can deploy this to AWS from scratch.
Deploy
yarn deploy:production
App ⌁ devto
Region ⌁ us-west-2
Profile ⌁ default
Version ⌁ Architect 11.3.0
Compiled project in 0.157s
⚬ Deploy Creating new private deployment bucket: devto-cfn-deployments-b0eb2
⚬ Deploy Initializing deployment
✓ Deploy Generated CloudFormation deployment
✓ Deploy Deployed & built infrastructure
✓ Success! Deployed app in 78.558 seconds
https://rncj6atj15.execute-api.us-west-2.amazonaws.com
✨ Done in 80.92s.
..easy... The entire infrastructure was created from scratch in AWS, and it took just over a minute.
A few requests to check..
POST https://rncj6atj15.execute-api.us-west-2.amazonaws.com/api/user
{
  "email": "test@example.com",
  "name": "John Jonson"
}
HTTP/1.1 200 OK
{
  "payload": {
    "id": "83024338-db08-4756-a43d-04a36cb031d8",
    "email": "test@example.com",
    "name": "John Jonson",
    "createdAt": "2023-01-01T00:00:00.000Z",
    "updatedAt": "2023-01-01T00:00:00.000Z"
  }
}
GET https://rncj6atj15.execute-api.us-west-2.amazonaws.com/api/user/83024338-db08-4756-a43d-04a36cb031d8
HTTP/1.1 200 OK
{
  "payload": {
    "id": "83024338-db08-4756-a43d-04a36cb031d8",
    "email": "test@example.com",
    "name": "John Jonson",
    "createdAt": "2023-01-01T00:00:00.000Z",
    "updatedAt": "2023-01-01T00:00:00.000Z"
  }
}
PUT https://rncj6atj15.execute-api.us-west-2.amazonaws.com/api/user/83024338-db08-4756-a43d-04a36cb031d8
{
  "email": "john@example.com",
  "name": "John Jonson"
}
HTTP/1.1 200 OK
{
  "payload": {
    "id": "83024338-db08-4756-a43d-04a36cb031d8",
    "email": "john@example.com",
    "name": "John Jonson",
    "createdAt": "2023-01-01T00:00:00.000Z",
    "updatedAt": "2023-01-01T00:00:00.000Z"
  }
}
DELETE https://rncj6atj15.execute-api.us-west-2.amazonaws.com/api/user/83024338-db08-4756-a43d-04a36cb031d8
HTTP/1.1 200 OK
GET https://rncj6atj15.execute-api.us-west-2.amazonaws.com/api/user/83024338-db08-4756-a43d-04a36cb031d8
HTTP/1.1 404 Not Found
{
  "payload": "User not found"
}
The end.. happy end..
Conclusion: This is a very basic example showing the simplest capabilities of Architect. However, I’ve also used it with queues, WebSockets, and scheduled tasks — all just as easy as what’s described in this article. And, in my opinion, the most important point is that this framework fits nicely with DDD (Domain-Driven Design) patterns. Yes, in my example DDD is implemented very lightly, but it clearly shows that Architect doesn’t force you to deeply integrate its API into your application code — you can easily separate or replace it in the future. All in all, it’s a really cool tool! :)
P.S. And here’s the hack I mentioned earlier. In the version of Architect I used (specifically the @architect/plugin-typescript plugin), I found a bug related to support for an older version of the AWS library for DynamoDB.
In Architect’s code, there’s a check for the current NodeJS version. If it’s below 18, it tries to use the old AWS library, dynamically importing it with require(...). However, esbuild, which runs inside the @architect/plugin-typescript plugin, tries to bundle this import regardless of whether the condition for loading it is ever met.
To fix this, my custom esbuild configuration simply excludes the problematic package from bundling.

esbuild.js

module.exports = {
  // Mark the legacy SDK client as external so esbuild doesn't try to bundle it
  external: ['aws-sdk/clients/dynamodb'],
};
Top comments (6)
This is really solid! You explained Architect, DDD structure, and AWS deployment super clearly. Love the sandbox testing approach.
Quick question :- Did running it locally with Architect feel faster than deploying straight to AWS?
Yeah! The local startup takes only a few seconds, and livereload works instantly. The local sandbox has helped me many times when I needed to reproduce bugs from production. I just copied records from DynamoDB into sandbox-seed.json and reproduced the bug locally with the production data copy
This is a very interesting post, Dmitry. It highlights the power of abstraction frameworks like Architect.
For my own AWS serverless project (tarihasistani.com.tr), I chose a different path: I used Python and built the entire infrastructure from scratch using Terraform (API GW, Lambda, S3, CloudFront with OAC). That approach gave me maximum control, but it also came with a steep learning curve, especially for complex networking and IAM permissions.
Your article makes me wonder about the trade-offs. Do you think frameworks like Architect (which prioritize simplicity and convention) eventually hit a 'wall' when you need to configure very specific, low-level details (like advanced CloudFront policies or complex IAM roles)? Or does the speed of development they offer almost always outweigh the 'raw control' of using Terraform directly?
Hi, Oguzhan,
Thank you for your feedback and question!
I’m pretty sure that Architect and similar frameworks always have a ceiling, so I avoid using them for very large projects. The main reason is the configuration limits you mentioned. Even if they seem small now, frameworks always lag behind whenever the original API is updated, since they provide an abstraction over it.
In my experience, framework developers implement their own vision of how the abstraction should work. No matter how good they are now, over time they often understand less about the world outside the framework because they focus more on maintaining it than on the bigger ecosystem.
The projects where I used Architect weren’t huge, but they weren’t just demos or MVPs either. They were either small startups with potential to grow into medium-sized apps, or already medium-sized apps. For these kinds of projects, I think using Architect makes sense.
What I also like about it is that in the future, I can easily switch from Architect to something like Terraform, the AWS SDK, or plain Kubernetes if needed.