Hey everyone!
In this article, I'll show how to easily create and run a backend on NodeJS in AWS Lambda using the Architect (arc.codes) framework.
A few words about Architect
To be honest, my first impression of this framework was quite skeptical. I had never heard of it before, and it seemed to me that there were already plenty of more popular solutions out there. Another one, especially with such low npm installation stats, just didn't seem necessary.
But after working with it for a while, I was genuinely surprised. Honestly, it's one of the most convenient tools I've come across in my 11 years of development.
That said, a bit of skepticism still remains. I did encounter a few bugs that were clearly caused by a simple lack of maintenance, and that's a red flag for me. Because of this, I wouldn't recommend using it for huge projects. I'll share one of these issues at the end, along with a simple workaround.
So...
Architect calls itself a framework (which I completely disagree with; it feels more like a library to me). It lets you define your AWS infrastructure as code while providing an excellent developer experience. It supports NodeJS, Python, and Deno, but I've only used it with NodeJS so far.
Alright, let's get to the fun part
I'll show you how to build simple CRUD operations for a User entity in NodeJS, run the backend locally, and deploy it to AWS Lambda together with all the necessary infrastructure, including DynamoDB.
The beginning is pretty standard, so I won't go into too much detail.
After installing Architect and initializing the project, we'll install all the dependencies needed for DynamoDB, AWS Lambda, and type definitions.
yarn init
yarn add -D @architect/architect
yarn add -D @architect/plugin-typescript
yarn add -D @aws-lite/dynamodb-types
yarn add -D @types/node
npx arc init
yarn add @architect/functions
yarn add @aws-sdk/client-dynamodb
yarn add @aws-sdk/lib-dynamodb
Let's add all the necessary commands for running and deploying right away.
I want to highlight the yarn dev command here: it runs the sandbox script, which starts the local environment (including all dependencies) and can even populate the database with test data. I'll show how that works a bit later.
{
  "scripts": {
    "dev": "npx arc sandbox",
    "deploy:production": "npx arc deploy --production",
    "deploy:staging": "npx arc deploy --staging"
  }
}
To start, I want to create a contract between the Architect API and my small application's code.
For that, I only need a single type, Response, which defines the format in which the main application code communicates responses back to the framework.
Response:
export type Response<T> =
  | {
      code: 200,
      payload: T,
    }
  | {
      code: 500 | 404,
      payload: string,
    };
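To illustrate how this discriminated union lets TypeScript narrow the payload type by checking code, here is a small standalone sketch (AppResponse and describeResponse are my own names for this illustration; AppResponse mirrors the Response type above but is renamed to avoid clashing with the DOM's built-in Response):

```typescript
// Mirrors the article's Response type (renamed to avoid the DOM's Response).
type AppResponse<T> =
  | { code: 200; payload: T }
  | { code: 500 | 404; payload: string };

function describeResponse<T>(response: AppResponse<T>): string {
  if (response.code === 200) {
    // In this branch TypeScript knows payload is T.
    return 'ok';
  }
  // Here payload has been narrowed to string.
  return `error ${response.code}: ${response.payload}`;
}

console.log(describeResponse({ code: 404, payload: 'User not found' }));
// → "error 404: User not found"
```

The single union type keeps success and error shapes in one place, so every controller method is forced to handle both.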
Since I prefer modular architecture and layering, I'll create a folder modules/user with three subfolders to separate business logic from infrastructure.
domain: contains services and entities
User entity:
export type User = {
  id: string;
  name: string;
  email: string;
  createdAt: string;
  updatedAt: string;
};
UserService:
import type { User } from '../entity';

export interface IUserRepository {
  findById: (id: string) => Promise<User | undefined>;
  deleteById: (id: string) => Promise<void>;
  create: (userData: Omit<User, 'id' | 'createdAt' | 'updatedAt'>) => Promise<User>;
  update: (userData: Omit<User, 'createdAt' | 'updatedAt'>) => Promise<void>;
}

export class UserService {
  private repository: IUserRepository;

  constructor(repository: IUserRepository) {
    this.repository = repository;
  }

  public async getUserById(id: string): Promise<User | undefined> {
    return this.repository.findById(id);
  }

  public async createUser(userData: Omit<User, 'id' | 'createdAt' | 'updatedAt'>): Promise<User> {
    return this.repository.create(userData);
  }

  public async updateUser(userData: Omit<User, 'createdAt' | 'updatedAt'>): Promise<void> {
    return this.repository.update(userData);
  }

  public async deleteUserById(id: string): Promise<void> {
    return this.repository.deleteById(id);
  }
}
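Because the service only depends on the IUserRepository interface, it can be exercised without any database at all. The sketch below wires up a hypothetical in-memory test double (InMemoryUserRepository is my own naming, not from the article), which repeats the article's types so it is self-contained:

```typescript
import { randomUUID } from 'node:crypto';

// Same shapes as in the article, repeated here so the sketch is self-contained.
type User = {
  id: string;
  name: string;
  email: string;
  createdAt: string;
  updatedAt: string;
};

interface IUserRepository {
  findById: (id: string) => Promise<User | undefined>;
  deleteById: (id: string) => Promise<void>;
  create: (userData: Omit<User, 'id' | 'createdAt' | 'updatedAt'>) => Promise<User>;
  update: (userData: Omit<User, 'createdAt' | 'updatedAt'>) => Promise<void>;
}

// Hypothetical in-memory test double backed by a Map.
class InMemoryUserRepository implements IUserRepository {
  private users = new Map<string, User>();

  async findById(id: string): Promise<User | undefined> {
    return this.users.get(id);
  }

  async deleteById(id: string): Promise<void> {
    this.users.delete(id);
  }

  async create(userData: Omit<User, 'id' | 'createdAt' | 'updatedAt'>): Promise<User> {
    const now = new Date().toISOString();
    const user: User = { id: randomUUID(), ...userData, createdAt: now, updatedAt: now };
    this.users.set(user.id, user);
    return user;
  }

  async update(userData: Omit<User, 'createdAt' | 'updatedAt'>): Promise<void> {
    const existing = this.users.get(userData.id);
    if (existing) {
      this.users.set(userData.id, { ...existing, ...userData, updatedAt: new Date().toISOString() });
    }
  }
}

// Usage in a unit test: new UserService(new InMemoryUserRepository())
```

Swapping the DynamoDB repository for this double is exactly the kind of separation the layering is meant to buy.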
infrastructure: contains repositories for DynamoDB access
User Repository (this is where the connection to AWS DynamoDB lives, along with a little "kung-fu" trick: a private constructor and an asynchronous factory method for creating the class, since the database connection is established asynchronously as well):
import crypto from 'node:crypto';
import arc from '@architect/functions';
import type { ArcDB } from '@architect/functions/types/tables';
import { User } from '../../domain/entity';
import { IUserRepository } from '../../domain/service';

type UserTable = {
  users: User;
};

export class UserDynamoDBRepository implements IUserRepository {
  private db: ArcDB<UserTable>;

  private constructor(db: ArcDB<UserTable>) {
    this.db = db;
  }

  public static async create() {
    const db = await arc.tables<UserTable>();
    return new UserDynamoDBRepository(db);
  }

  public async findById(id: string): Promise<User | undefined> {
    return this.db.users.get({ id });
  }

  public async deleteById(id: string): Promise<void> {
    await this.db.users.delete({ id });
  }

  public async create(userData: Omit<User, 'id' | 'createdAt' | 'updatedAt'>): Promise<User> {
    const id = crypto.randomUUID();
    const createdAt = new Date().toISOString();
    return this.db.users.put({
      id,
      ...userData,
      createdAt,
      updatedAt: createdAt,
    });
  }

  public async update(userData: Omit<User, 'createdAt' | 'updatedAt'>): Promise<void> {
    const { id, ...data } = userData;
    const updatedAt = new Date().toISOString();
    const payload = { ...data, updatedAt };
    const updateExpression = Object.keys(payload)
      .map((key) => `#${key} = :${key}`)
      .join(', ');
    const expressionAttributeNames = Object.fromEntries(
      Object.keys(payload).map((key) => [`#${key}`, key]),
    );
    const expressionAttributeValues = Object.fromEntries(
      Object.entries(payload).map(([key, value]) => [`:${key}`, value]),
    );
    await this.db.users.update({
      Key: { id },
      UpdateExpression: `set ${updateExpression}`,
      ExpressionAttributeNames: expressionAttributeNames,
      ExpressionAttributeValues: expressionAttributeValues,
    });
  }
}
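To make the dynamically built update expression easier to picture, here is the same string-building logic extracted into a standalone helper (buildUpdateArgs is my own name; the article keeps this inline in the update method):

```typescript
// Builds the three arguments a DynamoDB update expects from a flat payload.
// The #key / :key placeholders keep attribute names safe even when they
// collide with DynamoDB reserved words (e.g. "name").
function buildUpdateArgs(payload: Record<string, unknown>) {
  const keys = Object.keys(payload);
  return {
    UpdateExpression: `set ${keys.map((key) => `#${key} = :${key}`).join(', ')}`,
    ExpressionAttributeNames: Object.fromEntries(keys.map((key) => [`#${key}`, key])),
    ExpressionAttributeValues: Object.fromEntries(
      Object.entries(payload).map(([key, value]) => [`:${key}`, value]),
    ),
  };
}

console.log(buildUpdateArgs({ name: 'John', updatedAt: '2023-01-01T00:00:00.000Z' }).UpdateExpression);
// → "set #name = :name, #updatedAt = :updatedAt"
```

Keeping every attribute behind a placeholder means the repository never has to special-case reserved words.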
presentation: contains controllers, which will later be invoked by AWS Lambda
User Controller:
import { Response } from '../../../common/presentation';
import { User } from '../../domain/entity';
import { UserService } from '../../domain/service';
import { UserDynamoDBRepository } from '../../infrastructure/repository/UserDynamoDBRepository';

export class UserController {
  private service: UserService;

  private constructor(service: UserService) {
    this.service = service;
  }

  public static async create() {
    const repository = await UserDynamoDBRepository.create();
    const service = new UserService(repository);
    return new UserController(service);
  }

  public async getUserById(id: string): Promise<Response<User>> {
    try {
      const user = await this.service.getUserById(id);
      if (!user) {
        return {
          code: 404,
          payload: 'User not found',
        };
      }
      return {
        code: 200,
        payload: user,
      };
    } catch (error) {
      return {
        code: 500,
        payload: 'Internal server error',
      };
    }
  }

  public async deleteUserById(id: string): Promise<Response<void>> {
    try {
      await this.service.deleteUserById(id);
      return {
        code: 200,
        payload: undefined,
      };
    } catch (error) {
      return {
        code: 500,
        payload: 'Internal server error',
      };
    }
  }

  public async createUser(userData: Omit<User, 'id' | 'createdAt' | 'updatedAt'>): Promise<Response<User>> {
    try {
      const user = await this.service.createUser(userData);
      return {
        code: 200,
        payload: user,
      };
    } catch (error) {
      return {
        code: 500,
        payload: 'Internal server error',
      };
    }
  }

  public async updateUser(userData: Omit<User, 'createdAt' | 'updatedAt'>): Promise<Response<void>> {
    try {
      await this.service.updateUser(userData);
      return {
        code: 200,
        payload: undefined,
      };
    } catch (error) {
      return {
        code: 500,
        payload: 'Internal server error',
      };
    }
  }
}
That's it! We've finished the application code; now let's integrate it into the framework.
Let's create separate directories outside the modules for each CRUD operation: src/http/get-user, src/http/put-user, src/http/post-user, and src/http/delete-user.
The exact location of these functions doesn't really matter; I just chose these folder names for my own organization.
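For reference, the resulting layout looks roughly like this (inferred from the import paths in the snippets; your exact structure may differ):

```
src/
  http/
    get-user/
    post-user/
    put-user/
    delete-user/
  modules/
    common/
      presentation/    # shared Response type
    user/
      domain/          # entities and services
      infrastructure/  # DynamoDB repository
      presentation/    # controllers
```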
src/http/get-user
import arc from '@architect/functions';
import type { HttpRequest, HttpResponse } from '@architect/functions/types/http';
import { UserController } from '../../modules/user/presentation/controller/UserController';

export const handler = arc.http.async(async (request: HttpRequest): Promise<HttpResponse> => {
  const { userId } = request.pathParameters;
  const controller = await UserController.create();
  const response = await controller.getUserById(userId);
  return {
    statusCode: response.code,
    json: {
      payload: response.payload,
    },
  };
});
src/http/post-user
import arc from '@architect/functions';
import type { HttpRequest, HttpResponse } from '@architect/functions/types/http';
import { UserController } from '../../modules/user/presentation/controller/UserController';

export const handler = arc.http.async(async (request: HttpRequest): Promise<HttpResponse> => {
  const { name, email } = request.body;
  const controller = await UserController.create();
  const response = await controller.createUser({ name, email });
  return {
    statusCode: response.code,
    json: {
      payload: response.payload,
    },
  };
});
I won't include the code for the other functions, as they follow the same pattern.
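Since every handler performs the same translation from the Response type to Architect's HTTP response shape, that boilerplate could be factored into a tiny helper. This is a hypothetical sketch with my own names (the article keeps the mapping inline in each handler; AppResponse stands in for the shared Response type):

```typescript
// Mirrors the article's Response type (renamed to avoid the DOM's Response).
type AppResponse<T> =
  | { code: 200; payload: T }
  | { code: 500 | 404; payload: string };

// Maps an application-level response onto the shape arc.http.async
// handlers return: a statusCode plus a `json` body.
function toHttpResponse<T>(response: AppResponse<T>) {
  return {
    statusCode: response.code,
    json: { payload: response.payload },
  };
}

// Inside a handler this collapses the return statement to:
//   return toHttpResponse(await controller.getUserById(userId));
```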
So, here's the most interesting part: the main configuration file, called app.arc. It can come in different formats, such as YAML, if that's more convenient for you, but I kept the default format.
In this file, we define the entire infrastructure configuration, including all routes (later, a separate AWS Lambda will be created for each), tables, the AWS region, and so on. You can find more information here, but this basic config is pretty simple.
Additionally, we specify the plugins and my little esbuild.js hack, which I'll explain a bit later.
@app
# App name
devto

@aws
# App region
region us-west-2
# This param is required by the architect/plugin-typescript plugin
runtime typescript

@plugins
# TypeScript support
architect/plugin-typescript

@typescript
# Additional build configuration
esbuild-config esbuild.js

# Routes
@http
/api/user/:userId
  method get
  src src/http/get-user
/api/user/:userId
  method put
  src src/http/put-user
/api/user/:userId
  method delete
  src src/http/delete-user
/api/user
  method post
  src src/http/post-user

# DynamoDB tables
@tables
users
  id *String
And that's it! With such a simple configuration, we get fully functioning infrastructure on AWS Lambda.
Let's try to run it locally
But first, let's pre-populate our local database with some test users. To do this, we can simply create a JSON file named sandbox-seed.json in the project root, containing data for all the tables we need.
{
  "users": [
    {
      "id": "7e90d8f0-2c39-4d0c-8e5e-403a03ccda93",
      "email": "text@example.com",
      "name": "John Jonson",
      "createdAt": "2023-01-01T00:00:00.000Z",
      "updatedAt": "2023-01-01T00:00:00.000Z"
    }
  ]
}
...and...
yarn dev
✓ Sandbox @tables created in local database
✓ Sandbox @http (HTTP API mode / Lambda proxy v2.0 format / live reload) routes
  get    /api/user/:userId ................. src/http/get-user
  get    /* ................................ public/
  post   /api/user ......................... src/http/post-user
  put    /api/user/:userId ................. src/http/put-user
  delete /api/user/:userId ................. src/http/delete-user
http://localhost:3333
Here it is, our test user from the seed file:
GET http://localhost:3333/api/user/7e90d8f0-2c39-4d0c-8e5e-403a03ccda93
HTTP/1.1 200 OK
{
  "payload": {
    "id": "7e90d8f0-2c39-4d0c-8e5e-403a03ccda93",
    "email": "text@example.com",
    "name": "John Jonson",
    "createdAt": "2023-01-01T00:00:00.000Z",
    "updatedAt": "2023-01-01T00:00:00.000Z"
  }
}
Now we can easily create, update, and delete users locally. It's quite simple and convenient. By default, this mode comes with live reload, but that and many other settings are easy to configure.
But even more interesting is how easily we can deploy this to AWS from scratch.
Deploy
yarn deploy:production
App ⌁ devto
Region ⌁ us-west-2
Profile ⌁ default
Version ⌁ Architect 11.3.0
Compiled project in 0.157s
⚬ Deploy Creating new private deployment bucket: devto-cfn-deployments-b0eb2
⚬ Deploy Initializing deployment
✓ Deploy Generated CloudFormation deployment
✓ Deploy Deployed & built infrastructure
✓ Success! Deployed app in 78.558 seconds
https://rncj6atj15.execute-api.us-west-2.amazonaws.com
✨ Done in 80.92s.
...easy. The entire infrastructure was created from scratch in AWS, and it took just over a minute.
A few requests to check...
POST https://rncj6atj15.execute-api.us-west-2.amazonaws.com/api/user
{
  "email": "test@example.com",
  "name": "John Jonson"
}
HTTP/1.1 200 OK
{
  "payload": {
    "id": "83024338-db08-4756-a43d-04a36cb031d8",
    "email": "test@example.com",
    "name": "John Jonson",
    "createdAt": "2023-01-01T00:00:00.000Z",
    "updatedAt": "2023-01-01T00:00:00.000Z"
  }
}
GET https://rncj6atj15.execute-api.us-west-2.amazonaws.com/api/user/83024338-db08-4756-a43d-04a36cb031d8
HTTP/1.1 200 OK
{
  "payload": {
    "id": "83024338-db08-4756-a43d-04a36cb031d8",
    "email": "test@example.com",
    "name": "John Jonson",
    "createdAt": "2023-01-01T00:00:00.000Z",
    "updatedAt": "2023-01-01T00:00:00.000Z"
  }
}
PUT https://rncj6atj15.execute-api.us-west-2.amazonaws.com/api/user/83024338-db08-4756-a43d-04a36cb031d8
{
  "email": "john@example.com",
  "name": "John Jonson"
}
HTTP/1.1 200 OK
{
  "payload": {
    "id": "83024338-db08-4756-a43d-04a36cb031d8",
    "email": "john@example.com",
    "name": "John Jonson",
    "createdAt": "2023-01-01T00:00:00.000Z",
    "updatedAt": "2023-01-01T00:00:00.000Z"
  }
}
DELETE https://rncj6atj15.execute-api.us-west-2.amazonaws.com/api/user/83024338-db08-4756-a43d-04a36cb031d8
HTTP/1.1 200 OK
GET https://rncj6atj15.execute-api.us-west-2.amazonaws.com/api/user/83024338-db08-4756-a43d-04a36cb031d8
HTTP/1.1 404 Not Found
{
  "payload": "User not found"
}
The end... a happy end...
Conclusion
This is a very basic example showing the simplest capabilities of Architect. However, I've also used it with queues, WebSockets, and scheduled tasks, all just as easy as what's described in this article. And, in my opinion, the most important point is that this framework fits nicely with DDD (Domain-Driven Design) patterns. Yes, in my example DDD is implemented very lightly, but it clearly shows that Architect doesn't force you to deeply integrate its API into your application code; you can easily separate or replace it in the future. All in all, it's a really cool tool! :)
P.S. And here's the hack I mentioned earlier. In the version of Architect I used (specifically the @architect/plugin-typescript plugin), I found a bug related to support for an older version of the AWS library for DynamoDB.
In Architect's code, there's a check for the current NodeJS version. If it's below 18, it tries to use the old AWS library, dynamically importing it with require(...). However, esbuild, which runs inside the @architect/plugin-typescript plugin, tries to bundle this import regardless of whether the condition for loading it is ever met.
To fix this, my custom esbuild configuration simply marks the problematic package as external, so it is not bundled.
esbuild.js
module.exports = {
  external: ['aws-sdk/clients/dynamodb'],
};




Top comments (7)
Such a great idea: making screenshots is always the slowest part of a launch. The 80/20 "AI + human control" mix sounds perfect. Would love to see preset styles for different app types down the road!
Hi, Shemith,
Honestly, I am not sure I follow. Could you elaborate?
Thanks for the reply, Dmitrii!
What I meant was: it would be great if your tool offered ready-made screenshot design templates based on the type of product. For example:
• Mobile app style presets
• SaaS dashboard style presets
• AI tool/Figma plugin style presets
• Chrome extension style presets
So users could quickly choose a "theme" that matches their product and then customize inside.
Hope that makes sense!
This is a very interesting post, Dmitry. It highlights the power of abstraction frameworks like Architect.
For my own AWS serverless project (tarihasistani.com.tr), I chose a different path: I used Python and built the entire infrastructure from scratch using Terraform (API GW, Lambda, S3, CloudFront with OAC). That approach gave me maximum control, but it also came with a steep learning curve, especially for complex networking and IAM permissions.
Your article makes me wonder about the trade-offs. Do you think frameworks like Architect (which prioritize simplicity and convention) eventually hit a 'wall' when you need to configure very specific, low-level details (like advanced CloudFront policies or complex IAM roles)? Or does the speed of development they offer almost always outweigh the 'raw control' of using Terraform directly?
Hi, Oguzhan,
Thank you for your feedback and question!
I'm pretty sure that Architect and similar frameworks always have a ceiling, so I avoid using them for very large projects. The main reason is the configuration limits you mentioned. Even if they seem small now, frameworks always lag behind whenever the original API is updated, since they provide an abstraction over it.
In my experience, framework developers implement their own vision of how the abstraction should work. No matter how good they are now, over time they often understand less about the world outside the framework because they focus more on maintaining it than on the bigger ecosystem.
The projects where I used Architect weren't huge, but they weren't just demos or MVPs either. They were either small startups with the potential to grow into medium-sized apps, or already medium-sized apps. For these kinds of projects, I think using Architect makes sense.
What I also like about it is that in the future, I can easily switch from Architect to something like Terraform, the AWS SDK, or plain Kubernetes if needed.