
Remi W.


🔥 Serverless boilerplate with TypeScript and Express [Open Source] 🚀

A Serverless boilerplate for building a backend with Express and TypeScript. Built with developer experience in mind, it includes ESLint, Prettier, Husky and lint-staged to help you write high-quality code.

It also includes VSCode integration for better debugging sessions.

The Serverless starter code supports several features like:

  • ⚡️ Offline support

  • ⚙️ Environment variables

  • 🚀 Production-ready

  • ✨ Live reload

GitHub: ixartz / Serverless-Boilerplate-Express-TypeScript

🚀 Boilerplate and Starter for Serverless framework, ExpressJS and TypeScript ⚡️ Made with developer experience first: Serverless framework + Live reload + Offline support + ExpressJS + TypeScript + ESLint + Prettier + Husky + Lint-Staged + Dotenv + VSCode

Serverless Boilerplate with Serverless Framework, ExpressJS and TypeScript


🚀 Serverless Boilerplate is starter code for your backend and REST API based on Serverless framework with Express JS and TypeScript. ⚡️ Made with Serverless framework, Live reload, Offline support, ExpressJS, TypeScript, ESLint, Prettier, Husky, Lint-Staged, Dotenv, VSCode.

Clone this project and use it to create your own backend.
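
To give a rough idea of the wiring, here is a minimal sketch of an entry point that exposes an Express app as a single Lambda handler via serverless-http. The file name and route are made up for illustration; check the repository for the actual code.

```ts
// src/handler.ts (hypothetical file name)
import express from 'express';
import serverless from 'serverless-http';

const app = express();
app.use(express.json());

// Example route for illustration; add your own routes here.
app.get('/hello', (_req, res) => {
  res.json({ message: 'Hello from Express running inside one Lambda' });
});

// serverless-http translates API Gateway events into Express requests and back.
export const handler = serverless(app);
```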

Features

Developer experience first:

  • 🔥 Serverless framework
  • 📖 Local support with Serverless Offline
  • ⚙️ Environment variables with Serverless Dotenv
  • ⚡️ ExpressJS
  • 🎉 Type checking TypeScript with strict mode
  • ✏️ Linting with ESLint (Airbnb configuration)
  • 🛠 Code Formatter with Prettier
  • 🦊 Husky for Git Hooks
  • 🚫 Lint-staged for running linters on Git staged files
  • 🗂 VSCode configuration: Debug, Settings, Tasks and extension for ESLint, Prettier, TypeScript
  • HTTP API instead of REST API Gateway for cost optimization
  • 💨 Live reload

Philosophy

  • Minimal code
  • 🚀 Production-ready


I use this boilerplate to build all my backend projects. It's perfect for starting any backend project, like building a REST API with Serverless.

Fully managed, without any server maintenance. No need to learn or use Docker. Highly scalable while keeping your costs low.

Full-stack serverless SaaS application

If you want to build a SaaS application faster and better, without wasting your time configuring and installing dependencies, I have built a Serverless SaaS Boilerplate.

It includes everything you need to launch your SaaS in one month: authentication, subscription payments, user dashboards, landing pages, form management, infrastructure as code, etc. A full-stack application, 100% serverless, written in TypeScript.

Serverless SaaS Boilerplate

Oldest comments (15)

Rolf Streefkerk

why do you use express for serverless?

André Brandão

I am also curious to know. Is this type of setup going to route all requests to a single lambda?

Remi W.

Yes. I've asked myself the same question: should I use Express or not? It definitely depends on the scenario, use case and context.

With Express, you can have all of your routes defined in one single Lambda. So, one cold start instead of multiple ones. But that cold start is longer...
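
To make that concrete, here is a rough sketch of a serverless.ts where a single catch-all function receives every request and Express does the routing internally. The service name, runtime and handler path are assumptions, not the boilerplate's actual config.

```ts
// serverless.ts (illustrative only)
import type { AWS } from '@serverless/typescript';

const serverlessConfiguration: AWS = {
  service: 'my-express-api',
  frameworkVersion: '3',
  provider: {
    name: 'aws',
    runtime: 'nodejs18.x',
  },
  functions: {
    // One function receives every HTTP API request; Express routes it internally.
    app: {
      handler: 'src/handler.handler',
      events: [{ httpApi: '*' }],
    },
  },
};

module.exports = serverlessConfiguration;
```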

Gouwy Jean-louis

So it means that if one of your endpoints is greedier, heavier or longer to process, you have to give your single Lambda the same configuration for all existing endpoints?
Imagine your app has some export or import functions that need 1 GB of RAM and 10 minutes to run: would that configuration also apply to a simple "get customer by id", for example?
So you can't optimize per endpoint. And there's the cold start, but you already mentioned it.

Where do you draw the line when choosing Express over the Serverless Framework's own routing, then? When won't you use Express?

Remi W.

As I said, it totally depends on your use cases and context.
I don't have a use case like the one you mention, requiring 1 GB of RAM and 10 minutes to run. But you can still have one catch-all endpoint for your routes and another endpoint for the greedy, heavier, longer task.

You can have one endpoint for all your routes, you can have one endpoint for one route... or you can have one endpoint for multiple routes...
You don't need to oppose the two models; you can mix them together.

For example, one endpoint for 3 routes, another endpoint for 2 routes and the last endpoint for 5 routes. You can mix them based on your context.
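
As a sketch of what that mixing could look like in the functions block of a serverless.ts (all names, memory sizes and timeouts below are invented for illustration):

```ts
import type { AWS } from '@serverless/typescript';

// Hypothetical functions block: a small catch-all Lambda for most routes,
// plus a dedicated Lambda for a heavy route with its own memory and timeout.
const functions: AWS['functions'] = {
  app: {
    handler: 'src/handler.handler',
    memorySize: 256,
    timeout: 10,
    events: [{ httpApi: '*' }],
  },
  exportCustomers: {
    handler: 'src/export.handler',
    memorySize: 1024,
    // HTTP APIs cap the integration timeout at ~30 s, so a truly long-running
    // export would usually be kicked off asynchronously (e.g. via a queue) instead.
    timeout: 28,
    events: [{ httpApi: { method: 'post', path: '/exports' } }],
  },
};

export default functions;
```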

I didn't choose Express over the Serverless Framework. I added Express inside the Serverless Framework: I'm just getting the best of both worlds!

Rolf Streefkerk

In my opinion, you should not use Express for cases where you're using an API Gateway and/or where latency is a real factor for user experience.

For Lambda functions (AWS) or other function engines, it's additional code overhead that is simply not needed. Hence, there are serverless-specific solutions that reduce code overhead and only provide utility functions.

A very well-known library used for serverless functions is Middy:
middy.js.org/

For route switching, for instance when you're using an API Gateway, the event data already provides sufficient information to do simple if or switch statements on the path and method, without the need for a software router like Express.
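
For illustration, a rough sketch of that idea: a plain handler that switches on the HTTP API route key (payload format 2.0), with no software router. The routes are made up, and a library like Middy would typically wrap such a handler to add body parsing, validation and so on.

```ts
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from 'aws-lambda';

// Illustrative routes only — branch on "METHOD /path" from the HTTP API event.
export const handler = async (
  event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> => {
  switch (event.routeKey) {
    case 'GET /customers/{id}':
      return { statusCode: 200, body: JSON.stringify({ id: event.pathParameters?.id }) };
    case 'POST /customers':
      return { statusCode: 201, body: event.body ?? '{}' };
    default:
      return { statusCode: 404, body: 'Not found' };
  }
};
```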

Remi W.

This leads to an endless debate, like Vim vs. Emacs... one endpoint for all routes or one endpoint per route... There isn't any clear answer. Using one endpoint per route also has some limitations, like the resource limits of CloudFormation stacks. So, with one endpoint per route, you have a limit on the number of routes.

And another thing: what happens if you already have an Express application?

There isn't only one answer; the answer is specific to each use case... one solution won't fit all...

Rolf Streefkerk

Please re-read my reply.
Have a good day.

Remi W.

Yes, I re-read it three times...
I also feel you didn't read mine ;)
Maybe my English isn't good enough... Comments aren't the best way to share our backgrounds and needs.

Anyway, have a good day too.

 
Gouwy Jean-louis

Ok, thanks for your reply.

I understand your points, but I can't see the context or use case for putting multiple endpoints behind a single Lambda.

Maybe to launch a PoC quickly... ok.
But for the long term: cold starts, per-Lambda caching, per-endpoint monitoring, clarity in the AWS console, per-Lambda optimisations such as dedicated RAM, lifetime, timeout, dead-letter behaviour... etc. All of this seems really useful in any case.

In fact, I don't see this kind of pattern as production-ready. Do you have some examples?

Indeed, as Rolf said, Middy coupled with Serverless gives you all you need (flexibility for optimisation).

Anyway, it's interesting to interact and see different points of view.

Remi W.

I also understand your points, even if I don't agree with them.
Both solutions have drawbacks and advantages... There is an in-between.

Thank you too for your reply and for taking the time.

Gouwy Jean-louis

Ok, but you don't answer my questions.

What are the drawbacks of our solution compared to the one in this article?
I really want to understand in which specific cases and contexts you would use it (some examples?).

If you don't agree, there must be some specific reasons you can detail.

I really don't want to annoy you. I would like to understand through concrete answers, to project myself and see whether I should use this approach instead.

Remi W.

I have already answered in my previous comments, with details. I have already shared three examples where one endpoint per route isn't the best solution.

If these drawbacks don't apply to you, that's good news, and the pattern you have followed is the best solution for you.

But it doesn't mean we all need to follow the same pattern. There isn't only one way to do it.

I really don't know what more I can add.

Gouwy Jean-louis

Ok, I didn't catch your resource-limits argument! Sorry. :-)

And I think what Rolf said is that he agrees with you, but that we should not use this pattern when latency is important for the UX, meaning we need to keep cold starts low to be able to respond as quickly as possible :-)

sadiul hakim

nice