Marc Nevin
Testing out Serverless Stack

A serverless approach can be really powerful. Developing software with serverless architectures and services can speed up development time, make managing infrastructure pain-free, and typically results in cheaper, more flexible solutions. But it's not without its challenges: local testing and mocking can be a pain, debugging systems is more trawling logs than anything else, and monitoring and maintaining dozens of small runtimes can be overly complex.

To alleviate these pains, people have been building a range of platforms and tools to help, with Serverless Framework being one of the best known. They let you build on the serverless promise while avoiding some of the challenges, and newer platforms are being developed that lean on lessons from the earlier ones and leverage developments from the cloud providers since.

A new challenger appears...

Serverless Stack (SST) is one of the newer players, pitching itself as the easiest way for your projects "to go from idea to IPO" and selling itself as an end-to-end serverless development, deployment, and monitoring tool for building on AWS. It's pretty similar to the Serverless Framework, but instead of relying on Terraform or specialised languages, it stands on the shoulders of CDK, allowing you to write and maintain your infrastructure in the same languages you're developing with and push it to AWS as configured stacks.

Key Features

Serverless Stack has a few key features worth digging into.


Constructs

Serverless Stack uses CDK, one of AWS's better IaC tools, which allows you to write and persist your cloud infrastructure in familiar languages. A key feature of CDK, and of SST, is its use of high-level components called constructs: preconfigured infrastructure components and structures that let you build faster by leveraging predefined configurations, boilerplate, and pre-written glue logic. SST then builds on these constructs, refining them into even more concise versions, although currently it only supports a few services. This lets the several components that go into making something like an API be built out in a few lines. They currently have a load of constructs supporting a few key patterns; you can see the full list here.
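As a rough illustration of how concise these constructs are (this snippet follows the SST v1 docs; the stack, field, and key names here are my own examples, not from this article), a DynamoDB-backed table can be declared in a handful of lines:

```typescript
import { StackContext, Table } from "@serverless-stack/resources";

// A sketch of an SST v1 stack function declaring a DynamoDB table
// construct; "Forms" and the "id" field are illustrative names.
export function StorageStack({ stack }: StackContext) {
  const table = new Table(stack, "Forms", {
    fields: { id: "string" },
    primaryIndex: { partitionKey: "id" },
  });
  return { table };
}
```

Behind that one `Table` construct, SST configures the underlying CDK and CloudFormation resources for you, which is where the time saving comes from.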

Live Lambdas

Normally, when developing serverless functions, you've either got to deploy and test your changes with live invocations, or run the lambda locally with heavy mocking, using libraries, manual stubs, and time. SST proposes a novel solution to this: Live Lambdas. For testing and debugging, the CLI deploys a debug stack containing a stub of your function, and invocations to the live stack are redirected through a WebSocket to your local environment and your local Lambda code, rather than a hosted lambda. This allows for local testing, debugging, and live reloads: you can edit, watch, and debug the function live, with native support for a few major editors, such as VSC, letting you set and observe breakpoints in your editor. There's a diagram from their docs that helps explain the flow.

SST Live Lambda Architecture

SST Dashboard

Another feature, one you see with similar tools, is a browser-based console for observing and monitoring your local and deployed stacks. It allows you to inspect various aspects of your application: seeing real-time logs from your app, showing you all the resources in your stacks, and letting you manually invoke and replay invocations to your services.

SST Console Stacks Tab

It also allows you to explore different resources, such as databases and Cognito pools, letting you see and query the data, users, and configuration of components in your app. It can be connected to local and live remote environments, so you can see test and debug data, or live data from CloudWatch and other services. This makes it pretty useful for developing your application, preloading data for testing, and teasing out issues when connecting services together.

Spiking the Service

SST sounded promising, so to test it out, I spiked those key features. Aiming for a quick test, I set out to build a serverless three-tier application with it: a simple React frontend, an API accepting form data, and a table storing that form data.

Since SST is built around a series of NPM packages, its only prerequisites are NPM and having the AWS CLI already configured. From there we can use SST and its templates to create our project structure, navigate into the created folder, and kick off the project.

$ npx create-sst@latest --template=starters/typescript-starter react-app 

$ cd react-app 

$ npm install 


This uses an SST template to generate the structure we need for a TypeScript React app in seconds. From there we can start our stack with NPX, and it'll deploy the basic structure for the first time within 5-10 minutes.

npx sst start 


From this point, we've deployed a basic app that's observable from the SST console, and building on towards our goal is fairly straightforward. Rather than repeating SST's tutorials and "getting started" content here, it makes more sense to direct you to their great guide covering a simple counter React app using the same services I used; you can check that out here.

Rather than abridging their guides, I've pulled out a sample to show just how concisely it allows you to build. The stacks/MyStack.ts file lets you implement each new resource in one line, so at the end of that guide the top of that file has:

import { 
  Api, 
  StackContext, 
  Table, 
} from "@serverless-stack/resources"; 


With a few lines for defining each; for example, an API with a POST endpoint is defined with:

const api = new Api(stack, "API", { 
  defaults: { 
    function: { 
      // Allow the API to access the table 
      permissions: [table], 
      // Pass in the table name to the API 
      environment: { 
        tableName: table.tableName, 
      }, 
    }, 
  }, 
  routes: { 
    "POST /": "functions/lambda.handler", 
  }, 
}); 


This defines most of the bare bones of the API in a few lines: an API Gateway with an endpoint directed at a Lambda handler, with the environment variables for the table you want to use made accessible to it. It's easily extended, letting you add more routes or features to get it production-ready fairly easily.
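For completeness, here's a minimal sketch of what the functions/lambda.handler wired up in that route might look like. This handler isn't taken from the guide: the event shape is simplified, and the DynamoDB write is left as a comment so the sketch stays self-contained.

```typescript
// A hypothetical functions/lambda.ts for the POST / route above.
// The guide's real handler writes the form data to DynamoDB using the
// tableName environment variable; that call is sketched in a comment.

interface ApiEvent {
  body: string | null;
}

interface ApiResult {
  statusCode: number;
  body: string;
}

export async function handler(event: ApiEvent): Promise<ApiResult> {
  if (!event.body) {
    return {
      statusCode: 400,
      body: JSON.stringify({ error: "Missing form data" }),
    };
  }

  const form = JSON.parse(event.body);
  const item = {
    id: Date.now().toString(), // illustrative key; the guide derives its own
    ...form,
  };

  // With the aws-sdk DocumentClient this would be roughly:
  //   await new DynamoDB.DocumentClient()
  //     .put({ TableName: process.env.tableName!, Item: item })
  //     .promise();

  return { statusCode: 200, body: JSON.stringify(item) };
}
```

Because the construct passed `tableName` through `environment`, the handler only needs `process.env.tableName` at runtime, with no hard-coded resource names.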

Overall, it's surprisingly painless to work with: the constructs enabled rapid standing-up of services, the live debugging was useful for testing and trying different requests, and the console let us monitor the debug stack and check invocations locally without having to constantly redeploy the stacks. It had a surprising amount of utility in the time we used it, living up to the simplicity and speed promised on the website.


After spiking it, there are still some broader benefits and limitations worth addressing.

NPM - It's the devil

Package management was a challenge: there was a torrent of version clashes between libraries and dependencies, with no easy solution short of diving into the weeds, which was beyond the scope for me. I imagine with some time you could untangle the knot of dependencies, but a major issue or vulnerability logged against an old version could throw this all up again, creating a maintenance overhead.

Live Lambdas ... are Live?

Live Lambdas are pretty effective, but they are still live, and given how easy they were to deploy, it seems you could easily leave some resources running by accident and end up with a wave of security confusion or unplanned cloud spend without any awareness: security, because you might have unprotected and poorly configured endpoints live, and spend, from leaving resources running accidentally. There are clearly easy remediations here, but reading the docs it wasn't something that jumped out; it seems like an easy way to burn yourself on the tool.
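One simple habit that would help, sketched here from SST's own CLI (the `remove` command is in their docs, though the stage name below is just an example), is tearing down your personal debug resources when you finish a session:

```shell
# Tear down everything SST deployed for your personal stage;
# --stage should match the stage you ran `sst start` with.
npx sst remove --stage my-dev-stage
```

Pairing this with a billing alert in AWS would cover most of the accidental-spend risk.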

Console Sync

The console and the CLI tool are pretty great: fairly easy to use and straightforward. Unless you manage to make a change in the stack that isn't reflected in the SST Console; when they end up out of sync, fixing them wasn't as simple as turning them off and on again. It led to a lot of manual stack task killing and console reviews, until I'd finally killed off enough that the console accepted something had gone wrong and was able to successfully tear down its own provisioned stack. My recommendation here would be to make sure whatever IAM role you set up to use with SST is configured so that you can manage its resources yourself, and to add separate monitoring.
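When it got truly stuck, falling back to CloudFormation directly is an option; a sketch assuming the AWS CLI is configured, where the stack name (which SST derives from your stage and app names) is looked up first:

```shell
# List deployed stacks to find the one SST created
aws cloudformation list-stacks \
  --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE

# Force-delete the stuck stack by name (illustrative name)
aws cloudformation delete-stack --stack-name dev-react-app-DebugStack
```

This bypasses SST entirely, which is exactly why the IAM role you use needs permission to manage the resources SST provisions.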


Frustratingly, a lot of the places SST struggled were also the places with the most utility and hope.

NPM - Package management for the whole team

As much as dependency issues are a nightmare, when working in a team, a lot of people have had some interaction with NPM, so they're often somewhat familiar with how it works, how to use it, and how to work with it collaboratively. It feels "lightweight" and familiar to a lot of people, and that familiarity would make adopting something like SST into your team a lot more straightforward.

Local Debugging and Testing

Working in VSC and setting breakpoints without having to worry about stubs and mocks was a dream; it sped up the process of chasing down issues in the course of this spike, and it seems like, scaled out to a full project, it would pay dividends for larger features.

Detailed Dashboards

A lot of similar tools have UIs and consoles like this, and much like those, it has a fair amount of utility: being able to inspect and review different aspects of requests to the API, replay them, and trace them locally would be invaluable. Being able to inspect Cognito setups or data in a DB locally would save time trawling through the AWS console, even for those well acquainted with it. It was a big time-saver and generally just improved the developer experience (DevEx).


SST is an interesting tool; it feels like it's still maturing, but it's got a lot of potential, and if you can afford an early adoption of it on a project, I think you'll see a fairly big payoff in DevEx and time saved down the line, at the cost of some granular control and power. Overall, I think a small team that wants to move fast and build an early version of an AWS-backed application could do some great things with it.

Personally, I'm going to try it out on something larger-scoped next, and see about working with it in a more collaborative setting, before looking at a larger adoption.

Cover Image Credit - Photo by Ryan Stone on Unsplash

Image Credit - ServerlessStack
