Sebastian Wessel

Originally published at Medium

The need for a new backend framework

My name is Sebastian, and I have been working as a TypeScript & JavaScript developer for many years, primarily as a freelancer.
Throughout my career, I have worked on various projects, ranging from monolithic architectures and cloud microservice architectures to Lambda/FaaS architectures.
I have experience working in small teams as well as global distributed multi-team environments.

Currently, I am working on a TypeScript-based backend framework called PURISTA.
In this article, I aim to explain why there is a need for new frameworks for backend development.

The issues I would like to address with my framework are as follows:

  1. The decision regarding how the software will be deployed must be made at the very beginning of the project.
  2. The choice between monolithic, microservice, or Lambda architectures cannot be easily changed or reverted.
  3. Single developers, small teams, and small startups can work faster if they don't have to worry about infrastructure or deployment and can focus solely on business requirements.
  4. Monolithic architectures are generally harder to scale when multiple developers and teams are involved in the project.

In short, the goal is to build software quickly, focusing on business requirements, in a cost-efficient manner, without losing flexibility for the future.


There are various architecture options available, including monoliths, distributed microservices, and applications built with multiple Lambda functions.

Each architecture has its own advantages and disadvantages, and the choice depends on the specific project, codebase, and team dynamics.

Based on my experience, I would like to make two general statements:

Statement 1: When transitioning from a monolithic approach to a distributed approach, the complexity and workload required to manage the software significantly increase.

Statement 2: Converting a monolith into a distributed system requires extensive refactoring and work, especially if the original implementation lacks modularity and separation.

How strongly these statements apply will vary with the project, codebase, and team size.


The core idea of my framework is to build software in a similar style to Lambdas and FaaS, utilizing a message-based communication approach inspired by event-driven architectures. Each endpoint, GraphQL query/resolver, or task is treated as a single isolated function.

I have categorized these functions into two types: Commands and Subscriptions.

A Command is a function triggered by someone or something, expecting a response as a result.

On the other hand, a Subscription listens for specific events or message patterns.

The producer of these events or messages has no knowledge of the consuming Subscriptions. Moreover, a Subscription can generate its own events or messages that can be consumed by other Subscriptions.

Commands and Subscriptions are organized into a Service, which can be considered as a domain. A Service primarily provides general configuration and should not contain any business functionality.

So far, so good, right? Now you might be wondering: what is the key to this approach?

The key lies in the fact that the communication and deployment mechanisms are abstracted away by the framework.

The implementation is done against interfaces, allowing flexibility in choosing the communication and deployment strategies.
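To make that concrete, here is a deliberately simplified sketch of such an interface boundary. This is an illustration only, not PURISTA's actual interface (the real one also covers subscriptions, tracing, and more):

// Simplified illustration of the abstraction idea - not PURISTA's real interface
interface EventBridge {
  // set up connections, queues, and so on
  start(): Promise<void>
  // publish a message; implementations may deliver in-process or via a broker
  emitMessage(message: unknown): Promise<void>
  // shut down gracefully
  stop(): Promise<void>
}

Because services are implemented against such an interface, swapping the in-process implementation for a broker-backed one changes the transport without touching the business code.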

General big picture of PURISTA framework


For example, let's consider two services: the User service with the registerNewUser command and the Email service with the sendWelcomeMail subscription, which sends an email to every user registered by the registerNewUser command.
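Before looking at deployment, here is a rough sketch of how these two services could be defined. It follows PURISTA's builder pattern, but it is simplified, the zod schemas are illustrative assumptions, and the method names may differ in detail from the handbook:

import { ServiceBuilder } from '@purista/core'
import { z } from 'zod'

// User service exposing the registerNewUser command (simplified sketch)
export const userV1Service = new ServiceBuilder({
  serviceName: 'user',
  serviceVersion: '1',
  serviceDescription: 'manages users',
})

userV1Service
  .getCommandBuilder('registerNewUser', 'registers a new user')
  .addPayloadSchema(z.object({ email: z.string().email(), name: z.string() }))
  .setCommandFunction(async function (context, payload) {
    // ...persist the user, then emit a custom event for interested subscribers;
    // the command neither knows nor cares who will consume it
    return { userId: 'some-id' }
  })

// Email service with the sendWelcomeMail subscription (simplified sketch)
export const emailV1Service = new ServiceBuilder({
  serviceName: 'email',
  serviceVersion: '1',
  serviceDescription: 'sends emails',
})

emailV1Service
  .getSubscriptionBuilder('sendWelcomeMail', 'sends a welcome mail to new users')
  .setSubscriptionFunction(async function (context, message) {
    // ...send the welcome email
  })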

In a simple monolithic deployment scenario, the index or main file would look like this:

import { DefaultEventBridge } from '@purista/core'

import { emailV1Service } from './service/email/v1'
import { userV1Service } from './service/user/v1'

const main = async () => {
  // in-process event bridge - no external broker required
  const eventBridge = new DefaultEventBridge()
  await eventBridge.start()

  const userService = userV1Service.getInstance(eventBridge)
  await userService.start()

  const emailService = emailV1Service.getInstance(eventBridge)
  await emailService.start()
}

main()

Now, if you want to scale your application, you have two options.

Option one is to simply spin up a new instance, which works well for simple examples. However, in more complex and fault-tolerant scenarios, you may want to distribute the load between instances.

This brings us to option two: adding a message broker to the mix. There are several possibilities available, with more options constantly emerging; at the moment, you can choose between AMQP (RabbitMQ), MQTT, NATS, and Dapr.

To take your application to the next level, you only need to make a small change in the index or main file:

// import some other event bridge
import { AmqpBridge } from '@purista/amqpbridge'

import { emailV1Service } from './service/email/v1'
import { userV1Service } from './service/user/v1'

const main = async () => {
  // change the event bridge
  const eventBridge = new AmqpBridge()
  await eventBridge.start()

  const userService = userV1Service.getInstance(eventBridge)
  await userService.start()

  const emailService = emailV1Service.getInstance(eventBridge)
  await emailService.start()
}

main()

With this configuration, when a new user is created by instance 1, they will receive a welcome email sent by either instance 1 or instance 2. The work will be evenly shared between the instances.

Imagine that your team and product are growing, and you need to scale. You decide to transition to a multi-repository and microservices architecture.

The process is straightforward. Copy the code into multiple repositories and remove the service folders that are not relevant to each repository. Then, open the index or main files and remove the services that are not needed.

The index file for the User repository will look like this:

import { AmqpBridge } from '@purista/amqpbridge'

import { userV1Service } from './service/user/v1'

const main = async () => {
  const eventBridge = new AmqpBridge()
  await eventBridge.start()

  const userService = userV1Service.getInstance(eventBridge)
  await userService.start()
}

main()

And the index file for the Email repository will look like this:

import { AmqpBridge } from '@purista/amqpbridge'

import { emailV1Service } from './service/email/v1'

const main = async () => {
  const eventBridge = new AmqpBridge()
  await eventBridge.start()

  const emailService = emailV1Service.getInstance(eventBridge)
  await emailService.start()
}

main()

Now, you can deploy each repository as a separate microservice.

Each service can be managed independently, and developers can work on specific repositories without affecting others.

If you find that the microservices architecture does not meet your requirements, you can easily revert the changes.


What about deploying as AWS Lambda or Azure Function?

The approach is similar.

Because services are only logical groups of commands and subscriptions, you can deploy each service individually as an instance containing just a single command or subscription: one function per deployment unit.
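To illustrate, a single-command entry point could look roughly like the sketch below. This is hypothetical: LambdaEventBridge is an assumed adapter name, not a published package, and the actual wiring will depend on how the automation turns out:

// Hypothetical sketch - LambdaEventBridge is an assumed adapter, not a real package
import { LambdaEventBridge } from '@purista/lambdabridge' // assumption
import { userV1Service } from './service/user/v1'

// one Lambda function = one service instance exposing a single command
export const handler = async (event: unknown) => {
  const eventBridge = new LambdaEventBridge()
  await eventBridge.start()

  const userService = userV1Service.getInstance(eventBridge)
  await userService.start()

  // translate the incoming Lambda event into a command invocation (hypothetical)
  return eventBridge.handleLambdaEvent(event)
}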

I am currently investigating different approaches to automate this process as much as possible. It is technically feasible, and I aim to provide simple ways to reduce manual steps. This may involve connecting to AWS EventBridge and AWS API Gateway to support real-world scenarios.


As you can see, this approach allows you to postpone the decision between a monolith, microservices, or FaaS-style architecture until later in the development process.

It also provides the flexibility to change your mind and revert the changes without refactoring your entire codebase.

This approach offers the advantage of starting small and easily scaling up. It is particularly suitable for Proof of Concept (PoC) and prototype development, as it allows you to build a stable product that can grow and scale. Additionally, much of the necessary documentation, such as OpenAPI documentation, is automatically generated from your code.


PURISTA provides a handy CLI wizard to work efficiently

PURISTA also provides a convenient Command Line Interface (CLI) wizard to enhance your efficiency. This CLI allows you to create projects, add services, commands, and subscriptions effortlessly.

If you're interested in trying it out, you can follow the steps outlined in the Handbook's Quickstart guide using the CLI.

Alternatively, you can watch a small presentation for a quick overview of PURISTA.


In addition to these features, PURISTA offers several other functionalities worth mentioning:

  • A straightforward CLI for project creation and management, including services, commands, and subscriptions.
  • Built-in OpenTelemetry for tracing and observability.
  • Strict validation of input/output schemas.
  • Automatic generation of TypeScript types and OpenAPI documentation based on input/output schemas (see the zod sketch after this list).
  • Logging capabilities.
  • Abstraction of state stores for sharing and persisting business states.
  • Abstraction of config stores to centralize configurations.
  • Abstraction of secret stores, allowing you to choose the one that suits your needs (e.g., AWS Secrets Manager, Infisical, Vault).
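As a minimal sketch of the schema-first idea: schemas are written once with zod (a core dependency), and both the runtime validation and the static TypeScript types derive from them. The schema below is an illustrative assumption, not taken from the docs:

import { z } from 'zod'

// Illustrative payload schema for a command such as registerNewUser
export const registerNewUserPayloadSchema = z.object({
  email: z.string().email(),
  name: z.string().min(1),
})

// The static type is derived from the runtime schema - no duplication
export type RegisterNewUserPayload = z.infer<typeof registerNewUserPayloadSchema>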

It's important to note that while not all adapters and brokers are currently available, PURISTA is continuously growing, with plans to abstract file access, such as S3 integration, in the future.


Thank you for taking the time to read my article. I hope you found it enjoyable and not too dull. I invite you to explore my project and share your thoughts, opinions, and ideas with me. Please feel free to reach out to me directly.

Official Website: https://purista.dev

GitHub Repo: https://github.com/sebastianwessel/purista

Top comments (15)

Mike Talbot ⭐

Very interesting, we've adopted a similar approach with an in-house framework. We use an events system that can cross servers and go all the way to the front end if needed. Our modules can be loaded all into one box for the developer or spread across lambdas and multiple production servers, or (as it's Node) across multiple processes on a single server.

The events system also provides hooks, which means that, with no changes to the main code, we can implement specific client functionality or purchasable modules that alter functionality or augment or change the UI.

For me this is the best approach for growing teams; we very rarely have merge conflicts and can easily implement PR environments that behave the same as multi-lambda/server distributions.

Excited to dive into your work and see how you've gone about it.

Sebastian Wessel

Thanks for your comment.
It is very exciting and relieving to hear that this approach is not total rubbish.

I've also implemented ideas and parts from event-driven design, based on my experience on a CQRS/event-driven architecture project.
For example, each message can have an event name attached, and subscriptions can subscribe to certain events.
This should make it easy to implement an event-driven architecture.
I'm a bit in love with this event-driven idea in general.

I prefer this event-driven or message-based approach with some broker over regular microservices via HTTP for one big reason:
when working with HTTP-based microservices, it can quickly become hard to keep the separation clean and to handle errors correctly.
It starts with very simple things. To take the example from the article:
if you have a User service with some sign-up, and you need to send an email to new users, normally the User service would call the Email service.
But what happens if the email provider is unreachable and the Email service is failing? How do you handle this, and who is responsible for retries and so on? The User service, which might get re-deployed before the Email service is working correctly?
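To sketch the message-based alternative (simplified pseudocode, not the exact PURISTA API):

// Simplified sketch - not the exact PURISTA API.
// The User service only emits an event; it has no dependency on the Email service.
await context.emit('newUserRegistered', { email: newUser.email })

// The Email service subscribes to that event independently.
// If sending fails, the message stays with the broker and can be retried or
// dead-lettered there; the User service never needs to know.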

PURISTA also provides hooks, but they are more for transforming inputs & outputs, or for separating things like authentication/authorization from the business code.

Mike Talbot ⭐

So I have two thoughts on the connectivity point, and as a framework author I think you only have one real choice, but I guess I have two!

The first time I implemented such an architecture was about 5 years ago; I ended up building a lightweight message queue (it used Redis and MySQL as a job queue). I had the basic principle of jobs that would be flagged as complete or in error, and then I had much lighter-weight plugins that actually raised these jobs in response to the events before the raiser completed. Basically what I'm saying is that the function raising an event would encounter an exception if the job wasn't raised (i.e. any plugin event handler threw an exception). Basically all my API calls were then jobs (super helpful for debugging and really fast because of Redis etc).

So my job queue runner basically pulled an event from the queue and offered it up as an event; if anything could handle it, then it ran. This way some servers could be dedicated just to certain heavy lifting simply by configuring them to only run that code, leaving other servers available for the quick-turnaround stuff. In other words, all control was inverted. There was then a monitoring plugin that flagged whether jobs couldn't run (nothing could handle them for some reason), had continually failed to execute, etc.

My most recent version of this has been able to be much lighter. I use GraphQL as an endpoint and allow code to be flagged for execution on other boxes using a HOF. The HOF basically also gets used to identify the code to be loaded on Lambdas or on sub-processes. The DX is still "I call a function" for the caller, but the developer of the function can choose to run it elsewhere.

This doesn't have all the retry/job management stuff my old framework had, simply because the requirements just aren't there for it on the new project, but the core principles and DX are the same.

Mike Talbot ⭐ • Edited

A couple of years ago I wrote an article as a briefing for my team on the principles.

And here's an example of my new system's HOF. The function wrapped by makeLambda() will run on a lambda or a sub-process, but otherwise there is no "thinking" by the developer to make this happen. Under the surface this is using events to pass all of the necessary information.

export const retrieveChartData = makeLambda(async function retrieveChartData({
    plan,
    group,
    queries,
    effectiveDate,
    windowStart,
}) {
    // Work out all of the queries
    effectiveDate = new Date(effectiveDate ?? Date.now()).beginningOfDay()
    windowStart = new Date(windowStart ?? new Date(effectiveDate ?? Date.now()).addYears(-1)).beginningOfDay()
    const results = await cache.get(
        plan.split("|").at(-1),
        { planId: plan, group, windowStart, windowEnd: new Date(effectiveDate) },
        (p) => getTasksForStoredPlan({}, p)
    )
    const index = keyBy(results.all, "id")
    const output = []
    for (const query of queries) {
        const $and = []
        if (query.classification) {
            $and.push(or(...ensureArray(query.classification).map(classificationQuery)))
        }
        if (query.skill) {
            $and.push(or(...ensureArray(query.skill).map(skillQuery)))
        }
        if (query.completed !== undefined) {
            $and.push(completedQuery(query.completed))
        }
        $and.push(intervalQuery(query.minInterval, query.maxInterval))
        query.$filter = and(...$and)
        const [counts, rangeTotal, total, tasks] = calculateChartData({
            statSource: results.stats,
            from: new Date(query.from ?? windowStart),
            to: new Date(query.to ?? new Date(effectiveDate)),
            predicate: query.$filter,
        })
        const entry = { counts, total, rangeTotal, tasks: tasks.map((t) => index[t.id]) }

        const periodCalculator = times[query.timeGroup]
        const taskGroup = contents[query.contentGroup]

        if (periodCalculator || taskGroup) {
            entry.groups = calculateChartDataInPeriods({
                statSource: results.stats,
                from: new Date(query.from ?? windowStart),
                to: new Date(query.to ?? new Date(effectiveDate)),
                predicate: query.$filter,
                periodCalculator,
                taskGroup,
            })
        }
        output.push(entry)
    }

    return output
})
ccarcaci

What you're proposing is basically a heavyweight version of hexagonal architecture.

What if I want, in the future, to move away from the Purista framework?

In short, the goal is to build software quickly, focusing on business requirements, in a cost-efficient manner, without losing flexibility for the future.

In my opinion there is no framework able to solve this problem in a universal way, because, by design, you need to use a framework and deal with it. Of course, a well-designed and focused framework, as Purista looks to be, could help.

But in the end, in order to be really hands-free in making or deferring decisions and be minimal, some points should be taken into account:

  • hexagonal architecture (AKA port-adapter)
  • contract testing
  • feature-driven code organization

Those are prescriptions that are beyond any framework (and language).

WDYT?

Sebastian Wessel

Thanks for your thoughts.
It is quite interesting for me to see that you mentioned hexagonal architecture.

About my background:
I have worked on projects where the buzzwords were event-driven architecture, domain-driven design, and functional programming.
So I did not have the term "hexagonal architecture" in mind at any time, but as they all share a lot of ideas & concepts, it's totally right to describe PURISTA that way.

Regarding the question of how to move away from PURISTA in the future:
PURISTA at its core does not really provide a big set of features that you use to implement the business logic.
The real framework-specific functions are the builders, but they only orchestrate and organize things. If you look into them, they are stupidly simple; only getting the TypeScript types correct is sometimes not so straightforward.
Also, there is no fancy stuff like decorators or similar things, which would couple your code directly to some framework functionality.
It is more about defining interfaces, the structure, and how to orchestrate things. There are also only a few dependencies for the core package (zod schemas, the OpenTelemetry SDK, the pino logger).
Because of this, your business code stays as clean and plain as possible, and you can use your preferred tools & packages for implementing the business logic.
The framework provides some packages which contain ready-to-use implementations of those defined interfaces.
Moving away from PURISTA is possible in general, as the business code is isolated.
But the more interesting question is why you would do this and what the "new thing" would be, because that most likely means changing the architecture.

About your last sentence in your comment:
You're right, and we are on the same page here.

Tbh, I'm also always struggling with the word "framework", as PURISTA is more a mindset: patterns, structuring, organization, and orchestration of things.
Sometimes I fear that people expect more framework-specific features if I call it a framework. Something like: "oh, I only need to call this framework function instead of writing 50 lines of business code on my own."
You know what I mean?

ccarcaci

Also, there is no fancy stuff like decorators or similar things, which couples your code directly to some framework functionality.
It is more defining interfaces, the structure and how to orchestrate things. There are also only a few dependencies for the core package

Yes, that's hexagonal.

Moving away from PURISTA is possible in general, as the business code is isolated.
But the more interesting question is, why you would do this and what is the "new thing". Because this most likely sounds more like changing the architecture.

That's a question for the infra guys who have cost reduction as a KPI, and for engineering managers who insist "we must shift onto these new fancy technologies".

Jokes apart, the point is that writing code so it is ready to fly away from the framework lets you identify dangerous coupling points with the framework (as you mentioned) and helps in writing more decoupled and clean code. "It's more about the journey than the destination".

Something like "oh, I only need to call this framework function, instead of writing 50 lines of business code on my own"

I understand perfectly. And this ends in projects with 200 dependencies and misused frameworks. At scale, having 200 dependencies and a framework used in the wrong way ends up being unmaintainable.

Being careful in choosing the right framework, leveraging its power to address the implementation we need, and balancing library adoption is key. "Mindset, patterns, structuring, organization and orchestration of things" are fundamental bricks to guarantee long-term maintainability and performance.

Vildan Softic

I'd be interested in learning more about the testing story of PURISTA. Looking at the getting-started guide, the only reference is a jest config along with test files. These primarily serve the business-logic unit-test side of things. In a microservice, event-based architecture, though, integration testing becomes much more important.

So what I'd really like to see is a sample of the user and email services, showing how one use case like the signup can be properly integration tested across the two services.

Sebastian Wessel

Integration tests are currently not part of the framework itself.
Finding some general approach is difficult - especially because PURISTA is highly modular and stuff like the communication method (MQTT, AMQP...) is not fixed.

You can find a - tbh simple & stupid - example of how to test it in the repo.
There are some basic integration tests for the event bridges.

github.com/sebastianwessel/purista...

This will work in mono-repos, but as soon as you have multi-repos, it will not be possible this way.

But, some smart people are working on some interesting stuff:
github.com/traceloop/jest-opentele...

I haven't tried it out yet, but at first look it's promising and might be an option, as PURISTA provides OpenTelemetry out of the box.

Vildan Softic

So your example is exactly what I was looking for: creating a fake queue, ramping up a service, and performing a command. I'd really add this to your docs as an example of how to run integration tests, even if, as you said, the use case is narrowed down to specific constraints. It still gives a good understanding of how to approach it.

Sebastian Wessel

After receiving feedback on this article from Reddit, I decided to update it with the assistance of ChatGPT in order to enhance its readability and, hopefully, improve its overall comprehensibility.

b.karrington

Totally agree with your statements. Trying to make it easier by building your own framework is very brave; I wish you all the best.

Sebastian Wessel

Thanks!
Even if it fails, there are a lot of learnings and experiences, and I enjoy doing it.
So it is not a waste of time for me in any case.

Lars Rye Jeppesen • Edited

You keep referencing AWS-specific services when talking about the cloud (you use Lambda as a reference to cloud-based container structures, for example).

Is this meant to be AWS-specific, or is it also useful for other cloud vendors?

Sebastian Wessel

It’s not AWS-specific. I used AWS more as a reference/example, as AWS Lambda is a widely known synonym for function-as-a-service.
Also, AWS provides a whole stack which you can use - like AWS EventBridge.

On a high level, you only need some runtime for the functions, and something which can act as a message broker for communication.
The business implementation is decoupled from the choice of the solution you use for runtime and broker.
That’s the main idea and benefit of this framework.