
Vincenzo Chianese

Foreword and domain model

This series is about sharing some of the challenges and lessons I learned during the development of Prism, and how some functional concepts led to a better product.

Note: As of January 2021, I no longer work at Stoplight and I have no control over the current status of the code. There is a fork on my GitHub account that represents the state of the project when I left the company.


In this specific post, I will start by explaining what Prism is, detail some of its key features, and talk a little about its domain and its intended audience.

This will hopefully help you understand the technical choices I made that I will cover in the next articles.

What Is Prism

GitHub: [stoplightio/prism](https://github.com/stoplightio/prism), which turns any OpenAPI2/3 and Postman Collection file into an API server with mocking, transformations and validations.

Prism is a mock server for OpenAPI 2 (from now on OAS2), OpenAPI 3 (from now on OAS3) and Postman Collections (from now on PC).

For those of you who aren't familiar with them, OAS2/3 and PC are essentially specifications defining a standard, language-agnostic interface to (possibly RESTful) APIs.

To be a little bit more pragmatic:

```yaml
openapi: 3.0.0
paths:
  /list:
    get:
      description: "Returns a list of stuff"
      responses:
        '200':
          description: "Successful response"
```

This YAML file is an OpenAPI 3.0 document claiming that:

  1. There's an API
  2. It has a /list path
  3. It has a GET method
  4. When a GET request to the /list endpoint is made, 200 is one of the possible responses you might get, whose details (such as payload shape, returned headers) haven't been specified.

We aren't going to go into too much detail about these formats; if you're interested, you can go and read the official specifications.

Despite this simple example, all these specifications allow you (with some nuances) to describe pretty complicated scenarios, ranging from authentication, request and response validation, to webhooks, callbacks and example generation.


A mock server is nothing more than a little program that reads the description document and spins up a server that will behave in the way that the document mandates.

Here's an example of Prism starting up with a standard OAS3 document:

[Screenshot: Prism starting up with a standard OAS3 document]

Prism Peculiarities

Technical decisions and trade-offs were driven by features. Here are the most relevant ones regarding this series:

100% TypeScript

Prism is written entirely in TypeScript. Primarily because Stoplight's stack is largely based on NodeJS and TypeScript.

We are using the maximum level of strictness that TypeScript allows.
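For context, here is roughly what that looks like in the compiler options. This is an illustrative sketch of a maximally strict tsconfig.json, not necessarily Prism's exact configuration:

```jsonc
{
  "compilerOptions": {
    // "strict" turns on strictNullChecks, noImplicitAny, strictFunctionTypes, etc.
    "strict": true,
    // Extra checks that are not part of the "strict" umbrella
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true
  }
}
```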

A Lot Of Custom Software

Prism does not use any of the web frameworks you usually find on the market for web applications: you won't find Express, you won't find Hapi, nothing.

It was initially written using Fastify (at that time I was not yet working on the project); I ultimately decided to remove it in favour of a tiny wrapper on top of the regular http server that NodeJS offers.
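If you're curious what "tiny wrapper" means in practice, here is a minimal sketch of the idea (names and shapes are illustrative, not Prism's actual code):

```typescript
import { createServer, IncomingMessage } from 'http';

// Everything interesting (routing, validation, negotiation) lives behind one handler.
type MockResponse = { statusCode: number; headers?: Record<string, string>; body?: string };
type Handler = (req: IncomingMessage) => Promise<MockResponse>;

function createMockServer(handler: Handler) {
  return createServer(async (req, res) => {
    try {
      const { statusCode, headers = {}, body = '' } = await handler(req);
      res.writeHead(statusCode, headers);
      res.end(body);
    } catch {
      // Anything unexpected becomes a plain 500: no framework opinions involved.
      res.writeHead(500);
      res.end();
    }
  });
}

// createMockServer(myHandler).listen(4010);
```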

In case you are wondering, the main reason for this is that most frameworks focus on the 80% of use cases, which is totally legitimate.

Prism, on the other hand, aims for 100% compatibility with the document types it supports, and, for instance, some of them have some very… creative parameter formats that no parser on the market supports.

Another example? OpenAPI 2 and 3 use path templating, but it is not the same as the URI Templating specified in RFC 6570. For this reason, a custom parser and extractor had to be written.
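To give an idea, here is a hypothetical sketch of what matching an OpenAPI path template and extracting its variables can look like (Prism's real parser handles far more edge cases):

```typescript
// Turn '/pets/{petId}' into a regular expression and extract the variables.
function matchPathTemplate(template: string, path: string): Record<string, string> | null {
  const names: string[] = [];
  const pattern = template.replace(/\{([^}]+)\}/g, (_match, name: string) => {
    names.push(name);
    return '([^/]+)'; // a template parameter matches a single path segment
  });
  const result = new RegExp(`^${pattern}$`).exec(path);
  if (result === null) return null;
  return Object.fromEntries(names.map((name, i) => [name, decodeURIComponent(result[i + 1])]));
}

// matchPathTemplate('/pets/{petId}', '/pets/42') -> { petId: '42' }
// matchPathTemplate('/pets/{petId}', '/pets')    -> null (no match, a 404 candidate)
```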

This specific case, together with others that required special code to be written, led us to gradually dismantle and neglect various Fastify features, until I realised that we were not using it for anything but listening on the TCP port; on the contrary, we were just fighting it, because it was too opinionated on certain matters, such as error handling.

You can find more about the motivations in the relevant GitHub issue.

Custom Negotiator

Prism contains a custom-made negotiator: the part of the software that, given an incoming HTTP request, its validation results (headers, body, security) and the target API specification document, returns the most appropriate response definition, which the generator can then use to return a response instance to the client.

The negotiator itself is kind of complicated, but I think we've done a good job of documenting its decision process:

[Diagram: Prism's negotiation process]

The diagram is also pretty much reflected in the code's division into functions.
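To make the idea concrete, here is a heavily simplified sketch of the negotiation step. The types and the preference order are illustrative stand-ins, not Prism's actual signatures:

```typescript
interface ValidationIssue { severity: 'error' | 'warning'; message: string }

interface ResponseDefinition {
  code: string;       // e.g. '200', '422', 'default'
  mediaType?: string; // e.g. 'application/json'
  schema?: unknown;   // the JSON Schema the generator will use
}

type NegotiationResult =
  | { type: 'ok'; response: ResponseDefinition }
  | { type: 'error'; status: number; message: string };

function negotiate(validations: ValidationIssue[], responses: ResponseDefinition[]): NegotiationResult {
  const invalidInput = validations.some(v => v.severity === 'error');
  // On invalid input prefer an explicit 422, then 400, then the 'default' response;
  // on valid input prefer a success response.
  const preferred = invalidInput ? ['422', '400', 'default'] : ['200', '201', 'default'];
  for (const code of preferred) {
    const found = responses.find(r => r.code === code);
    if (found !== undefined) return { type: 'ok', response: found };
  }
  return { type: 'error', status: 500, message: 'No matching response definition found' };
}
```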

Input, output and security validation

One of Prism's key features is its extensive validation.

Based on the provided API description document, Prism validates different parts of the incoming HTTP request, starting with deserialising the body according to the content-type header and then checking the resulting object against the provided JSON Schema (if any).

The same goes for the query parameters (because yes, OpenAPI defines encoding for query parameters as well), the headers and ultimately the security requirements.
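As a taste of what those query encodings entail, here is a hedged sketch of one of the simplest cases: OpenAPI 3's form style with explode set to false, where an array is serialised as comma-separated values (illustrative only; Prism supports the full set of styles):

```typescript
// Deserialise ?tags=cat,dog into ['cat', 'dog'] per style: form, explode: false.
function deserializeFormArray(query: URLSearchParams, name: string): string[] | undefined {
  const raw = query.get(name);
  return raw === null ? undefined : raw.split(',');
}

// deserializeFormArray(new URLSearchParams('tags=cat,dog'), 'tags') -> ['cat', 'dog']
```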

The input validation result influences the behaviour of the negotiator, as well as that of the proxy.

It turns out that validation is a very complicated part of Prism and, although we have reworked it several times, we still haven't got it entirely right.

Prism Request Flow

The journey of an HTTP request, from hitting your application server to returning a response to the client, is an involved one.

We often do not think about it, because web frameworks usually do a very good job of abstracting away all the complexity.

Since Prism is not using any framework, I essentially had the opportunity to reimplement almost the whole pipeline, and I started making observations.

Here's what Prism is doing when a request is coming in:

  • Routing
    • Path Match with templating support, where we also extract the variables from the path, returning 404 in case it fails
    • Method Match, returning 405 in case it fails
    • Server Validation, which is checking the HOST header of the request against the servers listed in the specification document, returning 404 in case it fails
  • Input deserialisation/validation
    • The path parameters get validated according to what is stated in the specification file (whether a parameter is required, whether it's a number or a string), returning 422/400/default in case of a validation failure
    • The query string is deserialised following the rules stated in the specification file, returning 422/400/default in case there is a deserialisation failure
    • Headers get validated against the JSON-esque format that OAS2/3 defines; we convert it to a JSON Schema draft-07 document and run ajv on it, returning 422/400/default in case there is a validation failure.
    • The body gets validated against the JSON-esque format that OAS2/3 defines in the same way; we convert it to a JSON Schema draft-07 document and run ajv on it (see the sketch after this list), returning 422/400/default in case there is a validation failure.
    • Depending on the security requirements specified in the routed operation, Prism checks for the presence of certain headers and, when possible, also tries to validate that their content respects the general format required by such security schemes, returning 401/400/default
  • Negotiator/Proxy
    • The negotiator kicks in and looks for an appropriate response definition based on the validation result, the requested content type, the accepted media types and so on. It returns 2XX/406/500/User Defined Status code depending on the found response definition.
    • If the proxy is on, Prism will skip the negotiator, forward the request to the upstream server and take note of the returned response.
  • Output violation and serialisation
    • Response headers, whether they're generated from a response definition, extracted from an example or returned from a proxy request, get validated against the response definition, returning 500 (failing the request or attaching a violation header) in case they do not match
    • The response body, whether it's generated from a response definition, extracted from an example or returned from a proxy request, gets validated against the response definition, returning 500 (failing the request or attaching a violation header) in case it does not match.
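Since several of those steps boil down to "convert to JSON Schema draft-07 and run ajv", here is a hedged sketch of that mechanism. The schema and error formatting are illustrative, not Prism's actual code:

```typescript
import Ajv from 'ajv';

const ajv = new Ajv({ allErrors: true });

// Pretend this was derived from the OAS2/3 body definition of the routed operation.
const bodySchema = {
  type: 'object',
  required: ['name'],
  properties: { name: { type: 'string' } },
};

function validateBody(body: unknown): string[] {
  const validate = ajv.compile(bodySchema);
  if (validate(body)) return [];
  // Each ajv error becomes one violation, which can then drive a 422 response
  // (or a violation header, in proxy mode).
  return (validate.errors ?? []).map(e => `${e.instancePath || '/'} ${e.message ?? ''}`.trim());
}

// validateBody({})            -> ["/ must have required property 'name'"]
// validateBody({ name: 'x' }) -> []
```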

Here comes the first key observation: almost every step Prism executes might fail, and each failure has a specific semantic meaning and a precise status code associated with it.

Last time I checked, out of 32 "exit paths", 30 were errors and only two were a "successfully returned response". Doing some math:

2/32 = 1/16 ≈ 0.06

This fundamentally says that, if exit path occurrences were evenly distributed, only 6% of requests would be successful.

Are the exit path occurrences evenly distributed? Although I do not have a definite answer to that (hopefully we will get one, since we're gathering statistics in the hosted version of Prism), we do have some empirical evidence to keep in mind, which I'll talk about in the next section.

Prism User

Prism is a developer tool and, although it can be used as a runtime component, it is primarily used by API designers and client developers during the development phase of the API.

This is a very important detail, since the typical developer using Prism has totally different aims from a regular client application developer. The following table summarises some of the differences that I've identified:

| Client Application Developer | API Developer |
| --- | --- |
| Clear mission in mind | No idea of what they're doing |
| Probably read the API documentation | Experimental phase |
| Likely sending valid data | Likely sending garbage |
| Aims for success | Changes code and spec every second |

When you're developing an application, you're likely striving for success — and so you're going to create all the requests you need with likely valid data, likely following the flow indicated in the documentation.

On the other hand, when mocking an API with Prism, you're deep in the design phase. You'll probably tweak the document multiple times per minute (and Prism will hot-reload the document). You'll likely send invalid data all the time because you just forgot what you wrote in the document. You'll try weird combinations of stuff that is never supposed to happen.

We stated a few paragraphs back that, if exit path occurrences were evenly distributed, only 6% of requests would be successful.

Now that we have clarified a little who Prism's typical user is, it's fair to say that the exit path occurrences are clearly not evenly distributed and, although we can't give a precise number, we can claim that the distribution leans heavily towards the error side.

Essentially, when you send a request to Prism, most likely you'll get an error as a response.

After thinking a lot about this, I wrote down the sentence that became the key factor in radically changing Prism's architecture:

> Prism's job is to return errors.

In the next article, we'll talk about the abstraction used to model these use cases correctly, and how I stumbled upon it accidentally.
