José Haro Peralta

How bad models ruin an API (or why design-first is the way to go)

Documenting API schemas is important

OpenAPI allows you to define the schema of the data that’s exchanged over the API. You can define schemas for your query parameters, your request payloads, your response payloads, and more. Schemas are defined using a subset of JSON Schema syntax, and typically a schema is an object with a collection of properties. The following is an example of a simple schema in valid OpenAPI syntax:

Person:
  type: object
  properties:
    name:
      type: string 
    age: 
      type: integer

The point of having schemas in an API specification is to give clients an expectation of what the data that they’ll receive from the server will look like. In that sense, schemas are super important, since they allow client developers to build their integrations correctly.

Despite their usefulness, I often come across APIs that don’t provide schemas for their response payloads. These are typically APIs for internal consumption, usually an internal product of the company. In such cases, it seems developers think it’s an unnecessary overhead to write good documentation for the API. After all, it’s them who’re building the API server and the client at the same time. They talk to each other and they agree on what the API payloads look like. So what’s the point of documentation?

The point of documentation is that at some point the composition of the team is going to change. At some point, new team members will join, and sometime in the future the original team members probably won’t be around any longer. In that case, having documentation is extremely useful if not indispensable. Without documentation, the only way to get an idea of what the API response payloads look like is by interacting with the server and harvesting responses.

But not all schemas are the same

In some cases, we do have response schemas documented in the API, but with very poor models. I recently came across a response payload with a schema similar to the following definition:

Data:
  type: object
  properties:
    data:
      type: array
      items:
        type: array
        items:
          oneOf:
            - type: string
            - type: number
    columns:
      type: array
      items:
        type: string

When looking at an actual response payload, the data served by the API looked like this (values changed for this example):

{
  "columns": [
    "date",
    "engine_type",
    "brand",
    "model",
    "value"
  ],
  "data": [
    [
      "2021-09-01",
      "electric",
      "01",
      "Tesla",
      10
    ],
    [
      "2021-09-01",
      "combustion",
      "01",
      "Ford",
      0
    ]
  ]
}

I call this a schemaless schema, or a free-form schema. Schemas like this are useless: it’s not obvious how the data is structured. As it turned out, the data served by the API was meant to be represented as a table. The columns field tells us the column names of the table, and the data field contains the values to populate for each column, in strict positional order.

See what the problem is here? There are several problems, actually. To begin with, we’re relying on positional order to determine how each value maps to a property. That’s a lot of knowledge the client application has to hold and manage. It’s also problematic for the backend: the slightest change in the code could cause the values to come out in the wrong order, or produce arrays with missing elements, and everything would break or give us an erroneous representation of the data.
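
To appreciate the burden this places on the client, here's a minimal sketch, in Python with hypothetical variable names, of the reconstruction work every consumer of this payload has to do:

# Reconstruct records from the positional "columns"/"data" payload.
# `payload` stands for the parsed JSON response shown above.
payload = {
    "columns": ["date", "engine_type", "brand", "model", "value"],
    "data": [
        ["2021-09-01", "electric", "01", "Tesla", 10],
        ["2021-09-01", "combustion", "01", "Ford", 0],
    ],
}

# Every client has to zip each row with the column names and trust that
# the server always sends the values in exactly this order.
records = [dict(zip(payload["columns"], row)) for row in payload["data"]]

print(records[0])  # {'date': '2021-09-01', 'engine_type': 'electric', ...}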

Schemaless schemas make testing difficult. Tools like Dredd and Schemathesis rely on your API documentation to generate tests and validate your API responses. A collection of free-form arrays like the above model will pass nearly every test, even if the length of the arrays or their contents are wrong. Schemaless schemas are also useless for API mocking, which is a fundamental part of building reliable API integrations.
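
For reference, this is roughly what a property-based test against the spec looks like with Schemathesis. This is only a sketch: it assumes a spec file called openapi.yaml, a server running locally on port 8000, and a pre-4.0 Schemathesis API (the loader functions have changed between versions):

# test_api.py -- run with pytest
import schemathesis

# Load the OpenAPI spec; Schemathesis generates test cases from its schemas.
schema = schemathesis.from_path("openapi.yaml", base_url="http://localhost:8000")

@schema.parametrize()
def test_api(case):
    # Sends the generated request and validates the response against the
    # documented schema. Against a free-form schema, almost anything passes.
    case.call_and_validate()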

The above schema can be improved with a few simple changes. Instead of relying on positional order in an array, we define an array of objects with explicit properties and values:

Data:
  type: array
  items:
    type: object
    properties:
      date:
        type: string
        format: date
      engine_type:
        type: string
      brand:
        type: string
      model:
        type: string
      value:
        type: number

The data from this schema might look like this:

[
  {
    "date": "2021-09-01",
    "engine_type": "electric",
    "brand": "Tesla",
    "model": "01",
    "value": 10
  },
  {
    "date": "2021-09-01",
    "engine_type": "combustion",
    "brand": "Ford",
    "model": "02",
    "value": 100
  }
]

So much more understandable, isn’t it? Designing good schemas is actually not that difficult; you just need to spend some time working on them. If that’s all it takes, how come we end up with bad schemas so often?

Code-first is a risky strategy

This situation is actually pretty common in APIs that have been built with a code-first approach, without much design consideration. That was indeed the case with the API mentioned above. I could give many more examples of bad API models resulting from the lack of a design stage (i.e. from going code-first). On one occasion, I had to work with an API in which date fields looked like this:

Date:
  type: object
  properties:
    day:
      type: integer
    month:
      type: integer
    year:
      type: integer

Apparently, the server and the client development teams couldn’t agree on a date format, so they opted for an object representation of the date. The funny thing about this model is that it doesn’t even have a timezone field! In any case, date fields shouldn’t be a matter of debate. We have ISO standards. All languages and frameworks can handle ISO dates, and OpenAPI supports date and date-time as string formats. If only the developers had spent one minute looking at the OpenAPI documentation...
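
To make the point concrete, here's a minimal sketch of what handling ISO 8601 dates looks like in Python; the standard library does all the work, and most other languages offer the same out of the box:

from datetime import date, datetime, timezone

# Parsing ISO 8601 values received from an API.
d = date.fromisoformat("2021-09-01")
ts = datetime.fromisoformat("2021-09-01T10:30:00+00:00")

# Serializing dates for a response payload.
print(d.isoformat())                           # 2021-09-01
print(datetime.now(timezone.utc).isoformat())  # e.g. 2024-01-01T12:00:00+00:00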

On multiple occasions, I’ve come across properties defined as an array of just one element. Something along the lines of the following example:

Entity:
  type: object
  properties:
    type:
      type: array
      items:
        type: string
      minItems: 1
      maxItems: 1

I mean, why? Why would you even do that? The property name is singular, indicating it holds just one element, and arrays are for variable numbers of elements. Where in the world would you use an array for just one element? The reason, again, was that the API had been built with a code-first approach. The part of the application that consumed this bit of data required input in the form of an array, and somewhere in the application the code would blow up if you gave it a list with more than one element.

What is missing in this case is a separation of concerns between the business layer and the API layer, and a data transfer object (DTO) that translates the input from the API into the format accepted by the code internals. By skipping all of that, we’ve achieved an excellent degree of tight coupling between low-level application code and the API. The consequences of this tight coupling are disastrous, since the slightest change in the server code can break the API and/or the client application in unexpected ways. The best part? All of this could’ve been avoided by spending five minutes designing the interface before jumping straight into the implementation.
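
As an illustration, here's a minimal sketch of what such a DTO could look like. The names (EntityPayload, to_internal_format, the "vehicle" value) are hypothetical; the point is that the API exposes a clean, singular field, and only the translation layer knows about the shape the internals require:

from dataclasses import dataclass

@dataclass
class EntityPayload:
    """DTO for the API layer: the field is singular, as the interface exposes it."""
    type: str

    def to_internal_format(self) -> dict:
        # The internals insist on a one-element list, so the translation
        # happens here instead of leaking into the API schema.
        return {"type": [self.type]}

# The client sends {"type": "vehicle"}, the API layer builds the DTO,
# and only this boundary code knows about the internal list representation.
payload = EntityPayload(type="vehicle")
print(payload.to_internal_format())  # {'type': ['vehicle']}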

Are we doomed?

Are we doomed? No, we’re not doomed. The moral of the story is that to build a good API, you need to spend some time designing it before jumping into the implementation. That doesn’t mean you can’t code from the beginning. If you don’t like writing raw YAML or JSON, you can use an API documentation-generation framework to drive the design of your API from your code. Just make sure you don’t commit that code and release it as the actual API implementation before the design has been agreed upon.

My recommendation is this: once the design is agreed upon, use your documentation-generation framework to render the OpenAPI specification, make any changes to it if required, and commit the file. Your OpenAPI specification file becomes the source of truth for your API. From that moment on, drive changes to your API from that file, and test your API implementation against that file. Don’t use the dynamically generated documentation as the source of truth. Dynamically generated documentation is just a reflection of the backend code, which is not necessarily correct.


If you liked this article, you’ll also like my book Microservice APIs. The book is conceived as a one-stop guide for learning how to design and build microservices and how to drive their integration with APIs.

You can download two chapters of the book for free from this link.

And if you’re interested, you can get a 40% discount from the book’s price by using the following code on Manning’s website: slperalta.

In addition to my blog, I also upload video tutorials to YouTube.

Feel free to connect with me on LinkedIn!
