Juraj Malenica

Should you explicitly define APIs when using Microservices?

If you have:

  • a project that is running tens, or even hundreds of microservices,
  • with multiple teams handling different parts,
  • all being connected with Kafka,

how do you ensure other developers can easily talk to any microservice?

Pros and Cons of an API

I'm not completely sure that explicitly defining an API is the best solution for this situation. In the company where I work, we do define them, and there are mixed feelings.

👍 On the one hand, explicitly stating what a microservice expects as input seems logical - other developers then only need to read that, without worrying about the underlying implementation.
And if someone changes their API, the change is clearly visible in that API's changelog.
Also, when creating a new feature, developers can start by defining the APIs, which helps a lot in the thought process.

👎 On the other hand, making an API easily readable is hard, and almost impossible for scenarios with complex data structures.
It also takes a lot of time to write and maintain.
And if the developer who wrote it sits nearby, it's often much easier to just ask them for help.

Approaches to defining an API

I'm going to cover two approaches to defining an API.

To keep things simple, we'll write an API for a microservice that can add or subtract N numbers.

For our example, all we need are two fields:

  • action which can be ADD/SUBTRACT,
  • arguments which is a list of numbers.

The message payload into the microservice will look like this:

{
  "action": "ADD",
  "arguments": [4, 8.3, 11]
}
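To pin down the semantics (especially what subtracting N numbers should mean), here's a minimal, hypothetical handler for that payload - the real service would, of course, consume it from Kafka:

def handle_message(payload: dict) -> float:
    # ADD sums all arguments; SUBTRACT subtracts the rest from the first.
    first, *rest = payload["arguments"]
    if payload["action"] == "ADD":
        return first + sum(rest)
    if payload["action"] == "SUBTRACT":
        return first - sum(rest)
    raise ValueError(f"Unsupported action: {payload['action']}")


handle_message({"action": "ADD", "arguments": [4, 8.3, 11]})  # -> 23.3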

Approach 1: JsonSchema

This is the approach I have the most experience with. JsonSchema is a vocabulary that allows you to annotate and validate JSON documents (our messages from Kafka).

Everything in the schema is pretty self-explanatory, so I won't go into any details.

{
  "type": "object",
  "$schema": "http://json-schema.org/draft-07/schema#",
  "required": [
    "action",
    "arguments"
  ],
  "properties": {
    "action": {
      "type": "string",
      "enum": ["ADD", "SUBTRACT"]
    },
    "arguments": {
      "type": "array",
      "items": {
        "type": "number"
      }
    }
  },
  "additionalProperties": false
}
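A quick way to exercise the schema is the jsonschema Python package - here's a minimal sketch, assuming schema holds the definition above:

from jsonschema import ValidationError, validate

message = {"action": "ADD", "arguments": [4, 8.3, 11]}

try:
    validate(instance=message, schema=schema)
except ValidationError as error:
    print(f"Invalid message: {error.message}")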

I can only tell you that things get pretty crazy when there are more complex data structures (> 700 lines).

To cope with the low readability and to test our schema, we oftentimes write examples for different cases. This way, other developers can look at them instead and reference the real schema for fine details.

P.S. An awesome tool for debugging schemas is this JSON Schema Validator. I've been using it for months now.

Approach 2: Python dataclasses

For this example, we'll stick with Python, but I guess the approach is applicable to most object-oriented languages.

Basically, Python introduced dataclasses in version 3.7, and they make defining data structures easier and cleaner.

from dataclasses import dataclass
from enum import Enum
from typing import List


class ActionOptions(Enum):
    ADD = 'ADD'
    SUBTRACT = 'SUBTRACT'


@dataclass
class Message:
    action: ActionOptions
    arguments: List[float]

Then, using the dacite package, we can load our message into this structure:

from dacite import from_dict, Config


data = {"action": "ADD", "arguments": [4, 8.3, 11]}
message = from_dict(
    Message,
    data,
    # The type hook casts the raw string into the ActionOptions enum;
    # strict=True rejects any keys not defined on Message.
    config=Config(type_hooks={ActionOptions: ActionOptions}, strict=True)
)

dacite allows us to import more complex data structures and raises an exception when the data doesn't match our structure.
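For instance, all of its validation errors share a common base class, so a consumer can reject bad messages in one place (a minimal sketch):

from dacite import Config, DaciteError, from_dict

try:
    # The payload is missing the required "arguments" field, so dacite raises.
    message = from_dict(
        Message,
        {"action": "ADD"},
        config=Config(type_hooks={ActionOptions: ActionOptions}, strict=True),
    )
except DaciteError as error:
    # Subclasses include MissingValueError, WrongTypeError and UnexpectedDataError.
    print(f"Rejected message: {error}")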

I like that dataclasses can carry methods that fit their context. That makes it easy to write custom validation which can't be easily expressed otherwise.
It is also arguably easier for a developer to read, but I don't know for sure yet.
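As a hypothetical sketch of that idea, Message could carry its own validation and behaviour (ActionOptions as defined above):

from dataclasses import dataclass
from typing import List


@dataclass
class Message:
    action: ActionOptions
    arguments: List[float]

    def __post_init__(self):
        # Custom validation that a schema can't express as naturally.
        if not self.arguments:
            raise ValueError("arguments must not be empty")

    def execute(self) -> float:
        # Behaviour that fits the message's context.
        first, *rest = self.arguments
        offset = sum(rest)
        return first + offset if self.action is ActionOptions.ADD else first - offset

Since dacite builds the dataclass through its normal constructor, __post_init__ runs on every from_dict call, so this validation comes for free.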

The downside is that the classes have to be defined bottom-up so Python can reference everything correctly (although that can probably be bypassed). Also, dacite is still in active development and needs some improvements.

What to choose?

I'm really looking forward to trying the second approach and battle-testing it. As time passes by, I'm seeing JsonSchema as more of a pain point than a relief. Although, to be honest, maybe I just don't appreciate it enough 🙂

What I want to hear is what you think:

  • Do you write APIs for your microservices?
  • Which approaches work and which don't?
  • Do you know other reasons why the two approaches I listed could be awesome/awful?

Top comments (9)

Stephen Williams

I add boilerplate, definitions and conversions, places that need to know details, and documentation as needed. For instance, many past systems required that each layer, such as a middleware router or intermediate service, fully parse, process, and produce the full structure for RPC calls and messages. This is extremely fragile, difficult to maintain, and introduces a lot of unnecessary waste. We generally know this now as we pass JSON / XML objects around as parameters. I see APIs similarly. It is very good to define what APIs exist, after carefully designing the conceptual structure and baseline parameters (JWT token, modes, etc.). Often, the details of the API should not be fully baked into an API definition: Only the client and service app code need all the details.

Graph APIs illustrate this minimalist principle well: The basic API details are described, but the main parameters and results are potentially complex and somewhat fluid as new features & fields are added. The best documentation is a lot of example code with views of current results. That seems dissatisfying, but in practice, it is how a lot of things are developed.

Whenever possible, you should design with forward and backward compatibility, avoiding versioning. APIs should be able to evolve without breaking past usage. Sometimes mistakes are made and you do need versioning; usually, this is best done by updating everything as quickly as possible, retiring the old API, like a schema change in a database.

developers.facebook.com/docs/graph...

Thorsten Hirsch

It depends. I think it's best not to mess with Conway's Law, so...

  • define explicit APIs (and version them) when the service is available to other teams
  • don't, if your team owns the code of all consumers

I think this distinction should be made explicit (e.g. via naming conventions), and it should be made very early, maybe as soon as the microservice is created. You might argue that microservices start with few consumers, maybe a single consumer within your team, but then become popular and are consumed by many others. I don't think that's the case in a corporate environment - there you can plan very well who the possible consumers of your microservice will be.

If in doubt - start with a private microservice. Then, when another team has pretty similar requirements, build a new public microservice that satisfies both teams' requirements. Let the other team use the new microservice first. Fix errors and add missing features. Finally, when the other team is happy, migrate your own code from the private to the public microservice and shut down the now obsolete private microservice.

Martin Wallgren

If you are looking for ways to load JSON into classes with validation, I highly recommend pydantic. It predates dataclasses, but supports them too nowadays.
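For reference, a minimal sketch of the article's Message model in pydantic (not an official snippet; note that pydantic coerces the raw string into the enum and ints into floats):

from enum import Enum
from typing import List

from pydantic import BaseModel, ValidationError


class ActionOptions(Enum):
    ADD = 'ADD'
    SUBTRACT = 'SUBTRACT'


class Message(BaseModel):
    action: ActionOptions
    arguments: List[float]


try:
    message = Message(**{"action": "ADD", "arguments": [4, 8.3, 11]})
except ValidationError as error:
    print(f"Rejected message: {error}")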

Juraj Malenica

I finally got a chance to look at it, and it's absolutely amazing. I did just a small demo, but it solves a lot of things that dacite just can't handle properly. Thanks :)

Jason Stedman

If your project is complex enough to have tens of microservices, the APIs should definitely be documented somehow.

Also, since one of the key benefits of microservices is being language agnostic so the right tool can always be chosen, that documentation should be language agnostic too (e.g. Swagger).

Complete API documentation, when possible, also helps each collaborating microservice use TDD when updating for a new version of your API - as unit tests, rather than depending on some kind of hosted test instance.

whowery

Looks like Swagger would be a good way to document APIs; it includes examples and a way to test the API calls.

Juraj Malenica

I'm not sure Swagger supports async communication like this. Does it?

Ben Dayan

I believe the OpenAPI spec does allow for async communication as well as callbacks - swagger.io/docs/specification/call...

whowery

It's definitely built with microservices in mind:
swagger.io/blog/api-strategy/micro...