Héctor Ramón
Stop coupling logic with your HTTP layer!

As a developer, there is one issue that I tend to notice in many codebases: logic coupled with a specific communication layer.

This issue is especially common in the controllers of most web backends, where application logic is usually intertwined with access to the HTTP layer. It is not surprising. Many documentation examples of some of the most well-known frameworks (Rails, Django, Flask, Phoenix, etc.) have this issue.

But what do I mean when I say "logic intertwined with access to the HTTP layer"? Let's take a look at how a simple login endpoint could be implemented in Ruby on Rails:

class UsersController < ApplicationController
  def login
    user = User.find_by(email: params[:email])

    unless user&.password_match?(params[:password])
      return error_response(:invalid_credentials)
    end

    json_response(auth_token: Tokens.auth(user))
  end
end

Simple, right? But... How do we use this UsersController class? How do we test our login logic? Who populates params? What data does it need? The public contract of this code is not clear at all! We have a class with a method that takes no arguments and uses internal (hidden) state to do its job.

When code cannot be easily used in isolation, it becomes harder to understand, test and debug. We should try to avoid it as much as possible.

The code above is tying the logic of the function to some specific communication layer (HTTP, in this case). Using params and json_response forces us to be aware of this layer every time that we want to use this login method.

We can isolate the logic of our endpoint:

ApiLogin = ->(email:, password:) do
  user = User.find_by(email: email)
  return { data: nil, errors: [:invalid_credentials] } unless user&.password_match?(password)

  { data: { auth_token: Tokens.auth(user) }, errors: [] }
end

This is now a communication-agnostic function. Its input (email and password) and its output (a simple hash with data and errors) are explicit. Therefore, we can test it very easily:

ApiLogin.call(email: "some@email.com", password: "12345678")
# => { data: { auth_token: "..." }, errors: [] }
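Because the contract is explicit, a plain unit test needs no HTTP stack at all. Here is a self-contained sketch: the in-memory `User` and `Tokens` below are hypothetical stand-ins for the real models, just detailed enough to run `ApiLogin`.

```ruby
# Hypothetical in-memory doubles for User and Tokens (not the real
# Rails models) -- enough to exercise ApiLogin without any HTTP layer.
User = Struct.new(:email, :password, keyword_init: true) do
  def password_match?(candidate)
    password == candidate
  end

  def self.find_by(email:)
    all.find { |u| u.email == email }
  end

  def self.all
    @all ||= [new(email: "some@email.com", password: "12345678")]
  end
end

module Tokens
  def self.auth(user)
    "token-for-#{user.email}"
  end
end

ApiLogin = ->(email:, password:) do
  user = User.find_by(email: email)
  return { data: nil, errors: [:invalid_credentials] } unless user&.password_match?(password)

  { data: { auth_token: Tokens.auth(user) }, errors: [] }
end

ok  = ApiLogin.call(email: "some@email.com", password: "12345678")
bad = ApiLogin.call(email: "some@email.com", password: "nope")
```

Both the happy path and the failure path are exercised with plain values in, plain hashes out.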

Now, we just need to connect this function to our communication layer. We can use an ApplicationController, like before:

class UsersController < ApplicationController
  def login
    response = ApiLogin.call(email: params[:email], password: params[:password])

    if response.fetch(:errors).empty?
      json_response(response.fetch(:data))
    else
      error_response(response.fetch(:errors))
    end
  end
end

The key point here is that this wiring does not contain any relevant logic that needs specific testing because it can be easily abstracted! Most of the time, connecting a function to an HTTP layer consists of the same steps:

  1. Obtain the input data from the request.
  2. Call a function with the obtained data.
  3. Transform the output of the function into an HTTP response.
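These three steps can be written once and reused for every endpoint. The sketch below assumes a minimal request object and a Rack-like `[status, body]` response pair; the `Request`, `expose`, and `Greet` names are hypothetical.

```ruby
# Minimal request stand-in (hypothetical; a real app would use Rack or Rails).
Request = Struct.new(:params, keyword_init: true)

# The three wiring steps, abstracted once.
def expose(request, fn, **extractors)
  input  = extractors.transform_values { |key| request.params[key] } # 1. obtain the input data
  result = fn.call(**input)                                          # 2. call the function
  if result.fetch(:errors).empty?                                    # 3. transform output into a response
    [200, result.fetch(:data)]
  else
    [422, { errors: result.fetch(:errors) }]
  end
end

# Any function honoring the { data:, errors: } contract can be exposed this way.
Greet = ->(name:) { { data: { message: "Hello, #{name}!" }, errors: [] } }

status, body = expose(Request.new(params: { "name" => "Ada" }), Greet, name: "name")
```

The endpoint-specific part shrinks to a mapping from request keys to function arguments.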

With this in mind, we could write a Ruby DSL to produce custom controllers, like this:

UsersController = Http.controller do
  action :login, ApiLogin, email: param(:email), password: param(:password)
end

And extend it as needed. For instance, we could implement header and json methods, analogous to param.
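Such a DSL does not need much machinery. The following is one hypothetical way `Http.controller`, `action`, and `param` could fit together; none of this is a real library, and `ApiLogin` is stubbed so the sketch runs standalone.

```ruby
module Http
  Param = Struct.new(:key) # marks "read this key from the request params"

  class Builder
    attr_reader :actions

    def initialize
      @actions = {}
    end

    def param(key)
      Param.new(key)
    end

    # Wire a function to an action name: map params, call, return the result.
    def action(name, fn, **mapping)
      @actions[name] = lambda do |params|
        input = mapping.transform_values { |p| params[p.key] }
        fn.call(**input)
      end
    end
  end

  def self.controller(&block)
    builder = Builder.new
    builder.instance_eval(&block)
    builder.actions
  end
end

# Stubbed ApiLogin with the same { data:, errors: } contract.
ApiLogin = ->(email:, password:) do
  return { data: nil, errors: [:invalid_credentials] } unless password == "12345678"

  { data: { auth_token: "..." }, errors: [] }
end

UsersController = Http.controller do
  action :login, ApiLogin, email: param(:email), password: param(:password)
end

result = UsersController[:login].call(email: "some@email.com", password: "12345678")
```

A real version would plug `actions` into the router and parse the request, but the shape of the DSL stays this small.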

There are many ways to expose a function over HTTP. I personally like to build a pipeline of transformations where the input is the HTTP request and the output is the HTTP response, especially in functional languages with powerful type systems. This pipeline is defined safely and accurately step by step with a flexible API. I will write about how I did this in Haskell soon!

In conclusion, once we split logic from communication, we obtain some interesting benefits:

  • Our application logic becomes easier to use, understand and test.
  • The communication details can be centralized, changed and tested using a single-responsibility abstraction.
  • We can implement a new communication layer (a CLI, a different HTTP layer) to expose the same logic.
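To illustrate that last point, the same `{ data:, errors: }` contract can be wired to a CLI with the same three steps. Everything below (the stubbed `ApiLogin`, the exit codes, the `cli_login` name) is a hypothetical sketch.

```ruby
# The same communication-agnostic function, stubbed for the example.
ApiLogin = ->(email:, password:) do
  return { data: nil, errors: [:invalid_credentials] } unless password == "12345678"

  { data: { auth_token: "..." }, errors: [] }
end

# A CLI is just another thin wiring layer: read argv, call, report.
def cli_login(argv)
  email, password = argv
  result = ApiLogin.call(email: email, password: password)

  if result.fetch(:errors).empty?
    [0, result.fetch(:data).fetch(:auth_token)] # exit code 0, print the token
  else
    [1, result.fetch(:errors).join(", ")]       # exit code 1, print the errors
  end
end

code, output = cli_login(["some@email.com", "12345678"])
```

The logic never learns whether it was called from HTTP, a terminal, or a test.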

Let me know about your thoughts in the comments! Why do you think it's such a common issue? Are you already doing something similar?

Top comments (4)

Lorenzo Arribas

Great article!

In my case, I find it very useful to have a second line of public methods.

Imagine something like reacting to an article. I'd have:

  • A basic react_to_article(user, article, reaction) that does exactly that, provided the user and article exist.
  • An http_react_to_article(auth_token, article, reaction) that takes care of token-based authentication before calling the first one.
  • A cli_react_to_article(ssh_key, article, reaction) that uses public SSH key as an authentication method.
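A hedged sketch of that layering (all names, the token check, and the return shapes are made up for illustration):

```ruby
# Pure app behavior: assumes the user and article exist.
def react_to_article(user, article, reaction)
  { user: user, article: article, reaction: reaction }
end

# HTTP-flavored wrapper: token-based authentication first
# (the token check here is a placeholder).
def http_react_to_article(auth_token, article, reaction)
  user = auth_token == "valid-token" ? "alice" : nil
  return { errors: [:unauthorized] } unless user

  react_to_article(user, article, reaction)
end
```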

My point is, some things are related to the communications protocol and some others are pure app behavior, and having the latter isolated allows you to:

  • Have an admin panel with a completely different approach to permissions (based on the role at the company for instance).
  • Have development tools that create features relying on a public API (that shouldn't change).

Those are my 2 cents on this.

Burdette Lamar

I use the same principle, but in the other direction. Instead of avoiding the coupling so that I can build a CLI, I build the CLI so that I cannot couple.

For a desktop app with a GUI, for example, I add a CLI. Both interfaces can work only if the logic is in neither. That keeps me "honest."

Lorenzo Arribas

That's a good exercise. CLIs and HTTP are different enough that it forces you to "stay abstract".

Lorenzo Arribas

Why do you think it's such a common issue?

When you couple HTTP and logic, you're trading off maintainability for easy initial development.

Where I get lost is why popular frameworks would recommend it as a best practice.

I think that is a bad tradeoff for anything beyond a small prototype or a proof of concept.