You have decided to break your old and outdated monolith application into a more modern and efficient microservices architecture. One of the first problems that arises comes from the fact that in a distributed architecture your small pieces aren't tied to each other anymore: they have to talk to each other through a communication pattern called REST, and they no longer share in-memory data. It just so happens that one of the most important pieces of data in your whole application is kept in memory and shared across all of its modules: the user session.
When you do a little research on microservices authentication and authorization, one technology comes up as the best, if not the only, solution: JWT. Basically, this approach suggests that you put all of your session data into a signed (and optionally encrypted) token and send it back to the client that has logged into your application. With every request, the client sends that token back (usually in a request header), and then you verify the authenticity of the token and extract the session data from it. Session data in hand, you can pass it along to any service you need until you fulfill that request. You can set up a serverless function to decrypt the token and verify its signature, or even delegate this task to your API Gateway.
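Just to make that flow concrete, here is a minimal sketch of the verification step. I'm using the jjwt library purely as an example (any JWT library would do), and the header format and secret handling here are illustrative assumptions:

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;

public class JwtExample {

    private static final String SECRET = "change-me"; // shared signing secret (illustrative)

    // Expects a header like "Authorization: Bearer <token>"
    public static Claims readSession(String authorizationHeader) {
        String token = authorizationHeader.replaceFirst("^Bearer ", "");
        return Jwts.parser()
                   .setSigningKey(SECRET.getBytes())
                   .parseClaimsJws(token)   // verifies the signature, throws if tampered with
                   .getBody();              // the session data carried inside the token
    }
}
```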
This looks neat and elegant at first, but wait a minute... it is considerably more complex than what it used to be, right? Let's take a look at how it used to work with our monolith application, and why it had to change so drastically.
The old days
For quite a while, HTTP user sessions have been stored in the server's memory, indexed by a randomly generated hash with no meaning - the term "Opaque Token" has even arisen to identify a token that carries no data. This data-less token is sent back to the browser, and the server gently asks the browser to store it in a Cookie.
By nature, Cookies are automatically sent back to the server with every request, so after you are logged in, your next request to the server will certainly contain the Cookie, which in turn will contain the token needed to retrieve the respective user data from the server's memory.
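For reference, this is roughly what that looks like with the plain Servlet API: the container generates the opaque token (the familiar JSESSIONID cookie) and keeps the data in memory for you. The 30-minute timeout below is just an example:

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class MonolithSessionExample {

    // On login: the container creates the session, stores it in server memory
    // and sends the opaque JSESSIONID cookie to the browser automatically.
    public void onLoginSuccess(HttpServletRequest request, Object userData) {
        HttpSession session = request.getSession(true);
        session.setAttribute("user", userData);
        session.setMaxInactiveInterval(30 * 60); // expire after 30 minutes of inactivity
    }

    // On any later request: the browser sends the cookie back and the container
    // resolves it to the same in-memory session.
    public Object currentUser(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        return session == null ? null : session.getAttribute("user");
    }
}
```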
This approach has been used for more than a decade by many enterprise-grade application servers and is considered secure. But now we have microservices, so we can't rely on it anymore: every service is isolated from the others, and there is no common place to store this data.
The solution then, as proposed, is to send this data to the client, let it keep the data, and have it send the data back whenever needed. This raises a whole set of issues we didn't have before, and I'll try to describe some of them now:
Security issues
Most implementations suggest that you send the token back to the server in an HTTP header called "Authorization". For that to work, your client has to be able to receive, store and retrieve the token. The problem is, if your client code has access to this data, then any malicious code does too. In possession of the token, an attacker can try to decrypt it in order to access the data within it, or simply use it to access the application.
On the monolith, the server just sent the opaque token back in a Set-Cookie header, so the client code didn't have to deal with it at all, because the browser does this automatically. And the Cookie can be set in a way that it can't even be accessed by JavaScript, so malicious code can't reach it.
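That "can't be accessed by JavaScript" part is simply the HttpOnly flag. A minimal sketch, with an illustrative cookie name:

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class CookieExample {

    public void sendSessionCookie(HttpServletResponse response, String opaqueToken) {
        Cookie cookie = new Cookie("SESSIONID", opaqueToken); // cookie name is illustrative
        cookie.setHttpOnly(true); // not readable from JavaScript
        cookie.setSecure(true);   // only sent over HTTPS
        cookie.setPath("/");
        response.addCookie(cookie);
    }
}
```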
Performance issues
In order to keep everything safe, you have to sign the token with a cryptographic algorithm so no one can tamper with the data, and also encrypt it to make sure no one can read it.
Also, with every request received, your servers have to decrypt the token and verify its signature by running the cryptographic algorithm again. We all know how expensive cryptographic algorithms are in terms of computing, and none of this signing, encryption and decryption had to happen with our old monolith, save for the single run when you compare passwords at login - which you also have to do when using JWT.
Usability issues
JWTs are not the best at tracking session expiration by inactivity. Once you issue a token, it is valid until its own expiration, which is set inside the token. So either you issue a new token with every request, or you issue another token, called a Refresh Token, with a longer expiration, and use it only to get a new token after your token expires. As you may have realized, this just moves the same problem elsewhere - the session will expire as soon as the refresh token expires, unless you also refresh it.
As you can see, the proposed solution brings with it a lot of unsolved problems. But how can we achieve effective and secure user session management in a microservices architecture?
Bringing old concepts to a new world
Remember the actual problem: the user session used to be stored in the server's memory, and many enterprise servers could replicate this chunk of memory across all of their instances in a cluster, so it would be accessible no matter what. But now we hardly have enterprise servers anymore, since many microservice modules are standalone Java / Node / Python / Go / (insert your tech here) applications. How can they share a single portion of memory?
It's actually rather simple: add a central session server.
The idea here is to keep session data in the same fashion as before: create an opaque token to use as a key, and then add as much data as you want, indexed by that key. You do it in a place that is central and accessible by every microservice on your network, so whenever any of them needs the data, it is just one call away.
The best tool for this job is Redis. Redis is an in-memory key-value database with sub-millisecond latency. Your microservices can read user session data as if it were stored directly in their own memory (well, almost, but it is fast). Also, Redis has a feature that is key to this use case: it can set a timeout on a key-value pair, so as soon as the time expires, the pair is deleted from the database. The timeout can be reset by issuing a command. Sounds exactly like session timeout, right?
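In Jedis terms, the whole trick boils down to a handful of calls. This is just a sketch - the host, key prefix, timeout and hard-coded data are illustrative:

```java
import redis.clients.jedis.Jedis;

public class RedisSessionBasics {

    private static final int TIMEOUT_SECONDS = 30 * 60; // 30-minute inactivity window

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "session:abc123"; // "abc123" stands in for the opaque token

            // Store the session data with a timeout; Redis deletes it when the time is up.
            jedis.setex(key, TIMEOUT_SECONDS, "{\"user\":\"alice\",\"roles\":\"ADMIN\"}");

            // On every request, reset the timeout - this is the session keep-alive.
            jedis.expire(key, TIMEOUT_SECONDS);

            // Retrieve the data; returns null once the session has expired.
            String sessionData = jedis.get(key);
            System.out.println(sessionData);
        }
    }
}
```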
In order for this to work, you will need two things:
1 - The Authentication module
You will have to create one microservice responsible for the authentication of your users. It will receive the request with the username and password, check if the password is correct, and then create the session data on Redis.
It will have to generate the opaque token, retrieve the user data from your database, and store the token and the data on Redis. As soon as this is done, it returns the token to the client that requested the login, preferably in a Set-Cookie header if the client is a web browser.
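Here is a rough sketch of such a login endpoint, assuming Spring Web and the SessionManager sketched at the end of this post; the credential check is a placeholder you would replace with a real lookup against your user database:

```java
import java.util.HashMap;
import java.util.Map;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LoginController {

    // Redis-backed helper sketched at the end of this post.
    private final SessionManager sessionManager = new SessionManager();

    @PostMapping("/login")
    public void login(@RequestParam String username,
                      @RequestParam String password,
                      HttpServletResponse response) {

        // Placeholder: replace with your real user lookup and password hash comparison
        // (e.g. BCrypt against the hash stored in your user database).
        boolean credentialsOk = "demo".equals(username) && "demo".equals(password);
        if (!credentialsOk) {
            response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }

        // Build the session data you want every microservice to see.
        Map<String, String> sessionData = new HashMap<>();
        sessionData.put("username", username);
        sessionData.put("roles", "USER");

        // Store it on Redis and get the opaque token back.
        String token = sessionManager.createSession(sessionData);

        // Hand the token to the browser in an HttpOnly cookie.
        Cookie cookie = new Cookie("SESSIONID", token);
        cookie.setHttpOnly(true);
        cookie.setPath("/");
        response.addCookie(cookie);
    }
}
```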
2 - The Authorization module
I prefer to make this module sit within every microservice, but you can also set it up on your API Gateway if you like. Its responsibility is to take the request, extract the opaque token from it, then reach Redis to retrieve the user session data and make it available to the module that will process the request.
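A sketch of this module as a plain servlet filter, assuming the Servlet 4.0 API, the SESSIONID cookie name used above and the SessionManager sketched at the end of this post:

```java
import java.io.IOException;
import java.util.Map;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SessionFilter implements Filter {

    // Redis-backed helper sketched at the end of this post.
    private final SessionManager sessionManager = new SessionManager();

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;

        String token = extractToken(request);
        Map<String, String> sessionData = token == null ? null : sessionManager.getSession(token);

        if (sessionData == null || sessionData.isEmpty()) {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }

        // Make the session data available to whatever handles the request.
        request.setAttribute("sessionData", sessionData);
        chain.doFilter(req, res);
    }

    private String extractToken(HttpServletRequest request) {
        if (request.getCookies() == null) return null;
        for (Cookie cookie : request.getCookies()) {
            if ("SESSIONID".equals(cookie.getName())) return cookie.getValue();
        }
        return null;
    }
}
```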
To sum it up
As you can see, this solution is much simpler, faster and more secure than using JWT for user session control. But keep these points in mind:
- If you are using a single Redis shard, it can become your single point of failure. For production, I recommend a more robust setup with multiple shards and data replication.
- Session data can be modified by every module with access to Redis - use an "only add never delete" approach, just as you would in the old days.
I hope this helps.
As a bonus, here is a SessionManager to help with the implementation in Java using Jedis and Tomcat's token generator, usually included in Spring Boot:
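A minimal sketch of what such a SessionManager could look like (the connection details, key prefix and timeout are illustrative):

```java
import java.util.Map;
import org.apache.catalina.util.StandardSessionIdGenerator;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

/**
 * Minimal sketch of a Redis-backed session manager using Jedis and
 * Tomcat's session id generator as the opaque token source.
 */
public class SessionManager {

    private static final String KEY_PREFIX = "session:";
    private static final int TIMEOUT_SECONDS = 30 * 60;

    private final JedisPool pool = new JedisPool("localhost", 6379);
    private final StandardSessionIdGenerator idGenerator = new StandardSessionIdGenerator();

    /** Creates a new session, stores its data on Redis and returns the opaque token. */
    public String createSession(Map<String, String> sessionData) {
        String token = idGenerator.generateSessionId();
        try (Jedis jedis = pool.getResource()) {
            jedis.hmset(KEY_PREFIX + token, sessionData);
            jedis.expire(KEY_PREFIX + token, TIMEOUT_SECONDS);
        }
        return token;
    }

    /** Retrieves the session data and resets the inactivity timeout; empty map if expired. */
    public Map<String, String> getSession(String token) {
        try (Jedis jedis = pool.getResource()) {
            jedis.expire(KEY_PREFIX + token, TIMEOUT_SECONDS);
            return jedis.hgetAll(KEY_PREFIX + token);
        }
    }

    /** Removes the session, e.g. on logout. */
    public void invalidate(String token) {
        try (Jedis jedis = pool.getResource()) {
            jedis.del(KEY_PREFIX + token);
        }
    }
}
```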
Have fun!
Top comments (8)
First of all, thank you for such a good article!
There is one interesting issue that I am trying to resolve.
With a centralized Authorization microservice (behind the API Gateway) this works nicely.
It can get the token (the session id from the client request header), validate it against the data stored in Redis and so on. And of course, this centralized microservice is able to check whether a client is authenticated. But what about Authorization (checking permissions)?
The sign-in microservice can authenticate a user, store the token in Redis, etc.
The centralized Authorization microservice can check/validate the client's sessionId (token), OK!
But then, when the request is forwarded to an "ABC Microservice", how do we handle the permission check (authorizing the client)?
Option 1:
Each microservice receives the token with permissions from the middleware authorization microservice and handles the permission check by itself.
Option 2:
Each microservice sends a gRPC request to the centralized Authorization service to handle the authorization (permission check), etc.
Option 3:
Each microservice has access to the Authorization service / Redis data and handles authorization with its own functionality. In other words, there is no middleware service at all.
But in all of these cases we have one very big issue: how will the centralized Authorization service know about the other services' permissions?
i.e. sending a request to edit user profile data:
Microservice: account
Resource: /account/profile
Request method: PATCH
The /account microservice has its own permissions list.
(The developers of this microservice followed all the rules and standards when building that permissions list.) OK.
Question 1:
How can the middleware Authorization microservice know about these permissions in order to check them?
Question 2:
Even if the account microservice sends the Authorization microservice data about its own permissions list (via gRPC), how will the Authorization microservice know which service a given request should be authorized for?
Example: sending a request to /account/profile ...
How will the middleware microservice know that it should check permissions for the account service?
Interesting question, right?
Waiting for your answers and suggestions.
Thank you, and sorry for the long explanation. I tried to explain it in detail.
What I do is put a list of the roles the user has access to in the session data. So, when a request hits a microservice, the service goes to Redis and retrieves the session data. This data contains the list of roles available for that user, and the service must know which role grants access to itself. Then you just check whether that role is contained in the user's role list. This way authorization is checked at the microservice level, which I think is more secure.
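In code, the check each service does is tiny. Assuming the roles are kept in the session data as a comma-separated string (an assumption on my part, matching the sketches above):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class RoleCheck {

    // Each service hard-codes (or configures) the role that grants access to it.
    private static final String REQUIRED_ROLE = "ORDERS_USER"; // illustrative name

    public static boolean isAllowed(Map<String, String> sessionData) {
        String roles = sessionData.getOrDefault("roles", "");
        List<String> roleList = Arrays.asList(roles.split(","));
        return roleList.contains(REQUIRED_ROLE);
    }
}
```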
Hey, good stuff. Do you have any piece of code that shows the concept you outlined in action?
Isn't it a bad idea to have every microservice be able to access this central session store?
I think so, yes. It is not a good idea, because only the Authorization (middleware) microservice should have access to the sessions and the other databases related to authentication/authorization.
From my point of view, the "Products" or "Orders" microservice(s) should not have access to authorization data.
Thank you.
You can think of it as an isolation concern alright. However, session data is supposed to be shared, right?
This is a complex issue. In the end, I prefer option 1:
Each microservice receives the token with permissions from the middleware authorization microservice and handles the permission check by itself.
And the other microservices, such as "Products", "Catalogs" and "Orders", do not have access to the authorization database (Redis or any other DB, it doesn't matter which).
Thanks!
Thank you for such a nice and useful article.
Could you explain the authentication part using Redis with some sample code?