Pietro Bongiovanni

Liveness and Readiness Probes with Laravel

This post was first published on my blog; to see my posts as soon as I publish them, check it out!

Liveness and readiness probes are two of the most powerful and useful tools that Kubernetes offers for orchestrating your containers. Kubernetes by itself is very good at rescheduling deployments, restarting pods and, in general, handling extreme situations.
What a default configuration lacks is a way of detecting problems with the state of the applications running in its pods: suppose an application running in a pod hits a deadlock and is no longer able to serve responses. An out-of-the-box Kubernetes setup has no way of detecting such a problem, so if this is a situation that might happen in one of your applications, you will need to set up some tooling to handle it.
Luckily for you, Liveness Probes are designed to solve exactly this problem. Let's see how to implement a simple Liveness Probe and a Readiness Probe in your Laravel application.

First, you will need to set up a health route: a simple route that does nothing but return a 200 OK status code. You can achieve this by adding the following snippet to your routes file (routes/web.php on recent Laravel versions):

Route::get('/healthz', function () {
    return 'ok';
});

Once you have added this simple route to your code, you will need to add the following snippet to your deployment.yml:

livenessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 15
  failureThreshold: 3
  periodSeconds: 3

The above snippet tells Kubernetes to perform a request to the specified path every periodSeconds seconds. If the response has a status code greater than or equal to 200 and lower than 400, the check is registered as successful; any other status code is registered as a failure. After failureThreshold consecutive failed checks, K8s will kill the pod and start up a new one. Finally, initialDelaySeconds indicates how many seconds Kubernetes will wait before it starts sending the Liveness Probe check requests: this gives the application time to start up and be ready to serve requests.
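For context, these probe fields do not live at the top level of the manifest: they belong under the container definition inside the Deployment's pod template. A minimal sketch of the placement (the laravel-app name and image are hypothetical, assuming a container serving HTTP on port 80):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-app          # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: laravel-app
  template:
    metadata:
      labels:
        app: laravel-app
    spec:
      containers:
        - name: app
          image: my-registry/laravel-app:latest  # hypothetical image
          ports:
            - containerPort: 80
          livenessProbe:      # the snippet from the post goes here
            httpGet:
              path: /healthz
              port: 80
            initialDelaySeconds: 15
            failureThreshold: 3
            periodSeconds: 3
```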

Readiness probes, on the other hand, are useful to take a pod out of rotation when it is running and answering HTTP requests, but is temporarily unable to do useful work. Suppose an application needs to access a database to return data: we can say that if the application cannot reach the database, it is not ready to serve traffic.
This second type of probe checks exactly that: if the check fails, Kubernetes will mark the pod as not ready to accept traffic, and the pod will not receive traffic from outside or from inside the cluster.

Let's see how to implement a readiness probe, starting from the requirement that our application needs to access a database in order to be considered "ready". First, let's create a /readiness HTTP endpoint: this route performs a simple check on the database. If it is able to count the rows of a check table, it returns a 200 OK status code; otherwise, it returns a 500 status code.

Route::get('/readiness', function () {
    try {
        // If the database is unreachable, this query throws an exception.
        $check = DB::table('check')->count();
    } catch (\Exception $e) {
        return response('', 500);
    }

    return $check > 0 ? response('ok') : response('', 500);
});

Once we have a working route, we will need to add the following snippet to our deployment.yml and redeploy our application.

readinessProbe:
  httpGet:
    path: /readiness
    port: 80
  initialDelaySeconds: 10
  successThreshold: 1
  failureThreshold: 3
  timeoutSeconds: 1
  periodSeconds: 15

As you can see, the syntax for Liveness and Readiness probes is quite similar. The extra fields here are timeoutSeconds, the number of seconds after which an unanswered check counts as failed, and successThreshold, which indicates how many consecutive successful checks are necessary for the pod to be considered "ready". In the above case, after one successful check the application will be marked ready and will start to receive traffic.
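One practical consequence of these parameters is how long a broken pod can go undetected. A quick back-of-the-envelope sketch (not from the original post; it assumes each failed check can take up to timeoutSeconds to time out, and uses the Kubernetes default timeoutSeconds of 1 for the liveness probe above):

```python
def worst_case_detection_seconds(period, failure_threshold, timeout=1):
    """Rough upper bound on how long a failure can go unnoticed:
    each check runs every `period` seconds and may take up to `timeout`
    seconds to fail, and `failure_threshold` consecutive failures are
    needed before Kubernetes acts on the pod."""
    return failure_threshold * (period + timeout)

# Liveness probe from the post: periodSeconds=3, failureThreshold=3
print(worst_case_detection_seconds(period=3, failure_threshold=3))  # 12

# Readiness probe: periodSeconds=15, failureThreshold=3, timeoutSeconds=1
print(worst_case_detection_seconds(period=15, failure_threshold=3, timeout=1))  # 48
```

So with these settings a deadlocked pod is restarted within roughly 12 seconds, while a pod with a lost database connection may keep receiving traffic for up to about 48 seconds; shrinking periodSeconds tightens that window at the cost of more probe requests.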

As you can see, implementing Liveness and Readiness probes for your application, Laravel or otherwise, is very straightforward: simply add a route which returns a status code in the 200-399 range for a successful check and any other status code for a failure, add the above snippets to your deployment files, and you will be up and running. Clearly, like any other Kubernetes element, Liveness and Readiness probes can be configured in more depth. To explore how to do so, I suggest you check the official documentation, which as always is very thorough.

Top comments (3)

Eduardo Reyes

Let's not forget that the health checks can be more complicated; returning a 200 is OK I guess, but it can be so much better.

Also, I think the first deployment.yml code should point to /healtz

Pietro Bongiovanni

The health check can definitely be more complicated, but the only requirement is that it returns a status code in the 200 ≤ status code < 400 range, so I went for the most basic one!

Also, good catch, my copy-pasting skills are getting rusty.

Have a great day! :)

Marco Siciliano

Hi,
I have a Laravel WebSocket deployment running in a pod and a service listening on port 7676 for incoming WebSocket connections.
My problem now is that I have to configure liveness/readiness probes to get my ingress running (otherwise my backend service - laravel-websockets - remains UNHEALTHY).
How can I add liveness/readiness probes to my Laravel WebSocket server?
Any idea?

Do I need to put an nginx in front of the container running the Laravel WebSocket server?

Greetings
Marco