Michael Levan

Automating Cognitive Relief For Engineers

Have you ever reached a point where you're stuck, cycling through the same thought process over and over without arriving at a new result? In many cases, this is burnout, and it's quite common in the engineering community.

As many engineers know, there are plenty of times when an 18-hour day happens, or an issue comes up and you spend far more time on it than you believe you should.

In this blog post, you'll learn one way to reduce complexity, which in turn means less cognitive load, when it comes to resource optimization.

How Engineers Should Think

Engineers have always worked in high-stress environments for two primary reasons:

  1. They're usually expected to work far more than other departments.
  2. With the mass layoffs happening, someone has to pick up the slack.

First, there's the overall expectation. For some reason, it's common in corporate environments for engineers not to be expected to sign off at 5:00 PM. Although this is sometimes fine, the truth is that there are times when engineers need to sign off, recoup, and rest their minds. The human brain isn't designed to be on 24/7. You need rest and break periods; otherwise, you burn out.

Second, it's no secret that a massive number of layoffs are happening right now in the tech space, and here's the thing - just because there are layoffs doesn't mean there's less work. In fact, there's usually more. Someone has to pick up the slack, and it's usually the engineers who are still left in the organization. That means they have to do not only their own work, but other people's work as well.

Whether you're dealing with layoffs or with the need to stay locked into your work for longer than usual, there are a few methods for reducing cognitive load.

Helpful Methods of Reducing Cognitive Load

The three primary methods you can use to reduce cognitive load, regardless of the situation you're in, are:

  1. Think repeatability, not automation
  2. Spend less time putting out fires
  3. Have a platform that does the work for you

The first is to think about repeatability, which is far different from automation. For example, let's think about GitOps. Technically, from an automation perspective, you could write a bunch of bash scripts that check in with repositories and deploy what's changed on an interval you create via a cron job. However, is that the most effective way? You have to write your own Controllers, pay for servers to host that automated solution, and so on. Instead, you can use a GitOps Controller that already exists, is well documented, and has support.
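
For instance, instead of maintaining your own scripts and cron jobs, a GitOps Controller such as Argo CD lets you describe the whole sync loop in a few lines of YAML. The repository URL, path, and names below are placeholders, not a recommendation of any specific setup:

```yaml
# An illustrative Argo CD Application that keeps a cluster in sync with a Git repo.
# repoURL, path, and namespace values are placeholders for your own environment.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/example-repo.git
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true    # delete resources that were removed from Git
      selfHeal: true # revert manual drift back to what Git declares
```

The Controller handles the polling, diffing, and deploying that the bash-and-cron approach would force you to build and host yourself.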

Next, there's the idea of putting out fires. Going back to Google's SRE book, engineers have been attempting to reduce cognitive load for years. In fact, the whole idea of a true SRE is to spend no more than 50% of your time putting out fires, which in turn relieves cognitive load. The other 50 percent or more should be spent creating repeatable processes so that the same fires don't keep happening.

Last but certainly not least is the solution that addresses numbers 1 and 2 on the list - have a platform/tool that does the job for you. Tools don't solve people problems, but if implemented correctly, they can solve repeatable issues. In the case of resource optimization, you don't want to sit and manually scale, or even set up Manifests to do it for you. Instead, you want a platform that automatically handles scaling for Pods. Otherwise, engineers are going backward and managing Pods like they would servers - manually.
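
To make that concrete, here's the kind of Manifest you'd otherwise be writing and maintaining by hand for every workload. The replica counts and CPU threshold below are placeholders - exactly the sort of guesswork a platform should take off your plate:

```yaml
# A hand-written HorizontalPodAutoscaler: scaling happens automatically,
# but the min/max replicas and target utilization are still numbers an
# engineer has to guess at and revisit. Values here are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-workload
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```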

Cognitive Relief with StormForge

In this section, you’re going to learn how to configure StormForge on your Kubernetes cluster. With StormForge, you have the ability to be hands-off when it comes to scaling. You can let the tool handle the work for you so you can get back to the work that you actually want to be doing.

Getting A Cluster Up and Running

Getting started with StormForge is pretty straightforward: you install a Helm Chart with values that authenticate and authorize your cluster to StormForge.

First, under Clusters, click the green + Add Cluster button.

Next, give your cluster a name. It doesn't have to be the exact name of your cluster, but for management purposes, it should be.

You'll then see that a values.yaml file is generated for you. It contains the authentication and authorization values needed to connect your cluster to StormForge.

Lastly, install StormForge with Helm using the values.yaml file.
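
The exact command, chart location, and release name come from the StormForge UI alongside the generated values file, so treat the snippet below as a sketch of the general shape rather than the literal command - the chart reference and namespace are placeholders:

```bash
# Install the StormForge agent with the generated values.yaml.
# <stormforge-chart-reference> and the namespace are placeholders; use the
# exact command StormForge shows you when it generates the values file.
helm install stormforge-agent <stormforge-chart-reference> \
  --namespace stormforge-system \
  --create-namespace \
  --values values.yaml
```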

Once the installation completes, it takes roughly 30 minutes to 1 hour for the cluster to be fully registered within StormForge.
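
While you wait, you can confirm the agent Pods are at least up and running. The namespace below matches the placeholder used in the install sketch above, so adjust it to wherever you actually installed the chart:

```bash
# Verify the agent Pods are running; the namespace is an assumption.
kubectl get pods --namespace stormforge-system
```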

Generating Recommendations

Once the workloads are registered in StormForge, which can take 30 minutes to 1 hour, go to the Optimize Live tab and click on Workloads.

Choose a Workload.

Click the green Export Patch command.

You'll see a Kubernetes Manifest download that contains the recommended resource requests and limits for your workload. You no longer have to guess.
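
The exact contents depend on your workload, but the patch is ordinary Kubernetes YAML along these lines. The workload name and numbers below are illustrative placeholders, not real recommendations:

```yaml
# Illustrative shape of an exported patch: per-container requests and limits
# for an existing Deployment. Names and values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-workload
  namespace: default
spec:
  template:
    spec:
      containers:
        - name: app
          resources:
            requests:
              cpu: 150m
              memory: 200Mi
            limits:
              cpu: 300m
              memory: 400Mi
```

From there, you can apply it with `kubectl apply -f` or `kubectl patch --patch-file`, depending on how the rest of your deployment workflow is set up.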

In an upcoming blog post, you'll learn how to take this a step further: with the StormForge Applier, you don't even have to apply the updated Manifest yourself.

Top comments (1)

Oshrat Nir

I like the point of thinking about repeatability before addressing automation.