Shahar Shalev

From Chaos to Clarity: Tracing Your Flows Like a Pro with Powerful Log Techniques

I recently received a bug report from a client and struggled to locate the logs describing what had occurred; sometimes, even when you have logs, they aren’t structured in a way that makes issues easy to trace.

In this blog post, I’ll share a few simple tips for tracing your flows and handling fan-in and fan-out processes.

Simple case — 1:1 — Request-Id

Example of a simple flow passing the request-id from one process to another

The simplest scenario is when an API creates sub-tasks or sub-processes.
To keep track of the entire flow, we can create a Request-Id (a random UUID) and pass it along the way.
Don’t forget to include the Request-Id in the response.
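The idea can be sketched in a few lines. This is a minimal, hypothetical handler (the function names are made up for illustration): generate the Request-Id at the entry point, hand it to every sub-process, and echo it back in the response.

```typescript
import { randomUUID } from 'crypto';

// Sketch of a sub-process that receives the same Request-Id as its parent,
// so every log line it emits can be tied back to the original request.
function runSubProcess(requestId: string, payload: string) {
  console.log(JSON.stringify({ requestId, msg: 'sub-process started', payload }));
}

// Entry point: create the Request-Id once, pass it along the way,
// and include it in the response so the client can report it later.
function handleRequest(payload: string) {
  const requestId = randomUUID();
  runSubProcess(requestId, payload);
  return { requestId, status: 'OK' };
}
```

With this in place, a client bug report that includes the returned Request-Id lets you filter the logs down to a single flow immediately.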

Fan-Out — 1:N — Multiple Request-Ids

The main process creates a new request-id for each sub-process

In a Fan-Out process, you have a main process that spawns a number of sub-processes, and you need to track each one of them.
For each sub-process, create a new request-id, and then log both the old and the new request-id.

Note that you log both the previous and the new request-id
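A fan-out can be sketched as follows (function and field names are hypothetical): each sub-process gets a fresh request-id, and the parent id is logged next to it so the flow can be stitched back together later.

```typescript
import { randomUUID } from 'crypto';

// Fan-out sketch: one parent request-id, one new request-id per task.
// Logging both ids on every line is what makes the fan-out traceable.
function fanOut(parentRequestId: string, tasks: string[]): string[] {
  return tasks.map((task) => {
    const childRequestId = randomUUID();
    console.log(JSON.stringify({
      parentRequestId,            // the old id, from the main process
      requestId: childRequestId,  // the new id for this sub-process
      msg: `starting sub-process for ${task}`,
    }));
    return childRequestId;
  });
}
```

Searching the logs for the parent id now returns the spawn line of every sub-process, and each child id leads to that sub-process’s own trail.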

Fan-In — N:1

In a fan-in flow you create a new request-id for all the sub-processes

In a Fan-In process, several sub-processes are combined into a single new process.
For the new process, create a new Request-Id and log both the old request-ids and the new one.
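The fan-in side is the mirror image (again, a minimal sketch with made-up names): the merged process gets one new request-id, and the ids of the sub-processes it was merged from are logged alongside it.

```typescript
import { randomUUID } from 'crypto';

// Fan-in sketch: many old request-ids, one new request-id.
// Keeping the old ids in the log line preserves the link back to each branch.
function fanIn(subProcessRequestIds: string[]): string {
  const requestId = randomUUID();
  console.log(JSON.stringify({
    requestId,                          // the new id for the merged process
    mergedFrom: subProcessRequestIds,   // the old ids, kept for traceability
    msg: 'fan-in: merging sub-processes',
  }));
  return requestId;
}
```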

Log Aggregation

We know how to track our processes, but what exactly do we want to monitor?
In a fan-out flow, we might be interested in tracking the status of each subprocess (success or failure).

To monitor the flow, each of our logs will use a combination of a logId and a requestId.

Example of logs

Then we can aggregate the data and turn it into a table that shows the status of our request-ids.
Some columns will be aggregated with the latest value, while others might be concatenated, summed, or averaged.
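As a sketch of that aggregation step (the log shape here is hypothetical, mirroring the logId/requestId combination above): group log lines by requestId, keep the latest status, and concatenate the descriptions.

```typescript
// Hypothetical log line shape: a logId identifying the log point,
// a requestId identifying the flow, plus whatever fields the flow emits.
interface LogEntry {
  logId: string;
  requestId: string;
  status?: string;
  description?: string;
}

// Build a summary table keyed by requestId:
// - status is aggregated with the latest value (last write wins)
// - descriptions are concatenated together
function aggregateByRequestId(entries: LogEntry[]) {
  const table = new Map<string, { latestStatus?: string; descriptions: string[] }>();
  for (const e of entries) {
    const row = table.get(e.requestId) ?? { descriptions: [] };
    if (e.status) row.latestStatus = e.status;
    if (e.description) row.descriptions.push(e.description);
    table.set(e.requestId, row);
  }
  return table;
}
```

In practice this aggregation usually lives in your log platform’s query language rather than in application code, but the grouping logic is the same.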

Take the logs and aggregate them into a table with a summary

Additionally, statistics can be generated for each flow, giving insight into error distributions.
These statistics can serve as benchmarks for detecting errors automatically whenever their occurrence rises above normal.
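The error-distribution part is just a frequency count over the aggregated statuses. A minimal sketch:

```typescript
// Count how often each status appears across a flow's requests.
// Comparing this distribution against a historical baseline is what
// lets you flag higher-than-normal error rates automatically.
function errorDistribution(statuses: string[]): Record<string, number> {
  const dist: Record<string, number> = {};
  for (const s of statuses) {
    dist[s] = (dist[s] ?? 0) + 1;
  }
  return dist;
}
```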

Creating an error-distribution table

Tip:

Wrap your shared log calls in a dedicated module or class, so they’re easier to extend and use consistently, and so you get type safety and other benefits.

It might look something like this:

import { logger } from './logger-service';

interface FlowAStatusInput {
  requestId: string;
  status?: 'FAILED_REASON_A' | 'FAILED_REASON_B' | 'SUCCESS';
  description?: string;
  // Other attributes
}

// One function per shared log point keeps the logId and field names
// consistent everywhere the log is emitted.
export function logFlowAStatus(input: FlowAStatusInput) {
  logger.log({
    logId: 'FLOW_A_STATUS',
    ...input,
  });
}
