Ramon Durães

Distributed Logs and Tracing in Microservices Using .NET / C# and Devprime

One of the critical areas in microservices development is application observability, which encompasses distributed logs, transaction tracing, and metrics. Traditionally, achieving this observability demanded specific implementations in each application, posing a significant challenge for developers before the Devprime platform.

The Devprime platform serves as a powerful accelerator for microservices development, offering a broad spectrum of features to enhance developers' productivity. One of its key advantages is the ability to build a first cloud-native microservice in just 30 minutes, thanks to an intelligent software architecture strategy and automatic code generation.

Devprime addresses observability challenges with an intelligent Observability adapter that automatically generates distributed logs, using correlation strategies to capture and index them within the cluster. This simplifies retrieving and analyzing these logs in widely used tools like ELK (Elasticsearch, Logstash, Kibana).

Devprime also provides an automatic implementation of the OpenTelemetry protocol, exposing distributed traces. These traces can be consumed by market tools such as Zipkin, Jaeger, and other Application Performance Management (APM) tools.

With these resources available on the Devprime platform, developers can focus on their microservices' core development while observability tasks are handled automatically. This results in increased productivity, reduced development time, and a more robust and reliable microservices architecture.

Getting Started

The much-awaited moment has arrived: time to put Distributed Log and Distributed Trace into practice in a distributed-system scenario. In the first article of this series, we demonstrated the creation of the first microservice; now we'll use two already implemented projects to make it easier to understand the observability features provided by the Devprime platform.

To run this demonstration, set up local Docker containers for MongoDB, RabbitMQ, SEQ, and Jaeger, following the steps available in the Devprime documentation. It's also important to create a "devprime" exchange and the "orderevents" and "paymentevents" queues in RabbitMQ.
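If you don't already have these containers running, the commands below are a minimal local sketch. The image names, ports, and default credentials are assumptions on my part and may differ from the official Devprime documentation, so prefer the setup described there.

docker run -d --name mongodb -p 27017:27017 mongo
docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management
docker run -d --name seq -e ACCEPT_EULA=Y -p 8000:80 -p 5341:5341 datalust/seq
docker run -d --name zipkin -p 9411:9411 openzipkin/zipkin
# The "devprime" exchange and the "orderevents" and "paymentevents" queues
# can be created in the RabbitMQ management UI at http://localhost:15672
# (default user/password: guest/guest).

Jaeger (jaegertracing/all-in-one, UI on port 16686) can be used in place of, or alongside, Zipkin.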

Preparing the Environment

  • Install .NET 8.0 or higher (see the version check after this list).
  • Install Visual Studio Code.
  • Install and activate the Devprime CLI.
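To confirm that a compatible SDK is installed, run:

dotnet --version

The output should be 8.0.x or higher.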

Obtaining Sample Code
In this example, we'll use the "Order" microservice and the "Payment" microservice, which can be obtained directly from the Devprime GitHub or implemented manually following the Devprime documentation.

1) Clone the project:

git clone https://github.com/devprime/devprime-microservices-order-payment

2) Update the Stack and usage license through the Devprime CLI.
Enter the cloned folder and type "dp stack".

Running "Order" and "Payment" Microservices

Now that the microservices are available and updated, make sure you have completed the Docker setup from the beginning of the article, then navigate to each folder and run each microservice in its own terminal tab.

a) Order (ms-order)

.\run.ps1 (Windows) or ./run.sh (Linux, macOS)

b) Payment (ms-payment)

.\run.ps1 (Windows) or ./run.sh (Linux, macOS)

After both microservices are running simultaneously, access the first service at https://localhost:5001 and make a POST request to the API, filling in the order details; the order will be processed by the first service and then propagated to the second microservice.
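As a reference, the request can also be sent with curl (bash syntax) instead of the API page served at https://localhost:5001. The /v1/order route and the value field below are illustrative assumptions, not the project's actual contract; check the Order service's API documentation for the real endpoint and fields (customerName is the field used later in this article).

curl -k -X POST https://localhost:5001/v1/order \
  -H "Content-Type: application/json" \
  -d '{ "customerName": "Ramon", "value": 100 }'

The -k flag skips TLS validation for the local development certificate.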

Enabling Observability Settings

Now is the time to enable log capture in the local environment, where we'll use the SEQ tool mentioned at the beginning of the article. Alternatively, you can choose to use ELK (Elasticsearch, Kibana, Beats, and Logstash). For reading the Distributed Trace, we'll use Zipkin, which is compatible with the OpenTelemetry protocol.

In the local environment, configurations are done in the src/App/appsettings.json file available in each project. In our example, we have the "ms-order" and "ms-payment" projects. In the production environment, these parameters are configured through a security vault that shares the information.

Use Visual Studio Code to edit the files in each folder using the following command:

code src/App/appsettings.json

When opening each file, locate the DevPrime_Observability key and, at the root, verify that the Enable parameter is set to true. Then, in the Logs section, confirm that Enable is also set to true and that the ShowAppName and HideDateTime options are set to true. ShowAppName displays the microservice name, and HideDateTime hides the date and time, since SEQ already provides this information.

Next, go to the Export option and verify that Enable is set to true and Default is set to "SEQ". Then locate the Trace section and make sure that Enable is set to true and Type is set to "zipkin".

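For reference, the relevant section of appsettings.json is roughly shaped like the sketch below. Treat it as an illustration of the keys described above, not the exact schema, which may vary between Devprime Stack versions (for example, whether Export sits inside Logs and whether values are booleans or strings).

"DevPrime_Observability": {
  "Enable": true,
  "Logs": {
    "Enable": true,
    "ShowAppName": true,
    "HideDateTime": true,
    "Export": {
      "Enable": true,
      "Default": "SEQ"
    }
  },
  "Trace": {
    "Enable": true,
    "Type": "zipkin"
  }
}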

Viewing Distributed Log in SEQ

We have completed the configuration of the microservices. Now just access the Order microservice at https://localhost:5001 and make a POST request to observe the automatic log behavior provided by Devprime, visible in SEQ at http://localhost:8000 in this local setup.

[Image: SEQ showing the logs generated by the POST request]

It's important to note that besides automatically generating standardized logs in all applications, the Devprime platform uses Trace ID and Correlation ID strategies that allow logs to be correlated even when they are produced by different microservices, as shown in the example above with ms-order and ms-payment.

Now repeat the process with a new POST to the Order API, but this time remove the entire "customerName" line from the request body before sending it.
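Under the same assumptions as the earlier curl sketch (the /v1/order route and the value field are hypothetical), the invalid request would look like this:

curl -k -X POST https://localhost:5001/v1/order \
  -H "Content-Type: application/json" \
  -d '{ "value": 100 }'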

[Image: POST request to the Order API without the customerName field]

In this scenario, due to the missing mandatory field, a business rule error was recorded, resulting in an automatic trace identifier that allows the internal team to precisely locate what went wrong in this procedure.

[Image: error response returning the generated trace identifier]

It's essential to highlight that, despite the error, no additional details were returned in the API response: Devprime's internal processing pipeline includes protection to prevent the exposure of confidential information.

Now it's time to return to SEQ and query our log filtered by TraceId == "9435d9eb-1e95-49fa-9e94-df7b34fd6d3f" (use the identifier returned in your own run), which gives the complete details of this application flow with the full error breakdown.

[Image: SEQ log entries filtered by TraceId, showing the full error details]

Viewing Distributed Trace in Zipkin

The Devprime platform exposes distributed traces in the OpenTelemetry format, which is supported by many Application Performance Management (APM) tools on the market. The one presented here is Zipkin, but you can also use Jaeger or any similar tool.

The next step is to confirm that the Order and Payment microservices are running and make a POST request to the ms-order API available at https://localhost:5001. In our local Docker environment, Zipkin is running on port 9411; to view the distributed trace, simply access http://localhost:9411 and filter the captured events to obtain the visualization below.

[Image: distributed trace across ms-order and ms-payment visualized in Zipkin]

Final Thoughts

Devprime is a platform that accelerates software developers' productivity, saving up to 70% of the backend investment, as seen in this article's focus on observability. It showcased automatic features that ease developers' daily work and naturally support the Site Reliability Engineering (SRE) teams responsible for managing digital platforms.

In a production environment, it's recommended to use collectors like Fluentd or Fluent Bit to capture logs directly from container output and publish them to a repository for later indexing and querying.

To capture OpenTelemetry data in production, you need to deploy a collector and a backend that can be queried through compatible tools. Cloud providers like AWS, Azure, and Google Cloud are evolving to provide managed resources for OpenTelemetry.

What do you think? Share your thoughts in the comments.

Ramon Durães
CEO, Devprime

Image: Freepik/Vectorjuice
