<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rob Van Pamel</title>
    <description>The latest articles on DEV Community by Rob Van Pamel (@robvanpamel).</description>
    <link>https://dev.to/robvanpamel</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F936667%2Fecb5f4cf-ce65-4ea8-bc95-9720bef00cc8.jpeg</url>
      <title>DEV Community: Rob Van Pamel</title>
      <link>https://dev.to/robvanpamel</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/robvanpamel"/>
    <language>en</language>
    <item>
      <title>Find your application's hidden secrets using opentelemetry</title>
      <dc:creator>Rob Van Pamel</dc:creator>
      <pubDate>Thu, 23 Feb 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/robvanpamel/find-your-application-hidden-secrets-using-opentelemetry-3ce4</link>
      <guid>https://dev.to/robvanpamel/find-your-application-hidden-secrets-using-opentelemetry-3ce4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlg837bze7zifuvmkd1g.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlg837bze7zifuvmkd1g.jpg" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When your project is growing, the need to gain insight into your application grows along with it. You need to know in detail what is happening inside the application, either to resolve a bug or to improve performance. In the past, these insights were mainly obtained by adding extensive logging. Most of us have already done this, and it is the easiest step to start with. If you would like to go a step further, adding trace and metric information is the way to go. Trace information is extremely valuable when you start to work with distributed applications, because it allows you to follow a request across multiple systems. &amp;lt;!--more--&amp;gt;&lt;/p&gt;

&lt;p&gt;The combination of logs, traces and metrics is called telemetry data. While there are different ways to collect this data, OpenTelemetry is the de facto standard these days. OpenTelemetry is a Cloud Native Computing Foundation (CNCF) project that provides us with an open standard to collect telemetry data from our applications. &lt;em&gt;(the standard isn’t fully approved yet, but this shouldn’t hold us back from using it)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The OpenTelemetry project consists of several topics which I would like to explain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Signals&lt;/li&gt;
&lt;li&gt;Open Telemetry Collector&lt;/li&gt;
&lt;li&gt;Instrumenting your application&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Signals
&lt;/h1&gt;

&lt;p&gt;In OpenTelemetry, the signals are the different kinds of telemetry data that are supported: logs, traces and metrics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logs
&lt;/h2&gt;

&lt;p&gt;A log is a timestamped text record, either structured (recommended) or unstructured, with metadata.&lt;/p&gt;

&lt;h2&gt;
  
  
  Traces
&lt;/h2&gt;

&lt;p&gt;Traces give us the big picture of what happens when a request is made by a user or an application, for example when a GET request is executed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Metrics
&lt;/h2&gt;

&lt;p&gt;A metric is a measurement about a service, captured at runtime.&lt;/p&gt;

&lt;h1&gt;
  
  
  Open Telemetry Collector
&lt;/h1&gt;

&lt;p&gt;The OpenTelemetry Collector is the process that acts as a gateway: it receives the signals (telemetry data), processes them and sends them to your observability tool. Using the collector isn’t mandatory; it is possible to send the telemetry data directly to your observability tool, but this isn’t recommended for production environments, because the collector can take care of retries and more.&lt;br&gt;&lt;br&gt;
There are 2 collector variants available, the &lt;em&gt;&lt;a href="https://github.com/open-telemetry/opentelemetry-collector/releases" rel="noopener noreferrer"&gt;normal&lt;/a&gt;&lt;/em&gt; one and the &lt;em&gt;&lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/releases" rel="noopener noreferrer"&gt;contrib&lt;/a&gt;&lt;/em&gt; one. As you might expect, the contrib variant includes more receivers, processors and exporters built by the community.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Frobvanpamel%2Frobvanpamel.github.io%2Fmain%2F_posts%2Fotel%2Fotel_collector.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Frobvanpamel%2Frobvanpamel.github.io%2Fmain%2F_posts%2Fotel%2Fotel_collector.svg" width="984" height="698"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Receivers
&lt;/h2&gt;

&lt;p&gt;Here you define how data is ingested into your collector. It can be a single source, but nothing holds you back from using multiple receivers. The default is the OpenTelemetry Protocol (OTLP), which runs on HTTP (port 4318) and gRPC (port 4317). Other options that are available out of the box are Jaeger and Prometheus, and if you use the contrib variant, you have even more options.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://opentelemetry.io/docs/collector/configuration/#receivers" rel="noopener noreferrer"&gt;https://opentelemetry.io/docs/collector/configuration/#receivers&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;
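&lt;p&gt;As a sketch of combining multiple receivers, the snippet below enables OTLP next to the out-of-the-box Prometheus scrape receiver. The job name and scrape target are placeholders for your own setup.&lt;/p&gt;

```yaml
receivers:
  otlp:
    protocols:
      http:     # defaults to port 4318
      grpc:     # defaults to port 4317
  prometheus:
    config:
      scrape_configs:
        - job_name: my-app        # placeholder job name
          scrape_interval: 30s
          static_configs:
            - targets: ["localhost:9090"]   # placeholder target
```

&lt;p&gt;Every receiver you define here still has to be referenced in a pipeline under &lt;code&gt;service&lt;/code&gt; before it becomes active.&lt;/p&gt;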
&lt;h2&gt;
  
  
  Processors
&lt;/h2&gt;

&lt;p&gt;Here is where you can add some magic to your signals. Each trace, log or metric can be tweaked in this space. Some examples are adding tags, filtering logs or traces, sampling, … More information can be found below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://opentelemetry.io/docs/collector/configuration/#processors" rel="noopener noreferrer"&gt;https://opentelemetry.io/docs/collector/configuration/#processors&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
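&lt;p&gt;To give an idea, the sketch below combines two common processors: the &lt;code&gt;attributes&lt;/code&gt; processor to add a tag to every span, and the &lt;code&gt;probabilistic_sampler&lt;/code&gt; to keep only part of the traces. The key, value and percentage are placeholders.&lt;/p&gt;

```yaml
processors:
  attributes:
    actions:
      - key: deployment.environment   # placeholder tag
        value: production
        action: insert                # only adds it when not present yet
  probabilistic_sampler:
    sampling_percentage: 25           # keep roughly 1 in 4 traces
```

&lt;p&gt;Like receivers and exporters, processors only take effect once they are listed in a pipeline, and the order in that list is the order in which they run.&lt;/p&gt;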
&lt;h2&gt;
  
  
  Exporters
&lt;/h2&gt;

&lt;p&gt;Like the name suggests, the exporter is responsible for exporting the telemetry data to your observability tool. Just like on the receiver side, the default is the OpenTelemetry Protocol (HTTP and gRPC), but Jaeger and Prometheus are also present, and when using the contrib variant of the collector, much more is possible. In the example below we will use the Datadog exporter, because we want to send our data there.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://opentelemetry.io/docs/collector/configuration/#exporters" rel="noopener noreferrer"&gt;https://opentelemetry.io/docs/collector/configuration/#exporters&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;p&gt;The configuration of the collector is done via a &lt;code&gt;yaml&lt;/code&gt; file, in which you define how it behaves. Let’s take a look.&lt;/p&gt;

&lt;p&gt;The first part we need to configure is the receiver side. In our example, we use OTLP with both HTTP and gRPC enabled. We also have to instrument our web application so that it ‘exports’ OTLP data to the collector; for more information, see Instrumenting your application below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;receivers:
  otlp:
    protocols:
      http:
      grpc:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next part to configure is the processor side. As mentioned above, you can tweak your signals here before sending them to your exporter. The example below defines 2 processors. One thing to note here is that not all processors are in a stable phase yet. I personally don’t mind, but you should be aware of this. See &lt;a href="https://github.com/open-telemetry/opentelemetry-collector#stability-levels" rel="noopener noreferrer"&gt;this link for more information about the stability levels&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The first processor is a filter that we defined for our traces. In the example below we have defined some health checks on our application, and we do not want them to ‘pollute’ our traces when everything is configured as it should be (HTTP status = 200). By adding this filter we can remove these traces, but if a health check fails (HTTP status &amp;lt;&amp;gt; 200), we will still see it. The filter processor has lots more possibilities; examples can be found &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/filterprocessor" rel="noopener noreferrer"&gt;in the readme&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The next processor is the batch processor. As you might expect, it batches the data into larger payloads before they are exported (in this case to Datadog). Batches can be created based on size or on time.&lt;br&gt;&lt;br&gt;
There is a timeout (e.g. 10s) after which a batch is sent regardless of its size. On the other hand there is the batch size (&lt;code&gt;send_batch_size&lt;/code&gt;), which specifies the number of signals a batch contains before being sent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;processors:
  filter: 
    traces: 
      span: 
       - 'attributes["http.target"] == "/health/ready" and attributes["http.status_code"] == 200'
       - 'attributes["http.target"] == "/health/live" and attributes["http.status_code"] == 200'

  # The batch processor batches telemetry data into larger payloads.
  # It is necessary for the Datadog traces exporter to work optimally,
  # and is recommended for any production pipeline.
  batch:
    # Datadog APM Intake limit is 3.2MB. Let's make sure the batches do not
    # go over that.
    send_batch_max_size: 1000
    send_batch_size: 100
    timeout: 10s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next part of the configuration is the exporters. Here you specify where your data will be sent. I have specified 2 exporters here: a file exporter and our Datadog exporter. &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter" rel="noopener noreferrer"&gt;Here&lt;/a&gt; you can find a list of all the exporters provided by the community. For each exporter you have to specify a bit more detail, e.g. the file exporter requires a path to be defined, whereas the Datadog exporter requires an API key and site. Each exporter has a decent readme file on GitHub where the required attributes are listed. Here you can find the readme for &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/datadogexporter" rel="noopener noreferrer"&gt;Datadog&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exporters:
  file/no_rotation:
    path: ./logging
  datadog:
    api:
      ## The Datadog API key to associate your Agent's data with your organization.
      key: "&amp;lt;Your API key goes here&amp;gt;"
      site: datadoghq.eu
      fail_on_invalid_key: true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last part to define in the collector is the pipeline. This is where you connect the dots: for each signal (traces, metrics or logs) you define its receivers, processors and exporters. This is where the power of the collector becomes visible. You can export your telemetry to different destinations, or receive logs only from specific sources and route them to other exporters. Really nice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog]
    traces:
      receivers: [otlp]
      processors: [batch, filter]
      exporters: [datadog]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog, file/no_rotation]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you spin up the collector you will see something like this &lt;em&gt;(I manually filtered the output a bit here)&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;C:\Users\rob.vanpamel\work\OTELCOL&amp;gt; .\otelcol-contrib.exe --config .\configuration\config.yaml
2023-02-23T10:24:56.365+0100 info service/telemetry.go:90 Setting up own telemetry...
2023-02-23T10:24:56.369+0100 info service/telemetry.go:116 Serving Prometheus metrics {"address": ":8888", "level": "Basic"}
2023-02-23T10:24:56.712+0100 info provider/provider.go:41 Resolved source {"kind": "exporter", "data_type": "metrics", "name": "datadog", "provider": "system", "source": {"Kind":"host","Identifier":"BEACC-8MT3MN3"}}
2023-02-23T10:24:56.718+0100 info clientutil/api.go:47 Validating API key. {"kind": "exporter", "data_type": "metrics", "name": "datadog"}
2023-02-23T10:24:56.967+0100 info clientutil/api.go:51 API key validation successful. {"kind": "exporter", "data_type": "metrics", "name": "datadog"}
2023-02-23T10:24:57.334+0100 info clientutil/api.go:51 API key validation successful. {"kind": "exporter", "data_type": "logs", "name": "datadog"}
2023-02-23T10:24:57.350+0100 info logs/sender.go:45 Logs sender initialized {"kind": "exporter", "data_type": "logs", "name": "datadog", "endpoint": "https://http-intake.logs.datadoghq.eu"}
2023-02-23T10:24:57.357+0100 info service/service.go:128 Starting otelcol-contrib... {"Version": "0.70.0", "NumCPU": 16}
2023-02-23T10:24:57.357+0100 info extensions/extensions.go:41 Starting extensions...
2023-02-23T10:24:57.358+0100 info service/pipelines.go:86 Starting exporters...
2023-02-23T10:24:57.359+0100 info service/pipelines.go:90 Exporter is starting... {"kind": "exporter", "data_type": "metrics", "name": "datadog"}
2023-02-23T10:24:57.360+0100 info service/pipelines.go:94 Exporter started. {"kind": "exporter", "data_type": "metrics", "name": "datadog"}
2023-02-23T10:24:57.360+0100 info service/pipelines.go:90 Exporter is starting... {"kind": "exporter", "data_type": "traces", "name": "datadog"}
2023-02-23T10:24:57.362+0100 info service/pipelines.go:98 Starting processors...
2023-02-23T10:24:57.362+0100 info service/pipelines.go:102 Processor is starting... {"kind": "processor", "name": "batch", "pipeline": "metrics"}
2023-02-23T10:24:57.373+0100 info service/pipelines.go:106 Processor started. {"kind": "processor", "name": "filter/logs", "pipeline": "logs"}
2023-02-23T10:24:57.374+0100 info service/pipelines.go:102 Processor is starting... {"kind": "processor", "name": "batch", "pipeline": "logs"}
...  
2023-02-23T10:24:57.383+0100 info otlpreceiver@v0.70.0/otlp.go:112 Starting HTTP server {"kind": "receiver", "name": "otlp", "pipeline": "logs", "endpoint": "0.0.0.0:4318"}
2023-02-23T10:24:57.385+0100 info service/pipelines.go:118 Receiver started. {"kind": "receiver", "name": "otlp", "pipeline": "logs"}
2023-02-23T10:24:57.385+0100 info service/pipelines.go:114 Receiver is starting... {"kind": "receiver", "name": "otlp", "pipeline": "metrics"}
2023-02-23T10:24:57.387+0100 info service/pipelines.go:118 Receiver started. {"kind": "receiver", "name": "otlp", "pipeline": "traces"}
2023-02-23T10:24:57.388+0100 info service/service.go:145 Everything is ready. Begin running and processing data.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Instrumenting your dotnet core application
&lt;/h1&gt;

&lt;p&gt;When you want to collect traces, metrics and logs, you have to make some changes to your codebase before they can be sent to e.g. Datadog with OpenTelemetry. Yes, there are ways to do the same without code changes, but then you are not using the OpenTelemetry standard, but a vendor-specific implementation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Collecting logs
&lt;/h2&gt;

&lt;p&gt;Collecting logs is the easiest part if you are already using the &lt;code&gt;ILogger&lt;/code&gt; interfaces that dotnet core provides. The NuGet packages to add to your project are listed below. Remember that they are still in beta/preview, but that shouldn’t hold you back.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.nuget.org/packages/OpenTelemetry.Exporter.OpenTelemetryProtocol.Logs/1.4.0-rc.3" rel="noopener noreferrer"&gt;OpenTelemetry.Exporter.OpenTelemetryProtocol.Logs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nuget.org/packages/OpenTelemetry.Exporter.Console/1.4.0-rc.3" rel="noopener noreferrer"&gt;OpenTelemetry.Exporter.Console&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After you’ve added the packages, you need to extend the service collection in the &lt;code&gt;startup.cs&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void ConfigureServices(IServiceCollection services)
{
    ... // other service registrations etc
    services.AddLogging(loggingBuilder =&amp;gt;
        loggingBuilder.AddOpenTelemetry(otelLoggerOptions =&amp;gt;
        {
            otelLoggerOptions.IncludeFormattedMessage = true;
            otelLoggerOptions.IncludeScopes = true;
            otelLoggerOptions.ParseStateValues = true;
            otelLoggerOptions.AddConsoleExporter();
            otelLoggerOptions.AddOtlpExporter();
        })
    );
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards you can log like you did before, by injecting the &lt;code&gt;ILogger&amp;lt;&amp;gt;&lt;/code&gt; or &lt;code&gt;ILoggerFactory&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class MyLoggingController
{
    public MyLoggingController(ILogger&amp;lt;MyLoggingController&amp;gt; logger)
    {
        logger.LogInformation("The logging class is created");
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the result should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LogRecord.Timestamp: 2023-02-23T09:39:17.1029870Z
LogRecord.TraceId: 87f9c6ac8d1cc7aea9c105c866a5095b
LogRecord.SpanId: a0d1d3ed1cbd01d0
LogRecord.TraceFlags: Recorded
LogRecord.CategoryName: MyApp.Controllers.v1.MyLoggingController
LogRecord.LogLevel: Information
LogRecord.FormattedMessage: The logging class is created
LogRecord.StateValues (Key:Value):
    OriginalFormat (a.k.a Body): The logging class is created
LogRecord.ScopeValues (Key:Value):
[Scope.0]:SpanId: a0d1d3ed1cbd01d0
[Scope.0]:TraceId: 87f9c6ac8d1cc7aea9c105c866a5095b
[Scope.0]:ParentId: 0000000000000000
[Scope.1]:ConnectionId: 0HMOLG7A3HTFA
[Scope.2]:RequestId: 0HMOLG7A3HTFA:00000001
[Scope.2]:RequestPath: /api/v1/default
[Scope.3]:ActionId: a94a60be-d09e-425e-85d9-c5111ca61cf5
[Scope.3]:ActionName: MyApp.Controllers.v1.MyLoggingController.GetAsync (MyApp)

Resource associated with LogRecord:
service.name: MyApp
service.instance.id: cbfe9734-9fe4-4711-9ea2-fdfe45f2251e

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Collecting Traces
&lt;/h2&gt;

&lt;p&gt;Traces are generated by default whenever your web application receives a request, so you don’t have to create them manually. You do have to capture them and send them to the OpenTelemetry Collector.&lt;/p&gt;

&lt;p&gt;Enabling this for OpenTelemetry can also be done by extending the service collection. You will see in the configuration that we enable 3 kinds of instrumentation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AspNetCore&lt;/li&gt;
&lt;li&gt;SqlClient&lt;/li&gt;
&lt;li&gt;HTTPClient&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The required NuGet packages are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNetCore" rel="noopener noreferrer"&gt;OpenTelemetry.Instrumentation.AspNetCore&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http" rel="noopener noreferrer"&gt;OpenTelemetry.Instrumentation.Http&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nuget.org/packages/OpenTelemetry.Instrumentation.SqlClient" rel="noopener noreferrer"&gt;OpenTelemetry.Instrumentation.SqlClient&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nuget.org/packages/OpenTelemetry.Exporter.OpenTelemetryProtocol" rel="noopener noreferrer"&gt;OpenTelemetry.Exporter.OpenTelemetryProtocol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nuget.org/packages/OpenTelemetry.Exporter.Console" rel="noopener noreferrer"&gt;OpenTelemetry.Exporter.Console&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nuget.org/packages/OpenTelemetry.Extensions.Hosting" rel="noopener noreferrer"&gt;OpenTelemetry.Extensions.Hosting&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instrumentation will be added automatically afterwards. If you think it is too much, you can start filtering in the collector.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  var resourceBuilder = ResourceBuilder
                            .CreateDefault()
                            .AddService("MyService");

  services
    .AddOpenTelemetry()
    .WithTracing(openTelemetryBuilder =&amp;gt;
    {   
        openTelemetryBuilder
            .AddConsoleExporter()
            .AddOtlpExporter()
            .AddSource("MyService.*")
            .ConfigureResource(resourceBuilder)
            .AddAspNetCoreInstrumentation()
            .AddSqlClientInstrumentation(
                options =&amp;gt; options.SetDbStatementForText = true
            )
            .AddHttpClientInstrumentation(
                options =&amp;gt; options.RecordException = true);
                })

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run the application with the above configuration, you will notice that logs and traces are exported via OTLP, but also to your console. The console exporter is still useful for local development.&lt;/p&gt;

&lt;p&gt;The output on your console should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Activity.TraceId: 178afd54db82d620af89f346dd26eeae
Activity.SpanId: 1d3093c32ceb22be
Activity.TraceFlags: Recorded
Activity.ActivitySourceName: OpenTelemetry.Instrumentation.AspNetCore
Activity.DisplayName: api/v{version:apiVersion}/{tenant:length(1,100)}/blogs
Activity.Kind: Server
Activity.StartTime: 2023-02-23T09:33:42.4927438Z
Activity.Duration: 00:00:07.9412154
Activity.Tags:
    net.host.name: localhost
    net.host.port: 44381
    http.method: GET
    http.scheme: https
    http.target: /api/v1/default/blogs
    http.url: https://localhost:44381/api/v1/default/blogs
    http.flavor: 2.0
    http.user_agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36
    http.route: api/v{version:apiVersion}/{tenant:length(1,100)}/blogs
    http.status_code: 200
    Principal: 0aa00995-cb51-49ed-b2c6-5e5696170db9
Resource associated with Activity:
    service.name: MyApp
    service.instance.id: 43af6b39-2e41-475d-857c-3b3dd3581748

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
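&lt;p&gt;The collector pipeline above also routes metrics, which we haven’t instrumented yet. Below is a minimal sketch of how that can be wired up in the same service collection via &lt;code&gt;WithMetrics&lt;/code&gt;, assuming the packages listed above plus &lt;code&gt;OpenTelemetry.Instrumentation.Runtime&lt;/code&gt; for the runtime metrics (the &lt;code&gt;resourceBuilder&lt;/code&gt; is the one defined for the traces).&lt;/p&gt;

```csharp
services
    .AddOpenTelemetry()
    .WithMetrics(metricsBuilder =&amp;gt;
    {
        metricsBuilder
            .SetResourceBuilder(resourceBuilder)   // reuse the same service resource as the traces
            .AddAspNetCoreInstrumentation()        // request count and duration metrics
            .AddRuntimeInstrumentation()           // GC, thread pool, ... (extra package)
            .AddConsoleExporter()
            .AddOtlpExporter();
    });
```

&lt;p&gt;These metrics then flow through the &lt;code&gt;metrics&lt;/code&gt; pipeline of the collector, just like the logs and traces.&lt;/p&gt;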



&lt;h2&gt;
  
  
  Remarks on the OTLP exporter
&lt;/h2&gt;

&lt;p&gt;In my situation I’ve deployed my collector in the cloud, but I had some trouble connecting to it. I noticed that data was being sent towards the collector, but it didn’t end up there. After some digging with Wireshark I saw that my deployed collector returned a 404 error status code. I got it working by adding the correct path and the correct protocol.&lt;/p&gt;

&lt;p&gt;Paths for the OTLP exporter:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Signal&lt;/th&gt;
&lt;th&gt;Path&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Metric&lt;/td&gt;
&lt;td&gt;v1/metrics&lt;/td&gt;
&lt;td&gt;&lt;a href="http://mydeployedcollector.com:4318/v1/metrics" rel="noopener noreferrer"&gt;http://mydeployedcollector.com:4318/v1/metrics&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logs&lt;/td&gt;
&lt;td&gt;v1/logs&lt;/td&gt;
&lt;td&gt;&lt;a href="http://mydeployedcollector.com:4318/v1/logs" rel="noopener noreferrer"&gt;http://mydeployedcollector.com:4318/v1/logs&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Traces&lt;/td&gt;
&lt;td&gt;v1/traces&lt;/td&gt;
&lt;td&gt;&lt;a href="http://mydeployedcollector.com:4318/v1/traces" rel="noopener noreferrer"&gt;http://mydeployedcollector.com:4318/v1/traces&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.AddOtlpExporter(oltpexporter =&amp;gt;
{
        oltpexporter.Endpoint = new System.Uri($"{_configuration["OpenTelemetry:EndPoint"]}/v1/traces");
        oltpexporter.Protocol = OpenTelemetry.Exporter.OtlpExportProtocol.HttpProtobuf;
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this configuration you will be able to use OpenTelemetry and, by doing so, gain more and more insight into your application, which might lead to faster bug resolution and fewer issues. Thank you for reading along. If you have a comment or found an issue, ping me on Twitter or leave it over &lt;a href="https://github.com/robvanpamel/robvanpamel.github.io/issues/new" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;p&gt;Some resources that haven’t been shared yet are listed here.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://opentelemetry.io/docs/collector/" rel="noopener noreferrer"&gt;https://opentelemetry.io/docs/collector/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md" rel="noopener noreferrer"&gt;https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://opentelemetry.io/docs/collector/configuration/#processors" rel="noopener noreferrer"&gt;https://opentelemetry.io/docs/collector/configuration/#processors&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>opentelemetry</category>
      <category>datadog</category>
      <category>dotnet</category>
      <category>csharp</category>
    </item>
    <item>
      <title>Looking back to NDC London 2023</title>
      <dc:creator>Rob Van Pamel</dc:creator>
      <pubDate>Mon, 30 Jan 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/robvanpamel/looking-back-to-ndc-london-2023-347b</link>
      <guid>https://dev.to/robvanpamel/looking-back-to-ndc-london-2023-347b</guid>
      <description>&lt;p&gt;Last week me and my colleagues went to the NDC conference in London. This NDC conference is a 3 day conference where you can attend several talks related to different topics like architecture, cloud, testing, .NET and many more.&lt;/p&gt;

&lt;p&gt;Each day you can choose between 7 different tracks, whichever appeals to you most. If you like hands-on work, there are workshops available during the conference.&lt;br&gt;&lt;br&gt;
My goal was to get up to date with the latest architecture patterns and to look at some AWS tracks.&lt;/p&gt;

&lt;p&gt;3 of my favourite sessions are listed below.&amp;lt;!--more--&amp;gt; There is much more good content, but I can’t list it all, of course.&lt;/p&gt;

&lt;h3&gt;
  
  
  Intentional Code - Minimalism in a World of Dogmatic Design by David Whitney
&lt;/h3&gt;

&lt;p&gt;During this talk David told us how we can create a better design for our applications. Not design in the sense of adding design patterns or best practices, but design closer to the fundamentals: looking at how we as developers feel when we look at code. Do we want to close the file, or can we understand the flow at a glance? Most of the examples aren’t hard to grasp or complicated, but they are so easy to overlook. It starts very simply, for example by adding some newlines to the file, or as I call it, “giving your code some air to breathe”. This can already improve the readability a lot! But you can go a few steps further: can this code be minified and simplified? Look closely at whether you really need that additional level of abstraction that was added. Most likely you can remove it and don’t need it. This lowers the cognitive load and makes the code easier to change, which in the end leads to a better architecture. The talk continues with more examples, and a good reading reference would be “Code That Fits in Your Head”.&lt;/p&gt;

&lt;h3&gt;
  
  
  A perfect match: Dapr &amp;amp; Azure Container Apps by Sander Molenkamp
&lt;/h3&gt;

&lt;p&gt;In this talk I got a good overview of the different possibilities that Dapr provides and how it can be used in combination with Azure Container Apps. My knowledge of Dapr was very limited, so it was easy to overwhelm me. You can use Dapr to enable services like pub/sub, state management, secret management, … You can use the Dapr defaults, but in combination with Azure you can hook it up to Azure services like Azure Service Bus, Blob Storage, etc., which makes it very powerful in my opinion. A service that AWS is missing, I think, although you can add Dapr yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  Don’t Build a Distributed Monolith: How to Avoid Doing Microservices Completely Wrong by Jonathan “J.” Tower
&lt;/h3&gt;

&lt;p&gt;Jonathan provided the audience with a top 10 of things to avoid when building microservices. Although I think almost all of them can be found in the book “Building Microservices” by Sam Newman, I still think this is a good reminder! Let me sum some of them up for you; to learn all of them, you should go and listen to his talk:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assuming microservices are always better&lt;/li&gt;
&lt;li&gt;Sharing a database between microservices&lt;/li&gt;
&lt;li&gt;Making microservices too small&lt;/li&gt;
&lt;li&gt;Starting your microservices from scratch&lt;/li&gt;
&lt;li&gt;Coupling through cross-cutting concerns&lt;/li&gt;
&lt;li&gt;Using synchronous communication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once again, there were a lot of other interesting sessions, but it would take way too long to list them all. I must admit that the overall quality of the talks at the conference was very high. And looking back also includes the nice lunches we had, in combination with a good party from the line-breakers on Thursday.&lt;/p&gt;

&lt;p&gt;That will be it for today, see you next time!&lt;/p&gt;

</description>
      <category>ndc</category>
      <category>education</category>
      <category>learning</category>
    </item>
    <item>
      <title>Use vscode web editor in Azure DevOps or Github repositories</title>
      <dc:creator>Rob Van Pamel</dc:creator>
      <pubDate>Sat, 26 Nov 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/robvanpamel/use-vscode-web-editor-in-azure-devops-or-github-repositories-57ik</link>
      <guid>https://dev.to/robvanpamel/use-vscode-web-editor-in-azure-devops-or-github-repositories-57ik</guid>
      <description>&lt;p&gt;I am already a fan of visual studio code, but recently I discovered some features which made it only better.&lt;/p&gt;

&lt;p&gt;This blog on GitHub is created mainly using Visual Studio Code. But actually I don’t have Visual Studio Code installed: I’m editing it in my browser, on my iPad, from the kitchen! I love the “new” web editor in GitHub, and I really love the fact that you can now start editing from anywhere, anytime, with almost no prerequisites. &amp;lt;!--more--&amp;gt;You can launch vscode in your browser by pressing the dot &lt;code&gt;.&lt;/code&gt; on your keyboard, and a new environment will be created where your complete repository is cloned and ready to change. How great is that?!&lt;/p&gt;

&lt;p&gt;The best part is that you can do this not only on GitHub but also on Azure DevOps! Go to your repository in the browser, press the dot &lt;code&gt;.&lt;/code&gt;, and the magic happens. For big changes it might not be the ideal environment, but if you want to make a small change or file a PR, it might just suit your needs.&lt;br&gt;&lt;br&gt;
If you want an overview of the other options, press the question mark &lt;code&gt;?&lt;/code&gt; and you get the overview. It is something small, but I found it so useful that I needed to share it with you!&lt;/p&gt;

&lt;p&gt;If you have any questions or comments (including typos ;) ) please leave them over &lt;a href="https://github.com/robvanpamel/robvanpamel.github.io/issues/new"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading and learning along! See you next time!&lt;/p&gt;

&lt;h3&gt;
  
  
  References about vscode web
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://code.visualstudio.com/docs/editor/vscode-web"&gt;https://code.visualstudio.com/docs/editor/vscode-web&lt;/a&gt; ( here, it still mentions readonly for Azure Repos, but this is outdated )&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>vscode</category>
      <category>github</category>
    </item>
    <item>
      <title>Accessing blob storage using User Managed Identities in Azure</title>
      <dc:creator>Rob Van Pamel</dc:creator>
      <pubDate>Mon, 21 Nov 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/robvanpamel/accessing-blob-storage-using-user-managed-identities-in-azure-188n</link>
      <guid>https://dev.to/robvanpamel/accessing-blob-storage-using-user-managed-identities-in-azure-188n</guid>
      <description>&lt;p&gt;In my &lt;a href="https://robvanpamel.github.io/2022/10/31/ManagedIdentities.html" rel="noopener noreferrer"&gt;previous blog post&lt;/a&gt;, the benefits of Managed Identities are handled. As mentioned over there, they increase the security inside your Azure environment. Now we will take this theorie into practice and start working with it. We’ll create an azure function which access a storage account and writes a stream to it, by using the user Managed Identity. &amp;lt;!--more--&amp;gt;&lt;/p&gt;

&lt;p&gt;This blog post has two parts. The first part sets up the Azure environment: creating the Azure Function, the managed identities, etc. In the second part, the application is updated to use the user-assigned managed identity to write the stream to blob storage.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Azure environment
&lt;/h2&gt;

&lt;p&gt;Creating the Azure environment will be done with Bicep. Let’s start with the Azure Function. The Azure Function requires at least an App Service Plan, a storage account and of course a function app.&lt;/p&gt;

&lt;p&gt;The service plan is created on the &lt;code&gt;Dynamic&lt;/code&gt; tier and marked as &lt;code&gt;reserved&lt;/code&gt;, which enables running on a Linux environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = {
    location: location
    name: 'asp-blog-managed-identities'
    kind: 'linux'
    sku:{
        name: 'Y1'
        tier: 'Dynamic'
    }
    properties:{
        reserved: true
    }
    tags:{
        blog: 'blog-managed-identities-storage'
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The function app and its linked storage account are described below. The function app is attached to the App Service Plan via the &lt;code&gt;serverFarmId&lt;/code&gt;. The purpose of the AzureWebJobsStorage setting listed in the appSettings is to store the binaries of the function; it isn’t involved in the further process of accessing data via a user-assigned managed identity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource functionApp 'Microsoft.Web/sites@2022-03-01'={
name: 'fct-blobwriter'
location: location
kind: 'functionapp'
identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
        '${usermanagedIdentity.id}': {}
    }
}  
properties:{
    serverFarmId: appServicePlan.id
    siteConfig: {
        appSettings: [
            {
                name: 'AzureWebJobsStorage'
                value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
            }
            {
                name: 'FUNCTIONS_EXTENSION_VERSION'
                value: '~4'
            }
            {
                name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
                value: applicationInsights.properties.InstrumentationKey
            }
            {
                name: 'FUNCTIONS_WORKER_RUNTIME'
                value: 'dotnet-isolated'
            }
            {
                name: 'AZURE_CLIENT_ID'
                value: usermanagedIdentity.properties.clientId 
            }    
        ]
        minTlsVersion: '1.2'
    }
    httpsOnly: true
}
}

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-06-01' = {
    name: storageAccountName
    location: location
    sku: {
        name: storageAccountType
    }
    kind: 'StorageV2'
    properties: {
    }
    tags:{
        blog: 'blog-managed-identities-storage'
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the identity section, a collection of user-assigned managed identities is added. In this example only one is added.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
        '${usermanagedIdentity.id}': {}
    }
}  

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next to this, the app settings of the function specify which user-assigned managed identity to use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    name: 'AZURE_CLIENT_ID'
    value: usermanagedIdentity.properties.clientId 
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the function is created, the next step is to create a user-assigned managed identity, a role and a role assignment.&lt;/p&gt;
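
&lt;p&gt;The user-assigned managed identity itself, which the snippets above reference as &lt;code&gt;usermanagedIdentity&lt;/code&gt;, can be declared with a few lines of Bicep. A minimal sketch (the resource name is an assumption for this example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource usermanagedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' = {
    name: 'id-blog-managed-identities'
    location: location
    tags: {
        blog: 'blog-managed-identities-storage'
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;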

&lt;p&gt;The role is defined below; for simplicity, an existing built-in role is used here. You can find a full list of built-in roles with their corresponding GUIDs over &lt;a href="https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#contributor" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@description('This is the built-in Storage Blob Data Contributor role. See https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#contributor')
resource contributorRoleDefinition 'Microsoft.Authorization/roleDefinitions@2018-01-01-preview' existing = {
  scope: subscription()
  name: 'ba92f5b4-2d11-453d-a403-e96b0029c9fe'
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once these are created, it is time to add the role assignment, which couples the user-assigned managed identity and the role definition together.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
    name: guid(resourceGroup().id, usermanagedIdentity.id, contributorRoleDefinition.id)
    properties:{ 
        principalId: usermanagedIdentity.properties.principalId 
        roleDefinitionId: contributorRoleDefinition.id
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
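
&lt;p&gt;Note that, as written, the role assignment applies at the scope the template is deployed to (the resource group). If you want to follow the principle of least privilege more strictly, you could scope it to the storage account itself. A possible variant (property names as in the resources above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
    name: guid(storageAccount.id, usermanagedIdentity.id, contributorRoleDefinition.id)
    scope: storageAccount
    properties: {
        principalId: usermanagedIdentity.properties.principalId
        roleDefinitionId: contributorRoleDefinition.id
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;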



&lt;p&gt;Now it is time to move over to the application to start using this.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Application
&lt;/h2&gt;

&lt;p&gt;Before we start using the user-assigned managed identity, let’s see how this is typically done without one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Without User Managed Identity
&lt;/h3&gt;

&lt;p&gt;When no managed identity is used, a SAS token is created in the Azure portal to gain access to the blob container. An important aspect of working with a SAS token is storing it in a safe manner, e.g. in Azure Key Vault.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static async Task WriteWithSas()
{
    var connectionString =
            $"BlobEndpoint=https://{BLOB_ACCOUNT}.blob.core.windows.net/;{BLOB_SASTOKEN}";
    var client = new BlobContainerClient(connectionString, BLOB_CONTAINER);
    ... 
    ... // stream stuff goes here
    ...
    return await blobContainerClient.UploadBlobAsync($"file-{DateTime.UtcNow:yyyy-MM-dd-HH-mm-ss}.txt", stream);
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Using User Managed Identity
&lt;/h3&gt;

&lt;p&gt;When using the user-assigned managed identity, we don’t have to securely fetch any secrets; we can just use the identity, which is the whole idea of making it safer. You’ll notice that there is no SAS token or other secret involved when creating the client. The difference is the use of the DefaultAzureCredential. The DefaultAzureCredential will use the user-assigned managed identity which is specified in the &lt;code&gt;AZURE_CLIENT_ID&lt;/code&gt; app setting of the function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;string containerEndpoint = $"https://{BLOB_ACCOUNT}.blob.core.windows.net/{BLOB_CONTAINER}";

var client = new BlobContainerClient(new Uri(containerEndpoint), new DefaultAzureCredential());
... 
... // stream stuff goes here
...
return await blobContainerClient.UploadBlobAsync($"file-{DateTime.UtcNow:yyyy-MM-dd-HH-mm-ss}.txt", stream);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
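
&lt;p&gt;If you prefer not to rely on the &lt;code&gt;AZURE_CLIENT_ID&lt;/code&gt; environment variable, the Azure.Identity library also lets you pass the client id explicitly in code. A small sketch (the &lt;code&gt;MANAGED_IDENTITY_CLIENT_ID&lt;/code&gt; constant is an assumption for this example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Explicitly select the user-assigned managed identity by its client id
var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions
{
    ManagedIdentityClientId = MANAGED_IDENTITY_CLIENT_ID
});

var blobContainerClient = new BlobContainerClient(new Uri(containerEndpoint), credential);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;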



&lt;p&gt;With this in place, the Azure Function is allowed to access the blob storage and write a stream to it!&lt;/p&gt;

&lt;p&gt;The complete example can be found over &lt;a href="https://github.com/robvanpamel/blogs-code/tree/main/2022-ManagedIdentities" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any questions or comments (including typos ;) ) please leave them over &lt;a href="https://github.com/robvanpamel/robvanpamel.github.io/issues/new" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading and learning along! See you next time!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>managedidentities</category>
      <category>bicep</category>
      <category>blobstorage</category>
    </item>
    <item>
      <title>Improve security with Managed Identities in Azure</title>
      <dc:creator>Rob Van Pamel</dc:creator>
      <pubDate>Mon, 31 Oct 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/robvanpamel/improve-security-with-managed-identities-in-azure-4caa</link>
      <guid>https://dev.to/robvanpamel/improve-security-with-managed-identities-in-azure-4caa</guid>
      <description>&lt;p&gt;When you want to access a resource in Azure like a storage account or a SQL database, there are multiple options available. SharedAccessKeys and connection strings are the most popular one we have nowadays. Who hasn’t used a connectionstring to connect to the sql-database? However, these solutions are based on what you could call “shared credentials” which are not always the most secure way. You have to store the credentials in safe manner, so it doesn’t get compromised.&lt;/p&gt;

&lt;p&gt;This is where Azure Key Vault is the obvious solution. The credentials which must remain secret can be stored in a secure way in a key vault. However, even though a secret is stored safely, there is still a possibility that it leaks or gets compromised by human error. On top of that, you still need to rotate the secrets, which puts load on the responsible team.&lt;/p&gt;

&lt;p&gt;In most cases, the resource which we want to access runs in the same Azure environment as the resource &amp;lt;!--more--&amp;gt; that would like to access it. So it should be possible to provide access without the hassle of shared credentials. The answer to this challenge is using managed identities.&lt;/p&gt;

&lt;h1&gt;
  
  
  Azure Managed Identities
&lt;/h1&gt;

&lt;p&gt;Managed identities make it possible to connect multiple resources without the management of secrets or credentials. You don’t even have access to the secret or credential which is used. Azure managed identities can be used for each service or resource that supports &lt;a href="https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/managed-identities-status" rel="noopener noreferrer"&gt;Azure AD authentication&lt;/a&gt;. And the best part for last: you can use them for free!&lt;/p&gt;

&lt;h3&gt;
  
  
  User-assigned Managed Identity
&lt;/h3&gt;

&lt;p&gt;A user-assigned managed identity is an identity which can be shared across multiple resources. It is not attached to a single resource, which means that multiple resources can use the same managed identity to access a given resource.&lt;/p&gt;

&lt;h3&gt;
  
  
  System-assigned Managed Identity
&lt;/h3&gt;

&lt;p&gt;A system-assigned managed identity is an identity which is directly attached to a given resource. It is created together with the resource and if the resource is removed, the managed identity is removed as well.&lt;/p&gt;

&lt;p&gt;Managed identities, system-assigned or user-assigned, only define who has the permission to do an action. What can be done, and where, is not defined yet; this is defined inside a role assignment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Role assignments
&lt;/h2&gt;

&lt;p&gt;A role assignment in Azure is a definition of 3 principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What can be done - defined in the role and its permissions&lt;/li&gt;
&lt;li&gt;Who can do it - defined in the principal&lt;/li&gt;
&lt;li&gt;Where can it be done - defined in the scope&lt;/li&gt;
&lt;/ul&gt;
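
&lt;p&gt;To make this concrete: with the Azure CLI, all three principles show up as parameters of a single command. A sketch (the values in angle brackets are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Who:   --assignee (the principal)
# What:  --role     (the role and its permissions)
# Where: --scope    (where the assignment applies)
az role assignment create \
  --assignee "&lt;principal-object-id&gt;" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/&lt;subscription-id&gt;/resourceGroups/&lt;resource-group&gt;"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;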

&lt;h3&gt;
  
  
  Role
&lt;/h3&gt;

&lt;p&gt;As mentioned above, the role defines what can be done through a list of permissions; e.g. the Storage Blob Data Reader can read from blob storage, but is not able to write to it. Multiple built-in roles are available, but custom roles can be created as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Principal
&lt;/h3&gt;

&lt;p&gt;The principal can be a managed identity, as discussed here, but it can also be a user or a group.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scope
&lt;/h3&gt;

&lt;p&gt;The scope is where the principal can use the role, for example in a given resource group or in the subscription.&lt;/p&gt;

&lt;p&gt;Role assignments are maintained individually, for user-assigned as well as for system-assigned identities. This implies that when a system-assigned or a user-assigned identity is removed, its role assignments still need to be cleaned up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployments and Managed Identities
&lt;/h2&gt;

&lt;p&gt;Although both user-assigned and system-assigned managed identities can serve the same needs, user-assigned managed identities have my preference. This is mainly from a practical perspective when deploying resources together with the managed identities for those resources. When using system-assigned managed identities, the number of required role assignments can increase very rapidly. Be aware that there is a limit on the number of role assignments you can create.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw18foaegu3pigbawsgc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw18foaegu3pigbawsgc2.png" alt="system-assigned identities" width="577" height="406"&gt;&lt;/a&gt;&lt;em&gt;system assigned identities - 8 role assignements&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the example above (image from Microsoft), already 8 role assignments need to be created. When working with user-assigned identities, only 2 are required, see below. This already makes life a bit easier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs668x8i42o5m9s2usvco.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs668x8i42o5m9s2usvco.png" alt="user-assigned identities" width="591" height="406"&gt;&lt;/a&gt;&lt;em&gt;user assigned identities - 2 role assignements&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Another flexibility of user-assigned managed identities is that several of them can be combined on one resource. We could combine a ‘shared user-assigned identity’ with a more specific user-assigned identity. The latter is also possible when combining system-assigned and user-assigned identities.&lt;/p&gt;

&lt;p&gt;It is important to note that combining several user-assigned identities should still adhere to the principle of least privilege.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use of user-assigned or system-assigned managed identities?
&lt;/h2&gt;

&lt;p&gt;My preference goes to user-assigned over system-assigned managed identities. They are more flexible, as they can be used across multiple resources. This improves the developer experience, while it isn’t less secure compared to system-assigned managed identities.&lt;/p&gt;

&lt;p&gt;User-assigned managed identities have their own lifecycle, which simplifies the creation of resources and resource groups, while the lifecycle of a system-assigned managed identity is attached to its resource. The latter can require multiple runs of a pipeline before a role assignment can be done, because you can’t create a role assignment for a resource which isn’t created yet. With a user-assigned managed identity, the role assignments can be created before the resources are, which makes it easier to use.&lt;/p&gt;

&lt;p&gt;That’s it for today, thanks for reading along! In the next blog post I’ll explain in more detail how you can create user-assigned identities and how to use them to access a storage account from an app service. See you there!&lt;/p&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>managed</category>
      <category>identities</category>
    </item>
    <item>
      <title>The next steps in Github Pages</title>
      <dc:creator>Rob Van Pamel</dc:creator>
      <pubDate>Sat, 15 Oct 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/robvanpamel/the-next-steps-in-github-pages-4kgp</link>
      <guid>https://dev.to/robvanpamel/the-next-steps-in-github-pages-4kgp</guid>
      <description>&lt;p&gt;After the first steps follows … the next steps. I created the first blogpost and made it public, I already realised that it wasn’t ready yet to be launched. It was way too early, but at least it was something.&lt;/p&gt;

&lt;p&gt;So while waiting &amp;lt;!--more--&amp;gt; at the airport for my next flight, I continued to work on it. What needed to be done? A lot of things!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The title of the site was the one from the repository&lt;/li&gt;
&lt;li&gt;I didn’t have the social media links&lt;/li&gt;
&lt;li&gt;I couldn’t get an excerpt of my blog post on the main page&lt;/li&gt;
&lt;li&gt;…&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Adding social media links
&lt;/h2&gt;

&lt;p&gt;Adding these social media links didn’t go as fast as I expected. It turned out I was using an ‘old’ version of the minima theme; due to this, the settings which I applied in the &lt;code&gt;_config.yml&lt;/code&gt; didn’t apply.&lt;/p&gt;

&lt;p&gt;You can always refer to the remote theme from GitHub by using a plugin. After changing my setup to use that remote theme, I got a step further, and after tweaking a bit more the social media links worked. See the &lt;code&gt;_config.yml&lt;/code&gt; below for more information.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minima:
  social_links:
    twitter: youraccount
    github: youraccount
    linkedin: youraccount-878796454

remote_theme: jekyll/minima
plugins:
  - jekyll-remote-theme

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Adding titles
&lt;/h2&gt;

&lt;p&gt;By default the title and description of your page are the name and description of the repository, which is mostly not what you want. So let’s change this as well. I saw multiple options online, but not all of them worked; this one worked for me in the &lt;code&gt;_config.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;title: Rob Van Pamel
description: Rob Van Pamel

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Adding Excerpts
&lt;/h2&gt;

&lt;p&gt;Another default setting of the minima theme is that no excerpt is shown on your posts page, only the title. I like it that you can already see a bit of the post, so I enabled that as well in the&lt;br&gt;&lt;br&gt;
&lt;code&gt;_config.yml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;show_excerpts: true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards, add the excerpt separator in your blog post header and of course also in the blog post itself. Here is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;title: The next steps in Github Pages
date: 2022-10-15   
excerpt_separator: &amp;lt;!--more--&amp;gt;
---
After the first steps follows ... the next steps. I created the first blogpost and made it public, I already realised that it wasn't ready yet to be launched. It was way too early, but at least it was something. 

So while waiting &amp;lt;!--more--&amp;gt; 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That will be it for now. In the next one, I would like to change some more smaller things: add a favicon, look at SEO, add tags to the blog posts, and so on.&lt;/p&gt;

</description>
      <category>github</category>
      <category>jekyll</category>
    </item>
    <item>
      <title>The first steps in Github Pages</title>
      <dc:creator>Rob Van Pamel</dc:creator>
      <pubDate>Fri, 14 Oct 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/robvanpamel/the-first-steps-in-github-pages-3i95</link>
      <guid>https://dev.to/robvanpamel/the-first-steps-in-github-pages-3i95</guid>
      <description>&lt;p&gt;&lt;em&gt;“Are you serious, Rob, you don’t know how to create Github pages?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Yes, I never looked into it, to be honest. I know you can create them with regular HTML, CSS and some JS, but I want to work on something new, so I’ll start with Jekyll and markdown.&lt;/p&gt;

&lt;p&gt;I want something that is easy to maintain, so I am able to proceed with it without too much effort and overhead. Jekyll and markdown seem to be a good fit. &amp;lt;!--more--&amp;gt; But I’ve never used GitHub Pages before, so it’s all new for me. As the purpose of the GitHub pages is to serve as a blog, it’s a very good starting point. Alright, let’s dive into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Specify a theme
&lt;/h2&gt;

&lt;p&gt;The first thing you need to do is add a _config.yml, which looks like this in my case. It just specifies the theme and the installed plugins. As a menu is required, the jekyll-menus plugin is added over here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;theme: minima
plugins:
- jekyll-menus

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This already gives a clean look to your page, although it isn’t fancy yet. But that is something that will be fixed later; content over presentation, right? For those who know me: I’m not that frontend guy who works on that slick web page. I’m the one sitting quietly in the back trying to figure out why we experience too high a latency 😏.&lt;/p&gt;

&lt;p&gt;Okay, the next step is adding the blog posts themselves, which is actually pretty easy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding blog posts
&lt;/h2&gt;

&lt;p&gt;A blog post can be added very easily by adding a new markdown page in a folder called &lt;code&gt;_posts&lt;/code&gt;. The only convention you need to be aware of is the name of your file, which requires the format &lt;code&gt;&amp;lt;YYYY-MM-DD-TitleOfYourBlogPost&amp;gt;&lt;/code&gt;. In this example, it is &lt;code&gt;2022-10-15-CreatingGitHubPages&lt;/code&gt;, so that ain’t too hard, is it?&lt;/p&gt;
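
&lt;p&gt;The resulting folder structure then looks something like this (the file names are examples):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── _config.yml
└── _posts
    └── 2022-10-15-CreatingGitHubPages.md

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;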

&lt;p&gt;At the top of your blog post, you can add more metadata, for example the title and date, which can be used by some plugins. See the example below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
title: "The first steps in Github Pages"
date: 2022-10-14
---

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  About page
&lt;/h2&gt;

&lt;p&gt;Every blog needs an &lt;code&gt;about&lt;/code&gt; page, so this one as well. This can be done by adding a new markdown page where you specify all the contents. If you want the page to be accessible, you need to tell Jekyll in which menu you want it to appear. That can be done by adding some metadata at the top of your page again. See the example below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
title: About
menus: header
---

I am a .NET consultant with a focus on architecture and cloud. I ....

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it for now, and I’ll explore some more about it, and will tell you more in an upcoming post! To be continued…&lt;/p&gt;

</description>
      <category>github</category>
      <category>jekyll</category>
    </item>
  </channel>
</rss>
