<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jonathan Mourtada</title>
    <description>The latest articles on DEV Community by Jonathan Mourtada (@jmourtada).</description>
    <link>https://dev.to/jmourtada</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F24699%2Fa9e4c938-3755-467c-8135-63511bbfc995.jpg</url>
      <title>DEV Community: Jonathan Mourtada</title>
      <link>https://dev.to/jmourtada</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jmourtada"/>
    <language>en</language>
    <item>
      <title>OpenTelemetry tracing with nodejs and express</title>
      <dc:creator>Jonathan Mourtada</dc:creator>
      <pubDate>Fri, 11 Feb 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/jmourtada/opentelemetry-tracing-with-nodejs-and-express-1de7</link>
      <guid>https://dev.to/jmourtada/opentelemetry-tracing-with-nodejs-and-express-1de7</guid>
<description>&lt;p&gt;In this post I’ll go through a simple example of how to set up OpenTelemetry tracing in a Node.js Express application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hello world express
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// index.js
const express = require("express");

const app = express();
const port = 3000;

app.get("/", (req, res) =&amp;gt; {
  res.send("Hello World!");
});

app.listen(port, () =&amp;gt; {
  console.log(`Example app listening on port ${port}`);
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Needed packages
&lt;/h2&gt;

&lt;p&gt;First we need to add the OpenTelemetry API and SDK packages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn add @opentelemetry/api @opentelemetry/sdk-node

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To instrument Express as well as incoming and outgoing HTTP requests, we use the Express and HTTP instrumentation libraries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn add @opentelemetry/instrumentation-http @opentelemetry/instrumentation-express

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, we add the OTLP gRPC trace exporter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn add @opentelemetry/exporter-trace-otlp-grpc

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setup tracing
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// tracing.js
const OpenTelemetry = require("@opentelemetry/sdk-node");
const Resources = require("@opentelemetry/resources");
const SemanticConventions = require("@opentelemetry/semantic-conventions");
const InstrumentationHttp = require("@opentelemetry/instrumentation-http");
const InstrumentationExpress = require("@opentelemetry/instrumentation-express");
const ExporterTraceOtlpGrpc = require("@opentelemetry/exporter-trace-otlp-grpc");

const sdk = new OpenTelemetry.NodeSDK({
  resource: new Resources.Resource({
    [SemanticConventions.SemanticResourceAttributes.SERVICE_NAME]: "my-service",
  }),
  traceExporter: new ExporterTraceOtlpGrpc.OTLPTraceExporter({
    // url is optional and can be omitted - default is localhost:4317
    url: process.env["OTEL_EXPORTER_OTLP_ENDPOINT"] || undefined,
  }),
  instrumentations: [
    new InstrumentationHttp.HttpInstrumentation(),
    new InstrumentationExpress.ExpressInstrumentation(),
  ],
});

sdk.start();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Run the Express server
&lt;/h2&gt;

&lt;p&gt;To configure and start the tracing SDK we can use Node’s &lt;code&gt;-r/--require&lt;/code&gt; flag to preload our tracing module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node -r ./tracing.js index.js

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now every request will be traced and exported to the configured OTLP receiver.&lt;/p&gt;
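
&lt;p&gt;To try this out locally you can, for example, run an OpenTelemetry collector with Docker and send a request to the application. The collector image and the curl call below are examples, not part of the original setup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start a local collector listening on the default OTLP gRPC port (4317)
docker run --rm -p 4317:4317 otel/opentelemetry-collector

# In another terminal, start the app and generate a trace
node -r ./tracing.js index.js
curl http://localhost:3000/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;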

</description>
      <category>opentelemetry</category>
      <category>node</category>
      <category>express</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Create a simple azure function</title>
      <dc:creator>Jonathan Mourtada</dc:creator>
      <pubDate>Sat, 05 Feb 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/jmourtada/create-a-simple-azure-function-1952</link>
      <guid>https://dev.to/jmourtada/create-a-simple-azure-function-1952</guid>
<description>&lt;p&gt;In this post I’ll show you how to create a simple Azure Function that is triggered by an HTTP call and logs the requests to an &lt;code&gt;appendblob&lt;/code&gt; in Azure Blob Storage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a new project
&lt;/h2&gt;

&lt;p&gt;First of all you will need to create a project. I just used the Visual Studio project template for Azure Functions (the Azure SDK is needed). Choose to create an HTTP trigger function and set the authorization level to anonymous.&lt;/p&gt;
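
&lt;p&gt;If you prefer the command line, the Azure Functions Core Tools can scaffold an equivalent project. The names below are just examples.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func init MyAzureFunction --worker-runtime dotnet
cd MyAzureFunction
func new --template "HTTP trigger" --name MyAzureFunction --authlevel anonymous

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;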

&lt;h2&gt;
  
  
  The code
&lt;/h2&gt;

&lt;p&gt;Once created you will get a stub for a working HTTP trigger function. Below is an example of how to log the request to an appendblob.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Azure.Storage.Blobs;
using System.Globalization;
using Azure.Storage.Blobs.Specialized;
using Azure.Storage.Blobs.Models;
using System.Text;
using Microsoft.Extensions.Primitives;
using System.Linq;

namespace MyAzureFunction
{
  public static class MyAzureFunction
  {
    [FunctionName("MyAzureFunction")]
    public static async Task&amp;lt;IActionResult&amp;gt; Run(
      [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequest req,
      [Blob("my-blob-container")] BlobContainerClient container,
      ILogger log)
    {
      var culture = new CultureInfo("sv-SE");
      var clientIp = GetClientIp(req.Headers);

      var logStr = $"{DateTime.Now.ToString("T", culture)} {clientIp}\n";
      log.LogInformation(logStr);

      await container.CreateIfNotExistsAsync();
      var blobName = $"access_{DateTime.Now.ToString("d", culture).Replace("-", "")}.log";
      var appendBlob = container.GetAppendBlobClient(blobName);
      await appendBlob.CreateIfNotExistsAsync();
      await appendBlob.SetHttpHeadersAsync(new BlobHttpHeaders() { ContentType = "text/plain" });
      byte[] blockContent = Encoding.UTF8.GetBytes(logStr);
      using (var ms = new MemoryStream(blockContent))
      {
        await appendBlob.AppendBlockAsync(ms);
      }

      return new OkResult();
    }

    private static string GetClientIp(IHeaderDictionary headers)
    {
      headers.TryGetValue("X-Forwarded-For", out StringValues value);
      var clientIp = value.FirstOrDefault() ?? "missing-ip";
      return clientIp;
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first interesting part is &lt;code&gt;[Blob("my-blob-container")] BlobContainerClient container&lt;/code&gt;. This injects a &lt;code&gt;BlobContainerClient&lt;/code&gt; that we will use for our interactions with the blob storage of our configured Azure storage account.&lt;/p&gt;

&lt;p&gt;It is important to use the new &lt;code&gt;Azure.Storage.Blobs&lt;/code&gt; namespace. I had some trouble following old code examples that used deprecated namespaces and types. More information can be found at &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-output?tabs=csharp#additional-types"&gt;https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-output?tabs=csharp#additional-types&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The function creates an appendblob for every new date and sets the &lt;code&gt;content-type&lt;/code&gt; of the blob to &lt;code&gt;text/plain&lt;/code&gt;. It then appends a row to the blob for every incoming HTTP GET request.&lt;/p&gt;
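
&lt;p&gt;To inspect the log afterwards you can, for example, download the blob with the az CLI. The storage account name and the date in the blob name below are placeholders.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az storage blob download \
  --account-name mystorageaccount \
  --container-name my-blob-container \
  --name access_20220205.log \
  --file access.log

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;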

</description>
      <category>azure</category>
      <category>serverless</category>
      <category>azurefunctions</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>How to setup OpenTelemetry instrumentation in ASP.NET core</title>
      <dc:creator>Jonathan Mourtada</dc:creator>
      <pubDate>Sun, 30 Jan 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/jmourtada/how-to-setup-opentelemetry-instrumentation-in-aspnet-core-23p5</link>
      <guid>https://dev.to/jmourtada/how-to-setup-opentelemetry-instrumentation-in-aspnet-core-23p5</guid>
<description>&lt;p&gt;In this post I’ll show how to set up OpenTelemetry tracing, metrics and logging for ASP.NET Core on .NET 6.&lt;/p&gt;

&lt;h2&gt;
  
  
  New project
&lt;/h2&gt;

&lt;p&gt;Start by creating a new project. I’ll be using the webapi template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet new webapi --output opentelemetry-example

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add required packages
&lt;/h2&gt;

&lt;p&gt;Then we need to add the OpenTelemetry packages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet add package --prerelease OpenTelemetry.Exporter.Console
dotnet add package --prerelease OpenTelemetry.Extensions.Hosting
dotnet add package --prerelease OpenTelemetry.Instrumentation.AspNetCore

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first package is the console exporter, which outputs telemetry data to your local developer console. For a real project you would probably use something like the Jaeger or OTLP exporter.&lt;/p&gt;

&lt;p&gt;The Extensions.Hosting package includes the extension methods &lt;code&gt;AddOpenTelemetryTracing&lt;/code&gt; and &lt;code&gt;AddOpenTelemetryMetrics&lt;/code&gt;, which are used on &lt;code&gt;IServiceCollection&lt;/code&gt; to set up OpenTelemetry. It also adds the &lt;code&gt;LoggingBuilder&lt;/code&gt; extension method &lt;code&gt;AddOpenTelemetry&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The Instrumentation.AspNetCore package collects telemetry data about incoming web requests. It provides the &lt;code&gt;AddAspNetCoreInstrumentation&lt;/code&gt; extension method for &lt;code&gt;TracerProviderBuilder&lt;/code&gt; and &lt;code&gt;MeterProviderBuilder&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add tracing
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.Services
    .AddOpenTelemetryTracing((builder) =&amp;gt; builder
        // Configure the resource attribute `service.name` to MyServiceName
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("MyServiceName"))
        // Add tracing of the AspNetCore instrumentation library
        .AddAspNetCoreInstrumentation()
        .AddConsoleExporter()
    );

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add metrics
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.Services
    .AddOpenTelemetryMetrics(builder =&amp;gt; builder
        // Configure the resource attribute `service.name` to MyServiceName
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("MyServiceName"))
        // Add metrics from the AspNetCore instrumentation library
        .AddAspNetCoreInstrumentation()
        .AddConsoleExporter(options =&amp;gt;
        {
            options.MetricReaderType = MetricReaderType.Periodic;
            options.PeriodicExportingMetricReaderOptions.ExportIntervalMilliseconds = 5000;
        }));

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add logging
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.Host
    .ConfigureLogging(logging =&amp;gt; logging
        .ClearProviders()
        .AddOpenTelemetry(options =&amp;gt;
        {
            // Export the body of the message
            options.IncludeFormattedMessage = true;
            // Configure the resource attribute `service.name` to MyServiceName
            options.SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("MyServiceName"));
            // The metric reader options do not apply to the log exporter
            options.AddConsoleExporter();
        }));

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Complete example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using OpenTelemetry.Logs;
using OpenTelemetry.Metrics;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.

builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

builder.Services
    .AddOpenTelemetryTracing((builder) =&amp;gt; builder
        // Configure the resource attribute `service.name` to MyServiceName
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("MyServiceName"))
        // Add tracing of the AspNetCore instrumentation library
        .AddAspNetCoreInstrumentation()
        .AddConsoleExporter()
    );

builder.Services
    .AddOpenTelemetryMetrics(builder =&amp;gt; builder
        // Configure the resource attribute `service.name` to MyServiceName
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("MyServiceName"))
        // Add metrics from the AspNetCore instrumentation library
        .AddAspNetCoreInstrumentation()
        .AddConsoleExporter(options =&amp;gt;
        {
            options.MetricReaderType = MetricReaderType.Periodic;
            options.PeriodicExportingMetricReaderOptions.ExportIntervalMilliseconds = 5000;
        }));

builder.Host
    .ConfigureLogging(logging =&amp;gt; logging
        .ClearProviders()
        .AddOpenTelemetry(options =&amp;gt;
        {
            // Export the body of the message
            options.IncludeFormattedMessage = true;
            // Configure the resource attribute `service.name` to MyServiceName
            options.SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("MyServiceName"));
            // The metric reader options do not apply to the log exporter
            options.AddConsoleExporter();
        }));

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.UseAuthorization();

app.MapControllers();

app.Run();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>opentelemetry</category>
      <category>c</category>
      <category>aspnetcore</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Add an Azure static disk with Azure disk CSI driver in Kubernetes</title>
      <dc:creator>Jonathan Mourtada</dc:creator>
      <pubDate>Fri, 21 Jan 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/jmourtada/add-an-azure-static-disk-with-azure-disk-csi-driver-in-kubernetes-5eg5</link>
      <guid>https://dev.to/jmourtada/add-an-azure-static-disk-with-azure-disk-csi-driver-in-kubernetes-5eg5</guid>
<description>&lt;p&gt;The in-tree storage drivers for Kubernetes are planned to be removed in versions 1.26-1.27. The new Container Storage Interface (CSI) is the way forward for handling persistent storage in Kubernetes and has been GA since 1.13. For Azure disk, the in-tree driver has been deprecated since 1.19.&lt;/p&gt;

&lt;p&gt;Recently I set up a new AKS cluster, and the documentation on how to set up an Azure static disk volume with the Azure disk CSI driver was a bit hard to find. So I’ll document how I did it here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the static disk
&lt;/h2&gt;

&lt;p&gt;The official docs recommend creating the disk in the node resource group of your AKS cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az disk create \
--resource-group my-node-resource-group \
--name disk-aks-something \
--location swedencentral \
--os-type linux \
--sku Premium_LRS \
--size-gb 8 \
--query id --output tsv

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will output the disk id that you’ll use as the &lt;code&gt;volumeHandle&lt;/code&gt; in the next step.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add the persistent volume
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-my-volume
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: /subscriptions/subscription-id/resourceGroups/my-node-resource-group/providers/Microsoft.Compute/disks/disk-aks-something
    volumeAttributes:
      fsType: ext4
      #partition: "1" # optional, remove this if there is no partition

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add the persistent volume claim
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-my-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  volumeName: pv-my-volume
  storageClassName: ""

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add the pvc to your pod
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Pod
apiVersion: v1
metadata:
  name: nginx-azuredisk
spec:
  nodeSelector:
    kubernetes.io/os: linux
  containers:
    - image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
      name: nginx-azuredisk
      command:
        - "/bin/sh"
        - "-c"
        - while true; do echo $(date) &amp;gt;&amp;gt; /mnt/azuredisk/outfile; sleep 1; done
      volumeMounts:
        - name: azuredisk01
          mountPath: "/mnt/azuredisk"
  volumes:
    - name: azuredisk01
      persistentVolumeClaim:
        claimName: pvc-my-volume

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it!&lt;/p&gt;
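
&lt;p&gt;To verify that the disk is mounted you can, for example, read back the file the pod is writing. The manifest file names below are assumptions; use whatever files you saved the resources above in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f pv.yaml -f pvc.yaml -f pod.yaml
kubectl get pv,pvc
kubectl exec nginx-azuredisk -- tail /mnt/azuredisk/outfile

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;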

</description>
      <category>azure</category>
      <category>cloud</category>
      <category>aks</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>How to deploy an aks cluster in azure</title>
      <dc:creator>Jonathan Mourtada</dc:creator>
      <pubDate>Thu, 13 Jan 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/jmourtada/how-to-deploy-an-aks-cluster-in-azure-4km0</link>
      <guid>https://dev.to/jmourtada/how-to-deploy-an-aks-cluster-in-azure-4km0</guid>
<description>&lt;p&gt;This is a guide on how to deploy a Kubernetes cluster in Azure (AKS). The steps below are done with the az CLI in a bash terminal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a resource group
&lt;/h2&gt;

&lt;p&gt;First of all we need a resource group. This is where our Kubernetes cluster will be created together with our custom vnet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az group create --name rg-something-swedencentral-01 --location swedencentral

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating a custom vnet
&lt;/h2&gt;

&lt;p&gt;Then we’ll create our own vnet so we can have control over the assigned IP addresses. The vnet can be created automatically if you don’t have any need for specifying your own subnets. But if you want a hybrid cloud connected to an on-premises network, you need to make sure the cloud network and the on-premises network don’t collide.&lt;/p&gt;

&lt;p&gt;To create the vnet run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az network vnet create --name vnet-something-swedencentral-01 --resource-group rg-something-swedencentral-01 --address-prefixes 10.110.0.0/16

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we create a subnet for our AKS cluster. This subnet is used for both our nodes and pods, so it’s important to have enough addresses for the cluster. If you grow out of your subnet the cluster must be recreated.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az network vnet subnet create --resource-group rg-something-swedencentral-01 --vnet-name vnet-something-swedencentral-01 --name snet-something-swedencentral-aks-01 --address-prefixes 10.110.0.0/22

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create a user assigned managed identity
&lt;/h2&gt;

&lt;p&gt;To use our newly created vnet and subnet, the AKS cluster needs permission to use them. AKS uses a service principal when it calls the Azure APIs. A service principal is just an app id with a secret. By default a service principal secret expires after one year, and you need to renew it. But you can also use a managed identity: a service principal for which Azure automatically renews the secret every 46 days.&lt;/p&gt;

&lt;p&gt;To create a managed identity run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az identity create --resource-group rg-something-swedencentral-01 --name id-aks-something-swedencentral-01

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we need to create a role assignment on our subnet with the Network Contributor role.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get the id of the subnet
SUBNET_ID=$(az network vnet subnet list \
    --resource-group rg-something-swedencentral-01 \
    --vnet-name vnet-something-swedencentral-01 \
    --query "[0].id" --output tsv)

# Get the appid of the managed identity
APP_ID=$(az identity show --resource-group rg-something-swedencentral-01 --name id-aks-something-swedencentral-01 --query "clientId" --output tsv)

# Create the role assignment
az role assignment create --assignee $APP_ID --scope $SUBNET_ID --role "Network Contributor"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create the aks cluster
&lt;/h2&gt;

&lt;p&gt;To create the cluster we need to run the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az aks create \
--resource-group rg-something-swedencentral-01 \
--name aks-something-swedencentral-01 \
--location swedencentral \
--enable-managed-identity \
--assign-identity $(az identity show --resource-group rg-something-swedencentral-01 --name id-aks-something-swedencentral-01 --query "id" --output tsv) \
--network-plugin azure \
--vnet-subnet-id $SUBNET_ID \
--docker-bridge-address 172.17.0.1/16 \
--dns-service-ip 10.96.0.10 \
--service-cidr 10.96.0.0/16 \
--windows-admin-username windowsuser \
--windows-admin-password 'windowspassword' \
--api-server-authorized-ip-ranges &amp;lt;ALLOW_CIDR_RANGE&amp;gt; \
--node-vm-size Standard_D2s_v3 \
--node-count 2 \
--node-resource-group rg-aks-something-swedencentral-01

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I’ll go through some of the parameters.&lt;/p&gt;

&lt;h3&gt;
  
  
  network-plugin
&lt;/h3&gt;

&lt;p&gt;AKS has two network plugins: kubenet and Azure CNI. Kubenet is a fully isolated NAT network and doesn’t support bringing your own vnet. Azure CNI is required for Windows nodes and if you want to use your own vnet.&lt;/p&gt;

&lt;h3&gt;
  
  
  service-cidr
&lt;/h3&gt;

&lt;p&gt;This is a network that only exists inside the cluster. It’s used by Kubernetes services. This IP range must not be reachable anywhere else in your network, but it can be reused inside another Kubernetes cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  windows-admin-username and windows-admin-password
&lt;/h3&gt;

&lt;p&gt;This is only needed if you want to run Windows nodes in your cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  api-server-authorized-ip-ranges
&lt;/h3&gt;

&lt;p&gt;Azure supports both public and private clusters. A private cluster means you have to have a VPN or hybrid cloud setup to manage your Kubernetes cluster. A public cluster is by default accessible from anywhere. With this parameter you can lock down which IP ranges have access to the API server.&lt;/p&gt;

&lt;h3&gt;
  
  
  node-resource-group
&lt;/h3&gt;

&lt;p&gt;This parameter sets the name of the automatically created node resource group. This resource group is always created and is managed by the Kubernetes cluster.&lt;/p&gt;
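
&lt;p&gt;Once the cluster is created you can fetch credentials for kubectl and verify that the nodes are up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az aks get-credentials --resource-group rg-something-swedencentral-01 --name aks-something-swedencentral-01
kubectl get nodes

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;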

</description>
      <category>azure</category>
      <category>cloud</category>
      <category>aks</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>The OpenTelemetry collector</title>
      <dc:creator>Jonathan Mourtada</dc:creator>
      <pubDate>Sat, 08 Jan 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/jmourtada/the-opentelemetry-collector-23h8</link>
      <guid>https://dev.to/jmourtada/the-opentelemetry-collector-23h8</guid>
<description>&lt;p&gt;The OpenTelemetry collector provides a vendor-agnostic implementation of how to receive, process and export telemetry data. The collector is a single binary that can be run as an agent or a gateway.&lt;/p&gt;

&lt;h2&gt;
  
  
  Versions of the collector
&lt;/h2&gt;

&lt;p&gt;There are two versions of the collector. The core version contains components that are the foundation of the collector such as configuration and the most common receivers, processors and exporters. The contrib version contains all of the components of core but also more vendor-specific and experimental receivers, processors and exporters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Collector components
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Receivers
&lt;/h3&gt;

&lt;p&gt;A receiver is how data gets into the collector. Data can either be pushed or pulled. One receiver can support multiple data sources (traces, metrics and logs). Examples of receivers are an OpenTelemetry Protocol (OTLP) endpoint and a Prometheus scraper.&lt;/p&gt;

&lt;h3&gt;
  
  
  Exporters
&lt;/h3&gt;

&lt;p&gt;Exporters send data to one or more backends/destinations. Like receivers, an exporter can also be push or pull based and support multiple data sources. For example, an exporter could expose a Prometheus endpoint or send data to a Jaeger backend.&lt;/p&gt;

&lt;h3&gt;
  
  
  Processors
&lt;/h3&gt;

&lt;p&gt;Processors handle data after it’s received but before it’s exported. An example of a processor is the batch processor, which puts incoming data into batches. This makes for better compression of the data and limits the number of outgoing connections.&lt;/p&gt;
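
&lt;p&gt;As a sketch of how the components fit together, a minimal collector configuration wiring an OTLP receiver through the batch processor to a Jaeger exporter could look like this. The Jaeger endpoint is just an example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:

exporters:
  jaeger:
    endpoint: jaeger:14250
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;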

</description>
      <category>opentelemetry</category>
      <category>opentelemetrycollect</category>
      <category>observability</category>
    </item>
    <item>
      <title>What is OpenTelemetry?</title>
      <dc:creator>Jonathan Mourtada</dc:creator>
      <pubDate>Tue, 04 Jan 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/jmourtada/what-is-opentelemetry-2a8h</link>
      <guid>https://dev.to/jmourtada/what-is-opentelemetry-2a8h</guid>
      <description>&lt;p&gt;This is the first post in a planned blog series about OpenTelemetry.&lt;/p&gt;

&lt;p&gt;OpenTelemetry is an open source observability framework which is the result of merging the OpenTracing and OpenCensus projects. Its goal is to provide language-agnostic tools for telemetry data such as tracing, metrics and logging in distributed architectures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Signals
&lt;/h2&gt;

&lt;p&gt;The architecture of OpenTelemetry is designed around signals, which can be thought of as categories of telemetry. Each signal provides a way for software to describe itself.&lt;/p&gt;

&lt;p&gt;There are four signals defined in the specification:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tracing&lt;/li&gt;
&lt;li&gt;Metrics&lt;/li&gt;
&lt;li&gt;Logs&lt;/li&gt;
&lt;li&gt;Baggage&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tracing
&lt;/h2&gt;

&lt;p&gt;A trace tracks the progress of a request as it flows through the services of a distributed application. A trace contains a tree of spans, where a span represents a unit of work, for example in a service or a component.&lt;/p&gt;

&lt;h2&gt;
  
  
  Metrics
&lt;/h2&gt;

&lt;p&gt;A metric is a measurement of work being done by a service or a component at a specific point in time. An example of a metric is a counter of total requests for an HTTP endpoint.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logs
&lt;/h2&gt;

&lt;p&gt;A log entry is a text record that has a timestamp. Logs are an independent data source but can also be attached to spans.&lt;/p&gt;

&lt;h2&gt;
  
  
  Baggage
&lt;/h2&gt;

&lt;p&gt;Baggage is a mechanism for propagating observability events as name/value pairs in a distributed transaction. It can be used to store data about a trace across process boundaries.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context propagation
&lt;/h2&gt;

&lt;p&gt;Signals are built on a shared mechanism called context propagation. The context provides a way to store state and access data over the lifetime of a distributed transaction. Propagators are used to serialize and deserialize the context over different protocols.&lt;/p&gt;
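
&lt;p&gt;For example, with the W3C Trace Context propagator the context is serialized into a &lt;code&gt;traceparent&lt;/code&gt; HTTP header. The value below is the sample from the specification.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The four fields are the version, the trace id, the parent span id and the trace flags.&lt;/p&gt;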

</description>
      <category>opentelemtry</category>
      <category>opentelemetry</category>
      <category>observability</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Debug x86 assembly with GDB</title>
      <dc:creator>Jonathan Mourtada</dc:creator>
      <pubDate>Sat, 04 Dec 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/jmourtada/debug-x86-assembly-with-gdb-16mf</link>
      <guid>https://dev.to/jmourtada/debug-x86-assembly-with-gdb-16mf</guid>
<description>&lt;p&gt;This is the sixth post about x86 assembly. In this post I will show how to debug your x86 assembly code with the GNU Project debugger (GDB). GDB can be used with a lot of languages, for example assembly, C, C++ and Rust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating debug symbols
&lt;/h2&gt;

&lt;p&gt;First of all you need to tell nasm to generate debug information. This is done with &lt;code&gt;-g&lt;/code&gt; and &lt;code&gt;-F dwarf&lt;/code&gt;. The &lt;code&gt;-g&lt;/code&gt; option tells nasm to generate debug information, and &lt;code&gt;-F dwarf&lt;/code&gt; sets the debugging format to the standardized &lt;code&gt;DWARF&lt;/code&gt; format.&lt;/p&gt;
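
&lt;p&gt;For example, assembling and linking a program with debug information could look like this. The file names are placeholders.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nasm -g -F dwarf -f elf64 program.asm -o program.o
gcc program.o -o program

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;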

&lt;h2&gt;
  
  
  Start GDB
&lt;/h2&gt;

&lt;p&gt;Start gdb with &lt;code&gt;gdb &amp;lt;your_assembly_program&amp;gt;&lt;/code&gt; to enter its interactive shell. You can set breakpoints by typing &lt;code&gt;break &amp;lt;function_name or line number&amp;gt;&lt;/code&gt;. By default gdb uses the AT&amp;amp;T flavor; to change to the Intel flavor, type &lt;code&gt;set disassembly-flavor intel&lt;/code&gt;. To start debugging the program, type &lt;code&gt;run&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Debugging the program
&lt;/h2&gt;

&lt;p&gt;The program will run until it crashes or reaches a breakpoint you set, and then pause. If you want to look at the values currently in the registers, type &lt;code&gt;info registers&lt;/code&gt;; if you want to see the next instructions alongside the register values, type &lt;code&gt;layout regs&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To execute the next instruction, type &lt;code&gt;next&lt;/code&gt;. If you want to step into a function call, type &lt;code&gt;step&lt;/code&gt;. If the program crashes, gdb will tell you which line was the last one executing. You can type &lt;code&gt;backtrace&lt;/code&gt; to see the full stack trace.&lt;/p&gt;

</description>
      <category>assembly</category>
      <category>nasm</category>
      <category>x86</category>
    </item>
    <item>
      <title>Calling functions in x86 assembly</title>
      <dc:creator>Jonathan Mourtada</dc:creator>
      <pubDate>Tue, 23 Nov 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/jmourtada/calling-functions-in-x86-assembly-2fpl</link>
      <guid>https://dev.to/jmourtada/calling-functions-in-x86-assembly-2fpl</guid>
      <description>&lt;p&gt;This is the fifth post about x86 assembly. In this post I will show how to call functions in x86 assembly.&lt;/p&gt;

&lt;p&gt;Below is a simple program that has three functions besides main.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global main
extern printf
section .text
main:
        mov rdi, 5
        call times2
        mov rdi, rax
        call print_value
        call exit
times2:
        push rbp
        mov rbp, rsp

        mov rax, rdi
        add rax, rax

        mov rsp, rbp
        pop rbp
        ret
print_value:
        push rbp
        mov rbp, rsp

        push rdi
        mov rdi, fmt
        pop rsi
        mov rax, 0
        call printf wrt ..plt

        mov rsp, rbp
        pop rbp
        ret
exit:
        mov rdi, 0 ; exit code 0
        mov rax, 60 ; system call for exit
        syscall

section .data
fmt: db "The value is %d", 10, 0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Passing values to functions
&lt;/h2&gt;

&lt;p&gt;The code above follows the Linux 64-bit ABI, where the first six arguments (integers and pointers) are passed in registers. You could also pass arguments on the stack or in hardcoded memory locations.&lt;/p&gt;
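
&lt;p&gt;To see the same convention from C, here is a rough equivalent of the assembly program above (a sketch; the C names simply mirror the assembly labels). The compiler passes the argument in rdi and returns the result in rax, just like the hand-written version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;stdio.h&amp;gt;

/* Rough C equivalent of the assembly above: the argument arrives in
   rdi and the result travels back in rax, per the System V AMD64 ABI. */
static long times2(long x) { return x + x; }

int main(void) {
    long v = times2(5);              /* mov rdi, 5 / call times2 */
    printf("The value is %ld\n", v); /* mirrors print_value and fmt */
    return 0;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;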

&lt;h2&gt;
  
  
  Function prologue and epilogue
&lt;/h2&gt;

&lt;p&gt;The beginning and end of the functions &lt;code&gt;times2&lt;/code&gt; and &lt;code&gt;print_value&lt;/code&gt; are called the function prologue and epilogue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  push rbp
  mov rbp, rsp

  ...

  mov rsp, rbp
  pop rbp
  ret

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a convention for preparing the stack for use. The prologue creates a new stack frame for the called function, and the epilogue restores the stack frame of the calling function.&lt;/p&gt;

</description>
      <category>assembly</category>
      <category>nasm</category>
      <category>x86</category>
    </item>
    <item>
      <title>Calling printf from the C standard library in assembly</title>
      <dc:creator>Jonathan Mourtada</dc:creator>
      <pubDate>Thu, 11 Nov 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/jmourtada/calling-printf-from-the-c-standard-library-in-assembly-1o39</link>
      <guid>https://dev.to/jmourtada/calling-printf-from-the-c-standard-library-in-assembly-1o39</guid>
      <description>&lt;p&gt;This is the fourth post about x86 assembly. In this post I will show an example of how to call the printf function from the C standard library in assembly code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;section .text
    default rel
    extern printf
    global main
main:
    ; Create a stack-frame, re-aligning the stack to 16-byte alignment before calls
    push rbp

    mov rdi, fmt
    mov rsi, message
    mov rax, 0

    ; Call printf
    call printf wrt ..plt

    pop rbp ; Pop stack

    mov rax,0   ; Exit code 0
    ret ; Return
section .data
    message: db "Hello, World", 10, 0
    fmt: db "%s", 10, 0


$ nasm printf.asm -f elf64 -o printf.o
$ gcc printf.o
$ ./a.out
# Hello, World

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The first part
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  default rel
  extern printf
  global main

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;default rel&lt;/code&gt; is a nasm assembly directive. It tells nasm to use RIP-relative addressing. In short, this makes the assembler rewrite the references in instructions that use our &lt;code&gt;fmt&lt;/code&gt; and &lt;code&gt;message&lt;/code&gt; constants to be relative to the instruction pointer. This is needed because the default for the linker on 64-bit Linux is to produce position-independent code.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;extern printf&lt;/code&gt; part tells the assembler that this symbol exists outside of this file and needs to be referenced at a later stage.&lt;/p&gt;

&lt;p&gt;The last line here is &lt;code&gt;global main&lt;/code&gt;, which is needed by gcc because &lt;code&gt;main&lt;/code&gt; is the entry point called from libc.&lt;/p&gt;

&lt;h2&gt;
  
  
  main
&lt;/h2&gt;

&lt;p&gt;First we need to align the stack, because the x86_64 ABI requires the stack pointer to be 16-byte aligned before a call; therefore we push a value onto it.&lt;/p&gt;

&lt;p&gt;Then we prepare our registers for the function call to &lt;code&gt;printf&lt;/code&gt; and call &lt;code&gt;printf wrt ..plt&lt;/code&gt;. What happens here is that we load printf from the libc shared library. This is a little bit complicated, but very briefly it says &lt;code&gt;call printf with respect to the procedure linkage table&lt;/code&gt;. The first time &lt;code&gt;printf&lt;/code&gt; is called, the PLT resolves where &lt;code&gt;printf&lt;/code&gt; is in memory with the help of the dynamic link loader in Linux. It then stores that address for future calls. We could instead link &lt;code&gt;printf&lt;/code&gt; statically, which would copy the code of &lt;code&gt;printf&lt;/code&gt; into the executable.&lt;/p&gt;

&lt;p&gt;Lastly we need to set the return value of &lt;code&gt;main&lt;/code&gt; to zero and do a &lt;code&gt;ret&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  call and ret
&lt;/h2&gt;

&lt;p&gt;When you do a &lt;code&gt;call&lt;/code&gt; operation, two things happen. First, the memory address of the next instruction is pushed onto the stack. In our example above it will be the address of the instruction &lt;code&gt;pop   rbp&lt;/code&gt;. Second, execution jumps to the memory address where &lt;code&gt;printf&lt;/code&gt; starts. When &lt;code&gt;printf&lt;/code&gt; is done it will do a &lt;code&gt;ret&lt;/code&gt; instruction, which pops our address off the stack and jumps back to our main function.&lt;/p&gt;
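
&lt;p&gt;The pushed return address can be made visible with a compiler builtin. Below is a small C sketch (GCC/Clang-specific, using &lt;code&gt;__builtin_return_address&lt;/code&gt;) that prints the address &lt;code&gt;call&lt;/code&gt; stored on the stack, the one &lt;code&gt;ret&lt;/code&gt; will later jump back to.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;stdio.h&amp;gt;

void callee(void) {
    /* call pushed the address of the instruction after "call callee";
       this GCC/Clang builtin reads it back from the stack frame. */
    void *ret_addr = __builtin_return_address(0);
    printf("will return to %p\n", ret_addr);
}

int main(void) {
    callee(); /* execution continues here after callee's ret */
    return 0;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;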

</description>
      <category>assembly</category>
      <category>nasm</category>
      <category>x86</category>
    </item>
    <item>
      <title>x86 CPU registers</title>
      <dc:creator>Jonathan Mourtada</dc:creator>
      <pubDate>Sat, 06 Nov 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/jmourtada/x86-cpu-registers-id2</link>
      <guid>https://dev.to/jmourtada/x86-cpu-registers-id2</guid>
      <description>&lt;p&gt;This is the third post about x86 assembly. In the previous post I went through every line of a minimal assembly program. In this post I will briefly explain some of the x86 CPU registers.&lt;/p&gt;

&lt;h2&gt;
  
  
  General purpose registers
&lt;/h2&gt;

&lt;p&gt;Most of the work of a CPU is processing data. But reading and storing data in main memory slows the processor down. Therefore the processor has storage of its own, called registers. There are sixteen 64-bit general purpose registers in the 64-bit x86 architecture.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;rax&lt;/li&gt;
&lt;li&gt;rcx&lt;/li&gt;
&lt;li&gt;rdx&lt;/li&gt;
&lt;li&gt;rbx&lt;/li&gt;
&lt;li&gt;rsi&lt;/li&gt;
&lt;li&gt;rdi&lt;/li&gt;
&lt;li&gt;rsp&lt;/li&gt;
&lt;li&gt;rbp&lt;/li&gt;
&lt;li&gt;r8-r15&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These registers are used as inputs and outputs for different CPU instructions. For example, in the application binary interface (ABI) for 64-bit Linux the &lt;code&gt;rax&lt;/code&gt; register is used for specifying the syscall number, and it is also used for storing the return value of functions. The arguments of a function are passed in order in &lt;code&gt;rdi&lt;/code&gt; (arg1), &lt;code&gt;rsi&lt;/code&gt; (arg2), &lt;code&gt;rdx&lt;/code&gt; (arg3) and so on. If there are more than six arguments, the rest are passed on the stack.&lt;/p&gt;
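
&lt;p&gt;The libc &lt;code&gt;syscall()&lt;/code&gt; wrapper makes this visible from C: it places the syscall number in &lt;code&gt;rax&lt;/code&gt; and the remaining arguments in &lt;code&gt;rdi&lt;/code&gt;, &lt;code&gt;rsi&lt;/code&gt;, &lt;code&gt;rdx&lt;/code&gt; and so on. A minimal, Linux-specific sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;stdio.h&amp;gt;
#include &amp;lt;unistd.h&amp;gt;
#include &amp;lt;sys/syscall.h&amp;gt;

int main(void) {
    /* syscall() loads SYS_getpid (39 on 64-bit Linux) into rax;
       the kernel's answer comes back in rax as the return value. */
    long pid = syscall(SYS_getpid);
    printf("pid via raw syscall: %ld\n", pid);
    printf("pid via libc wrapper: %d\n", (int)getpid());
    return 0;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;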

&lt;p&gt;The image below shows the calling conventions for syscalls on 64-bit Linux for different CPU architectures.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A6CoiXIv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.mourtada.se/assets/linux_abi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A6CoiXIv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.mourtada.se/assets/linux_abi.png" alt="Linux 64-bit ABI" width="572" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Special purpose registers
&lt;/h2&gt;

&lt;p&gt;While the values stored in a general purpose register have no special meaning to the processor, special purpose registers hold state about the program.&lt;/p&gt;

&lt;p&gt;Here are examples of special purpose registers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;rsp - Stores the memory address of the top of the stack&lt;/li&gt;
&lt;li&gt;rbp - Stores the memory address of the current stack frame&lt;/li&gt;
&lt;li&gt;rip - Stores the memory address of the next instruction to execute&lt;/li&gt;
&lt;li&gt;rflags - Here the CPU stores information about the result of arithmetic instructions, for example when comparing numbers.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>assembly</category>
      <category>nasm</category>
      <category>x86</category>
    </item>
    <item>
      <title>A minimal assembly program</title>
      <dc:creator>Jonathan Mourtada</dc:creator>
      <pubDate>Fri, 29 Oct 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/jmourtada/a-minimal-assembly-program-14bd</link>
      <guid>https://dev.to/jmourtada/a-minimal-assembly-program-14bd</guid>
      <description>&lt;p&gt;This is the second post about x86 assembly. In the previous post I described how to compile a hello world program. In this post I will go through an even smaller program.&lt;/p&gt;

&lt;h2&gt;
  
  
  Minimal example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;          global _start

          section .text
_start: mov rax, 60 ; system call for exit
          mov rdi, 0 ; exit code 0
          syscall ; invoke operating system to exit

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This program doesn't print any output and only sets an exit code. If the program doesn't make the exit syscall, it will crash with a &lt;code&gt;Segmentation fault&lt;/code&gt; error.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sections
&lt;/h2&gt;

&lt;p&gt;There is only one section in this program, called &lt;code&gt;.text&lt;/code&gt;. This is where you put your source code, which is read-only when the program runs. Sections are used by the assembler to group data and code in different locations in memory. For example, in the &lt;code&gt;hello world&lt;/code&gt; example we had a section called &lt;code&gt;.data&lt;/code&gt; in which we initialized a constant sequence of bytes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Labels
&lt;/h2&gt;

&lt;p&gt;There is also only one label in this program, called &lt;code&gt;_start&lt;/code&gt;. A label can be seen as a pointer to an instruction in memory. A label can then be used as a reference in code to run a specific block of code (almost like a function call). The label &lt;code&gt;_start&lt;/code&gt; has a special meaning for the linker &lt;code&gt;ld&lt;/code&gt;: it marks where an executable will start executing instructions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Global keyword
&lt;/h2&gt;

&lt;p&gt;The global keyword tells the assembler that this label should be able to be referenced outside of this assembly file. It's much like exporting a function in a high-level language.&lt;/p&gt;

&lt;h2&gt;
  
  
  Instructions
&lt;/h2&gt;

&lt;p&gt;The syntax for operations is &lt;code&gt;opcode &amp;lt;operand&amp;gt;, &amp;lt;operand&amp;gt;&lt;/code&gt;. We first have an opcode, which tells the CPU what we want to do, followed by the operands to do it with.&lt;/p&gt;

&lt;p&gt;The instruction &lt;code&gt;mov rax, 60&lt;/code&gt; tells the CPU to move the decimal constant value 60 into a CPU general-purpose register called &lt;code&gt;rax&lt;/code&gt;. When an operand is a constant value like above, it's called immediate addressing. CPU registers are small high-speed memory locations in the CPU.&lt;/p&gt;

&lt;h2&gt;
  
  
  Syscall
&lt;/h2&gt;

&lt;p&gt;The syscall instruction is a special operation that tells the CPU that the program wants to talk to the operating system. The program halts, and the operating system takes over and reads the &lt;code&gt;rax&lt;/code&gt; register for the syscall number; 60 on 64-bit Linux is the syscall for &lt;code&gt;exit&lt;/code&gt;. The &lt;code&gt;rdi&lt;/code&gt; register, where we put &lt;code&gt;0&lt;/code&gt;, holds the exit code. When a syscall is done the program usually resumes executing, but not in this case, since we told the operating system that we wanted to exit.&lt;/p&gt;
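
&lt;p&gt;The exit syscall can also be observed from C. This Linux-specific sketch forks a child that invokes syscall 60 directly through the libc &lt;code&gt;syscall()&lt;/code&gt; wrapper, then reads the child's exit code back in the parent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;stdio.h&amp;gt;
#include &amp;lt;unistd.h&amp;gt;
#include &amp;lt;sys/syscall.h&amp;gt;
#include &amp;lt;sys/wait.h&amp;gt;

int main(void) {
    pid_t child = fork();
    if (child == 0) {
        /* in the child: rax = 60 (SYS_exit), rdi = 7, so exit code 7 */
        syscall(SYS_exit, 7);
    }
    int status = 0;
    waitpid(child, &amp;amp;status, 0);
    printf("child exit code: %d\n", WEXITSTATUS(status)); /* prints 7 */
    return 0;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;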

</description>
      <category>assembly</category>
      <category>nasm</category>
      <category>x86</category>
    </item>
  </channel>
</rss>
