<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: rndmh3ro</title>
    <description>The latest articles on DEV Community by rndmh3ro (@rndmh3ro).</description>
    <link>https://dev.to/rndmh3ro</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F199480%2Ff8c96cbf-a454-41b7-a67e-743f8d136097.jpeg</url>
      <title>DEV Community: rndmh3ro</title>
      <link>https://dev.to/rndmh3ro</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rndmh3ro"/>
    <language>en</language>
    <item>
      <title>Combining jinja2-cli with jq and environment variables</title>
      <dc:creator>rndmh3ro</dc:creator>
      <pubDate>Mon, 30 Jun 2025 18:00:00 +0000</pubDate>
      <link>https://dev.to/rndmh3ro/combining-jinja2-cli-with-jq-and-environment-variables-3pb0</link>
      <guid>https://dev.to/rndmh3ro/combining-jinja2-cli-with-jq-and-environment-variables-3pb0</guid>
      <description>&lt;p&gt;In a current project I’m templating files using Jinja2 with the help of the &lt;a href="https://github.com/mattrobenolt/jinja2-cli" rel="noopener noreferrer"&gt;jinja2-cli&lt;/a&gt;. Having used (and &lt;a href="https://dev.to/rndmh3ro/how-i-teach-ansible-to-my-colleagues-a-hands-on-training-session-36gm"&gt;taught&lt;/a&gt; )Ansible for years I’m quite familar with Jinja2 so this was a natural choice.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;jinja2-cli&lt;/code&gt; can receive the variables to template by different means. One way is loading them from a file, which can be JSON, YAML, TOML, or even XML.&lt;/p&gt;

&lt;p&gt;Suppose I have a JSON file called &lt;code&gt;payload.json&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
"NAME": "bar",
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And a Jinja2 template called &lt;code&gt;template.j2&lt;/code&gt; like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hello {{ NAME }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I can run the following command and it prints the templated contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; jinja2 template.j2 payload.json
hello bar

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now for a real-world example. I have a slightly more complicated configuration file I need to template. This configuration file already exists and uses the normal Jinja variable syntax with &lt;code&gt;{{&lt;/code&gt; and &lt;code&gt;}}&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I need to load the variables from environment variables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export NAME=bar
export TENANT=tenant
export BASE_DOMAIN=example.com
export HUB_FQDN=hub.example.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since the &lt;code&gt;jinja2-cli&lt;/code&gt; can also use environment variables, I could change my template to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hello {{ env("NAME") }}
tenant is {{ env("TENANT") }}
base domain is {{ env("BASE_DOMAIN") }}
hub fqdn is {{ env("HUB_FQDN") }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would load the environment variable &lt;code&gt;NAME&lt;/code&gt; (and the others) into the template.&lt;/p&gt;

&lt;p&gt;But I don’t want to change my template, so I need to create a &lt;code&gt;payload.json&lt;/code&gt; file with the variables that the &lt;code&gt;jinja2-cli&lt;/code&gt; can then load.&lt;/p&gt;

&lt;p&gt;How should I create this JSON file? I &lt;em&gt;could&lt;/em&gt; just write it by hand and use normal bash variable substitution. But that’s not very nice.&lt;/p&gt;
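&lt;p&gt;For illustration, that hand-written approach would look roughly like this (a sketch; it works for simple values, but breaks as soon as a value contains quotes or other JSON-special characters):&lt;/p&gt;

```shell
# Hand-written payload.json via plain shell substitution -- the approach
# dismissed above. Values containing quotes would produce broken JSON.
NAME=bar
TENANT=tenant
printf '{"NAME": "%s", "TENANT": "%s"}\n' "$NAME" "$TENANT" > payload.json
cat payload.json
# prints: {"NAME": "bar", "TENANT": "tenant"}
```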

&lt;p&gt;The go-to tool for handling json on the command line is &lt;a href="https://stedolan.github.io/jq/" rel="noopener noreferrer"&gt;jq&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I can use jq to create the json-file like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; jq -n \
--arg NAME "$NAME" \
--arg TENANT "$TENANT" \
--arg BASE_DOMAIN "$BASE_DOMAIN" \
--arg HUB_FQDN "$HUB_FQDN" \
'$ARGS.named' &amp;gt; payload.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cool part about this is &lt;code&gt;$ARGS.named&lt;/code&gt; - it takes all named arguments and their values and prints them as a JSON object. I then write the output into the &lt;code&gt;payload.json&lt;/code&gt; file.&lt;/p&gt;
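&lt;p&gt;To see what &lt;code&gt;$ARGS.named&lt;/code&gt; actually produces, here is a minimal run with two stand-in values:&lt;/p&gt;

```shell
# $ARGS.named collects every --arg name/value pair into one JSON object.
# -n starts with no input, -c prints the object on a single line.
jq -nc --arg NAME bar --arg TENANT tenant '$ARGS.named'
# prints: {"NAME":"bar","TENANT":"tenant"}
```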

&lt;p&gt;Now I can export the variables and run the template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; jinja2 template.j2 payload.json
hello bar
tenant is tenant
base domain is example.com
hub fqdn is hub.example.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And I’m done!&lt;/p&gt;

</description>
      <category>todayilearned</category>
      <category>jinja2</category>
      <category>jq</category>
      <category>yaml</category>
    </item>
    <item>
      <title>How to create repositories in Artifactory with curl</title>
      <dc:creator>rndmh3ro</dc:creator>
      <pubDate>Mon, 30 Jun 2025 18:00:00 +0000</pubDate>
      <link>https://dev.to/rndmh3ro/how-to-create-repositories-in-artifactory-with-curl-20gd</link>
      <guid>https://dev.to/rndmh3ro/how-to-create-repositories-in-artifactory-with-curl-20gd</guid>
      <description>&lt;p&gt;I recently created repositories in a new Artifactory instance. This was a testing instance and since I dind’t work with Artifactory much before this, I created them in the web-frontend by hand.&lt;/p&gt;

&lt;p&gt;I then wanted to get them as code so I could recreate them in the production Artifactory instance without doing it all by hand.&lt;/p&gt;

&lt;p&gt;Since I had admin privileges in the testing instance, I used the &lt;code&gt;/repositories/configurations&lt;/code&gt; &lt;a href="https://jfrog.com/help/r/jfrog-rest-apis/get-all-repository-configurations" rel="noopener noreferrer"&gt;endpoint&lt;/a&gt; to get the repositories and their configuration in JSON format. I then wanted to create these repositories on the production instance.&lt;/p&gt;

&lt;p&gt;For this I built myself this shell one-liner.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -s -u &amp;lt;USER&amp;gt;:&amp;lt;PASSWORD&amp;gt; "https://artifactory-test.example.com/artifactory/api/repositories/configurations" | jq .REMOTE | curl --location --request PUT -u &amp;lt;USER&amp;gt;:&amp;lt;PASSWORD&amp;gt; "https://artifactory.example.com/artifactory/api/v2/repositories/batch" --header 'Content-Type: application/json' --data @-

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It first reads all repositories from the testing instance, uses &lt;code&gt;jq&lt;/code&gt; to select the remote repositories, and then &lt;code&gt;PUT&lt;/code&gt;s them into the production instance via the &lt;code&gt;/repositories/batch&lt;/code&gt; &lt;a href="https://jfrog.com/help/r/jfrog-rest-apis/create-multiple-repositories" rel="noopener noreferrer"&gt;endpoint&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This would have worked, had I had admin privileges on the production instance, too. But sadly I didn’t!&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;/repositories/configurations&lt;/code&gt; &lt;a href="https://jfrog.com/help/r/jfrog-rest-apis/get-all-repository-configurations" rel="noopener noreferrer"&gt;endpoint&lt;/a&gt; can only be used as an admin, and there is no endpoint that returns the configurations of all repositories you have access to.&lt;/p&gt;

&lt;p&gt;But there is an &lt;a href="https://jfrog.com/help/r/jfrog-rest-apis/get-batch-of-repositories-by-name" rel="noopener noreferrer"&gt;endpoint&lt;/a&gt; to get all the repositories &lt;em&gt;without&lt;/em&gt; their configurations. So all I had to do was get the names of the repositories I had access to (in the JSON key &lt;code&gt;.key&lt;/code&gt;), iterate over them, and fetch each repository’s configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for repo in $(curl -s -u &amp;lt;USER&amp;gt;:&amp;lt;PASSWORD&amp;gt; "https://artifactory.example.com/artifactory/api/repositories/?project=ocp" | jq -r .[].key); do curl -s -u &amp;lt;USER&amp;gt;:&amp;lt;PASSWORD&amp;gt; "https://artifactory.example.com/artifactory/api/repositories/$repo" | jq .; done | jq -s '.' &amp;gt; repos.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
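&lt;p&gt;To see what the &lt;code&gt;jq -r '.[].key'&lt;/code&gt; part of the loop extracts, here is a run against a canned response (the repository names are made up):&lt;/p&gt;

```shell
# A canned /api/repositories response standing in for the live instance;
# -r prints the raw key strings, one per line, ready for the for-loop.
response='[{"key":"repo-a","type":"LOCAL"},{"key":"repo-b","type":"REMOTE"}]'
printf '%s' "$response" | jq -r '.[].key'
# prints:
# repo-a
# repo-b
```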



&lt;p&gt;With the &lt;code&gt;repos.json&lt;/code&gt; file containing the configuration of all the repositories I have access to, I can now create the repositories on the production instance (by &lt;code&gt;PUT&lt;/code&gt;ing them):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X PUT -s -u &amp;lt;USER&amp;gt;:&amp;lt;PASSWORD&amp;gt; 'https://artifactory.example.com/artifactory/api/v2/repositories/batch' --header 'Content-Type: application/json' --data @repos.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If I want to update them, I can use almost the same command, but instead I need to &lt;code&gt;POST&lt;/code&gt; them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X POST -s -u &amp;lt;USER&amp;gt;:&amp;lt;PASSWORD&amp;gt; 'https://artifactory.example.com/artifactory/api/v2/repositories/batch' --header 'Content-Type: application/json' --data @repos.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next step(?): put all of this into an Ansible module.&lt;/p&gt;

</description>
      <category>todayilearned</category>
      <category>curl</category>
      <category>artifactory</category>
      <category>jq</category>
    </item>
    <item>
      <title>Stackconf 2025 Recap</title>
      <dc:creator>rndmh3ro</dc:creator>
      <pubDate>Wed, 21 May 2025 08:15:00 +0000</pubDate>
      <link>https://dev.to/rndmh3ro/nachbericht-stackconf-2025-35n4</link>
      <guid>https://dev.to/rndmh3ro/nachbericht-stackconf-2025-35n4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0snvzvrvcw2a260vtc3h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0snvzvrvcw2a260vtc3h.jpg" alt="stackconf" width="560" height="671"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This year I attended &lt;a href="https://stackconf.eu/" rel="noopener noreferrer"&gt;Stackconf&lt;/a&gt; for the first time. Since Netways also organizes this conference, it was, as usual, nice to meet old acquaintances as well as new faces. It is always interesting to talk with peers from different fields, from large and small companies, national and international, and to discuss the various topics and perspectives, including those beyond the session topics.&lt;/p&gt;

&lt;p&gt;Not all of the &lt;a href="https://stackconf.eu/archives/archive-2025/" rel="noopener noreferrer"&gt;videos&lt;/a&gt; have been published yet, but I still want to summarize the main content of the sessions in a few words.&lt;/p&gt;

&lt;h1&gt;
  Day 1
&lt;/h1&gt;

&lt;p&gt;Things kicked off with &lt;strong&gt;The Sustainable Infrastructure of the future&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Put simply, the talk offered practical Green IT tips that are worth considering even for people who are rather skeptical about the topic.&lt;/p&gt;

&lt;p&gt;A few examples from the talk:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Autoscaling and efficient use of resources&lt;/li&gt;
&lt;li&gt;Shutting systems down when they are not needed, e.g. on weekends&lt;/li&gt;
&lt;li&gt;Cloud instead of your own servers, for better resource utilization&lt;/li&gt;
&lt;li&gt;“Follow the sun”: run servers where solar energy can currently be generated&lt;/li&gt;
&lt;li&gt;Is it really necessary to ask ChatGPT every time when Stack Overflow also has the answer?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The speaker also stated clearly that most CO2 offset certificates are just greenwashing.&lt;/p&gt;

&lt;p&gt;Right afterwards came &lt;strong&gt;Detect &amp;amp; Respond to Threats in Kubernetes with Falco&lt;/strong&gt;, an introduction to the open source security scanner &lt;a href="https://falco.org" rel="noopener noreferrer"&gt;falco&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What made it interesting is that falco inspects containers at runtime. The scanner ships with a predefined ruleset that can be adapted, and it also allows custom rule definitions.&lt;/p&gt;

&lt;p&gt;The tool can be used in the cloud as well as directly on the host. Helm charts, Docker containers, and even plain software packages are provided.&lt;/p&gt;

&lt;p&gt;Examples of findings during the demo:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A running binary that is not part of the image&lt;/li&gt;
&lt;li&gt;A shell session was started inside the container&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since this is an open source tool, it could be considered an alternative to the cloud offerings (e.g. Azure Defender), both in terms of cost and of vendor lock-in.&lt;/p&gt;

&lt;p&gt;After these sessions there was a short break and the first opportunity to network on site.&lt;/p&gt;

&lt;p&gt;Next was the more developer-oriented session &lt;strong&gt;Integrating generative AI into API Platform: Good idea?&lt;/strong&gt;, a showcase of what is possible when connecting AI APIs.&lt;/p&gt;

&lt;p&gt;The talk contained some interesting additional observations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It is still hard for companies to find a sensible use case for AI.&lt;/li&gt;
&lt;li&gt;With Green IT in mind, one should also ask: “Does the AI really need to be integrated, or does an alternative (e.g. a search engine) make more sense?”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Then came &lt;strong&gt;How NVMe over TCP runs PostgreSQL in Quicksilver mode!&lt;/strong&gt; and &lt;strong&gt;MultiCloud: Behind the Hype&lt;/strong&gt;. The latter explained the different types of cloud setups and the reasons for choosing each of them. The key takeaway was that these days a hybrid approach is usually taken.&lt;/p&gt;

&lt;p&gt;After the lunch break and the subsequent ignite talks, things continued with &lt;strong&gt;Embracing the Local-First Paradigm&lt;/strong&gt; and &lt;strong&gt;The Power of Small Habits in Agile Teams&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A coffee break was followed by a longer open spaces block. With the open spaces concept, conference attendees can propose topics they would like to discuss with other participants.&lt;/p&gt;

&lt;p&gt;The day closed with what I would call a tale of woe, under the title &lt;strong&gt;IP Authentication: A Tale of Performance Pitfalls and Challenges in Prod&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;What I liked most about it was that it was not a complex-topic, “we nailed it on the first try” talk. Instead, it covered how many times they attempted the go-live, how the rollback went, and what they did in between.&lt;/p&gt;

&lt;p&gt;With that, the first day was officially over and everyone moved on to the evening event. Anyone who has been to a Netways event before knows how good, but also how long, these are :).&lt;/p&gt;

&lt;h1&gt;
  Day 2
&lt;/h1&gt;

&lt;p&gt;The day started with the session &lt;strong&gt;2025: I Don’t Know K8S and at This Point, I’m Too Afraid To Ask&lt;/strong&gt;. As the title suggests, it was about Kubernetes. Content-wise it was a nice crash course on the topic. If you just want a quick overview without going straight into a one-day training, this talk is well worth watching.&lt;/p&gt;

&lt;p&gt;The following talk, &lt;strong&gt;Evolving Shift Left: Integrating Observability into Modern Software Development&lt;/strong&gt;, was an introduction to OpenTelemetry.&lt;/p&gt;

&lt;p&gt;After a coffee break, things continued with best practices for designing APIs in &lt;strong&gt;Breaking APIs: how to cook up the perfect design&lt;/strong&gt;. In the following talk, &lt;strong&gt;Building a Hyperconverged Proxmox VE Cluster with Ceph&lt;/strong&gt;, the speaker presented how his company provides storage in their Proxmox cluster using Ceph.&lt;/p&gt;

&lt;p&gt;After the lunch break and the subsequent ignites came &lt;strong&gt;How Open Source Communities are Defining the Next Generation of Infrastructure.&lt;/strong&gt; This was an introduction to OpenStack and a pitch to become part of the community.&lt;/p&gt;

&lt;p&gt;The talk &lt;strong&gt;Zap the Flakes! Leveraging AI to Combat Flaky Tests with CANNIER&lt;/strong&gt; presented Red Hat’s CANNIER (given by a Red Hat employee) and went into its current implementation status.&lt;/p&gt;

&lt;p&gt;After a short break there were open spaces again on day two. The closing session was &lt;strong&gt;Operator All the (stateful) Things&lt;/strong&gt;, which presented managing databases as code with Atlas.&lt;/p&gt;

&lt;p&gt;In summary, it was once again an all-around good conference visit with plenty of impressions and inspiration.&lt;/p&gt;

&lt;p&gt;I would like to highlight the following four talks, for the following reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Positively surprised me: The Sustainable Infrastructure of the future&lt;/li&gt;
&lt;li&gt;We should try this out: Detect &amp;amp; Respond to Threats in Kubernetes with Falco&lt;/li&gt;
&lt;li&gt;A motivation boost and a good feeling about everything you actually get done: The Power of Small Habits in Agile Teams&lt;/li&gt;
&lt;li&gt;A quick overview of a complex topic: 2025: I Don’t Know K8S and at This Point, I’m Too Afraid To Ask&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>conferences</category>
      <category>stackconf</category>
      <category>konferenz</category>
      <category>k8s</category>
    </item>
    <item>
      <title>TIL how to test CORS on the command line with curl</title>
      <dc:creator>rndmh3ro</dc:creator>
      <pubDate>Sat, 28 Sep 2024 19:30:00 +0000</pubDate>
      <link>https://dev.to/rndmh3ro/til-how-to-test-cors-on-the-command-line-with-curl-4kj5</link>
      <guid>https://dev.to/rndmh3ro/til-how-to-test-cors-on-the-command-line-with-curl-4kj5</guid>
      <description>&lt;p&gt;In my &lt;a href="https://dev.to/rndmh3ro/til-how-to-configure-additional-headers-in-gitlabs-nginx-24n-temp-slug-7370555"&gt;last TIL&lt;/a&gt; I talked about how to set additional security headers for Gitlab. But I also had to do this for other applications I was supporting, where it was more straight-forward to do it (meaning: with code).&lt;/p&gt;

&lt;p&gt;I needed to set the &lt;code&gt;access-control-allow-origin&lt;/code&gt; header in the other applications. This header is sent by a server and tells the browser which origins are allowed to read the server’s responses.&lt;/p&gt;

&lt;p&gt;This is relevant in web apps that are split across &lt;code&gt;webapp-frontend.example.com&lt;/code&gt; and &lt;code&gt;webapp-backend.example.com&lt;/code&gt;: whichever host the browser fetches cross-origin data from has to list the calling page’s origin in its &lt;code&gt;access-control-allow-origin&lt;/code&gt; header, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;access-control-allow-origin: https://webapp-backend.example.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So I set this in the frontend-application and wanted to test it from the command line.&lt;/p&gt;

&lt;p&gt;Here’s how I did it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -v --request OPTIONS 'https://webapp-frontend.example.com' -H 'Origin: https://webapp-backend.example.com' -H 'Access-Control-Request-Method: POST'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course I used curl for testing. To check whether &lt;code&gt;access-control-allow-origin&lt;/code&gt; works, you send an &lt;code&gt;OPTIONS&lt;/code&gt; request (a preflight) to see what methods the application allows. The header &lt;code&gt;'Access-Control-Request-Method: POST'&lt;/code&gt; tells the application that I want to make a &lt;code&gt;POST&lt;/code&gt; request (probably not strictly necessary just to test the allow-origin header).&lt;/p&gt;

&lt;p&gt;Now, to finally test it, I deactivated the &lt;code&gt;access-control-allow-origin&lt;/code&gt; header to see what would happen. Here’s the result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; curl -v --request OPTIONS 'https://webapp-frontend.example.com' -H 'Origin: https://webapp-backend.example.com' -H 'Access-Control-Request-Method: POST'
* Trying 127.0.0.1:8000...
* Connected to 127.0.0.1 (127.0.0.1) port 8000
&amp;gt; OPTIONS /healthz HTTP/1.1
&amp;gt; Host: 127.0.0.1:8000
&amp;gt; User-Agent: curl/8.7.1
&amp;gt; Accept: */*
&amp;gt; Origin: https://webapp-backend.example.com
&amp;gt; Access-Control-Request-Method: POST
&amp;gt; 
* Request completely sent off
&amp;lt; HTTP/1.1 400 Bad Request
&amp;lt; date: Fri, 06 Sep 2024 11:44:27 GMT
&amp;lt; server: uvicorn
&amp;lt; vary: Origin
&amp;lt; access-control-allow-methods: GET, POST
&amp;lt; access-control-max-age: 600
&amp;lt; access-control-allow-headers: Accept, Accept-Language, Content-Language, Content-Type
&amp;lt; content-length: 22
&amp;lt; content-type: text/plain; charset=utf-8
&amp;lt; content-security-policy: base-uri 'self'; connect-src 'self'; script-src; script-src-attr; frame-src; object-src 'none'; img-src; style-src; font-src; manifest-src; media-src; default-src
&amp;lt; referrer-policy: no-referrer
&amp;lt; x-frame-options: DENY
&amp;lt; x-content-type-options: nosniff
&amp;lt; cross-origin-embedder-policy: require-corp
&amp;lt; cross-origin-opener-policy: same-origin
&amp;lt; cross-origin-resource-policy: same-origin
&amp;lt; 
* Connection #0 to host 127.0.0.1 left intact
Disallowed CORS origin

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last line tells you that my request was denied. You can also see that there are three access-control headers in the response (&lt;code&gt;access-control-allow-methods&lt;/code&gt;, &lt;code&gt;access-control-max-age&lt;/code&gt;, &lt;code&gt;access-control-allow-headers&lt;/code&gt;), but no &lt;code&gt;access-control-allow-origin&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now I activate the header in the application and re-run the curl command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; curl -v --request OPTIONS 'https://webapp-frontend.example.com' -H 'Origin: https://webapp-backend.example.com' -H 'Access-Control-Request-Method: POST'
* Trying 127.0.0.1:8000...
* Connected to 127.0.0.1 (127.0.0.1) port 8000
&amp;gt; OPTIONS /healthz HTTP/1.1
&amp;gt; Host: 127.0.0.1:8000
&amp;gt; User-Agent: curl/8.7.1
&amp;gt; Accept: */*
&amp;gt; Origin: https://webapp-backend.example.com
&amp;gt; Access-Control-Request-Method: POST
&amp;gt; 
* Request completely sent off
&amp;lt; HTTP/1.1 200 OK
&amp;lt; date: Fri, 06 Sep 2024 11:44:07 GMT
&amp;lt; server: uvicorn
&amp;lt; vary: Origin
&amp;lt; access-control-allow-methods: GET, POST
&amp;lt; access-control-max-age: 600
&amp;lt; access-control-allow-headers: Accept, Accept-Language, Content-Language, Content-Type
&amp;lt; access-control-allow-origin: https://webapp-backend.example.com
&amp;lt; content-length: 2
&amp;lt; content-type: text/plain; charset=utf-8
&amp;lt; content-security-policy: base-uri 'self'; connect-src 'self'; script-src; script-src-attr; frame-src; object-src 'none'; img-src; style-src; font-src; manifest-src; media-src; default-src
&amp;lt; referrer-policy: no-referrer
&amp;lt; x-frame-options: DENY
&amp;lt; x-content-type-options: nosniff
&amp;lt; cross-origin-embedder-policy: require-corp
&amp;lt; cross-origin-opener-policy: same-origin
&amp;lt; cross-origin-resource-policy: same-origin
&amp;lt; 
* Connection #0 to host 127.0.0.1 left intact
OK

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now two things have changed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I get back an &lt;code&gt;OK&lt;/code&gt; response indicating that my request is allowed.&lt;/li&gt;
&lt;li&gt;There’s now a header in the response: &lt;code&gt;access-control-allow-origin: https://webapp-backend.example.com&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that’s it. With this simple method you can test on the command line whether your &lt;code&gt;access-control-allow-origin&lt;/code&gt; header works.&lt;/p&gt;
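&lt;p&gt;If you want to script this check instead of eyeballing the verbose output, grepping the response headers for the header name is enough. Here a canned response stands in for the live curl output (in a real check you would pipe &lt;code&gt;curl -s -D - -o /dev/null ...&lt;/code&gt; into the grep):&lt;/p&gt;

```shell
# Canned response headers standing in for live curl output; grep -c
# counts matching lines and exits non-zero when the header is missing.
headers='HTTP/1.1 200 OK
access-control-allow-origin: https://webapp-backend.example.com
vary: Origin'
printf '%s\n' "$headers" | grep -ci '^access-control-allow-origin:'
# prints: 1
```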

</description>
      <category>todayilearned</category>
      <category>security</category>
      <category>headers</category>
      <category>curl</category>
    </item>
    <item>
      <title>TIL how to configure additional headers in Gitlab’s nginx</title>
      <dc:creator>rndmh3ro</dc:creator>
      <pubDate>Tue, 24 Sep 2024 13:30:00 +0000</pubDate>
      <link>https://dev.to/rndmh3ro/til-how-to-configure-additional-headers-in-gitlabs-nginx-2kae</link>
      <guid>https://dev.to/rndmh3ro/til-how-to-configure-additional-headers-in-gitlabs-nginx-2kae</guid>
      <description>&lt;p&gt;Recently, I had to configure some security headers in GitLab. GitLab uses Nginx as its web server, and it allows for easy configuration changes for &lt;a href="https://docs.gitlab.com/omnibus/settings/nginx.html" rel="noopener noreferrer"&gt;some&lt;/a&gt; settings. For instance, enabling HTTP to HTTPS redirection can be done simply by setting &lt;code&gt;nginx['redirect_http_to_https'] = true&lt;/code&gt; in the &lt;code&gt;gitlab.rb&lt;/code&gt; configuration file.&lt;/p&gt;

&lt;p&gt;However, adding custom headers for security, particularly those that control cross-origin policies, requires a bit more work. These headers are essential for preventing certain types of attacks and ensuring better isolation between websites.&lt;/p&gt;

&lt;p&gt;I needed to set three headers: Cross-Origin-Opener-Policy (COOP), Cross-Origin-Embedder-Policy (COEP), and Cross-Origin-Resource-Policy (CORP). These headers are used to prevent cross-origin attacks, such as Spectre, and ensure that only resources from trusted origins can interact with the site.&lt;/p&gt;

&lt;p&gt;COOP ensures that the window or tab in which the site is running is isolated from any other cross-origin content. COEP guarantees that cross-origin resources can only be embedded if they explicitly grant permission. CORP restricts which origins can access certain resources, preventing untrusted external sites from accessing sensitive content.&lt;/p&gt;

&lt;p&gt;Configuring these in GitLab’s Nginx was a bit tricky. Nginx requires that every setting ends with a semicolon and a newline. Additionally, for better readability in the &lt;code&gt;gitlab.rb&lt;/code&gt; file, I added line breaks while ensuring there were no spaces after the backslashes at the end of each line.&lt;/p&gt;

&lt;p&gt;Here’s the final result that you can copy-paste into your &lt;code&gt;gitlab.rb&lt;/code&gt; configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nginx['custom_gitlab_server_config'] = "add_header Cross-Origin-Opener-Policy same-origin;\n\
                                        add_header Cross-Origin-Embedder-Policy require-corp;\n\
                                        add_header Cross-Origin-Resource-Policy same-site;"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key thing to note is that there should be no trailing spaces after the backslashes in this multi-line string, as even a single space could cause the configuration to fail.&lt;/p&gt;
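&lt;p&gt;To convince yourself the escaping is right, you can expand the &lt;code&gt;\n&lt;/code&gt; escapes with &lt;code&gt;printf&lt;/code&gt; and check that the string really renders as three directives, each ending in a semicolon and a newline (the trailing backslashes in &lt;code&gt;gitlab.rb&lt;/code&gt; only continue the Ruby line and do not end up in the string):&lt;/p&gt;

```shell
# The directive string from gitlab.rb, with \n expanded the way nginx
# sees it: three add_header lines, each terminated by a semicolon.
printf 'add_header Cross-Origin-Opener-Policy same-origin;\nadd_header Cross-Origin-Embedder-Policy require-corp;\nadd_header Cross-Origin-Resource-Policy same-site;\n'
```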

&lt;p&gt;Of course, you can then add additional headers or other Nginx settings after the &lt;code&gt;add_header&lt;/code&gt; settings.&lt;/p&gt;

</description>
      <category>todayilearned</category>
      <category>gitlab</category>
      <category>nginx</category>
      <category>security</category>
    </item>
    <item>
      <title>TIL how to define different Helm-Repos in a template</title>
      <dc:creator>rndmh3ro</dc:creator>
      <pubDate>Wed, 04 Sep 2024 09:30:00 +0000</pubDate>
      <link>https://dev.to/rndmh3ro/til-how-to-define-different-helm-repos-in-a-template-5830</link>
      <guid>https://dev.to/rndmh3ro/til-how-to-define-different-helm-repos-in-a-template-5830</guid>
      <description>&lt;p&gt;Recently I had to create a Helm-Chart (still not a fan of it, at all!) where the &lt;code&gt;image&lt;/code&gt; was different depending on if the helm-chart was used for local development or used in production.&lt;/p&gt;

&lt;p&gt;I had to resort to an if-else condition that I put into the &lt;code&gt;_helpers.tpl&lt;/code&gt; file so it can be accessed by any deployment template in the chart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{- /*
Set the registry to the local docker-registry if deploying locally, else use the production registry.
Expects a dict with the keys "name" (the image name) and "root" (the chart's root context).
If you use := inside the if-statement, it declares a new local variable that does not affect the $Image declared outside the if block, so = is used instead.
*/}}

{{- define "ImageFunctions" -}}
{{- $name := .name }}
{{- $Image := "" }}

{{- if (default .root.Values.LOCAL_DEV false) -}}
    {{- $Image = printf "%s:latest" $name | quote -}}
{{- else -}}
    {{- $Image = printf "%s/%s/%s:latest" "image-registry.openshift-image-registry.svc:5000" .root.Release.Namespace $name | quote -}}
{{- end -}}

{{- $Image -}}
{{- end }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break it down:&lt;/p&gt;

&lt;p&gt;First I define a function called “ImageFunctions” that is then called from the deployment templates. The function first defines an empty &lt;code&gt;Image&lt;/code&gt; variable. This is needed because if you declared the variable only inside the following if-statement, it would not be accessible outside the statement, and we could then not use it in our deployments.&lt;/p&gt;

&lt;p&gt;Then comes the if-else condition. Here I use a value &lt;code&gt;LOCAL_DEV&lt;/code&gt; that is defined in a values file which is only used when developing locally. Otherwise it is unset and defaults to &lt;code&gt;false&lt;/code&gt;. This way I only have to define this variable once for local development and do not have to define it in the values files for the production environment.&lt;/p&gt;

&lt;p&gt;Inside the if-branch I set the &lt;code&gt;Image&lt;/code&gt;-variable to “%s:latest” using a printf-statement. When templated, this resolves to, for example, “frontend-app:latest”. The &lt;code&gt;else&lt;/code&gt;-branch does the same, except it prints a different image-name that can be used in production.&lt;/p&gt;

&lt;p&gt;The line &lt;code&gt;{{- $Image -}}&lt;/code&gt; then outputs the variable as the result of the &lt;code&gt;ImageFunctions&lt;/code&gt;-function, so it can be used in the template.&lt;/p&gt;

&lt;p&gt;Here’s how to use it in your deployment-template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{- $Image := include "ImageFunctions" $dict -}}
---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: container
          image: {{ $Image }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break this down, too:&lt;/p&gt;

&lt;p&gt;The first line defines a new variable &lt;code&gt;Image&lt;/code&gt; that calls the previously defined &lt;code&gt;ImageFunctions&lt;/code&gt;. It will be filled with the &lt;code&gt;Image&lt;/code&gt;-variable coming from the function and will contain either &lt;code&gt;frontend-app:latest&lt;/code&gt; or &lt;code&gt;image-registry.openshift-image-registry.svc:5000/&amp;lt;namespace&amp;gt;/frontend-app:latest&lt;/code&gt;, depending on whether &lt;code&gt;LOCAL_DEV&lt;/code&gt; was set to &lt;code&gt;true&lt;/code&gt; or &lt;code&gt;false&lt;/code&gt;.&lt;/p&gt;
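&lt;p&gt;The chart defines &lt;code&gt;$dict&lt;/code&gt; elsewhere; as a sketch (the key names &lt;code&gt;root&lt;/code&gt; and &lt;code&gt;name&lt;/code&gt; are assumptions derived from the function body), it could be built like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{- /* assumed construction: pass the top-level context and the image name */}}
{{- $dict := dict "root" . "name" "frontend-app" -}}
{{- $Image := include "ImageFunctions" $dict -}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
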

&lt;p&gt;Now you can just insert that first line into each of your deployment-templates and then use the &lt;code&gt;$Image&lt;/code&gt;-variable. The correct image will then be used, no matter whether you use the chart locally or in production.&lt;/p&gt;

</description>
      <category>todayilearned</category>
      <category>helm</category>
      <category>yaml</category>
      <category>template</category>
    </item>
    <item>
      <title>Working with Gitlab on the CLI</title>
      <dc:creator>rndmh3ro</dc:creator>
      <pubDate>Mon, 03 Jun 2024 13:00:00 +0000</pubDate>
      <link>https://dev.to/rndmh3ro/working-with-gitlab-on-the-cli-ee1</link>
      <guid>https://dev.to/rndmh3ro/working-with-gitlab-on-the-cli-ee1</guid>
      <description>&lt;p&gt;&lt;a href="https://gitlab.com/gitlab-org/cli#glab" rel="noopener noreferrer"&gt;Glab&lt;/a&gt; is an open-source tool that allows you to work with GitLab from the command line, eliminating the need to switch to a browser to create or approve merge requests, start a pipeline run, or view issues.&lt;/p&gt;

&lt;p&gt;Glab can work with repositories hosted on gitlab.com as well as with your own GitLab instances. The tool automatically detects which instance it should work with.&lt;/p&gt;

&lt;p&gt;The CLI tool was started by Clement Sam and has been an official GitLab product since 2022.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Glab can be installed in various ways. Since it is written in Golang, the executable can be easily downloaded and run from the &lt;a href="https://gitlab.com/gitlab-org/cli/-/releases" rel="noopener noreferrer"&gt;releases page&lt;/a&gt;. Alternatively, Glab is also available in various package repositories. It runs on Linux, Windows, and macOS.&lt;/p&gt;

&lt;p&gt;All installation options can be found &lt;a href="https://gitlab.com/gitlab-org/cli/-/blob/main/docs/installation_options.md" rel="noopener noreferrer"&gt;here&lt;/a&gt;!&lt;/p&gt;

&lt;h3&gt;
  
  
  Registering with the GitLab Instance
&lt;/h3&gt;

&lt;p&gt;Before working with repositories, you need to authenticate with the GitLab instance. For this, you need a Personal Access Token, which you can create in your GitLab profile. For the gitlab.com instance, you can create it &lt;a href="https://gitlab.com/-/profile/personal_access_tokens" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Assign a name to the token and select the “api” and “write_repository” permissions. The generated token will be needed in the next step.&lt;/p&gt;

&lt;p&gt;Now, log in to the GitLab instance using the token by running &lt;code&gt;glab auth login&lt;/code&gt; and answering the prompts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab auth login
? What GitLab instance do you want to log into? gitlab.com
- Logging into gitlab.com
? How would you like to login? Token

Tip: you can generate a Personal Access Token here https://gitlab.com/-/profile/personal_access_tokens
The minimum required scopes are 'api' and 'write_repository'.
? Paste your authentication token: **************************? Choose default git protocol HTTPS
? Authenticate Git with your GitLab credentials? Yes
- glab config set -h gitlab.com git_protocol https
✓ Configured git protocol
- glab config set -h gitlab.com api_protocol https
✓ Configured API protocol
✓ Logged in as rndmh3ro

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can verify a successful login with &lt;code&gt;glab auth status&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab auth status
gitlab.com
  ✓ Logged in to gitlab.com as rndmh3ro (/home/segu/.config/glab-cli/config.yml)
  ✓ Git operations for gitlab.com configured to use https protocol.
  ✓ API calls for gitlab.com are made over https protocol
  ✓ REST API Endpoint: https://gitlab.com/api/v4/
  ✓ GraphQL Endpoint: https://gitlab.com/api/graphql/
  ✓ Token: **************************
git.example.com
  ✓ Logged in to git.example.com as segu (/home/segu/.config/glab-cli/config.yml)
  ✓ Git operations for git.example.com configured to use https protocol.
  ✓ API calls for git.example.com are made over https protocol
  ✓ REST API Endpoint: https://git.example.com/api/v4/
  ✓ GraphQL Endpoint: https://git.example.com/api/graphql/
  ✓ Token: **************************

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Working with Repositories
&lt;/h2&gt;

&lt;p&gt;Once successfully logged into the GitLab instance, you can work with repositories using glab.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloning a Repository
&lt;/h3&gt;

&lt;p&gt;To clone repositories with &lt;code&gt;glab&lt;/code&gt;, run &lt;code&gt;glab repo clone path/to/repo&lt;/code&gt;, followed by an optional target directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab repo clone gitlab-org/cli
Cloning into 'cli'...
remote: Enumerating objects: 18691, done.
remote: Counting objects: 100% (72/72), done.
remote: Compressing objects: 100% (34/34), done.
remote: Total 18691 (delta 53), reused 39 (delta 37), pack-reused 18619
Receiving objects: 100% (18691/18691), 22.98 MiB | 5.97 MiB/s, done.
Resolving deltas: 100% (12391/12391), done.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have multiple repositories in a group to clone, you can do this using &lt;code&gt;glab&lt;/code&gt; as well. Use the &lt;code&gt;--group&lt;/code&gt; option (or &lt;code&gt;-g&lt;/code&gt;) to clone all repositories in the group sequentially:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; GITLAB_HOST=gitlab.com glab repo clone -g gitlab-org
fatal: destination path 'verify-mr-123640-security-policy-project' already exists and is not an empty directory.
Cloning into 'verify-mr-123640'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (3/3), done.
Cloning into 'without-srp'...
remote: Enumerating objects: 30, done.
remote: Counting objects: 100% (14/14), done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 30 (delta 7), reused 4 (delta 4), pack-reused 16
Receiving objects: 100% (30/30), 5.94 KiB | 5.94 MiB/s, done.
Resolving deltas: 100% (9/9), done.
Cloning into 'container-scanning-with-sbom'...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Working with Merge Requests
&lt;/h2&gt;

&lt;p&gt;After checking out the repository, you can start working on issues or merge requests (MRs).&lt;/p&gt;

&lt;p&gt;A typical code change process often looks like this: You make your code changes, commit them, and push them to GitLab. Then, you want to create a merge request. Normally, you would now switch to the GitLab website to create the MR. Thanks to &lt;code&gt;glab&lt;/code&gt;, you don’t need to leave the command line.&lt;/p&gt;

&lt;p&gt;Using &lt;code&gt;glab mr create&lt;/code&gt;, you can interactively create an MR. You will be guided through the creation process, where you specify the title and description, and then you will be asked if you want to create the MR directly or view it in the web frontend before creating it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr create
? Choose a template Open a blank merge request
? Title: New Feature
? Description &amp;lt;Received&amp;gt;
? What's next? Submit

Creating merge request for test into master in gitlab-org/cli

!351 New Feature (test)
https://gitlab.com/gitlab-org/cli/-/merge_requests/1297

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then view it. If you want to do this in the browser, run &lt;code&gt;glab mr view&lt;/code&gt; with the &lt;code&gt;--web&lt;/code&gt; (or &lt;code&gt;-w&lt;/code&gt;) parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr view -w 1297

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also view the same content directly on the command line (including comments):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr view 1297 -R gitlab-org/cli
open • opened by rndmh3ro about 1 hour ago
docs: add installation options with wakemeops !1297

  ## Description

  Add installation options with wakemeops-repository.

  Note: I'm not affiliated with WakeMeOps, just a happy user.

  ## Related Issues

  Resolves #1363

  ## How has this been tested?

  not at all.

  ## Screenshots (if appropriate):

  ### Types of changes

  [] Bug fix (non-breaking change which fixes an issue)
  [] New feature (non-breaking change which adds functionality)
  [] Breaking change (fix or feature that would cause existing functionality
  to change)
  [✓] Documentation
  [] Chore (Related to CI or Packaging to platforms)
  [] Test gap

0 upvotes • 0 downvotes • 5 comments
Labels: Community contribution, documentation, linked-issue, tw::triaged, workflow::ready for review
Assignees: rndmh3ro
Reviewers: aqualls
Pipeline Status: success (View pipeline with `glab ci view add_wakemeops_docs`)
Approvals Status:
Rule "All Members" insufficient approvals (0/1 required):

Rule "/docs/" sufficient approvals (0/0 required):
Amy Qualls aqualls -

✓ This merge request has 1 changes

View this merge request on GitLab: https://gitlab.com/gitlab-org/cli/-/merge_requests/1297

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’re not working on the codebase yourself but are reviewing MRs from others, you can list open merge requests with &lt;code&gt;glab mr list&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr list
Showing 22 open merge requests on gitlab-org/cli (Page 1)

!1297 gitlab-org/cli!1297 docs: add installation options with wakemeops (main) ← (add_wakemeops_docs)
!1296 gitlab-org/cli!1296 fix(repo view): consider current host when viewing different repositories (#1362) (main) ← (1362_repo_view)
!1295 gitlab-org/cli!1295 fix: `glab mr delete` should work properly for forks (main) ← (fix_mr_delete)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards, you can check out the MR you want to review:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr checkout 1297

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’re satisfied with the content, you can add a note to the Merge Request and then approve it directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr note -R gitlab-org/cli -m "LGTM"

&amp;gt; glab mr approve
- Approving Merge Request !1297
✓ Approved

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And of course, you can also merge the MR right away:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr merge
? What merge method would you like to use? Rebase and merge
? What's next? Submit
✓ Rebase successful
! No pipeline running on test
✓ Rebased and merged
!1297 New Feature (test)
https://gitlab.com/gitlab-org/cli/-/merge_requests/1297

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the end of the day, you can view your merged MRs by using &lt;code&gt;glab mr list&lt;/code&gt; options to see only merged (&lt;code&gt;-M&lt;/code&gt;) or your own (&lt;code&gt;-a @me&lt;/code&gt;) MRs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr list -M -a @me
Showing 4 merged merge requests on gitlab-org/cli (Page 1)

!1279 gitlab-org/cli!1279 feat(schedule): Add commands to create and delete schedules (main) ← (create_del_sched)
!1176 gitlab-org/cli!1176 feat(schedule): Add command to run schedules (main) ← (run_schedule)
!1143 gitlab-org/cli!1143 docs: remove duplicate defaults in help (main) ← (fix_help_doc)
!1112 gitlab-org/cli!1112 feat(schedule): Add command to list schedules (main) ← (sched_list)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Working with Pipelines
&lt;/h2&gt;

&lt;p&gt;Code changes are tested through an automated CI/CD pipeline. Naturally, &lt;code&gt;glab&lt;/code&gt; offers the ability to work with pipelines.&lt;/p&gt;

&lt;p&gt;To start a pipeline on the main branch, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab ci run -b main
Created pipeline (id: 540823), status: created, ref: main, weburl: https://git.example.com/example/project/-/pipelines/540823

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can view the status of the pipeline like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab ci status
(failed) • 01m 11s lint test

https://git.example.com/example/project/-/pipelines/540812
SHA: 275cb8295c69db166e1b1c94936d4c4b67463701
Pipeline State: failed

? Choose an action: Exit

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a pipeline has failed, you can view the logs using &lt;code&gt;glab ci trace&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab ci trace

Searching for latest pipeline on test...
Getting jobs for pipeline 540812...

? Select pipeline job to trace: kics-scan (1209237) - failed

Getting job trace...
Showing logs for kics-scan job #1209237
Running with gitlab-runner 14.10.1 (f761588f)
  on example-shared-docker swAou6b9
Resolving secrets
Preparing the "docker" executor

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;glab&lt;/code&gt; works excellently with Unix pipes, so you can easily grep for errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab ci trace 1209237 | grep -i failed
Queries failed to execute: 10
ERROR: Job failed: exit code 50

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Linting
&lt;/h2&gt;

&lt;p&gt;Speaking of errors - they can easily creep into the CI/CD configuration file, the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you change this file, for example to add a new stage, and make a mistake, you will usually only notice it after you have pushed your changes and wonder why the pipeline does not start.&lt;/p&gt;

&lt;p&gt;Fortunately, you can check (“lint”) the configuration with &lt;code&gt;glab&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If an error has crept in, &lt;code&gt;glab ci lint&lt;/code&gt; will detect it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab ci lint
Validating...
.gitlab-ci.yml is invalid
1 (&amp;lt;unknown&amp;gt;): did not find expected key while parsing a block mapping at line 2 column 1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After correction, the linting will then report success:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab ci lint
Validating...
✓ CI/CD YAML is valid!

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Working with Schedules
&lt;/h2&gt;

&lt;p&gt;I am particularly proud of the feature to create and run Pipeline Schedules with &lt;code&gt;glab&lt;/code&gt;, because I implemented it.&lt;/p&gt;

&lt;p&gt;Pipeline Schedules are designed to automatically run pipelines at regular intervals.&lt;/p&gt;

&lt;p&gt;You can create these with &lt;code&gt;glab&lt;/code&gt;. To do this, you pass a cron expression (which defines when the pipeline should run), a description, and the branch on which the pipeline should run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab schedule create --cron "0 2 * * *" --description "Run main pipeline everyday" --ref "main" --variable "foo:bar"
Created schedule

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can view the created pipeline schedule with &lt;code&gt;glab schedule list&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab schedule list
Showing 1 schedules on example/project (Page 1)

ID Description Cron Owner Active
1038 Run main pipeline everyday * * * * * segu true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run the pipeline schedule outside the defined rhythm, start it with &lt;code&gt;glab schedule run&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab schedule run 1038
Started schedule with ID 1038

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And if it is no longer needed, you can simply delete it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab schedule delete 1038
Deleted schedule with ID 1038

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  glab API
&lt;/h2&gt;

&lt;p&gt;Not all functions that GitLab offers are usable with &lt;code&gt;glab&lt;/code&gt; yet. For such cases, it is possible to communicate directly with the GitLab API using &lt;code&gt;glab api&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The command to display the pipeline schedules (&lt;code&gt;glab schedule list&lt;/code&gt;) mentioned in the previous section can be replicated using a &lt;code&gt;glab api&lt;/code&gt; call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab api projects/:fullpath/pipeline_schedules/
[
  {
    "id": 1038,
    "description": "Run main pipeline everyday",
    "ref": "main",
    "cron": "* * * * *",
    "cron_timezone": "UTC",
    "next_run_at": "2023-06-22T08:33:00.000Z",
    "active": true,
    "created_at": "2023-06-22T08:24:02.199Z",
    "updated_at": "2023-06-22T08:24:02.199Z",
    "owner": {
      "id": 97,
      "username": "segu",
      "name": "Sebastian Gumprich",
      "state": "active",
    }
  }
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also delete the pipeline schedule this way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab api projects/:fullpath/pipeline_schedules/1038 -X DELETE

&amp;gt; glab api projects/:fullpath/pipeline_schedules/1038
glab: 404 Pipeline Schedule Not Found (HTTP 404)
{
  "message": "404 Pipeline Schedule Not Found"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Aliases
&lt;/h2&gt;

&lt;p&gt;To avoid having to remember the sometimes more complicated API calls, &lt;code&gt;glab&lt;/code&gt; has functionality to create aliases.&lt;/p&gt;

&lt;p&gt;Two aliases are already set up by default, which you can display as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab alias list
ci pipeline ci
co mr checkout

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So if you want to check out a merge request, you can simply call &lt;code&gt;glab co&lt;/code&gt; instead of &lt;code&gt;glab mr checkout&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can define your own aliases as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab alias set schedule_list 'api projects/:fullpath/pipeline_schedules/'
- Adding alias for schedule_list: api projects/:fullpath/pipeline_schedules/
✓ Added alias.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And of course, you can delete them again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab alias delete schedule_list
✓ Deleted alias schedule_list; was api projects/:fullpath/pipeline_schedules/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Set Variables from GitLab CI Locally
&lt;/h2&gt;

&lt;p&gt;Another useful &lt;code&gt;glab&lt;/code&gt; feature is working with CI/CD variables.&lt;/p&gt;

&lt;p&gt;You can display, create, and delete these as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab variable list

&amp;gt; glab variable set foo bar
✓ Created variable foo for 7001-07/nrwsp with scope *

&amp;gt; glab variable get foo
bar

&amp;gt; glab variable delete foo
✓ Deleted variable foo with scope * for 7001-07/nrwsp

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The variables created this way can easily be used locally.&lt;/p&gt;

&lt;p&gt;If you use Terraform, for example, you can set your TF_VAR variables easily by setting the output of &lt;code&gt;glab variable get&lt;/code&gt; as an environment variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export TF_VAR_db_root_password=$(glab variable get TF_VAR_db_root_password)
export TF_VAR_secret_key=$(glab variable get TF_VAR_secret_key)
export TF_VAR_access_key=$(glab variable get TF_VAR_access_key)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you copy these &lt;code&gt;export&lt;/code&gt;s into your README, each team member can set the correct Terraform variables with a simple copy-paste, without the hassle of copying them from a password manager.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bash-Completion and Further Information
&lt;/h2&gt;

&lt;p&gt;If you want to know what else &lt;code&gt;glab&lt;/code&gt; can do - the bash autocompletion shows it to you:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.zufallsheld.de%2Fimages%2Fautocomplete.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.zufallsheld.de%2Fimages%2Fautocomplete.gif" alt="glabs autocomplete shows explanations along with completions!"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And many more details can of course be found on the &lt;a href="https://docs.gitlab.com/ee/integration/glab/" rel="noopener noreferrer"&gt;Homepage&lt;/a&gt; of glab.&lt;/p&gt;

</description>
      <category>gitlab</category>
      <category>cli</category>
      <category>glab</category>
    </item>
    <item>
      <title>Working with GitLab from the Command Line</title>
      <dc:creator>rndmh3ro</dc:creator>
      <pubDate>Mon, 03 Jun 2024 13:00:00 +0000</pubDate>
      <link>https://dev.to/rndmh3ro/gitlab-von-der-kommandozeile-aus-bedienen-432</link>
      <guid>https://dev.to/rndmh3ro/gitlab-von-der-kommandozeile-aus-bedienen-432</guid>
      <description>&lt;p&gt;&lt;a href="https://gitlab.com/gitlab-org/cli#glab"&gt;Glab&lt;/a&gt; is an open-source tool that makes it possible to work with GitLab from the command line. This eliminates switching to the browser to create or approve merge requests, start a pipeline run, or view issues.&lt;/p&gt;

&lt;p&gt;Glab can work with repositories hosted on gitlab.com, but also with your own GitLab instances. The tool automatically detects which instance it is currently supposed to work with.&lt;/p&gt;

&lt;p&gt;The CLI tool was started by Clement Sam and has been an official GitLab product since 2022.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Glab can be installed in various ways. Since it is written in Golang, the executable can simply be downloaded from the &lt;a href="https://gitlab.com/gitlab-org/cli/-/releases"&gt;releases page&lt;/a&gt; and run. Alternatively, Glab is also available in various package repositories. It runs on Linux, Windows, and macOS.&lt;/p&gt;

&lt;p&gt;All installation options can be found &lt;a href="https://gitlab.com/gitlab-org/cli/-/blob/main/docs/installation_options.md"&gt;here&lt;/a&gt;!&lt;/p&gt;

&lt;h3&gt;
  
  
  Registering with the GitLab Instance
&lt;/h3&gt;

&lt;p&gt;Before you can work with repositories, you have to authenticate with the GitLab instance. For this you need a Personal Access Token, which you can create in your GitLab profile. For the gitlab.com instance, you can create it &lt;a href="https://gitlab.com/-/profile/personal_access_tokens"&gt;here&lt;/a&gt;. Assign any name to the token and select the “api” and “write_repository” permissions. The generated token is then needed in the next step.&lt;/p&gt;

&lt;p&gt;Now log in to the GitLab instance using the token by running &lt;code&gt;glab auth login&lt;/code&gt; and answering the prompts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab auth login
? What GitLab instance do you want to log into? gitlab.com
- Logging into gitlab.com
? How would you like to login? Token

Tip: you can generate a Personal Access Token here https://gitlab.com/-/profile/personal_access_tokens
The minimum required scopes are 'api' and 'write_repository'.
? Paste your authentication token: **************************? Choose default git protocol HTTPS
? Authenticate Git with your GitLab credentials? Yes
- glab config set -h gitlab.com git_protocol https
✓ Configured git protocol
- glab config set -h gitlab.com api_protocol https
✓ Configured API protocol
✓ Logged in as rndmh3ro

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can verify a successful login with &lt;code&gt;glab auth status&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab auth status
gitlab.com
  ✓ Logged in to gitlab.com as rndmh3ro (/home/segu/.config/glab-cli/config.yml)
  ✓ Git operations for gitlab.com configured to use https protocol.
  ✓ API calls for gitlab.com are made over https protocol
  ✓ REST API Endpoint: https://gitlab.com/api/v4/
  ✓ GraphQL Endpoint: https://gitlab.com/api/graphql/
  ✓ Token: **************************
git.example.com
  ✓ Logged in to git.example.com as segu (/home/segu/.config/glab-cli/config.yml)
  ✓ Git operations for git.example.com configured to use https protocol.
  ✓ API calls for git.example.com are made over https protocol
  ✓ REST API Endpoint: https://git.example.com/api/v4/
  ✓ GraphQL Endpoint: https://git.example.com/api/graphql/
  ✓ Token: **************************

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Working with Repositories
&lt;/h2&gt;

&lt;p&gt;Once you have successfully logged in to the GitLab instance, you can work with repositories using glab.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloning a Repository
&lt;/h2&gt;

&lt;p&gt;First of all, you will of course want to clone repositories with &lt;code&gt;glab&lt;/code&gt;. To do this, run &lt;code&gt;glab repo clone path/to/repo&lt;/code&gt;, followed by an optional target directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab repo clone gitlab-org/cli
Cloning into 'cli'...
remote: Enumerating objects: 18691, done.
remote: Counting objects: 100% (72/72), done.
remote: Compressing objects: 100% (34/34), done.
remote: Total 18691 (delta 53), reused 39 (delta 37), pack-reused 18619
Receiving objects: 100% (18691/18691), 22.98 MiB | 5.97 MiB/s, done.
Resolving deltas: 100% (12391/12391), done.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have multiple repositories in a group that you would like to clone, you can do this with &lt;code&gt;glab&lt;/code&gt; as well. For this you use the &lt;code&gt;--group&lt;/code&gt; option (or &lt;code&gt;-g&lt;/code&gt;), which clones all repositories of the group one after the other:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; GITLAB_HOST=gitlab.com glab repo clone -g gitlab-org
fatal: destination path 'verify-mr-123640-security-policy-project' already exists and is not an empty directory.
Cloning into 'verify-mr-123640'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (3/3), done.
Cloning into 'without-srp'...
remote: Enumerating objects: 30, done.
remote: Counting objects: 100% (14/14), done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 30 (delta 7), reused 4 (delta 4), pack-reused 16
Receiving objects: 100% (30/30), 5.94 KiB | 5.94 MiB/s, done.
Resolving deltas: 100% (9/9), done.
Cloning into 'container-scanning-with-sbom'...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Working with Merge Requests
&lt;/h2&gt;

&lt;p&gt;After the repository has been checked out, you can start working on issues or merge requests (MRs).&lt;/p&gt;

&lt;p&gt;A typical code-change process usually looks like this: you make your code changes, commit them, and then push them to GitLab. Next, you want to create a merge request. Normally you would now switch to the GitLab website and create the MR there. Thanks to &lt;code&gt;glab&lt;/code&gt;, you don’t have to leave the command line.&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;glab mr create&lt;/code&gt;, an MR is created interactively. You are guided through the creation process by specifying the title and description, and are then asked whether you want to create the MR directly or view it in the web frontend before it is created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr create
? Choose a template Open a blank merge request
? Title: New Feature
? Description &amp;lt;Received&amp;gt;
? What's next? Submit

Creating merge request for test into master in gitlab-org/cli

!351 New Feature (test)
https://gitlab.com/gitlab-org/cli/-/merge_requests/1297

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards you can inspect it. To do so in the browser, call &lt;code&gt;glab mr view&lt;/code&gt; with the &lt;code&gt;--web&lt;/code&gt; parameter (or &lt;code&gt;-w&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr view -w 1297

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also view the same content, including comments, directly on the command line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr view 1297 -R gitlab-org/cli
open • opened by rndmh3ro about 1 hour ago
docs: add installation options with wakemeops !1297

  ## Description

  Add installation options with wakemeops-repository.

  Note: I'm not affiliated with WakeMeOps, just a happy user.

  ## Related Issues

  Resolves #1363

  ## How has this been tested?

  not at all.

  ## Screenshots (if appropriate):

  ### Types of changes

  [] Bug fix (non-breaking change which fixes an issue)
  [] New feature (non-breaking change which adds functionality)
  [] Breaking change (fix or feature that would cause existing functionality
  to change)
  [✓] Documentation
  [] Chore (Related to CI or Packaging to platforms)
  [] Test gap

0 upvotes • 0 downvotes • 5 comments
Labels: Community contribution, documentation, linked-issue, tw::triaged, workflow::ready for review
Assignees: rndmh3ro
Reviewers: aqualls
Pipeline Status: success (View pipeline with `glab ci view add_wakemeops_docs`)
Approvals Status:
Rule "All Members" insufficient approvals (0/1 required):

Rule "/docs/" sufficient approvals (0/0 required):
Amy Qualls aqualls -

✓ This merge request has 1 changes

View this merge request on GitLab: https://gitlab.com/gitlab-org/cli/-/merge_requests/1297

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are not working on the codebase yourself but reviewing other people's MRs, &lt;code&gt;glab mr list&lt;/code&gt; shows you the open merge requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr list
Showing 22 open merge requests on gitlab-org/cli (Page 1)

!1297 gitlab-org/cli!1297 docs: add installation options with wakemeops (main) ← (add_wakemeops_docs)
!1296 gitlab-org/cli!1296 fix(repo view): consider current host when viewing a repo details (main) ← (jmc-1334)
!1295 gitlab-org/cli!1295 chore(ci): remove ssh key from build (main) ← (jmc-remove-ssh)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then check out the MR you want to review:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr checkout 1297

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are happy with the changes, you can, for example, add a note to the merge request and then approve it right away:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr note -R gitlab-org/cli -m "LGTM"

&amp;gt; glab mr approve
- Approving Merge Request !1297
✓ Approved

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And of course you can merge the MR as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr merge
? What merge method would you like to use? Rebase and merge
? What's next? Submit
✓ Rebase successful
! No pipeline running on test
✓ Rebased and merged
!1297 New Feature (test)
https://gitlab.com/gitlab-org/cli/-/merge_requests/1297

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To review your merged MRs at the end of the day, pass options to &lt;code&gt;glab mr list&lt;/code&gt; to show only merged MRs (&lt;code&gt;-M&lt;/code&gt;) or only your own (&lt;code&gt;-a @me&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab mr list -M -a @me
Showing 4 merged merge requests on gitlab-org/cli (Page 1)

!1279 gitlab-org/cli!1279 feat(schedule): Add commands to create and delete schedules (main) ← (create_del_sched)
!1176 gitlab-org/cli!1176 feat(schedule): Add command to run schedules (main) ← (run_schedule)
!1143 gitlab-org/cli!1143 docs: remove duplicate defaults in help (main) ← (fix_help_doc)
!1112 gitlab-org/cli!1112 feat(schedule): Add command to list schedules (main) ← (sched_list)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Working with Pipelines
&lt;/h2&gt;

&lt;p&gt;Code changes are tested by an automated CI/CD pipeline. Naturally, &lt;code&gt;glab&lt;/code&gt; lets you work with pipelines, too.&lt;/p&gt;

&lt;p&gt;To start a pipeline on the main branch, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab ci run -b main
Created pipeline (id: 540823 ), status: created , ref: main , weburl: https://git.example.com/example/project/-/pipelines/540823 )

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can check the status of the pipeline like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab ci status
(failed) • 01m 11s lint test

https://git.example.com/example/project/-/pipelines/540812
SHA: 275cb8295c69db166e1b1c94936d4c4b67463701
Pipeline State: failed

? Choose an action: Exit

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a pipeline failed, you can inspect its logs with &lt;code&gt;glab ci trace&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab ci trace

Searching for latest pipeline on test...
Getting jobs for pipeline 540812...

? Select pipeline job to trace: kics-scan (1209237) - failed

Getting job trace...
Showing logs for kics-scan job #1209237
Running with gitlab-runner 14.10.1 (f761588f)
  on example-shared-docker swAou6b9
Resolving secrets
Preparing the "docker" executor

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;glab&lt;/code&gt; works great with Unix pipes, so you can simply grep for errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab ci trace 1209237 | grep -i failed
Queries failed to execute: 10
ERROR: Job failed: exit code 50

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Linting
&lt;/h2&gt;

&lt;p&gt;Speaking of errors: the CI/CD configuration file, &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;, is a wonderful place to introduce them.&lt;/p&gt;

&lt;p&gt;If you change this file, for example to add a new stage, and make a mistake, you usually only notice it after pushing your changes and wondering why the pipeline does not start.&lt;/p&gt;

&lt;p&gt;Fortunately, &lt;code&gt;glab&lt;/code&gt; can validate (“lint”) the configuration.&lt;/p&gt;

&lt;p&gt;If an error has crept in, &lt;code&gt;glab ci lint&lt;/code&gt; will find it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab ci lint
Validating...
.gitlab-ci.yml is invalid
1 (&amp;lt;unknown&amp;gt;): did not find expected key while parsing a block mapping at line 2 column 1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After fixing it, the linting reports success:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab ci lint
Validating...
✓ CI/CD YAML is valid!

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Working with Schedules
&lt;/h2&gt;

&lt;p&gt;I am particularly proud of the ability to create and run pipeline schedules with &lt;code&gt;glab&lt;/code&gt;, because I implemented it.&lt;/p&gt;

&lt;p&gt;Pipeline schedules run pipelines automatically at regular intervals.&lt;/p&gt;

&lt;p&gt;You can create them with &lt;code&gt;glab&lt;/code&gt;. To do so, pass the command a cron expression (defining when the pipeline should run), a description, and the branch the pipeline should run on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab schedule create --cron "0 2 * * *" --description "Run main pipeline everyday" --ref "main" --variable "foo:bar"
Created schedule

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The newly created pipeline schedule can be viewed with &lt;code&gt;glab schedule list&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab schedule list
Showing 1 schedules on example/project (Page 1)

ID Description Cron Owner Active
1038 Run main pipeline everyday * * * * * segu true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run the pipeline schedule outside its regular rhythm, start it with &lt;code&gt;glab schedule run&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab schedule run 1038
Started schedule with ID 1038

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And once it is no longer needed, you can simply delete it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab schedule delete 1038
Deleted schedule with ID 1038

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  glab API
&lt;/h2&gt;

&lt;p&gt;Not every feature GitLab offers is available through &lt;code&gt;glab&lt;/code&gt; yet. For such cases, &lt;code&gt;glab api&lt;/code&gt; lets you talk to the GitLab API directly.&lt;/p&gt;

&lt;p&gt;The command from the previous section for listing pipeline schedules (&lt;code&gt;glab schedule list&lt;/code&gt;) can be reproduced with a &lt;code&gt;glab api&lt;/code&gt; call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab api projects/:fullpath/pipeline_schedules/
[
  {
    "id": 1038,
    "description": "Run main pipeline everyday",
    "ref": "main",
    "cron": "* * * * *",
    "cron_timezone": "UTC",
    "next_run_at": "2023-06-22T08:33:00.000Z",
    "active": true,
    "created_at": "2023-06-22T08:24:02.199Z",
    "updated_at": "2023-06-22T08:24:02.199Z",
    "owner": {
      "id": 97,
      "username": "segu",
      "name": "Sebastian Gumprich",
      "state": "active"
    }
  }
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deleting the pipeline schedule works the same way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab api projects/:fullpath/pipeline_schedules/1038 -X DELETE

&amp;gt; glab api projects/:fullpath/pipeline_schedules/1038
glab: 404 Pipeline Schedule Not Found (HTTP 404)
{
  "message": "404 Pipeline Schedule Not Found"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Aliases
&lt;/h2&gt;

&lt;p&gt;So that you do not have to memorize the sometimes complicated API calls, &lt;code&gt;glab&lt;/code&gt; lets you define aliases.&lt;/p&gt;

&lt;p&gt;Two aliases are already set up by default; you can list them like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab alias list
ci pipeline ci
co mr checkout

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So to check out a merge request, you can simply call &lt;code&gt;glab co&lt;/code&gt; instead of &lt;code&gt;glab mr checkout&lt;/code&gt;.&lt;/p&gt;
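
&lt;p&gt;For example, checking out merge request 1297 from earlier then becomes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab co 1297

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;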

&lt;p&gt;You can define your own aliases like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab alias set schedule_list 'api projects/:fullpath/pipeline_schedules/'
- Adding alias for schedule_list: api projects/:fullpath/pipeline_schedules/
✓ Added alias.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And of course you can delete them again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab alias delete schedule_list
✓ Deleted alias schedule_list; was api projects/:fullpath/pipeline_schedules/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting GitLab CI variables locally
&lt;/h2&gt;

&lt;p&gt;Another useful &lt;code&gt;glab&lt;/code&gt; feature is working with CI/CD variables.&lt;/p&gt;

&lt;p&gt;You can list them, create them, and delete them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; glab variable list

&amp;gt; glab variable set foo bar
✓ Created variable foo for example/project with scope *

&amp;gt; glab variable get foo
bar

&amp;gt; glab variable delete foo
✓ Deleted variable foo with scope * for example/project

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Variables created this way are also easy to use locally.&lt;/p&gt;

&lt;p&gt;If you use Terraform, for example, you can set your TF_VAR variables simply by exporting the output of &lt;code&gt;glab variable get&lt;/code&gt; as environment variables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export TF_VAR_db_root_password=$(glab variable get TF_VAR_db_root_password)
export TF_VAR_secret_key=$(glab variable get TF_VAR_secret_key)
export TF_VAR_access_key=$(glab variable get TF_VAR_access_key)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you copy these &lt;code&gt;export&lt;/code&gt;s into your README, every team member can set the correct Terraform variables with a simple copy-paste instead of laboriously copying them from a password manager.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bash Completion and Further Information
&lt;/h2&gt;

&lt;p&gt;If you want to know what else &lt;code&gt;glab&lt;/code&gt; can do, bash autocompletion will show you:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iW8JU9QT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://www.zufallsheld.de/images/autocomplete.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iW8JU9QT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://www.zufallsheld.de/images/autocomplete.gif" alt="glabs autocomplete zeigt neben den Completions auch Erklärungen dazu an!" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And of course you can find much more information on glab’s &lt;a href="https://docs.gitlab.com/ee/integration/glab/"&gt;homepage&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>gitlab</category>
      <category>cli</category>
      <category>glab</category>
    </item>
    <item>
      <title>Interesting Uses of Ansible’s ternary filter</title>
      <dc:creator>rndmh3ro</dc:creator>
      <pubDate>Wed, 21 Feb 2024 20:00:00 +0000</pubDate>
      <link>https://dev.to/rndmh3ro/interesting-uses-of-ansibles-ternary-filter-2mif</link>
      <guid>https://dev.to/rndmh3ro/interesting-uses-of-ansibles-ternary-filter-2mif</guid>
      <description>&lt;p&gt;Some time ago I discovered an interesting use of the &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/ternary_filter.html"&gt;ternary-filter&lt;/a&gt; in Ansible. A ternary-filter in Ansible is a filter that takes three arguments: a condition, an value if the condition is true, and an alternative value if the condition is false.&lt;/p&gt;


&lt;p&gt;Here’s a simple example straight from Ansible’s documentation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: service-foo, use systemd module unless upstart is present, then use old service module
  service:
    state: restarted
    enabled: yes
    use: "{{ (ansible_service_mgr == 'upstart') | ternary('service', 'systemd') }}"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But there are many more interesting use cases for this filter, so I decided to take a look at what Ansible’s collection authors use it for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Display command output only if verbosity is greater than 0.
&lt;/h2&gt;

&lt;p&gt;This was the usage that initially got me interested in different use-cases for the filter.&lt;/p&gt;

&lt;p&gt;Depending on the verbosity level you use when running a playbook (i.e. how many &lt;code&gt;-v&lt;/code&gt;s you add to the command), the command runs in quiet mode (with the &lt;code&gt;-quiet&lt;/code&gt; flag) or not.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Validate configuration
  become: true
  become_user: "{{ consul_user }}"
  ansible.builtin.command: &amp;gt;
    {{ consul_binary }} validate {{ (ansible_verbosity == 0) | ternary("-quiet", "") }}
    {{ consul_config_path }}/config.json {{ consul_configd_path }}
  changed_when: false

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(&lt;a href="https://github.com/ansible-collections/ansible-consul/blob/3ee2b43972e0da2f378422d16c672a7c719c4998/tasks/config.yml#L49"&gt;Source&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing task idempotency in one run without molecule
&lt;/h2&gt;

&lt;p&gt;This use of the ternary-filter is useful for testing task idempotency in one run.&lt;/p&gt;

&lt;p&gt;First, the task-file &lt;code&gt;test_create_scheduler.yml&lt;/code&gt; is imported without a variable set, so the task will change something. Then, the task-file is imported again, however this time with the variable &lt;code&gt;test_proxysql_scheduler_check_idempotence&lt;/code&gt; set to &lt;code&gt;true&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: "{{ role_name }} | test_create_scheduler | test create scheduler"
  import_tasks: test_create_scheduler.yml

- name: "{{ role_name }} | test_create_scheduler | test idempotence of create scheduler"
  import_tasks: test_create_scheduler.yml
  vars:
    test_proxysql_scheduler_check_idempotence: true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When importing the test file &lt;code&gt;test_create_scheduler.yml&lt;/code&gt; without the variable &lt;code&gt;test_proxysql_scheduler_check_idempotence&lt;/code&gt;, the assert will check for &lt;code&gt;status is changed&lt;/code&gt;, because the ternary-filter evaluated the variable &lt;code&gt;test_proxysql_scheduler_check_idempotence&lt;/code&gt; as &lt;code&gt;false&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: "{{ role_name }} | {{ current_test }} | check if create scheduler reported a change"
  assert:
    that:
      - "status is {{ test_proxysql_scheduler_check_idempotence|ternary('not changed', 'changed') }}"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When importing the test file &lt;code&gt;test_create_scheduler.yml&lt;/code&gt; with the variable &lt;code&gt;test_proxysql_scheduler_check_idempotence&lt;/code&gt;, the assert will check for &lt;code&gt;status is not changed&lt;/code&gt;, because the ternary-filter evaluated the variable &lt;code&gt;test_proxysql_scheduler_check_idempotence&lt;/code&gt; as &lt;code&gt;true&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://github.com/ansible-collections/community.proxysql/blob/d4ef72ae73dfad8d46ff639dd2ac76e204635d5b/tests/integration/targets/test_proxysql_scheduler/tasks/test_create_scheduler.yml#L13"&gt;Source&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Do things based on regex searches
&lt;/h2&gt;

&lt;p&gt;In Ansible you can chain filters using a pipe (&lt;code&gt;|&lt;/code&gt;). This allows you to filter based on regex searches (which in hindsight obviously works, but I never thought about it).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: "{{ role_name }} | {{ current_test }} | are we performing a delete"
  set_fact:
    test_delete: "{{ current_test | regex_search('^test_delete') | ternary(true, false) }}"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(&lt;a href="https://github.com/ansible-collections/community.proxysql/blob/d4ef72ae73dfad8d46ff639dd2ac76e204635d5b/tests/integration/targets/test_proxysql_mysql_users/tasks/base_test.yml#L5"&gt;Source&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Handle older Python versions easily
&lt;/h2&gt;

&lt;p&gt;In the following task, the cassandra-driver is installed. If the (obsolete!) Python version in use starts with 2.7, pip installs cassandra-driver version 3.26.*. If a recent Python version is used, pip installs the latest version of the cassandra-driver.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Install cassandra-driver
  pip:
    name: "cassandra-driver{{ ansible_python_version.startswith('2.7') | ternary('==3.26.*', '') }}"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s definitely not the most elegant solution, but it works. (I’d probably have tried to install the correct cassandra-driver version based on the operating system and the Python version it ships with.)&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://github.com/ansible-collections/community.cassandra/blob/a35580565c949d7d13bbfd5dca307746e42d3725/tests/integration/targets/setup_cassandra/tasks/cassandra_auth.yml#L130"&gt;Source&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Comment line in template if var is defined
&lt;/h2&gt;

&lt;p&gt;This task will add a line starting with &lt;code&gt;ssl_ciphers&lt;/code&gt;, if the variable &lt;code&gt;zabbix_web_ssl_cipher_suite&lt;/code&gt; is defined and not &lt;code&gt;none&lt;/code&gt;. Otherwise it will add the same line but commented out.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{ (zabbix_web_ssl_cipher_suite is defined and zabbix_web_ssl_cipher_suite is not none) | ternary('', '# ') }}ssl_ciphers {{ zabbix_web_ssl_cipher_suite | default('') }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(&lt;a href="https://github.com/ansible-collections/community.zabbix/blob/facde86d8e388673d503ebc3b19fd0f9f6037798/roles/zabbix_web/templates/nginx_vhost.conf.j2#L73"&gt;Source&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;I used another way to add commented out lines in a template (&lt;a href="https://github.com/dev-sec/ansible-collection-hardening/blob/bdf6d65cfd9d63b7ffe00f67e280f652299283bc/roles/ssh_hardening/templates/opensshd.conf.j2#L48"&gt;see&lt;/a&gt;). I used the &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/comment_filter.html"&gt;comment-filter&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{ "HostKeyAlgorithms " ~ ssh_host_key_algorithms|join(',') if ssh_host_key_algorithms else "HostKeyAlgorithms" | comment }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that I see my own code, I could probably use the ternary-filter here, too!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{ ssh_host_key_algorithms | ternary("HostKeyAlgorithms " ~ ssh_host_key_algorithms|join(','), "HostKeyAlgorithms" | comment) }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But I think I actually like the if-else syntax more.&lt;/p&gt;

&lt;p&gt;Do you have any other interesting uses of the ternary-filter?&lt;/p&gt;

</description>
      <category>ansible</category>
    </item>
    <item>
      <title>TIL how to create Files and Commits via the Github-API and Github-CLI</title>
      <dc:creator>rndmh3ro</dc:creator>
      <pubDate>Mon, 11 Dec 2023 12:55:00 +0000</pubDate>
      <link>https://dev.to/rndmh3ro/til-how-to-create-files-and-commits-via-the-github-api-and-github-cli-bfh</link>
      <guid>https://dev.to/rndmh3ro/til-how-to-create-files-and-commits-via-the-github-api-and-github-cli-bfh</guid>
      <description>&lt;p&gt;Recently, I found myself needing to incorporate a &lt;a href="https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners"&gt;CODEOWNERS&lt;/a&gt;-file to multiple repositories within our Github organization. The objective was to automatically reviewers for pull-requests. And codeowners are a perfect tool for this. This way every colleague in our company can join our Github organization and get write-access to the repositories (so they can create branches) and finally create a pull requests without needing to fork it to their personal accounts.&lt;/p&gt;

&lt;p&gt;But I absolutely wanted to avoid creating the codeowners-file by either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloning all repositories before adding, committing, and pushing the files (much less through a Pull Request)&lt;/li&gt;
&lt;li&gt;Adding the files through the Github UI (meaning clicking on “Add file”, “Create new File”, inserting the contents, and committing).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fortunately you can also do this via Github’s REST-API! But it was harder than I thought!&lt;/p&gt;

&lt;p&gt;I did not want to add individual persons to the codeowners file: if a person leaves the company or no longer wants to be a code owner, I’d have to change the codeowners file in all repositories again. Teams, in this context, are hugely beneficial. Github lets you create teams in your organization and add people to them. You can then use these teams in the codeowners file. This way, removing a person from a team removes them as a code owner across all repositories where the team is used.&lt;/p&gt;
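
&lt;p&gt;A CODEOWNERS file that uses such a team can then be a single line, assigning the team as owner of every file in the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* @telekom-mms/terraform-maintainers

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;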

&lt;p&gt;After creating the teams and adding people to them (I did this by hand), I needed to give the teams access to the repositories. This I did not do by hand but via the API.&lt;/p&gt;

&lt;p&gt;The command to add a team to a repository looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; gh api -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" -f permission=push -X PUT /orgs/telekom-mms/teams/terraform-maintainers/repos/telekom-mms/examplerepo; done

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives the team &lt;code&gt;terraform-maintainers&lt;/code&gt; access to the &lt;code&gt;examplerepo&lt;/code&gt; repository and grants them write access (with &lt;code&gt;-f permission=push&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;To do this for all Terraform repositories in our organization, I use a simple for-loop on the command line, listing all repositories with &lt;code&gt;gh repo list&lt;/code&gt; and then grepping for the repositories with &lt;code&gt;terraform&lt;/code&gt; in their name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; for i in $(gh repo list telekom-mms --json name -q .[].name -L 100 | grep terraform); do gh api -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" -f permission=push -X PUT /orgs/telekom-mms/teams/terraform-maintainers/repos/telekom-mms/$i; done

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the teams are added, I can then add them to the codeowners-file. To do this via the API, I basically have to PUT &lt;em&gt;content&lt;/em&gt; into the repository (see the &lt;a href="https://docs.github.com/en/rest/repos/contents?apiVersion=2022-11-28"&gt;docs&lt;/a&gt; for more information). Since this is git after all, I need to do this in a commit. And a commit needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a commit message - this is done by adding the form-field &lt;code&gt;message&lt;/code&gt; with the message as a value.&lt;/li&gt;
&lt;li&gt;a committer - this was trickier to achieve. I need to pass a JSON string with a name and an email address as the value.&lt;/li&gt;
&lt;li&gt;the content of the file - passed as a base64-encoded string.&lt;/li&gt;
&lt;/ul&gt;
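
&lt;p&gt;The base64-encoded content for the one-line CODEOWNERS file can be generated with the &lt;code&gt;base64&lt;/code&gt; tool; this is exactly the string used in the API call below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; printf '* @telekom-mms/terraform-maintainers\n' | base64
KiBAdGVsZWtvbS1tbXMvdGVycmFmb3JtLW1haW50YWluZXJzCg==

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;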

&lt;p&gt;The final call to the API looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; gh api --method PUT repos/telekom-mms/examplerepo/contents/CODEOWNERS -F message="add codeowners" -F committer:='{"name:"Sebastian Gumprich",email:"sebastian.gumprich@telekom.de"}' -F content="KiBAdGVsZWtvbS1tbXMvdGVycmFmb3JtLW1haW50YWluZXJzCg=="; done

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Do this in a loop and you’re good to go.&lt;/p&gt;

&lt;p&gt;But of course you will make a mistake in the file at some point (at least I did) and then have to update it. If you want to update a file with a commit, you cannot use the above command as-is: you need to add the blob SHA of the file you want to replace.&lt;/p&gt;

&lt;p&gt;How do you get it? By querying the API again (there’s a &lt;a href="https://dev.to-systems"&gt;way&lt;/a&gt; to do this without using the API but I did not do this since the API-way was easier):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; gh api repos/telekom-mms/examplerepo/contents/CODEOWNERS -q .sha
d79b5f4ecfd52ef6ecea0a71744f6ce94a2522da

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now add this SHA to the API-call and then you can update the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; gh api --method PUT repos/telekom-mms/examplerepo/contents/CODEOWNERS -F message="update codeowners" -F committer:='{"name:"Sebastian Gumprich",email:"sebastian.gumprich@telekom.de"}' -F content="KiBAdGVsZWtvbS1tbXMvdGVycmFmb3JtLW1haW50YWluZXJzCg==" -F sha="d79b5f4ecfd52ef6ecea0a71744f6ce94a2522da"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the correct owners in place, now the last step was to set the branch protection rule that defines that codeowners need to do reviews. This took me the longest time since from the &lt;a href="https://docs.github.com/en/rest/branches/branch-protection?apiVersion=2022-11-28#update-branch-protection"&gt;documentation&lt;/a&gt; alone I couldn’t create a working request.&lt;/p&gt;

&lt;p&gt;So I tried to start from the existing branch protection of a repository (easy to get by running &lt;code&gt;gh api repos/telekom-mms/examplerepo/branches/main/protection&lt;/code&gt;), but getting the following response into a working curl request, and then looping over that request, proved to be impossible:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "url": "https://api.github.com/repos/telekom-mms/examplerepo/branches/main/protection",
  "required_pull_request_reviews": {
    "url": "https://api.github.com/repos/telekom-mms/examplerepo/branches/main/protection/required_pull_request_reviews",
    "dismiss_stale_reviews": false,
    "require_code_owner_reviews": true,
    "require_last_push_approval": true,
    "required_approving_review_count": 1,
    "bypass_pull_request_allowances": {
      "users": [],
      "teams": [],
      "apps": [
        {
          "id": 2740,
          "slug": "renovate",
          "node_id": "MDM6QXBwMjc0MA==",
          "owner": {
            "login": "renovatebot",
            "id": 38656520,
            "node_id": "MDEyOk9yZ2FuaXphdGlvbjM4NjU2NTIw",
            "avatar_url": "https://avatars.githubusercontent.com/u/38656520?v=4",
            "gravatar_id": "",
            "url": "https://api.github.com/users/renovatebot",
            "html_url": "https://github.com/renovatebot",
            "followers_url": "https://api.github.com/users/renovatebot/followers",
            "following_url": "https://api.github.com/users/renovatebot/following{/other_user}",
            "gists_url": "https://api.github.com/users/renovatebot/gists{/gist_id}",
            "starred_url": "https://api.github.com/users/renovatebot/starred{/owner}{/repo}",
            "subscriptions_url": "https://api.github.com/users/renovatebot/subscriptions",
            "organizations_url": "https://api.github.com/users/renovatebot/orgs",
            "repos_url": "https://api.github.com/users/renovatebot/repos",
            "events_url": "https://api.github.com/users/renovatebot/events{/privacy}",
            "received_events_url": "https://api.github.com/users/renovatebot/received_events",
            "type": "Organization",
            "site_admin": false
          },
          "name": "Renovate",
          "description": "[omitted for brevity]",
          "external_url": "https://www.mend.io/free-developer-tools/renovate/",
          "html_url": "https://github.com/apps/renovate",
          "created_at": "2017-06-02T07:04:12Z",
          "updated_at": "2023-11-06T09:25:36Z",
          "permissions": {
            "administration": "read",
            "checks": "write",
            "contents": "write",
            "emails": "read",
            "issues": "write",
            "members": "read",
            "metadata": "read",
            "packages": "read",
            "pull_requests": "write",
            "statuses": "write",
            "vulnerability_alerts": "read",
            "workflows": "write"
          },
          "events": [
            "issues",
            "pull_request",
            "push",
            "repository"
          ]
        }
      ]
    }
  },
  "required_signatures": {
    "url": "https://api.github.com/repos/telekom-mms/examplerepo/branches/main/protection/required_signatures",
    "enabled": false
  },
  "enforce_admins": {
    "url": "https://api.github.com/repos/telekom-mms/examplerepo/branches/main/protection/enforce_admins",
    "enabled": false
  },
  "required_linear_history": {
    "enabled": false
  },
  "allow_force_pushes": {
    "enabled": false
  },
  "allow_deletions": {
    "enabled": false
  },
  "block_creations": {
    "enabled": false
  },
  "required_conversation_resolution": {
    "enabled": false
  },
  "lock_branch": {
    "enabled": false
  },
  "allow_fork_syncing": {
    "enabled": false
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So I started from the above response and cut it down to a minimal request that works. This took quite some time, but here’s the final result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; curl -X PUT -H "Authorization: token $GITHUB_TOKEN" -H "Accept: application/vnd.github.v3+json" https://api.github.com/repos/telekom-mms/examplerepo/branches/main/protection -d '{
  "required_pull_request_reviews": {
    "dismiss_stale_reviews": false,
    "require_code_owner_reviews": true,
    "require_last_push_approval": true,
    "required_approving_review_count": 1,
    "bypass_pull_request_allowances": {
      "apps": ["branch-protection-as-code"]
    }
  },
  "enforce_admins": false,
  "restrictions": null,
  "required_status_checks": null
}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
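&lt;p&gt;Before firing the minimal body at the API in a loop, it is worth parsing it locally once. This sketch only checks that the JSON is well-formed and carries the fields that matter:&lt;/p&gt;

```python
import json

# The minimal branch-protection body from the curl call above, kept as a
# string so it can be pasted into the request unchanged.
body = """
{
  "required_pull_request_reviews": {
    "dismiss_stale_reviews": false,
    "require_code_owner_reviews": true,
    "require_last_push_approval": true,
    "required_approving_review_count": 1,
    "bypass_pull_request_allowances": {"apps": ["branch-protection-as-code"]}
  },
  "enforce_admins": false,
  "restrictions": null,
  "required_status_checks": null
}
"""

protection = json.loads(body)

# The whole point of the exercise: code owner reviews are enforced.
assert protection["required_pull_request_reviews"]["require_code_owner_reviews"]
print("body is valid JSON")
```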



&lt;p&gt;Looping over this short piece of code worked, and I am now able to manage our repositories at scale (now someone tell me about the existing tool that already does this).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EG_hbYko--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://vg08.met.vgwort.de/na/607884ae79e24cd3896ea36645659eec" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EG_hbYko--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://vg08.met.vgwort.de/na/607884ae79e24cd3896ea36645659eec" alt="" width="1" height="1"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>todayilearned</category>
      <category>cloud</category>
      <category>github</category>
      <category>linux</category>
    </item>
    <item>
      <title>How I teach Ansible to my colleagues: A hands-on training session.</title>
      <dc:creator>rndmh3ro</dc:creator>
      <pubDate>Mon, 02 Oct 2023 20:00:00 +0000</pubDate>
      <link>https://dev.to/rndmh3ro/how-i-teach-ansible-to-my-colleagues-a-hands-on-training-session-36gm</link>
      <guid>https://dev.to/rndmh3ro/how-i-teach-ansible-to-my-colleagues-a-hands-on-training-session-36gm</guid>
      <description>&lt;p&gt;As someone with years of experience using and teaching Ansible I want to share with you how I teach it to my colleagues in the most practical way possible.&lt;/p&gt;

&lt;p&gt;However, before I get into the actual content of the training, it should be emphasized how important a functioning and identically set up working environment is for a successful learning journey.&lt;/p&gt;

&lt;p&gt;A prerequisite for participation in the training is a work device with an SSH client, which the participants use to connect to virtual machines provided specifically for them. If these requirements are not met, problems can arise during the training that distract from the actual content I want to convey. It is therefore essential to prepare the training environment in advance so that everything runs as smoothly as possible. Using Ansible, this is done almost fully automatically.&lt;/p&gt;

&lt;p&gt;After laying down the prerequisites, let’s move on to the contents of the training. It is designed to last one day and covers the following topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is Ansible and how does it work?&lt;/li&gt;
&lt;li&gt;How do you install Ansible?&lt;/li&gt;
&lt;li&gt;How does Ansible work fundamentally and what components does it consist of?&lt;/li&gt;
&lt;li&gt;What are ad hoc commands and how do you use them?&lt;/li&gt;
&lt;li&gt;What are playbooks, how do you write them and how do you use them?&lt;/li&gt;
&lt;li&gt;What are roles, how do you write them and how do you use them?&lt;/li&gt;
&lt;li&gt;Where do you get Ansible roles, i.e. the ecosystem?&lt;/li&gt;
&lt;li&gt;Best practices and practical insights for using Ansible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The training is a balanced mix of theory and practical exercises where the practical parts outweigh the theoretical.&lt;/p&gt;

&lt;p&gt;The first section is a short theoretical overview of the purpose and functionality of Ansible, as well as a comparison with other tools and an insight into the ecosystem around the automation tool. This theoretical part takes at most one hour, including a welcome and introduction of the participants.&lt;/p&gt;

&lt;p&gt;To craft a personalized learning journey, I encourage learners to share their experiences, hopes, and expectations from Ansible during this introductory phase. This interactive exchange helps shape the session around the unique needs of each learner.&lt;/p&gt;

&lt;p&gt;After setting the stage with a solid theoretical understanding, we leap into the world of practice.&lt;/p&gt;

&lt;p&gt;I begin by walking through the Ansible installation on various operating systems step by step. After that, I demonstrate the installation in my own working environment. Finally, the participants perform the installation themselves in their working environments.&lt;/p&gt;

&lt;p&gt;This method of ‘show-and-practice,’ which involves learners performing each step after its explanation and demonstration, is a constant presence throughout the session. It encourages immediate application of concepts, internalizes knowledge, and promptly addresses arising questions or issues.&lt;/p&gt;

&lt;p&gt;I can quickly gather feedback and see whether all participants have understood everything.&lt;/p&gt;

&lt;p&gt;When questions arise, even I, as an experienced user, occasionally reach the limits of my knowledge. When that happens, I can still give the participants useful methods for finding the answer themselves, be it links to the right place in the documentation or the right search terms. It is important to show that gaps in knowledge, and the mistakes I make myself while using Ansible, are nothing bad, and to foster an open error culture.&lt;/p&gt;

&lt;p&gt;I like to answer questions by showing and explaining code examples from practice. This way, the participants can immediately see that what they learn really has a practical use.&lt;/p&gt;

&lt;p&gt;At the end of the training, there is a summary of the best practices as well as the opportunity to clarify open questions or revisit topics. Afterwards, I send the presentation to the participants and keep the working environment available for a few more days so that they can independently apply and deepen what they have learned and save the code they wrote.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5cQKhMgv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://vg08.met.vgwort.de/na/cdaf40a1a3b944418fc3acdfe6e6561b" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5cQKhMgv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://vg08.met.vgwort.de/na/cdaf40a1a3b944418fc3acdfe6e6561b" alt="" width="1" height="1"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>ansible</category>
      <category>teaching</category>
    </item>
    <item>
      <title>TIL how to create Azure Prometheus datasources with Ansible</title>
      <dc:creator>rndmh3ro</dc:creator>
      <pubDate>Mon, 03 Jul 2023 08:00:00 +0000</pubDate>
      <link>https://dev.to/rndmh3ro/til-how-to-create-azure-prometheus-datasources-with-ansible-3o62</link>
      <guid>https://dev.to/rndmh3ro/til-how-to-create-azure-prometheus-datasources-with-ansible-3o62</guid>
      <description>&lt;p&gt;Since I spent some time today on this, I’d rather write it down. Creating a Prometheus datasource that uses Azure Authentication was not straight forward.&lt;/p&gt;

&lt;p&gt;Here’s the end result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Create a datasource in Grafana
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create prometheus datasource
      community.grafana.grafana_datasource:
        name: prometheus_test
        ds_type: prometheus
        ds_url: https://example.westeurope.prometheus.monitor.azure.com
        url: "https://example.com"
        url_username: foo
        url_password: bar
        enforce_secure_data: true
        additional_json_data:
          azureCredentials:
            authType: clientsecret
            azureCloud: AzureCloud
            clientId: "{{ lookup('cloud.terraform.tf_output', 'clientid', project_path=playbook_dir + '/../terraform/') }}"
            tenantId: "{{ lookup('cloud.terraform.tf_output', 'tenant_id', project_path=playbook_dir + '/../terraform/') }}"
        additional_secure_json_data:
          azureClientSecret: "{{ lookup('cloud.terraform.tf_output', 'password', project_path=playbook_dir + '/../terraform/') }}"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Bonus: I look up the client and tenant ID from the Terraform state.)&lt;/p&gt;

&lt;p&gt;How did I get to this? By creating the datasource by hand and then querying it via the Grafana API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; curl -s 'https://example.com/api/datasources/7' | jq .
{
  "id": 7,
  "uid": "3E8CgP2Vk",
  "orgId": 1,
  "name": "Prometheus",
  "type": "prometheus",
  "typeLogoUrl": "",
  "access": "proxy",
  "url": "https://example.com.westeurope.prometheus.monitor.azure.com",
  "user": "",
  "database": "",
  "basicAuth": false,
  "withCredentials": false,
  "isDefault": false,
  "jsonData": {
    "azureCredentials": {
      "authType": "clientsecret",
      "azureCloud": "AzureCloud",
      "clientId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    },
    "httpMethod": "POST"
  },
  "secureJsonFields": {
    "azureClientSecret": true,
    "basicAuthPassword": true
  },
  "version": 10,
  "readOnly": false
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There you get the &lt;code&gt;jsonData&lt;/code&gt; and &lt;code&gt;secureJsonFields&lt;/code&gt; objects. These are the special, required fields that you have to pass to Ansible as &lt;code&gt;additional_json_data&lt;/code&gt; and &lt;code&gt;additional_secure_json_data&lt;/code&gt; to get exactly the datasource you want.&lt;/p&gt;
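&lt;p&gt;In other words, &lt;code&gt;jsonData&lt;/code&gt; maps onto &lt;code&gt;additional_json_data&lt;/code&gt;, and every &lt;code&gt;true&lt;/code&gt; entry in &lt;code&gt;secureJsonFields&lt;/code&gt; names a key whose (redacted) value you must supply via &lt;code&gt;additional_secure_json_data&lt;/code&gt;. A small sketch of reading those fields from an abbreviated copy of the response above:&lt;/p&gt;

```python
import json

# Abbreviated Grafana datasource response from the curl call above.
response = json.loads("""
{
  "name": "Prometheus",
  "type": "prometheus",
  "url": "https://example.com.westeurope.prometheus.monitor.azure.com",
  "jsonData": {
    "azureCredentials": {
      "authType": "clientsecret",
      "azureCloud": "AzureCloud",
      "clientId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    },
    "httpMethod": "POST"
  },
  "secureJsonFields": {"azureClientSecret": true}
}
""")

# jsonData goes into additional_json_data; secureJsonFields only tells you
# WHICH keys belong in additional_secure_json_data, since values are redacted.
creds = response["jsonData"]["azureCredentials"]
secure_keys = [k for k, v in response["secureJsonFields"].items() if v]
print(creds["authType"], secure_keys)  # clientsecret ['azureClientSecret']
```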

</description>
      <category>todayilearned</category>
      <category>ansible</category>
      <category>azure</category>
      <category>grafana</category>
    </item>
  </channel>
</rss>
