<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jose Roman Martin Gil</title>
    <description>The latest articles on DEV Community by Jose Roman Martin Gil (@rmarting).</description>
    <link>https://dev.to/rmarting</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F536848%2F106e6400-7095-4c47-9bc6-1b719de2a003.png</url>
      <title>DEV Community: Jose Roman Martin Gil</title>
      <link>https://dev.to/rmarting</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rmarting"/>
    <language>en</language>
    <item>
      <title>📛 Improving a GitHub Repo (II)!</title>
      <dc:creator>Jose Roman Martin Gil</dc:creator>
      <pubDate>Tue, 20 Jun 2023 07:00:00 +0000</pubDate>
      <link>https://dev.to/rmarting/improving-a-github-repo-ii-17nb</link>
      <guid>https://dev.to/rmarting/improving-a-github-repo-ii-17nb</guid>
      <description>&lt;p&gt;My first post of &lt;a href="https://blog.jromanmartin.io/2023/06/12/Improving-a-gh-repository.html"&gt;📛 Improving a GitHub Repo&lt;/a&gt; describes many good things to add in any GitHub repository to be more productive and professional. However, that stuff can be hard to do every time a new repository is created, and we can forget to add something great. I found a way to accelerate this process and also do not forget to add anything great: &lt;a href="https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-template-repository"&gt;GitHub Repository templates&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A GitHub template repository is the best way to replicate a standard structure, including folders, documentation, workflows, branches, and any file required to set up a new project. Using this pattern you can homogenize the structure of every repository in your organization, or of your own projects, easily and save a lot of time. If you need to standardize your projects, or to create many projects on demand, a template repository is definitely your tool.&lt;/p&gt;

&lt;p&gt;In summary, the greatest benefits I found in using a repository template are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;⌛ Spend less time repeating code&lt;/li&gt;
&lt;li&gt;🌟 Focus on building new things&lt;/li&gt;
&lt;li&gt;🦾 Less manual configuration&lt;/li&gt;
&lt;li&gt;📝 Sharing boilerplate code across the code base&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And, the main features of a repository template are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It copies all the repository files into a brand-new repository&lt;/li&gt;
&lt;li&gt;Every template gets a dedicated URL endpoint called &lt;code&gt;/generate&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;It can be shared across your organization or with other GitHub users&lt;/li&gt;
&lt;/ul&gt;
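&lt;p&gt;As a quick sketch of those features from the command line (the template name is the one from this post; the new repository name is illustrative), every template exposes a &lt;code&gt;/generate&lt;/code&gt; page in the web UI, and the GitHub CLI can create a repository from a template in a single command:&lt;/p&gt;

```shell
# Hypothetical example; the new repository name is illustrative.
TEMPLATE="rmarting/gh-repo-template"

# Every template exposes a /generate page in the web UI:
echo "https://github.com/${TEMPLATE}/generate"
# prints https://github.com/rmarting/gh-repo-template/generate

# With the GitHub CLI (requires authentication), the same action is:
# gh repo create my-new-service --template "${TEMPLATE}" --public --clone
```
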

&lt;h2&gt;My own repository template&lt;/h2&gt;

&lt;p&gt;I created my own GitHub template repository here: &lt;a href="https://github.com/rmarting/gh-repo-template"&gt;https://github.com/rmarting/gh-repo-template&lt;/a&gt;. It includes all the things described in my previous &lt;a href="https://blog.jromanmartin.io/2023/06/12/Improving-a-gh-repository.html"&gt;post&lt;/a&gt;, plus new things added over time.&lt;/p&gt;

&lt;p&gt;My template repository includes things such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initial content files aligned with the common patterns in any Open Source project: Contribution guide, Code of Conduct, contributors, …&lt;/li&gt;
&lt;li&gt;GitHub templates to report issues or open Pull Requests.&lt;/li&gt;
&lt;li&gt;Standard badges to summarize the repository.&lt;/li&gt;
&lt;li&gt;Standard workflows to release versions, or to implement Continuous Integration pipelines.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, creating and setting up a new repository takes only a few seconds and a few steps. Amazing!!!&lt;/p&gt;

&lt;p&gt;Do you have ideas or comments about how to improve a template repository? I am looking forward to hearing from you, either with contributions to my template repo or with comments on this post.&lt;/p&gt;

&lt;p&gt;🤖🚩 Happy creation of new projects!!! 🤖🚩&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/bitmoji/happy-coding.avif"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nGbPhloC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://blog.jromanmartin.io/images/bitmoji/happy-coding.avif" alt="" title="Happy coding!!!" width="398" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>community</category>
      <category>github</category>
      <category>git</category>
      <category>productivity</category>
    </item>
    <item>
      <title>🖭 How to resize a virtual disk</title>
      <dc:creator>Jose Roman Martin Gil</dc:creator>
      <pubDate>Fri, 16 Jun 2023 07:00:00 +0000</pubDate>
      <link>https://dev.to/rmarting/how-to-resize-a-virtual-disk-3a6g</link>
      <guid>https://dev.to/rmarting/how-to-resize-a-virtual-disk-3a6g</guid>
      <description>&lt;p&gt;If you work with VMs it is very common that sometimes you need more space, but your VMs were defined with an estimated size. I started to use Virtual Machine Manager to manage my VMs when I joined to Red Hat (sorry but in my previous life I usually used Oracle VM VirtualBox) and sometimes I need to resize my image files but I didn’t know how to do it.&lt;/p&gt;

&lt;p&gt;Thanks to &lt;a href="https://github.com/oarribas"&gt;Oscar Arribas Arribas&lt;/a&gt; I learned how to do it using a few &lt;code&gt;virt-xxx&lt;/code&gt; commands. There are certainly other commands/steps/alternatives, but this way works well for me.&lt;/p&gt;

&lt;h2&gt;Step 0️⃣ - Checking current VM disk size&lt;/h2&gt;

&lt;p&gt;Inside your VM you can check the size of each disk with the &lt;code&gt;df&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[rhmw@f38mw01 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        4.0M     0  4.0M   0% /dev
tmpfs           977M     0  977M   0% /dev/shm
tmpfs           391M  1.3M  390M   1% /run
/dev/vda3        19G  4.7G   14G  26% /
tmpfs           977M   40K  977M   1% /tmp
/dev/vda3        19G  4.7G   14G  26% /home
/dev/vda2       974M  257M  650M  29% /boot
tmpfs           196M   56K  196M   1% /run/user/42
tmpfs           196M   40K  196M   1% /run/user/1000

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here the disk backing &lt;code&gt;home&lt;/code&gt; has 20G allocated in total. I would like to extend it to 40G.&lt;/p&gt;

&lt;h2&gt;Step 1️⃣ - Creating a new disk image&lt;/h2&gt;

&lt;p&gt;Your VM must be stopped before you start resizing it into a new disk image with the desired size.&lt;/p&gt;

&lt;p&gt;We can create a new disk using the &lt;code&gt;qemu-img&lt;/code&gt; tool, something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on 🎩 ❯ qemu-img create -f qcow2 f38mw01-resized.qcow2 40G
Formatting 'f38mw01-resized.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=42949672960 lazy_refcounts=off refcount_bits=16

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, create the new image file from the Storage tab in the &lt;code&gt;virt-manager&lt;/code&gt; Connection Details dialog (Edit -&amp;gt; Connection Details):&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/2023/06/vm/vm-resize.avif"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KZUy4LCH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://blog.jromanmartin.io/images/2023/06/vm/vm-resize.avif" alt="" title="New disk image with more space" width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Step 2️⃣ - Renaming the old disk image&lt;/h2&gt;

&lt;p&gt;Rename the old image file as a backup (it may be needed for a rollback):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mv f38mw01.qcow2 f38mw01.qcow2.backup

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also list the file systems in the old image file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on 🎩 ❯ sudo virt-filesystems --long -h --all -a f38mw01.qcow2.backup
Name                                      Type        VFS      Label                  MBR  Size  Parent
/dev/sda1                                 filesystem  unknown  -                      -    1.0M  -
/dev/sda2                                 filesystem  ext4     -                      -    973M  -
/dev/sda3                                 filesystem  btrfs    fedora_localhost-live  -    19G   -
btrfsvol:/dev/sda3/home                   filesystem  btrfs    fedora_localhost-live  -    -     -
btrfsvol:/dev/sda3/root                   filesystem  btrfs    fedora_localhost-live  -    -     -
btrfsvol:/dev/sda3/root/var/lib/machines  filesystem  btrfs    fedora_localhost-live  -    -     -
/dev/sda1                                 partition   -        -                      -    1.0M  /dev/sda
/dev/sda2                                 partition   -        -                      -    1.0G  /dev/sda
/dev/sda3                                 partition   -        -                      -    19G   /dev/sda
/dev/sda                                  device      -        -                      -    20G   -

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Step 3️⃣ - Truncating the new disk image&lt;/h2&gt;

&lt;p&gt;Resize the new image file to match the size of the old one (&lt;code&gt;truncate -r&lt;/code&gt;), and then extend it with the additional space:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on 🎩 ❯ sudo truncate -r f38mw01.qcow2.backup f38mw01-resized.qcow2
on 🎩 ❯ sudo truncate -s +20G f38mw01-resized.qcow2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
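&lt;p&gt;To see what those two commands do, here is a small self-contained sketch using tiny plain files instead of qcow2 images (file names and sizes are illustrative): &lt;code&gt;truncate -r&lt;/code&gt; resizes the target file to match a reference file, and &lt;code&gt;truncate -s +N&lt;/code&gt; grows it by N on top of that.&lt;/p&gt;

```shell
# Illustrative stand-ins for the backup image and the new image:
truncate -s 5M old.img   # pretend this is f38mw01.qcow2.backup
truncate -s 1M new.img   # pretend this is f38mw01-resized.qcow2

truncate -r old.img new.img    # resize new.img to the size of old.img
stat -c '%s' new.img           # prints 5242880 (5M)

truncate -s +2M new.img        # then grow it by a further 2M
stat -c '%s' new.img           # prints 7340032 (7M)

rm -f old.img new.img
```
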



&lt;h2&gt;Step 4️⃣ - Expanding the new disk image&lt;/h2&gt;

&lt;p&gt;Expand the new image file using the old image file as the base. In this step I am expanding the partition mounted for the &lt;code&gt;home&lt;/code&gt; folder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on 🎩 ❯ sudo virt-resize --expand /dev/sda3 f38mw01.qcow2.backup f38mw01-resized.qcow2
[0.0] Examining f38mw01.qcow2.backup
**********

Summary of changes:

virt-resize: /dev/sda1: This partition will be left alone.

virt-resize: /dev/sda2: This partition will be left alone.

virt-resize: /dev/sda3: This partition will be resized from 19.0G to 39.0G. 
 The filesystem btrfs on /dev/sda3 will be expanded using the 
‘btrfs-filesystem-resize’ method.

**********
[2.6] Setting up initial partition table on f38mw01-resized.qcow2
[13.4] Copying /dev/sda1
[13.4] Copying /dev/sda2
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ --:--
[15.8] Copying /dev/sda3
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
[48.0] Expanding /dev/sda3 using the ‘btrfs-filesystem-resize’ method

virt-resize: Resize operation completed with no errors. Before deleting 
the old disk, carefully check that the resized disk boots and works 
correctly.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Step 5️⃣ - Starting the VM with the new disk image&lt;/h2&gt;

&lt;p&gt;Rename the new disk image to the original name used by the VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on 🎩 ❯ mv f38mw01-resized.qcow2 f38mw01.qcow2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start the VM and check that our &lt;code&gt;home&lt;/code&gt; has more space:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[rhmw@f38mw01 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        4.0M     0  4.0M   0% /dev
tmpfs           977M     0  977M   0% /dev/shm
tmpfs           391M  1.3M  390M   1% /run
/dev/vda3        39G  4.7G   34G  13% /
tmpfs           977M   40K  977M   1% /tmp
/dev/vda2       974M  257M  650M  29% /boot
/dev/vda3        39G  4.7G   34G  13% /home
tmpfs           196M   56K  196M   1% /run/user/42
tmpfs           196M   40K  196M   1% /run/user/1000

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;/dev/vda3&lt;/code&gt; partition is now 39G, with 34G available (in step 0, the size was 19G). Great!!!&lt;/p&gt;

&lt;h2&gt;Bonus Track 💡 - Resizing Microsoft Windows VMs&lt;/h2&gt;

&lt;p&gt;I know, I know what you are thinking 🤔 … this stuff works because I am using a Linux OS 😇. However, this process also works for Windows VMs.&lt;/p&gt;

&lt;p&gt;Here is an example of a Windows 10 VM with a 40G hard disk, extended to 50G:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/2023/06/vm/vm-win10-40g.avif"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KyLHjNte--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://blog.jromanmartin.io/images/2023/06/vm/vm-win10-40g.avif" alt="" title="40G in my hard disk!" width="359" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The process is exactly the same:&lt;/p&gt;

&lt;p&gt;Create new disk image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on 🎩 ❯ qemu-img create -f qcow2 win10-resized.qcow2 50G
Formatting 'win10-resized.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=53687091200 lazy_refcounts=off refcount_bits=16

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Back up the original disk image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on 🎩 ❯ mv win10.qcow2 win10.qcow2.backup

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the file systems of the old image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on 🎩 ❯ sudo virt-filesystems --long -h --all -a win10.qcow2.backup 
Name       Type        VFS   Label            MBR  Size  Parent
/dev/sda1  filesystem  ntfs  System Reserved  -    579M  -
/dev/sda2  filesystem  ntfs  -                -    39G   -
/dev/sda1  partition   -     -                07   579M  /dev/sda
/dev/sda2  partition   -     -                07   39G   /dev/sda
/dev/sda   device      -     -                -    40G   -

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Truncate the new disk image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on 🎩 ❯ sudo truncate -r win10.qcow2.backup win10-resized.qcow2 
on 🎩 ❯ sudo truncate -s +10G win10-resized.qcow2 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expand the new disk image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on 🎩 ❯ sudo virt-resize --expand /dev/sda2 win10.qcow2.backup win10-resized.qcow2 
[0.0] Examining win10.qcow2.backup
**********

Summary of changes:

virt-resize: /dev/sda1: This partition will be left alone.

virt-resize: /dev/sda2: This partition will be resized from 39.4G to 49.4G. 
 The filesystem ntfs on /dev/sda2 will be expanded using the 
‘ntfsresize’ method.

**********
[1.9] Setting up initial partition table on win10-resized.qcow2
[2.8] Copying /dev/sda1
[3.6] Copying /dev/sda2
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
[55.7] Expanding /dev/sda2 using the ‘ntfsresize’ method

virt-resize: Resize operation completed with no errors. Before deleting 
the old disk, carefully check that the resized disk boots and works 
correctly.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Rename the new disk image to the original name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on 🎩 ❯ mv win10-resized.qcow2 win10.qcow2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start the VM and check the new disk size:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/2023/06/vm/vm-win10-50g.avif"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f7AjOzVU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://blog.jromanmartin.io/images/2023/06/vm/vm-win10-50g.avif" alt="" title="Now 50G in my hard disk!" width="366" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🚩 Happy resizing!!! 🤖&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/bitmoji/the-end.avif"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MuoFnar_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://blog.jromanmartin.io/images/bitmoji/the-end.avif" alt="" title="That's all!!!" width="398" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>community</category>
      <category>productivity</category>
      <category>tools</category>
      <category>howto</category>
    </item>
    <item>
      <title>📛 Improving a GitHub Repo!</title>
      <dc:creator>Jose Roman Martin Gil</dc:creator>
      <pubDate>Mon, 12 Jun 2023 07:00:00 +0000</pubDate>
      <link>https://dev.to/rmarting/improving-a-github-repo-2o55</link>
      <guid>https://dev.to/rmarting/improving-a-github-repo-2o55</guid>
      <description>&lt;p&gt;I have been using &lt;a href="https://github.com"&gt;GitHub&lt;/a&gt; for a long time and I spent time on a daily basis reviewing repos in the Open Source space. One of the most important things , from my point of view, is to get a good overview of the repository, a good documentation, but also good highlights, such as releases, status of the project, Changelogs, Contribution Guides, emojis (&lt;a href="https://blog.jromanmartin.io/2020/09/28/why-i-use-emoji-in-my-git-commits.html"&gt;why not?&lt;/a&gt;) … so I can get faster a good summary of the repository. This is not easy and there are many different ways to do it, but I found some of them very easy to add in any repository.&lt;/p&gt;

&lt;p&gt;This post covers two of these mechanisms to improve any GitHub Repository:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📛 Repository Badges&lt;/li&gt;
&lt;li&gt;✅ Changelogs and 🤖🚩automatic releasing process&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;📛 Repository Badges&lt;/h2&gt;

&lt;p&gt;What does a repo with badges look like? Something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/2023/06/github/gh-repo-badges.avif"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HJcw_vfJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://blog.jromanmartin.io/images/2023/06/github/gh-repo-badges.avif" alt="" title="GitHub repo with badges" width="461" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nice 🫶, right?&lt;/p&gt;

&lt;p&gt;Badges are an easy way to summarize a repo with information about topics such as builds, test results, license, pipelines or workflows, versions, … This quality metadata comes from many different resources, so while you are browsing you get all this information in a single view. Incredible!&lt;/p&gt;

&lt;p&gt;I found a simple way to integrate almost any badge in my repository … &lt;a href="https://shields.io/"&gt;Shields.io&lt;/a&gt;. It is a service providing badges in different formats to integrate into GitHub readme files. This service supports a bunch of continuous integration services, package registries, distributions, app stores, social networks, code coverage services, and code analysis services (anything else? 🤷🏽‍♀️).&lt;/p&gt;

&lt;p&gt;In short, using the web site you can customize your badge to your own requirements and 3rd-party services, getting a snippet to add to your GitHub readme easily.&lt;/p&gt;

&lt;p&gt;For example, the previous image is rendered using the following entries in the readme file of my blog site repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;![License](https://img.shields.io/github/license/rmarting/rmarting.github.io?style=plastic)
![Main Lang](https://img.shields.io/github/languages/top/rmarting/rmarting.github.io)
![Languages](https://img.shields.io/github/languages/count/rmarting/rmarting.github.io)
![Last Commit](https://img.shields.io/github/last-commit/rmarting/rmarting.github.io)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Easy ✅, and powerful 💪! So, don’t forget to add badges to your repo to help me, and others 🤗!&lt;/p&gt;

&lt;h2&gt;✅ Changelogs and 🤖🚩 automatic releasing process&lt;/h2&gt;

&lt;p&gt;A changelog, as a comprehensive and up-to-date file, is crucial for effective project management and collaboration. It serves as a documented record of all the notable changes, enhancements, and bug fixes made to your software over time. It not only provides transparency and accountability but also facilitates communication among team members and external contributors. This file enables users and developers to easily track the evolution of the project, understand the latest features and improvements, and quickly identify any potential issues or compatibility concerns.&lt;/p&gt;

&lt;p&gt;Getting all these benefits requires updating that file regularly, usually after releasing a new version or iteration of our software. But how do we track all the changes between versions? Who should do it? When? … It can be tedious if we have to do it manually every time … we can forget to add something, or forget to update the file at all.&lt;/p&gt;

&lt;p&gt;As a fan of …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.redbubble.com/i/sticker/AUTOMATE-ALL-THE-THINGS-by-antonwadstrom/29760692.EJUG5"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ckbtS9A2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://blog.jromanmartin.io/images/2023/06/github/automate-all-the-things.avif" alt="" title="Automate all the things (Sticker)" width="795" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is a way to automatically update the changelog file every time a new version is released. This blog post summarizes this process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1️⃣ - Create your Changelog file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a file, usually called &lt;code&gt;CHANGELOG.md&lt;/code&gt;, with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To delve deeper into the significance of changelog files and learn about best practices for creating and maintaining them, I recommend checking out &lt;a href="https://keepachangelog.com/"&gt;Keep a Changelog&lt;/a&gt;. This resource offers a comprehensive guide and industry-accepted standards for crafting informative and well-structured changelogs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2️⃣ - Use a Release workflow to publish new releases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/marketplace/actions/release-drafter"&gt;Release Drafter GitHub Action&lt;/a&gt; is an incredible GitHub action to automate a new release of the repository. The action is initially designed to draft a new release, but it is also valid to release automatically the version. In my case, I will automatically release the version as soon as a new tag is pushed.&lt;/p&gt;

&lt;p&gt;The following &lt;code&gt;release-drafter.yml&lt;/code&gt; file inside the &lt;code&gt;.github/workflows&lt;/code&gt; folder will publish a new release after a new tag is pushed into the repository. The tag must be aligned with &lt;a href="https://semver.org"&gt;Semantic Versioning&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: 🔖 Release Drafter 🔖

on:
  push:
    tags:
      - v[0-9]+.[0-9]+.[0-9]+

permissions:
  contents: read

jobs:
  update_release_draft:
    name: Release drafter
    runs-on: ubuntu-latest
    permissions:
      # write permission is required to create a github release
      contents: write

    steps:
      - name: Update Release Draft
        uses: release-drafter/release-drafter@v5
        with:
          publish: true
          prerelease: false
        env:
          # Instead of GITHUB_TOKEN Ref: https://github.com/stefanzweifel/changelog-updater-action/discussions/30
          GITHUB_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;publish: true&lt;/code&gt; attribute publishes the release as final, because the &lt;code&gt;prerelease&lt;/code&gt; attribute is set to &lt;code&gt;false&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3️⃣ - Format the Release content&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The content of the release will include information coming from the different pull requests, issues, and commits. This information can be included automatically in the release notes using different patterns. These patterns are described in the &lt;code&gt;release-drafter.yml&lt;/code&gt; file inside the &lt;code&gt;.github&lt;/code&gt; folder:&lt;/p&gt;

&lt;p&gt;The following is a full example using different categories of information to add to the release notes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This release drafter follows the conventions from https://keepachangelog.com

name-template: 'v$RESOLVED_VERSION'
tag-template: 'v$RESOLVED_VERSION'
template: |
  ## What Changed 👀

  $CHANGES

  **Full Changelog** : https://github.com/$OWNER/$REPOSITORY/compare/$PREVIOUS_TAG...v$RESOLVED_VERSION
categories:
  - title: 🚀 Features
    labels:
      - feature
      - enhancement
  - title: 🐛 Bug Fixes
    labels:
      - fix
      - bug
  - title: ⚠️ Changes
    labels:
      - changed
  - title: ⛔️ Deprecated
    labels:
      - deprecated
  - title: 🗑 Removed
    labels:
      - removed
  - title: 🔐 Security
    labels:
      - security
  - title: 📄 Documentation
    labels:
      - docs
      - documentation      
  - title: 🧩 Dependency Updates
    labels:
      - deps
      - dependencies
    collapse-after: 5

change-template: '* $TITLE (#$NUMBER)'
change-title-escapes: '\&amp;lt;*_&amp;amp;' # You can add # and @ to disable mentions, and add ` to disable code blocks.

exclude-labels:
  - skip-changelog

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4️⃣ - Update Changelog file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After a new version is released, we want to update the changelog with the latest changes, as we are doing with the release notes. We can automate it using another amazing GitHub Action - &lt;a href="https://github.com/marketplace/actions/changelog-updater"&gt;Changelog Updater&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This action can be integrated into another workflow (e.g. &lt;code&gt;update-changelog.yml&lt;/code&gt; inside the &lt;code&gt;.github/workflows&lt;/code&gt; folder). The workflow can look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: 📄 Update Changelog 📄

on:
  release:
    types: [released]

jobs:
  update:
    name: Update Changelog
    runs-on: ubuntu-latest
    permissions:
      # Give the default GITHUB_TOKEN write permission to commit and push the 
      # updated CHANGELOG back to the repository.
      # https://github.blog/changelog/2023-02-02-github-actions-updating-the-default-github_token-permissions-to-read-only/
      contents: write    

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Update Changelog
        uses: stefanzweifel/changelog-updater-action@v1
        with:
          latest-version: ${{ github.event.release.name }}
          release-notes: ${{ github.event.release.body }}

      - name: Commit updated Changelog
        uses: stefanzweifel/git-auto-commit-action@v4
        with:
          branch: main
          commit_message: '🔖 Update changelog'
          file_pattern: CHANGELOG.md

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, this workflow starts when a new release is published (&lt;code&gt;types: [released]&lt;/code&gt;), adds the changes since the previous release to the changelog, and commits the update to the &lt;code&gt;main&lt;/code&gt; branch of our repo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5️⃣ - Linking release and update changelog workflows&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is an issue reported &lt;a href="https://github.com/stefanzweifel/changelog-updater-action/discussions/30"&gt;here&lt;/a&gt; about how to automatically trigger the update changelog workflow from the release workflow. The workaround is to add a new secret (e.g. &lt;code&gt;PERSONAL_ACCESS_TOKEN&lt;/code&gt;) to your repo:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/2023/06/github/gh-secrets.avif"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8YgAVbKW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://blog.jromanmartin.io/images/2023/06/github/gh-secrets.avif" alt="" title="GitHub Repo secrets" width="797" height="653"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6️⃣ - Release a new version&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now it is very simple: just follow your development workflow, using your pull-request life cycle and the labels of your own repository, and then tag a new version when you are ready.&lt;/p&gt;

&lt;p&gt;Push it into your repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git tag v1.2.1 -m "Version 1.2.1"
git push origin v1.2.1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The workflows run as expected:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/2023/06/github/gh-actions.avif"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--md-3NGo8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://blog.jromanmartin.io/images/2023/06/github/gh-actions.avif" alt="" title="Workflows executed" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;… a new release is created, including the notes:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/2023/06/github/gh-new-release.avif"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G8pTwVn6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://blog.jromanmartin.io/images/2023/06/github/gh-new-release.avif" alt="" title="New GitHub Release" width="800" height="625"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;… and the changelog is updated too:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/2023/06/github/gh-changelog.avif"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UydO1M8P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://blog.jromanmartin.io/images/2023/06/github/gh-changelog.avif" alt="" title="Changelog updated" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is …&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/bitmoji/super-awesome.avif"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S5I3Yde9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://blog.jromanmartin.io/images/bitmoji/super-awesome.avif" alt="" title="Super Awesome" width="398" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🤖🚩 Happy automating releasing!!! 🤖🚩&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;p&gt;This blog post is my own summary of this process, but it is based on the content and experience of others, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://tiagomichaelsousa.dev/articles/stop-writing-your-changelogs-manually"&gt;Stop writing your changelogs manually&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/marketplace/actions/release-drafter"&gt;Release Drafter GitHub Action&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/marketplace/actions/changelog-updater"&gt;Changelog Update GitHub Action&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My kudos ❤️ to all of them!!!&lt;/p&gt;

</description>
      <category>community</category>
      <category>github</category>
      <category>git</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Cloud Native CICD Pipelines in OpenShift</title>
      <dc:creator>Jose Roman Martin Gil</dc:creator>
      <pubDate>Fri, 01 Apr 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/rmarting/cloud-native-cicd-pipelines-in-openshift-4eeg</link>
      <guid>https://dev.to/rmarting/cloud-native-cicd-pipelines-in-openshift-4eeg</guid>
      <description>&lt;h2&gt;
  
  
  Cloud Native CICD Pipelines in OpenShift
&lt;/h2&gt;

&lt;p&gt;My first &lt;a href="https://openpracticelibrary.com/practice/continuous-integration/"&gt;Continuous Integration&lt;/a&gt; and &lt;a href="https://openpracticelibrary.com/practice/continuous-delivery/"&gt;Continuous Delivery&lt;/a&gt; pipelines (CICD from now on) were created with &lt;a href="https://en.wikipedia.org/wiki/Hudson_(software)"&gt;Hudson&lt;/a&gt; (I know, I know!! I am very old 👴 in this space), and after that with &lt;a href="https://www.jenkins.io/"&gt;Jenkins&lt;/a&gt; for a long time. During this long period I used them (and similar tools) to build, test, package, and deploy many different kinds of applications (monoliths, SOA services, microservices, standalone apps, …) onto many different kinds of platforms (&lt;a href="https://tomcat.apache.org/"&gt;Tomcat&lt;/a&gt;, &lt;a href="https://www.redhat.com/en/technologies/jboss-middleware/application-platform"&gt;Red Hat JBoss Enterprise Application Platform&lt;/a&gt;, &lt;a href="https://www.oracle.com/es/java/weblogic/"&gt;WebLogic&lt;/a&gt;, …) and, of course, on container platforms such as &lt;a href="https://www.redhat.com/en/technologies/cloud-computing/openshift"&gt;Red Hat OpenShift&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, in cloud environments with cloud-native applications, I sometimes found complexity that was not easy to deal with. Basically, these tools were designed to run on virtual machines: they required IT operations for maintenance, suffered conflicts between teams or projects over shared plugins or extensions, had no native interoperability with Kubernetes resources, …&lt;/p&gt;

&lt;p&gt;… and nowadays there is a new player in this space to improve my CICD pipelines in the new cloud-native world of containers, Kubernetes, and OpenShift. This player is &lt;a href="https://tekton.dev/"&gt;Tekton&lt;/a&gt;, or &lt;a href="https://cloud.redhat.com/learn/topics/ci-cd"&gt;Red Hat OpenShift Pipelines&lt;/a&gt; as the enterprise version for OpenShift.&lt;/p&gt;

&lt;p&gt;Tekton is a cloud-native solution for building CICD systems, providing a set of building blocks, components, and an extensive catalog (&lt;a href="https://hub.tekton.dev/"&gt;Tekton Hub&lt;/a&gt;) with great resources to use, making it a complete ecosystem. It is part of the &lt;a href="https://cd.foundation/"&gt;CD Foundation&lt;/a&gt; and has a large, very active community.&lt;/p&gt;

&lt;p&gt;As Tekton is installed as a &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/"&gt;Kubernetes Operator&lt;/a&gt;, providing &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;Custom Resource Definitions&lt;/a&gt; to define the building blocks, it is very easy to create them and reuse them in pipelines. Like other Kubernetes or OpenShift objects, Tekton CRDs are first-class citizens, so many of the processes used to manage your OpenShift platform are valid for them too. For example, as a fan of the &lt;a href="https://openpracticelibrary.com/practice/everything-as-code/"&gt;Everything as Code&lt;/a&gt; practice, I can define my CICD pipelines as code and store them in a Git repository.&lt;/p&gt;

&lt;p&gt;Tekton uses the services provided by OpenShift, so it is designed for containers and scalability. Pipelines and tasks are executed on demand in containers, so it is easy to scale them. We, as CICD designers, don’t need to deal with the platform or infrastructure: OpenShift provides the services, and Tekton the objects to design the flow of our CICD pipeline.&lt;/p&gt;

&lt;p&gt;With that integration with OpenShift services, the image-building processes are now truly native, and we can use any of the available technologies, such as &lt;a href="https://github.com/openshift/source-to-image"&gt;source-to-image&lt;/a&gt;, &lt;a href="https://buildah.io/"&gt;buildah&lt;/a&gt;, &lt;a href="https://github.com/GoogleContainerTools/kaniko"&gt;kaniko&lt;/a&gt;, &lt;a href="https://github.com/GoogleContainerTools/jib"&gt;jib&lt;/a&gt;, … No more creating a custom Jenkins agent image just to build our application.&lt;/p&gt;

&lt;p&gt;The same goes for integrating the deployment processes of your application, as you can interact natively with the platform … although in this scenario I prefer to move Continuous Delivery to the &lt;a href="https://openpracticelibrary.com/practice/gitops/"&gt;GitOps&lt;/a&gt; approach with another amazing tool, &lt;a href="https://github.com/argoproj/argo-cd"&gt;ArgoCD&lt;/a&gt; (but that is another story, and another blog post 😉).&lt;/p&gt;

&lt;p&gt;Last, but not least, Tekton provides a set of amazing tools to use in your favorite IDE, on the command line, and so on, to accelerate adoption and make your life easier:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://tekton.dev/docs/cli/"&gt;&lt;code&gt;tkn&lt;/code&gt; command line interface&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/redhat-developer/vscode-tekton"&gt;Tekton Pipelines Extension for VSCode&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://plugins.jetbrains.com/plugin/14096-tekton-pipelines-by-red-hat"&gt;Tekton Pipelines by Red Hat for IntelliJ&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
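
&lt;p&gt;As a quick sketch of the everyday workflow with the &lt;code&gt;tkn&lt;/code&gt; CLI (the pipeline name used here is hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List the tasks and pipelines available in the current namespace
tkn task list
tkn pipeline list

# Start a pipeline and follow its logs
tkn pipeline start my-pipeline --showlog

# Inspect previous executions
tkn pipelinerun list
tkn pipelinerun logs --last

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
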

&lt;p&gt;So, let’s walk through the main components of this amazing project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tekton Components
&lt;/h2&gt;

&lt;p&gt;Tekton provides a set of different components to design and build your pipelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tasks&lt;/li&gt;
&lt;li&gt;Pipelines&lt;/li&gt;
&lt;li&gt;Workspaces&lt;/li&gt;
&lt;li&gt;Triggers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are others too, but these ones are the base.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tasks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://tekton.dev/docs/pipelines/tasks/"&gt;&lt;code&gt;Tasks&lt;/code&gt;&lt;/a&gt; is a collection of &lt;code&gt;Steps&lt;/code&gt; that you define and arrange in a specific order of execution as part of your continuous integration flow.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Tasks&lt;/code&gt; can have more than one &lt;code&gt;step&lt;/code&gt;, allowing you to specialize the task with more detailed steps. The steps run in the order in which they are defined in the steps array.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;Task&lt;/code&gt; is available within a specific namespace, while a &lt;code&gt;ClusterTask&lt;/code&gt; is available across the entire cluster.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;Task&lt;/code&gt; is executed as a Pod on your OpenShift cluster.&lt;/p&gt;

&lt;p&gt;This is the typical &lt;code&gt;Hello World&lt;/code&gt; Task.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-task
spec:
  steps:
    - name: say-hello
      image: registry.redhat.io/ubi7/ubi-minimal
      command: ['/bin/bash']
      args: ['-c', 'echo Hello World']

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While a &lt;code&gt;Task&lt;/code&gt; is a definition, the execution of the task, with its results and outputs, is a &lt;code&gt;TaskRun&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;An execution of the previous task looks similar to this (&lt;em&gt;simplified, with some fields omitted&lt;/em&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: hello-task-run-
  name: hello-task-run-9d8hs
  uid: f3c8d81b-3e8d-4ad5-a01b-bf9b147485f6
  creationTimestamp: '2022-03-31T16:34:41Z'
  namespace: pipelines-demo
  labels:
    app.kubernetes.io/managed-by: tekton-pipelines
    tekton.dev/task: hello-task
spec:
  taskRef:
    kind: Task
    name: hello-task
status:
  completionTime: '2022-03-31T16:34:47Z'
  conditions:
    - lastTransitionTime: '2022-03-31T16:34:47Z'
      message: All Steps have completed executing
      reason: Succeeded
      status: 'True'
      type: Succeeded
  podName: hello-task-run-9d8hs-pod-kqcjv
  startTime: '2022-03-31T16:34:41Z'
  steps:
    - container: step-say-hello
      imageID: &amp;gt;-
        registry.redhat.io/ubi7/ubi-minimal@sha256:700ec6f27ae8380ca1a3fcab19b5630d5af397c980628fa1a207bf9704d88eb0
      name: say-hello
      terminated:
        containerID: cri-o://346b671912a63a98b310f0f06f0bcd9d9e3fab3b24a75246aed4921863b1d146
        exitCode: 0
        finishedAt: '2022-03-31T16:34:46Z'
        reason: Completed
        startedAt: '2022-03-31T16:34:46Z'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OpenShift provides a great dashboard to browse and inspect Tasks and TaskRuns:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/ocp-pipelines/ocp-tasks-dashboard.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C64qtOda--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://blog.jromanmartin.io/images/ocp-pipelines/ocp-tasks-dashboard.png" alt="" title="OpenShift Tasks Dashboard" width="880" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pipelines
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://tekton.dev/docs/pipelines/pipelines/"&gt;&lt;code&gt;Pipelines&lt;/code&gt;&lt;/a&gt; are a collection of &lt;code&gt;Tasks&lt;/code&gt; that you define and arrange in a specific order of execution as part of your continuous integration flow. In fact, tasks should do one single thing so you can reuse them across pipelines or even within a single pipeline.&lt;/p&gt;

&lt;p&gt;You can configure various execution conditions to fit your business needs.&lt;/p&gt;

&lt;p&gt;This &lt;a href="https://github.com/rmarting/ocp-pipelines-demo/blob/main/05-say-things-in-order-pipeline.yaml"&gt;example&lt;/a&gt; gives you a general view of a pipeline. The pipeline can be represented as:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/ocp-pipelines/ocp-pipeline-flow.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kkJaBZKs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://blog.jromanmartin.io/images/ocp-pipelines/ocp-pipeline-flow.png" alt="" title="Pipeline Flow" width="428" height="102"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each &lt;code&gt;Task&lt;/code&gt; in a &lt;code&gt;Pipeline&lt;/code&gt; executes as a &lt;code&gt;Pod&lt;/code&gt; on your OpenShift cluster.&lt;/p&gt;

&lt;p&gt;While a &lt;code&gt;Pipeline&lt;/code&gt; is a definition, the execution of the pipeline, with its results and outputs, is a &lt;code&gt;PipelineRun&lt;/code&gt;.&lt;/p&gt;
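
&lt;p&gt;As a minimal sketch, a &lt;code&gt;Pipeline&lt;/code&gt; that reuses the &lt;code&gt;hello-task&lt;/code&gt; shown before could look like this (the pipeline name is illustrative), where &lt;code&gt;runAfter&lt;/code&gt; defines the execution order:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-pipeline
spec:
  tasks:
    # Both entries reference the same Task definition
    - name: first-hello
      taskRef:
        name: hello-task
    - name: second-hello
      taskRef:
        name: hello-task
      # Run only after first-hello has finished
      runAfter:
        - first-hello

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
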

&lt;p&gt;OpenShift provides a great dashboard to browse and inspect Pipelines and PipelineRuns:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/ocp-pipelines/ocp-pipelines-dashboard.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_U2_Egzb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://blog.jromanmartin.io/images/ocp-pipelines/ocp-pipelines-dashboard.png" alt="" title="OpenShift Pipelines Dashboard" width="880" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Workspaces
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://tekton.dev/docs/pipelines/workspaces/"&gt;&lt;code&gt;Workspaces&lt;/code&gt;&lt;/a&gt; allow &lt;code&gt;Tasks&lt;/code&gt; to declare parts of the filesystem that need to be provided at runtime by &lt;code&gt;TaskRuns&lt;/code&gt;. The main use cases are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storage of inputs and/or outputs&lt;/li&gt;
&lt;li&gt;Sharing data among &lt;code&gt;Tasks&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Mount points for configurations held in &lt;code&gt;Secrets&lt;/code&gt; or &lt;code&gt;ConfigMaps&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;A cache of build artifacts that speed up jobs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;Workspaces&lt;/code&gt; are similar to &lt;code&gt;Volumes&lt;/code&gt; except that they allow a &lt;code&gt;Task&lt;/code&gt; author to defer to users and their &lt;code&gt;TaskRuns&lt;/code&gt; when deciding which class of storage to use.&lt;/p&gt;
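
&lt;p&gt;A minimal sketch of that deferral (names are illustrative): the &lt;code&gt;Task&lt;/code&gt; only declares that it needs a workspace, while the &lt;code&gt;TaskRun&lt;/code&gt; decides the actual storage, here a simple &lt;code&gt;emptyDir&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: write-message
spec:
  # The Task declares the workspace, but not the storage behind it
  workspaces:
    - name: output
  steps:
    - name: write
      image: registry.redhat.io/ubi7/ubi-minimal
      command: ['/bin/bash']
      args: ['-c', 'echo Hello &amp;gt; $(workspaces.output.path)/message.txt']
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: write-message-run
spec:
  taskRef:
    name: write-message
  # The TaskRun binds the declared workspace to a concrete volume
  workspaces:
    - name: output
      emptyDir: {}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
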

&lt;h3&gt;
  
  
  Triggers
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://tekton.dev/docs/triggers/"&gt;&lt;code&gt;Triggers&lt;/code&gt;&lt;/a&gt; are the components ready to detect and extract information from events from a variety of sources and execute &lt;code&gt;Tasks&lt;/code&gt; or &lt;code&gt;Pipelines&lt;/code&gt; to respond them.&lt;/p&gt;

&lt;p&gt;Triggers are a set of different objects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;EventListener&lt;/code&gt;: listens for events at a specified port on your OpenShift cluster. Specifies one or more &lt;code&gt;Triggers&lt;/code&gt; or &lt;code&gt;TriggerTemplates&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Trigger&lt;/code&gt;: specifies what happens when the &lt;code&gt;EventListener&lt;/code&gt; detects an event. It is defined with a &lt;code&gt;TriggerTemplate&lt;/code&gt;, a &lt;code&gt;TriggerBinding&lt;/code&gt;, and optionally, an &lt;a href="https://tekton.dev/docs/triggers/interceptors/"&gt;Interceptor&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TriggerTemplate&lt;/code&gt;: specifies a blueprint for the resource, such as a &lt;code&gt;TaskRun&lt;/code&gt; or &lt;code&gt;PipelineRun&lt;/code&gt;, that you want to instantiate and/or execute when your &lt;code&gt;EventListener&lt;/code&gt; detects an event.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TriggerBinding&lt;/code&gt;: specifies the fields in the event payload from which you want to extract data and the fields in your corresponding &lt;code&gt;TriggerTemplate&lt;/code&gt; to populate with the extracted values. You can then use the populated fields in the &lt;code&gt;TriggerTemplate&lt;/code&gt; to populate fields in the associated &lt;code&gt;TaskRun&lt;/code&gt; or &lt;code&gt;PipelineRun&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The most common use case for &lt;code&gt;Triggers&lt;/code&gt; and &lt;code&gt;EventListeners&lt;/code&gt; is integration with Git repositories through webhooks, a Git mechanism to send data about any change in a Git repository.&lt;/p&gt;
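
&lt;p&gt;A hedged sketch of how these objects fit together for a Git webhook (all names are illustrative, and the referenced &lt;code&gt;TriggerBinding&lt;/code&gt; and &lt;code&gt;TriggerTemplate&lt;/code&gt; are assumed to exist elsewhere):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: git-webhook-listener
spec:
  serviceAccountName: pipeline
  triggers:
    - name: on-git-push
      bindings:
        # Extracts fields (commit, repository URL, ...) from the webhook payload
        - ref: git-push-binding
      template:
        # Instantiates a PipelineRun with the extracted values
        ref: git-push-template

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
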

&lt;h2&gt;
  
  
  Show me the code
&lt;/h2&gt;

&lt;p&gt;You can do many more things with OpenShift Pipelines to design your cloud-native pipelines; this was only a brief overview of the main characteristics and features. If you want to play with this new toy, I created a sample &lt;a href="https://github.com/rmarting/ocp-pipelines-demo"&gt;GitHub repository&lt;/a&gt; with a demo of tasks, pipelines, and triggers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rmarting/ocp-pipelines-demo"&gt;https://github.com/rmarting/ocp-pipelines-demo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here, only your imagination, your use cases, and Tekton set the limits for creating amazing pipelines in an easy, descriptive, and simple way.&lt;/p&gt;

&lt;p&gt;And if you want to dive deeper, don't miss the following references:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://tekton.dev/docs/"&gt;Tekton Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://pipelinesascode.com/"&gt;Pipelines as Code&lt;/a&gt;, an opinionated CI based on OpenShift Pipelines / Tekton.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/tektoncd/chains"&gt;Tekton Chains&lt;/a&gt; for supply chain security.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy cloud-native pipelining 😃!!!&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>openshift</category>
      <category>operators</category>
      <category>tekton</category>
    </item>
    <item>
      <title>Monorepo, GitOps, CICD and beyond</title>
      <dc:creator>Jose Roman Martin Gil</dc:creator>
      <pubDate>Fri, 25 Mar 2022 10:00:00 +0000</pubDate>
      <link>https://dev.to/rmarting/monorepo-gitops-cicd-and-beyond-26a8</link>
      <guid>https://dev.to/rmarting/monorepo-gitops-cicd-and-beyond-26a8</guid>
      <description>&lt;h1&gt;
  
  
  GitOps Product Monorepo Sample
&lt;/h1&gt;

&lt;p&gt;Developing cloud-native products following Agile and DevOps practices may require different approaches, patterns, and processes to move at a fast pace. Some of the most common patterns in this space are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product Monorepo&lt;/li&gt;
&lt;li&gt;Trunk-based development&lt;/li&gt;
&lt;li&gt;GitOps&lt;/li&gt;
&lt;li&gt;Cloud Native Application&lt;/li&gt;
&lt;li&gt;Continuous Integration, Continuous Delivery and Continuous Deployment&lt;/li&gt;
&lt;li&gt;Sealed Secrets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using all of them at the same time can be a challenge for a new team, but it is possible to get the best benefits from each of them. I have worked on many different use cases and products for a long time, often using these patterns in some form, learning from the pitfalls, and getting the best benefits. This repo represents an &lt;em&gt;opinionated&lt;/em&gt; way for a new product team to do it, combining all these practices in the same place.&lt;/p&gt;

&lt;p&gt;I hope this approach can help in your use case. Assess everything carefully, and adapt it to your specific use case.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;product&lt;/code&gt; is the software solution created for a business scenario, adding the value needed to achieve the goals of the business. We build software to solve business scenarios.&lt;/p&gt;

&lt;p&gt;❗ Soon a blog post will clarify many of the aspects related to this repo. Stay tuned! ❗&lt;/p&gt;

&lt;h2&gt;
  
  
  Product Monorepo
&lt;/h2&gt;

&lt;p&gt;A product &lt;a href="https://en.wikipedia.org/wiki/Monorepo"&gt;Monorepo&lt;/a&gt; means having everything related to a software product in one single place, shared with the product team, to implement the full software delivery life cycle of the product.&lt;/p&gt;

&lt;p&gt;One of the most important topics for a product Monorepo is having a clear folder structure; otherwise you end up with a chaotic layout with more pains than gains.&lt;/p&gt;

&lt;p&gt;This repo is organized as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="//./bootstrap/README.md"&gt;bootstrap&lt;/a&gt; folder includes the initial components to setup the environment, mainly tools to define the base of the rest of the solution.&lt;/li&gt;
&lt;li&gt;
&lt;a href="//./apps/"&gt;apps&lt;/a&gt; folder includes the list of applications or components of the product.&lt;/li&gt;
&lt;li&gt;
&lt;a href="//./charts/README.md"&gt;charts&lt;/a&gt; folder includes the Helm Charts to accelerate the deployment of any component, tool or application related with the product.&lt;/li&gt;
&lt;li&gt;
&lt;a href="//./argocd/README.md"&gt;argocd&lt;/a&gt; folder includes the items related with ArgoCD and GitOps.&lt;/li&gt;
&lt;li&gt;
&lt;a href="//./tekton/README.md"&gt;tekton&lt;/a&gt; folder includes the items related with Tekton and Pipelines.&lt;/li&gt;
&lt;li&gt;
&lt;a href="//./e2e-test/README.md"&gt;e2e-test&lt;/a&gt; folder includes the end-to-end test suites of the product.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Trunk-based Development
&lt;/h2&gt;

&lt;p&gt;A Monorepo is a specific &lt;a href="https://trunkbaseddevelopment.com/"&gt;Trunk-Based Development&lt;/a&gt; implementation where the product team puts its source for all applications/services/libraries/frameworks into one repository and forces team members to commit together in that trunk - atomically.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;trunk&lt;/code&gt; branch is defined as production-ready, and any change merged there is a candidate to be deployed in any environment, including production. There is no other starting point to promote changes, as everything is integrated in the trunk. Other branches are considered ephemeral, used to manage short-lived chunks of changes, with a review process (by &lt;a href="https://openpracticelibrary.com/practice/pair-programming/"&gt;pair programming&lt;/a&gt; or by a Pull Request) before being merged into the trunk.&lt;/p&gt;

&lt;p&gt;This method allows the team members to develop small chunks quickly (often behind &lt;a href="https://openpracticelibrary.com/practice/feature-toggles/"&gt;feature flags&lt;/a&gt;, or not), with earlier integration cycles, reducing merge conflicts and promoting changes to others faster.&lt;/p&gt;
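
&lt;p&gt;A minimal sketch of that short-lived branch cycle with plain Git commands (the branch name and commit message are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start a short-lived branch from an up-to-date trunk
git checkout main
git pull
git checkout -b feature/small-change

# Commit a small chunk of work and push it for review
git commit -am "Add a small change behind a feature flag"
git push origin feature/small-change

# After the review (pair programming or a Pull Request),
# the branch is merged into the trunk and deleted

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
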

&lt;h2&gt;
  
  
  GitOps
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://openpracticelibrary.com/practice/gitops/"&gt;GitOps&lt;/a&gt; defines the source of truth a Git repository, where everything starts from there and defining the desired state of our product.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Native Application
&lt;/h2&gt;

&lt;p&gt;Designing our product as a &lt;a href="https://en.wikipedia.org/wiki/Cloud_native_computing"&gt;Cloud Native&lt;/a&gt; solution will bring many benefits in cloud environments. In most cases, a microservice architecture following the &lt;a href="https://12factor.net/"&gt;Twelve-Factor App&lt;/a&gt; methodology is the right starting point. Our product will implement that methodology.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continuous Integration, Continuous Delivery and Continuous Deployment
&lt;/h2&gt;

&lt;p&gt;Any product in the new automated era must be built with the well-known benefits of &lt;a href="https://openpracticelibrary.com/practice/continuous-integration/"&gt;Continuous Integration&lt;/a&gt;, &lt;a href="https://openpracticelibrary.com/practice/continuous-delivery/"&gt;Continuous Delivery&lt;/a&gt;, and &lt;a href="https://openpracticelibrary.com/practice/continuous-deployment/"&gt;Continuous Deployment&lt;/a&gt;. Otherwise you are failing from the beginning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sealed Secrets
&lt;/h2&gt;

&lt;p&gt;GitOps means &lt;strong&gt;“if it’s not in Git, it’s NOT REAL”&lt;/strong&gt;, so how do we store sensitive data, like credentials, in Git repositories that many people can access? OpenShift provides a good way to manage sensitive data in the platform, but we need to extend it with other great tools to store sensitive data in Git without breaking security.&lt;/p&gt;

&lt;p&gt;This is where &lt;a href="https://github.com/bitnami-labs/sealed-secrets"&gt;Sealed Secrets&lt;/a&gt; comes to help us.&lt;/p&gt;
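
&lt;p&gt;A quick sketch of the workflow (the file and secret names are illustrative): a plain &lt;code&gt;Secret&lt;/code&gt; is encrypted locally with &lt;code&gt;kubeseal&lt;/code&gt;, and only the resulting &lt;code&gt;SealedSecret&lt;/code&gt; is committed to Git, because only the controller running in the cluster can decrypt it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a plain Secret locally (never commit this file)
oc create secret generic db-credentials \
  --from-literal=password=changeme \
  --dry-run=client -o yaml &amp;gt; secret.yaml

# Encrypt it into a SealedSecret, safe to store in Git
kubeseal --format yaml &amp;lt; secret.yaml &amp;gt; sealed-secret.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
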

&lt;h2&gt;
  
  
  Playing with our Product Monorepo
&lt;/h2&gt;

&lt;p&gt;Now, it is time to play 🎲.&lt;/p&gt;

&lt;p&gt;This repository defines a sample product with the following applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A sample Angular application as frontend for the final users. Details &lt;a href="//./apps/sample-frontend/README.md"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A sample Quarkus application as backend to manage the &lt;em&gt;business logic&lt;/em&gt; of our product. Details &lt;a href="//./apps/sample-backend/README.md"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eM3TrOfB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://blog.jromanmartin.io/2022/03/25/img/product-deployment-topology.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eM3TrOfB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://blog.jromanmartin.io/2022/03/25/img/product-deployment-topology.png" alt="Product Monorepo Topology" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;p&gt;This repository has been developed and tested in the following environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Red Hat OpenShift Container Platform 4.10&lt;/li&gt;
&lt;li&gt;Red Hat OpenShift GitOps 1.4.3 (ArgoCD)&lt;/li&gt;
&lt;li&gt;Red Hat OpenShift Pipelines 1.6.2 (Tekton)&lt;/li&gt;
&lt;li&gt;Sealed Secrets Helm Chart 1.16.1&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Bootstrapping Red Hat OpenShift Container Platform
&lt;/h3&gt;

&lt;p&gt;To prepare your OCP environment, review and follow the &lt;a href="//./bootstrap/README.md"&gt;bootstrap instructions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If everything goes fine, your environment should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YRLWqwFE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://blog.jromanmartin.io/2022/03/25/img/cicd-tools-deployment-topology.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YRLWqwFE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://blog.jromanmartin.io/2022/03/25/img/cicd-tools-deployment-topology.png" alt="CICD Tools Deployment Topology" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  GitOps with ArgoCD
&lt;/h3&gt;

&lt;p&gt;To prepare the GitOps scenario with ArgoCD, review and follow the &lt;a href="//./argocd/README.md"&gt;instructions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If everything goes fine, your ArgoCD should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xvSvh6sz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://blog.jromanmartin.io/2022/03/25/img/argocd-deployment-topology.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xvSvh6sz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://blog.jromanmartin.io/2022/03/25/img/argocd-deployment-topology.png" alt="ArgoCD Deployment Topology" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  CICD with Tekton Pipelines
&lt;/h3&gt;

&lt;p&gt;This Product Monorepo has a set of different pipelines to cover the Software Delivery Life cycle, integrated in our GitOps approach. The pipelines are described &lt;a href="//./tekton/README.md"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feedback, Comments, and improvements
&lt;/h2&gt;

&lt;p&gt;As this is an &lt;em&gt;opinionated&lt;/em&gt; approach, based on my field experience in real scenarios and use cases, I am always open to learning from other experiences and use cases. Feel free to comment, improve, or change my mind with your great ideas. Review our &lt;a href="//./CONTRIBUTING.md"&gt;Contribution Guide&lt;/a&gt;: you can contribute in many different ways (issues, pull requests, comments, …), so don’t miss the chance to do it.&lt;/p&gt;

&lt;p&gt;I am also open to sharing this approach, these techniques, and these tools with a community, a meetup, or simply a group of colleagues around topics such as DevOps, Agile, GitOps, Cloud Native, … If you think I can participate, please let me know.&lt;/p&gt;

&lt;p&gt;If you are here, thank you so much. 😄 🎉&lt;/p&gt;

</description>
      <category>howto</category>
      <category>quarkus</category>
      <category>springboot</category>
    </item>
    <item>
      <title>Lessons learned migrating Spring Boot to Quarkus</title>
      <dc:creator>Jose Roman Martin Gil</dc:creator>
      <pubDate>Fri, 03 Dec 2021 09:15:00 +0000</pubDate>
      <link>https://dev.to/rmarting/lessons-learned-migrating-spring-boot-to-quarkus-1ccb</link>
      <guid>https://dev.to/rmarting/lessons-learned-migrating-spring-boot-to-quarkus-1ccb</guid>
      <description>&lt;p&gt;This blog post describes a set of lessons learned from my personal experience migrating Spring Boot applications to Quarkus. The article does not cover all the topics, approaches, architectures or designs to keep in mind for an enterprise full migration project, but it includes a set of conclusions from a personal perspective.&lt;/p&gt;

&lt;p&gt;Cloud Native Applications, Microservices Architectures, Event-Driven Architectures, Serverless, … are the most common patterns, designs, and topics used by enterprises, start-ups, and software companies to design and deploy new applications in this new Cloud Era (a.k.a. the &lt;a href="https://www.infoq.com/articles/microservices-post-kubernetes/"&gt;Kubernetes Era&lt;/a&gt;). To build these kinds of new applications there is a list of different technologies, frameworks, and languages (Go, Node.js, Java, …); however, one of the most widespread and used is Spring Boot.&lt;/p&gt;

&lt;p&gt;Spring Boot is a well-known framework, with a large community of developers, a long history, and familiarity for many developers. However, Spring Boot has other behaviors that may not fit well in a cloud-native environment (resource consumption, startup time, response time, development lifecycle, …).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://quarkus.io/"&gt;Quarkus&lt;/a&gt; is the new player in the playground to design new applications under these paradigms.&lt;/p&gt;

&lt;p&gt;Quarkus is a full-stack, Kubernetes-native Java framework made for Java Virtual Machines and native compilation. Quarkus is crafted from best-of-breed Java libraries and standards, with amazingly fast boot times and incredibly low memory usage in container orchestration platforms like Kubernetes.&lt;/p&gt;

&lt;p&gt;Quarkus has a clear vision based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://quarkus.io/container-first"&gt;Container First&lt;/a&gt;: Optimized for low memory usage and fast startup times.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://quarkus.io/continuum"&gt;Imperative and Reactive&lt;/a&gt;: Designed with this new world in mind and provides first-class support for these different paradigms.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://quarkus.io/developer-joy"&gt;Developer Joy&lt;/a&gt;: Designed to make happy and fun the developer’s life.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://quarkus.io/standards"&gt;Community and Standards&lt;/a&gt;: No need to learn new technologies, designed on top of proven standards (Eclipse MicroProfile, JAX-RS, JPA, …)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://quarkus.io/kubernetes-native"&gt;Kube-native&lt;/a&gt;: Providing tools optimized for Kubernetes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Houston, we have a problem!&lt;/strong&gt; Quarkus versus Spring Boot? Quarkus? Spring Boot? Which one?&lt;/p&gt;

&lt;p&gt;In many migration projects (frameworks, application servers, JDKs, …) the effort to adapt the source code to the new platform is a key factor. Refactoring code implies an effort ranging from trivial to epic, and it can tilt the decision to go or not to go. Refactoring from Spring Boot to Quarkus is no different.&lt;/p&gt;

&lt;p&gt;This article describes the following migration approaches from Spring Boot to Quarkus:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Migrating to Quarkus Extensions for Spring Boot&lt;/li&gt;
&lt;li&gt;Refactoring to Standard Libraries and Quarkus Extensions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🚨🚨 &lt;strong&gt;Disclaimer and Spoiler Alert&lt;/strong&gt; This article is not an official migration guide. 🚨🚨&lt;/p&gt;

&lt;p&gt;Any kind of migration requires analyzing different things to answer questions such as why?, who?, when?, how?, where? These questions are not easy to analyze and describe in a single article because they involve a large number of aspects: processes, people, management, testing, … But who has not heard sentences such as: &lt;em&gt;Java is dead now&lt;/em&gt;; &lt;em&gt;XX framework is better for cloud containers&lt;/em&gt;; &lt;em&gt;Java consumes a lot of resources&lt;/em&gt;, … Well, &lt;strong&gt;Quarkus changes many of those sentences&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This article focuses only on some aspects of how to code and refactor the source from Spring Boot to Quarkus. It does not cover all the features or capabilities of both frameworks, but it summarizes a set of lessons learned from my personal experience. Quarkus is a highly active community, so some of these lessons could change in the future (maybe in a few days).&lt;/p&gt;

&lt;h2&gt;
  
  
  Application to migrate
&lt;/h2&gt;

&lt;p&gt;This blog post is based on a Spring Boot application with the following modules or components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spring Boot 2&lt;/li&gt;
&lt;li&gt;REST endpoints based on Spring Web&lt;/li&gt;
&lt;li&gt;Apache Kafka as messaging system&lt;/li&gt;
&lt;li&gt;Apicurio Service Registry as API schema registry&lt;/li&gt;
&lt;li&gt;Avro schemas&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is a baseline for analyzing this migration; other common features (e.g., database persistence) are not included, to reduce the scope of the migration.&lt;/p&gt;

&lt;p&gt;The original code of this application is available &lt;a href="https://github.com/rmarting/kafka-clients-sb-sample"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The original application starts in 5 seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2021-12-03 09:33:56.724 INFO 1 --- [main] com.rmarting.kafka.Application : Started Application in 5.132 seconds (JVM running for 5.76)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Migrating to Quarkus Extensions for Spring Boot
&lt;/h2&gt;

&lt;p&gt;This approach is focused on reducing the number of changes and reusing as much code as possible. It is made possible by a set of Quarkus Extensions for Spring Boot, designed to provide a compatibility layer for Spring Boot. At the time of writing this article, the following Quarkus Extensions for Spring are available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;spring-di&lt;/strong&gt; : Compatibility layer for Spring dependency injection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;spring-web&lt;/strong&gt; : Compatibility layer for Spring Web.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;spring-boot-properties&lt;/strong&gt; : Compatibility layer to set up your Spring Boot using @ConfigurationProperties annotations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;spring-security&lt;/strong&gt; : Compatibility layer for Spring Security.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;spring-cache&lt;/strong&gt; : Compatibility layer for Spring Cache annotations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;spring-data-jpa&lt;/strong&gt; : Compatibility layer for Spring Data JPA repositories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;spring-scheduled&lt;/strong&gt; : Compatibility layer for Spring Scheduled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;spring-cloud-config-client&lt;/strong&gt; : Compatibility layer to read configuration properties at runtime from the Spring Cloud Config Server.&lt;/li&gt;
&lt;/ul&gt;
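&lt;p&gt;For reference, enabling the base compatibility layer is just a matter of adding the extensions as dependencies in the &lt;code&gt;pom.xml&lt;/code&gt; file. A minimal sketch (versions are managed by the Quarkus BOM, so check the exact coordinates for your Quarkus version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- Quarkus extensions for Spring (sketch; versions come from the Quarkus BOM) --&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;io.quarkus&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;quarkus-spring-di&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;io.quarkus&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;quarkus-spring-web&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;io.quarkus&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;quarkus-spring-boot-properties&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;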

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; : Some extensions are considered preview, so backward compatibility and continued presence in the ecosystem are not guaranteed.&lt;/p&gt;

&lt;p&gt;The main list of changes done to migrate the application is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Quarkus requires JDK 11.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;spring-di&lt;/code&gt;, &lt;code&gt;spring-web&lt;/code&gt; and &lt;code&gt;spring-boot-properties&lt;/code&gt; extensions are basically &lt;em&gt;mandatory&lt;/em&gt; for any Spring Boot application. We could say they are the base of any migration to Quarkus Spring.&lt;/li&gt;
&lt;li&gt;These extensions provide the base features of Spring Boot such as: Dependency Injection, Web, Configuration.&lt;/li&gt;
&lt;li&gt;Although the compatibility layer supports most of the Spring DI capabilities, some arcane features may not be supported.&lt;/li&gt;
&lt;li&gt;Spring Web annotations can be kept exactly as they are.&lt;/li&gt;
&lt;li&gt;Swagger annotations must be refactored to use MicroProfile OpenAPI. This refactoring basically means using classes from the &lt;code&gt;org.eclipse.microprofile.openapi.annotations&lt;/code&gt; package.&lt;/li&gt;
&lt;li&gt;OpenAPI and Swagger capabilities are now provided by the &lt;code&gt;quarkus-smallrye-openapi&lt;/code&gt; extension; no other OpenAPI or Swagger dependencies are needed.&lt;/li&gt;
&lt;li&gt;Health checks (the actuators provided by &lt;code&gt;spring-boot-starter-actuator&lt;/code&gt;) are now provided by the &lt;code&gt;quarkus-smallrye-health&lt;/code&gt; extension. This requires new liveness and readiness probes in your Kubernetes deployment.&lt;/li&gt;
&lt;li&gt;Integration with Kafka using the Kafka Producer and Consumer API (provided by the Kafka Clients) only requires the &lt;code&gt;quarkus-kafka-client&lt;/code&gt; extension.&lt;/li&gt;
&lt;li&gt;Spring Kafka does not have an equivalent compatibility extension; however, Quarkus provides the &lt;code&gt;quarkus-smallrye-reactive-messaging-kafka&lt;/code&gt; extension with a set of annotations to consume, produce, or stream data with Apache Kafka. This means only a small set of changes.&lt;/li&gt;
&lt;/ul&gt;
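&lt;p&gt;As an illustration of how small the change can be, consider a hypothetical REST controller (not taken from the sample application). The Spring Web annotations are kept as-is thanks to the &lt;code&gt;spring-web&lt;/code&gt; extension; only the Swagger annotation is replaced by its MicroProfile OpenAPI equivalent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical controller: the Spring Web annotations stay untouched;
// only the OpenAPI annotation moves from Swagger to MicroProfile OpenAPI.
import org.eclipse.microprofile.openapi.annotations.Operation;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MessageController {

    @Operation(summary = "Get a message by id") // was Swagger's @ApiOperation
    @GetMapping("/messages/{id}")
    public String getMessage(@PathVariable("id") String id) {
        return "message-" + id;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;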

&lt;p&gt;The result of this migration is available &lt;a href="https://github.com/rmarting/kafka-clients-sb-sample/tree/feature/quarkus-edition"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This new application starts in less than 2 seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Dec 03, 2021 9:51:17 AM io.quarkus.bootstrap.runner.Timing printStartupTime
INFO: kafka-clients-sb-sample 3.0.0-SNAPSHOT on JVM (powered by Quarkus 1.13.7.Final) started in 1.887s. Listening on: http://0.0.0.0:8181

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Refactoring to Standard Libraries and Quarkus Extensions
&lt;/h2&gt;

&lt;p&gt;This approach implies a change of mindset for your application, aligning it with standards and refactoring your code. In return, you get a fully Quarkus-compliant application and can use all of its power.&lt;/p&gt;

&lt;p&gt;For this approach, we need to map the Spring Boot features to the right Quarkus Extensions:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Spring Boot Feature&lt;/th&gt;
&lt;th&gt;Quarkus Extension&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;spring-di&lt;/td&gt;
&lt;td&gt;&lt;a href="https://quarkus.io/guides/cdi"&gt;Quarkus CDI&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;spring-boot-starter-web&lt;/td&gt;
&lt;td&gt;&lt;a href="https://quarkus.io/guides/rest-json"&gt;JAX-RS Services&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;spring-boot-starter-actuator&lt;/td&gt;
&lt;td&gt;&lt;a href="https://quarkus.io/guides/smallrye-health"&gt;MicroProfile Health&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;springdoc-openapi-ui&lt;/td&gt;
&lt;td&gt;&lt;a href="https://quarkus.io/guides/openapi-swaggerui"&gt;OpenAPI and Swagger UI&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;spring-kafka&lt;/td&gt;
&lt;td&gt;&lt;a href="https://quarkus.io/guides/kafka"&gt;Kafka with Reactive Messaging&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The main list of changes done to migrate the application is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Quarkus requires JDK 11.&lt;/li&gt;
&lt;li&gt;Refactor &lt;code&gt;@Service&lt;/code&gt;, &lt;code&gt;@Component&lt;/code&gt;, &lt;code&gt;@Autowired&lt;/code&gt; Spring annotations to &lt;code&gt;@Singleton&lt;/code&gt;, &lt;code&gt;@ApplicationScoped&lt;/code&gt;, &lt;code&gt;@Inject&lt;/code&gt; CDI annotations.&lt;/li&gt;
&lt;li&gt;Refactor &lt;code&gt;@Configuration&lt;/code&gt;, &lt;code&gt;@Value&lt;/code&gt; Spring annotations to &lt;code&gt;@ApplicationScoped&lt;/code&gt;, &lt;code&gt;@ConfigProperty&lt;/code&gt; Quarkus annotations.&lt;/li&gt;
&lt;li&gt;Refactor Spring Web Annotations to JAX-RS Annotations. This guide includes a &lt;a href="https://quarkus.io/guides/spring-web#conversion-table"&gt;conversion table&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Swagger annotations must be refactored to use MicroProfile OpenAPI. This refactoring basically means using classes from the &lt;code&gt;org.eclipse.microprofile.openapi.annotations&lt;/code&gt; package.&lt;/li&gt;
&lt;li&gt;OpenAPI and Swagger capabilities are now provided by the &lt;code&gt;quarkus-smallrye-openapi&lt;/code&gt; extension; no other OpenAPI or Swagger dependencies are needed.&lt;/li&gt;
&lt;li&gt;Health checks (the actuators provided by &lt;code&gt;spring-boot-starter-actuator&lt;/code&gt;) are now provided by the &lt;code&gt;quarkus-smallrye-health&lt;/code&gt; extension. This requires new liveness and readiness probes in your Kubernetes deployment.&lt;/li&gt;
&lt;li&gt;Integration with Kafka using the Kafka Producer and Consumer API (provided by the Kafka Clients) only requires the &lt;code&gt;quarkus-kafka-client&lt;/code&gt; extension.&lt;/li&gt;
&lt;li&gt;Spring Kafka does not have an equivalent compatibility extension; however, Quarkus provides the &lt;code&gt;quarkus-smallrye-reactive-messaging-kafka&lt;/code&gt; extension with a set of annotations to consume, produce, or stream data with Apache Kafka. This means only a small set of changes.&lt;/li&gt;
&lt;/ul&gt;
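&lt;p&gt;To illustrate the refactoring above, here is the same kind of hypothetical controller (not taken from the sample application), fully moved to CDI, MicroProfile Config and JAX-RS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical example: CDI instead of Spring DI, MicroProfile Config
// instead of @Value, and JAX-RS instead of Spring Web annotations.
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped                         // was @Service / @Component
@Path("/messages")                         // was @RequestMapping("/messages")
public class MessageResource {

    @Inject                                // was @Autowired
    MessageService messageService;         // hypothetical service bean

    @ConfigProperty(name = "app.greeting") // was @Value("${app.greeting}")
    String greeting;

    @GET
    @Path("/{id}")                         // was @GetMapping("/{id}")
    public String getMessage(@PathParam("id") String id) {
        return greeting + " " + messageService.find(id);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;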

&lt;p&gt;The result of this refactoring is available &lt;a href="https://github.com/rmarting/kafka-clients-quarkus-sample"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This application starts in 1.4 seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Dec 03, 2021 10:21:17 AM io.quarkus.bootstrap.runner.Timing printStartupTime
INFO: kafka-clients-sb-sample 3.0.0-SNAPSHOT on JVM (powered by Quarkus 1.13.7.Final) started in 1.387s. Listening on: http://0.0.0.0:8181

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;Both migration experiences can be summarized in the following lessons learned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Migrating to Quarkus was feasible with modest effort. Neither approach required a huge investment or hard work.&lt;/li&gt;
&lt;li&gt;Quarkus Extensions for Spring Boot are enough for the most common modules (&lt;code&gt;di&lt;/code&gt;, &lt;code&gt;web&lt;/code&gt;, &lt;code&gt;jpa&lt;/code&gt;, &lt;code&gt;security&lt;/code&gt;, …) but may not cover yours (&lt;code&gt;jta&lt;/code&gt;, &lt;code&gt;web-services&lt;/code&gt;, complex injection references, …). Analyzing the application is &lt;strong&gt;mandatory&lt;/strong&gt; to identify the gap.&lt;/li&gt;
&lt;li&gt;Quarkus Extensions for Spring Boot may not cover all Spring features, so a refactor to Quarkus could be needed (health endpoints, OpenAPI, Swagger UI, messaging integration). However, the effort to refactor them was not that hard.&lt;/li&gt;
&lt;li&gt;Refactoring to Quarkus means moving your code to standard libraries (which hopefully you already know, such as JAX-RS), so the learning curve to start with Quarkus is minimal.&lt;/li&gt;
&lt;li&gt;Refactoring to Quarkus gives you the full power of Quarkus and its extensions.&lt;/li&gt;
&lt;li&gt;Bugs and issues could appear in both migrations. Quarkus is stable and evolving fast, resolving issues and adding new features; you can check this on the &lt;a href="https://github.com/quarkusio/quarkus/releases"&gt;releases page&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;The fully migrated Quarkus application was the fastest one after completing the refactoring, as you can see in the following table. It is not a complete performance test, but it might give you an idea of the performance capabilities of Quarkus:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Implementation&lt;/th&gt;
&lt;th&gt;Startup&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Spring Boot&lt;/td&gt;
&lt;td&gt;5 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Quarkus extensions for Spring&lt;/td&gt;
&lt;td&gt;1.7 seconds 🚀&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Quarkus&lt;/td&gt;
&lt;td&gt;1.4 seconds 🚀&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Quarkus Native&lt;/td&gt;
&lt;td&gt;0.042 seconds 🚀🚀&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
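&lt;p&gt;The native figure comes from compiling the application into a native executable. In a standard Quarkus Maven project this can be done as follows (a sketch, assuming the default &lt;code&gt;native&lt;/code&gt; profile generated by Quarkus):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build a native executable (container build, so no local GraalVM is required)
./mvnw package -Pnative -Dquarkus.native.container-build=true

# Run it and check the startup time printed in the first log lines
./target/*-runner
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;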

&lt;p&gt;Other components (e.g., testing, JPA, cloud integration, security, …) are not covered in this blog post because they would extend its scope and length. Quarkus is a highly active community where new features, issues, and extensions appear every day, so some of these lessons learned may soon be covered, resolved, or fixed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started with my migration
&lt;/h2&gt;

&lt;p&gt;Migrating from Spring Boot to Quarkus requires an effort to identify the best approach from the current state (AS-IS) to the final state (TO-BE). There is no silver bullet for migrating applications, but there are some tools and references that can help:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://developers.redhat.com/products/mta/overview"&gt;Red Hat Migration Toolkit for Applications&lt;/a&gt;: This tool could analyze your code to identify the main migration issues. The latest version includes a set of rules to check your code and identify the main issues to migrate to Quarkus.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://quarkus.io/guides/"&gt;Quarkus Guides&lt;/a&gt; are a great resource for getting started in Quarkus.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/quarkusio/quarkus-quickstarts"&gt;Quarkus QuickStarts&lt;/a&gt; is a large repository of code with many samples.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://quarkus.io/blog/"&gt;Quarkus Blog&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dzone.com/articles/migrating-a-spring-boot-application-to-quarkus-cha"&gt;Migrating SpringBoot PetClinic REST to Quarkus&lt;/a&gt; by Jonathan Vila as another migration reference from Spring Boot.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.redhat.com/blog/2020/04/10/migrating-a-spring-boot-microservices-application-to-quarkus"&gt;Migrating a Spring Boot microservices application to Quarkus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.redhat.com/blog/2020/07/17/migrating-spring-boot-tests-to-quarkus"&gt;Migrating Spring Boot tests to Quarkus&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🎉 Enjoy your journey to Quarkus. 🎉&lt;/p&gt;

</description>
      <category>howto</category>
      <category>quarkus</category>
      <category>springboot</category>
      <category>migration</category>
    </item>
    <item>
      <title>Migrating Kafka clusters with MirrorMaker2 and Strimzi</title>
      <dc:creator>Jose Roman Martin Gil</dc:creator>
      <pubDate>Fri, 19 Nov 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/rmarting/migrating-kafka-clusters-with-mirrormaker2-and-strimzi-577h</link>
      <guid>https://dev.to/rmarting/migrating-kafka-clusters-with-mirrormaker2-and-strimzi-577h</guid>
<description>&lt;p&gt;In this article we (my colleague &lt;a href="https://www.linkedin.com/in/manuel-schindler-aa0397118/"&gt;Manuel Schindler&lt;/a&gt;&lt;br&gt;
and I) would like to focus on the most common challenges of migrating Apache&lt;br&gt;
Kafka clusters between different OpenShift platforms, and how to overcome&lt;br&gt;
them using Apache MirrorMaker and &lt;a href="https://strimzi.io/"&gt;Strimzi Operators&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🗞️ ℹ️ LATEST NEWS ℹ️ 🗞️&lt;/strong&gt; This article has&lt;br&gt;
been accepted and published on the &lt;a href="https://strimzi.io/blog/2021/11/22/migrating-kafka-with-mirror-maker2/"&gt;Strimzi Blog&lt;/a&gt; 🎉. We are glad 🤩 to help and contribute to the Strimzi community with our&lt;br&gt;
experience and knowledge of this amazing open source project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thank you so much Strimzi Community&lt;/strong&gt;!!! 💪 🎉&lt;/p&gt;

&lt;h3&gt;
  
  
  Greetings
&lt;/h3&gt;

&lt;p&gt;Many thanks to some great colleagues for helping us review this content:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hugo Guerrero (&lt;a href="https://twitter.com/hguerreroo"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/hugoguerrero/"&gt;LinkedIn&lt;/a&gt;)
&lt;/li&gt;
&lt;li&gt;Jakub Scholz (&lt;a href="https://twitter.com/scholzj"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/scholzj/"&gt;LinkedIn&lt;/a&gt;)
&lt;/li&gt;
&lt;li&gt;Rafael Yanez (&lt;a href="https://www.linkedin.com/in/ryanezillescas/"&gt;LinkedIn&lt;/a&gt;)
&lt;/li&gt;
&lt;li&gt;Paolo Patierno (&lt;a href="https://twitter.com/ppatierno"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/paolopatierno/"&gt;LinkedIn&lt;/a&gt;) &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>strimzi</category>
      <category>mirrormaker2</category>
      <category>migration</category>
      <category>apachekafka</category>
    </item>
    <item>
      <title>Integrating Quarkus with Apicurio Service Registry</title>
      <dc:creator>Jose Roman Martin Gil</dc:creator>
      <pubDate>Fri, 18 Dec 2020 09:15:00 +0000</pubDate>
      <link>https://dev.to/rmarting/integrating-quarkus-with-apicurio-service-registry-196i</link>
      <guid>https://dev.to/rmarting/integrating-quarkus-with-apicurio-service-registry-196i</guid>
<description>&lt;p&gt;Step by step, most new cloud-native applications and microservices designs are based on &lt;a href="https://developers.redhat.com/topics/event-driven" rel="noopener noreferrer"&gt;event-driven architecture (EDA)&lt;/a&gt; to respond to real-time information by sending and receiving information about individual events. This kind of architecture relies on asynchronous, non-blocking communication between event producers and consumers through an event streaming backbone, such as Apache Kafka running on top of Kubernetes. In these scenarios, where a large number of different events are managed, it is very important to define a governance model where each event can be defined as an API, allowing producers and consumers to produce and consume checked and validated events. A Service Registry helps us do that.&lt;/p&gt;

&lt;p&gt;From my field experience with many projects, I have found that the most typical landscape is based on the following well-known components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://strimzi.io/" rel="noopener noreferrer"&gt;Strimzi&lt;/a&gt; to deploy Apache Kafka clusters as streaming backbone.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.apicur.io/" rel="noopener noreferrer"&gt;Apicurio Service Registry&lt;/a&gt; as datastore for events API.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.openshift.com/" rel="noopener noreferrer"&gt;OpenShift Container Platform&lt;/a&gt; to deploy and run the different components.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://quarkus.io/" rel="noopener noreferrer"&gt;Quarkus&lt;/a&gt; as framework to develop client applications.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://avro.apache.org/" rel="noopener noreferrer"&gt;Avro&lt;/a&gt; as data serialization system to declare schemas as events API.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This article describes how easy it is to integrate your Quarkus applications with Apicurio Service Registry.&lt;/p&gt;

&lt;h2&gt;
  
  
  🤖 Apicurio Service Registry
&lt;/h2&gt;

&lt;p&gt;Service Registry is a datastore for sharing standard event schemas and API designs across API and event-driven architectures. Service Registry decouples the structure of your data from your client applications and lets you share and manage your data types and API descriptions at runtime. Decoupling your data structure from your client applications reduces costs by decreasing overall message size, and creates efficiencies by increasing the consistent reuse of schemas and API designs across your organization.&lt;/p&gt;

&lt;p&gt;Some of the most common use cases where Service Registry helps us are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Client applications can dynamically push or pull the latest schema updates to or from Service Registry at runtime without needing to redeploy.&lt;/li&gt;
&lt;li&gt;Developer teams can query the registry for existing schemas required for services already deployed in production.&lt;/li&gt;
&lt;li&gt;Developer teams can register new schemas required for new services in development or rolling to production.&lt;/li&gt;
&lt;li&gt;Store schemas used to serialize and deserialize messages, which can then be referenced from your client applications to ensure that the messages that they send and receive are compatible with those schemas.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Apicurio is an open source project that provides a Service Registry ready for this scenario, with the following main features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Support for multiple payload formats for standard event schemas and API specifications.&lt;/li&gt;
&lt;li&gt;Pluggable storage options including AMQ Streams, embedded Infinispan, or PostgreSQL database.&lt;/li&gt;
&lt;li&gt;Registry content management using a web console, REST API command, Maven plug-in, or Java client.&lt;/li&gt;
&lt;li&gt;Rules for content validation and version compatibility to govern how registry content evolves over time.&lt;/li&gt;
&lt;li&gt;Full Apache Kafka schema registry support, including integration with Kafka Connect for external systems.&lt;/li&gt;
&lt;li&gt;Client serializer/deserializer (Serdes) to validate Kafka and other message types at runtime.&lt;/li&gt;
&lt;li&gt;Cloud-native Quarkus Java runtime for low memory footprint and fast deployment times.&lt;/li&gt;
&lt;li&gt;Compatibility with existing Confluent schema registry client applications.&lt;/li&gt;
&lt;li&gt;Operator-based installation of Service Registry on OpenShift.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  💻 Client Applications Workflow
&lt;/h2&gt;

&lt;p&gt;The typical workflow when we introduce a Service Registry in our architecture is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declare the event schema using one of the most common data formats, such as Apache Avro, JSON Schema, Google Protocol Buffers, OpenAPI, AsyncAPI, GraphQL, Kafka Connect schemas, WSDL, or XML Schema (XSD).&lt;/li&gt;
&lt;li&gt;Register the schema as an artifact in Service Registry through the Service Registry UI, REST API, Maven plug-in, or Java clients. From there, client applications can use that schema to validate that messages conform to the correct data structure at runtime.&lt;/li&gt;
&lt;li&gt;Kafka Producer applications use a serializer to encode messages that conform to a specific event schema.&lt;/li&gt;
&lt;li&gt;Kafka Consumer applications then use a deserializer to validate that messages have been serialized using that correct schema, based on a specific schema ID.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This workflow ensures consistent schema use and helps to prevent data errors at runtime.&lt;/p&gt;

&lt;h2&gt;
  
  
  📄 Avro Schemas into Service Registry
&lt;/h2&gt;

&lt;p&gt;Avro provides a &lt;a href="https://avro.apache.org/docs/current/spec.html#schemas" rel="noopener noreferrer"&gt;JSON schema specification&lt;/a&gt; to declare a large variety of data structures, such as our simple example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Message"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"namespace"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"io.jromanmartin.kafka.schema.avro"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"record"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"doc"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Schema for a Message."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"fields"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"long"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"doc"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Message timestamp."&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"string"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"doc"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Message content."&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This schema will define a simple message event.&lt;/p&gt;

&lt;p&gt;Avro also provides a &lt;a href="https://avro.apache.org/docs/current/gettingstartedjava.html" rel="noopener noreferrer"&gt;Maven Plugin&lt;/a&gt; to autogenerate Java classes based on the schema definitions (&lt;code&gt;.avsc&lt;/code&gt; files).&lt;/p&gt;
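&lt;p&gt;A minimal sketch of that plug-in configuration (assuming the schemas live in &lt;code&gt;src/main/resources/schemas&lt;/code&gt;; adjust paths and the version property to your project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;plugin&amp;gt;
   &amp;lt;groupId&amp;gt;org.apache.avro&amp;lt;/groupId&amp;gt;
   &amp;lt;artifactId&amp;gt;avro-maven-plugin&amp;lt;/artifactId&amp;gt;
   &amp;lt;version&amp;gt;${avro.version}&amp;lt;/version&amp;gt;
   &amp;lt;executions&amp;gt;
      &amp;lt;execution&amp;gt;
         &amp;lt;phase&amp;gt;generate-sources&amp;lt;/phase&amp;gt;
         &amp;lt;goals&amp;gt;
            &amp;lt;goal&amp;gt;schema&amp;lt;/goal&amp;gt;
         &amp;lt;/goals&amp;gt;
         &amp;lt;configuration&amp;gt;
            &amp;lt;sourceDirectory&amp;gt;${project.basedir}/src/main/resources/schemas&amp;lt;/sourceDirectory&amp;gt;
            &amp;lt;outputDirectory&amp;gt;${project.build.directory}/generated-sources&amp;lt;/outputDirectory&amp;gt;
         &amp;lt;/configuration&amp;gt;
      &amp;lt;/execution&amp;gt;
   &amp;lt;/executions&amp;gt;
&amp;lt;/plugin&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;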

&lt;p&gt;Now we can publish it into Service Registry to be used at runtime by our client applications. The Apicurio Maven Plugin is an easy way to publish the schemas into Service Registry with a simple definition in our &lt;code&gt;pom.xml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;plugin&amp;gt;&lt;/span&gt;
   &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;io.apicurio&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
   &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;apicurio-registry-maven-plugin&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
   &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;${apicurio.version}&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
   &lt;span class="nt"&gt;&amp;lt;executions&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;execution&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;phase&amp;gt;&lt;/span&gt;generate-sources&lt;span class="nt"&gt;&amp;lt;/phase&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;goals&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;goal&amp;gt;&lt;/span&gt;register&lt;span class="nt"&gt;&amp;lt;/goal&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;/goals&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;configuration&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;registryUrl&amp;gt;&lt;/span&gt;${apicurio.registry.url}&lt;span class="nt"&gt;&amp;lt;/registryUrl&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;artifactType&amp;gt;&lt;/span&gt;AVRO&lt;span class="nt"&gt;&amp;lt;/artifactType&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;artifacts&amp;gt;&lt;/span&gt;
               &lt;span class="c"&gt;&amp;lt;!-- Schema definition for TopicIdStrategy strategy --&amp;gt;&lt;/span&gt;
               &lt;span class="nt"&gt;&amp;lt;messages-value&amp;gt;&lt;/span&gt;${project.basedir}/src/main/resources/schemas/message.avsc&lt;span class="nt"&gt;&amp;lt;/messages-value&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;/artifacts&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;/configuration&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;/execution&amp;gt;&lt;/span&gt;
   &lt;span class="nt"&gt;&amp;lt;/executions&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/plugin&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the Apicurio Maven Plugin in our application's Maven lifecycle can help us define or extend our ALM (and our CI/CD pipelines) to publish or update the schemas every time we release new versions of them. That is beyond the scope of this article, but it is something you could explore further.&lt;/p&gt;

&lt;p&gt;As soon as we publish our schema into Service Registry, we can manage it from the UI:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/images/apicurio-registry/apicurio-artifact-details.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.jromanmartin.io%2Fimages%2Fapicurio-registry%2Fapicurio-artifact-details.png" title="Apicurio Service Registry Artifact Details"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 Quarkus, Apache Kafka and Service Registry
&lt;/h2&gt;

&lt;p&gt;Quarkus provides a set of dependencies that allow our application to produce and consume messages to and from Apache Kafka. It is very straightforward to use once we add the dependencies in our &lt;code&gt;pom.xml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="c"&gt;&amp;lt;!-- Kafka Clients --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;io.quarkus&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;quarkus-kafka-client&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;&amp;lt;!-- Reactive Messaging --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;io.quarkus&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;quarkus-smallrye-reactive-messaging-kafka&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Connecting your application to the Apache Kafka cluster is as easy as setting the following property in your &lt;code&gt;application.properties&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kafka.bootstrap.servers = my-kafka-kafka-bootstrap:9092
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apicurio Service Registry provides Kafka client serializer/deserializer classes for Kafka producer and consumer applications. To use them in our application, add the following dependency:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="c"&gt;&amp;lt;!-- Apicurio Serializer/Deserializer --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;io.apicurio&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;apicurio-registry-utils-serde&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;${apicurio.version}&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  📥 Producing Messages from Quarkus
&lt;/h3&gt;

&lt;p&gt;Quarkus provides a set of properties and beans to declare Kafka Producers to send messages (in our case Avro-schema instances) to Apache Kafka. The most important properties to set up are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;key.serializer&lt;/code&gt;: Identifies the serializer class to serialize the key of the Kafka record.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;value.serializer&lt;/code&gt;: Identifies the serializer class to serialize the value of the Kafka record.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here we have to set some specific values in these properties to enable serialization using Avro schemas registered in the Service Registry. Basically, we need to identify the following concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The serializer class for Avro schemas, provided by the Apicurio SerDe library: &lt;code&gt;io.apicurio.registry.utils.serde.AvroKafkaSerializer&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The Apicurio Service Registry endpoint used to validate schemas: &lt;code&gt;apicurio.registry.url&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The strategy used to look up the schema definition in the registry: &lt;code&gt;apicurio.registry.artifact-id&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
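&lt;p&gt;The default strategy, &lt;code&gt;TopicIdStrategy&lt;/code&gt;, derives the artifact ID from the topic name plus a &lt;code&gt;-key&lt;/code&gt; or &lt;code&gt;-value&lt;/code&gt; suffix. As a minimal, dependency-free sketch of that naming rule (illustrative only, not the Apicurio class itself):&lt;/p&gt;

```java
// Illustrative sketch of the default TopicIdStrategy naming rule:
// the artifact ID is the topic name plus a "-key" or "-value" suffix.
public class TopicIdStrategySketch {

    public static String artifactId(String topic, boolean isKey) {
        return topic + (isKey ? "-key" : "-value");
    }

    public static void main(String[] args) {
        // The value schema of the "messages" topic is looked up as "messages-value".
        System.out.println(artifactId("messages", false)); // prints messages-value
    }
}
```

&lt;p&gt;This is why registering the schema under the artifact ID &lt;code&gt;topicName-value&lt;/code&gt; is enough for the serializer to find it without any extra configuration.&lt;/p&gt;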

&lt;p&gt;So a sample definition for a producer bean to send messages could be similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;    &lt;span class="nd"&gt;@Produces&lt;/span&gt;
    &lt;span class="nd"&gt;@RequestScoped&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;Producer&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;createProducer&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;Properties&lt;/span&gt; &lt;span class="n"&gt;props&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Properties&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

        &lt;span class="c1"&gt;// Kafka Bootstrap&lt;/span&gt;
        &lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;put&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ProducerConfig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;BOOTSTRAP_SERVERS_CONFIG&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;kafkaBrokers&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="c1"&gt;// Security&lt;/span&gt;
        &lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;put&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;AdminClientConfig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;SECURITY_PROTOCOL_CONFIG&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;kafkaSecurityProtocol&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;put&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;SaslConfigs&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;SASL_MECHANISM&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"SCRAM-SHA-512"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;put&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;SaslConfigs&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;SASL_JAAS_CONFIG&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                &lt;span class="s"&gt;"org.apache.kafka.common.security.scram.ScramLoginModule required username=\""&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;kafkaUser&lt;/span&gt;
                        &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"\" password=\""&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;kafkaPassword&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"\";"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="c1"&gt;// Producer Client&lt;/span&gt;
        &lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;putIfAbsent&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ProducerConfig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;CLIENT_ID_CONFIG&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;producerClientId&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"-"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;getHostname&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

        &lt;span class="c1"&gt;// Serializer for Keys and Values props.putIfAbsent(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());&lt;/span&gt;
        &lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;putIfAbsent&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ProducerConfig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;VALUE_SERIALIZER_CLASS_CONFIG&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;AvroKafkaSerializer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

        &lt;span class="c1"&gt;// Service Registry props.putIfAbsent(AbstractKafkaSerDe.REGISTRY_URL_CONFIG_PARAM, serviceRegistryUrl);&lt;/span&gt;
        &lt;span class="c1"&gt;// Topic Id Strategy (schema = topicName-(key|value)) - Default Strategy&lt;/span&gt;
        &lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;putIfAbsent&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;AbstractKafkaSerializer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;REGISTRY_ARTIFACT_ID_STRATEGY_CONFIG_PARAM&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;TopicIdStrategy&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

        &lt;span class="c1"&gt;// Acknowledgement&lt;/span&gt;
        &lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;putIfAbsent&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ProducerConfig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ACKS_CONFIG&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;acks&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;KafkaProducer&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And you can finally send messages (carrying the global ID of the schema from the Service Registry) to Apache Kafka:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;    &lt;span class="nd"&gt;@Inject&lt;/span&gt;
    &lt;span class="nc"&gt;Producer&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;producer&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;MessageDTO&lt;/span&gt; &lt;span class="nf"&gt;publishRawMessage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nd"&gt;@NotEmpty&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;topicName&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                                         &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nd"&gt;@NotNull&lt;/span&gt; &lt;span class="nc"&gt;MessageDTO&lt;/span&gt; &lt;span class="n"&gt;messageDTO&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                                         &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="kt"&gt;boolean&lt;/span&gt; &lt;span class="n"&gt;async&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// ... &lt;/span&gt;
        &lt;span class="nc"&gt;RecordMetadata&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;producer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;send&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="c1"&gt;// ....&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The message will be serialized with the global ID associated with the schema used for this record. That global ID is what the Kafka consumer applications will later use to deserialize the message.&lt;/p&gt;
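&lt;p&gt;As a rough illustration of how that global ID travels inside the record bytes (assuming a wire layout of a one-byte magic marker followed by an 8-byte global ID before the Avro payload, which is how the Apicurio 1.x SerDe encodes it; check the SerDe source for the authoritative layout), the ID can be written and read back like this:&lt;/p&gt;

```java
import java.nio.ByteBuffer;

// Illustrative sketch (not the Apicurio code itself): prefix an Avro payload
// with a 1-byte magic marker and an 8-byte big-endian global ID, and read
// the global ID back, as a consumer-side deserializer would.
public class GlobalIdSketch {

    static final byte MAGIC = 0x0;

    public static byte[] prefix(long globalId, byte[] avroPayload) {
        ByteBuffer buffer = ByteBuffer.allocate(1 + Long.BYTES + avroPayload.length);
        buffer.put(MAGIC).putLong(globalId).put(avroPayload);
        return buffer.array();
    }

    public static long readGlobalId(byte[] record) {
        ByteBuffer buffer = ByteBuffer.wrap(record);
        buffer.get();            // skip the magic byte
        return buffer.getLong(); // the schema global ID
    }

    public static void main(String[] args) {
        byte[] record = prefix(42L, new byte[] {1, 2, 3});
        System.out.println(readGlobalId(record)); // prints 42
    }
}
```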

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; This approach uses the KafkaProducer API; however, Quarkus also includes the &lt;code&gt;Emitter&lt;/code&gt; class to send messages more easily. I developed my sample with the first approach to verify that the plain Kafka API is still valid with Quarkus.&lt;/p&gt;

&lt;h3&gt;
  
  
  📤 Consuming Messages from Quarkus
&lt;/h3&gt;

&lt;p&gt;Quarkus also provides a set of properties and beans to declare Kafka Consumers to consume messages (in our case Avro-schema instances) from the Apache Kafka cluster. The most important properties to set up are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;key.deserializer&lt;/code&gt;: Identifies the deserializer class to deserialize the key of the Kafka record.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;value.deserializer&lt;/code&gt;: Identifies the deserializer class to deserialize the value of the Kafka record.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here we have to set some specific values in these properties to enable deserialization using Avro schemas registered in the Service Registry. Basically, we need to identify the following concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The deserializer class for Avro schemas, provided by the Apicurio SerDe library: &lt;code&gt;io.apicurio.registry.utils.serde.AvroKafkaDeserializer&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The Apicurio Service Registry endpoint used to retrieve schemas: &lt;code&gt;apicurio.registry.url&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So a sample configuration for a consumer template could be similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Configure the Kafka source (we read from it)
mp.messaging.incoming.messages.connector=smallrye-kafka
mp.messaging.incoming.messages.group.id=${app.consumer.groupId}-mp-incoming-channel
mp.messaging.incoming.messages.topic=messages.${app.bg.mode}
mp.messaging.incoming.messages.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
mp.messaging.incoming.messages.value.deserializer=io.apicurio.registry.utils.serde.AvroKafkaDeserializer
mp.messaging.incoming.messages.properties.partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
mp.messaging.incoming.messages.apicurio.registry.url=${apicurio.registry.url}
mp.messaging.incoming.messages.apicurio.registry.avro-datum-provider=io.apicurio.registry.utils.serde.avro.ReflectAvroDatumProvider
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
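&lt;p&gt;The incoming-channel configuration above can also be built programmatically with a plain &lt;code&gt;Properties&lt;/code&gt; object, mirroring the producer bean shown earlier. In this sketch the class names are written as literal strings so it stays dependency-free, and the registry URL is a hypothetical placeholder; a real consumer would pass these properties to a &lt;code&gt;KafkaConsumer&lt;/code&gt;:&lt;/p&gt;

```java
import java.util.Properties;

// Dependency-free sketch of the equivalent consumer configuration.
// Class names are plain strings here; the registry URL is a placeholder.
public class ConsumerConfigSketch {

    public static Properties consumerProps(String bootstrapServers, String registryUrl) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        // Deserializers for keys and values
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "io.apicurio.registry.utils.serde.AvroKafkaDeserializer");
        // Service Registry endpoint used to fetch the schema by global ID
        props.put("apicurio.registry.url", registryUrl);
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerProps("my-kafka-kafka-bootstrap:9092",
                "http://service-registry:8080/api"); // hypothetical registry URL
        System.out.println(props.getProperty("value.deserializer"));
    }
}
```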



&lt;p&gt;You can declare a listener to consume messages (based on our Message schema) as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;    &lt;span class="nd"&gt;@Incoming&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"messages"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;CompletionStage&lt;/span&gt; &lt;span class="nf"&gt;handleMessages&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Message&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;IncomingKafkaRecord&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;jromanmartin&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;kafka&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;avro&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;incomingKafkaRecord&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
                &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;IncomingKafkaRecord&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;jromanmartin&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;kafka&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;avro&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;)&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;unwrap&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;IncomingKafkaRecord&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="no"&gt;LOGGER&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Received record from Topic-Partition '{}-{}' with Offset '{}' -&amp;gt; Key: '{}' - Value '{}'"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;incomingKafkaRecord&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getTopic&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
                &lt;span class="n"&gt;incomingKafkaRecord&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getPartition&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
                &lt;span class="n"&gt;incomingKafkaRecord&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getOffset&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
                &lt;span class="n"&gt;incomingKafkaRecord&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getKey&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
                &lt;span class="n"&gt;incomingKafkaRecord&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getPayload&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

        &lt;span class="c1"&gt;// Commit message&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ack&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The deserializer retrieves the schema from the Service Registry using the global ID written into the message being consumed. &lt;strong&gt;Done!&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📑 Summary
&lt;/h2&gt;

&lt;p&gt;Your Quarkus applications integrate easily with the Service Registry and Apache Kafka to build your event-driven architecture. Thanks to all these components you get the following main benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure consistent schema use between your client applications.&lt;/li&gt;
&lt;li&gt;Help to prevent data errors at runtime.&lt;/li&gt;
&lt;li&gt;Define a governance model in your data schemas (versions, rules, validations).&lt;/li&gt;
&lt;li&gt;Easy integration with client applications and components.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can explore further and adapt or build your event-driven architecture with these components using the following reference links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.apicur.io/registry/docs/apicurio-registry/1.3.3.Final/index.html" rel="noopener noreferrer"&gt;Getting started with Service Registry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.redhat.com/blog/2020/06/11/first-look-at-the-new-apicurio-registry-ui-and-operator/" rel="noopener noreferrer"&gt;First look at the new Apicurio Registry UI and Operator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://quarkus.io/blog/kafka-avro/" rel="noopener noreferrer"&gt;How to Use Kafka, Schema Registry and Avro with Quarkus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://quarkus.io/blog/avro-native/" rel="noopener noreferrer"&gt;Using Avro in a native executable&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🔬 Show me the code
&lt;/h2&gt;

&lt;p&gt;Everything seems great and cool, but you want to see how it really works … then this &lt;a href="https://github.com/rmarting/kafka-clients-quarkus-sample" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; is your reference.&lt;/p&gt;

&lt;p&gt;Enjoying API eventing 😃!!!&lt;/p&gt;

&lt;h2&gt;
  
  
  🎉 Bonus Track
&lt;/h2&gt;

&lt;p&gt;Quarkus 1.10.5.Final includes the capability to compile Avro schemas natively. This command will compile my sample application as a native executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ mvn package &lt;span class="nt"&gt;-Dquarkus&lt;/span&gt;.native.container-build&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;-Dquarkus&lt;/span&gt;.native.container-runtime&lt;span class="o"&gt;=&lt;/span&gt;podman &lt;span class="nt"&gt;-Pnative&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But that is another story for another post! 😎&lt;/p&gt;

</description>
      <category>howto</category>
      <category>quarkus</category>
      <category>apicurio</category>
      <category>strimzi</category>
    </item>
    <item>
      <title>Connecting Apicurio Registry with secured Strimzi clusters</title>
      <dc:creator>Jose Roman Martin Gil</dc:creator>
      <pubDate>Tue, 08 Dec 2020 09:15:00 +0000</pubDate>
      <link>https://dev.to/rmarting/connecting-apicurio-registry-with-secured-strimzi-clusters-24ne</link>
      <guid>https://dev.to/rmarting/connecting-apicurio-registry-with-secured-strimzi-clusters-24ne</guid>
<description>&lt;p&gt;&lt;a href="https://www.apicur.io/registry/"&gt;Apicurio Registry&lt;/a&gt; is a datastore for sharing standard event schemas and API designs across API and event-driven architectures. Apicurio Registry decouples the structure of your data from your client applications, and enables you to share and manage your data types and API descriptions at runtime. Decoupling your data structure from your client applications reduces costs by decreasing overall message size, and creates efficiencies by increasing consistent re-use of schemas and API designs across your organization.&lt;/p&gt;

&lt;p&gt;Some of the most common use cases where Apicurio Registry helps us are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Client applications can dynamically push or pull the latest schema updates to or from Apicurio Registry at runtime without needing to redeploy.&lt;/li&gt;
&lt;li&gt;Developer teams can query the registry for existing schemas required for services already deployed in production.&lt;/li&gt;
&lt;li&gt;Developer teams can register new schemas required for new services in development or rolling to production.&lt;/li&gt;
&lt;li&gt;Store schemas used to serialize and deserialize messages, which can then be referenced from your client applications to ensure that the messages that they send and receive are compatible with those schemas.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.apicur.io/"&gt;Apicurio&lt;/a&gt; provides a open sourced Schema Registry, ready to be involved in this scenario.&lt;/p&gt;

&lt;p&gt;Apicurio Registry includes a set of pluggable storage options to store the APIs, rules and validations. The &lt;a href="https://kafka.apache.org/"&gt;Kafka-based&lt;/a&gt; storage option, provided by &lt;a href="https://strimzi.io/"&gt;Strimzi&lt;/a&gt;, is suitable for production environments when persistent storage is configured for Kafka clusters running on &lt;a href="https://www.openshift.com/"&gt;OpenShift&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;In production environments security is not optional; it must be provided by Strimzi for the different components connecting to it. Security is defined by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Authentication&lt;/strong&gt; to ensure a secure client connection to the Kafka cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authorization&lt;/strong&gt; to define which users have access to which resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This blog post describes the high level topics to keep in mind to connect Apicurio Registry to a secure Apache Kafka cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  🤖 OpenShift Operators
&lt;/h2&gt;

&lt;p&gt;Strimzi and Apicurio Registry provide a set of OpenShift Operators, available through &lt;a href="https://operatorhub.io/"&gt;Operator Hub&lt;/a&gt;, to manage the life cycle of each component. Operators are a method of packaging, deploying, and managing OpenShift applications.&lt;/p&gt;

&lt;p&gt;Strimzi Operators provide a set of Custom Resource Definitions (CRDs) to describe the different components of the Kafka deployment (Zookeeper, Brokers, Users, Connect, …). These objects provide the API to manage our Kafka cluster.&lt;/p&gt;

&lt;p&gt;Strimzi Operators manage the authentication, authorization and users life cycle with the following custom resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://strimzi.io/docs/operators/latest/using.html#type-Kafka-reference"&gt;Kafka Schema&lt;/a&gt;: Declares the Kafka topology and features to use. This object is managed by the &lt;a href="https://strimzi.io/docs/operators/latest/using.html#using-the-cluster-operator-str"&gt;Cluster Operator&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://strimzi.io/docs/operators/latest/using.html#type-KafkaUser-reference"&gt;KafkaUser Schema&lt;/a&gt;: Declares a user for an instance of Strimzi, including the authentication, authorization and quotas definitions. This object is managed by the &lt;a href="https://strimzi.io/docs/operators/latest/using.html#assembly-using-the-user-operator-str"&gt;User Operator&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.apicur.io/registry/docs/apicurio-registry/1.3.3.Final/getting-started/assembly-installing-registry-openshift.html#installing-registry-operatorhub"&gt;Apicurio Registry Operator&lt;/a&gt;provides a set of CRD to describe the different components of the Apicurio Registry deployment (storage, security, replicas, …) These objects will provide the API to manage our Apicurio Registry instance.&lt;/p&gt;

&lt;p&gt;Apicurio Registry Operator manages the Apicurio Registry life cycle with the following custom resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/Apicurio/apicurio-registry-operator/blob/master/deploy/crds/apicur.io_apicurioregistries_crd.yaml"&gt;ApicurioRegistry Schema&lt;/a&gt;: Declares the Apicurio Registry topology and main features to use. This object is managed by the Apicurio Operator.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  👮 Authentication
&lt;/h2&gt;

&lt;p&gt;Strimzi supports the following authentication mechanisms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SASL SCRAM-SHA-512&lt;/li&gt;
&lt;li&gt;TLS client authentication&lt;/li&gt;
&lt;li&gt;OAuth 2.0 token based authentication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These mechanisms are declared in the &lt;code&gt;authentication&lt;/code&gt; block of the &lt;code&gt;Kafka&lt;/code&gt; definition for each listener. Each listener implements the authentication mechanism defined for it, so client applications must authenticate using that mechanism.&lt;/p&gt;

&lt;p&gt;How to activate each mechanism in the Strimzi cluster is described below.&lt;/p&gt;

&lt;p&gt;On the other hand, we need to configure in Apicurio Registry the authentication mechanism activated in the Strimzi cluster. Apicurio Registry only supports the following authentication mechanisms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SCRAM-SHA-512&lt;/li&gt;
&lt;li&gt;TLS&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Strimzi Authentication
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Using SCRAM-SHA-512&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following &lt;code&gt;Kafka&lt;/code&gt; definition declares a Kafka cluster secured with &lt;code&gt;SCRAM-SHA-512&lt;/code&gt; authentication for the secured listener (TLS):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-kafka
spec:
  kafka:
    listeners:
      tls:
        authentication:
          type: scram-sha-512

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Applying this configuration will create a set of ㊙️ secrets where the TLS certificates are stored. The secret we need in order to allow secured connections is named &lt;code&gt;my-kafka-cluster-ca-cert&lt;/code&gt;. We will need this value later.&lt;/p&gt;

&lt;p&gt;The following &lt;code&gt;KafkaUser&lt;/code&gt; definition declares a Kafka user with &lt;code&gt;SCRAM-SHA-512&lt;/code&gt; authentication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: service-registry-scram
  labels:
    strimzi.io/cluster: my-kafka
spec:
  authentication:
    type: scram-sha-512

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Applying this configuration creates a new ㊙️ secret, with the same name as the user, where the credentials are stored. This secret contains the generated password to authenticate to the Kafka cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ oc get secrets
NAME                     TYPE     DATA   AGE
service-registry-scram   Opaque   1      4s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Using TLS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following &lt;code&gt;Kafka&lt;/code&gt; definition declares a Kafka cluster secured with TLS authentication for the secured listener (TLS):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-kafka
spec:
  kafka:
    listeners:
      tls:
        authentication:
          type: tls

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Applying this configuration will create a set of ㊙️ secrets where the TLS certificates are stored. The secret we need in order to allow secured connections is named &lt;code&gt;my-kafka-cluster-ca-cert&lt;/code&gt;. We will need this value later.&lt;/p&gt;

&lt;p&gt;The following &lt;code&gt;KafkaUser&lt;/code&gt; definition declares a Kafka user with TLS authentication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: service-registry-tls
  labels:
    strimzi.io/cluster: my-kafka
spec:
  authentication:
    type: tls

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Applying this configuration will create a ㊙️ secret, with the same name as the user, where the user credentials are stored. This secret contains the valid client certificates to authenticate to the Kafka cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ oc get secrets
NAME                   TYPE     DATA   AGE
service-registry-tls   Opaque   1      4s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Apicurio Registry Authentication
&lt;/h3&gt;

&lt;p&gt;Once we have identified the authentication mechanism activated in the Strimzi cluster, we need to deploy Apicurio Registry with a matching &lt;code&gt;ApicurioRegistry&lt;/code&gt; definition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: Apicurio Registry can only connect, at the time of writing this blog, to the Strimzi &lt;em&gt;tls&lt;/em&gt; listener (normally on port 9093), whatever authentication mechanism is activated on that listener. This means that the &lt;code&gt;bootstrapServers&lt;/code&gt; property in &lt;code&gt;ApicurioRegistry&lt;/code&gt; must point to that listener port:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apicur.io/v1alpha1
kind: ApicurioRegistry
metadata:
  name: service-registry
spec:
  configuration:
    persistence: "streams"
    streams:
      bootstrapServers: "my-kafka-kafka-bootstrap:9093"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Using SCRAM-SHA-512&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following &lt;code&gt;ApicurioRegistry&lt;/code&gt; definition declares a secured connection with a user with &lt;code&gt;SCRAM-SHA-512&lt;/code&gt; authentication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apicur.io/v1alpha1
kind: ApicurioRegistry
metadata:
  name: service-registry
spec:
  configuration:
    persistence: "streams"
    streams:
      bootstrapServers: "my-kafka-kafka-bootstrap:9093"
      security:
        scram:
          user: service-registry-scram
          passwordSecretName: service-registry-scram
          truststoreSecretName: my-kafka-cluster-ca-cert

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The values we need to provide in this object are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;user&lt;/strong&gt; : Name of the user to connect with.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;passwordSecretName&lt;/strong&gt; : Name of the secret where the password is stored.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;truststoreSecretName&lt;/strong&gt; : Name of the secret with the CA certificates of the deployed Kafka cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Using TLS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following &lt;code&gt;ApicurioRegistry&lt;/code&gt; definition declares a secured connection with a user with TLS authentication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apicur.io/v1alpha1
kind: ApicurioRegistry
metadata:
  name: service-registry
spec:
  configuration:
    persistence: "streams"
    streams:
      bootstrapServers: "my-kafka-kafka-bootstrap:9093"
      security:
        tls:
          keystoreSecretName: service-registry-tls
          truststoreSecretName: my-kafka-cluster-ca-cert

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The values we need to provide in this object are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;keystoreSecretName&lt;/strong&gt; : Name of the user secret with the client certificates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;truststoreSecretName&lt;/strong&gt; : Name of the secret with the CA certificates of the deployed Kafka cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🛂 Authorization
&lt;/h2&gt;

&lt;p&gt;Strimzi supports authorization using &lt;code&gt;SimpleACLAuthorizer&lt;/code&gt; globally for all listeners used for client connections. This mechanism uses Access Control Lists (ACLs) to define which users have access to which resources.&lt;/p&gt;

&lt;p&gt;Deny is the default ACL rule when authorization is enabled in the Kafka cluster. This requires declaring the specific rules for each user that needs to operate with the Kafka cluster.&lt;/p&gt;

&lt;p&gt;The following &lt;code&gt;Kafka&lt;/code&gt; definition activates authorization in the Kafka cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-kafka
spec:
  kafka:
    authorization:
      type: simple

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ACLs are declared for each user in the &lt;code&gt;acls&lt;/code&gt; section of the &lt;code&gt;KafkaUser&lt;/code&gt; definition. That section contains a list of rules, each one declaring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resource Type&lt;/strong&gt; : Identifies the type of object managed in Kafka, such as topics, consumer groups, the cluster, transactional IDs and delegation tokens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Name of the resource&lt;/strong&gt; : Identifies the resource the rule applies to. It can be defined as a literal, to identify a single resource, or as a prefix pattern, to identify a set of resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operation&lt;/strong&gt; : The kind of operation that is allowed. A full list of the operations available for each resource type is available &lt;a href="https://strimzi.io/docs/operators/latest/using.html#acl_rules"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
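
&lt;p&gt;For reference, these rules live under the &lt;code&gt;authorization&lt;/code&gt; section of the &lt;code&gt;KafkaUser&lt;/code&gt; object. A minimal sketch of the surrounding structure, reusing the TLS user declared above and a single rule as an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: service-registry-tls
  labels:
    strimzi.io/cluster: my-kafka
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # Each rule combines a resource (type + name) and an operation
      - resource:
          type: group
          name: service-registry
        operation: Read

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;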

&lt;p&gt;To allow the Apicurio Registry user to work successfully with our secured Strimzi cluster, we must declare the following list of rules specifying what the user is allowed to do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read its own consumer group.&lt;/li&gt;
&lt;li&gt;Create, read, write and describe on global ids topic (&lt;em&gt;global-id-topic&lt;/em&gt;).&lt;/li&gt;
&lt;li&gt;Create, read, write and describe on storage topic (&lt;em&gt;storage-topic&lt;/em&gt;).&lt;/li&gt;
&lt;li&gt;Create, read, write and describe on its own local changelog topics.&lt;/li&gt;
&lt;li&gt;Describe and write transactional ids on its own local group.&lt;/li&gt;
&lt;li&gt;Read on consumer offset topic (&lt;em&gt;__consumer_offsets&lt;/em&gt;).&lt;/li&gt;
&lt;li&gt;Read on transaction state topics (&lt;em&gt;__transaction_state&lt;/em&gt;).&lt;/li&gt;
&lt;li&gt;Idempotently write on cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ACLs will be similar to the following definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    acls:
      # Group Id to consume information for the different topics used by the Service Registry.
      # Name equals to metadata.name property in ApicurioRegistry object
      - resource:
          type: group
          name: service-registry
        operation: Read
      # Rules for the Global global-id-topic
      - resource:
          type: topic
          name: global-id-topic
        operation: Read
      - resource:
          type: topic
          name: global-id-topic
        operation: Describe
      - resource:
          type: topic
          name: global-id-topic
        operation: Write
      - resource:
          type: topic
          name: global-id-topic
        operation: Create
      # Rules for the Global storage-topic
      - resource:
          type: topic
          name: storage-topic
        operation: Read
      - resource:
          type: topic
          name: storage-topic
        operation: Describe
      - resource:
          type: topic
          name: storage-topic
        operation: Write
      - resource:
          type: topic
          name: storage-topic
        operation: Create
      # Rules for the local topics created by our Service Registry instance
      # Prefix value equals to metadata.name property in ApicurioRegistry object
      - resource:
          type: topic
          name: service-registry-
          patternType: prefix
        operation: Read
      - resource:
          type: topic
          name: service-registry-
          patternType: prefix
        operation: Describe
      - resource:
          type: topic
          name: service-registry-
          patternType: prefix
        operation: Write
      - resource:
          type: topic
          name: service-registry-
          patternType: prefix
        operation: Create
      # Rules for the local transactionalsIds created by our Service Registry instance
      # Prefix equals to metadata.name property in ApicurioRegistry object
      - resource:
          type: transactionalId
          name: service-registry-
          patternType: prefix
        operation: Describe
      - resource:
          type: transactionalId
          name: service-registry-
          patternType: prefix
        operation: Write
      # Rules for internal Apache Kafka topics
      - resource:
          type: topic
          name: __consumer_offsets
        operation: Read
      - resource:
          type: topic
          name: __transaction_state
        operation: Read
      # Rules for Cluster objects
      - resource:
          type: cluster
        operation: IdempotentWrite

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Activating authorization in Strimzi has no effect on the &lt;code&gt;ApicurioRegistry&lt;/code&gt; definition; authorization only depends on the correct ACL definitions in the &lt;code&gt;KafkaUser&lt;/code&gt; objects.&lt;/p&gt;

&lt;h2&gt;
  
  
  📑 Summary
&lt;/h2&gt;

&lt;p&gt;Apicurio Registry includes the security capabilities to connect to secured Strimzi clusters, so your production environment can easily comply with your security requirements. This blog post demonstrates that these technical components of your event-driven architecture take security requirements into account and allow you to apply them successfully.&lt;/p&gt;

&lt;p&gt;For a deeper understanding and analysis, please refer to the following references:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://strimzi.io/docs/operators/latest/using.html#assembly-using-the-user-operator-str"&gt;Using the User Operator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://strimzi.io/docs/operators/latest/using.html#assembly-securing-kafka-brokers-str"&gt;Security options for Kafka&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.apicur.io/registry/docs/apicurio-registry/1.3.3.Final/getting-started/assembly-intro-to-the-registry.html"&gt;Introduction to Apicurio Registry&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enjoying API eventing 😃!!!&lt;/p&gt;

</description>
      <category>howto</category>
      <category>apicurio</category>
      <category>strimzi</category>
      <category>operators</category>
    </item>
    <item>
      <title>containers - The GNOME Shell Extension to manage podman containers</title>
      <dc:creator>Jose Roman Martin Gil</dc:creator>
      <pubDate>Sun, 25 Oct 2020 08:00:00 +0000</pubDate>
      <link>https://dev.to/rmarting/containers-the-gnome-shell-extension-to-manage-podman-containers-3dp6</link>
      <guid>https://dev.to/rmarting/containers-the-gnome-shell-extension-to-manage-podman-containers-3dp6</guid>
      <description>&lt;p&gt;In my daily I usually start linux containers with &lt;a href="https://podman.io/"&gt;podman&lt;/a&gt; to have easily and quickly tools; such as databases, brokers or systems. This method allow me to avoid to install locally and administrate them locally.&lt;/p&gt;

&lt;p&gt;To manage these Linux containers I usually keep local scripts with the arguments, parameters and setup for my use cases (I forget commands very easily … yes, I know! the &lt;code&gt;history&lt;/code&gt; command could help me but I am lazy 😁). These scripts include the typical options to start, stop, delete and so on. For many of my colleagues these kinds of commands are very common, however for me it is a little tedious and boring 😒, so a graphical tool would be great and better for me 😋.&lt;/p&gt;

&lt;p&gt;So this is when I found a great tool to integrate into my Fedora laptop …&lt;/p&gt;

&lt;h2&gt;
  
  
  containers - the GNOME shell extension to the rescue 🚑
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://extensions.gnome.org/extension/1500/containers/"&gt;Containers&lt;/a&gt; is a gnome-shell extension to manage linux containers, run by &lt;a href="https://podman.io/"&gt;podman&lt;/a&gt;. A simple menu allows us to execute the most typical actions 📋, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;start&lt;/li&gt;
&lt;li&gt;stop&lt;/li&gt;
&lt;li&gt;remove&lt;/li&gt;
&lt;li&gt;pause&lt;/li&gt;
&lt;li&gt;restart&lt;/li&gt;
&lt;li&gt;top resources: opens the &lt;code&gt;top&lt;/code&gt; command output 📈 (user, cpu, elapsed, time, command) in a new terminal.&lt;/li&gt;
&lt;li&gt;shell: opens a shell in a new terminal 💻.&lt;/li&gt;
&lt;li&gt;stats: opens statistics 📈 (cpu, memory, networking, io) in a new terminal, updated live.&lt;/li&gt;
&lt;li&gt;logs: follows the logs in a new terminal 📄.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The menu also shows most of the inspect info ℹ️ of the container, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;status: running, stopped, exited, …&lt;/li&gt;
&lt;li&gt;id&lt;/li&gt;
&lt;li&gt;image&lt;/li&gt;
&lt;li&gt;command&lt;/li&gt;
&lt;li&gt;Created time&lt;/li&gt;
&lt;li&gt;Started time&lt;/li&gt;
&lt;li&gt;IP address&lt;/li&gt;
&lt;li&gt;ports&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A sample screenshot 📷 of this amazing tool:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/imagescontainers-gnome-shell-extension/containers-pod-view.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h_cljE9---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://blog.jromanmartin.io/images/containers-gnome-shell-extension/containers-pod-view.png" alt="" title="container view"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing containers
&lt;/h2&gt;

&lt;p&gt;The extension manages the containers created in your local environment, basically the output of the &lt;code&gt;podman ps -a&lt;/code&gt; command. So the first time you must start your containers with the right arguments and setup for your use case.&lt;/p&gt;

&lt;p&gt;For example, to start a local MongoDB instance the command could be similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ podman run -d -p 27017:27017 --name mongodb mongo
649cc435939a66537e11686c8d400c83250de5314b6735a8ade2a00a0a49b8b2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or starting a local MariaDB instance could be similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ podman run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mypass --name mariadb mariadb:latest
744cfc9b4013f4f0db111aa96dd1ae4cf53bbd85f920c19470bef857b0836846
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;👉 Note:&lt;/strong&gt; The &lt;code&gt;-d&lt;/code&gt; argument starts the container detached from the terminal (similar to executing it in the background).&lt;/p&gt;

&lt;p&gt;These commands will start two new containers, as we can check with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ podman ps
CONTAINER ID  IMAGE                             COMMAND  CREATED         STATUS             PORTS                      NAMES
744cfc9b4013  docker.io/library/mariadb:latest  mysqld   28 seconds ago  Up 28 seconds ago  0.0.0.0:3306-&amp;gt;3306/tcp     mariadb
649cc435939a  docker.io/library/mongo:latest    mongod   40 minutes ago  Up 40 minutes ago  0.0.0.0:27017-&amp;gt;27017/tcp   mongodb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These containers will be shown in the shell menu as:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/imagescontainers-gnome-shell-extension/containers-list.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ik7Guvd2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://blog.jromanmartin.io/images/containers-gnome-shell-extension/containers-list.png" alt="" title="containers list"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing pods
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/containers/podman-compose"&gt;podman-compose&lt;/a&gt; allows to start pods (a group of containers as an unit), very useful when you have to compose a set of containers in one place.&lt;/p&gt;

&lt;p&gt;For example, the following &lt;code&gt;docker-compose-kafka.yml&lt;/code&gt; file describes a pod definition to start an Apache Kafka topology instance (zookeeper + broker):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3'

services:
  zookeeper:
    image: strimzi/kafka:0.20.0-kafka-2.5.0
    command: [
      "sh", "-c",
      "bin/zookeeper-server-start.sh config/zookeeper.properties"
    ]
    ports:
      - "2181:2181"
    environment:
      LOG_DIR: /tmp/logs

  kafka:
    image: strimzi/kafka:0.20.0-kafka-2.5.0
    command: [
      "sh", "-c",
      "bin/kafka-server-start.sh config/server.properties --override listeners=PLAINTEXT://0.0.0.0:9092 --override advertised.listeners=PLAINTEXT://localhost:9092 --override zookeeper.connect=zookeeper:2181"
    ]
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      LOG_DIR: /tmp/logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To start up this pod, we could use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ podman-compose -f docker-compose-kafka.yml -t 1podfw -p kafka up -d
podman pod create --name=kafka --share net -p 2181:2181 -p 9092:9092
e21fb80e3e5ce6f156253277b14fbb66b013afdcae9fec468c42c6afde3a6668
0
podman run --name=kafka_zookeeper_1 -d --pod=kafka --label io.podman.compose.config-hash=123 --label io.podman.compose.project=kafka --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=zookeeper -e LOG_DIR=/tmp/logs --add-host zookeeper:127.0.0.1 --add-host kafka_zookeeper_1:127.0.0.1 --add-host kafka:127.0.0.1 --add-host kafka_kafka_1:127.0.0.1 strimzi/kafka:0.20.0-kafka-2.5.0 sh -c bin/zookeeper-server-start.sh config/zookeeper.properties
c6eba2decd63be4f28366546c96e00842d84442b48fdcc97a691a058cccd46dd
0
podman run --name=kafka_kafka_1 -d --pod=kafka --label io.podman.compose.config-hash=123 --label io.podman.compose.project=kafka --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=kafka -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e LOG_DIR=/tmp/logs --add-host zookeeper:127.0.0.1 --add-host kafka_zookeeper_1:127.0.0.1 --add-host kafka:127.0.0.1 --add-host kafka_kafka_1:127.0.0.1 strimzi/kafka:0.20.0-kafka-2.5.0 sh -c bin/kafka-server-start.sh config/server.properties --override listeners=PLAINTEXT://0.0.0.0:9092 --override advertised.listeners=PLAINTEXT://localhost:9092 --override zookeeper.connect=zookeeper:2181
87a8e215f0ccf48a5e976444b5d4e7650879a4ab9e4a4c0ce5a88138e7eb99fe
0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you have an Apache Kafka pod instance up and running 💪.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://blog.jromanmartin.io/imagescontainers-gnome-shell-extension/containers-pod-kafka-list.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bX74UkQI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://blog.jromanmartin.io/images/containers-gnome-shell-extension/containers-pod-kafka-list.png" alt="" title="Pod Apache Kafka list"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This amazing GNOME shell extension will help you to easily manage your local Linux containers, and since I started using it … I feel more productive 😆.&lt;/p&gt;

&lt;p&gt;Enjoy it!&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://extensions.gnome.org/extension/1500/containers/"&gt;Containers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/rgolangh/gnome-shell-extension-containers"&gt;gnome-shell-extension-containers GitHub repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://podman.io"&gt;podman&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>howto</category>
      <category>fedora32</category>
      <category>podman</category>
      <category>tools</category>
    </item>
    <item>
      <title>Blue / Green deployments in OpenShift with Eclipse JKube</title>
      <dc:creator>Jose Roman Martin Gil</dc:creator>
      <pubDate>Fri, 23 Oct 2020 08:00:00 +0000</pubDate>
      <link>https://dev.to/rmarting/blue-green-deployments-in-openshift-with-eclipse-jkube-1i82</link>
      <guid>https://dev.to/rmarting/blue-green-deployments-in-openshift-with-eclipse-jkube-1i82</guid>
      <description>&lt;p&gt;&lt;a href="https://martinfowler.com/bliki/BlueGreenDeployment.html"&gt;Blue Green Deployment&lt;/a&gt; is an application release model very well-known that gradually transfers user traffic from a previous version of an application or microservice to a nearly identical new release—both of which are running in production.&lt;/p&gt;

&lt;p&gt;The old version of the application or microservice is identified with one color (e.g. Blue) while the new version is identified with the other color (e.g. Green). This model allows moving the production traffic from one color (old version) to the other (new version), while the old version can stand by in case of rollback, or be pulled from production and updated to become the template upon which the next update is made 💫.&lt;/p&gt;

&lt;p&gt;This model requires a Continuous Deployment model or pipeline 📨 to orchestrate the promotion between colors and manage the downtime or rolling process. CD pipelines could be implemented using the capabilities provided by Eclipse JKube 🎇💡.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.eclipse.org/jkube/"&gt;Eclipse JKube&lt;/a&gt; is a collection of plugins that are used for building container images using Docker, JIB or S2I build strategies. In addition, Eclipse JKube generates and deploys Kubernetes/OpenShift manifests (YAML configuration files) at compile time too.&lt;/p&gt;

&lt;p&gt;This article describes an approach to implement a Blue/Green deployment with Eclipse JKube. This approach is based on &lt;a href="https://maven.apache.org/guides/introduction/introduction-to-profiles.html"&gt;Maven Profiles&lt;/a&gt; and resource filtering. Of course, this is only an integration sample and other approaches are possible. For example, I would like to dig into &lt;a href="https://www.eclipse.org/jkube/docs/kubernetes-maven-plugin#profiles"&gt;Eclipse JKube Profiles&lt;/a&gt; as another approach 👍. However, feel free to add your ideas, comments or suggestions 💘.&lt;/p&gt;

&lt;h2&gt;
  
  
  Identifying Blue / Green Version
&lt;/h2&gt;

&lt;p&gt;Every time we want to deploy a new version we need to identify the right color to deploy it with. This step should be done in the CD pipeline before deploying our next application version.&lt;/p&gt;

&lt;p&gt;For example, we could use a property called &lt;code&gt;app.bg.version&lt;/code&gt; in the &lt;code&gt;pom.xml&lt;/code&gt; file, declaring a Maven Profile for each color. This property will be used to filter some resources at deployment time.&lt;/p&gt;

&lt;p&gt;Sample Maven profile for Blue Deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;profile&amp;gt;
    &amp;lt;id&amp;gt;blue&amp;lt;/id&amp;gt;
    &amp;lt;properties&amp;gt;
        &amp;lt;app.bg.version&amp;gt;blue&amp;lt;/app.bg.version&amp;gt;
    &amp;lt;/properties&amp;gt;
&amp;lt;/profile&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sample Maven profile for Green Deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;profile&amp;gt;
    &amp;lt;id&amp;gt;green&amp;lt;/id&amp;gt;
    &amp;lt;properties&amp;gt;
        &amp;lt;app.bg.version&amp;gt;green&amp;lt;/app.bg.version&amp;gt;
    &amp;lt;/properties&amp;gt;
&amp;lt;/profile&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This value could also be used to define the name and version of the image built, for example using the &lt;code&gt;jkube.generator.name&lt;/code&gt; property in the &lt;code&gt;pom.xml&lt;/code&gt; file as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;jkube.generator.name&amp;gt;${project.artifactId}-${app.bg.version}:${project.version}&amp;lt;/jkube.generator.name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Defining Blue/Green Objects
&lt;/h2&gt;

&lt;p&gt;Each Blue and Green version should use its own resources at deployment time to avoid replacing or redeploying the other deployment. To do that we will use &lt;a href="https://www.eclipse.org/jkube/docs/kubernetes-maven-plugin#_resource_fragments"&gt;Eclipse JKube resource fragments&lt;/a&gt; to declare specific definitions of some Kubernetes or OpenShift objects.&lt;/p&gt;

&lt;p&gt;In our case we will declare a custom deployment object by creating a &lt;code&gt;deployment.yml&lt;/code&gt; file in the &lt;code&gt;jkube&lt;/code&gt; folder. This custom deployment will override some properties to use the active version at deployment time.&lt;/p&gt;

&lt;p&gt;A sample of this file could be similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;metadata:
  name: ${project.artifactId}-${app.bg.version}
  labels:
    group: ${project.groupId}
    project: ${project.artifactId}
    version: ${project.version}-${app.bg.version}
    provider: jkube
spec:
  template:
    spec:
      containers:
        - env:
          - name: APP_BG_VERSION
            value: ${app.bg.version}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To access the deployed version a service is needed to map it. This service should be aligned with the right version. A &lt;code&gt;service.yml&lt;/code&gt; file in the &lt;code&gt;jkube&lt;/code&gt; folder could be similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;metadata:
  name: ${project.artifactId}-${app.bg.version}
  labels:
    group: ${project.groupId}
    project: ${project.artifactId}
    version: ${project.version}-${app.bg.version}
    provider: jkube
    expose: "true"
spec:
  selector:
    deploymentconfig: ${project.artifactId}-${app.bg.version}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying the Blue Version
&lt;/h2&gt;

&lt;p&gt;Now, we could deploy the Blue version easily using the Maven Profile. For example, using the &lt;a href="https://www.eclipse.org/jkube/docs/openshift-maven-plugin"&gt;OpenShift Maven Plugin&lt;/a&gt; the command will be similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ mvn clean package oc:resource oc:build oc:apply -Pblue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying the Green Version
&lt;/h2&gt;

&lt;p&gt;On the other hand, we could deploy the Green version easily using the Maven Profile. For example, using the &lt;a href="https://www.eclipse.org/jkube/docs/openshift-maven-plugin"&gt;OpenShift Maven Plugin&lt;/a&gt; the command will be similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ mvn clean package oc:resource oc:build oc:apply -Pgreen
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Blue/Green Resources Deployed
&lt;/h2&gt;

&lt;p&gt;This process generates the following objects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Image Streams 📷 for each version:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ oc get is
NAME                                 IMAGE REPOSITORY                                                                                                     TAGS                            UPDATED
kafka-clients-quarkus-sample-blue    default-route-openshift-image-registry.apps-crc.testing/amq-streams-demo/kafka-clients-quarkus-sample-blue          1.0.0-SNAPSHOT,1.2.0-SNAPSHOT   38 hours ago
kafka-clients-quarkus-sample-green   default-route-openshift-image-registry.apps-crc.testing/amq-streams-demo/kafka-clients-quarkus-sample-green         1.1.0-SNAPSHOT,1.3.0-SNAPSHOT   38 hours ago
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Build Config 👷 for each version:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ oc get bc
NAME                                     TYPE     FROM     LATEST
kafka-clients-quarkus-sample-blue-s2i    Source   Binary   4
kafka-clients-quarkus-sample-green-s2i   Source   Binary   2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Deployments ✨ for each version:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ oc get dc
NAME                                 REVISION   DESIRED   CURRENT   TRIGGERED BY
kafka-clients-quarkus-sample-blue    7          1         1         config,image(kafka-clients-quarkus-sample-blue:1.0.0-SNAPSHOT)
kafka-clients-quarkus-sample-green   5          1         1         config,image(kafka-clients-quarkus-sample-green:1.1.0-SNAPSHOT)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Services 👀 for each version:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ oc get svc
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kafka-clients-quarkus-sample-blue    ClusterIP   172.25.180.88   &amp;lt;none&amp;gt;        8181/TCP   38h
kafka-clients-quarkus-sample-green   ClusterIP   172.25.87.170   &amp;lt;none&amp;gt;        8181/TCP   38h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thanks to all these objects we can manage both versions at the same time and, in combination with a CD pipeline and other resources, manage the traffic to activate the right version (Blue or Green) for the final users.&lt;/p&gt;

&lt;p&gt;For example, we could have an OpenShift route to balance the application between both versions. That route could be similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: kafka-clients-quarkus-sample-lb
  labels:
    app: kafka-clients-quarkus-sample
spec:
  to:
    kind: Service
    name: kafka-clients-quarkus-sample-blue
    weight: 100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
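
&lt;p&gt;OpenShift routes also support weighted backends, so instead of an all-or-nothing switch we could send a percentage of the traffic to each color. A hedged sketch using the &lt;code&gt;oc set route-backends&lt;/code&gt; command with a 50/50 split:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ oc set route-backends kafka-clients-quarkus-sample-lb kafka-clients-quarkus-sample-blue=50 kafka-clients-quarkus-sample-green=50

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;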



&lt;h2&gt;
  
  
  Rolling from Blue to Green
&lt;/h2&gt;

&lt;p&gt;To roll from the &lt;strong&gt;Blue&lt;/strong&gt; to the &lt;strong&gt;Green&lt;/strong&gt; version we only need to patch the load balancer route (in the case of OpenShift) as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ oc patch route kafka-clients-quarkus-sample-lb --type=merge -p '{"spec": {"to": {"name": "kafka-clients-quarkus-sample-green"}}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Rolling from Green to Blue
&lt;/h2&gt;

&lt;p&gt;To roll from the &lt;strong&gt;Green&lt;/strong&gt; to the &lt;strong&gt;Blue&lt;/strong&gt; version we only need to patch the load balancer route (in the case of OpenShift) as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ oc patch route kafka-clients-quarkus-sample-lb --type=merge -p '{"spec": {"to": {"name": "kafka-clients-quarkus-sample-blue"}}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
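
&lt;p&gt;After each roll we can verify which service is currently receiving the traffic by reading the route back:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ oc get route kafka-clients-quarkus-sample-lb -o jsonpath='{.spec.to.name}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;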



&lt;h2&gt;
  
  
  Show me the code
&lt;/h2&gt;

&lt;p&gt;If you want to test and verify this approach, I developed a sample case in one of my favorite &lt;a href="https://github.com/rmarting/kafka-clients-quarkus-sample/tree/feature/b-g-deployment-strategy"&gt;GitHub repos&lt;/a&gt;. This repo includes amazing frameworks such as &lt;a href="https://quarkus.io/"&gt;Quarkus&lt;/a&gt;, &lt;a href="https://avro.apache.org/"&gt;Avro schemas&lt;/a&gt; and &lt;a href="https://kafka.apache.org/"&gt;Apache Kafka&lt;/a&gt; in a small &lt;a href="https://en.wikipedia.org/wiki/Event-driven_architecture"&gt;Event-Driven Architecture&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Enjoy it! 💪&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.eclipse.org/jkube/"&gt;Eclipse JKube&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/eclipse/jkube"&gt;Eclipse JKube GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.redhat.com/en/topics/devops/what-is-blue-green-deployment"&gt;What is Blue Green deployment?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Blue-green_deployment"&gt;Wikipedia - Blue-Green deployment&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>howto</category>
      <category>openshift</category>
      <category>kubernetes</category>
      <category>jkube</category>
    </item>
    <item>
      <title>New Minishift Cheat Sheet!</title>
      <dc:creator>Jose Roman Martin Gil</dc:creator>
      <pubDate>Mon, 05 Oct 2020 07:00:00 +0000</pubDate>
      <link>https://dev.to/rmarting/new-minishift-cheat-sheet-5555</link>
      <guid>https://dev.to/rmarting/new-minishift-cheat-sheet-5555</guid>
      <description>&lt;p&gt;Is OpenShift 3 “legacy” 👵 👴? Maybe, however in some organizations is already the current &lt;a href="https://en.wikipedia.org/wiki/Platform_as_a_service"&gt;Platform as a Service&lt;/a&gt; deployed and running in production.&lt;/p&gt;

&lt;p&gt;From time to time I have to work on that platform, reviewing deployments, architectures, or designing migration approaches to OpenShift 4 using operators. Whatever the cause, I need to run an OpenShift 3 cluster locally, and here &lt;a href="https://docs.okd.io/3.11/minishift/getting-started/index.html"&gt;minishift&lt;/a&gt; helps me a lot … so:&lt;/p&gt;

&lt;p&gt;🎉 I am pleased to announce that a new Cheat Sheet is available 🎉&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/cheat-sheets/minishift"&gt;Minishift Cheat Sheet&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope this new cheat sheet helps you when you need to install, deploy and operate your own local OpenShift 3 cluster.&lt;/p&gt;

&lt;p&gt;My full list of Cheat Sheets is available for your records &lt;a href="https://dev.to/cheat-sheets"&gt;here&lt;/a&gt;. As usual, comments, ideas and PRs are welcomed!&lt;/p&gt;

&lt;p&gt;Happy reading !!! 📚&lt;/p&gt;

</description>
      <category>howto</category>
      <category>cheatsheet</category>
      <category>minishift</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
