<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Thiago (Zozô) Ozores</title>
    <description>The latest articles on DEV Community by Thiago (Zozô) Ozores (@zozores).</description>
    <link>https://dev.to/zozores</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F357786%2Fd3e56a6e-6dba-4ee0-9d87-f1c729a310a2.jpg</url>
      <title>DEV Community: Thiago (Zozô) Ozores</title>
      <link>https://dev.to/zozores</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/zozores"/>
    <language>en</language>
    <item>
      <title>Building multiarch images with Buildah</title>
      <dc:creator>Thiago (Zozô) Ozores</dc:creator>
      <pubDate>Fri, 22 Sep 2023 19:00:00 +0000</pubDate>
      <link>https://dev.to/zozores/building-multiarch-images-with-buildah-5gnm</link>
      <guid>https://dev.to/zozores/building-multiarch-images-with-buildah-5gnm</guid>
      <description>&lt;p&gt;With the growth and popularization of platforms beyond amd64, especially the ARM platform, it becomes a concern for those maintaining container images to build them tailored for these platforms as well.&lt;/p&gt;

&lt;p&gt;In this article, I will show you how to build these images using &lt;a href="https://buildah.io/" rel="noopener noreferrer"&gt;Buildah&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Buildah is an open-source project led by Red Hat (with contributions from other companies) that aims to simplify building container images that comply with the &lt;a href="https://opencontainers.org/" rel="noopener noreferrer"&gt;OCI (Open Container Initiative)&lt;/a&gt; standard.&lt;/p&gt;

&lt;p&gt;So let’s get to the steps.&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Installation of the Buildah package&lt;/li&gt;
&lt;li&gt;Installation of the &lt;code&gt;qemu-user-static&lt;/code&gt; package (Ubuntu/RedHat) or the &lt;code&gt;qemu-arch-extra&lt;/code&gt; package (Arch Linux)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;sudo dnf install -y buildah qemu-user-static&lt;/code&gt; (Fedora and derivatives)&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get install -y buildah qemu-user-static&lt;/code&gt; (Ubuntu/Debian and derivatives)&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo pacman -Sy buildah qemu-arch-extra&lt;/code&gt; (Arch Linux and derivatives)&lt;/p&gt;

&lt;h2&gt;Building the images&lt;/h2&gt;

&lt;p&gt;The workflow to build the images is as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the manifest&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export MANIFEST="multiarch-image" # manifest name, can be any name
export IMAGE="registry/repository/image:tag" # image name, e.g., docker.io/ozorest/example:latest
buildah manifest create "$MANIFEST"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Run the build for each architecture&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the directory containing your Dockerfile, you can run the following loop, which builds the image once per architecture:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for arch in amd64 arm64; do
 buildah build --arch $arch --tag "$IMAGE" --manifest "$MANIFEST" .
 # buildah build --arch $arch --tag "$IMAGE" --manifest "$MANIFEST" -f path_to_Dockerfile, in case the Dockerfile is not in the current directory
done

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Push the manifest to the registry&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;buildah manifest push --all "$MANIFEST" "docker://$IMAGE"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it, folks! Stay tuned for more content!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Diagrams as Code</title>
      <dc:creator>Thiago (Zozô) Ozores</dc:creator>
      <pubDate>Thu, 05 May 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/zozores/diagrams-as-code-445c</link>
      <guid>https://dev.to/zozores/diagrams-as-code-445c</guid>
      <description>&lt;p&gt;Come, come, come and find out how to generate diagrams using Python code!!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://logs.zozo.dev.br/post/diagrama-como-codigo/" rel="noopener noreferrer"&gt;Em Português 🇧🇷 🇵🇹&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my day-to-day as an instructor, part of the job is to create diagrams that clearly illustrate the topic being presented to students. There are many good online graphic tools, such as &lt;a href="https://draw.io" rel="noopener noreferrer"&gt;draw.io&lt;/a&gt;, &lt;a href="https://www.lucidchart.com" rel="noopener noreferrer"&gt;LucidChart&lt;/a&gt;, among others.&lt;/p&gt;

&lt;p&gt;But despite being very intuitive and easy to use, these tools fall short when you need to scale diagram creation, pull information from external tools into diagrams, or just produce simple diagrams quickly: limited formatting options and the lack of automation make it difficult to create a “factory” of diagrams.&lt;/p&gt;

&lt;p&gt;For this scenario, Python has a package that can be used to represent and generate diagrams as code, making it easier to build that “factory”.&lt;/p&gt;

&lt;p&gt;The package is called &lt;a href="https://diagrams.mingrammer.com" rel="noopener noreferrer"&gt;diagrams&lt;/a&gt;, and it works in a very interesting way: it uses Python’s operator overloading to express, in an intuitive way, the connections between the nodes of the diagram. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;&amp;gt;&amp;gt;&lt;/code&gt; operator connects nodes from left to right&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;&amp;lt;&amp;lt;&lt;/code&gt; operator connects nodes from right to left&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;-&lt;/code&gt; operator connects nodes with no direction (undirected)&lt;/li&gt;
&lt;li&gt;Bidirectional connections are also possible using the &lt;code&gt;Edge&lt;/code&gt; class&lt;/li&gt;
&lt;/ul&gt;
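&lt;p&gt;The mechanism behind this is plain Python operator overloading: the package defines &lt;code&gt;__rshift__&lt;/code&gt;, &lt;code&gt;__lshift__&lt;/code&gt; and &lt;code&gt;__sub__&lt;/code&gt; on its node classes. A toy sketch of the idea (not the package's actual API):&lt;/p&gt;

```python
# Toy sketch of the operator overloading used by the `diagrams` package:
# `a >> b` reads as "connect a to b". This is NOT the library's real
# Node class, just an illustration of the mechanism.
class Node:
    def __init__(self, label):
        self.label = label
        self.edges = []  # (source, target) pairs recorded on this node

    def __rshift__(self, other):   # a >> b: arrow from a to b
        self.edges.append((self.label, other.label))
        return other               # returning `other` allows chaining

    def __lshift__(self, other):   # a << b: arrow from b to a
        self.edges.append((other.label, self.label))
        return other

    def __sub__(self, other):      # a - b: undirected link
        self.edges.append((self.label, other.label))
        return other

lb, web = Node("elb"), Node("web")
lb >> web                          # equivalent to lb.__rshift__(web)
print(lb.edges)                    # [('elb', 'web')]
```

&lt;p&gt;In the real package, the same operators connect provider nodes (e.g. AWS, GCP icons) inside a &lt;code&gt;with Diagram(...)&lt;/code&gt; block, and the graph is rendered via Graphviz.&lt;/p&gt;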

&lt;p&gt;If you’re looking for a more programmatic way to generate diagrams, this package is worth checking out.&lt;/p&gt;

&lt;p&gt;Below are some examples I developed, which can be tested on &lt;a href="https://colab.research.google.com/drive/1MrlVVFXAAMvuQJ8m-qkyY05wnurAl5uQ?usp=sharing" rel="noopener noreferrer"&gt;Google Colab&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Github Actions: Sharing artifacts between jobs</title>
      <dc:creator>Thiago (Zozô) Ozores</dc:creator>
      <pubDate>Sun, 28 Feb 2021 16:56:55 +0000</pubDate>
      <link>https://dev.to/zozores/github-actions-sharing-artifacts-between-jobs-318b</link>
      <guid>https://dev.to/zozores/github-actions-sharing-artifacts-between-jobs-318b</guid>
      <description>&lt;p&gt;This will be a quick tip.&lt;/p&gt;

&lt;p&gt;A few days ago, while configuring a pipeline in Github Actions, I needed to use a file generated by one job in another job, with the two jobs running on different operating systems.&lt;/p&gt;

&lt;p&gt;It's pretty straightforward to do that; the only things you need are the &lt;code&gt;upload-artifact&lt;/code&gt; and &lt;code&gt;download-artifact&lt;/code&gt; actions (plus a &lt;code&gt;needs&lt;/code&gt; dependency so the consuming job runs after the producing one)&lt;/p&gt;

&lt;p&gt;Here is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;job1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

  &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v1&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mkdir -p dist&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo hello &amp;gt; dist/world.txt&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/upload-artifact@master&lt;/span&gt;
    &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world-artifact&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dist/world.txt&lt;/span&gt;

&lt;span class="na"&gt;job2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;macos-latest&lt;/span&gt;

  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mkdir -p dist&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/download-artifact@master&lt;/span&gt;
    &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world-artifact&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dist/world.txt&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cat dist/world.txt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Source code and documentation for the actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/actions/upload-artifact" rel="noopener noreferrer"&gt;upload-artifact&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/actions/download-artifact" rel="noopener noreferrer"&gt;download-artifact&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;BONUS TIP&lt;/strong&gt; &lt;br&gt;
I also needed to fetch a file from the Github Releases, and to do that I used the third-party action &lt;code&gt;dsaltares/fetch-gh-release-asset@master&lt;/code&gt;; the drawback is that this action only runs on Linux (which is why I had to copy an artifact from one job to another ;-) )&lt;/p&gt;

&lt;p&gt;Here is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dsaltares/fetch-gh-release-asset@master&lt;/span&gt;
&lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-user/your-repo"&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;latest"&lt;/span&gt;
  &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;package.zip"&lt;/span&gt;
  &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dist/package.zip"&lt;/span&gt;
  &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.YOUR_TOKEN }}&lt;/span&gt; &lt;span class="c1"&gt;# If your repo is private, you need to fill with the personal access token&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Source code and documentation for the &lt;a href="https://github.com/dsaltares/fetch-gh-release-asset" rel="noopener noreferrer"&gt;dsaltares/fetch-gh-release-asset&lt;/a&gt; action&lt;/p&gt;

&lt;p&gt;That's all folks! Thank you and stay tuned for more tips.&lt;/p&gt;

</description>
      <category>github</category>
    </item>
    <item>
      <title>[PT-BR] Github Actions: Compartilhando artefatos entre jobs</title>
      <dc:creator>Thiago (Zozô) Ozores</dc:creator>
      <pubDate>Sun, 28 Feb 2021 16:56:40 +0000</pubDate>
      <link>https://dev.to/zozores/pt-br-github-actions-compartilhando-artefatos-entre-jobs-20f1</link>
      <guid>https://dev.to/zozores/pt-br-github-actions-compartilhando-artefatos-entre-jobs-20f1</guid>
      <description>&lt;p&gt;Esta vai ser uma dica rápida.&lt;/p&gt;

&lt;p&gt;Outro dia configurando um pipeline no Github Actions, eu tive a necessidade de usar um arquivo gerado em um job em um outro job, que estavam usando diferentes sistemas operacionais.&lt;/p&gt;

&lt;p&gt;É bem simples fazer isso, tudo que você precisa é destas duas actions: &lt;code&gt;upload-artifact&lt;/code&gt; e &lt;code&gt;download-artifact&lt;/code&gt; (além de uma dependência &lt;code&gt;needs&lt;/code&gt; para que o segundo job rode depois do primeiro)&lt;/p&gt;

&lt;p&gt;Aqui está um exemplo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;job1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

  &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v1&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mkdir -p dist&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo hello &amp;gt; dist/world.txt&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/upload-artifact@master&lt;/span&gt;
    &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world-artifact&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dist/world.txt&lt;/span&gt;

&lt;span class="na"&gt;job2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;macos-latest&lt;/span&gt;

  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mkdir -p dist&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/download-artifact@master&lt;/span&gt;
    &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world-artifact&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dist/world.txt&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cat dist/world.txt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Código-fonte e documentação das actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/actions/upload-artifact" rel="noopener noreferrer"&gt;upload-artifact&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/actions/download-artifact" rel="noopener noreferrer"&gt;download-artifact&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;DICA BÔNUS&lt;/strong&gt;&lt;br&gt;
Eu também tive a necessidade de fazer o download de um arquivo do Releases do Github, para isso eu usei esta action &lt;code&gt;dsaltares/fetch-gh-release-asset@master&lt;/code&gt; de terceiros, mas a desvantagem é que esta action apenas roda em Linux (por isso que eu precisei copiar um artefato de um job em outro ;-) )&lt;/p&gt;

&lt;p&gt;Aqui está um exemplo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dsaltares/fetch-gh-release-asset@master&lt;/span&gt;
&lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-user/your-repo"&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;latest"&lt;/span&gt;
  &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;package.zip"&lt;/span&gt;
  &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dist/package.zip"&lt;/span&gt;
  &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.YOUR_TOKEN }}&lt;/span&gt; &lt;span class="c1"&gt;# Se o seu repo é privado, você precisa do access token&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Código-fonte e documentação da action &lt;a href="https://github.com/dsaltares/fetch-gh-release-asset" rel="noopener noreferrer"&gt;dsaltares/fetch-gh-release-asset&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Isso é tudo, pessoal! Obrigado e fique em sintonia para mais dicas.&lt;/p&gt;

</description>
      <category>github</category>
    </item>
    <item>
      <title>The Poor Man's Guide to Django deployment in the AWS</title>
      <dc:creator>Thiago (Zozô) Ozores</dc:creator>
      <pubDate>Mon, 31 Aug 2020 18:26:16 +0000</pubDate>
      <link>https://dev.to/zozores/the-poor-man-s-guide-to-django-deployment-in-the-aws-42bd</link>
      <guid>https://dev.to/zozores/the-poor-man-s-guide-to-django-deployment-in-the-aws-42bd</guid>
      <description>&lt;p&gt;In this post I will talk about how to deploy a Django application (but easy to replicate to any other Python framework like Flask, for example) in the AWS saving a lot of money (depending of the traffic of your application and what services are you intended to use inside AWS, &lt;strong&gt;it could be free...forever!&lt;/strong&gt;)&lt;/p&gt;

&lt;p&gt;AND THIS POST WILL BE QUITE LONG, SORRY FOR THAT!&lt;/p&gt;

&lt;p&gt;In July, I finished the development of my educational project for children using Django, and my first question was how to deploy this application in the cloud in the cheapest way possible.&lt;/p&gt;

&lt;p&gt;My first option was Heroku, because it is quite easy to deploy there and to implement a continuous deployment process for free, but if I ever had to scale my application, Heroku could get a lot more expensive, so I decided to think it over.&lt;/p&gt;

&lt;p&gt;In that process, I found a project that was a game changer in my quest: &lt;a href="https://github.com/Miserlou/Zappa" rel="noopener noreferrer"&gt;Zappa&lt;/a&gt;. Below is the project description taken from its README on Github:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Zappa makes it super easy to build and deploy server-less, event-driven Python applications (including, but not limited to, WSGI web apps) on AWS Lambda + API Gateway. That means &lt;strong&gt;infinite scaling, zero downtime, zero maintenance&lt;/strong&gt; - and at a fraction of the cost of your current deployments!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;They deliver on that: it is indeed quite easy to deploy the app on AWS. The hardest part of the deployment is configuring the IAM roles and policies (I think this is the hardest part of any deployment there :-) ). So, without further ado, let me show you how I managed to deploy my Django app using Zappa.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;PS: It is outside the scope of this article to explain in depth how AWS services work; I'm assuming you have some knowledge of them. I won't use the console to configure AWS services, I'll use the AWS CLI, so if you want to follow along, &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html" rel="noopener noreferrer"&gt;check here how to set up the CLI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;PS 2: Set up the AWS CLI with a user that has Administrator access, but NEVER, EVER USE THE ROOT ACCOUNT! And keep the access and secret keys safe, for example in a password manager&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;em&gt;PS 3: All commands and examples were run on a Linux workstation&lt;/em&gt; &lt;/p&gt;

&lt;h4&gt;1. Creating the required AWS S3 Bucket&lt;/h4&gt;

&lt;p&gt;Zappa will use the S3 bucket to upload the Lambda-compatible archive generated by the deploy command, which I'll show later.&lt;br&gt;
To create the bucket using the CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3api create-bucket &lt;span class="nt"&gt;--bucket&lt;/span&gt; name_of_the_bucket &lt;span class="nt"&gt;--region&lt;/span&gt; region_of_your_choose &lt;span class="nt"&gt;--create-bucket-configuration&lt;/span&gt; &lt;span class="nv"&gt;LocationConstraint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;region_of_your_choose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The region parameter and the &lt;code&gt;LocationConstraint&lt;/code&gt; configuration are required only if you create the bucket outside the &lt;code&gt;us-east-1&lt;/code&gt; region; if you choose &lt;code&gt;us-east-1&lt;/code&gt;, you can remove both from the command.&lt;/p&gt;

&lt;p&gt;Remember that bucket names are unique globally, so if you receive an error like &lt;code&gt;BucketAlreadyExists&lt;/code&gt;, you have to choose a new name.&lt;/p&gt;

&lt;p&gt;If everything goes well, you should receive this return from the command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Location"&lt;/span&gt;: &lt;span class="s2"&gt;"http://name_of_your_bucket.s3.amazonaws.com/"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;em&gt;PS 4: If your application uses SQLite as its database, I recommend creating one more bucket for it, and if your application serves static and media files, another bucket for those as well; we will see later how to manage them inside AWS&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;2. Configuring needed IAM policies, roles, group and user&lt;/h4&gt;

&lt;p&gt;For better security (and as a best practice), we need to set up some IAM objects exclusively for Zappa, restricting which AWS resources it will have access to.&lt;/p&gt;
&lt;h5&gt;2.1 Creating the role&lt;/h5&gt;

&lt;p&gt;Let's start creating the role that will be passed to the Lambda function which will be created by Zappa during the deployment process.&lt;/p&gt;

&lt;p&gt;To create the role using the CLI, we need a JSON file that represents which AWS services will be allowed to call other AWS services on Zappa's behalf (the assume role policy document).&lt;/p&gt;

&lt;p&gt;Create the JSON file in some directory on your computer with this content:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
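&lt;p&gt;(The original gist embed does not survive in this feed. Below is a sketch of the file, reconstructed from the &lt;code&gt;AssumeRolePolicyDocument&lt;/code&gt; echoed by the &lt;code&gt;create-role&lt;/code&gt; return shown further down, written to the same &lt;code&gt;/tmp/zappa_assume_role.json&lt;/code&gt; path used in the command:)&lt;/p&gt;

```python
# Writes the assume role policy document to /tmp/zappa_assume_role.json.
# The service principals below are reconstructed from the create-role
# output shown later in this post.
import json

assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "apigateway.amazonaws.com",
                    "lambda.amazonaws.com",
                    "events.amazonaws.com",
                ]
            },
            "Action": "sts:AssumeRole",
        }
    ],
}

with open("/tmp/zappa_assume_role.json", "w") as f:
    json.dump(assume_role_policy, f, indent=4)
```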



&lt;p&gt;So, let's go back to the CLI to create the role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam create-role &lt;span class="nt"&gt;--role-name&lt;/span&gt; my-role &lt;span class="nt"&gt;--assume-role-policy-document&lt;/span&gt; file:///tmp/zappa_assume_role.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;my-role&lt;/code&gt; could be any name that you want.&lt;br&gt;
&lt;code&gt;file:///tmp/zappa_assume_role.json&lt;/code&gt; must match with the path of the file that you created before.&lt;/p&gt;

&lt;p&gt;This will be the return, if the command succeeded:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Role"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"Path"&lt;/span&gt;: &lt;span class="s2"&gt;"/"&lt;/span&gt;,
        &lt;span class="s2"&gt;"RoleName"&lt;/span&gt;: &lt;span class="s2"&gt;"my-role"&lt;/span&gt;,
        &lt;span class="s2"&gt;"RoleId"&lt;/span&gt;: &lt;span class="s2"&gt;"AROA3IDJ3HNEWIQZA5IQZ"&lt;/span&gt;,
        &lt;span class="s2"&gt;"Arn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::773316098889:role/my-role"&lt;/span&gt;,
        &lt;span class="s2"&gt;"CreateDate"&lt;/span&gt;: &lt;span class="s2"&gt;"2020-08-29T19:24:37Z"&lt;/span&gt;,
        &lt;span class="s2"&gt;"AssumeRolePolicyDocument"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"Version"&lt;/span&gt;: &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Statement"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"Sid"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"Effect"&lt;/span&gt;: &lt;span class="s2"&gt;"Allow"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"Principal"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                        &lt;span class="s2"&gt;"Service"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                            &lt;span class="s2"&gt;"apigateway.amazonaws.com"&lt;/span&gt;,
                            &lt;span class="s2"&gt;"lambda.amazonaws.com"&lt;/span&gt;,
                            &lt;span class="s2"&gt;"events.amazonaws.com"&lt;/span&gt;
                        &lt;span class="o"&gt;]&lt;/span&gt;
                    &lt;span class="o"&gt;}&lt;/span&gt;,
                    &lt;span class="s2"&gt;"Action"&lt;/span&gt;: &lt;span class="s2"&gt;"sts:AssumeRole"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Save the value returned in the &lt;code&gt;Arn&lt;/code&gt; field somewhere; we will need it later.&lt;/strong&gt;&lt;/p&gt;
&lt;h5&gt;2.2 Attaching the policy to the role&lt;/h5&gt;

&lt;p&gt;Now, it's time to attach the policy to the role that we created before.&lt;/p&gt;

&lt;p&gt;An AWS policy is also a JSON document; it represents the permissions to use services inside AWS that will be granted to the role we attach it to.&lt;/p&gt;

&lt;p&gt;For that, again, you need to create a JSON file in some directory on your computer with this content:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;Again, let's go back to the CLI and execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam put-role-policy &lt;span class="nt"&gt;--role-name&lt;/span&gt; my-role &lt;span class="nt"&gt;--policy-name&lt;/span&gt; my-policy &lt;span class="nt"&gt;--policy-document&lt;/span&gt; file:///tmp/zappa_policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;my-role&lt;/code&gt; must match with the name of the role that we created before.&lt;br&gt;
&lt;code&gt;my-policy&lt;/code&gt; could be any name that you want.&lt;br&gt;
&lt;code&gt;file:///tmp/zappa_policy.json&lt;/code&gt; must match with the path of the file that you created before.&lt;/p&gt;

&lt;p&gt;If the command succeeded, &lt;strong&gt;nothing will be returned&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But if you want to confirm that everything went well, you can run this command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam get-role-policy &lt;span class="nt"&gt;--role-name&lt;/span&gt; my-role &lt;span class="nt"&gt;--policy-name&lt;/span&gt; my-policy | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The return should be:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"RoleName"&lt;/span&gt;: &lt;span class="s2"&gt;"my-role"&lt;/span&gt;,
    &lt;span class="s2"&gt;"PolicyName"&lt;/span&gt;: &lt;span class="s2"&gt;"my-policy"&lt;/span&gt;,
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;2.3 Creating the group&lt;/h5&gt;

&lt;p&gt;In the last 2 steps, we defined the role and policy that will be passed to the Lambda function which will be created by Zappa during the deployment.&lt;/p&gt;

&lt;p&gt;Now, we have to define the group, user and the policies that will allow Zappa to create the resources inside AWS (e.g., the Lambda function, the API Gateway, the package stored in S3, etc.).&lt;/p&gt;

&lt;p&gt;We will start by creating the group, executing the following CLI command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam create-group &lt;span class="nt"&gt;--group-name&lt;/span&gt; my-group
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This will be the return, if the command succeeded:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Group"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"Path"&lt;/span&gt;: &lt;span class="s2"&gt;"/"&lt;/span&gt;,
        &lt;span class="s2"&gt;"GroupName"&lt;/span&gt;: &lt;span class="s2"&gt;"my-group"&lt;/span&gt;,
        &lt;span class="s2"&gt;"GroupId"&lt;/span&gt;: &lt;span class="s2"&gt;"AGPA3IDJ3HNEYAXDIXQ5K"&lt;/span&gt;,
        &lt;span class="s2"&gt;"Arn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::773316098889:group/my-group"&lt;/span&gt;,
        &lt;span class="s2"&gt;"CreateDate"&lt;/span&gt;: &lt;span class="s2"&gt;"2020-08-29T20:30:14Z"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;2.4 Attaching the policy for Zappa general permissions to the group&lt;/h5&gt;

&lt;p&gt;We will do a job similar to step 2.2, but instead of creating just one JSON file, we will create two, because we have to attach two policies to the group: one for general permissions and another specifically for S3 permissions.&lt;/p&gt;

&lt;p&gt;Let's start with the general one. Create the JSON file in some directory on your computer with the content below, but this time we need to make a small change in the JSON before using it in the CLI.&lt;/p&gt;

&lt;p&gt;Find &lt;code&gt;full_arn_from_created_role&lt;/code&gt; inside the content and replace it with the &lt;code&gt;Arn&lt;/code&gt; of the previously created role.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
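&lt;p&gt;(The embedded gist is not shown in this feed. As an illustration only, a minimal policy of the right shape might look like the sketch below; the action list is my assumption, not Zappa's official policy, so consult the Zappa documentation for the full set of permissions it actually needs.)&lt;/p&gt;

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["iam:PassRole"],
            "Resource": ["full_arn_from_created_role"]
        },
        {
            "Effect": "Allow",
            "Action": [
                "apigateway:*",
                "cloudformation:*",
                "events:*",
                "lambda:*",
                "logs:*"
            ],
            "Resource": "*"
        }
    ]
}
```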



&lt;p&gt;After that, go to the CLI and run the command that attaches the policy to the group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam put-group-policy &lt;span class="nt"&gt;--group-name&lt;/span&gt; my-group &lt;span class="nt"&gt;--policy-document&lt;/span&gt; file:///tmp/zappa_general_policy.json &lt;span class="nt"&gt;--policy-name&lt;/span&gt; my-general-policy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;my-group&lt;/code&gt; must match the name of the group we created before.&lt;br&gt;
&lt;code&gt;my-general-policy&lt;/code&gt; can be any name you want, but it cannot be the same as the policy created in step 2.2.&lt;br&gt;
&lt;code&gt;file:///tmp/zappa_general_policy.json&lt;/code&gt; must match the path of the file you created before.&lt;/p&gt;

&lt;p&gt;If the command succeeds, &lt;strong&gt;nothing will be returned&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;But if you want to confirm that everything went well, you can run this command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam get-group-policy &lt;span class="nt"&gt;--group-name&lt;/span&gt; my-group &lt;span class="nt"&gt;--policy-name&lt;/span&gt; my-general-policy | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The output should be:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"GroupName"&lt;/span&gt;: &lt;span class="s2"&gt;"my-group"&lt;/span&gt;,
    &lt;span class="s2"&gt;"PolicyName"&lt;/span&gt;: &lt;span class="s2"&gt;"my-general-policy"&lt;/span&gt;,
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  2.5 Attaching the policy for Zappa specific S3 permissions to the group
&lt;/h5&gt;

&lt;p&gt;Now we will attach the S3-specific policy to the group, repeating the same steps as before.&lt;br&gt;
But the content of the JSON will be different, and again we have to make a small change before using it in the CLI.&lt;/p&gt;

&lt;p&gt;Find &lt;code&gt;full_arn_from_s3_bucket&lt;/code&gt; inside the content and replace it with the &lt;code&gt;Arn&lt;/code&gt; of the S3 bucket created in step 1.&lt;/p&gt;

&lt;p&gt;The ARN of the S3 bucket follows this pattern: &lt;code&gt;arn:aws:s3:::name_of_your_bucket&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you created more than one bucket in step 1, you must add their ARNs as well.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
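&lt;p&gt;(The embedded gist is not shown in this feed. An illustrative sketch of the S3 policy's shape is below; the broad &lt;code&gt;s3:*&lt;/code&gt; action is my simplification, and you should restrict it to the actions the Zappa documentation recommends.)&lt;/p&gt;

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": [
                "full_arn_from_s3_bucket",
                "full_arn_from_s3_bucket/*"
            ]
        }
    ]
}
```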



&lt;p&gt;After that, let's repeat the same commands from the previous step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam put-group-policy &lt;span class="nt"&gt;--group-name&lt;/span&gt; my-group &lt;span class="nt"&gt;--policy-document&lt;/span&gt; file:///tmp/zappa_s3_policy.json &lt;span class="nt"&gt;--policy-name&lt;/span&gt; my-s3-policy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;my-group&lt;/code&gt; must match the name of the group we created before.&lt;br&gt;
&lt;code&gt;my-s3-policy&lt;/code&gt; can be any name you want, but it cannot be the same as the policies created in steps 2.2 and 2.4.&lt;br&gt;
&lt;code&gt;file:///tmp/zappa_s3_policy.json&lt;/code&gt; must match the path of the file you created before.&lt;/p&gt;

&lt;p&gt;If the command succeeds, &lt;strong&gt;nothing will be returned&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;But if you want to confirm that everything went well, you can run this command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam get-group-policy &lt;span class="nt"&gt;--group-name&lt;/span&gt; my-group &lt;span class="nt"&gt;--policy-name&lt;/span&gt; my-s3-policy | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The output should be:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"GroupName"&lt;/span&gt;: &lt;span class="s2"&gt;"my-group"&lt;/span&gt;,
    &lt;span class="s2"&gt;"PolicyName"&lt;/span&gt;: &lt;span class="s2"&gt;"my-s3-policy"&lt;/span&gt;,
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  2.6 Creating the user, attaching it to the group, and creating the access key
&lt;/h5&gt;

&lt;p&gt;At last, we have to create the user that will be used exclusively by Zappa, attach it to the group created previously, and generate the user's access key.&lt;/p&gt;

&lt;p&gt;Let's start by creating the user:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam create-user &lt;span class="nt"&gt;--user-name&lt;/span&gt; my-user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The output should be:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"User"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"UserName"&lt;/span&gt;: &lt;span class="s2"&gt;"my-user"&lt;/span&gt;,
        &lt;span class="s2"&gt;"Path"&lt;/span&gt;: &lt;span class="s2"&gt;"/"&lt;/span&gt;,
        &lt;span class="s2"&gt;"CreateDate"&lt;/span&gt;: &lt;span class="s2"&gt;"2013-06-08T03:20:41.270Z"&lt;/span&gt;,
        &lt;span class="s2"&gt;"UserId"&lt;/span&gt;: &lt;span class="s2"&gt;"AIDAIOSFODNN7EXAMPLE"&lt;/span&gt;,
        &lt;span class="s2"&gt;"Arn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::123456789012:user/Bob"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now, we will attach this user to the group:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam add-user-to-group &lt;span class="nt"&gt;--user-name&lt;/span&gt; my-user &lt;span class="nt"&gt;--group-name&lt;/span&gt; my-group
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If the command succeeds, &lt;strong&gt;nothing will be returned&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;But if you want to confirm that everything went well, you can run this command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam get-group &lt;span class="nt"&gt;--group-name&lt;/span&gt; my-group
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The output should be:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Users"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"Path"&lt;/span&gt;: &lt;span class="s2"&gt;"/"&lt;/span&gt;,
            &lt;span class="s2"&gt;"UserName"&lt;/span&gt;: &lt;span class="s2"&gt;"my-user"&lt;/span&gt;,
            &lt;span class="s2"&gt;"UserId"&lt;/span&gt;: &lt;span class="s2"&gt;"AIDA3IDJ3HNEVBCYBTCVB"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Arn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::773316098889:user/my-user"&lt;/span&gt;,
            &lt;span class="s2"&gt;"CreateDate"&lt;/span&gt;: &lt;span class="s2"&gt;"2020-08-29T21:55:40Z"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;,
    &lt;span class="s2"&gt;"Group"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"Path"&lt;/span&gt;: &lt;span class="s2"&gt;"/"&lt;/span&gt;,
        &lt;span class="s2"&gt;"GroupName"&lt;/span&gt;: &lt;span class="s2"&gt;"my-group"&lt;/span&gt;,
        &lt;span class="s2"&gt;"GroupId"&lt;/span&gt;: &lt;span class="s2"&gt;"AGPA3IDJ3HNEYAXDIXQ5K"&lt;/span&gt;,
        &lt;span class="s2"&gt;"Arn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::773316098889:group/my-group"&lt;/span&gt;,
        &lt;span class="s2"&gt;"CreateDate"&lt;/span&gt;: &lt;span class="s2"&gt;"2020-08-29T20:30:14Z"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now, finally, let's generate the user's access key:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam create-access-key &lt;span class="nt"&gt;--user-name&lt;/span&gt; my-user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The output should be:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"AccessKey"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"UserName"&lt;/span&gt;: &lt;span class="s2"&gt;"my-user"&lt;/span&gt;,
        &lt;span class="s2"&gt;"Status"&lt;/span&gt;: &lt;span class="s2"&gt;"Active"&lt;/span&gt;,
        &lt;span class="s2"&gt;"CreateDate"&lt;/span&gt;: &lt;span class="s2"&gt;"2015-03-09T18:39:23.411Z"&lt;/span&gt;,
        &lt;span class="s2"&gt;"SecretAccessKey"&lt;/span&gt;: &lt;span class="s2"&gt;"wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY"&lt;/span&gt;,
        &lt;span class="s2"&gt;"AccessKeyId"&lt;/span&gt;: &lt;span class="s2"&gt;"AKIAIOSFODNN7EXAMPLE"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Save the &lt;code&gt;SecretAccessKey&lt;/code&gt; and &lt;code&gt;AccessKeyId&lt;/code&gt; in a safe place (it is a best practice to do that); we will need them later.&lt;/p&gt;
&lt;h4&gt;
  
  
  3. Considerations before installing and configuring Zappa
&lt;/h4&gt;

&lt;p&gt;There are two things that Zappa won't help you with inside AWS: how to handle your static and media files and how to connect to a database.&lt;/p&gt;

&lt;p&gt;So, if you want to use AWS for those as well, you will have to handle them inside your Python app.&lt;/p&gt;

&lt;p&gt;In my deployment, to keep things simple, I chose SQLite as the database, storing the database file in an S3 bucket, and I also chose to store the static and media files in another S3 bucket (that is why I warned you to create some extra buckets in step 1).&lt;/p&gt;

&lt;p&gt;Other options are available, for example Aurora or RDS as the database (but if you use them, you will have to update the IAM policies that we created before).&lt;/p&gt;

&lt;p&gt;To set up Django to use S3 both to store the SQLite database and to store the static/media files, we need to install two Python packages that will be responsible for that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remember to activate your virtualenv if you are using one, or use &lt;code&gt;pipenv&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;django-s3-storage &lt;span class="c"&gt;# Will handle the static/media files&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;django-s3-sqlite &lt;span class="c"&gt;# Will handle the SQLite database&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Example of how to configure Django &lt;code&gt;settings.py&lt;/code&gt; to manage the static/media files and the SQLite database using S3:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
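&lt;p&gt;(The embedded gist is not shown in this feed. Below is a minimal sketch of such a &lt;code&gt;settings.py&lt;/code&gt; fragment, following the documented settings of &lt;code&gt;django-s3-sqlite&lt;/code&gt; and &lt;code&gt;django-s3-storage&lt;/code&gt;; bucket names and region are placeholders, not values from the article.)&lt;/p&gt;

```python
# settings.py fragment -- a sketch only; bucket names and region are placeholders.

INSTALLED_APPS = [
    # ... the rest of your apps ...
    "django_s3_storage",  # S3 backend for static/media files
]

# SQLite database file kept in an S3 bucket (django-s3-sqlite engine)
DATABASES = {
    "default": {
        "ENGINE": "django_s3_sqlite",
        "NAME": "db.sqlite3",
        "BUCKET": "your-database-bucket",  # placeholder: one of the extra buckets from step 1
    }
}

# Static and media files served from another S3 bucket (django-s3-storage)
STATICFILES_STORAGE = "django_s3_storage.storage.StaticS3Storage"
DEFAULT_FILE_STORAGE = "django_s3_storage.storage.S3Storage"
AWS_REGION = "us-east-1"                            # placeholder region
AWS_S3_BUCKET_NAME = "your-media-bucket"            # placeholder
AWS_S3_BUCKET_NAME_STATIC = "your-static-bucket"    # placeholder
STATIC_URL = "https://your-static-bucket.s3.amazonaws.com/"
```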



&lt;h4&gt;
  
  
  4. Installing and configuring Zappa
&lt;/h4&gt;

&lt;p&gt;Now we will install and configure Zappa. For the installation we will use pip.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remember to activate your virtualenv if you are using one, or use &lt;code&gt;pipenv&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;zappa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;To configure Zappa, you need to create the file &lt;code&gt;zappa_settings.json&lt;/code&gt; in the root of your Django application repository (the same place as &lt;code&gt;manage.py&lt;/code&gt;). Below is an example:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
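&lt;p&gt;(The embedded gist is not shown in this feed. A minimal sketch of what such a &lt;code&gt;zappa_settings.json&lt;/code&gt; could look like is below; the project name, runtime, bucket, and region values are placeholders.)&lt;/p&gt;

```json
{
    "dev": {
        "django_settings": "name_of_your_django_project.settings",
        "project_name": "my-project",
        "runtime": "python3.8",
        "s3_bucket": "name-of-the-bucket-from-step-1",
        "aws_region": "us-east-1"
    }
}
```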



&lt;p&gt;&lt;code&gt;dev&lt;/code&gt; is the name of the stage that will be created inside the API Gateway.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;django_settings&lt;/code&gt; must be &lt;code&gt;name_of_your_django_project.settings&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;s3_bucket&lt;/code&gt; must be the bucket that we created in step 1&lt;/p&gt;

&lt;p&gt;You can check &lt;a href="https://github.com/ozorest/bingo-silabas" rel="noopener noreferrer"&gt;my project repo at Github&lt;/a&gt; for more information.&lt;/p&gt;

&lt;h4&gt;
  
  
  5. Deploying the application
&lt;/h4&gt;

&lt;p&gt;Finally, we got to what matters :-)&lt;/p&gt;

&lt;p&gt;Before running the command to finally deploy the application to AWS, we need to define the &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; and &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt; environment variables with the access key and secret key of the user we created in step 2.6.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remember to activate your virtualenv first if you are using one; if you are using pipenv, run &lt;code&gt;pipenv shell&lt;/code&gt; first&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;access_key_of_the_user_created_before
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;secret_key_of_the_user_created_before
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can run the command to deploy our application for the first time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;zappa deploy dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;dev&lt;/code&gt; must be the name of the stage that you defined in the &lt;code&gt;zappa_settings.json&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If everything goes well, the output should be something similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;...
Deploying API Gateway...
Deployment &lt;span class="nb"&gt;complete&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;: https://wf31r9h75a.execute-api.us-west-2.amazonaws.com/dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use the URL above to test whether your application is running properly; if not, you can use this command to check the logs and see what happened:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;zappa &lt;span class="nb"&gt;tail &lt;/span&gt;dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And to update your application, you don't need to run the deploy command again; just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;zappa update dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, that's all, folks! Below I'm leaving some links with additional information:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying a Flask App&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://camo.githubusercontent.com/be05103c626a5afe18dc4b1208a4b465dbd9e731/687474703a2f2f692e696d6775722e636f6d2f6631504a7843512e676966" class="article-body-image-wrapper"&gt;&lt;img src="https://camo.githubusercontent.com/be05103c626a5afe18dc4b1208a4b465dbd9e731/687474703a2f2f692e696d6775722e636f6d2f6631504a7843512e676966" alt="Deploying a Flask App" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Miserlou/Zappa#ssl-certification" rel="noopener noreferrer"&gt;Deploying using custom domain and SSL certificate&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.agiliq.com/blog/2019/01/complete-serverless-django/" rel="noopener noreferrer"&gt;Deploying using Aurora as database&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks a lot for your patience!&lt;br&gt;
And have a nice day!&lt;/p&gt;

</description>
      <category>python</category>
      <category>django</category>
      <category>flask</category>
      <category>aws</category>
    </item>
    <item>
      <title>[Fedora 32] How to solve docker internal network issue</title>
      <dc:creator>Thiago (Zozô) Ozores</dc:creator>
      <pubDate>Fri, 12 Jun 2020 02:54:49 +0000</pubDate>
      <link>https://dev.to/zozores/fedora-32-how-to-solve-docker-internal-network-issue-22me</link>
      <guid>https://dev.to/zozores/fedora-32-how-to-solve-docker-internal-network-issue-22me</guid>
      <description>&lt;p&gt;Recently I upgraded to Fedora 32 with a fresh install and started to face an issue using docker-compose and docker in general, containers weren't able to talk each other.&lt;/p&gt;

&lt;p&gt;After some googling, I found that the default backend for firewalld was changed &lt;a href="https://fedoraproject.org/wiki/Changes/firewalld_default_to_nftables" rel="noopener noreferrer"&gt;from iptables to nftables&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I tried the proposed fixes for Docker described in the link above, but without success, so the way to solve the issue for me was to put iptables back as the firewalld backend.&lt;/p&gt;

&lt;p&gt;With the commands below, I was able to solve the issue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sed -i 's/FirewallBackend=nftables/FirewallBackend=iptables/g' /etc/firewalld/firewalld.conf

sudo systemctl restart firewalld docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
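&lt;p&gt;If you want to preview what the &lt;code&gt;sed&lt;/code&gt; substitution does before touching the real config, you can try it on a throwaway sample file first (the sample path is just for illustration; the real file is &lt;code&gt;/etc/firewalld/firewalld.conf&lt;/code&gt;):&lt;/p&gt;

```shell
# Demonstrate the substitution on a sample file instead of the real config
printf 'FirewallBackend=nftables\n' > /tmp/firewalld.conf.sample
sed -i 's/FirewallBackend=nftables/FirewallBackend=iptables/g' /tmp/firewalld.conf.sample
cat /tmp/firewalld.conf.sample
```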



&lt;p&gt;That's all, folks. Thanks for reading and stay tuned.&lt;/p&gt;

</description>
      <category>fedora</category>
      <category>docker</category>
      <category>firewalld</category>
    </item>
    <item>
      <title>Mirroring repositories at Gitlab.com</title>
      <dc:creator>Thiago (Zozô) Ozores</dc:creator>
      <pubDate>Tue, 09 Jun 2020 14:07:08 +0000</pubDate>
      <link>https://dev.to/zozores/mirroring-repositories-at-gitlab-com-3g3e</link>
      <guid>https://dev.to/zozores/mirroring-repositories-at-gitlab-com-3g3e</guid>
      <description>&lt;p&gt;I'm migrating most part of my repositories to the &lt;a href="https://gitlab.com" rel="noopener noreferrer"&gt;Gitlab.com&lt;/a&gt;, because they ultimately have a nice set of features, but I'm not intended to left &lt;a href="https://github.com" rel="noopener noreferrer"&gt;Github&lt;/a&gt;, because it has another nice set of features and also they have a certain charm ;-)&lt;/p&gt;

&lt;p&gt;With that in mind, I was wondering how to sync my repos from Github to Gitlab and vice versa. I checked some options, but the simplest one I found was an awesome feature available at Gitlab.com called &lt;strong&gt;mirroring repositories&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It is possible to mirror any internet-accessible git repository in both directions (push and pull), so if you update the repo on Gitlab, the other repo is updated as well, and vice versa.&lt;/p&gt;

&lt;p&gt;So, I synchronized my Github repos with Gitlab that way; follow the steps below to configure it:&lt;/p&gt;

&lt;p&gt;1) Go to the project page inside Gitlab&lt;/p&gt;

&lt;p&gt;2) In the left sidebar, look for the "Settings" option&lt;/p&gt;

&lt;p&gt;3) That option will open a set of other options; click on the "Repository" option&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsf0f6s3ncqpl49dq2bp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsf0f6s3ncqpl49dq2bp.png" alt="Gitlab screen showing Settings and Repository options" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4) Expand the "Mirroring repositories" option&lt;/p&gt;

&lt;p&gt;5) Fill in "Git repository URL" with the address of your repo; if you use HTTPS, you must include the user in the URL. Ex.: &lt;a href="https://user@github.com/user/repo.git" rel="noopener noreferrer"&gt;https://user@github.com/user/repo.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6) In the "Mirror direction" combo box, choose whether you will do a Push or a Pull (you can configure both at the same time)&lt;/p&gt;

&lt;p&gt;7) Then choose the authentication method: password if you will use HTTPS, public key if you will use SSH&lt;/p&gt;

&lt;p&gt;8) If you chose password, fill in the textbox below with the password of your source repo; if you chose public key, you must add a new SSH key.&lt;/p&gt;

&lt;p&gt;9) If you want, you can check "Keep divergent refs" or "Only mirror protected branches" (I usually don't)&lt;/p&gt;

&lt;p&gt;10) At last, click the beautiful green button called "Mirror repository", and that's it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxwdbcgwp46w4zj433a4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxwdbcgwp46w4zj433a4.png" alt="Gitlab screen showing the " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bihldc90d0vzneep0cn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bihldc90d0vzneep0cn.png" alt="Detailed view of " width="357" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, that's it for now.&lt;/p&gt;

</description>
      <category>git</category>
      <category>gitlab</category>
      <category>github</category>
    </item>
    <item>
      <title>Local NLB for Kubernetes</title>
      <dc:creator>Thiago (Zozô) Ozores</dc:creator>
      <pubDate>Tue, 02 Jun 2020 18:02:45 +0000</pubDate>
      <link>https://dev.to/zozores/local-nlb-for-kubernetes-54h9</link>
      <guid>https://dev.to/zozores/local-nlb-for-kubernetes-54h9</guid>
      <description>&lt;p&gt;I decided to build a Kubernetes infra from scratch using some virtual machines inside my laptop, because I think it's the better way to study for CKA and to know better how Kubernetes works (and since I also don't want to pay a few dollars to a cloud provider, because exchange rates have skyrocketed in Brazil :-) )&lt;/p&gt;

&lt;p&gt;The worst thing about using this kind of infra instead of a cloud provider is trying to expose a service using the LoadBalancer type. On a cloud provider, such a thing is quite easy.&lt;/p&gt;

&lt;p&gt;But since I discovered the &lt;a href="https://metallb.universe.tf/" rel="noopener noreferrer"&gt;MetalLB&lt;/a&gt; project, I don't have that problem anymore.&lt;/p&gt;

&lt;p&gt;MetalLB provides a network load balancer implementation for bare-metal Kubernetes clusters using standard routing protocols (since Kubernetes doesn't offer one). In other words, I'm able to load balance requests the Kubernetes way (the right way ;-) ), instead of using an external nginx to load balance NodePort services.&lt;/p&gt;

&lt;p&gt;Below are the steps I followed in my environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Environment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;4 libvirt VM's w/ CentOS 7 (provisioned with Vagrant)&lt;/li&gt;
&lt;li&gt;1 master node&lt;/li&gt;
&lt;li&gt;3 workers nodes&lt;/li&gt;
&lt;li&gt;Kubernetes 1.18&lt;/li&gt;
&lt;li&gt;MetalLB 0.9.3&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Preparation&lt;/strong&gt;&lt;br&gt;
First, as the MetalLB documentation asks, I needed to check whether strict ARP mode must be enabled for kube-proxy. Since Kubernetes 1.14.2, when kube-proxy runs in IPVS mode, strict ARP must be enabled.&lt;/p&gt;

&lt;p&gt;To check it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe cm -n kube-system kube-proxy | grep -i strictarp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you get this output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;strictARP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You need to enable strictARP; to do that, use this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get cm kube-proxy -n kube-system -o json &amp;gt; strict.json &amp;amp;&amp;amp; sed 's/strictARP: false/strictARP: true/g' strict.json &amp;amp;&amp;amp; kubectl replace -f strict.json; rm -f strict.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Installation&lt;/strong&gt;&lt;br&gt;
Now we can install it. The documentation provides two ways to install, via manifests and via Kustomize; I chose to apply the manifests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to generate a secret (used to encrypt the communication between the speakers):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, according to the docs, these components will be deployed:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This will deploy MetalLB to your cluster, under the metallb-system namespace. The components in the manifest are:&lt;/p&gt;

&lt;p&gt;The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments.&lt;br&gt;
The metallb-system/speaker daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable.&lt;br&gt;
Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At last, we need to configure MetalLB by defining and deploying a ConfigMap. I used Layer 2 mode, which is the simplest one (according to the docs, and in my case it was true):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;metallb-config.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb-system&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;address-pools:&lt;/span&gt;
    &lt;span class="s"&gt;- name: default&lt;/span&gt;
      &lt;span class="s"&gt;protocol: layer2&lt;/span&gt;
      &lt;span class="s"&gt;addresses:&lt;/span&gt;
      &lt;span class="s"&gt;- 192.168.122.2-192.168.122.10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f metallb-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, MetalLB was able to allocate those IPs (they are not random; they are part of the available IPs from the virtual network built by libvirt) as load balancer addresses.&lt;/p&gt;

&lt;p&gt;And now we can test.&lt;/p&gt;
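&lt;p&gt;For example (the names and image here are just for illustration, not from my cluster), you could run &lt;code&gt;kubectl create deployment nginx --image=nginx&lt;/code&gt; and then expose it through a Service of type LoadBalancer like this:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer    # MetalLB should assign an IP from the 192.168.122.2-10 pool
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

&lt;p&gt;After &lt;code&gt;kubectl apply&lt;/code&gt;, &lt;code&gt;kubectl get svc nginx-lb&lt;/code&gt; should show an EXTERNAL-IP taken from the configured address pool.&lt;/p&gt;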

</description>
      <category>kubernetes</category>
      <category>metallb</category>
      <category>loadbalancer</category>
    </item>
  </channel>
</rss>
