<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Xavier Geerinck</title>
    <description>The latest articles on DEV Community by Xavier Geerinck (@xaviergeerinck).</description>
    <link>https://dev.to/xaviergeerinck</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F214178%2F74ed6432-cb5d-4e09-b0d2-9bab9db791ef.jpg</url>
      <title>DEV Community: Xavier Geerinck</title>
      <link>https://dev.to/xaviergeerinck</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/xaviergeerinck"/>
    <language>en</language>
    <item>
      <title>Automating PyTorch ARM Builds with Docker BuildX for Nvidia CUDA and Python &gt; 3.6</title>
      <dc:creator>Xavier Geerinck</dc:creator>
      <pubDate>Thu, 25 Nov 2021 11:19:12 +0000</pubDate>
      <link>https://dev.to/xaviergeerinck/automating-pytorch-arm-builds-with-docker-buildx-for-nvidia-cuda-and-python-36-h31</link>
      <guid>https://dev.to/xaviergeerinck/automating-pytorch-arm-builds-with-docker-buildx-for-nvidia-cuda-and-python-36-h31</guid>
      <description>&lt;h3&gt;
  
  
  My Workflow
&lt;/h3&gt;

&lt;p&gt;For a use case I wanted to utilize the Nvidia Jetson for edge inference. One of the bottlenecks was that my software required a Python version greater than 3.6, while the &lt;a href="https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch"&gt;Nvidia Jetson packages for PyTorch&lt;/a&gt; are only built for Python 3.6.&lt;/p&gt;

&lt;p&gt;Searching around, I found quite a lot of people struggling with this, which led me to look into &lt;strong&gt;a solution&lt;/strong&gt; that would help me automate building an ARM wheel of &lt;strong&gt;PyTorch that can run on Nvidia devices&lt;/strong&gt; (e.g. the Nvidia Jetson Nano) and thus &lt;strong&gt;support CUDA&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The entire process above took me around 11 full days, starting off with figuring out how to build the Dockerfile and finally automating the CI process.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: I also decided to utilize this article as an entry for a running Hackathon by &lt;a href="https://dev.to/devteam/join-us-for-the-2021-github-actions-hackathon-on-dev-4hn4"&gt;Dev.to for GitHub Actions&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You might wonder why we would want to automate this. When I initially compiled PyTorch on my Nvidia Jetson Nano, I couldn't get it past ~80% even with 16 GB of swap space (the device only has 2 GB of RAM). Building it on my personal PC instead took ~6 hours to complete. That makes the build large and slow enough to be worth automating.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The final source code can be found on &lt;a href="https://github.com/XavierGeerinck/Jetson-Linux-PyTorch"&gt;GitHub&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Submission Category:
&lt;/h3&gt;

&lt;p&gt;DIY Deployments, Interesting IoT&lt;/p&gt;

&lt;h3&gt;
  
  
  Contributions
&lt;/h3&gt;

&lt;p&gt;In any project of this size, specific contributions are made along the way. I believe this project makes the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install CUDA on non-GPU devices&lt;/li&gt;
&lt;li&gt;Compile PyTorch with CUDA enabled on non-GPU devices&lt;/li&gt;
&lt;li&gt;Compile PyTorch for Python &amp;gt; 3.6&lt;/li&gt;
&lt;li&gt;Build for ARM with CI through Docker Buildx&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Project Outline
&lt;/h3&gt;

&lt;p&gt;As a best practice, I like to include an outline of how I tackled the issue above (to share my thought process):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Dockerfile that builds the Wheel

&lt;ul&gt;
&lt;li&gt;How do I build for CUDA? (Hardest part)&lt;/li&gt;
&lt;li&gt;How do I cross build?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Create GitHub Action

&lt;ul&gt;
&lt;li&gt;How do I cross build? Can I run for ARM specifically?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Dockerfile Creation - Building PyTorch for ARM and Python &amp;gt; 3.6
&lt;/h3&gt;

&lt;p&gt;The hardest part BY FAR is compiling PyTorch for ARM and Python &amp;gt; 3.6 with CUDA enabled, because we build on a non-GPU device where CUDA is not available. I split the Dockerfile up into separate sections, the first three of which can be run in parallel:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up CUDA&lt;/li&gt;
&lt;li&gt;Set up PyTorch (the cloning takes a while)&lt;/li&gt;
&lt;li&gt;Set up Python 3.9&lt;/li&gt;
&lt;li&gt;Compile PyTorch

&lt;ul&gt;
&lt;li&gt;I have Nvidia Jetson optimisations included here, thanks &lt;a href="https://qengineering.eu/install-pytorch-on-jetson-nano.html"&gt;QEngineering&lt;/a&gt;!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Dockerfile Result&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Below you can find the explanation of the respective steps. For the final &lt;code&gt;Dockerfile&lt;/code&gt;, optimizations were made to decrease the Docker layer sizes (by grouping commands into a single RUN instruction).&lt;/p&gt;
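&lt;p&gt;The layer-size optimization mentioned above comes down to chaining shell steps inside a single &lt;code&gt;RUN&lt;/code&gt; instruction, so intermediate files never end up in a layer of their own. A minimal, illustrative fragment (the package is arbitrary):&lt;/p&gt;

```docker
# One RUN = one layer: the apt metadata created by `update` is removed
# within the same layer, so it never inflates the final image
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
```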

&lt;h4&gt;
  
  
  Setting up CUDA
&lt;/h4&gt;

&lt;p&gt;For CUDA, we do not have the CUDA libraries, nor do we have direct access to them! There is, however, a trick that allows us to get CUDA loaded: we copy over the NVIDIA Jetson public key, authorize ourselves against the Jetson repository, and then install the libraries through the package manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;V_CUDA_DASH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10-2

&lt;span class="c"&gt;# Add the public key&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[Builder] Adding the Jetson Public Key"&lt;/span&gt;
curl https://repo.download.nvidia.com/jetson/jetson-ota-public.asc &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apt/trusted.gpg.d/jetson-ota-public.asc
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb https://repo.download.nvidia.com/jetson/common &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;L4T&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; main"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb https://repo.download.nvidia.com/jetson/t186 &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;L4T&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; main"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /etc/apt/sources.list.d/nvidia-l4t-apt-source.list

&lt;span class="c"&gt;# Install the CUDA Libraries&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[Builder] Installing CUDA System"&lt;/span&gt;
apt-get update
apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    cuda-libraries-&lt;span class="nv"&gt;$V_CUDA_DASH&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    cuda-libraries-dev-&lt;span class="nv"&gt;$V_CUDA_DASH&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    cuda-nvtx-&lt;span class="nv"&gt;$V_CUDA_DASH&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    cuda-minimal-build-&lt;span class="nv"&gt;$V_CUDA_DASH&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    cuda-license-&lt;span class="nv"&gt;$V_CUDA_DASH&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    cuda-command-line-tools-&lt;span class="nv"&gt;$V_CUDA_DASH&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    libnvvpi1 vpi1-dev

&lt;span class="c"&gt;# Link CUDA to /usr/local/cuda&lt;/span&gt;
&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /usr/local/cuda-&lt;span class="nv"&gt;$CUDA&lt;/span&gt; /usr/local/cuda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
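&lt;p&gt;The &lt;code&gt;ln -s&lt;/code&gt; at the end exists because most tooling expects a stable &lt;code&gt;/usr/local/cuda&lt;/code&gt; path regardless of the installed CUDA version. The pattern in isolation (throwaway paths under &lt;code&gt;/tmp&lt;/code&gt;, purely illustrative):&lt;/p&gt;

```shell
# Versioned directory plus stable symlink, as done for /usr/local/cuda
V_CUDA=10.2
mkdir -p /tmp/cuda-demo/cuda-$V_CUDA
ln -sfn /tmp/cuda-demo/cuda-$V_CUDA /tmp/cuda-demo/cuda
readlink /tmp/cuda-demo/cuda   # -> /tmp/cuda-demo/cuda-10.2
```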



&lt;p&gt;When we eventually start compiling, we will see CUDA enabled in our CMake output 🥳&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#   USE_CUDA              : ON&lt;/span&gt;
&lt;span class="c"&gt;#     Split CUDA          : OFF&lt;/span&gt;
&lt;span class="c"&gt;#     CUDA static link    : OFF&lt;/span&gt;
&lt;span class="c"&gt;#     USE_CUDNN           : OFF&lt;/span&gt;
&lt;span class="c"&gt;#     USE_EXPERIMENTAL_CUDNN_V8_API: OFF&lt;/span&gt;
&lt;span class="c"&gt;#     CUDA version        : 10.2&lt;/span&gt;
&lt;span class="c"&gt;#     CUDA root directory : /usr/local/cuda&lt;/span&gt;
&lt;span class="c"&gt;#     CUDA library        : /usr/local/cuda/lib64/stubs/libcuda.so&lt;/span&gt;
&lt;span class="c"&gt;#     cudart library      : /usr/local/cuda/lib64/libcudart.so&lt;/span&gt;
&lt;span class="c"&gt;#     cublas library      : /usr/local/cuda/lib64/libcublas.so&lt;/span&gt;
&lt;span class="c"&gt;#     cufft library       : /usr/local/cuda/lib64/libcufft.so&lt;/span&gt;
&lt;span class="c"&gt;#     curand library      : /usr/local/cuda/lib64/libcurand.so&lt;/span&gt;
&lt;span class="c"&gt;#     nvrtc               : /usr/local/cuda/lib64/libnvrtc.so&lt;/span&gt;
&lt;span class="c"&gt;#     CUDA include path   : /usr/local/cuda/include&lt;/span&gt;
&lt;span class="c"&gt;#     NVCC executable     : /usr/local/cuda/bin/nvcc&lt;/span&gt;
&lt;span class="c"&gt;#     NVCC flags          : &amp;lt;CUT&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;#     CUDA host compiler  : /usr/bin/clang&lt;/span&gt;
&lt;span class="c"&gt;#     NVCC --device-c     : OFF&lt;/span&gt;
&lt;span class="c"&gt;#     USE_TENSORRT        : OFF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Setting up PyTorch
&lt;/h4&gt;

&lt;p&gt;In a separate Docker stage, we set up PyTorch by cloning it into the working directory (in our case &lt;code&gt;/build/pytorch&lt;/code&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;V_PYTORCH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;v1.10.0

&lt;span class="c"&gt;# Downloads PyTorch to /build/pytorch&lt;/span&gt;
git clone &lt;span class="nt"&gt;--recursive&lt;/span&gt; &lt;span class="nt"&gt;--branch&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTORCH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; https://github.com/pytorch/pytorch /build/pytorch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
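&lt;p&gt;Note that &lt;code&gt;--branch&lt;/code&gt; also accepts a tag, which is what pins the clone to the v1.10.0 release. A self-contained demonstration against a throwaway local repository (all paths are illustrative):&lt;/p&gt;

```shell
# "git clone --branch TAG" checks out exactly that release tag
set -e
rm -rf /tmp/srcrepo /tmp/srcclone
git init -q /tmp/srcrepo
git -C /tmp/srcrepo -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git -C /tmp/srcrepo tag v1.10.0
git clone -q --branch v1.10.0 /tmp/srcrepo /tmp/srcclone
git -C /tmp/srcclone describe --tags   # -> v1.10.0
```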



&lt;h4&gt;
  
  
  Setting up Python 3.9
&lt;/h4&gt;

&lt;p&gt;We configure our Python version through the &lt;code&gt;deadsnakes&lt;/code&gt; PPA and link it as the default interpreter.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As a best practice we should use a &lt;code&gt;venv&lt;/code&gt;, but since this runs inside a Docker container, relinking the global interpreter should suffice.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Setting up Python 3.9&lt;/span&gt;
RUN add-apt-repository ppa:deadsnakes/ppa &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get update &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-dev&lt;/span&gt; python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-venv&lt;/span&gt; python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON_MAJOR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-tk&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; /usr/bin/python &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; /usr/bin/python3 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;which python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; /usr/bin/python &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;which python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; /usr/bin/python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON_MAJOR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl &lt;span class="nt"&gt;--silent&lt;/span&gt; &lt;span class="nt"&gt;--show-error&lt;/span&gt; https://bootstrap.pypa.io/get-pip.py | python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
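&lt;p&gt;For reference, the &lt;code&gt;venv&lt;/code&gt; route mentioned in the note above is only a couple of lines (the path below is illustrative); inside a throwaway container, relinking the global interpreter is simply less ceremony:&lt;/p&gt;

```shell
# Create an isolated environment instead of relinking /usr/bin/python
# (--without-pip skips the pip bootstrap to keep the demo minimal)
python3 -m venv --without-pip /tmp/demo-venv
/tmp/demo-venv/bin/python --version
```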



&lt;h4&gt;
  
  
  Compiling PyTorch
&lt;/h4&gt;

&lt;p&gt;The last step in the Dockerfile is to compile PyTorch. For this we set the correct environment variables to enable CUDA and speed up the build by turning off parts we do not need (e.g. MKLDNN, NNPACK, XNNPACK, ...).&lt;/p&gt;

&lt;p&gt;We also configure it to use &lt;code&gt;clang&lt;/code&gt; as the Nvidia Jetson has &lt;a href="https://qengineering.eu/install-pytorch-on-jetson-nano.html#imTextObject_80_462"&gt;NEON registers&lt;/a&gt; and clang supports those (GCC doesn't).&lt;/p&gt;

&lt;p&gt;For our source, we utilize the layer we created earlier and simply copy the code from there.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=downloader-pytorch /build/pytorch /build/pytorch&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /build/pytorch&lt;/span&gt;

&lt;span class="c"&gt;# PyTorch - Build - Prerequisites&lt;/span&gt;
&lt;span class="c"&gt;# Set clang as compiler&lt;/span&gt;
&lt;span class="c"&gt;# clang supports the ARM NEON registers&lt;/span&gt;
&lt;span class="c"&gt;# GNU GCC will give "no expression error"&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; CC=clang&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; CXX=clang++&lt;/span&gt;

&lt;span class="c"&gt;# Build&lt;/span&gt;
rm build/CMakeCache.txt || : \
sed -i -e "/^if(DEFINED GLIBCXX_USE_CXX11_ABI)/i set(GLIBCXX_USE_CXX11_ABI 1)" CMakeLists.txt \
pip install -r requirements.txt
python setup.py bdist_wheel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
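&lt;p&gt;These &lt;code&gt;ARG&lt;/code&gt;s work because PyTorch's &lt;code&gt;setup.py&lt;/code&gt; reads its build switches from environment variables; a Docker &lt;code&gt;ARG&lt;/code&gt; is visible as such during the build. The mechanism, shown without running an actual build:&lt;/p&gt;

```shell
# The build flags are plain environment variables consumed by setup.py
export USE_CUDA=ON USE_MKLDNN=0 BUILD_TEST=0
python3 -c 'import os; print(os.environ["USE_CUDA"], os.environ["USE_MKLDNN"])'   # -> ON 0
```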



&lt;h3&gt;
  
  
  Copying the result as an Artifact
&lt;/h3&gt;

&lt;p&gt;Docker Buildx is amazing in the sense that we can utilize the &lt;code&gt;--output type=local,dest=.&lt;/code&gt; option to write files to our local filesystem, letting us export the build result as an artifact.&lt;/p&gt;

&lt;p&gt;To achieve this, we start from the &lt;code&gt;scratch&lt;/code&gt; image and copy our result into it from the builder layer. Our &lt;code&gt;/&lt;/code&gt; path will then contain all the built wheels of PyTorch (e.g. &lt;code&gt;torch-1.10.0a0+git36449ea-cp39-cp39-linux_aarch64.whl&lt;/code&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; scratch as artifact&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /pytorch/dist/* /&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
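&lt;p&gt;Putting it together, the local invocation would look roughly like the following (target and destination names are illustrative). It is printed rather than executed here, since it needs a Docker daemon with an ARM-capable Buildx builder:&lt;/p&gt;

```shell
# Export the "artifact" stage of the build to ./wheels on the host
BUILDX_CMD='docker buildx build --platform linux/arm64 --target artifact --output type=local,dest=./wheels .'
echo "$BUILDX_CMD"
```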



&lt;h4&gt;
  
  
  Dockerfile Result
&lt;/h4&gt;

&lt;p&gt;Finally, the full Dockerfile will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# ##################################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Setup Nvidia CUDA for Jetson&lt;/span&gt;
&lt;span class="c"&gt;# ##################################################################################&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:18.04 as cuda-devel&lt;/span&gt;

&lt;span class="c"&gt;# Configuration Arguments&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; V_CUDA_MAJOR=10&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; V_CUDA_MINOR=2&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; V_L4T_MAJOR=32&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; V_L4T_MINOR=6&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; V_CUDA=${V_CUDA_MAJOR}.${V_CUDA_MINOR}&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; V_CUDA_DASH=${V_CUDA_MAJOR}-${V_CUDA_MINOR}&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; V_L4T=r${V_L4T_MAJOR}.${V_L4T_MINOR}&lt;/span&gt;

&lt;span class="c"&gt;# Expose environment variables everywhere&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; CUDA=${V_CUDA_MAJOR}.${V_CUDA_MINOR}&lt;/span&gt;

&lt;span class="c"&gt;# Accept default answers for everything&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; DEBIAN_FRONTEND=noninteractive&lt;/span&gt;

&lt;span class="c"&gt;# Fix CUDA info&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; DPKG_STATUS&lt;/span&gt;

&lt;span class="c"&gt;# Add NVIDIA repo/public key and install VPI libraries&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DPKG_STATUS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /var/lib/dpkg/status &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[Builder] Installing Prerequisites"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get update &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; ca-certificates software-properties-common curl gnupg2 apt-utils &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[Builder] Installing CUDA Repository"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl https://repo.download.nvidia.com/jetson/jetson-ota-public.asc &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apt/trusted.gpg.d/jetson-ota-public.asc &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb https://repo.download.nvidia.com/jetson/common &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_L4T&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; main"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /etc/apt/sources.list.d/nvidia-l4t-apt-source.list &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb https://repo.download.nvidia.com/jetson/t186 &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_L4T&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; main"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /etc/apt/sources.list.d/nvidia-l4t-apt-source.list &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[Builder] Installing CUDA System"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get update &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    cuda-libraries-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_CUDA_DASH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    cuda-libraries-dev-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_CUDA_DASH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    cuda-nvtx-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_CUDA_DASH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    cuda-minimal-build-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_CUDA_DASH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    cuda-license-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_CUDA_DASH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    cuda-command-line-tools-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_CUDA_DASH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    libnvvpi1 vpi1-dev &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /usr/local/cuda-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_CUDA&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; /usr/local/cuda &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="c"&gt;# Update environment&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; LIBRARY_PATH=/usr/local/cuda/lib64/stubs&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-fs&lt;/span&gt; /usr/share/zoneinfo/Europe/Brussels /etc/localtime

&lt;span class="c"&gt;# ##################################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Create PyTorch Docker Layer&lt;/span&gt;
&lt;span class="c"&gt;# We do this seperately since else we need to keep rebuilding&lt;/span&gt;
&lt;span class="c"&gt;# ##################################################################################&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; --platform=$BUILDPLATFORM ubuntu:18.04 as downloader-pytorch&lt;/span&gt;

&lt;span class="c"&gt;# Configuration Arguments&lt;/span&gt;
&lt;span class="c"&gt;# https://github.com/pytorch/pytorch&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; V_PYTORCH=v1.10.0&lt;/span&gt;
&lt;span class="c"&gt;# https://github.com/pytorch/vision&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; V_PYTORCHVISION=v0.11.1&lt;/span&gt;
&lt;span class="c"&gt;# https://github.com/pytorch/audio&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; V_PYTORCHAUDIO=v0.10.0&lt;/span&gt;

&lt;span class="c"&gt;# Install Git Tools&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; software-properties-common apt-utils git &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get clean

&lt;span class="c"&gt;# Accept default answers for everything&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; DEBIAN_FRONTEND=noninteractive&lt;/span&gt;

&lt;span class="c"&gt;# Clone Source&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;git clone &lt;span class="nt"&gt;--recursive&lt;/span&gt; &lt;span class="nt"&gt;--branch&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTORCH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; http://github.com/pytorch/pytorch

&lt;span class="c"&gt;# ##################################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Build PyTorch for Jetson (with CUDA)&lt;/span&gt;
&lt;span class="c"&gt;# ##################################################################################&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; cuda-devel as builder&lt;/span&gt;

&lt;span class="c"&gt;# Configuration Arguments&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; V_PYTHON_MAJOR=3&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; V_PYTHON_MINOR=9&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; V_PYTHON=${V_PYTHON_MAJOR}.${V_PYTHON_MINOR}&lt;/span&gt;

&lt;span class="c"&gt;# Accept default answers for everything&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; DEBIAN_FRONTEND=noninteractive&lt;/span&gt;

&lt;span class="c"&gt;# Download Common Software&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; clang build-essential bash ca-certificates git wget cmake curl software-properties-common ffmpeg libsm6 libxext6 libffi-dev libssl-dev xz-utils zlib1g-dev liblzma-dev

&lt;span class="c"&gt;# Setting up Python 3.9&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /install&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;add-apt-repository ppa:deadsnakes/ppa &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get update &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-dev&lt;/span&gt; python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-venv&lt;/span&gt; python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON_MAJOR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-tk&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; /usr/bin/python &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; /usr/bin/python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON_MAJOR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;which python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; /usr/bin/python &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;which python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; /usr/bin/python&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;V_PYTHON_MAJOR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl &lt;span class="nt"&gt;--silent&lt;/span&gt; &lt;span class="nt"&gt;--show-error&lt;/span&gt; https://bootstrap.pypa.io/get-pip.py | python

&lt;span class="c"&gt;# PyTorch - Build - Source Code Setup &lt;/span&gt;
&lt;span class="c"&gt;# copy everything from the downloader-pytorch layer /torch to /torch on this one&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=downloader-pytorch /pytorch /pytorch&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /pytorch&lt;/span&gt;

&lt;span class="c"&gt;# PyTorch - Build - Prerequisites&lt;/span&gt;
&lt;span class="c"&gt;# Set clang as compiler&lt;/span&gt;
&lt;span class="c"&gt;# clang supports the ARM NEON registers&lt;/span&gt;
&lt;span class="c"&gt;# GNU GCC will give "no expression error"&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; CC=clang&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; CXX=clang++&lt;/span&gt;

&lt;span class="c"&gt;# Set path to ccache&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; PATH=/usr/lib/ccache:$PATH&lt;/span&gt;

&lt;span class="c"&gt;# Other arguments&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; USE_CUDA=ON&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; USE_CUDNN=ON&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; BUILD_CAFFE2_OPS=0&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; USE_FBGEMM=0&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; USE_FAKELOWP=0&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; BUILD_TEST=0&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; USE_MKLDNN=0&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; USE_NNPACK=0&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; USE_XNNPACK=0&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; USE_QNNPACK=0&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; USE_PYTORCH_QNNPACK=0&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; TORCH_CUDA_ARCH_LIST="5.3;6.2;7.2"&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; USE_NCCL=0&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; USE_SYSTEM_NCCL=0&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; USE_OPENCV=0&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; USE_DISTRIBUTED=0&lt;/span&gt;

&lt;span class="c"&gt;# Build&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /pytorch &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm &lt;/span&gt;build/CMakeCache.txt &lt;span class="o"&gt;||&lt;/span&gt; : &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"/^if(DEFINED GLIBCXX_USE_CXX11_ABI)/i set(GLIBCXX_USE_CXX11_ABI 1)"&lt;/span&gt; CMakeLists.txt &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; python setup.py bdist_wheel &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; ..

&lt;span class="c"&gt;# ##################################################################################&lt;/span&gt;
&lt;span class="c"&gt;# Prepare Artifact&lt;/span&gt;
&lt;span class="c"&gt;# ##################################################################################&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; scratch as artifact&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /pytorch/dist/* /&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  GitHub Action Creation
&lt;/h3&gt;

&lt;p&gt;Now that PyTorch finally compiles, it is time to automate the build and publish the wheels as an artifact on GitHub (this way we can always trigger the build ourselves and kick off the process). I want the build to start automatically as soon as a release is published! So for our action, we have the following outline:&lt;/p&gt;

&lt;h4&gt;
  
  
  Workflow Outline
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;When a release is created, trigger the action&lt;/li&gt;
&lt;li&gt;Clone the repository&lt;/li&gt;
&lt;li&gt;Setup Docker with Buildx&lt;/li&gt;
&lt;li&gt;Run our container&lt;/li&gt;
&lt;li&gt;Copy over the Built Wheel to an artifact on GitHub&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Used Actions
&lt;/h4&gt;

&lt;p&gt;As for actions, the following actions could be reused:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://github.com/docker/setup-buildx-action"&gt;docker/setup-buildx-action&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Cross-compile for ARM on the AMD64 machines of the pipeline&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/docker/setup-qemu-action"&gt;docker/setup-qemu-action&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Configure QEMU to be able to compile for ARM and install the QEMU static binaries&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/actions/checkout"&gt;actions/checkout&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Check out a repo&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/actions/cache"&gt;actions/cache&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Cache the Docker layers between runs&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/actions/upload-artifact"&gt;actions/upload-artifact&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Upload the output of a directory to GitHub artifacts&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Result
&lt;/h4&gt;

&lt;p&gt;Finally resulting in the following GitHub action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ci&lt;/span&gt;

&lt;span class="c1"&gt;# https://docs.github.com/en/actions/learn-github-actions/events-that-trigger-workflows&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;main&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;release&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;created&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build_wheels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout Code&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up QEMU&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/setup-qemu-action@v1&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Docker Buildx&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;buildx&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/setup-buildx-action@v1&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cache Docker layers&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/cache@v2&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/tmp/.buildx-cache&lt;/span&gt;
      &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ runner.os }}-buildx-${{ github.sha }}&lt;/span&gt;
      &lt;span class="na"&gt;restore-keys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;${{ runner.os }}-buildx-&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build Docker Image&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;docker buildx build \&lt;/span&gt;
          &lt;span class="s"&gt;--platform=linux/arm64 \&lt;/span&gt;
          &lt;span class="s"&gt;--progress=plain \&lt;/span&gt;
          &lt;span class="s"&gt;--output type=local,dest=./wheels \&lt;/span&gt;
          &lt;span class="s"&gt;--file Dockerfile.jetson .&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Upload Artifacts&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/upload-artifact@v2&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wheels&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;wheels/*.whl&lt;/span&gt;

    &lt;span class="c1"&gt;# This ugly bit is necessary if you don't want your cache to grow forever&lt;/span&gt;
    &lt;span class="c1"&gt;# till it hits GitHub's limit of 5GB.&lt;/span&gt;
    &lt;span class="c1"&gt;# Temp fix&lt;/span&gt;
    &lt;span class="c1"&gt;# https://github.com/docker/build-push-action/issues/252&lt;/span&gt;
    &lt;span class="c1"&gt;# https://github.com/moby/buildkit/issues/1896&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Move cache&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;rm -rf /tmp/.buildx-cache&lt;/span&gt;
      &lt;span class="s"&gt;mv /tmp/.buildx-cache-new /tmp/.buildx-cache&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
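&lt;p&gt;Note that the cached layers only help if the build step actually reads from and writes to the cache directory. The workflow above does not yet pass any cache flags; a minimal sketch of how the &lt;code&gt;docker buildx build&lt;/code&gt; command could be wired up to that path (the flag values below are illustrative) would be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx build \
    --platform=linux/arm64 \
    --cache-from type=local,src=/tmp/.buildx-cache \
    --cache-to type=local,dest=/tmp/.buildx-cache-new,mode=max \
    --output type=local,dest=./wheels \
    --file Dockerfile.jetson .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is also why the "Move cache" step swaps &lt;code&gt;/tmp/.buildx-cache-new&lt;/code&gt; into place: each run writes a fresh cache, so the cached directory does not grow unboundedly.&lt;/p&gt;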



&lt;h3&gt;
  
  
  Yaml File or Link to Code
&lt;/h3&gt;

&lt;p&gt;The source code can be found on &lt;a href="https://github.com/XavierGeerinck/Jetson-Linux-PyTorch"&gt;GitHub&lt;/a&gt;, together with a build of the resulting wheel and the &lt;a href="https://github.com/XavierGeerinck/Jetson-Linux-PyTorch/blob/main/.github/workflows/build-wheel.yml"&gt;GitHub Action workflow&lt;/a&gt;.&lt;/p&gt;
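&lt;p&gt;Once the wheel artifact is downloaded onto the Jetson, installing it is a single pip command (the filename below is illustrative; it depends on the PyTorch and Python versions you built for):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip3 install torch-1.10.0-cp38-cp38-linux_aarch64.whl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;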

&lt;h3&gt;
  
  
  Future Work
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Extra Optimizations to be made
&lt;/h4&gt;

&lt;p&gt;Some extra support could still be added for the Nvidia Jetson Nano by adapting the source code, but this is currently out of scope for this project. These optimisations can be found in &lt;a href="https://qengineering.eu/install-pytorch-on-jetson-nano.html#imTextObject_80_425"&gt;QEngineering's post&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  GitHub Actions Improvements
&lt;/h4&gt;

&lt;p&gt;Currently, build arguments are included but not yet used. In theory, the following can be added to the &lt;code&gt;docker buildx&lt;/code&gt; command to build for other Python versions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Python 3.8&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx build &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--platform&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;linux/arm64 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--progress&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;plain &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--build-arg&lt;/span&gt; &lt;span class="nv"&gt;PYTHON_MAJOR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--build-arg&lt;/span&gt; &lt;span class="nv"&gt;PYTHON_MINOR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--output&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;local&lt;/span&gt;,dest&lt;span class="o"&gt;=&lt;/span&gt;./wheels &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--file&lt;/span&gt; Dockerfile.jetson &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This project was definitely not easy: builds take a long time, and figuring out where to build and how to automate it all took a while. By sharing this I hope to help the community utilize GPUs more easily with the latest Python versions.&lt;/p&gt;

&lt;p&gt;In a future article, I hope to show you how to run an actual AI model with CUDA enabled on the Nvidia Jetson Nano and Python &amp;gt; 3.6 😉 &lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;p&gt;All of the above was made possible by the contributions of others:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/soerensen3/buildx-pytorch-jetson"&gt;https://github.com/soerensen3/buildx-pytorch-jetson&lt;/a&gt; helped me with some of the Dockerfile code (but didn't had CUDA support)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>actionshackathon21</category>
      <category>github</category>
      <category>showdev</category>
      <category>nvidia</category>
    </item>
    <item>
      <title>Monitoring the Kubernetes Nginx Ingress Controller with Prometheus and Grafana</title>
      <dc:creator>Xavier Geerinck</dc:creator>
      <pubDate>Sun, 20 Sep 2020 19:22:06 +0000</pubDate>
      <link>https://dev.to/xaviergeerinck/monitoring-the-kubernetes-nginx-ingress-controller-with-prometheus-and-grafana-35gi</link>
      <guid>https://dev.to/xaviergeerinck/monitoring-the-kubernetes-nginx-ingress-controller-with-prometheus-and-grafana-35gi</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ft22ld8cbidyerzd6pidf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ft22ld8cbidyerzd6pidf.png" alt="cover"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/posts/infrastructure/kubernetes-nginx-ingress-controller"&gt;In a previous article&lt;/a&gt; I explained how we can set-up an Nginx Kubernetes Ingress Controller, but how can we now monitor this? This is what I would like to tackle in this article, on how we are able to utilize Prometheus and Grafana to start visualizing what is happening on our Ingress Controller.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;After following &lt;a href="https://dev.to/posts/infrastructure/kubernetes-nginx-ingress-controller"&gt;my previous article&lt;/a&gt; you will now have the following components running:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Nginx Ingress controller deployed in the &lt;code&gt;ingress-nginx&lt;/code&gt; namespace&lt;/li&gt;
&lt;li&gt;A demo application that we can reach through an &lt;code&gt;ingress&lt;/code&gt; route&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you do not have these 2 components running, I would recommend checking that article out first!&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Prometheus
&lt;/h2&gt;

&lt;p&gt;When we open the reference of &lt;a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/monitoring.md" rel="noopener noreferrer"&gt;&lt;code&gt;ingress-nginx&lt;/code&gt; online&lt;/a&gt; we can see that it should be quite straightforward to install Prometheus.&lt;/p&gt;

&lt;p&gt;Simply run the following to deploy and configure the Prometheus Server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;--kustomize&lt;/span&gt; github.com/kubernetes/ingress-nginx/deploy/prometheus/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done (~2 min) you will see some output of everything that was created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;serviceaccount/prometheus-server created
role.rbac.authorization.k8s.io/prometheus-server created
rolebinding.rbac.authorization.k8s.io/prometheus-server created
configmap/prometheus-configuration-hct76d4c56 created
service/prometheus-server created
deployment.apps/prometheus-server created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now on to the "hard" part - reconfiguring our Nginx.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: at the time of writing (20-SEP-2020) the documentation was incomplete, which made me spend five hours on the issue of reconfiguring Nginx...&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Reconfiguring Nginx
&lt;/h2&gt;

&lt;p&gt;Reconfiguring Nginx to send metrics to Prometheus might sound like an easy task, but it isn't... It appears that the official documentation is lacking on this point and that a &lt;a href="https://github.com/kubernetes/ingress-nginx/pull/6024" rel="noopener noreferrer"&gt;recent pull request&lt;/a&gt; requires the &lt;a href="https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md" rel="noopener noreferrer"&gt;ServiceMonitor&lt;/a&gt; to be used.&lt;/p&gt;

&lt;p&gt;Installing the ServiceMonitor is an option, but checking the &lt;a href="https://github.com/prometheus-operator/prometheus-operator" rel="noopener noreferrer"&gt;official repository&lt;/a&gt; shows that it is still in "Beta", so it's not really a good option here.&lt;/p&gt;

&lt;p&gt;Luckily for us we can always check the values defined in our &lt;a href="https://github.com/kubernetes/ingress-nginx/blob/master/charts/ingress-nginx/values.yaml" rel="noopener noreferrer"&gt;helm chart&lt;/a&gt;, which shows that we can still utilize the &lt;a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus" rel="noopener noreferrer"&gt;prometheus annotations&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Therefore, we upgrade our Helm chart with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade ingress-controller ingress-nginx/ingress-nginx &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; ingress-nginx &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; controller.metrics.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set-string&lt;/span&gt; controller.podAnnotations.&lt;span class="s2"&gt;"prometheus&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;io/scrape"&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set-string&lt;/span&gt; controller.podAnnotations.&lt;span class="s2"&gt;"prometheus&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;io/port"&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"10254"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we validate this through the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm get values ingress-controller &lt;span class="nt"&gt;--namespace&lt;/span&gt; ingress-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that our values are now set as they should be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;controller:
  metrics:
    enabled: &lt;span class="nb"&gt;true
    &lt;/span&gt;service:
      annotations:
        prometheus.io/port: &lt;span class="s2"&gt;"10254"&lt;/span&gt;
        prometheus.io/scrape: &lt;span class="s2"&gt;"true"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When I now open the Prometheus dashboard (remember: run &lt;code&gt;kubectl get nodes&lt;/code&gt; to get the external IP address, and open it on the Prometheus port shown by &lt;code&gt;kubectl get svc -A&lt;/code&gt;) and start typing &lt;code&gt;ng&lt;/code&gt;, it shows our metrics!&lt;/p&gt;
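&lt;p&gt;If your nodes are not directly reachable, a port-forward works just as well for a quick look (service name taken from the deployment output earlier; adjust the namespace if you deployed elsewhere):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Forward the Prometheus UI to http://localhost:9090
kubectl port-forward service/prometheus-server 9090:9090 --namespace ingress-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;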

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2Fprometheus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2Fprometheus.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Grafana
&lt;/h2&gt;

&lt;p&gt;From here on, it's a smooth ride to the finish line! We can simply follow the &lt;a href="https://kubernetes.github.io/ingress-nginx/user-guide/monitoring/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; and install Grafana with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;--kustomize&lt;/span&gt; github.com/kubernetes/ingress-nginx/deploy/grafana/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done, we do the same as for Prometheus and open the Grafana dashboard in our browser (&lt;code&gt;kubectl get nodes&lt;/code&gt;; &lt;code&gt;kubectl get svc -A&lt;/code&gt;) where we can use &lt;code&gt;admin:admin&lt;/code&gt; as our credentials and run the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click "Add data source"&lt;/li&gt;
&lt;li&gt;Select "Prometheus"&lt;/li&gt;
&lt;li&gt;Enter the details (note: I used &lt;code&gt;http://CLUSTER_IP_PROMETHEUS_SVC:9090&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Left menu (hover over +) -&amp;gt; Dashboard&lt;/li&gt;
&lt;li&gt;Click "Import"&lt;/li&gt;
&lt;li&gt;Paste the JSON copied from &lt;a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/grafana/dashboards/nginx.json" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/grafana/dashboards/nginx.json&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Click Import JSON&lt;/li&gt;
&lt;li&gt;Select the Prometheus data source&lt;/li&gt;
&lt;li&gt;Click "Import"&lt;/li&gt;
&lt;/ol&gt;
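&lt;p&gt;For step 3, the ClusterIP of the Prometheus service can be looked up with a one-liner instead of scanning the full &lt;code&gt;kubectl get svc -A&lt;/code&gt; output (service name and namespace as created earlier):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get service prometheus-server --namespace ingress-nginx \
    --output jsonpath='{.spec.clusterIP}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;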

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2Fgrafana.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2Fgrafana.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article we learned how to start monitoring the ingress controller of our earlier application. The goal is to be able, in a next post, to auto-scale the created infrastructure based on the incoming requests!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; feel free to check the original post at my blog &lt;a href="https://xaviergeerinck.com" rel="noopener noreferrer"&gt;https://xaviergeerinck.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>kubernetes</category>
      <category>grafana</category>
      <category>prometheus</category>
      <category>nginx</category>
    </item>
    <item>
      <title>Creating a Kubernetes Nginx Ingress Controller and create a rule to a sample application</title>
      <dc:creator>Xavier Geerinck</dc:creator>
      <pubDate>Sun, 20 Sep 2020 19:16:06 +0000</pubDate>
      <link>https://dev.to/xaviergeerinck/creating-a-kubernetes-nginx-ingress-controller-and-create-a-rule-to-a-sample-application-4and</link>
      <guid>https://dev.to/xaviergeerinck/creating-a-kubernetes-nginx-ingress-controller-and-create-a-rule-to-a-sample-application-4and</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvhrfc839gdajrqvsc940.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvhrfc839gdajrqvsc940.png" alt="cover"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whenever you are creating an application that you want to expose to the outside world, it's always smart to control the flow towards the application behind it. That's why Kubernetes has something called &lt;code&gt;Kubernetes Ingress&lt;/code&gt;. But what is it?&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes Ingress
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noopener noreferrer"&gt;Kubernetes Ingress&lt;/a&gt; allows you to expose HTTP and HTTPS routes from outside the cluster to services within the cluster. The traffic routing is then controlled by rules defined in the ingress sources.&lt;/p&gt;

&lt;p&gt;For this article, I will explain how you can get started on creating your own &lt;code&gt;Nginx Ingress Controller&lt;/code&gt;. Of course this is not the only possibility, &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="noopener noreferrer"&gt;so feel free to check other ingress controllers such as Istio, HAProxy, Traefik, ...&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some advantages of using an ingress controller:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rate limiting, Timeouts, ...&lt;/li&gt;
&lt;li&gt;Authentication&lt;/li&gt;
&lt;li&gt;Content based routing&lt;/li&gt;
&lt;/ul&gt;
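&lt;p&gt;To make the content-based routing concrete, a minimal Ingress resource for the demo application we build below could look like this (the host and service name are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-node-sample-helloworld
            port:
              number: 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;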

&lt;h2&gt;
  
  
  Sample Hello World application
&lt;/h2&gt;

&lt;p&gt;Before we create our controller, let's get started on creating a simple demo application. The only thing our application will do is process the HTTP request, wait a couple of seconds and return a "Hello World" response.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating our sample app
&lt;/h3&gt;

&lt;p&gt;I decided to create this application in Node.js. So if you have &lt;code&gt;npm&lt;/code&gt; and &lt;code&gt;node&lt;/code&gt; installed, run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm init &lt;span class="nt"&gt;-y&lt;/span&gt;
npm i express &lt;span class="nt"&gt;--save&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, you can create an &lt;code&gt;index.js&lt;/code&gt; file with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Got request, waiting a bit&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello World!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Example app listening at http://localhost:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;delay&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;timeout&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Packaging it as a container
&lt;/h3&gt;

&lt;p&gt;Since everything is created in terms of application code, we can package it all up into a Docker container by creating a Dockerfile:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dockerfile&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:latest

WORKDIR /usr/src/app

# Install deps
RUN apt-get update

# Create Certificate
RUN apt-get install ca-certificates

# Install package.json dependencies
COPY package.json .
RUN npm install

# Copy Source Code
ADD . /usr/src/app

CMD [ "npm", "run", "start" ]
EXPOSE 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then build it with (choose one for your use-case):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Local build (for local use)&lt;/span&gt;
&lt;span class="c"&gt;# Note: when using minikube, make sure to run `eval $(minikube docker-env)` to build images in minikube context&lt;/span&gt;
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="nb"&gt;local&lt;/span&gt;/node-sample-helloworld &lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="c"&gt;# Remote build (to push to docker repository)&lt;/span&gt;
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; thebillkidy/node-sample-helloworld &lt;span class="nb"&gt;.&lt;/span&gt;
docker push thebillkidy/node-sample-helloworld
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Running it on Kubernetes
&lt;/h3&gt;

&lt;p&gt;Once it is built, we can run it on our Kubernetes cluster. For that, we create a Deployment YAML file:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kubernetes.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;d-node-sample-helloworld&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node-sample-helloworld&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node-sample-helloworld&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;thebillkidy/node-sample-helloworld:latest&lt;/span&gt; &lt;span class="c1"&gt;# if local, utilize local/node-sample-helloworld&lt;/span&gt;
        &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt; &lt;span class="c1"&gt;# if local, utilize Never&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can apply this with &lt;code&gt;kubectl apply -f kubernetes.yaml&lt;/code&gt;; running &lt;code&gt;kubectl get deployments -A&lt;/code&gt; should then show the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
d-node-sample-helloworld   1/1     1            1           37s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kubernetes is getting more popular every day, and it's no wonder why! Whether you run applications on-premises or in the cloud, being able to package them portably is a strong advantage: it removes the friction of scaling out once you are ready for it, and even enables bursting scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nginx Ingress Controller
&lt;/h2&gt;

&lt;p&gt;We now have a simple Hello World application running, but it's only available internally! We could expose it through a Kubernetes LoadBalancer service, but let's actually utilize an Ingress Controller here! So let's get started creating it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;The first step is to create the NGINX Ingress controller. For this, we can follow these steps:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: previously the &lt;code&gt;stable/nginx-ingress&lt;/code&gt; chart was used, but it is now deprecated!&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# ref: https://github.com/kubernetes/ingress-nginx (repo)&lt;/span&gt;
&lt;span class="c"&gt;# ref: https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx (chart)&lt;/span&gt;

&lt;span class="c"&gt;# 1. Create namespace&lt;/span&gt;
kubectl create namespace ingress-nginx

&lt;span class="c"&gt;# 2. Add the repository&lt;/span&gt;
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

&lt;span class="c"&gt;# 3. Update the repo&lt;/span&gt;
helm repo update

&lt;span class="c"&gt;# 4. Install nginx-ingress through Helm&lt;/span&gt;
helm &lt;span class="nb"&gt;install &lt;/span&gt;ingress-controller ingress-nginx/ingress-nginx &lt;span class="nt"&gt;--namespace&lt;/span&gt; ingress-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we have run the above, we should be able to access the ingress controller by loading its external IP (&lt;code&gt;kubectl -n ingress-nginx get svc&lt;/code&gt;).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; when working with Minikube or others, we can utilize &lt;code&gt;kubectl port-forward svc/ingress-controller-ingress-nginx-controller --namespace ingress-nginx --address 0.0.0.0 8000:80&lt;/code&gt; and access it on &lt;code&gt;http://localhost:8000&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We are now ready to expose our application!&lt;/p&gt;

&lt;h3&gt;
  
  
  Exposing our Application
&lt;/h3&gt;

&lt;p&gt;Now that the ingress controller is created, we first need to expose our application internally as a Service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl expose deployment d-node-sample-helloworld &lt;span class="nt"&gt;--name&lt;/span&gt; svc-node-sample-helloworld
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
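&lt;p&gt;For reference, &lt;code&gt;kubectl expose&lt;/code&gt; generates a Service object for us; the equivalent manifest would look roughly like this (a sketch, assuming the labels and container port used above):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-node-sample-helloworld
spec:
  selector:
    app: node-sample-helloworld # matches the Deployment's pod labels
  ports:
  - port: 3000
    targetPort: 3000 # the containerPort from the Deployment
```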



&lt;p&gt;and configure our Ingress controller to route traffic to it, as defined in the &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="noopener noreferrer"&gt;Kubernetes Ingress API&lt;/a&gt;, by creating a YAML file:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ingress-node-sample-helloworld.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-node-sample-helloworld&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Target URI where the traffic must be redirected&lt;/span&gt;
    &lt;span class="c1"&gt;# More info: https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Uncomment the below to only allow traffic from this domain and route based on it&lt;/span&gt;
    &lt;span class="c1"&gt;# - host: my-host # your domain name with A record pointing to the nginx-ingress-controller IP&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt; &lt;span class="c1"&gt;# Everything on this path will be redirected to the rewrite-target&lt;/span&gt;
          &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;svc-node-sample-helloworld&lt;/span&gt; &lt;span class="c1"&gt;# the exposed svc name and port&lt;/span&gt;
            &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We apply this with &lt;code&gt;kubectl apply -f ingress-node-sample-helloworld.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now once this is applied, we should be able to execute a cURL request to access our application! So let's try this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Execute a GET request with the specified host and IP&lt;/span&gt;
&lt;span class="c"&gt;# Note: the Host should match what is written in spec.rules.host&lt;/span&gt;
curl &lt;span class="nt"&gt;-k&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; &lt;span class="s2"&gt;"GET"&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Host: my-host"&lt;/span&gt; http://YOUR_IP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or we can also open it in our browser and navigate to &lt;a href="http://YOUR_IP" rel="noopener noreferrer"&gt;http://YOUR_IP&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If this is not working, check &lt;code&gt;kubectl describe ing&lt;/code&gt; and make sure that the configuration is correct&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article demonstrated how you can set up your own ingress controller for Kubernetes. This is of course only a small step in the entire chain of use cases; most often you will want to do more, such as rate limiting or monitoring.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/posts/infrastructure/kubernetes-nginx-ingress-controller-monitoring-prometheus"&gt;The next article&lt;/a&gt; will explain more in-depth how you are able to start monitoring what we have just set-up through Prometheus and visualize all of it in Grafana.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: feel free to check the original post at my blog &lt;a href="https://xaviergeerinck.com" rel="noopener noreferrer"&gt;https://xaviergeerinck.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>kubernetes</category>
      <category>node</category>
    </item>
  </channel>
</rss>
