<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vicente J. Jiménez Miras</title>
    <description>The latest articles on DEV Community by Vicente J. Jiménez Miras (@vjjmiras).</description>
    <link>https://dev.to/vjjmiras</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F745513%2F6b19309b-d781-4a7f-b087-620006c4d4e7.png</url>
      <title>DEV Community: Vicente J. Jiménez Miras</title>
      <link>https://dev.to/vjjmiras</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vjjmiras"/>
    <language>en</language>
    <item>
      <title>Getting started with gVisor support in Falco</title>
      <dc:creator>Vicente J. Jiménez Miras</dc:creator>
      <pubDate>Fri, 16 Sep 2022 12:19:02 +0000</pubDate>
      <link>https://dev.to/vjjmiras/getting-started-with-gvisor-support-in-falco-akp</link>
      <guid>https://dev.to/vjjmiras/getting-started-with-gvisor-support-in-falco-akp</guid>
      <description>&lt;p&gt;In version 0.32.1, &lt;a href="https://falco.org/blog/falco-0-32-1/"&gt;Falco first introduced support&lt;/a&gt; for &lt;strong&gt;&lt;a href="https://gvisor.dev/"&gt;gVisor&lt;/a&gt;&lt;/strong&gt;. So, what is it and how can we use it?&lt;/p&gt;

&lt;p&gt;gVisor, quoting the &lt;a href="https://gvisor.dev/docs"&gt;official documentation&lt;/a&gt;, is an application kernel that provides an &lt;strong&gt;additional layer of isolation&lt;/strong&gt; between running applications and the host operating system. It delivers an additional security boundary for containers by &lt;strong&gt;intercepting and monitoring workload runtime instructions in user space&lt;/strong&gt; before they can reach the underlying host.&lt;/p&gt;

&lt;p&gt;Falco, on the other hand, works by &lt;a href="https://falco.org/docs/"&gt;monitoring runtime system calls&lt;/a&gt;, normally in kernel space via a kernel module or an eBPF probe, that are then evaluated against the flexible and powerful &lt;a href="https://falco.org/docs/rules/"&gt;Falco rule engine&lt;/a&gt; and so used to trigger security alerts.&lt;/p&gt;

&lt;p&gt;Before version 0.32.1, Falco could not work with gVisor-monitored sandboxes because it is not possible to install a kernel module or eBPF probe in such an environment. But wouldn't it be great to &lt;strong&gt;leverage the stream of system call information that gVisor collects through its powerful monitoring system directly in Falco&lt;/strong&gt;? This is exactly what became possible with gVisor release 20220704.0 and Falco 0.32.1.&lt;/p&gt;

&lt;p&gt;In this article, you will learn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✨ The magic that allows Falco and gVisor to work together&lt;/li&gt;
&lt;li&gt;🚀 How to run Falco with gVisor on your host with Docker&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Falco and gVisor work together
&lt;/h2&gt;

&lt;p&gt;When running containers with gVisor, there are several components that interact with our workload:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zRcGQLco--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/83dkeug5j8se60guuji3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zRcGQLco--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/83dkeug5j8se60guuji3.png" alt="gVisor architechture" width="880" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Sentry is the gVisor component that implements all the application kernel functionalities. In particular, from the Falco perspective, the Sentry abstracts the system call layer and manages almost every syscall an application can ever execute. In other words, the Sentry could be seen as an alternative to our drivers: it is in the right position to put together the information contained in the events that our eBPF probe or kernel module usually generates. So, how do we turn the Sentry into a &lt;em&gt;new driver&lt;/em&gt; for Falco?&lt;/p&gt;

&lt;p&gt;The key observation here is that there is one Sentry process for each gVisor sandbox. If we want them to be able to communicate with Falco, we must set up a form of inter-process communication.&lt;/p&gt;

&lt;p&gt;We decided to use a UDS (Unix Domain Socket) to handle the communication between each Sentry and Falco. Falco acts as the server: it is responsible for setting up the socket and listening for connections. Each Sentry process, in turn, acts as a client and is configured to connect to the endpoint where Falco is listening.&lt;/p&gt;

&lt;p&gt;Whenever a syscall is executed inside the sandboxed application, the Sentry will handle it as usual and additionally send a message to Falco through the UDS. Messages are serialized with Protocol Buffers so that gVisor and Falco can communicate even though they are written in different programming languages.&lt;/p&gt;

&lt;p&gt;Once a message related to a syscall is received, Falco unpacks it and creates the corresponding event in a way that is consumable by our libraries. This way, it is possible to update necessary state information and trigger Falco rules whenever a match occurs!&lt;/p&gt;
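&lt;p&gt;The flow above can be sketched in a few lines of Python. This is an illustrative toy, not Falco's actual implementation: the real messages are protobuf-encoded gVisor events, and the simple length-prefix framing used here is an assumption made only for the demo.&lt;/p&gt;

```python
# Toy sketch of the Falco/gVisor transport: a Unix Domain Socket server
# (the "Falco" side) receives one length-prefixed message from a client
# (the "Sentry" side). Real messages are protobuf-encoded syscall events.
import os
import socket
import struct
import tempfile
import threading

SOCK = os.path.join(tempfile.mkdtemp(), "gvisor.sock")
received = []

def falco_side(ready):
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(SOCK)          # Falco creates the socket and listens
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn, conn.makefile("rb") as stream:
            # Read the 4-byte big-endian length, then the payload itself.
            (size,) = struct.unpack("!I", stream.read(4))
            received.append(stream.read(size))

ready = threading.Event()
server = threading.Thread(target=falco_side, args=(ready,))
server.start()
ready.wait()

# The Sentry side: connect to the endpoint and send one serialized "event".
payload = b"openat:/bin/foo"    # stand-in for a protobuf-encoded event
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
    cli.connect(SOCK)
    cli.sendall(struct.pack("!I", len(payload)) + payload)
server.join()

print(received[0])              # → b'openat:/bin/foo'
```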

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vxq08yJO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7h2eeh9sl4d93cswqhe7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vxq08yJO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7h2eeh9sl4d93cswqhe7.png" alt="Falco monitors gVisor syscalls" width="880" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Set up gVisor sandbox monitoring with Falco
&lt;/h2&gt;

&lt;p&gt;First, &lt;a href="https://falco.org/docs/getting-started/installation/"&gt;install &lt;strong&gt;Falco&lt;/strong&gt;&lt;/a&gt; 0.32.1 or above and &lt;a href="https://gvisor.dev/docs/user_guide/install/"&gt;install the &lt;strong&gt;gVisor runsc tool 20220704.0&lt;/strong&gt;&lt;/a&gt; or above.&lt;/p&gt;

&lt;p&gt;You can check the version by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;falco &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;... which needs to report 0.32.1 or above and:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;runsc &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;... which needs to report &lt;code&gt;release-20220704.0&lt;/code&gt; or above.&lt;/p&gt;

&lt;p&gt;gVisor needs to be configured to send events to Falco. Download the appropriate configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;wget &lt;span class="nt"&gt;-O&lt;/span&gt; /etc/docker/runsc_falco_config.json &lt;span class="se"&gt;\&lt;/span&gt;
    https://falco.org/blog/intro-gvisor-falco/assets/config.json

&lt;span class="c"&gt;# Don't forget to protect this configuration&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;640 /etc/docker/runsc_falco_config.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The easiest way to run a gVisor sandbox is by using Docker. For this reason, you first need to &lt;a href="https://gvisor.dev/docs/user_guide/quick_start/docker/"&gt;configure Docker to work with gVisor via &lt;code&gt;runsc install&lt;/code&gt;&lt;/a&gt;, and then update the &lt;code&gt;runsc&lt;/code&gt; runtime configuration for our Docker containers to point at the pod init config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; /etc/docker/daemon.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, add the &lt;code&gt;runtimeArgs&lt;/code&gt; key with the &lt;code&gt;--pod-init-config=&lt;/code&gt; parameter like in the example below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"runtimes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"runsc"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/usr/local/bin/runsc"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;---&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Do&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;not&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;forget&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;comma&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;at&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;end&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;previous&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;line.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;---&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;---&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Add&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;following&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;not&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;these&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;instructions.&lt;/span&gt;&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="err"&gt;---&lt;/span&gt;&lt;span class="w"&gt;

            &lt;/span&gt;&lt;span class="nl"&gt;"runtimeArgs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"--pod-init-config=/etc/docker/runsc_falco_config.json"&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;---&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;End&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;added&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;config.&lt;/span&gt;&lt;span class="w"&gt;                                     &lt;/span&gt;&lt;span class="err"&gt;---&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;---&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Have&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;I&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;told&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;not&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;include&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;these&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;instructions.&lt;/span&gt;&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="err"&gt;---&lt;/span&gt;&lt;span class="w"&gt;

            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, restart the Docker daemon to let it use the new configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Runtime detection in action
&lt;/h2&gt;

&lt;p&gt;Now it's time to put everything together and see how to use Falco to monitor gVisor sandboxes!&lt;br&gt;
To start monitoring gVisor sandboxes, you can use the &lt;code&gt;-g&lt;/code&gt; or &lt;code&gt;--gvisor-config&lt;/code&gt; options, passing the path to the pod init config. Falco uses that config file for two main reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extract the path of the UDS that needs to be created&lt;/li&gt;
&lt;li&gt;Create a trace session for all the already existing gVisor sandboxes. New ones will directly connect to the running Falco instance as we configured in the previous step.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Run Falco stand-alone
&lt;/h3&gt;

&lt;p&gt;Simply run Falco on the command line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;falco &lt;span class="nt"&gt;--gvisor-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/docker/runsc_falco_config.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You're now monitoring your gVisor sandboxes!&lt;/p&gt;

&lt;h3&gt;
  
  
  Example permanent configuration with systemd
&lt;/h3&gt;

&lt;p&gt;Alternatively, for a more permanent configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; /etc/systemd/system/falco.service.d
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/systemd/system/falco.service.d/gvisor.conf
[Service]
ExecStartPre=
ExecStopPost=
ExecStart=
ExecStart=/usr/bin/falco --gvisor-config=/etc/docker/runsc_falco_config.json
PrivateTmp=false
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart falco
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Setting the &lt;code&gt;PrivateTmp&lt;/code&gt; parameter to false in the unit configuration is needed because &lt;code&gt;/etc/docker/runsc_falco_config.json&lt;/code&gt; points to &lt;code&gt;/tmp/gvisor.sock&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Changing the socket path from &lt;code&gt;/tmp/gvisor.sock&lt;/code&gt; to &lt;code&gt;/run/gvisor.sock&lt;/code&gt; would make that parameter unnecessary, and the temporary directory would remain private.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Falco will confirm that it loaded the configuration with a line similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Thu Jul 21 15:41:58 2022: Enabled event collection from gVisor. Configuration path: /etc/docker/runsc_falco_config.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can run any container with gVisor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;docker run &lt;span class="nt"&gt;--runtime&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;runsc &lt;span class="nt"&gt;-it&lt;/span&gt; ubuntu bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If all goes well, the container will start properly configured to be monitored by Falco! To test the detection capabilities, try to trigger a simple rule like &lt;em&gt;Write below binary dir&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;touch&lt;/span&gt; /bin/foo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see Falco alerting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;07:47:42.173335167: Error File below a known binary directory opened for writing (user=root user_loginuid=0 command=touch touch /bin/foo file=/bin/foo parent=bash pcmdline=bash bash gparent=&amp;lt;NA&amp;gt; container_id=f6d77af4ee3d image=ubuntu) container=f6d77af4ee3d pid=8 tid=8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Falco and gVisor in action
&lt;/h3&gt;

&lt;p&gt;If you don't have time to try it right now, here is a short video showing every step to follow.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/zjK1FlSe1ow"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;And if you liked this step-by-step tutorial, don't miss the one that Google has published on the gVisor blog: &lt;a href="https://gvisor.dev/docs/tutorials/falco/"&gt;Configuring Falco with gVisor&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations and syscall support
&lt;/h2&gt;

&lt;p&gt;Falco supports many &lt;a href="https://falco.org/docs/rules/supported-events/"&gt;system call events&lt;/a&gt;. In this first release, gVisor does not support all of them. Our focus was to ensure that the most important events used in the default rulesets are covered, and that enough information about processes, file descriptors, and connections flows through to maintain data consistency throughout analysis and rule processing. To support an event, the gVisor Sentry needs to emit it and Falco needs to be able to parse and ingest it.&lt;/p&gt;

</description>
      <category>gvisor</category>
      <category>falco</category>
      <category>containers</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Manage Falco easier with Giant Swarm App Platform</title>
      <dc:creator>Vicente J. Jiménez Miras</dc:creator>
      <pubDate>Wed, 10 Aug 2022 13:00:33 +0000</pubDate>
      <link>https://dev.to/vjjmiras/manage-falco-easier-with-giant-swarm-app-platform-3mc9</link>
      <guid>https://dev.to/vjjmiras/manage-falco-easier-with-giant-swarm-app-platform-3mc9</guid>
      <description>&lt;p&gt;In this article, you will learn how Giant Swarm simplifies the maintenance of the software stack within Kubernetes clusters by using its App Platform technology. Additionally, we will show how customers can leverage this to easily deploy Falco, either individually or as part of Giant Swarm's Security Pack, to secure their managed Kubernetes service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Giant Swarm
&lt;/h3&gt;

&lt;p&gt;Having CoreOS, Fleet, and Docker as base technologies, &lt;a href="https://www.giantswarm.io/about"&gt;Giant Swarm&lt;/a&gt; was founded in 2014. In 2016, it chose Kubernetes to reinvent itself. And just a year later, in 2017, it became one of the founding members of the &lt;a href="https://linuxfoundation.org/press-release/cloud-native-computing-foundation-announces-first-kubernetes-certified-service-providers/"&gt;Kubernetes Certified Service Providers&lt;/a&gt;. Customers like &lt;a href="https://www.giantswarm.io/customers/adidas"&gt;Adidas&lt;/a&gt; or &lt;a href="https://www.giantswarm.io/customers/vodafone"&gt;Vodafone&lt;/a&gt; back a company that, supported by a &lt;a href="https://www.giantswarm.io/blog/surviving-and-thriving-how-to-really-work-emotely"&gt;fully remote team&lt;/a&gt;, has been able to anticipate trends in technology and working lifestyle.&lt;/p&gt;

&lt;p&gt;As a managed Kubernetes company, its services and infrastructure enable enterprises to run resilient distributed systems at scale while removing the burden of Day 2 operations. Giant Swarm takes pride in delivering a fully open-source platform that's carefully curated and opinionated.&lt;/p&gt;

&lt;h4&gt;
  
  
  Security and simplicity
&lt;/h4&gt;

&lt;p&gt;Giant Swarm takes security as seriously as ease of management. Hence, when using a managed Kubernetes platform, everything that happens on the &lt;a href="https://docs.giantswarm.io/general/management-clusters/"&gt;management cluster&lt;/a&gt; is as important as the performance of the workload cluster itself.&lt;/p&gt;

&lt;p&gt;That's why, by leveraging operators to control all the resources that clusters need as Custom Resources, Giant Swarm can deploy and update its management clusters in the quickest possible way. Needless to say, this is exactly what Giant Swarm offers its customers to manage their applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Falco, the Runtime Security Project
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://falco.org"&gt;Falco&lt;/a&gt; is the de facto Kubernetes threat detection engine, and also extends its reach to cloud and Linux hosts. It monitors the behavior of every process in the node and can alert us when something fishy happens. &lt;/p&gt;

&lt;p&gt;How does Falco do that? Based on a set of &lt;a href="http://falco.org/docs/rules"&gt;rules&lt;/a&gt; that Falco interprets at startup time, it waits for events and &lt;a href="https://falco.org/docs/rules/supported-events/"&gt;syscalls&lt;/a&gt; that would trigger one of those rules. When a rule is triggered, Falco raises an alert and, thanks to applications like &lt;a href="https://github.com/falcosecurity/falcosidekick"&gt;Falco Sidekick&lt;/a&gt;, allows teams to react accordingly.&lt;/p&gt;
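&lt;p&gt;As a quick illustration of what a rule looks like, here is a simplified example in Falco's YAML rule syntax. It is not taken verbatim from the default ruleset, which builds the equivalent rule out of macros and lists:&lt;/p&gt;

```yaml
# Simplified, illustrative Falco rule; the shipped rules use macros and
# lists instead of inlining the whole condition.
- rule: Write below bin dir (example)
  desc: Detect an attempt to write under /bin
  condition: evt.type in (open, openat) and evt.is_open_write=true and fd.directory=/bin
  output: "File below /bin opened for writing (user=%user.name command=%proc.cmdline file=%fd.name)"
  priority: ERROR
```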

&lt;p&gt;But with great power comes great responsibility. What happens when we start getting false positives, our Falco rules haven't been updated for some months, or our Falco daemon is a few versions behind? The answer is as simple as updating. Well, maybe not that simple if we are responsible for tens of clusters with hundreds of nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Giant Swarm App Platform
&lt;/h3&gt;

&lt;p&gt;Giant Swarm describes &lt;a href="https://docs.giantswarm.io/app-platform/overview/"&gt;App Platform&lt;/a&gt; as a set of features that allow you to browse, install, and manage the configurations of &lt;a href="https://docs.giantswarm.io/app-platform/apps/"&gt;managed apps&lt;/a&gt; from a single place: The management cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oPckicRl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ek12j4krofs5punr9gd3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oPckicRl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ek12j4krofs5punr9gd3.png" alt="Giant Swarm App Platform - Managed Apps" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The technology behind it is simple: Apps are packaged as &lt;a href="https://helm.sh/docs/intro/using_helm/"&gt;Helm charts&lt;/a&gt;, can be configured with values, overridden with a different app configuration, etc. - whatever meets your needs. To deploy, a custom resource (defined through a &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions"&gt;Custom Resource Definition&lt;/a&gt;, or CRD) is created, interpreted by the &lt;a href="http://github.com/giantswarm/app-operator"&gt;App Operator&lt;/a&gt; (running on the management cluster), assigned to the &lt;a href="https://github.com/giantswarm/chart-operator"&gt;Chart Operator&lt;/a&gt; (running on the workload cluster), and in a few seconds, our application is deployed on as many clusters as desired.&lt;/p&gt;
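&lt;p&gt;As a sketch of what such a custom resource can look like (a hypothetical example: the field layout follows the App CRD schema at the time of writing, and the namespaces and version are placeholders to adapt to your installation):&lt;/p&gt;

```yaml
# Hypothetical App custom resource; names and the version are placeholders.
apiVersion: application.giantswarm.io/v1alpha1
kind: App
metadata:
  name: falco
  namespace: org-demo    # placeholder namespace on the management cluster
spec:
  catalog: giantswarm
  name: falco            # chart name in the catalog
  namespace: falco       # target namespace on the workload cluster
  version: 0.0.0         # placeholder: use a released chart version
  kubeConfig:
    inCluster: false     # false: deploy to a remote workload cluster
```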

&lt;p&gt;The App Platform offers its repertoire of applications from the App Catalog. Giant Swarm provides two App Catalogs out of the box: the Giant Swarm Catalog and the Giant Swarm Playground. But what we love most about the App Platform is that we can add our own catalogs, storing our applications and configurations.&lt;/p&gt;

&lt;h3&gt;
  
  
  What does it look like on the CLI?
&lt;/h3&gt;

&lt;p&gt;It's now time to see App Platform running. Let's walk through its deployment on a &lt;strong&gt;minikube&lt;/strong&gt; cluster. Following these instructions, it shouldn't take long until we are ready to deploy our first managed app, &lt;strong&gt;Falco&lt;/strong&gt;, using a single custom resource. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To keep this as standard as possible, we'll even go through some steps to compile some interesting Giant Swarm tools, like the plugin kubectl-gs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Do you already have a Kubernetes cluster nearby?
&lt;/h4&gt;

&lt;p&gt;If not, we can spin up a &lt;a href="https://minikube.sigs.k8s.io/docs/"&gt;&lt;strong&gt;minikube&lt;/strong&gt;&lt;/a&gt; instance pretty quickly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ minikube start --driver virtualbox
😄  minikube v1.25.1 on Darwin 11.6.6
✨  Using the virtualbox driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
    ▪ kubelet.housekeeping-interval=5m
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don't have &lt;a href="https://kubernetes.io/docs/tasks/tools/#kubectl"&gt;kubectl&lt;/a&gt; installed on your system, the easiest way to access it is through an &lt;a href="https://minikube.sigs.k8s.io/docs/handbook/kubectl/"&gt;alias&lt;/a&gt; to &lt;code&gt;minikube kubectl&lt;/code&gt;, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;alias kubectl="minikube kubectl --"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't forget the &lt;code&gt;--&lt;/code&gt; at the end. It tells &lt;code&gt;minikube&lt;/code&gt; to stop interpreting the arguments itself and pass everything that follows on to kubectl. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One disadvantage of this method, in comparison to having a local &lt;code&gt;kubectl&lt;/code&gt; binary, is that the &lt;code&gt;kubectl-gs&lt;/code&gt; plugin might not work when called as &lt;code&gt;kubectl gs&lt;/code&gt; (explained later during this tutorial) so you might need to call it directly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To ensure our cluster is up and running, execute the following command and verify that all nodes, pods, and containers are up and ready:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes,ns,pods -A
NAME            STATUS   ROLES                  AGE     VERSION
node/minikube   Ready    control-plane,master   4m16s   v1.23.1

NAME                        STATUS   AGE
namespace/default           Active   4m14s
namespace/kube-node-lease   Active   4m15s
namespace/kube-public       Active   4m15s
namespace/kube-system       Active   4m16s

NAMESPACE     NAME                                   READY   STATUS    RESTARTS        AGE
kube-system   pod/coredns-64897985d-qbf4n            1/1     Running   0               4m
kube-system   pod/etcd-minikube                      1/1     Running   0               4m12s
kube-system   pod/kube-apiserver-minikube            1/1     Running   0               4m12s
kube-system   pod/kube-controller-manager-minikube   1/1     Running   0               4m12s
kube-system   pod/kube-proxy-6ds89                   1/1     Running   0               4m
kube-system   pod/kube-scheduler-minikube            1/1     Running   0               4m14s
kube-system   pod/storage-provisioner                1/1     Running   1 (3m29s ago)   4m10s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Prerequisites: Compiling &lt;code&gt;apptestctl&lt;/code&gt; and &lt;code&gt;kubectl-gs&lt;/code&gt;
&lt;/h4&gt;

&lt;p&gt;As mentioned above, we'll compile a couple of tools. The first one will be &lt;code&gt;apptestctl&lt;/code&gt;. This tool will help us bootstrap &lt;strong&gt;App Platform&lt;/strong&gt; on a cluster not managed by Giant Swarm.&lt;/p&gt;

&lt;p&gt;To do this, we'll use the &lt;code&gt;docker.io/golang:1.17&lt;/code&gt; image. &lt;/p&gt;

&lt;p&gt;The following command starts a pod with a Go toolchain that we can use to compile both tools:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl run golang --image docker.io/golang:1.17 -- sleep infinity
pod/golang created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Compiling &lt;code&gt;apptestctl&lt;/code&gt;
&lt;/h5&gt;

&lt;p&gt;These steps are quite simple: clone the &lt;a href="https://github.com/giantswarm/apptestctl"&gt;&lt;code&gt;apptestctl&lt;/code&gt;&lt;/a&gt; repository and compile it as indicated. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We'll do this inside the container we created in the previous step so we don't pollute our system.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl exec -it golang -- git clone https://github.com/giantswarm/apptestctl src/apptestctl
Cloning into 'apptestctl'...
... output omitted ...
Resolving deltas: 100% (791/791), done.

$ kubectl exec -it golang -- make -C src/apptestctl
make: Entering directory '/go/src/apptestctl'
... output omitted ...
====&amp;gt; apptestctl-v-linux-amd64
... output omitted ...
cp -a apptestctl-v-linux-amd64 apptestctl
====&amp;gt; build
make: Leaving directory '/go/src/apptestctl'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, you can build a Darwin client.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl exec -it golang -- make build-darwin -C src/apptestctl
make: Entering directory '/go/src/apptestctl'
... output omitted ...
====&amp;gt; apptestctl-v-darwin-amd64
... output omitted ...
cp -a apptestctl-v-darwin-amd64 apptestctl-darwin
====&amp;gt; build-darwin
make: Leaving directory '/go/src/apptestctl'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Either way, you can copy the &lt;code&gt;apptestctl&lt;/code&gt; binary to your system and use it from wherever you prefer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl cp golang:/go/src/apptestctl/apptestctl-darwin ./apptestctl
$ chmod u+x ./apptestctl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Compiling &lt;code&gt;kubectl-gs&lt;/code&gt;
&lt;/h5&gt;

&lt;p&gt;Now use the same steps to compile the &lt;a href="https://github.com/giantswarm/kubectl-gs"&gt;&lt;code&gt;kubectl-gs&lt;/code&gt;&lt;/a&gt; plugin, which will let us interact with App Platform. Note that this time we'll compile it only for Darwin.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl exec -it golang -- git clone https://github.com/giantswarm/kubectl-gs src/kubectl-gs
Cloning into 'kubectl-gs'...
... output omitted ...
Resolving deltas: 100% (4427/4427), done.

$ kubectl exec -it golang -- make build-darwin -C src/kubectl-gs
make: Entering directory '/go/src/kubectl-gs'
... output omitted ...
====&amp;gt; kubectl-gs-v-darwin-amd64
... output omitted ...
cp -a kubectl-gs-v-darwin-amd64 kubectl-gs-darwin
====&amp;gt; build-darwin
make: Leaving directory '/go/src/kubectl-gs'

$ kubectl cp golang:/go/src/kubectl-gs/kubectl-gs-darwin ./kubectl-gs
$ chmod u+x ./kubectl-gs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Deploying App Platform via &lt;code&gt;apptestctl&lt;/code&gt;
&lt;/h4&gt;

&lt;p&gt;Once we have both tools, &lt;code&gt;apptestctl&lt;/code&gt; and &lt;code&gt;kubectl-gs&lt;/code&gt;, it's time to bootstrap App Platform. To do that, we'll use the &lt;code&gt;apptestctl bootstrap&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;The command &lt;code&gt;apptestctl bootstrap&lt;/code&gt; needs the KUBECONFIG information to access our &lt;em&gt;minikube&lt;/em&gt; cluster, so in this case, we will use the command &lt;code&gt;kubectl config view --flatten --minify -o json&lt;/code&gt; to obtain it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Alternatively, we could pass the path to our &lt;code&gt;.kube/config&lt;/code&gt; file with the &lt;code&gt;--kubeconfig-path&lt;/code&gt; option.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apptestctl bootstrap --kubeconfig "$(kubectl config view --flatten --minify -o json)"
bootstrapping app platform components
... output omitted ...
app platform components are ready
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once deployed, we can run a few commands to observe the resources created in our cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get deployments -n giantswarm
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
app-operator              1/1     1            1           1m20s
chart-operator            1/1     1            1           1m20s
chartmuseum-chartmuseum   1/1     1            1           1m20s

$ kubectl get catalog -A
NAMESPACE   NAME          CATALOG URL                                   AGE
default     chartmuseum   http://chartmuseum-chartmuseum:8080/charts/   1m25s
default     helm-stable   https://charts.helm.sh/stable/packages/       1m25s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait a moment... Where does this &lt;code&gt;Catalog&lt;/code&gt; resource come from? The bootstrap process of App Platform creates several CRDs that the operators rely on to manage our applications.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get crd
NAME                                          CREATED AT
appcatalogentries.application.giantswarm.io   2022-06-10T15:30:12Z
appcatalogs.application.giantswarm.io         2022-06-10T15:30:12Z
apps.application.giantswarm.io                2022-06-10T15:30:12Z
catalogs.application.giantswarm.io            2022-06-10T15:30:12Z
charts.application.giantswarm.io              2022-06-10T15:30:12Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In short, once we register a &lt;code&gt;Catalog&lt;/code&gt;, several &lt;code&gt;AppCatalogEntry&lt;/code&gt; resources will be created: at least one per application and version.&lt;/p&gt;

&lt;h4&gt;
  
  
  Registering a &lt;code&gt;Catalog&lt;/code&gt;
&lt;/h4&gt;

&lt;p&gt;Now, it looks like a great time to see what the &lt;code&gt;kubectl-gs&lt;/code&gt; plugin can do for us.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl-gs get catalogs 
NAME          NAMESPACE   CATALOG URL                                   AGE
chartmuseum   default     http://chartmuseum-chartmuseum:8080/charts/   25m
helm-stable   default     https://charts.helm.sh/stable/packages/       25m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All right, that was maybe not so impressive, but it'll become much more useful when we register our first catalog. Why is that? Because &lt;code&gt;kubectl gs&lt;/code&gt; will help us generate the definition of a &lt;code&gt;Catalog&lt;/code&gt; resource through its &lt;code&gt;template&lt;/code&gt; subcommand.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl-gs template catalog --name giantswarm --namespace default \
  --description "Giant Swarm Catalog" --logo http://logo-url        \
  --url https://giantswarm.github.io/giantswarm-catalog
---
apiVersion: application.giantswarm.io/v1alpha1
kind: Catalog
metadata:
  name: giantswarm
  labels:
    application.giantswarm.io/catalog-visibility: public
  namespace: default
spec:
  title: giantswarm
  description: Giant Swarm Catalog
  logoURL: http://logo-url
  storage:
    URL: https://giantswarm.github.io/giantswarm-catalog
    type: helm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Et voilà, our &lt;code&gt;Catalog&lt;/code&gt; CRD pointing to a Giant Swarm collection of applications is ready to be deployed into our cluster. &lt;/p&gt;

&lt;p&gt;You might have figured out already what each parameter represents; &lt;code&gt;kubectl gs&lt;/code&gt; will complain if any of them is missing. Also, note that we didn't use a real logo URL here, but if you were using &lt;a href="https://github.com/giantswarm/happa"&gt;&lt;code&gt;happa&lt;/code&gt;&lt;/a&gt;, the Giant Swarm web UI, wouldn't you like to see a logo identifying your application?&lt;/p&gt;

&lt;p&gt;Finally, the URL is the location of the Helm repository from which App Platform will download the applications.&lt;/p&gt;
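
&lt;p&gt;If you prefer to review the manifest before creating it, the template output can just as well be saved to a file and applied from there. Here is a minimal sketch (the manifest content is copied verbatim from the template output above; the file name is arbitrary):&lt;br&gt;
&lt;/p&gt;

```shell
# Save the Catalog manifest (exactly as generated by `kubectl gs template
# catalog` above) to a file, so it can be reviewed or version-controlled
# before being applied.
manifest='apiVersion: application.giantswarm.io/v1alpha1
kind: Catalog
metadata:
  name: giantswarm
  labels:
    application.giantswarm.io/catalog-visibility: public
  namespace: default
spec:
  title: giantswarm
  description: Giant Swarm Catalog
  logoURL: http://logo-url
  storage:
    URL: https://giantswarm.github.io/giantswarm-catalog
    type: helm'
printf '%s\n' "$manifest" > giantswarm-catalog.yaml

# Later: kubectl apply -f giantswarm-catalog.yaml
```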

&lt;p&gt;Once we understand what the &lt;code&gt;kubectl gs template&lt;/code&gt; command has generated, it's time to create it inside the cluster and let the App Operator do its magic. Let's go for it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl-gs template catalog --name giantswarm --namespace default \
  --description "Giant Swarm Catalog" --logo http://logo-url        \
  --url https://giantswarm.github.io/giantswarm-catalog | kubectl apply -f -
catalog.application.giantswarm.io/giantswarm created

$ kubectl-gs get catalogs
NAME          NAMESPACE   CATALOG URL                                       AGE
chartmuseum   default     http://chartmuseum-chartmuseum:8080/charts/       35m
helm-stable   default     https://charts.helm.sh/stable/packages/           35m
giantswarm    default     https://giantswarm.github.io/giantswarm-catalog   53s

$ kubectl gs get catalog giantswarm
CATALOG      APP NAME                           VERSION      UPSTREAM VERSION   AGE     DESCRIPTION
... output omitted ...
giantswarm   falco-app                          0.3.2        0.0.1              5m26s   A Helm chart for falco
... output omitted ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Do you remember the &lt;code&gt;AppCatalogEntries&lt;/code&gt; that the App Operator had to create once we defined the &lt;code&gt;Catalog&lt;/code&gt;? Here are the Falco ones.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get AppCatalogEntries | grep falco-app
giantswarm-falco-app-0.1.2                               giantswarm   falco-app                          0.1.2          0.0.1              240d
giantswarm-falco-app-0.2.0                               giantswarm   falco-app                          0.2.0          0.0.1              176d
giantswarm-falco-app-0.3.0                               giantswarm   falco-app                          0.3.0          0.0.1              103d
giantswarm-falco-app-0.3.1                               giantswarm   falco-app                          0.3.1          0.0.1              94d
giantswarm-falco-app-0.3.2                               giantswarm   falco-app                          0.3.2          0.0.1              79d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Installing an App from the App Catalog
&lt;/h3&gt;

&lt;p&gt;What we've done so far was deploy App Platform, which is required only once. Giant Swarm would have configured that for us already if we were using their services.&lt;/p&gt;

&lt;p&gt;Now, it's finally time to create the CRD that will trigger the App Operator to assist in the deployment of Falco. How do we do that? &lt;code&gt;kubectl gs&lt;/code&gt; comes to the rescue again!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl gs template app --catalog giantswarm --name falco-app --namespace falco-ns 
  --version 0.3.2 --app-name my-falco --in-cluster
---
apiVersion: application.giantswarm.io/v1alpha1
kind: App
metadata:
  name: my-falco
  labels:
    app-operator.giantswarm.io/version: 0.0.0
  namespace: falco-ns
spec:
  name: falco-app
  version: 0.3.2
  namespace: falco-ns
  kubeConfig:
    inCluster: true
  catalog: giantswarm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is worth mentioning that we are testing on a &lt;em&gt;minikube&lt;/em&gt; cluster, where we install applications inside the cluster itself. To achieve that, we passed the &lt;code&gt;--in-cluster&lt;/code&gt; parameter to the previous command. &lt;/p&gt;

&lt;p&gt;Otherwise, if we wanted to install or update the application in one of our managed workload clusters, we would use the &lt;code&gt;--cluster&lt;/code&gt; parameter to indicate where the application should be deployed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl gs template app --catalog giantswarm --name falco-app --namespace falco-ns \
  --version 0.3.2 --cluster cluster-123 --app-name my-falco
---
apiVersion: application.giantswarm.io/v1alpha1
kind: App
metadata:
  name: my-falco
  namespace: cluster-123
spec:
  name: falco-app
  version: 0.3.2
  namespace: falco-ns
  kubeConfig:
    inCluster: false
  catalog: giantswarm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the previous output, you can see how the namespace field inside the metadata section receives the name of the cluster instead of the actual namespace where the application should reside. &lt;/p&gt;

&lt;p&gt;The reason is that, although the application will be installed on one of the workload clusters, this CRD will be created in a namespace inside the management cluster. This topic alone would be enough for a whole new post.&lt;/p&gt;
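
&lt;p&gt;The choice between the two modes can be sketched as a small shell helper. This is only an illustration (the function name is made up); the flags mirror the template commands shown above:&lt;br&gt;
&lt;/p&gt;

```shell
# Hypothetical helper: build the `kubectl gs template app` invocation for
# either an in-cluster install or a managed workload cluster.
template_falco_app() {
  target="$1"  # "in-cluster" or a workload cluster name like "cluster-123"
  base="kubectl gs template app --catalog giantswarm --name falco-app --namespace falco-ns --version 0.3.2 --app-name my-falco"
  if [ "$target" = "in-cluster" ]; then
    # The App CR lives next to the workload; kubeConfig.inCluster becomes true
    echo "$base --in-cluster"
  else
    # The App CR is created in the management cluster, in the cluster's namespace
    echo "$base --cluster $target"
  fi
}

template_falco_app in-cluster
template_falco_app cluster-123
```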

&lt;p&gt;Here is a graphical representation of the CRDs supporting App Platform, in the management cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G7-Fji25--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ynn729ndwvdflaymplc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G7-Fji25--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ynn729ndwvdflaymplc.png" alt="Giant Swarm App Platform - App CRDs" width="880" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, the last step is creating the CRD for the App in the cluster. Don't forget to ensure that the namespace the CRD belongs to exists, or the &lt;code&gt;kubectl apply&lt;/code&gt; command will fail.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create ns falco-ns 
namespace/falco-ns created

$ kubectl gs template app --catalog giantswarm --name falco-app --namespace falco-ns \
  --version 0.3.2 --in-cluster --app-name my-falco | kubectl apply -f-
app.application.giantswarm.io/my-falco created

$ kubectl gs get app -n falco-ns
NAME       VERSION   LAST DEPLOYED   STATUS     NOTES
my-falco   0.3.2     113s            deployed   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here are the resulting Kubernetes resources when using regular kubectl commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get app,deployment,daemonset -n falco-ns
NAME                                     INSTALLED VERSION   CREATED AT   LAST DEPLOYED   STATUS
app.application.giantswarm.io/my-falco   0.3.2               4m25s        4m24s           deployed

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-falco-falcosidekick   2/2     2            2           4m24s

NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/my-falco                  1         1         1       1            1           &amp;lt;none&amp;gt;          4m24s
daemonset.apps/my-falco-falco-exporter   1         1         1       1            1           &amp;lt;none&amp;gt;          4m24s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The previous output might differ depending on the type of cluster you are using, among other variables.&lt;/p&gt;

&lt;p&gt;As you can see, once App Platform is up and running, we only need to create the namespace that should contain the Falco application (which should already exist if we are deploying from a managed workload cluster), and the CRD based on the template from the &lt;code&gt;kubectl gs&lt;/code&gt; plugin. In a matter of seconds, Falco will be up and running, watching for threats and alerting when suspicious behaviors arise.&lt;/p&gt;

&lt;h3&gt;
  
  
  Managed Security
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://twitter.com/StoneZach/"&gt;Zach Stone&lt;/a&gt;, Platform Engineer at Giant Swarm, walked us through some of the biggest challenges that the company's customers face and how his team is using Falco to develop thoughtful solutions. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;“The biggest problem that most of our customers face isn't what happens in the cluster, it's what happens with the information once they get it out of the cluster,”&lt;/em&gt; asserted Stone. &lt;em&gt;“People also focus too much on the capability that a tool offers and don't take a bigger look at the security processes it supports.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“If a customer has a vulnerability management program, we can track all of the vulnerabilities in their components, but if fixing those vulnerabilities isn’t a priority, then the program doesn’t work,”&lt;/em&gt; remarked Stone. &lt;em&gt;"The larger discussion is usually about where the alerts should go, who bears responsibility for remediation, and how to fit that work into the team's limited capacity. We spend a lot of time trying to ensure security isn't just something that sits alongside the business, but rather is a meaningful part of the daily routine."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Part of that effort is in tuning detection rules and alerting. &lt;em&gt;"Any time we surface an alert, it should be actionable and have a clear owner who is invested in never seeing that alert again."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“I think Falco's superpower is in the flexibility of the policies. I'm also really excited about the changes that are slated to make it easier to update them. Most rules aren't one-size-fits-all -- for a given policy, there is usually some refinement needed to ensure the policy makes sense within our platform, and then customers modify it even further to meet their security requirements. All that customization can make it incredibly difficult to reconcile,”&lt;/em&gt; said Stone. &lt;em&gt;“The fact that we can already do it with Falco speaks volumes about the versatility of the solution.”&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Security Pack
&lt;/h4&gt;

&lt;p&gt;Giant Swarm's &lt;a href="https://docs.giantswarm.io/app-platform/apps/security/"&gt;Security Pack&lt;/a&gt; is a collection of open-source security tools that includes not only Falco but also a plethora of other projects, such as &lt;em&gt;Kyverno&lt;/em&gt; for policy enforcement, &lt;em&gt;Trivy&lt;/em&gt; for image scanning, and &lt;em&gt;Cosign&lt;/em&gt; for image signature verification.&lt;/p&gt;

&lt;p&gt;Security does not apply to a single level and, therefore, Security Pack consists of multiple applications, each one independently installable and configurable, available via their App Platform. &lt;em&gt;“Falco will be the cornerstone of our node-level security capabilities,”&lt;/em&gt; affirmed Stone, &lt;em&gt;“the biggest opportunity for API plug-ins I see is to get feedback from the node level back into the Security Pack so that we can further contextualize events in the ecosystem.”&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Simplifying cluster management has become a requirement nowadays, especially when a lack of resources would otherwise keep an organization from reaching an acceptable level of security. &lt;/p&gt;

&lt;p&gt;Features like Giant Swarm's App Platform and Security Pack help organizations focus on what actually matters to them: running their business. In the future, Giant Swarm plans to roll out its Security Pack, enabled by default and built on Falco, across all of its customers' clusters. &lt;/p&gt;

</description>
      <category>falco</category>
      <category>giantswarm</category>
      <category>kubernetes</category>
      <category>security</category>
    </item>
  </channel>
</rss>
