<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bryan Bende</title>
    <description>The latest articles on DEV Community by Bryan Bende (@bbende).</description>
    <link>https://dev.to/bbende</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F461837%2Fb7c56c42-19b2-4d32-a888-ecf2b23df936.jpeg</url>
      <title>DEV Community: Bryan Bende</title>
      <link>https://dev.to/bbende</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bbende"/>
    <language>en</language>
    <item>
      <title>Apache NiFi - Stateless</title>
      <dc:creator>Bryan Bende</dc:creator>
      <pubDate>Wed, 10 Nov 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/bbende/apache-nifi-stateless-33np</link>
      <guid>https://dev.to/bbende/apache-nifi-stateless-33np</guid>
      <description>&lt;p&gt;The past several releases of Apache NiFi have made significant improvements to the Stateless NiFi engine. If you are not familiar with Stateless NiFi, then I would recommend reading this &lt;a href="https://github.com/apache/nifi/blob/main/nifi-stateless/nifi-stateless-assembly/README.md" rel="noopener noreferrer"&gt;overview&lt;/a&gt; first.&lt;/p&gt;

&lt;p&gt;This post will examine the differences between running a flow in traditional NiFi vs. Stateless NiFi.&lt;/p&gt;

&lt;h2&gt;Traditional NiFi&lt;/h2&gt;

&lt;p&gt;As an example, let’s assume there is a Kafka topic with CDC events and we want to consume the events and apply them to another relational database. This can be achieved with a simple flow containing &lt;code&gt;ConsumeKafka_2_6&lt;/code&gt; connected to &lt;code&gt;PutDatabaseRecord&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-stateless%2F01-traditional-flow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-stateless%2F01-traditional-flow.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In traditional NiFi, each node has a set of internal repositories that are stored on local disk. The &lt;em&gt;Flow File Repository&lt;/em&gt; contains the state of each flow file, including its attributes and location in the flow, and the &lt;em&gt;Content Repository&lt;/em&gt; stores the content of each flow file.&lt;/p&gt;

&lt;p&gt;Each execution of a processor is given a reference to a session that acts like a transaction for operating on flow files. If all operations complete successfully and the session is committed, then all updates are persisted to NiFi’s repositories. In the event that NiFi is restarted, all data is preserved in the repositories and the flow will start processing from the last committed state.&lt;/p&gt;

&lt;p&gt;Let’s consider how the example flow will execute in traditional NiFi…&lt;/p&gt;

&lt;p&gt;First, &lt;code&gt;ConsumeKafka_2_6&lt;/code&gt; will poll Kafka for available records. Then it will use the session to create a flow file and write the content of the records to the output stream of the flow file. The processor will then commit the NiFi session, followed by committing the Kafka offsets. The flow file will then be transferred to &lt;code&gt;PutDatabaseRecord&lt;/code&gt;. The overall sequence is summarized in the following diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-stateless%2F02-traditional-sequence.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-stateless%2F02-traditional-sequence.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A key point here is the ordering of committing the NiFi session before committing the Kafka offsets. This provides an &lt;em&gt;“at least once”&lt;/em&gt; guarantee by ensuring the data is persisted in NiFi before acknowledging the offsets. If committing the offsets fails, possibly due to a consumer rebalance, NiFi will consume those same offsets again and receive duplicate data. If the ordering were reversed, it would be possible for the offsets to be successfully committed, followed by a failure to commit the NiFi session, which would cause data loss and be considered &lt;em&gt;“at most once”&lt;/em&gt;.&lt;/p&gt;
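&lt;p&gt;To make the commit ordering concrete, here is a toy model in Java. This is a deliberate simplification, not the real NiFi or Kafka API: persisting the data before acknowledging the offsets means a failed offset commit produces duplicates rather than loss.&lt;/p&gt;

```java
// Toy model of the commit ordering (NOT the real NiFi or Kafka API).
// The "repository" stands in for NiFi's durable repositories, and
// committedOffset stands in for the broker-side consumer offset.
public class AtLeastOnceDemo {
    static final String[] BROKER = { "event-1", "event-2" };
    static String repository = "";   // durable store (NiFi repositories)
    static int committedOffset = 0;  // broker-side consumer offset

    // One execution of the source; returns true if the offset commit succeeded.
    static boolean consumeOnce(boolean offsetCommitFails) {
        for (int i = committedOffset; i != BROKER.length; i++) {
            repository += BROKER[i] + " ";   // 1. persist to NiFi first (session commit)
        }
        if (offsetCommitFails) {
            return false;                    // 2. offset commit failed; offsets unchanged
        }
        committedOffset = BROKER.length;     // 2. then acknowledge the offsets
        return true;
    }

    public static void main(String[] args) {
        consumeOnce(true);   // data persisted, offsets not advanced
        consumeOnce(false);  // same records polled again: duplicates, never loss
        System.out.println(repository.trim()); // event-1 event-2 event-1 event-2
    }
}
```

&lt;p&gt;Reversing steps 1 and 2 in the sketch would advance the offset before the data was safely stored, which is exactly the “at most once” failure mode described above.&lt;/p&gt;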

&lt;p&gt;A second key point is that there is purposely no coordination across processors, meaning that each processor succeeds or fails independently of the other processors in the flow. Once &lt;code&gt;ConsumeKafka_2_6&lt;/code&gt; successfully executes, the consumed data is now persisted in NiFi, regardless of whether &lt;code&gt;PutDatabaseRecord&lt;/code&gt; succeeds.&lt;/p&gt;

&lt;p&gt;Let’s look at how this same flow would execute in Stateless NiFi.&lt;/p&gt;

&lt;h2&gt;Stateless NiFi&lt;/h2&gt;

&lt;p&gt;Stateless NiFi adheres to the same NiFi API as traditional NiFi, which means it can run the same processors and flow definitions; it simply provides a different implementation of the underlying engine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-stateless%2F03-stateless-flow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-stateless%2F03-stateless-flow.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The primary abstraction is a &lt;code&gt;StatelessDataFlow&lt;/code&gt; which can be triggered to execute. Each execution of the flow produces a result that can be considered a success or failure. A failure can occur from a processor throwing an exception, or from explicitly routing flow files to a named “failure port”.&lt;/p&gt;

&lt;p&gt;A key difference in Stateless NiFi is around committing the NiFi session. A new commit method was introduced to &lt;code&gt;ProcessSession&lt;/code&gt; with the following signature:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void commitAsync(Runnable onSuccess);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives the session implementation control over when to execute the given callback. In traditional NiFi the session can execute the callback as the last step of &lt;code&gt;commitAsync&lt;/code&gt;, which produces the same behavior we looked at earlier. The stateless NiFi session can hold the callback and execute it only when the entire flow has completed successfully.&lt;/p&gt;
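&lt;p&gt;The difference between the two engines can be sketched as two implementations of &lt;code&gt;commitAsync&lt;/code&gt;. This is a minimal illustration, not the actual engine code: the traditional session runs the callback immediately, while the stateless session holds it until the flow is acknowledged.&lt;/p&gt;

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the two commitAsync behaviors (simplified; not the real engine).
public class CommitAsyncDemo {
    static int offsetsCommitted = 0;  // incremented by the held callback

    // Traditional NiFi: the callback runs as the last step of commitAsync.
    static void traditionalCommitAsync(Runnable onSuccess) {
        onSuccess.run();
    }

    // Stateless NiFi: callbacks are held until the whole flow succeeds.
    static final Deque heldCallbacks = new ArrayDeque();

    static void statelessCommitAsync(Runnable onSuccess) {
        heldCallbacks.add(onSuccess);
    }

    static void acknowledgeFlowSuccess() {
        while (!heldCallbacks.isEmpty()) {
            ((Runnable) heldCallbacks.poll()).run();
        }
    }

    public static void main(String[] args) {
        statelessCommitAsync(() -> offsetsCommitted++);
        // ... the rest of the flow runs here; on failure we never acknowledge ...
        System.out.println(offsetsCommitted);  // 0: the callback is still held
        acknowledgeFlowSuccess();
        System.out.println(offsetsCommitted);  // 1: offsets committed only now
    }
}
```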

&lt;p&gt;Let’s consider how the example flow will execute in Stateless NiFi…&lt;/p&gt;

&lt;p&gt;When the &lt;code&gt;StatelessDataFlow&lt;/code&gt; is triggered, &lt;code&gt;ConsumeKafka_2_6&lt;/code&gt; begins executing the same as it would in traditional NiFi, by polling Kafka for records, creating a flow file, and writing the records to the output stream of the flow file. It then calls &lt;code&gt;commitAsync&lt;/code&gt; to commit the NiFi session and passes in a callback for committing the offsets to Kafka, which in this case will be held until later.&lt;/p&gt;

&lt;p&gt;The flow file is then transferred to &lt;code&gt;PutDatabaseRecord&lt;/code&gt;, which attempts to apply the event to the database. Assuming &lt;code&gt;PutDatabaseRecord&lt;/code&gt; succeeds, the overall execution of the flow completes successfully. The stateless engine then acknowledges the result, which executes any held callbacks and thus commits the offsets to Kafka. The overall sequence is summarized in the following diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-stateless%2F04-stateless-sequence.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-stateless%2F04-stateless-sequence.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A key point here is that the execution of the entire flow is treated like a single transaction. If a failure were to occur at &lt;code&gt;PutDatabaseRecord&lt;/code&gt;, the overall execution would be considered a failure, and the &lt;code&gt;onSuccess&lt;/code&gt; callbacks from &lt;code&gt;commitAsync&lt;/code&gt; would never get executed. In this case, that would mean the offsets were never committed to Kafka, and the entire flow can be retried for the same records.&lt;/p&gt;

&lt;p&gt;Another type of failure scenario would be Stateless NiFi crashing in the middle of executing the flow. Since Stateless NiFi generally uses in-memory repositories, any data that was mid-processing would be gone. However, since the source processor had not yet acknowledged receiving that data (i.e. the &lt;code&gt;onSuccess&lt;/code&gt; callback never executed), it would pull the same data again on the next execution.&lt;/p&gt;

&lt;h2&gt;ExecuteStateless&lt;/h2&gt;

&lt;p&gt;Previously, the primary mechanism to use Stateless NiFi was through the &lt;code&gt;nifi-stateless&lt;/code&gt; binary which launches a standalone process to execute the flow.&lt;/p&gt;

&lt;p&gt;The 1.15.0 release introduced a new processor called &lt;code&gt;ExecuteStateless&lt;/code&gt; which can be used to run the Stateless engine from within traditional NiFi. This allows you to manage the execution of the Stateless flow from the traditional NiFi UI, as well as connect the output of the Stateless flow to follow-on processing in the traditional NiFi flow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-stateless%2F05-execute-stateless.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-stateless%2F05-execute-stateless.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order to use the &lt;code&gt;ExecuteStateless&lt;/code&gt; processor, you would first use traditional NiFi to create a process group containing the flow you want to execute with the Stateless engine. You would then download the flow definition, or commit the flow to a NiFi Registry instance. From there, you would configure &lt;code&gt;ExecuteStateless&lt;/code&gt; with the location of the flow definition.&lt;/p&gt;

&lt;p&gt;For a more in-depth look at &lt;code&gt;ExecuteStateless&lt;/code&gt;, check out &lt;a href="https://www.youtube.com/watch?v=VyzoD8eh-t0" rel="noopener noreferrer"&gt;Mark Payne’s YouTube video on “Kafka Exactly Once with NiFi”&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>nifi</category>
      <category>2021</category>
    </item>
    <item>
      <title>Apache NiFi 1.14.0 - Secure by Default</title>
      <dc:creator>Bryan Bende</dc:creator>
      <pubDate>Mon, 19 Jul 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/bbende/apache-nifi-1-14-0-secure-by-default-11jh</link>
      <guid>https://dev.to/bbende/apache-nifi-1-14-0-secure-by-default-11jh</guid>
      <description>&lt;p&gt;One of the major improvements in Apache NiFi 1.14.0 was to enable security for the default configuration. This means all you have to do now is run &lt;code&gt;bin/nifi.sh start&lt;/code&gt;, and your local instance will be running over &lt;code&gt;https&lt;/code&gt; with the ability to log in via username and password.&lt;/p&gt;

&lt;p&gt;The overall work for this improvement was done through &lt;a href="https://issues.apache.org/jira/browse/NIFI-8220"&gt;NIFI-8220&lt;/a&gt; and required three major pieces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic generation of a self-signed certificate&lt;/li&gt;
&lt;li&gt;Single User Login Identity Provider&lt;/li&gt;
&lt;li&gt;Single User Authorizer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From a high level, the overall setup looks like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SnmLCtAU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-secure-by-default/01-overview.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SnmLCtAU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-secure-by-default/01-overview.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Automatic Certificate Generation&lt;/h3&gt;

&lt;p&gt;In order to have any form of authentication &amp;amp; authorization, we first need to be connecting over &lt;code&gt;https&lt;/code&gt;, which means NiFi’s web server needs a keystore and truststore.&lt;/p&gt;

&lt;p&gt;In order to achieve this, &lt;a href="https://issues.apache.org/jira/browse/NIFI-8403"&gt;NIFI-8403&lt;/a&gt; introduced the ability to generate a self-signed certificate during start-up. When keystore and truststore files are specified in &lt;code&gt;nifi.properties&lt;/code&gt; and the files don’t exist, they will automatically be generated and &lt;code&gt;nifi.properties&lt;/code&gt; will be updated with the passwords.&lt;/p&gt;

&lt;p&gt;As a result, the default &lt;code&gt;nifi.properties&lt;/code&gt; file now comes with provided values for the keystore and truststore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi.security.keystore=./conf/keystore.p12
nifi.security.keystoreType=PKCS12
nifi.security.keystorePasswd=
nifi.security.keyPasswd=
nifi.security.truststore=./conf/truststore.p12
nifi.security.truststoreType=PKCS12
nifi.security.truststorePasswd=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In addition, the default web host and port have been switched to the following &lt;code&gt;https&lt;/code&gt; values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi.web.https.host=127.0.0.1
nifi.web.https.port=8443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a side note, there are two other new properties related to certificates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi.security.autoreload.enabled=false
nifi.security.autoreload.interval=10 secs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These were not required for the default secure setup, but they allow the keystore and truststore to be reloaded while the application is running. This can be helpful for replacing certificates that may be close to expiring.&lt;/p&gt;

&lt;h3&gt;Single User Login Identity Provider&lt;/h3&gt;

&lt;p&gt;The next step was to provide a mechanism for authenticating the user. NiFi supports many different authentication mechanisms, but most of them require additional dependencies and/or configuration.&lt;/p&gt;

&lt;p&gt;In this case, we want a user to log in with a username and password without doing anything else. In order to achieve this, &lt;a href="https://issues.apache.org/jira/browse/NIFI-8363"&gt;NIFI-8363&lt;/a&gt; introduced the &lt;em&gt;Single User Login Identity Provider&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This login identity provider allows a single username/password pair to be configured. When this provider is initialized, if the username and password are not present, random values will be generated and &lt;code&gt;login-identity-providers.xml&lt;/code&gt; will be updated with the values.&lt;/p&gt;

&lt;p&gt;The default &lt;code&gt;login-identity-providers.xml&lt;/code&gt; now contains the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;provider&amp;gt;
   &amp;lt;identifier&amp;gt;single-user-provider&amp;lt;/identifier&amp;gt;
   &amp;lt;class&amp;gt;org.apache.nifi.authentication.single.user.SingleUserLoginIdentityProvider&amp;lt;/class&amp;gt;
   &amp;lt;property name="Username"&amp;gt;&amp;lt;/property&amp;gt;
   &amp;lt;property name="Password"&amp;gt;&amp;lt;/property&amp;gt;
&amp;lt;/provider&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NOTE: The password value here is the hashed password. See the last section below about obtaining and changing the default values.&lt;/p&gt;

&lt;p&gt;The default &lt;code&gt;nifi.properties&lt;/code&gt; then specifies this login identity provider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi.security.user.login.identity.provider=single-user-provider
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Single User Authorizer&lt;/h3&gt;

&lt;p&gt;The next step was providing a mechanism to perform authorization. In this case, we just want the default user to be authorized for all actions. In order to achieve this, &lt;a href="https://issues.apache.org/jira/browse/NIFI-8363"&gt;NIFI-8363&lt;/a&gt; introduced the &lt;em&gt;Single User Authorizer&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This authorizer just returns true for all authorization checks, with the caveat that it can only be used when the &lt;em&gt;Single User Login Identity Provider&lt;/em&gt; is also configured.&lt;/p&gt;
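&lt;p&gt;The behavior can be sketched in a few lines. This is illustrative only; the class and method names are not the real NiFi interfaces: every check is approved, but construction fails unless the single-user provider is configured.&lt;/p&gt;

```java
// Sketch of an authorize-everything authorizer with the caveat described
// above (class, method, and provider names are illustrative, not NiFi's).
public class SingleUserAuthorizerDemo {
    static final String REQUIRED_PROVIDER = "single-user-provider";

    final String configuredProvider;

    SingleUserAuthorizerDemo(String configuredProvider) {
        // The caveat: only usable alongside the Single User Login Identity Provider.
        if (!REQUIRED_PROVIDER.equals(configuredProvider)) {
            throw new IllegalStateException(
                "Single User Authorizer requires the Single User Login Identity Provider");
        }
        this.configuredProvider = configuredProvider;
    }

    // Every authorization check is approved.
    boolean authorize(String user, String resource, String action) {
        return true;
    }

    public static void main(String[] args) {
        SingleUserAuthorizerDemo authorizer =
            new SingleUserAuthorizerDemo("single-user-provider");
        System.out.println(authorizer.authorize("some-user", "/flow", "read")); // true
    }
}
```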

&lt;p&gt;The default &lt;code&gt;authorizers.xml&lt;/code&gt; now contains the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;authorizer&amp;gt;
   &amp;lt;identifier&amp;gt;single-user-authorizer&amp;lt;/identifier&amp;gt;
   &amp;lt;class&amp;gt;org.apache.nifi.authorization.single.user.SingleUserAuthorizer&amp;lt;/class&amp;gt;
&amp;lt;/authorizer&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The default &lt;code&gt;nifi.properties&lt;/code&gt; then specifies this authorizer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi.security.user.authorizer=single-user-authorizer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Default Username/Password&lt;/h3&gt;

&lt;p&gt;The first time the application is started, the &lt;em&gt;Single User Login Identity Provider&lt;/em&gt; generates the username and password and logs them to &lt;code&gt;nifi-app.log&lt;/code&gt;. An example would be the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2021-07-16 15:46:31,006 INFO [main] o.a.n.w.c.ApplicationStartupContextListener Flow Controller started successfully.
2021-07-16 15:46:31,026 INFO [main] o.a.n.a.s.u.SingleUserLoginIdentityProvider

Generated Username [6fcaba96-5445-4835-822f-e004c4642d3b]
Generated Password [ScCULiVSEwlqVLG6aHxGv/utRTHxWa7n]

2021-07-16 15:46:31,026 INFO [main] o.a.n.a.s.u.SingleUserLoginIdentityProvider Run the following command to change credentials: nifi.sh set-single-user-credentials USERNAME PASSWORD
2021-07-16 15:46:31,338 INFO [main] o.a.n.a.s.u.SingleUserLoginIdentityProvider Updating Login Identity Providers Configuration [./conf/login-identity-providers.xml]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
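&lt;p&gt;For illustration, credentials with this shape can be produced from standard Java APIs. This is an assumption about the approach, not the provider’s actual code: a random UUID for the username, and 24 random bytes, Base64-encoded, for the 32-character password.&lt;/p&gt;

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.UUID;

// Sketch of generating credentials shaped like the ones in the log above
// (an assumption; the provider's real implementation may differ).
public class CredentialDemo {
    static String generateUsername() {
        // A random UUID, e.g. 6fcaba96-5445-4835-822f-e004c4642d3b
        return UUID.randomUUID().toString();
    }

    static String generatePassword() {
        byte[] bytes = new byte[24];
        new SecureRandom().nextBytes(bytes);
        // 24 random bytes encode to a 32-character unpadded Base64 string,
        // matching the shape of the generated password in the log above
        return Base64.getEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println("Generated Username [" + generateUsername() + "]");
        System.out.println("Generated Password [" + generatePassword() + "]");
    }
}
```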



&lt;p&gt;If you then access &lt;code&gt;https://localhost:8443/nifi&lt;/code&gt; in your browser (accept warnings about self-signed certificates), you should be able to log in with the username/password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s32DBujM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-secure-by-default/02-nifi-ui-logged-in.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s32DBujM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-secure-by-default/02-nifi-ui-logged-in.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the logs mention above, the default username/password can be changed by running the following utility:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./bin/nifi.sh set-single-user-credentials USERNAME PASSWORD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>nifi</category>
      <category>2021</category>
    </item>
    <item>
      <title>K3s on Raspberry Pi - Jenkins / Registry (Part 2)</title>
      <dc:creator>Bryan Bende</dc:creator>
      <pubDate>Sat, 03 Jul 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/bbende/k3s-on-raspberry-pi-jenkins-registry-part-2-1j5a</link>
      <guid>https://dev.to/bbende/k3s-on-raspberry-pi-jenkins-registry-part-2-1j5a</guid>
      <description>&lt;p&gt;This is the second part of setting up Jenkins and a private Docker registry on K3s. The &lt;a href="https://bryanbende.com/development/2021/07/02/k3s-raspberry-pi-jenkins-registry-p1"&gt;first part&lt;/a&gt; left off with the private registry up and running and accessible to K3s, and Jenkins being able to execute a basic job through the Kubernetes plugin.&lt;/p&gt;

&lt;p&gt;This post will cover setting up a more realistic Jenkins job for an example Spring Boot application, including publishing images to the private registry and running them on K3s.&lt;/p&gt;

&lt;h3&gt;Example Application Overview&lt;/h3&gt;

&lt;p&gt;The code for the example application can be found at &lt;a href="https://github.com/bbende/cloud-native-examples/tree/main/example-app"&gt;cloud-native-examples/example-app&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The project is made up of three modules:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;example-app-api&lt;/strong&gt; - Shared model classes&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;example-app-backend&lt;/strong&gt; - Spring Boot application with a single REST endpoint for storing/retrieving messages; messages are stored using a simple in-memory implementation&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;example-app-frontend&lt;/strong&gt; - Spring Boot application that communicates with the backend and provides a simple UI to create/view messages&lt;/p&gt;

&lt;p&gt;The frontend &lt;code&gt;application.properties&lt;/code&gt; contains the location of the backend application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;example.app.backend.server.protocol=http
example.app.backend.server.host=localhost
example.app.backend.server.port=8081
example.app.backend.server.messages-context=/messages
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These values would potentially need to be overridden depending on how the application is being deployed.&lt;/p&gt;
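&lt;p&gt;One common way to override them at deploy time is through environment variables. Spring Boot’s relaxed binding maps a property key to its environment-variable form by replacing dots with underscores, removing dashes, and uppercasing, which the following sketch demonstrates:&lt;/p&gt;

```java
import java.util.Locale;

// Demonstrates Spring Boot's relaxed binding rule for mapping a property
// key to its environment-variable form (the transform itself, not Spring).
public class RelaxedBindingDemo {
    // Replace '.' with '_', remove '-', and uppercase.
    static String toEnvVar(String propertyKey) {
        return propertyKey.replace(".", "_")
                          .replace("-", "")
                          .toUpperCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        // e.g. set this variable on the frontend container to point at the backend
        System.out.println(toEnvVar("example.app.backend.server.host"));
        // EXAMPLE_APP_BACKEND_SERVER_HOST
    }
}
```

&lt;p&gt;So setting &lt;code&gt;EXAMPLE_APP_BACKEND_SERVER_HOST&lt;/code&gt; on the frontend container would override &lt;code&gt;example.app.backend.server.host&lt;/code&gt; without rebuilding the image.&lt;/p&gt;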

&lt;p&gt;If we start each Spring Boot application locally from IntelliJ, we should see the following…&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Backend&lt;/em&gt; &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jLgdcL_Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins-pt2/01-example-app-backend-intellij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jLgdcL_Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins-pt2/01-example-app-backend-intellij.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Frontend&lt;/em&gt; &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cnMRUIYf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins-pt2/02-example-app-frontend-intellij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cnMRUIYf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins-pt2/02-example-app-frontend-intellij.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Browser at &lt;a href="http://localhost:8080"&gt;http://localhost:8080&lt;/a&gt;&lt;/em&gt; &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8m6gceZS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins-pt2/03-example-app-frontend-ui-local.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8m6gceZS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins-pt2/03-example-app-frontend-ui-local.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Building Images&lt;/h3&gt;

&lt;p&gt;Since our goal is to deploy the application to K3s, we need a way to produce container images for the backend and frontend Spring Boot applications.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;spring-boot-maven-plugin&lt;/code&gt; recently added support for building OCI images through the new &lt;code&gt;build-image&lt;/code&gt; goal. This relies on &lt;a href="https://buildpacks.io/"&gt;buildpacks&lt;/a&gt; behind the scenes, which unfortunately does not support &lt;code&gt;arm&lt;/code&gt;, so the &lt;code&gt;build-image&lt;/code&gt; goal won’t succeed from one of the Raspberry Pis.&lt;/p&gt;

&lt;p&gt;Another popular option is &lt;a href="https://github.com/GoogleContainerTools/jib"&gt;Jib&lt;/a&gt;, a library from Google for containerizing Java applications. Jib has plugins for Maven and Gradle, and has the ability to build images without a local Docker daemon. This is particularly useful for our Jenkins job which is going to execute in a pod that won’t have access to a Docker daemon.&lt;/p&gt;

&lt;p&gt;As a side note, if you do need to access a Docker daemon from within a container, a common approach is generally referred to as “Docker in Docker (DIND)”. This boils down to mounting &lt;code&gt;/var/run/docker.sock&lt;/code&gt; from the host into the container, so the container is actually using the host’s Docker daemon. Since K3s uses &lt;code&gt;containerd&lt;/code&gt; as the runtime, there isn’t a Docker daemon on the host anyway.&lt;/p&gt;

&lt;p&gt;The configuration of the Jib plugin is done in &lt;a href="https://github.com/bbende/cloud-native-examples/blob/02-jib/example-app/pom.xml#L45-L98"&gt;pluginManagement at the example-app level&lt;/a&gt; so that it can be shared across the frontend and backend applications.&lt;/p&gt;

&lt;p&gt;There are three profiles available:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;jib-docker-daemon&lt;/strong&gt; - Builds to the local Docker daemon&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;jib-docker-hub&lt;/strong&gt; - Builds and pushes to Docker Hub, requires running &lt;code&gt;docker login&lt;/code&gt; first&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;jib-k3s-private&lt;/strong&gt; - Builds and pushes to the private registry in K3s, requires specifying username/password of the registry with &lt;code&gt;-Djib.registry.username&lt;/code&gt; and &lt;code&gt;-Djib.registry.password&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can test building the images with our local Docker daemon by executing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mvn clean package -Pjib-docker-daemon
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a successful build, running the command &lt;code&gt;docker images&lt;/code&gt; should show the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;REPOSITORY TAG IMAGE ID CREATED SIZE
example-backend 0.0.1-SNAPSHOT ea3241a5cfd6 25 seconds ago 261MB
example-backend latest ea3241a5cfd6 25 seconds ago 261MB
example-frontend 0.0.1-SNAPSHOT 1cf33e4a9207 37 seconds ago 265MB
example-frontend latest 1cf33e4a9207 37 seconds ago 265MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we can set up a Jenkins job that will use the &lt;code&gt;jib-k3s-private&lt;/code&gt; profile to build and publish the images to the private registry.&lt;/p&gt;

&lt;h3&gt;Jenkins Job&lt;/h3&gt;

&lt;p&gt;First, we need to create a Jenkins Credential to store the username/password for the private registry so that these values can be referenced securely from the build configuration.&lt;/p&gt;

&lt;p&gt;Navigate to &lt;em&gt;Manage Jenkins&lt;/em&gt; -&amp;gt; &lt;em&gt;Manage Credentials&lt;/em&gt; -&amp;gt; &lt;em&gt;“Jenkins” Store&lt;/em&gt; -&amp;gt; &lt;em&gt;Domain “Global Credentials (unrestricted)”&lt;/em&gt;, and then click &lt;em&gt;Add Credential&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5njknGcW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins-pt2/04-jenkins-docker-credentials.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5njknGcW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins-pt2/04-jenkins-docker-credentials.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the username and password for the registry, along with a unique id for the credential; we will call it &lt;code&gt;docker-registry-private&lt;/code&gt;. The id can be anything, as long as we can reference it later.&lt;/p&gt;

&lt;p&gt;Now we can create the new pipeline. From the main Jenkins page, click &lt;em&gt;New Item&lt;/em&gt; and add a pipeline named &lt;code&gt;cloud-native-examples&lt;/code&gt;. Under the &lt;em&gt;Pipeline Definition&lt;/em&gt;, select &lt;code&gt;Pipeline Script&lt;/code&gt; and enter the following script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    environment {
        GIT_REPO_URL = 'https://github.com/bbende/cloud-native-examples.git'
        GIT_REPO_BRANCH = 'main'
        REGISTRY_URL = 'docker-registry-service.docker-registry.svc.cluster.local:5000'
        REGISTRY_CREDENTIAL = credentials('docker-registry-private')
    }
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: maven
      image: maven:3.8.1-openjdk-11
      command: ["tail", "-f", "/dev/null"]
      imagePullPolicy: IfNotPresent
      resources:
        requests:
          memory: "1Gi"
          cpu: "500m"
        limits:
          memory: "1Gi"
      volumeMounts:
            - name: jenkins-maven
              mountPath: /root/.m2
  volumes:
    - name: jenkins-maven
      persistentVolumeClaim:
        claimName: jenkins-maven-pvc
"""
        }
    }
    stages {
        stage('Git Clone') {
            steps {
                git(url: "${GIT_REPO_URL}", branch: "${GIT_REPO_BRANCH}")
            }
        }
        stage('Build') {
            steps {
                container('maven') {
                    sh 'mvn package -Pjib-k3s-private -Djib.registry.username=$REGISTRY_CREDENTIAL_USR -Djib.registry.password=$REGISTRY_CREDENTIAL_PSW -DsendCredentialsOverHttp=true'
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break down each part of the pipeline…&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;environment&lt;/strong&gt; - Defines variables for use in other parts of the pipeline, including a variable for the private registry credential, obtained by calling &lt;code&gt;credentials('docker-registry-private')&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;agent/kubernetes&lt;/strong&gt; - Defines an agent that executes the pipeline in Kubernetes, with a YAML definition of the agent Pod that uses the image &lt;code&gt;maven:3.8.1-openjdk-11&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;stage(‘Git Clone’)&lt;/strong&gt; - Clones the git repository for the project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;stage(‘Build’)&lt;/strong&gt; - Executes the Maven build using the &lt;code&gt;jib-k3s-private&lt;/code&gt; profile, overriding properties to pass the registry username/password from the credential environment variables.&lt;/p&gt;
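&lt;p&gt;The &lt;code&gt;$REGISTRY_CREDENTIAL_USR&lt;/code&gt; and &lt;code&gt;$REGISTRY_CREDENTIAL_PSW&lt;/code&gt; variables come from Jenkins credentials binding: for a username/password credential, &lt;code&gt;credentials('id')&lt;/code&gt; defines the base variable plus two suffixed variables for the two parts. A minimal sketch of the pattern, using the credential id from this post:&lt;/p&gt;

```groovy
pipeline {
    agent any
    environment {
        // Also defines REGISTRY_CREDENTIAL_USR (username)
        // and REGISTRY_CREDENTIAL_PSW (password)
        REGISTRY_CREDENTIAL = credentials('docker-registry-private')
    }
    stages {
        stage('Demo') {
            steps {
                sh 'echo "registry user: $REGISTRY_CREDENTIAL_USR"'
            }
        }
    }
}
```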

&lt;p&gt;As an optimization, the Maven container mounts a persistent volume to &lt;code&gt;/root/.m2&lt;/code&gt; which comes from a Longhorn PVC. This allows the local Maven repository to be persisted across builds, instead of downloading every dependency on every build.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;REGISTRY_URL&lt;/code&gt; uses the fully qualified hostname &lt;code&gt;docker-registry-service.docker-registry.svc.cluster.local&lt;/code&gt;. This is because the &lt;code&gt;docker-registry-service&lt;/code&gt; service lives in the &lt;code&gt;docker-registry&lt;/code&gt; namespace, while Jenkins runs in the &lt;code&gt;jenkins&lt;/code&gt; namespace, and the short service name only resolves within its own namespace.&lt;/p&gt;
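&lt;p&gt;The naming follows the standard Kubernetes service DNS pattern of &lt;code&gt;service.namespace.svc.cluster.local&lt;/code&gt;. Building the URL from its parts makes the cross-namespace requirement explicit:&lt;/p&gt;

```shell
# Kubernetes service DNS: service.namespace.svc.cluster.local
# The short name "docker-registry-service" only resolves inside the
# docker-registry namespace, so Jenkins must use the full form.
SERVICE="docker-registry-service"
NAMESPACE="docker-registry"
REGISTRY_URL="${SERVICE}.${NAMESPACE}.svc.cluster.local:5000"
echo "$REGISTRY_URL"
```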

&lt;p&gt;The registry is secured with username/password authentication, but the internal service at &lt;code&gt;docker-registry-service&lt;/code&gt; is not TLS-enabled, so we have to add &lt;code&gt;-DsendCredentialsOverHttp=true&lt;/code&gt; to allow Jib to authenticate over plain HTTP. This is acceptable for internal communication on our example cluster, but it is not recommended for a real environment.&lt;/p&gt;

&lt;p&gt;After creating the pipeline, we can click &lt;em&gt;Build Now&lt;/em&gt; to start a build. Watching the Console Output of the build, we should see something like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Created Pod: k3s jenkins/cloud-native-examples-26-k493m-qzrwp-3xpv3
[Normal][jenkins/cloud-native-examples-26-k493m-qzrwp-3xpv3][Scheduled] Successfully assigned jenkins/cloud-native-examples-26-k493m-qzrwp-3xpv3 to rpi-3
Still waiting to schedule task
‘cloud-native-examples-26-k493m-qzrwp-3xpv3’ is offline
[Normal][jenkins/cloud-native-examples-26-k493m-qzrwp-3xpv3][SuccessfulAttachVolume] AttachVolume.Attach succeeded for volume "pvc-af0ab285-fe51-48a8-be4c-f106bea22566"
[Normal][jenkins/cloud-native-examples-26-k493m-qzrwp-3xpv3][Pulled] Container image "maven:3.8.1-openjdk-11" already present on machine
[Normal][jenkins/cloud-native-examples-26-k493m-qzrwp-3xpv3][Created] Created container maven
[Normal][jenkins/cloud-native-examples-26-k493m-qzrwp-3xpv3][Started] Started container maven
[Normal][jenkins/cloud-native-examples-26-k493m-qzrwp-3xpv3][Pulled] Container image "pi4k8s/inbound-agent:4.3" already present on machine
[Normal][jenkins/cloud-native-examples-26-k493m-qzrwp-3xpv3][Created] Created container jnlp
[Normal][jenkins/cloud-native-examples-26-k493m-qzrwp-3xpv3][Started] Started container jnlp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows that the pod &lt;code&gt;jenkins/cloud-native-examples-26-k493m-qzrwp-3xpv3&lt;/code&gt; was launched to execute the build. The persistent volume for the Maven repo was then attached and the containers were created and started.&lt;/p&gt;

&lt;p&gt;The set of containers is a combination of the default Pod template created in Part 1 and the YAML definition in the pipeline: the &lt;code&gt;maven&lt;/code&gt; container (&lt;code&gt;maven:3.8.1-openjdk-11&lt;/code&gt;) comes from the pipeline, and the &lt;code&gt;jnlp&lt;/code&gt; container (&lt;code&gt;pi4k8s/inbound-agent:4.3&lt;/code&gt;) comes from the default template.&lt;/p&gt;

&lt;p&gt;The rest of the build should continue with a standard Maven build of a Spring Boot application. At some point in the build output, we should see the frontend and backend images being published:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[INFO Built and pushed image as docker-registry-service.docker-registry.svc.cluster.local:5000/example-frontend, docker-registry-service.docker-registry.svc.cluster.local:5000/example-frontend:0.0.1-SNAPSHOT-k3s
...
[INFO Built and pushed image as docker-registry-service.docker-registry.svc.cluster.local:5000/example-backend, docker-registry-service.docker-registry.svc.cluster.local:5000/example-backend:0.0.1-SNAPSHOT-k3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can verify that the images are now available in the private registry by running the following command from our laptop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -k -X GET --basic -u registry https://docker.registry.private/v2/_catalog | python -m json.tool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "repositories": [
        "arm32v7/nginx",
        "example-backend",
        "example-frontend"
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Testing Images on K3s
&lt;/h3&gt;

&lt;p&gt;Now that the images are available in the private registry, we can test deploying pods on K3s to verify the images work correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace example-app

kubectl run example-backend --image docker.registry.private/example-backend --namespace example-app

kubectl run example-frontend --image docker.registry.private/example-frontend --namespace example-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The images are prefixed with the registry location of &lt;code&gt;docker.registry.private&lt;/code&gt;, which must line up with the configuration in &lt;code&gt;/etc/rancher/k3s/registries.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If we check the status of the pods in the &lt;code&gt;example-app&lt;/code&gt; namespace, we should see two pods come up running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods --namespace example-app
NAME               READY   STATUS    RESTARTS   AGE
example-backend    1/1     Running   0          102s
example-frontend   1/1     Running   0          59s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This proves that K3s is able to pull images from the private registry and that the images were correctly built for &lt;code&gt;arm64&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A bit more work would be needed to fully deploy the application: the frontend isn’t yet configured to reach the backend, and it isn’t accessible from outside the cluster, but those are topics for another post.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k3s</category>
      <category>raspberrypi</category>
      <category>jenkins</category>
    </item>
    <item>
      <title>K3s on Raspberry Pi - Jenkins / Registry (Part 1)</title>
      <dc:creator>Bryan Bende</dc:creator>
      <pubDate>Fri, 02 Jul 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/bbende/k3s-on-raspberry-pi-jenkins-registry-part-1-13ai</link>
      <guid>https://dev.to/bbende/k3s-on-raspberry-pi-jenkins-registry-part-1-13ai</guid>
      <description>&lt;p&gt;Building on my previous K3s posts, this one will cover deploying Jenkins and a private Docker registry, as well as configuring the Jenkins Kubernetes plugin.&lt;/p&gt;

&lt;p&gt;The overall setup looks like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ddFIkl4A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/00-overall-setup-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ddFIkl4A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/00-overall-setup-2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Private Registry Setup
&lt;/h3&gt;

&lt;p&gt;For the private registry, I primarily followed this article: &lt;a href="https://carpie.net/articles/installing-docker-registry-on-k3s"&gt;Installing Docker Registry on K3s&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;My version of the configuration can be found here: &lt;a href="https://github.com/bbende/k3s-config/tree/main/docker-registry"&gt;k3s-config/docker-registry&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In order for K3s to pull images from the private registry, the &lt;em&gt;containerd&lt;/em&gt; daemon on each node needs to access the registry running within a pod in K3s.&lt;/p&gt;

&lt;p&gt;In my configuration, the registry is exposed via ingress with a host of &lt;code&gt;docker.registry.private&lt;/code&gt;, so we can edit &lt;code&gt;/etc/hosts&lt;/code&gt; on each node to map this hostname to the current node.&lt;/p&gt;

&lt;p&gt;An example from the master node would be the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;192.168.1.244 rpi-1 docker.registry.private
192.168.1.245 rpi-2
192.168.1.246 rpi-3  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This mapping needs to be done on each node since a pod may be deployed to any node in the cluster.&lt;/p&gt;
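&lt;p&gt;One way to keep this repeatable is to make the edit idempotent, so it can be re-run from a provisioning script. A sketch of the pattern, using a scratch file in place of &lt;code&gt;/etc/hosts&lt;/code&gt; so it is safe to try; on a real node the target would be &lt;code&gt;/etc/hosts&lt;/code&gt;, written with sudo:&lt;/p&gt;

```shell
# Sketch: idempotently add the registry mapping to a hosts file.
# HOSTS_FILE is a scratch copy here; on a real node it would be /etc/hosts.
HOSTS_FILE=$(mktemp)
ENTRY="192.168.1.244 rpi-1 docker.registry.private"
grep -qF "docker.registry.private" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
# Running it again makes no change, so it is safe in a provisioning script.
grep -qF "docker.registry.private" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```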

&lt;p&gt;Next we need to configure K3s to know about the private registry. This is done by creating the file &lt;code&gt;/etc/rancher/k3s/registries.yaml&lt;/code&gt; on each node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mirrors:
  docker.registry.private:
    endpoint:
      - "https://docker.registry.private"
configs:
  "docker.registry.private":
    auth:
      username: registry
      password: &amp;lt;replace-with-your-password&amp;gt;
    tls:
      insecure_skip_verify: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The initial K3s Ansible setup placed everything on the worker nodes under &lt;code&gt;/etc/rancher/node&lt;/code&gt;, but &lt;code&gt;registries.yaml&lt;/code&gt; is only recognized from &lt;code&gt;/etc/rancher/k3s&lt;/code&gt;, so you will need to create this directory first.&lt;/p&gt;

&lt;p&gt;I wasn’t sure of the best way to configure the &lt;code&gt;tls&lt;/code&gt; section to trust the self-signed certificate presented by the registry’s ingress, so specifying &lt;code&gt;insecure_skip_verify: true&lt;/code&gt; disables certificate verification for now.&lt;/p&gt;
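&lt;p&gt;If you would rather keep verification enabled, the &lt;code&gt;tls&lt;/code&gt; section of &lt;code&gt;registries.yaml&lt;/code&gt; also accepts a &lt;code&gt;ca_file&lt;/code&gt; pointing at the CA certificate that signed the ingress certificate. A sketch of that alternative (the file path is an example):&lt;/p&gt;

```yaml
configs:
  "docker.registry.private":
    auth:
      username: registry
      password: replace-with-your-password
    tls:
      # CA certificate that signed the registry ingress certificate
      ca_file: /etc/rancher/k3s/registry-ca.pem
```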

&lt;p&gt;After getting this file in place on each node, you can adjust the permissions and restart K3s.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Master node
sudo chmod go-r /etc/rancher/k3s/registries.yaml
sudo systemctl restart k3s

# Worker nodes
sudo chmod go-r /etc/rancher/k3s/registries.yaml
sudo systemctl restart k3s-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, we should be able to deploy a pod that references an image from &lt;code&gt;docker.registry.private&lt;/code&gt;. Assuming we pushed the image &lt;code&gt;arm32v7/nginx&lt;/code&gt; to our registry, we can deploy the following pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: docker-registry-test
spec:
  containers:
  - name: nginx
    image: docker.registry.private/arm32v7/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything is working correctly, the pod should deploy and end up in the running state.&lt;/p&gt;

&lt;h3&gt;
  
  
  Jenkins Setup
&lt;/h3&gt;

&lt;p&gt;For deploying Jenkins, I primarily followed these two articles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://devopscube.com/setup-jenkins-on-kubernetes-cluster"&gt;How to Setup Jenkins on Kubernetes Cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/slalom-build/jenkins-on-kubernetes-4d8c3d9f2ece"&gt;Jenkins on Kubernetes: From Zero to Hero&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My configuration can be found here: &lt;a href="https://github.com/bbende/k3s-config/tree/main/jenkins"&gt;k3s-config/jenkins&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Unfortunately the standard Jenkins image doesn’t support &lt;code&gt;arm64&lt;/code&gt;, but the images under &lt;code&gt;jenkins4eval&lt;/code&gt; do, so using &lt;code&gt;jenkins4eval/jenkins:latest&lt;/code&gt; ended up working.&lt;/p&gt;

&lt;p&gt;In my configuration, I created an ingress with the host &lt;code&gt;jenkins.private&lt;/code&gt; and then mapped this hostname to the master node in &lt;code&gt;/etc/hosts&lt;/code&gt; on my laptop. This follows the same approach for how I’m accessing other services from my laptop, such as Longhorn UI and the private registry.&lt;/p&gt;

&lt;p&gt;The first time accessing &lt;code&gt;https://jenkins.private&lt;/code&gt;, you will be prompted to unlock Jenkins. The password is printed to the log of the Jenkins pod during start up. The above articles cover this in more detail.&lt;/p&gt;

&lt;p&gt;Assuming you are successfully logged in to Jenkins, the next thing to do is install the Kubernetes plugin.&lt;/p&gt;

&lt;h3&gt;
  
  
  Jenkins Kubernetes Plugin
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://plugins.jenkins.io/kubernetes/"&gt;Jenkins Kubernetes Plugin&lt;/a&gt; launches worker agents as Kubernetes pods, which means we get a fresh build environment for each job based on a pod specification.&lt;/p&gt;

&lt;p&gt;From the left hand menu, select &lt;em&gt;“Manage Jenkins”&lt;/em&gt; and then &lt;em&gt;“Manage Plugins”&lt;/em&gt;. Search for the Kubernetes Plugin and select &lt;em&gt;“Install without Restart”&lt;/em&gt;. I already have it installed, but the plugins page looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4BjLs0PE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/02-kubernetes-plugin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4BjLs0PE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/02-kubernetes-plugin.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the install completes, restart Jenkins by using the URL &lt;a href="https://jenkins.private/safeRestart"&gt;https://jenkins.private/safeRestart&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once Jenkins has restarted, navigate to &lt;em&gt;“Manage Jenkins”&lt;/em&gt; -&amp;gt; &lt;em&gt;“Manage Nodes and Clouds”&lt;/em&gt; -&amp;gt; &lt;em&gt;“Configure Clouds”&lt;/em&gt;, and add a cloud of type &lt;em&gt;Kubernetes&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V0XhTwpY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/04-configure-cloud-kubernetes-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V0XhTwpY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/04-configure-cloud-kubernetes-2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to enter the namespace where Jenkins is deployed, and then we can click &lt;em&gt;Test Connection&lt;/em&gt; to verify that Jenkins can communicate with Kubernetes.&lt;/p&gt;

&lt;p&gt;In the next section, we need to enter a value for the &lt;em&gt;Jenkins URL&lt;/em&gt; field. This is the URL used by the Jenkins worker pod to communicate back to the main Jenkins server. This URL should use the &lt;code&gt;ClusterIP&lt;/code&gt; service that we created as part of deploying Jenkins, so the URL should be &lt;code&gt;http://jenkins:8080&lt;/code&gt;.&lt;/p&gt;
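&lt;p&gt;For reference, a minimal sketch of what that &lt;code&gt;ClusterIP&lt;/code&gt; service might look like; the &lt;code&gt;app: jenkins&lt;/code&gt; selector label is an assumption and should match your Jenkins deployment:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  type: ClusterIP
  selector:
    app: jenkins        # assumed label on the Jenkins pod
  ports:
    - port: 8080
      targetPort: 8080
```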

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WeWWmcK6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/05-configure-cloud-kubernetes-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WeWWmcK6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/05-configure-cloud-kubernetes-3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under the &lt;em&gt;Advanced&lt;/em&gt; section, set the &lt;em&gt;Defaults Provider Template Name&lt;/em&gt; to &lt;code&gt;default-agent&lt;/code&gt;. This is the name of a pod template we are going to create to provide default values for all worker pods. The main reason we are defining a pod template is to override the default &lt;code&gt;jnlp&lt;/code&gt; container with one that supports &lt;code&gt;arm64&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Under &lt;em&gt;Pod Templates&lt;/em&gt;, click &lt;em&gt;Add Pod Template&lt;/em&gt;, enter the name as &lt;code&gt;default-agent&lt;/code&gt; and the namespace as &lt;code&gt;jenkins&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yPswyZo7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/06-pod-template.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yPswyZo7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/06-pod-template.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under the &lt;em&gt;Containers&lt;/em&gt; section, click &lt;em&gt;Add Container&lt;/em&gt;, enter a name of &lt;code&gt;jnlp&lt;/code&gt; and specify the image as &lt;code&gt;pi4k8s/inbound-agent:4.3&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BUm8jSDz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/07-container-template.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BUm8jSDz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/07-container-template.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We should now have the minimum working configuration for executing a pipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test Pipeline
&lt;/h3&gt;

&lt;p&gt;From the main Jenkins page, click &lt;em&gt;New Item&lt;/em&gt; and add a pipeline named &lt;code&gt;k3s-test-pipeline&lt;/code&gt;. Under the &lt;em&gt;Pipeline Definition&lt;/em&gt;, select &lt;code&gt;Pipeline Script&lt;/code&gt; and enter the following script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent {
        kubernetes {
            defaultContainer 'jnlp'
        }
    }
    stages {
        stage('Hello') {
            steps {
                echo 'Hello everyone! This is now running on a Kubernetes executor!'
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Click &lt;em&gt;Build Now&lt;/em&gt; and we should get a successful execution. The output of the job should print the YAML definition used for the worker pod, which should show the customizations we made:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7Iod-FQk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/08-test-pipeline.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7Iod-FQk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/k3s-rpi-jenkins/08-test-pipeline.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next post I’ll cover how to set up a pipeline that builds an image for a Spring Boot application and publishes it to our private registry.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k3s</category>
      <category>raspberrypi</category>
      <category>2021</category>
    </item>
    <item>
      <title>K3s on Raspberry Pi - cert-manager</title>
      <dc:creator>Bryan Bende</dc:creator>
      <pubDate>Thu, 01 Jul 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/bbende/k3s-on-raspberry-pi-cert-manager-1d9e</link>
      <guid>https://dev.to/bbende/k3s-on-raspberry-pi-cert-manager-1d9e</guid>
      <description>&lt;p&gt;In this post we’ll look at deploying &lt;a href="https://cert-manager.io/" rel="noopener noreferrer"&gt;cert-manager&lt;/a&gt; on K3s and how to use it with the Traefik Ingress Controller to enable TLS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Background
&lt;/h3&gt;

&lt;p&gt;In a previous post, we covered the &lt;a href="https://bryanbende.com/development/2021/05/08/k3s-raspberry-pi-ingress" rel="noopener noreferrer"&gt;Traefik Ingress Controller&lt;/a&gt; and created an example deployment using the &lt;code&gt;whoami&lt;/code&gt; container, along with an ingress over &lt;code&gt;http&lt;/code&gt;. We can use the IP address of any node, such as &lt;code&gt;http://192.168.1.244/foo&lt;/code&gt;, and get a response like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-cert-manager%2F01-whoami-http.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-cert-manager%2F01-whoami-http.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we change the URL to &lt;code&gt;https&lt;/code&gt;, we get a warning about a self-signed certificate. Clicking &lt;em&gt;View Certificate&lt;/em&gt; shows a self-signed Traefik certificate, and if we accept and continue, we get a &lt;em&gt;404 Not Found&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-cert-manager%2F03-whoami-https-certificate.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-cert-manager%2F03-whoami-https-certificate.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is because the &lt;code&gt;whoami&lt;/code&gt; ingress doesn’t have TLS enabled and the request is falling back to Traefik’s default &lt;code&gt;https&lt;/code&gt; handling. We can fix this by deploying cert-manager and using it to request a certificate for the &lt;code&gt;whoami&lt;/code&gt; ingress.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploying cert-manager
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://cert-manager.io/docs/installation/kubernetes/" rel="noopener noreferrer"&gt;cert-manager documentation&lt;/a&gt; covers the options for installing cert-manager on Kubernetes. We are going to use the manifest approach with a slight modification to ensure that &lt;code&gt;arm&lt;/code&gt; images are used.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sL \
https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml |\
sed -r 's/(image:.*):(v.*)$/\1-arm:\2/g' &amp;gt; cert-manager-arm.yaml

kubectl create namespace cert-manager
kubectl create -f cert-manager-arm.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach came from the article &lt;a href="https://opensource.com/article/20/3/ssl-letsencrypt-k3s" rel="noopener noreferrer"&gt;Make SSL certs easy with k3s&lt;/a&gt;.&lt;/p&gt;
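&lt;p&gt;To see what the &lt;code&gt;sed&lt;/code&gt; expression is doing, we can run it against a single sample &lt;code&gt;image:&lt;/code&gt; line from the manifest; it appends &lt;code&gt;-arm&lt;/code&gt; to the image name while leaving the version tag untouched:&lt;/p&gt;

```shell
# The sed rewrite applied to one illustrative manifest line:
# cert-manager-controller:v1.3.1 becomes cert-manager-controller-arm:v1.3.1
echo "image: quay.io/jetstack/cert-manager-controller:v1.3.1" \
  | sed -r 's/(image:.*):(v.*)$/\1-arm:\2/g'
```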

&lt;p&gt;After all of the resources are created, we should eventually see three running pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods --namespace cert-manager

NAME                                       READY   STATUS
cert-manager-webhook-7c58d9689f-74j7c      1/1     Running
cert-manager-7c5b8cb7cf-nzz64              1/1     Running
cert-manager-cainjector-67df6b6b68-lzkng   1/1     Running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, we can follow the steps from the cert-manager documentation to &lt;a href="https://cert-manager.io/docs/installation/kubernetes/#verifying-the-installation" rel="noopener noreferrer"&gt;verify the installation is working correctly&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Traefik Ingress with cert-manager
&lt;/h3&gt;

&lt;p&gt;In order to obtain certificates from cert-manager, we need to create an issuer to act as a certificate authority. We have the option of creating an &lt;code&gt;Issuer&lt;/code&gt; which is a namespaced resource, or a &lt;code&gt;ClusterIssuer&lt;/code&gt; which is a global resource. We’ll create a self-signed &lt;code&gt;ClusterIssuer&lt;/code&gt; using the following definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: self-signed-issuer
spec:
  selfSigned: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then update the &lt;code&gt;whoami&lt;/code&gt; ingress to obtain a certificate from the &lt;code&gt;ClusterIssuer&lt;/code&gt; and enable TLS. This is done by adding the following annotations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cert-manager.io/cluster-issuer: self-signed-issuer
traefik.ingress.kubernetes.io/router.tls: "true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The spec also needs a new &lt;code&gt;tls&lt;/code&gt; section to specify the hostname and the name of the secret that will store the certificate created by cert-manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tls:
- hosts:
  - whoami
  secretName: whoami-tls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full ingress definition looks like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-tls
  namespace: whoami
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/cluster-issuer: self-signed-issuer
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  tls:
  - hosts:
    - whoami
    secretName: whoami-tls
  rules:
    - host: whoami
      http:
        paths:
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
          - path: /foo
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After creating this ingress, we can inspect the &lt;code&gt;whoami-tls&lt;/code&gt; secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl --namespace whoami describe secret whoami-tls

Name: whoami-tls
Namespace: whoami
Labels: &amp;lt;none&amp;gt;
Annotations: cert-manager.io/alt-names: whoami
              cert-manager.io/certificate-name: whoami-tls
              cert-manager.io/common-name:
              cert-manager.io/ip-sans:
              cert-manager.io/issuer-group: cert-manager.io
              cert-manager.io/issuer-kind: ClusterIssuer
              cert-manager.io/issuer-name: self-signed-issuer
              cert-manager.io/uri-sans:

Type: kubernetes.io/tls

Data
====
ca.crt: 1017 bytes
tls.crt: 1017 bytes
tls.key: 1679 bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The secret type is &lt;code&gt;kubernetes.io/tls&lt;/code&gt; and the data holds the CA certificate, the &lt;code&gt;whoami&lt;/code&gt; public certificate (tls.crt), and the &lt;code&gt;whoami&lt;/code&gt; private key (tls.key).&lt;/p&gt;

&lt;p&gt;In order to access the &lt;code&gt;whoami&lt;/code&gt; service via &lt;code&gt;https&lt;/code&gt;, the hostname of the URL must match the hostname of the certificate, which in this case is &lt;code&gt;whoami&lt;/code&gt;. To make this work, we can modify &lt;code&gt;/etc/hosts&lt;/code&gt; so that &lt;code&gt;whoami&lt;/code&gt; maps to the IP address of one of our nodes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;192.168.1.244 rpi-1 whoami
192.168.1.245 rpi-2
192.168.1.246 rpi-3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now if we access &lt;code&gt;https://whoami/bar&lt;/code&gt; in our browser, we still get a warning about a self-signed certificate, but this time the certificate is the &lt;code&gt;whoami&lt;/code&gt; certificate and not the default Traefik certificate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-cert-manager%2F04-whoami-https-certificate-after.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-cert-manager%2F04-whoami-https-certificate-after.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we accept the warning and continue, we now get a successful response!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-cert-manager%2F05-whoami-https.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-cert-manager%2F05-whoami-https.png"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k3s</category>
      <category>raspberrypi</category>
      <category>2021</category>
    </item>
    <item>
      <title>K3s on Raspberry Pi - Volumes and Storage</title>
      <dc:creator>Bryan Bende</dc:creator>
      <pubDate>Sat, 15 May 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/bbende/k3s-on-raspberry-pi-volumes-and-storage-1om5</link>
      <guid>https://dev.to/bbende/k3s-on-raspberry-pi-volumes-and-storage-1om5</guid>
      <description>&lt;p&gt;In this post we’ll look at how volumes and storage work in a K3s cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Local Path Provisioner
&lt;/h3&gt;

&lt;p&gt;K3s comes with a default &lt;a href="https://rancher.com/docs/k3s/latest/en/storage" rel="noopener noreferrer"&gt;Local Path Provisioner&lt;/a&gt; that allows creating a &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; backed by host-based storage, meaning the volume uses storage on the node where the pod is scheduled.&lt;/p&gt;

&lt;p&gt;Let’s take a look at an example…&lt;/p&gt;

&lt;p&gt;Create a specification for a &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; and use the &lt;code&gt;storageClassName&lt;/code&gt; of &lt;code&gt;local-path&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a specification for a &lt;code&gt;Pod&lt;/code&gt; that binds to this PVC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: local-path-test
  namespace: default
spec:
  containers:
  - name: local-path-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: local-path-pvc
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: local-path-pvc
    persistentVolumeClaim:
      claimName: local-path-pvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create these two resources with &lt;code&gt;kubectl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f pvc-local-path.yaml
kubectl create -f pod-local-path-test.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a minute or so, we should be able to see our running pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -o wide
NAME              READY   STATUS    RESTARTS   AGE    IP           NODE
local-path-test   1/1     Running   0          7m7s   10.42.2.34   rpi-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pod only has one container, so we can get a shell to the container with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -it local-path-test -- sh
/ #
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a test file in &lt;code&gt;/data&lt;/code&gt;, which is the &lt;code&gt;mountPath&lt;/code&gt; for the persistent volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ # echo "testing" &amp;gt; /data/test.txt
/ # ls -l /data/
total 4
-rw-r--r-- 1 root root 8 May 15 20:24 test.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output above shows that this pod is running on the node &lt;code&gt;rpi-2&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If we exit out of the shell for the container, and then SSH to the node &lt;code&gt;rpi-2&lt;/code&gt;, we can expect to find the file &lt;code&gt;test.txt&lt;/code&gt; somewhere, since the persistent volume is backed by host-based storage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh pi@rpi-2
sudo find / -name test.txt
/var/lib/rancher/k3s/storage/pvc-a5a439ab-ea94-481d-bee2-9e5c0e0f8008_default_local-path-pvc/test.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows that we get out-of-the-box support for persistent storage with K3s, but what if our pod goes down and is relaunched on a different node?&lt;/p&gt;

&lt;p&gt;In that case, the new node won’t have access to the data from &lt;code&gt;/var/lib/rancher/k3s/storage&lt;/code&gt; on the previous node. To handle that case, we need distributed block storage.&lt;/p&gt;

&lt;p&gt;With distributed block storage, the storage is decoupled from the pods, and the &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; can be mounted to the pod regardless of where the pod is running.&lt;/p&gt;

&lt;h3&gt;
  
  
  Longhorn
&lt;/h3&gt;

&lt;p&gt;Longhorn is a “lightweight, reliable and easy-to-use distributed block storage system for Kubernetes.” It was also created by Rancher Labs, the company behind K3s, which makes the integration with K3s very easy.&lt;/p&gt;

&lt;p&gt;For the most part, we can just follow the &lt;a href="https://longhorn.io/docs/0.8.0/install/install-with-kubectl/" rel="noopener noreferrer"&gt;Longhorn Install Documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, make sure you install the &lt;code&gt;open-iscsi&lt;/code&gt; package on all nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install open-iscsi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On my first attempt, I forgot to install this package and ran into issues later.&lt;/p&gt;

&lt;p&gt;Deploy Longhorn using &lt;code&gt;kubectl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v0.8.0/deploy/longhorn.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see everything created by listing all resources in the &lt;code&gt;longhorn-system&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all --namespace longhorn-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are too many resources to list here, but wait until everything shows a &lt;code&gt;Running&lt;/code&gt; status before proceeding.&lt;/p&gt;

&lt;p&gt;Longhorn has an admin UI that we can access by creating an ingress:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: longhorn-system
  name: longhorn-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - host: longhorn-ui
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: longhorn-frontend
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Originally I wanted to remove the &lt;code&gt;host&lt;/code&gt; entry to let it bind to all hosts, and specify a more specific path, but currently there is &lt;a href="https://github.com/longhorn/longhorn/issues/1745" rel="noopener noreferrer"&gt;an issue&lt;/a&gt; where the path must be &lt;code&gt;/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So we can leave the &lt;code&gt;host&lt;/code&gt; and &lt;code&gt;path&lt;/code&gt; as shown, and work around this by creating an &lt;code&gt;/etc/hosts&lt;/code&gt; entry on our local machine to map the hostname of &lt;code&gt;longhorn-ui&lt;/code&gt; to one of our nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /etc/hosts
192.168.1.244 rpi-1 longhorn-ui
192.168.1.245 rpi-2
192.168.1.246 rpi-3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thanks to &lt;a href="https://www.jericdy.com/blog/installing-k3s-with-longhorn-and-usb-storage-on-raspberry-pi/" rel="noopener noreferrer"&gt;Jeric Dy’s post&lt;/a&gt; for this idea.&lt;/p&gt;

&lt;p&gt;In a browser, if you go to &lt;code&gt;http://longhorn-ui/&lt;/code&gt;, you should see a dashboard like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-rpi-volumes-storage%2F01-longhorn-ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-rpi-volumes-storage%2F01-longhorn-ui.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can do a similar test as we did earlier to verify that everything is working…&lt;/p&gt;

&lt;p&gt;Create a specification for a &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; and use the &lt;code&gt;storageClassName&lt;/code&gt; of &lt;code&gt;longhorn&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a specification for a &lt;code&gt;Pod&lt;/code&gt; that binds to this PVC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: longhorn-test
spec:
  containers:
  - name: longhorn-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: longhorn-pvc
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: longhorn-pvc
    persistentVolumeClaim:
      claimName: longhorn-pvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create these resources with &lt;code&gt;kubectl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f pvc-longhorn.yaml
kubectl create -f pod-longhorn-test.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a minute or so, we should be able to see the pod running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP           NODE
longhorn-test   1/1     Running   0          42s   10.42.2.37   rpi-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get a shell to the container and create a file on the persistent volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -it longhorn-test -- sh
/ # echo "testing" &amp;gt; /data/test.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the Longhorn UI, we should now be able to see this volume under the Volumes tab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-rpi-volumes-storage%2F02-longhorn-ui-volumes.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-rpi-volumes-storage%2F02-longhorn-ui-volumes.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Drilling into the PVC shows that the volume is replicated across all three nodes, with the data located under &lt;code&gt;/var/lib/longhorn/replicas&lt;/code&gt; on each node:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-rpi-volumes-storage%2F03-longhorn-ui-volume.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-rpi-volumes-storage%2F03-longhorn-ui-volume.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can’t see the file on disk quite as easily, because Longhorn stores the data in block-device image files, but we can inspect the location on one of the nodes to see what is there:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh pi@rpi-2

sudo ls -l /var/lib/longhorn/replicas/
total 4
drwx------ 2 root root 4096 May 15 17:03 pvc-317f4b66-822c-4483-ac44-84541dfac09b-bb6c2f0c

sudo ls -l /var/lib/longhorn/replicas/pvc-317f4b66-822c-4483-ac44-84541dfac09b-bb6c2f0c/
total 49828
-rw------- 1 root root 4096 May 15 17:06 revision.counter
-rw-r--r-- 1 root root 1073741824 May 15 17:06 volume-head-000.img
-rw-r--r-- 1 root root 126 May 15 17:03 volume-head-000.img.meta
-rw-r--r-- 1 root root 142 May 15 17:03 volume.meta
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, if our pod on node &lt;code&gt;rpi-2&lt;/code&gt; goes down and is relaunched on another node, it will be able to bind to the same persistent volume, since it came from Longhorn and not from host-based storage.&lt;/p&gt;
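&lt;p&gt;A rough way to verify this (using the pod and PVC names from above) is to delete the pod and recreate it from the same spec; whichever node it lands on, it should reattach to the same Longhorn volume and still see the file:&lt;br&gt;
&lt;/p&gt;

```shell
# Delete the pod; the PersistentVolumeClaim and its data remain
kubectl delete pod longhorn-test

# Recreate the pod from the same spec; it binds to the existing PVC
kubectl create -f pod-longhorn-test.yaml

# Once the pod is Running again, the file written earlier should still exist
kubectl exec -it longhorn-test -- cat /data/test.txt
```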

</description>
      <category>kubernetes</category>
      <category>k3s</category>
      <category>raspberrypi</category>
      <category>2021</category>
    </item>
    <item>
      <title>K3s on Raspberry Pi - Ingress</title>
      <dc:creator>Bryan Bende</dc:creator>
      <pubDate>Sat, 08 May 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/bbende/k3s-on-raspberry-pi-ingress-lf4</link>
      <guid>https://dev.to/bbende/k3s-on-raspberry-pi-ingress-lf4</guid>
      <description>&lt;p&gt;In this post we’ll look at how ingress works in a K3s cluster. For background, I recommend reading the &lt;a href="https://rancher.com/docs/k3s/latest/en/networking/" rel="noopener noreferrer"&gt;Networking Section&lt;/a&gt; of the K3s documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ingress Overview
&lt;/h3&gt;

&lt;p&gt;K3s automatically deploys the &lt;a href="https://doc.traefik.io/traefik/routing/providers/kubernetes-ingress" rel="noopener noreferrer"&gt;Traefik Ingress Controller&lt;/a&gt; and provides a service load balancer called Klipper. To see everything deployed in the &lt;code&gt;kube-system&lt;/code&gt; namespace, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all --namespace kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;NOTE: I have my default context set to &lt;code&gt;rpi-k3s&lt;/code&gt; so I don’t have to specify &lt;code&gt;--context&lt;/code&gt; on every command.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This shows the following resources related to Traefik:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pod/traefik-97b44b794-dbmz2  
service/traefik
deployment.apps/traefik
replicaset.apps/traefik-97b44b794
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the following resource related to the Klipper load balancer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pod/svclb-traefik-fc57n
pod/svclb-traefik-mj4md
pod/svclb-traefik-4qnbh
daemonset.apps/svclb-traefik
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The traefik deployment contains the specification for a pod with one container using the image &lt;code&gt;rancher/library-traefik:2.4.8&lt;/code&gt; and having container ports &lt;code&gt;8000&lt;/code&gt; (web) and &lt;code&gt;8443&lt;/code&gt; (websecure).&lt;/p&gt;

&lt;p&gt;The traefik service specifies a LoadBalancer for the traefik pod, mapping port &lt;code&gt;80&lt;/code&gt; of the service to port &lt;code&gt;8000&lt;/code&gt; on the traefik container, and port &lt;code&gt;443&lt;/code&gt; of the service to port &lt;code&gt;8443&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Klipper then creates a DaemonSet called &lt;code&gt;svclb-traefik&lt;/code&gt;, which creates a pod on each node to act as a proxy to the service. Each of these pods is accessible from the node’s external IP address, and exposes ports &lt;code&gt;80&lt;/code&gt; and &lt;code&gt;443&lt;/code&gt;, which map to the respective ports on the service.&lt;/p&gt;
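&lt;p&gt;As a quick sanity check of these mappings, the service and the DaemonSet can be inspected directly:&lt;br&gt;
&lt;/p&gt;

```shell
# Show the traefik LoadBalancer service with its port mappings and external IPs
kubectl get service traefik --namespace kube-system -o wide

# Show the Klipper proxy pods created by the svclb-traefik DaemonSet
kubectl get daemonset svclb-traefik --namespace kube-system
```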

&lt;p&gt;The overall setup looks something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-rpi-ingress%2F01-traefik-klipper-default-setup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-rpi-ingress%2F01-traefik-klipper-default-setup.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next we can deploy an example application and expose it through Traefik.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ingress Test
&lt;/h3&gt;

&lt;p&gt;This example is adapted from the Traefik documentation.&lt;/p&gt;

&lt;p&gt;Create a namespace called &lt;code&gt;whoami&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Namespace
apiVersion: v1
metadata:
  name: whoami
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a deployment for running two pods with the &lt;code&gt;whoami&lt;/code&gt; container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoami
  namespace: whoami
  labels:
    app: traefiklabs
    name: whoami

spec:
  replicas: 2
  selector:
    matchLabels:
      app: traefiklabs
      task: whoami
  template:
    metadata:
      labels:
        app: traefiklabs
        task: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a service for the &lt;code&gt;whoami&lt;/code&gt; deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: whoami

spec:
  ports:
    - name: http
      port: 80
  selector:
    app: traefiklabs
    task: whoami
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create an ingress to link the &lt;code&gt;whoami&lt;/code&gt; service to Traefik:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: whoami
  namespace: whoami
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web

spec:
  rules:
    - http:
        paths:
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
          - path: /foo
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the above ingress we should now be able to access the paths &lt;code&gt;/foo&lt;/code&gt; or &lt;code&gt;/bar&lt;/code&gt; on port 80, using the external IP address of any node. The overall setup now looks like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-rpi-ingress%2F02-whoami-ingress-test.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk3s-rpi-ingress%2F02-whoami-ingress-test.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we open a browser and navigate to &lt;code&gt;http://192.168.1.244/bar&lt;/code&gt;, we get the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hostname: whoami-7d666f84d8-4fs7c
IP: 127.0.0.1
IP: ::1
IP: 10.42.1.4
IP: fe80::c414:76ff:fe4c:75cc
RemoteAddr: 10.42.2.3:39256
GET /bar HTTP/1.1
Host: 192.168.1.244
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:88.0) Gecko/20100101 Firefox/88.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.5
Dnt: 1
Sec-Gpc: 1
Upgrade-Insecure-Requests: 1
X-Forwarded-For: 10.42.0.0
X-Forwarded-Host: 192.168.1.244
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-97b44b794-dbmz2
X-Real-Ip: 10.42.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows the request reached one of the &lt;code&gt;whoami&lt;/code&gt; containers at &lt;code&gt;10.42.1.4&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If we refresh the page, we now see the response came from the other &lt;code&gt;whoami&lt;/code&gt; container at &lt;code&gt;10.42.2.5&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Since Traefik is performing round-robin load balancing, requests will continue to alternate between the two &lt;code&gt;whoami&lt;/code&gt; containers.&lt;/p&gt;
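&lt;p&gt;One way to watch this from the command line is a small &lt;code&gt;curl&lt;/code&gt; loop against the same node IP; the &lt;code&gt;Hostname&lt;/code&gt; line should alternate between the two pods:&lt;br&gt;
&lt;/p&gt;

```shell
# Issue several requests and print just the responding pod's hostname
for i in 1 2 3 4; do
  curl -s http://192.168.1.244/bar | grep Hostname
done
```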

</description>
      <category>kubernetes</category>
      <category>k3s</category>
      <category>raspberrypi</category>
      <category>2021</category>
    </item>
    <item>
      <title>K3s on Raspberry Pi - Initial Setup</title>
      <dc:creator>Bryan Bende</dc:creator>
      <pubDate>Fri, 07 May 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/bbende/k3s-on-raspberry-pi-initial-setup-45in</link>
      <guid>https://dev.to/bbende/k3s-on-raspberry-pi-initial-setup-45in</guid>
      <description>&lt;p&gt;As part of trying to learn more about Kubernetes, I thought it’d be interesting to setup a mini cluster running on Raspberry Pis. I had no real purpose for doing this, but figured it would be a good learning experience and would leave me with a somewhat realistic environment.&lt;/p&gt;

&lt;p&gt;There are already a ton of great resources that cover various aspects of running Kubernetes on Raspberry Pi. This post is just a summary of the steps that worked for me for reference.&lt;/p&gt;

&lt;h1&gt;
  
  
  Hardware
&lt;/h1&gt;

&lt;p&gt;I decided a three node cluster would be enough to get started. Here are the items I purchased:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3 x &lt;a href="https://thepihut.com/collections/raspberry-pi/products/raspberry-pi-4-model-b?variant=31994565689406" rel="noopener noreferrer"&gt;Raspberry Pi 4 Model B (8GB)&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;3 x &lt;a href="https://thepihut.com/collections/raspberry-pi-sd-cards/products/sandisk-microsd-card-class-10-a1?variant=39641172377795" rel="noopener noreferrer"&gt;SanDisk MicroSD Card (32GB)&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;3 x &lt;a href="https://thepihut.com/collections/raspberry-pi-power-supplies/products/raspberry-pi-psu-us" rel="noopener noreferrer"&gt;Official US Raspberry Pi 4 Power Supply (5.1V 3A)&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;2 x &lt;a href="https://thepihut.com/collections/raspberry-pi-cases/products/cluster-case-for-raspberry-pi" rel="noopener noreferrer"&gt;Cluster Case for Raspberry Pi (with Fans)&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most likely the 4GB model would have been more than enough, but for $20 more you can double the RAM to 8GB. Also, I already had an 8-port switch with some ethernet cables, although each Pi also has on-board Wifi as an option.&lt;/p&gt;

&lt;p&gt;The instructions for setting up the cluster case are &lt;a href="https://thepihut.com/blogs/raspberry-pi-tutorials/cluster-case-assembly-instructions" rel="noopener noreferrer"&gt;available online&lt;/a&gt;. I think it could have been made a little simpler, but overall it wasn’t too hard. Pay close attention to the orientation of everything in the pictures.&lt;/p&gt;

&lt;p&gt;Here is what my setup looked like after putting it together: &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk8s-rpi%2F00-rpi-cluster.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fk8s-rpi%2F00-rpi-cluster.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Raspberry Pi Setup
&lt;/h1&gt;

&lt;p&gt;Before we can prepare the SD cards, we have to decide which operating system to use, which then leads to thinking about which Kubernetes distribution to use.&lt;/p&gt;

&lt;p&gt;After doing some research, it seemed like there were two main options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Raspberry Pi OS 64-bit&lt;/code&gt; + &lt;code&gt;K3s&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Ubuntu 20.04&lt;/code&gt; + &lt;code&gt;MicroK8s&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since Raspberry Pi OS is the official operating system, I decided to go with that and give K3s a try.&lt;/p&gt;

&lt;h3&gt;
  
  
  Image SD Cards
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Download the latest &lt;a href="https://downloads.raspberrypi.org/raspios_arm64/images/raspios_arm64-2021-04-09/2021-03-04-raspios-buster-arm64.zip" rel="noopener noreferrer"&gt;64-bit version of Raspberry Pi OS&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the &lt;a href="https://www.raspberrypi.org/software/" rel="noopener noreferrer"&gt;Raspberry Pi Imager&lt;/a&gt; to image the SD cards &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under &lt;em&gt;Operating System&lt;/em&gt;, choose &lt;em&gt;Use Custom&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Select the OS image downloaded in step 1&lt;/li&gt;
&lt;li&gt;Under &lt;em&gt;Storage&lt;/em&gt;, select the SD card&lt;/li&gt;
&lt;li&gt;Customize settings by pressing &lt;code&gt;CMD+SHIFT+X&lt;/code&gt; on Mac&lt;/li&gt;
&lt;li&gt;Set a hostname like &lt;code&gt;rpi-1&lt;/code&gt;, enable password-based SSH&lt;/li&gt;
&lt;li&gt;Select &lt;em&gt;Write&lt;/em&gt; to begin imaging&lt;/li&gt;
&lt;li&gt;Repeat the process for each SD card&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Insert the SD cards into the Pis&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Connect Pis to the switch with ethernet cables&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Connect one ethernet cable from the switch to your home router&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Connect a power supply to each Pi&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Plug the power supplies into outlets&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At this point, the Pis should boot up and get assigned a dynamic IP address on your home network, just like any other device. You can use your router’s admin console to determine the IP addresses.&lt;/p&gt;
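&lt;p&gt;If your router’s admin console doesn’t make this easy, a ping scan from your local machine can also find the Pis, assuming a &lt;code&gt;192.168.1.0/24&lt;/code&gt; network and that &lt;code&gt;nmap&lt;/code&gt; is installed:&lt;br&gt;
&lt;/p&gt;

```shell
# Ping scan the local subnet and list the hosts that respond
nmap -sn 192.168.1.0/24
```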

&lt;p&gt;Once you have the IP addresses, you can test SSH to each one by running &lt;code&gt;ssh pi@&amp;lt;ip-address&amp;gt;&lt;/code&gt;, using the password you set when imaging the SD cards.&lt;/p&gt;

&lt;h3&gt;
  
  
  Assign Static IP Addresses
&lt;/h3&gt;

&lt;p&gt;My router had a DHCP range of &lt;code&gt;192.168.1.2&lt;/code&gt; - &lt;code&gt;192.168.1.255&lt;/code&gt;, so I adjusted the ending value to &lt;code&gt;192.168.1.235&lt;/code&gt; in order to reserve some addresses for static assignment. That means we can pick three consecutive addresses somewhere between &lt;code&gt;192.168.1.236&lt;/code&gt; - &lt;code&gt;192.168.1.255&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For this example, we’ll use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;192.168.1.244
192.168.1.245
192.168.1.246
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To set the static addresses, do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;SSH to the first Pi&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; ssh pi@&amp;lt;current-ip-address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install &lt;code&gt;vim&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; sudo apt-get install vim
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit &lt;code&gt;/etc/dhcpcd.conf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; sudo vim /etc/dhcpcd.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Find the section for static address, uncomment and edit:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; interface eth0
 static ip_address=192.168.1.244/24
 static routers=192.168.1.1
 static domain_name_servers=192.168.1.1
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Replace &lt;code&gt;192.168.1.1&lt;/code&gt; with your router’s IP address.&lt;/p&gt;

&lt;p&gt;Replace &lt;code&gt;192.168.1.244&lt;/code&gt; with the static address for the first Pi.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Reboot the Pi&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; sudo reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After waiting a minute or two, SSH with the static IP&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; ssh pi@192.168.1.244
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Repeat the steps for each Pi.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Setup &lt;code&gt;/etc/hosts&lt;/code&gt; mappings on local machine&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; 192.168.1.244 rpi-1
 192.168.1.245 rpi-2
 192.168.1.246 rpi-3
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create aliases in &lt;code&gt;~/.zshrc&lt;/code&gt; for easy SSH access&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; alias sshrp1='ssh pi@rpi-1'
 alias sshrp2='ssh pi@rpi-2'
 alias sshrp3='ssh pi@rpi-3'
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At this point you can test the SSH aliases to confirm that you can SSH to each Pi using a hostname mapped to the static IP address. For more detailed information, see &lt;a href="https://www.linuxscrew.com/raspberry-pi-static-ip" rel="noopener noreferrer"&gt;Setting a Static IP Address on Raspberry Pi&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup Password-less SSH
&lt;/h3&gt;

&lt;p&gt;This is not really a requirement for running Kubernetes, but it is more convenient not to type passwords over and over, and it will allow us to use Ansible to issue commands to the Raspberry Pi nodes.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Generate an SSH key on your local machine&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; ssh-keygen
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Accept all the defaults and don’t enter a password.&lt;/p&gt;

&lt;p&gt;This will produce &lt;code&gt;~/.ssh/id_rsa&lt;/code&gt; (private key) and &lt;code&gt;~/.ssh/id_rsa.pub&lt;/code&gt; (public key)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy the content of &lt;code&gt;~/.ssh/id_rsa.pub&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;SSH to &lt;code&gt;rpi-1&lt;/code&gt; and run:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; mkdir ~/.ssh
 touch ~/.ssh/authorized_keys
 chmod 0700 ~/.ssh
 chmod 0600 ~/.ssh/authorized_keys
 vi ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Paste contents of the copied &lt;code&gt;id_rsa.pub&lt;/code&gt; and save the file.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Repeat the process for each Pi&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At this point, you should be able to run any of the SSH aliases without entering a password.&lt;/p&gt;
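&lt;p&gt;If &lt;code&gt;ssh-copy-id&lt;/code&gt; is available on your local machine, steps 2 through 4 can be done with a single command per Pi, which creates &lt;code&gt;~/.ssh/authorized_keys&lt;/code&gt; with the correct permissions for you:&lt;br&gt;
&lt;/p&gt;

```shell
# Append the local public key to each Pi's authorized_keys
ssh-copy-id pi@rpi-1
ssh-copy-id pi@rpi-2
ssh-copy-id pi@rpi-3
```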

&lt;p&gt;I also followed a similar process to setup password-less SSH between all of the Raspberry Pi nodes, but this is also just a nice-to-have and not necessary.&lt;/p&gt;

&lt;h1&gt;
  
  
  Install k3s
&lt;/h1&gt;

&lt;p&gt;For installing k3s, I used the &lt;a href="https://github.com/k3s-io/k3s-ansible" rel="noopener noreferrer"&gt;k3s-ansible&lt;/a&gt; setup, which required that I get Ansible on my laptop. I already had &lt;code&gt;pip3&lt;/code&gt; installed through &lt;code&gt;homebrew&lt;/code&gt;, so installing Ansible amounted to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install ansible --user
export PATH="/Users/bbende/Library/Python/3.7/bin:$PATH"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now for running k3s-ansible…&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Clone the &lt;a href="https://github.com/k3s-io/k3s-ansible" rel="noopener noreferrer"&gt;k3s-ansible&lt;/a&gt; repo&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create the inventory by copying the example:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; cp -R inventory/sample inventory/rpi
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit &lt;code&gt;inventory/rpi/hosts.ini&lt;/code&gt; to look like the following:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; [master]
 192.168.1.244

 [node]
 192.168.1.245
 192.168.1.246

 [k3s_cluster:children]
 master
 node
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit &lt;code&gt;inventory/rpi/group_vars/all.yml&lt;/code&gt; to set the k3s_version:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; k3s_version: v1.21.0+k3s1
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Launch the setup; this will take a few minutes:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; ansible-playbook site.yml -i inventory/rpi/hosts.ini
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
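
&lt;p&gt;If the playbook fails partway through, a quick way to confirm that Ansible can actually reach every node is the built-in &lt;code&gt;ping&lt;/code&gt; module, run against the same inventory (depending on how your inventory is set up, you may also need to pass the remote user, e.g. &lt;code&gt;-u pi&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible all -i inventory/rpi/hosts.ini -m ping
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;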

&lt;h3&gt;
  
  
  Configure kubectl
&lt;/h3&gt;

&lt;p&gt;Assuming the setup completed successfully, you can configure &lt;code&gt;kubectl&lt;/code&gt; on your local machine to connect to the k3s cluster.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Transfer the Kubeconfig from the master node to your local machine&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; scp pi@192.168.1.244:~/.kube/config ~/.kube/config-rpi-k3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure &lt;code&gt;KUBECONFIG&lt;/code&gt; environment variable in &lt;code&gt;.zshrc&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; export KUBECONFIG="$HOME/.kube/config"
 export KUBECONFIG="$KUBECONFIG:$HOME/.kube/config-rpi-k3s"
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use &lt;code&gt;kubectl&lt;/code&gt; to check the available config contexts&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; kubectl config get-contexts

 CURRENT   NAME             CLUSTER          AUTHINFO         NAMESPACE
 *         docker-desktop   docker-desktop   docker-desktop
           minikube         minikube         minikube
           rpi-k3s          default          default
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;If the name of the Raspberry Pi context is something different, you can rename it using the command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; kubectl config rename-context &amp;lt;CURRENT_NAME&amp;gt; &amp;lt;NEW_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use &lt;code&gt;kubectl&lt;/code&gt; to view the nodes of the &lt;code&gt;rpi-k3s&lt;/code&gt; cluster&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; kubectl --context rpi-k3s get nodes

 NAME    STATUS   ROLES                  AGE   VERSION
 rpi-3   Ready    &amp;lt;none&amp;gt;           9d    v1.21.0+k3s1
 rpi-2   Ready    &amp;lt;none&amp;gt;           9d    v1.21.0+k3s1
 rpi-1   Ready    control-plane,master   9d    v1.21.0+k3s1
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At this point you should have a working k3s cluster to play around with.&lt;/p&gt;
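
&lt;p&gt;To avoid passing &lt;code&gt;--context rpi-k3s&lt;/code&gt; on every command, you can switch the active context once and then use &lt;code&gt;kubectl&lt;/code&gt; directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config use-context rpi-k3s
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;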

&lt;p&gt;As a next step, you can try following the k3s docs for &lt;a href="https://rancher.com/docs/k3s/latest/en/installation/kube-dashboard/" rel="noopener noreferrer"&gt;installing the Kubernetes Dashboard&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k3s</category>
      <category>raspberrypi</category>
      <category>2021</category>
    </item>
    <item>
      <title>Apache NiFi SAML Authentication with Keycloak</title>
      <dc:creator>Bryan Bende</dc:creator>
      <pubDate>Wed, 17 Feb 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/bbende/apache-nifi-saml-authentication-with-keycloak-1o3l</link>
      <guid>https://dev.to/bbende/apache-nifi-saml-authentication-with-keycloak-1o3l</guid>
      <description>&lt;p&gt;One of the features I worked on for the 1.13.0 release of NiFi was the ability to authenticate via a SAML identity provider (IDP). In this post I’ll show how you can setup NiFi to use Keycloak as the SAML IDP.&lt;/p&gt;

&lt;h3&gt;
  
  
  Initial NiFi Setup
&lt;/h3&gt;

&lt;p&gt;In order to perform any type of authentication, we first need a secured NiFi instance. There are already many posts that cover this topic, so the starting point will be assuming that you can configure NiFi with a keystore, truststore, and https host/port.&lt;/p&gt;

&lt;p&gt;The following properties need to be modified from the default config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi.remote.input.secure=true

nifi.web.http.host=
nifi.web.http.port=

nifi.web.https.host=localhost
nifi.web.https.port=8443

nifi.security.keystore=/path/to/keystore.jks
nifi.security.keystoreType=JKS
nifi.security.keystorePasswd=changeit
nifi.security.keyPasswd=changeit

nifi.security.truststore=/path/to/truststore.jks
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=changeit

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Keycloak Setup
&lt;/h3&gt;

&lt;p&gt;Download the latest version of &lt;a href="https://www.keycloak.org/downloads" rel="noopener noreferrer"&gt;Keycloak&lt;/a&gt; and extract it somewhere. For this post I am using 12.0.2, but any recent version should be similar. After extracting, start Keycloak using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd keycloak-12.0.2
./bin/standalone.sh -Djboss.socket.binding.port-offset=100

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;port-offset&lt;/em&gt; is an easy way to increment the default port by 100 to avoid conflicts with other services that may already be using the default port. This makes the default port 8180, instead of 8080.&lt;/p&gt;

&lt;p&gt;Navigate to &lt;a href="http://localhost:8180" rel="noopener noreferrer"&gt;http://localhost:8180&lt;/a&gt; in your browser and follow the instructions to create an admin user for the Administration Console. After that, click the link for the Administration Console and login with the newly created user.&lt;/p&gt;

&lt;p&gt;In the admin console, select &lt;em&gt;Clients&lt;/em&gt; from the menu on the left, and then click the &lt;em&gt;Create&lt;/em&gt; button to add a new client. Enter a &lt;em&gt;Client ID&lt;/em&gt; (we’ll need this later when configuring NiFi), select SAML as the &lt;em&gt;Client Protocol&lt;/em&gt;, and click &lt;em&gt;Save&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F01-keycloak-add-client.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F01-keycloak-add-client.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are a lot of options that can be configured, but we’ll mostly stick with defaults for now and focus on the minimal configuration to get a working setup.&lt;/p&gt;

&lt;p&gt;Configure &lt;em&gt;Root URL&lt;/em&gt;, &lt;em&gt;Valid Redirect URIs&lt;/em&gt;, &lt;em&gt;Base URL&lt;/em&gt;, and &lt;em&gt;Master SAML Processing URL&lt;/em&gt; as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F02-keycloak-nifi-client-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F02-keycloak-nifi-client-1.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since NiFi’s SAML implementation doesn’t use a single processing URL, we also need to configure the fine-grained SAML URLs. The values for the URLs should look like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F03-keycloak-nifi-client-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F03-keycloak-nifi-client-2.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We also need to tell Keycloak about the key that NiFi is going to use to sign SAML requests. So click on the &lt;em&gt;SAML Keys&lt;/em&gt; tab, and then click &lt;em&gt;Import&lt;/em&gt;. We are going to import from the &lt;em&gt;keystore.jks&lt;/em&gt; that was used in &lt;em&gt;nifi.properties&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F06-keycloak-client-saml-key.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F06-keycloak-client-saml-key.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We now need to create some users to authenticate with for testing. So click &lt;em&gt;Users&lt;/em&gt; from the menu on the left and then click the &lt;em&gt;Create&lt;/em&gt; button to add a user.&lt;/p&gt;

&lt;p&gt;Enter &lt;em&gt;user1&lt;/em&gt; as the username and click Save. On the &lt;em&gt;Credentials&lt;/em&gt; tab for user1, set a password and toggle &lt;em&gt;Temporary&lt;/em&gt; to &lt;em&gt;OFF&lt;/em&gt;. Repeat this same process to create another user named &lt;em&gt;user2&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F05-keycloak-users-list.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F05-keycloak-users-list.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next we’ll complete the NiFi SAML configuration to use Keycloak.&lt;/p&gt;

&lt;h3&gt;
  
  
  NiFi SAML Configuration
&lt;/h3&gt;

&lt;p&gt;In nifi.properties you should see the following section of SAML related properties:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# SAML Properties #
nifi.security.user.saml.idp.metadata.url=
nifi.security.user.saml.sp.entity.id=
nifi.security.user.saml.identity.attribute.name=
nifi.security.user.saml.group.attribute.name=
nifi.security.user.saml.metadata.signing.enabled=false
nifi.security.user.saml.request.signing.enabled=false
nifi.security.user.saml.want.assertions.signed=true
nifi.security.user.saml.signature.algorithm=http://www.w3.org/2001/04/xmldsig-more#rsa-sha256
nifi.security.user.saml.signature.digest.algorithm=http://www.w3.org/2001/04/xmlenc#sha256
nifi.security.user.saml.message.logging.enabled=false
nifi.security.user.saml.authentication.expiration=12 hours
nifi.security.user.saml.single.logout.enabled=false
nifi.security.user.saml.http.client.truststore.strategy=JDK
nifi.security.user.saml.http.client.connect.timeout=30 secs
nifi.security.user.saml.http.client.read.timeout=30 secs

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to set the metadata URL to the location of Keycloak’s SAML metadata so that NiFi can retrieve this metadata during start-up. We also need to set the entity id to the value we used for the &lt;em&gt;Client ID&lt;/em&gt; when creating the SAML client in Keycloak.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi.security.user.saml.idp.metadata.url=http://localhost:8180/auth/realms/master/protocol/saml/descriptor
nifi.security.user.saml.sp.entity.id=org:apache:nifi:saml:sp

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
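
&lt;p&gt;Since NiFi fetches this metadata during start-up, it is worth verifying from the NiFi host that the descriptor URL actually resolves before restarting. Something like the following should return an XML SAML metadata document:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://localhost:8180/auth/realms/master/protocol/saml/descriptor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;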



&lt;p&gt;The last thing we need to do is configure NiFi’s authorizers.xml. We will use the file-based providers for this example, so we need to set up an initial user and initial admin that corresponds to one of the users we added to Keycloak. We’ll use &lt;em&gt;user1&lt;/em&gt; here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;userGroupProvider&amp;gt;
    &amp;lt;identifier&amp;gt;file-user-group-provider&amp;lt;/identifier&amp;gt;
    &amp;lt;class&amp;gt;org.apache.nifi.authorization.FileUserGroupProvider&amp;lt;/class&amp;gt;
    &amp;lt;property name="Users File"&amp;gt;./conf/users.xml&amp;lt;/property&amp;gt;
    &amp;lt;property name="Legacy Authorized Users File"&amp;gt;&amp;lt;/property&amp;gt;
    &amp;lt;property name="Initial User Identity 1"&amp;gt;user1&amp;lt;/property&amp;gt;
&amp;lt;/userGroupProvider&amp;gt;

&amp;lt;accessPolicyProvider&amp;gt;
    &amp;lt;identifier&amp;gt;file-access-policy-provider&amp;lt;/identifier&amp;gt;
    &amp;lt;class&amp;gt;org.apache.nifi.authorization.FileAccessPolicyProvider&amp;lt;/class&amp;gt;
    &amp;lt;property name="User Group Provider"&amp;gt;file-user-group-provider&amp;lt;/property&amp;gt;
    &amp;lt;property name="Authorizations File"&amp;gt;./conf/authorizations.xml&amp;lt;/property&amp;gt;
    &amp;lt;property name="Initial Admin Identity"&amp;gt;user1&amp;lt;/property&amp;gt;
    &amp;lt;property name="Legacy Authorized Users File"&amp;gt;&amp;lt;/property&amp;gt;
    &amp;lt;property name="Node Identity 1"&amp;gt;&amp;lt;/property&amp;gt;
    &amp;lt;property name="Node Group"&amp;gt;&amp;lt;/property&amp;gt;
&amp;lt;/accessPolicyProvider&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point we can go ahead and start NiFi…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd nifi-1.13.0
./bin/nifi.sh start

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open your browser and navigate to &lt;a href="https://localhost:8443/nifi" rel="noopener noreferrer"&gt;https://localhost:8443/nifi&lt;/a&gt;, which should redirect you to the Keycloak login page. Enter the credentials for &lt;em&gt;user1&lt;/em&gt; and click &lt;em&gt;Sign In&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F07-nifi-keycloak-signin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F07-nifi-keycloak-signin.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This should send you back to the NiFi UI where you are authenticated as &lt;em&gt;user1&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F08-nifi-ui-user1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F08-nifi-ui-user1.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will notice the toolbar is disabled. This is because the current user doesn’t have any permissions to the root process group. We could create a policy and grant access to user1, but instead of giving just one user access, let’s use group-based policies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Group Setup
&lt;/h3&gt;

&lt;p&gt;In order to create group-based policies, we first need to create groups that our users can be associated with.&lt;/p&gt;

&lt;p&gt;So click &lt;em&gt;Groups&lt;/em&gt; from the menu on the left and then click &lt;em&gt;New&lt;/em&gt; to create a new group. Enter &lt;em&gt;group1&lt;/em&gt; as the group name and click Save.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F09-keycloak-create-group.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F09-keycloak-create-group.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We then need to associate our user to this group. This can be done by navigating to &lt;em&gt;user1&lt;/em&gt;, selecting the &lt;em&gt;Groups&lt;/em&gt; tab, and joining the user to &lt;em&gt;group1&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F10-keycloak-user-group.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F10-keycloak-user-group.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last step is to tell the SAML client how to provide this group information in a SAML response. In order to do this we need to create a mapper in the &lt;em&gt;Mappers&lt;/em&gt; tab of the SAML Client.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;Mapper Type&lt;/em&gt; should be &lt;em&gt;Group List&lt;/em&gt; and the name can be anything you like. Leave the default attribute name of &lt;em&gt;member&lt;/em&gt;, set the &lt;em&gt;Name Format&lt;/em&gt; to &lt;em&gt;Basic&lt;/em&gt;, and toggle the &lt;em&gt;Full Group Path&lt;/em&gt; to &lt;em&gt;OFF&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F11-keycloak-group-mapper.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F11-keycloak-group-mapper.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The result of this mapper is that each SAML response contains an attribute named &lt;em&gt;member&lt;/em&gt; where the value is a list of the groups the user belongs to. We then have to tell NiFi the name of this attribute by configuring the following value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi.security.user.saml.group.attribute.name=member

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’ll need to restart NiFi after making this change.&lt;/p&gt;
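
&lt;p&gt;A restart sketch using the standard scripts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./bin/nifi.sh restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;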

&lt;p&gt;At this point, we still have the disabled toolbar, but now we can fix that. Since we are using the file-based providers for authorization, we have to tell NiFi about the existence of &lt;em&gt;group1&lt;/em&gt;, so that we can create policies using that group.&lt;/p&gt;

&lt;p&gt;We can add a new group by selecting &lt;em&gt;Users&lt;/em&gt; from the menu in the top right, and then clicking the icon to add a new user/group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F12-nifi-create-group.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F12-nifi-create-group.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We don’t need to worry about maintaining the group membership in NiFi because the SAML response is going to tell NiFi that &lt;em&gt;user1&lt;/em&gt; belongs to &lt;em&gt;group1&lt;/em&gt;, so we can leave &lt;em&gt;group1&lt;/em&gt; empty.&lt;/p&gt;

&lt;p&gt;Now we can create policies on the root process group for &lt;em&gt;“view the component”&lt;/em&gt; and &lt;em&gt;“modify the component”&lt;/em&gt;. To create a policy on the root group, make sure nothing else on the canvas is selected and click the key icon in the palette on the left.&lt;/p&gt;

&lt;p&gt;The policies should look like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F13-nifi-root-pg-view.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F13-nifi-root-pg-view.png"&gt;&lt;/a&gt; &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F14-nifi-root-pg-modify.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F14-nifi-root-pg-modify.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order to see these policies take effect, we need to re-authenticate to NiFi via Keycloak so that NiFi gets a new SAML response with the &lt;em&gt;member&lt;/em&gt; attribute.&lt;/p&gt;

&lt;p&gt;So click the &lt;em&gt;Logout&lt;/em&gt; link, which will bring you to the logout landing page, and then click &lt;em&gt;Home&lt;/em&gt;, which will start the login sequence again.&lt;/p&gt;

&lt;p&gt;Since you should already be logged into Keycloak, this should immediately go back and forth between Keycloak and NiFi, and you should now see the toolbar enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F15-nifi-authorized.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbbende.github.io%2Fassets%2Fimages%2Fnifi-saml%2F15-nifi-authorized.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s all for this post, happy SAML’ing!&lt;/p&gt;

</description>
      <category>nifi</category>
      <category>2021</category>
    </item>
    <item>
      <title>Apache NiFi Secure Cluster Setup</title>
      <dc:creator>Bryan Bende</dc:creator>
      <pubDate>Tue, 23 Oct 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/bbende/apache-nifi-secure-cluster-setup-5gln</link>
      <guid>https://dev.to/bbende/apache-nifi-secure-cluster-setup-5gln</guid>
      <description>&lt;p&gt;Setting up a secure cluster continues to generate a lot of questions, so even though several posts have already coveredthis topic, I thought I’d document the steps I performed while verifying the Apache NiFi 1.8.0 Release Candidate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Initial Setup
&lt;/h3&gt;

&lt;p&gt;The simplest cluster you can setup for local testing is a two node cluster, with embedded ZooKeeper running on the first node.&lt;/p&gt;

&lt;p&gt;After building the source for 1.8.0-RC3, unzip the binary distribution and create two identical directories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;unzip nifi-1.8.0-bin.zip
mv nifi-1.8.0 nifi-1
cp -R nifi-1 nifi-2

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The resulting directory structure should look like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cluster
├── nifi-1
│   ├── LICENSE
│   ├── NOTICE
│   ├── README
│   ├── bin
│   ├── conf
│   │   ├── authorizers.xml
│   │   ├── bootstrap-notification-services.xml
│   │   ├── bootstrap.conf
│   │   ├── logback.xml
│   │   ├── login-identity-providers.xml
│   │   ├── nifi.properties
│   │   ├── state-management.xml
│   │   └── zookeeper.properties
│   ├── docs
│   └── lib
└── nifi-2
    ├── LICENSE
    ├── NOTICE
    ├── README
    ├── bin
    ├── conf
    │   ├── authorizers.xml
    │   ├── bootstrap-notification-services.xml
    │   ├── bootstrap.conf
    │   ├── logback.xml
    │   ├── login-identity-providers.xml
    │   ├── nifi.properties
    │   ├── state-management.xml
    │   └── zookeeper.properties
    ├── docs
    └── lib

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure nifi.properties
&lt;/h3&gt;

&lt;p&gt;Edit &lt;em&gt;nifi-1/conf/nifi.properties&lt;/em&gt; and set the following properties:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi.state.management.embedded.zookeeper.start=true

nifi.remote.input.secure=true

nifi.web.http.port=
nifi.web.https.host=localhost
nifi.web.https.port=8443

nifi.security.keystore=/path/to/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=yourpassword
nifi.security.keyPasswd=yourpassword
nifi.security.truststore=/path/to/truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=yourpassword

nifi.cluster.protocol.is.secure=true

nifi.cluster.is.node=true
nifi.cluster.node.protocol.port=8088
nifi.cluster.flow.election.max.candidates=2

nifi.zookeeper.connect.string=localhost:2181

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Edit &lt;em&gt;nifi-2/conf/nifi.properties&lt;/em&gt; and set the following properties:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi.remote.input.secure=true

nifi.web.http.port=
nifi.web.https.host=localhost
nifi.web.https.port=7443

nifi.security.keystore=/path/to/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=yourpassword
nifi.security.keyPasswd=yourpassword
nifi.security.truststore=/path/to/truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=yourpassword

nifi.cluster.protocol.is.secure=true

nifi.cluster.is.node=true
nifi.cluster.node.protocol.port=7088
nifi.cluster.flow.election.max.candidates=2
nifi.cluster.load.balance.port=6343

nifi.zookeeper.connect.string=localhost:2181

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;NOTE: For nifi-1 I left the default value for nifi.cluster.load.balance.port, and since we are running both nodes on the same host, we need to set a different value for nifi-2.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure authorizers.xml
&lt;/h3&gt;

&lt;p&gt;The configuration of authorizers.xml should be the same for both nodes, so edit &lt;em&gt;nifi-1/conf/authorizers.xml&lt;/em&gt; and &lt;em&gt;nifi-2/conf/authorizers.xml&lt;/em&gt; and do the following…&lt;/p&gt;

&lt;p&gt;Modify the userGroupProvider to declare the initial users for the initial admin and for the cluster nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;userGroupProvider&amp;gt;
    &amp;lt;identifier&amp;gt;file-user-group-provider&amp;lt;/identifier&amp;gt;
    &amp;lt;class&amp;gt;org.apache.nifi.authorization.FileUserGroupProvider&amp;lt;/class&amp;gt;
    &amp;lt;property name="Users File"&amp;gt;./conf/users.xml&amp;lt;/property&amp;gt;
    &amp;lt;property name="Legacy Authorized Users File"&amp;gt;&amp;lt;/property&amp;gt;
    &amp;lt;property name="Initial User Identity 1"&amp;gt;CN=bbende, OU=ApacheNiFi&amp;lt;/property&amp;gt;
    &amp;lt;property name="Initial User Identity 2"&amp;gt;CN=localhost, OU=NIFI&amp;lt;/property&amp;gt;
&amp;lt;/userGroupProvider&amp;gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;NOTE: In this case, since both nodes are running on the same host and using the same keystore, there only needs to be one user representing both nodes, but in a real setup there would be multiple node identities.&lt;/p&gt;

&lt;p&gt;NOTE: The user identities are case-sensitive and white-space sensitive, so make sure the identities are entered exactly as they will come across to NiFi when making a request.&lt;/p&gt;

&lt;p&gt;Modify the accessPolicyProvider to declare which user is the initial admin and which are the node identities:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;accessPolicyProvider&amp;gt;
    &amp;lt;identifier&amp;gt;file-access-policy-provider&amp;lt;/identifier&amp;gt;
    &amp;lt;class&amp;gt;org.apache.nifi.authorization.FileAccessPolicyProvider&amp;lt;/class&amp;gt;
    &amp;lt;property name="User Group Provider"&amp;gt;file-user-group-provider&amp;lt;/property&amp;gt;
    &amp;lt;property name="Authorizations File"&amp;gt;./conf/authorizations.xml&amp;lt;/property&amp;gt;
    &amp;lt;property name="Initial Admin Identity"&amp;gt;CN=bbende, OU=ApacheNiFi&amp;lt;/property&amp;gt;
    &amp;lt;property name="Legacy Authorized Users File"&amp;gt;&amp;lt;/property&amp;gt;
    &amp;lt;property name="Node Identity 1"&amp;gt;CN=localhost, OU=NIFI&amp;lt;/property&amp;gt;
    &amp;lt;property name="Node Group"&amp;gt;&amp;lt;/property&amp;gt;
&amp;lt;/accessPolicyProvider&amp;gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure state-management.xml
&lt;/h3&gt;

&lt;p&gt;The configuration of state-management.xml should be the same for both nodes, so edit &lt;em&gt;nifi-1/conf/state-management.xml&lt;/em&gt; and &lt;em&gt;nifi-2/conf/state-management.xml&lt;/em&gt; and do the following…&lt;/p&gt;

&lt;p&gt;Modify the cluster state provider to specify the ZooKeeper connect string:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;cluster-provider&amp;gt;
    &amp;lt;id&amp;gt;zk-provider&amp;lt;/id&amp;gt;
    &amp;lt;class&amp;gt;org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider&amp;lt;/class&amp;gt;
    &amp;lt;property name="Connect String"&amp;gt;localhost:2181&amp;lt;/property&amp;gt;
    &amp;lt;property name="Root Node"&amp;gt;/nifi&amp;lt;/property&amp;gt;
    &amp;lt;property name="Session Timeout"&amp;gt;10 seconds&amp;lt;/property&amp;gt;
    &amp;lt;property name="Access Control"&amp;gt;Open&amp;lt;/property&amp;gt;
&amp;lt;/cluster-provider&amp;gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure zookeeper.properties
&lt;/h3&gt;

&lt;p&gt;The configuration of zookeeper.properties should be the same for both nodes, so edit &lt;em&gt;nifi-1/conf/zookeeper.properties&lt;/em&gt; and &lt;em&gt;nifi-2/conf/zookeeper.properties&lt;/em&gt; and do the following…&lt;/p&gt;

&lt;p&gt;Specify the servers that are part of the ZooKeeper ensemble:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server.1=localhost:2888:3888

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Create ZooKeeper myid file
&lt;/h3&gt;

&lt;p&gt;Since embedded ZooKeeper will only be started on the first node, we only need to do the following for nifi-1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir nifi-1/state
mkdir nifi-1/state/zookeeper
echo 1 &amp;gt; nifi-1/state/zookeeper/myid

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Start Cluster
&lt;/h3&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./nifi-1/bin/nifi.sh start &amp;amp;&amp;amp; ./nifi-2/bin/nifi.sh start

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;At this point, assuming you have the client cert (p12) for the initial admin in your browser (“CN=bbende, OU=ApacheNiFi”), you can access your cluster at &lt;a href="https://localhost:8443/nifi"&gt;https://localhost:8443/nifi&lt;/a&gt; or &lt;a href="https://localhost:7443/nifi"&gt;https://localhost:7443/nifi&lt;/a&gt;.&lt;/p&gt;
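
&lt;p&gt;If the UI doesn’t come up, the application log on either node will show whether the two nodes connected and elected a cluster coordinator:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tail -f nifi-1/logs/nifi-app.log
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;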

&lt;h3&gt;
  
  
  Common Issues
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;1. Embedded ZooKeeper fails to create directory&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt; java.io.IOException: Unable to create data directory ./state/zookeeper/version-2
     at org.apache.zookeeper.server.persistence.FileTxnSnapLog.&amp;lt;init&amp;gt;(FileTxnSnapLog.java:85)
     at org.apache.nifi.controller.state.server.ZooKeeperStateServer.startStandalone(ZooKeeperStateServer.java:85)
     ... 51 common frames omitted

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This happens frequently with embedded ZooKeeper, and usually restarting NiFi will resolve it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2. Mistake in initial admin identity or node identity&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you made a typo in any of the identities and need to edit authorizers.xml, you MUST delete conf/users.xml and conf/authorizations.xml from each node before restarting.&lt;/p&gt;

&lt;p&gt;The reason is that the initial admin and node identities are only used when NiFi starts with no other users, groups, or policies. If you don’t delete the users and authorization files, then your changes will be ignored.&lt;/p&gt;
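&lt;p&gt;The reset can be sketched as a couple of commands (paths assume the two-node layout used in this post):&lt;br&gt;
&lt;/p&gt;

```shell
# Sketch: remove the generated authorization files on both nodes
# so the corrected authorizers.xml is re-applied at startup.
# The nifi-1/nifi-2 paths follow this post's layout.
for n in nifi-1 nifi-2; do
  rm -f "$n/conf/users.xml" "$n/conf/authorizations.xml"
done
```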

</description>
      <category>nifi</category>
      <category>2018</category>
    </item>
    <item>
      <title>Apache NiFi Registry 0.2.0</title>
      <dc:creator>Bryan Bende</dc:creator>
      <pubDate>Wed, 20 Jun 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/bbende/apache-nifi-registry-0-2-0-134i</link>
      <guid>https://dev.to/bbende/apache-nifi-registry-0-2-0-134i</guid>
      <description>&lt;p&gt;The Apache NiFi community recently completed the second release of NiFi Registry (0.2.0). This post will introduce someof the new features, including git-based flow persistence, a more configurable metadata database, and event hooks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Background
&lt;/h3&gt;

&lt;p&gt;Before jumping into the new features, let’s review how NiFi Registry works behind the scenes.&lt;/p&gt;

&lt;p&gt;The user interface is a single page webapp built with Angular, and communicates with the server via the NiFi Registry REST API. Behind the REST API is a service layer where the primary business logic is implemented, and the service layer interacts with the metadata database and flow persistence provider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QtHV7O3P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-registry-0_2_0/01-architecture-original.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QtHV7O3P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-registry-0_2_0/01-architecture-original.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;metadata database&lt;/em&gt; stores information about buckets and versioned items, such as identifiers, names, descriptions, and commit comments, as well as which items belong to which bucket. The initial release utilized an embedded H2 database that was primarily hidden from end users, except for configuring the directory location.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;flow persistence provider&lt;/em&gt; stores the content of each versioned flow and is considered a public extension point. This means custom implementations can be provided by implementing the FlowPersistenceProvider interface. The initial release provided a file-system implementation of the flow persistence provider which used the local file-system for persistence.&lt;/p&gt;

&lt;p&gt;The overall idea is that the metadata database contains information across all types of versioned items (which may eventually be more than just flows), and each type of item may have its own persistence mechanism for the content of the item.&lt;/p&gt;

&lt;h3&gt;
  
  
  Git FlowPersistenceProvider
&lt;/h3&gt;

&lt;p&gt;The 0.2.0 release provides a new git-based implementation of the FlowPersistenceProvider utilizing the JGit library. This means the content of versioned flows can now be stored in a git repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JRnyXeGi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-registry-0_2_0/02-architecture-git.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JRnyXeGi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-registry-0_2_0/02-architecture-git.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The git provider can be configured in providers.xml via the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;flowPersistenceProvider&amp;gt;
    &amp;lt;class&amp;gt;
      org.apache.nifi.registry.provider.flow.git.GitFlowPersistenceProvider
    &amp;lt;/class&amp;gt;
    &amp;lt;property name="Flow Storage Directory"&amp;gt;./flow_storage&amp;lt;/property&amp;gt;
    &amp;lt;property name="Remote To Push"&amp;gt;&amp;lt;/property&amp;gt;
    &amp;lt;property name="Remote Access User"&amp;gt;&amp;lt;/property&amp;gt;
    &amp;lt;property name="Remote Access Password"&amp;gt;&amp;lt;/property&amp;gt;
&amp;lt;/flowPersistenceProvider&amp;gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;“Flow Storage Directory”&lt;/em&gt; property specifies a local directory that is expected to already be a git repository. This could be done by creating a new directory and running “git init”, or by cloning an existing remote repository.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;“Remote To Push”&lt;/em&gt; property specifies the name of the remote to automatically push to. This property is optional, and if not specified then commits will remain in the local repository, unless a push is performed manually.&lt;/p&gt;

&lt;p&gt;NOTE: In order to use GitHub, do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new GitHub repo and clone it locally&lt;/li&gt;
&lt;li&gt;Set “Flow Storage Directory” to the directory where the repo was cloned&lt;/li&gt;
&lt;li&gt;Go to GitHub’s “Developer Settings” for your account&lt;/li&gt;
&lt;li&gt;Create a new “Personal Access Token” for your NiFi Registry instance&lt;/li&gt;
&lt;li&gt;Set “Remote Access User” to your GitHub username&lt;/li&gt;
&lt;li&gt;Set “Remote Access Password” to the access token&lt;/li&gt;
&lt;/ul&gt;
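&lt;p&gt;Putting those steps together, a filled-in configuration might look like the following (the storage directory, remote name, username, and token value are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;flowPersistenceProvider&amp;gt;
    &amp;lt;class&amp;gt;
      org.apache.nifi.registry.provider.flow.git.GitFlowPersistenceProvider
    &amp;lt;/class&amp;gt;
    &amp;lt;property name="Flow Storage Directory"&amp;gt;/path/to/cloned-repo&amp;lt;/property&amp;gt;
    &amp;lt;property name="Remote To Push"&amp;gt;origin&amp;lt;/property&amp;gt;
    &amp;lt;property name="Remote Access User"&amp;gt;your-github-username&amp;lt;/property&amp;gt;
    &amp;lt;property name="Remote Access Password"&amp;gt;your-personal-access-token&amp;lt;/property&amp;gt;
&amp;lt;/flowPersistenceProvider&amp;gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;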

&lt;h3&gt;
  
  
  External Metadata Database
&lt;/h3&gt;

&lt;p&gt;The 0.2.0 release provides new configuration that allows the metadata database to utilize an external database. Currently Postgres is the only supported database besides H2, although others may work with additional testing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UtuYtBQK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-registry-0_2_0/03-architecture-postgres.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UtuYtBQK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-registry-0_2_0/03-architecture-postgres.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The 0.1.0 release originally had two properties in nifi-registry.properties related to the H2 database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi.registry.db.directory=
nifi.registry.db.url.append=

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In order to make the database more configurable, the 0.2.0 release adds the following properties:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi.registry.db.url=
nifi.registry.db.driver.class=
nifi.registry.db.driver.directory=
nifi.registry.db.username=
nifi.registry.db.password=

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you are starting NiFi Registry for the first time, then the properties will default to use H2 and you don’t need to do anything unless you want to change the configuration to use a Postgres database.&lt;/p&gt;
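&lt;p&gt;As an illustration, a hypothetical Postgres configuration might look like the following (the database name, credentials, and driver directory are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi.registry.db.url=jdbc:postgresql://localhost:5432/nifi_registry
nifi.registry.db.driver.class=org.postgresql.Driver
nifi.registry.db.driver.directory=/path/to/postgres/driver
nifi.registry.db.username=nifireg
nifi.registry.db.password=changeme

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;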

&lt;p&gt;If you are upgrading an existing NiFi Registry, it will check whether the old nifi.registry.db.directory property is populated and whether the directory contains an existing H2 database. If an existing database is found, and if the target database is empty, then data will automatically be migrated from the existing database to the new database.&lt;/p&gt;

&lt;p&gt;NOTE: This essentially offers a one-time migration from the existing H2 database to a new H2 database, or a new Postgres database.&lt;/p&gt;

&lt;p&gt;For specifics about each property please refer to the &lt;a href="https://nifi.apache.org/docs/nifi-registry-docs/index.html"&gt;NiFi Registry Administration Guide&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Event Hooks
&lt;/h3&gt;

&lt;p&gt;An event hook is a new extension point that allows custom code to be triggered when application events occur.&lt;/p&gt;

&lt;p&gt;In order to implement an event hook, the following Java interface must be implemented:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface EventHookProvider extends Provider {

    void handle(Event event) throws EventHookException;

}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The Event object will contain an EventType, along with a list of field/value pairs that are specific to the event. At the time of writing this, the possible event types are the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE_BUCKET
CREATE_FLOW
CREATE_FLOW_VERSION
UPDATE_BUCKET
UPDATE_FLOW
DELETE_BUCKET
DELETE_FLOW

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The list of event types and fields can be found in the code in the &lt;a href="https://github.com/apache/nifi-registry/blob/master/nifi-registry-provider-api/src/main/java/org/apache/nifi/registry/hook/EventType.java"&gt;EventType class&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The 0.2.0 release comes with two provided event hooks, &lt;em&gt;LoggingEventHookProvider&lt;/em&gt; and &lt;em&gt;ScriptEventHookProvider&lt;/em&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  LoggingEventHookProvider
&lt;/h4&gt;

&lt;p&gt;The &lt;em&gt;LoggingEventHookProvider&lt;/em&gt; logs a string representation of each event using an SLF4J logger. The logger can be configured via NiFi Registry’s logback.xml, which by default contains an appender that writes to a log file named nifi-registry-event.log in the logs directory.&lt;/p&gt;

&lt;p&gt;To enable this hook, simply uncomment the configuration in providers.xml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;eventHookProvider&amp;gt;
    &amp;lt;class&amp;gt;
      org.apache.nifi.registry.provider.hook.LoggingEventHookProvider
    &amp;lt;/class&amp;gt;
&amp;lt;/eventHookProvider&amp;gt;  

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After creating a bucket and starting version control on a process group, the event log should show something like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2018-06-19 11:30:46,174 ## CREATE_BUCKET [BUCKET_ID=5d81dc5e-79e1-4387-8022-79e505f5e3a0, USER=anonymous]
2018-06-19 11:32:16,503 ## CREATE_FLOW [BUCKET_ID=5d81dc5e-79e1-4387-8022-79e505f5e3a0, FLOW_ID=a89bf6b7-41f9-4a96-86d4-0aeb3c3c25be, USER=anonymous]
2018-06-19 11:32:16,610 ## CREATE_FLOW_VERSION [BUCKET_ID=5d81dc5e-79e1-4387-8022-79e505f5e3a0, FLOW_ID=a89bf6b7-41f9-4a96-86d4-0aeb3c3c25be, VERSION=1, USER=anonymous, COMMENT=v1]

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  ScriptEventHookProvider
&lt;/h4&gt;

&lt;p&gt;The &lt;em&gt;ScriptEventHookProvider&lt;/em&gt; allows a custom script to be executed for each event. This can be used to handle many situations without having to formally implement an event hook in Java.&lt;/p&gt;

&lt;p&gt;The configuration for this event hook looks like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;eventHookProvider&amp;gt;
    &amp;lt;class&amp;gt;
      org.apache.nifi.registry.provider.hook.ScriptEventHookProvider
    &amp;lt;/class&amp;gt;
    &amp;lt;property name="Script Path"&amp;gt;&amp;lt;/property&amp;gt;
    &amp;lt;property name="Working Directory"&amp;gt;&amp;lt;/property&amp;gt;
&amp;lt;/eventHookProvider&amp;gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;“Script Path”&lt;/em&gt; property is the full path to a script that will be executed for each event. The arguments to the script will be the event fields in the order they are specified for the given event type. As an example, let’s say we created a script called logging-hook.sh with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
echo $@ &amp;gt;&amp;gt; /tmp/nifi-registry-event.log

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For each event, this would echo the arguments to the script and append them to /tmp/nifi-registry-event.log. After creating a bucket and starting version control on a process group, the log should show something like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE_BUCKET feeb0fbe-5d7e-4363-b58b-142fa80775e1 anonymous
CREATE_FLOW feeb0fbe-5d7e-4363-b58b-142fa80775e1 1a0b614c-3d0f-471a-b6b1-645e6091596d anonymous
CREATE_FLOW_VERSION feeb0fbe-5d7e-4363-b58b-142fa80775e1 1a0b614c-3d0f-471a-b6b1-645e6091596d 1 anonymous v1

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
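&lt;p&gt;Since the event type arrives as the first argument, a script can branch on it. A minimal sketch that only reacts to new flow versions (the handler name and message format are made up for illustration):&lt;br&gt;
&lt;/p&gt;

```shell
# Sketch: branch on the event type passed as the first argument.
# Argument order follows the fields shown in the log output above:
# event type, bucket id, flow id, version, user, comment.
handle_event() {
  if [ "$1" = "CREATE_FLOW_VERSION" ]; then
    echo "new version: bucket=$2 flow=$3 version=$4 user=$5"
  fi
}

handle_event CREATE_FLOW_VERSION feeb0fbe 1a0b614c 2 anonymous v2
```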



&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;The new features in the 0.2.0 release provide additional options for the metadata database and flow persistence provider, allowing data to be pushed off the server where the application is running, and on to external locations. In addition, event hooks provide an easy way to initiate custom workflows based on various events.&lt;/p&gt;

</description>
      <category>nifi</category>
      <category>2018</category>
    </item>
    <item>
      <title>Apache NiFi - Secure Keytab Access</title>
      <dc:creator>Bryan Bende</dc:creator>
      <pubDate>Mon, 09 Apr 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/bbende/apache-nifi-secure-keytab-access-1d25</link>
      <guid>https://dev.to/bbende/apache-nifi-secure-keytab-access-1d25</guid>
      <description>&lt;p&gt;In a multi-tenant environment, a typical approach is to create a process group per team and apply securitypolicies that restrict access, such that a given team only has access to it’s own group.&lt;/p&gt;

&lt;p&gt;Within a process group, a team may need to interact with an external system that is secured with Kerberos. As an example, let’s say there is a Kerberized HDFS cluster, and there are two teams that each have a keytab to access this cluster.&lt;/p&gt;

&lt;p&gt;In order to create a PutHDFS processor that sends data to the Kerberized HDFS cluster, the processor must be configured with a principal and keytab, and the keytab must be on a filesystem that is accessible to the NiFi JVM. In addition, the keytab must be readable by the operating system user that launched the NiFi JVM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rhH6Exou--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-keytab-isolation/01-nifi-keytab-setup-before.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rhH6Exou--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-keytab-isolation/01-nifi-keytab-setup-before.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The problem here is that the Keytab property is a free-form text field, so there is no way to stop team 1 from entering team 2’s keytab.&lt;/p&gt;

&lt;p&gt;In order to solve this problem, the 1.6.0 release introduced a Keytab Controller Service, along with more granular restricted components, which together can be used to properly secure access to keytabs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Keytab Controller Service
&lt;/h3&gt;

&lt;p&gt;The first step to securing keytab access was to create a controller service where the principal and keytab could be defined, and then update all processors that had the free-form properties to also have a property for a KeytabService.&lt;/p&gt;

&lt;p&gt;NOTE: The free-form properties were not removed in this release, in order to give everyone a chance to migrate to the new approach, but we’ll discuss this more later.&lt;/p&gt;

&lt;p&gt;The benefit of using a controller service is that we can restrict which users have the ability to use the service via security policies. In our example above, an admin user can create a Keytab Service for team 1 that points to team 1’s keytab, and then create a policy that gives only team 1 access to this service. The admin user can then do the same thing for team 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_pHi_m2T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-keytab-isolation/02-nifi-keytab-setup-after.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_pHi_m2T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-keytab-isolation/02-nifi-keytab-setup-after.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This gets us very close to what we want, BUT &lt;em&gt;what if someone from team 1 ignores the service that was created for them, and just creates another Keytab Service pointing to team 2’s keytab&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;We need a way to also restrict who can create Keytab Controller services, so that only the admin users can create them, and not any of the users in team 1 or team 2.&lt;/p&gt;

&lt;h3&gt;
  
  
  Granular Restricted Components
&lt;/h3&gt;

&lt;p&gt;The NiFi 1.0.0 release introduced the concept of “restricted components”. The idea was that some components may be able to do things that we don’t want all users to do, such as access the local file system, or execute code. These types of components required a user to have an additional “restricted” permission in order to create the component.&lt;/p&gt;

&lt;p&gt;The 1.6.0 release introduced more granular categories of restricted components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;read-filesystem&lt;/li&gt;
&lt;li&gt;write-filesystem&lt;/li&gt;
&lt;li&gt;execute-code&lt;/li&gt;
&lt;li&gt;access-keytab&lt;/li&gt;
&lt;li&gt;export-nifi-details&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The restricted permissions are controlled from the global policies menu in the top-right:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SRSHKpqU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-keytab-isolation/03-nifi-restricted-permissions.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRSHKpqU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://bbende.github.io/assets/images/nifi-keytab-isolation/03-nifi-restricted-permissions.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our previous example, if we only give the admin users the “access-keytab” permission, then the users in team 1 and team 2 won’t be able to create a Keytab Controller service, thus forcing them to use the Keytab Services they were given permission to.&lt;/p&gt;

&lt;h3&gt;
  
  
  Preventing Free-form Keytab Properties
&lt;/h3&gt;

&lt;p&gt;The final piece of the puzzle is to prevent the use of the old free-form keytab properties that were left around for backwards compatibility. This can be done by configuring an environment variable in nifi-env.sh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export NIFI_ALLOW_EXPLICIT_KEYTAB=true

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Setting this value to &lt;em&gt;false&lt;/em&gt; will produce a validation error in any component where the free-form keytab property is entered, which means the component can’t be started unless it uses a Keytab Controller service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;In summary…&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only admin users have the “access-keytab” permission&lt;/li&gt;
&lt;li&gt;Admin users create Keytab Services and assign appropriate permissions&lt;/li&gt;
&lt;li&gt;Regular users can use the Keytab Services they were given access to, but can’t create new ones&lt;/li&gt;
&lt;li&gt;Legacy free-form properties are disallowed by setting the NIFI_ALLOW_EXPLICIT_KEYTAB environment variable to false&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>nifi</category>
      <category>kerberos</category>
      <category>2018</category>
    </item>
  </channel>
</rss>
