<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Itzik Kotler</title>
    <description>The latest articles on DEV Community by Itzik Kotler (@itzikkotler).</description>
    <link>https://dev.to/itzikkotler</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F384090%2Fee1fa4fe-787b-43e4-ab5b-6e792a5e0e74.jpg</url>
      <title>DEV Community: Itzik Kotler</title>
      <link>https://dev.to/itzikkotler</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/itzikkotler"/>
    <language>en</language>
    <item>
      <title>Useful faas-cli One-Liners</title>
      <dc:creator>Itzik Kotler</dc:creator>
      <pubDate>Fri, 10 Jul 2020 13:20:21 +0000</pubDate>
      <link>https://dev.to/itzikkotler/useful-faas-cli-one-liners-3j16</link>
      <guid>https://dev.to/itzikkotler/useful-faas-cli-one-liners-3j16</guid>
      <description>&lt;p&gt;&lt;a href="https://www.openfaas.com/"&gt;OpenFaaS&lt;/a&gt; is a framework for building serverless functions with Docker and Kubernetes. Their goal, in their own words, is to make it easy for developers to deploy event-driven functions and microservices to Kubernetes without repetitive, boiler-plate coding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/openfaas/faas-cli"&gt;faas-cli&lt;/a&gt; is the official CLI for OpenFaaS. In other words it is a command-line tool to help manage, prepare, and invoke functions.&lt;/p&gt;

&lt;p&gt;In this post we’re going to cover three tips on how to use &lt;code&gt;faas-cli&lt;/code&gt; to improve (and automate) our OpenFaaS workflows.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Handle Multiple Functions from Multiple Files
&lt;/h4&gt;

&lt;p&gt;When you start working with &lt;code&gt;faas-cli&lt;/code&gt; it’s tempting to do this a lot:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;faas-cli new --lang= ...&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This creates a new function (based on a template), but it also puts the function in its own YAML file. If we want to leverage the &lt;code&gt;faas-cli up&lt;/code&gt; command, we need a way to run it across multiple YAML files. Here’s a one-liner that does it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls *.yml | awk '{ print $1 }' | xargs -I {} faas-cli up -f {}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
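&lt;p&gt;One caveat with the pipeline above: parsing &lt;code&gt;ls&lt;/code&gt; output breaks on filenames that contain spaces. If that can happen in your project, a more robust variant (sketched here as a dry run via &lt;code&gt;echo&lt;/code&gt;; drop the &lt;code&gt;echo&lt;/code&gt; to actually run it) uses &lt;code&gt;find -print0&lt;/code&gt;:&lt;/p&gt;

```shell
# NUL-delimited filenames survive spaces; `echo` makes this a dry run.
find . -maxdepth 1 -name '*.yml' -print0 |
  xargs -0 -I {} echo faas-cli up -f {}
```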



&lt;p&gt;This will build, push, and deploy all of our functions across all of our YAML files. To remove all of those functions instead, just change &lt;code&gt;faas-cli up&lt;/code&gt; to &lt;code&gt;faas-cli remove&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls *.yml | awk '{ print $1 }' | xargs -I {} faas-cli remove -f {}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Another way to remove all the functions (if you don’t have the YAML files) is this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;faas-cli list | tail -n +2 | awk '{ print $1 }' | xargs -I {} faas-cli remove {}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This enumerates the output of &lt;code&gt;faas-cli list&lt;/code&gt; (the &lt;code&gt;tail -n +2&lt;/code&gt; skips the header line) instead of operating on the functions in the YAML files.&lt;/p&gt;
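&lt;p&gt;If you want to see what such a pipeline will do before pointing it at a live gateway, you can feed it canned output and have &lt;code&gt;xargs&lt;/code&gt; echo each command instead of executing it. The function names below are made up for the sketch:&lt;/p&gt;

```shell
# Dry run: printf stands in for `faas-cli list` (header plus two rows),
# and echo prints each remove command instead of executing it.
printf 'Function\tInvocations\tReplicas\nhello-py\t3\t1\nresize-img\t7\t1\n' |
  tail -n +2 |
  awk '{ print $1 }' |
  xargs -I {} echo faas-cli remove {}
# prints:
#   faas-cli remove hello-py
#   faas-cli remove resize-img
```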

&lt;h4&gt;
  
  
  2. Merge Multiple Functions from Multiple Files to a Single YAML Stack File
&lt;/h4&gt;

&lt;p&gt;Consolidating all your functions into one YAML file has many benefits, and you can do it easily with the &lt;a href="https://github.com/mikefarah/yq"&gt;yq&lt;/a&gt; utility (note: the &lt;code&gt;merge&lt;/code&gt; subcommand below is yq v3 syntax; yq v4 replaced it with &lt;code&gt;eval-all&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yq merge *.yml &amp;gt; stack.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will simplify your workflow, reduce your need for one-liners, and … it has more benefits! So keep reading for the next tip :-)&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Use Multiple Cores to Build &amp;amp; Deploy your Single YAML Stack File
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;faas-cli&lt;/code&gt; lets you parallelize the build, push, and deploy process via the &lt;code&gt;--parallel&lt;/code&gt; command line option, which takes an integer defining how many concurrent build actions to perform. A good starting point is to pass it the output of &lt;a href="https://www.gnu.org/software/coreutils/manual/html_node/nproc-invocation.html"&gt;nproc&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;faas-cli up -f stack.yml --parallel `nproc`
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;nproc&lt;/code&gt; utility prints the number of processing units available on the computer you are running the command on, so ideally you can make full use of your CPUs during the build. &lt;/p&gt;
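&lt;p&gt;One portability note: &lt;code&gt;nproc&lt;/code&gt; is part of GNU coreutils, so it may be missing on macOS. A small sketch that falls back to &lt;code&gt;sysctl&lt;/code&gt; there (and to 1 as a last resort):&lt;/p&gt;

```shell
# Use nproc when available; otherwise try BSD/macOS sysctl, else assume 1.
CORES=$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 1)
echo "building with $CORES parallel jobs"
# faas-cli up -f stack.yml --parallel "$CORES"
```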

&lt;p&gt;And of course, removing is now as easy as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;faas-cli remove -f stack.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  What’s next?
&lt;/h4&gt;

&lt;p&gt;Go ahead and apply some of the tips above to your current OpenFaaS projects and workflows. If you are looking for more tips, check out this &lt;a href="https://www.openfaas.com/blog/five-cli-tips/"&gt;post&lt;/a&gt; by Alex Ellis. Do you have a favorite &lt;code&gt;faas-cli&lt;/code&gt; trick? Feel free to leave it in the comments below.&lt;/p&gt;

&lt;p&gt;Big thanks to &lt;a href="https://dev.to/alexellis"&gt;Alex Ellis&lt;/a&gt; for reviewing this post.&lt;/p&gt;

</description>
      <category>openfaas</category>
      <category>faas</category>
      <category>cli</category>
      <category>shell</category>
    </item>
    <item>
      <title>Running OpenFaaS and MongoDB on Raspbian 64bit</title>
      <dc:creator>Itzik Kotler</dc:creator>
      <pubDate>Tue, 09 Jun 2020 13:17:54 +0000</pubDate>
      <link>https://dev.to/itzikkotler/running-openfaas-and-mongodb-on-raspbian-64bit-j75</link>
      <guid>https://dev.to/itzikkotler/running-openfaas-and-mongodb-on-raspbian-64bit-j75</guid>
      <description>&lt;p&gt;In this tutorial we’ll get a Lightweight kubernetes running on Raspberry Pi 4B/3B/3B+ as ARM64 and set up OpenFaaS with MongoDB.&lt;/p&gt;

&lt;h4&gt;
  
  
  Raspberry Pi OS (64 bit) vs Raspbian 64bit:
&lt;/h4&gt;

&lt;p&gt;To avoid confusion, this article will NOT talk about the &lt;a href="https://www.raspberrypi.org/forums/viewtopic.php?f=117&amp;amp;t=275370"&gt;Raspberry Pi 64bit OS&lt;/a&gt; that is currently in beta. Instead, we will focus on Raspbian, which is a 32-bit OS that is ALSO capable of running 64-bit applications. I assume plenty of people still run Raspbian and will keep running it for the foreseeable future.&lt;/p&gt;

&lt;p&gt;We’ll use the following software:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;raspbian-nspawn-64&lt;/code&gt; package&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;k3sup&lt;/code&gt; tool&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;arkade&lt;/code&gt; tool&lt;/li&gt;
&lt;li&gt;OpenFaaS Platform&lt;/li&gt;
&lt;li&gt;MongoDB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My hardware setup: a cluster of 10 Raspberry Pi 4B boards with 4GB of RAM each.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step #1: Enabling ARM64 &amp;amp; 64bit Userland (aka. 64bit Shell):
&lt;/h4&gt;

&lt;p&gt;The default OS of the Raspberry Pi (aka. Raspbian) is, as of now, completely 32-bit; this is often abbreviated as armv7l or armhf. However, the Raspberry Pi’s CPU supports 64-bit starting from model 3B. In this step we will enable the 64-bit kernel and set up a 64-bit shell.&lt;/p&gt;

&lt;p&gt;Thanks to &lt;a href="https://github.com/sakaki-"&gt;sakaki&lt;/a&gt;’s great work, this is as easy as:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y raspbian-nspawn-64&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can read more about it here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sakaki-/raspbian-nspawn-64"&gt;https://github.com/sakaki-/raspbian-nspawn-64&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Make sure you select ‘Yes’ in the dialog and reboot your Pi before proceeding to the next step.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step #2: Using Docker/Containers Inside The 64bit Userland:
&lt;/h4&gt;

&lt;p&gt;Since we’re using containers, we need to make some adjustments in our 64-bit userland.&lt;/p&gt;

&lt;p&gt;The wiki of the &lt;a href="https://github.com/sakaki-/raspbian-nspawn-64"&gt;raspbian-nspawn-64&lt;/a&gt; project has a tutorial that explains how to get there:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sakaki-/raspbian-nspawn-64/wiki/Using-Docker-Inside-the-Container"&gt;https://github.com/sakaki-/raspbian-nspawn-64/wiki/Using-Docker-Inside-the-Container&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please follow all the instructions up to the ‘Installing Docker in the 64-bit Debian Buster Container’ chapter. Installing Docker itself is optional; it can be handy, but it won't be used by k3s.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step #3: Installing k3s (i.e., Lightweight Kubernetes) using k3sup and OpenFaaS using arkade:
&lt;/h4&gt;

&lt;p&gt;Instead of getting Kubernetes (i.e., k8s) installed on our IoT edge, we’ll use k3s. k3s is a lightweight Kubernetes distribution, which is easy to install. To automate and simplify our workflow we’ll use &lt;a href="https://github.com/alexellis/k3sup"&gt;k3sup&lt;/a&gt; and &lt;a href="https://github.com/alexellis/arkade"&gt;arkade&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.alexellis.io/"&gt;Alex Ellis&lt;/a&gt; who wrote those great tools also wrote an excellent tutorial on how to use ‘em to bootstrap your Kubernetes (with OpenFaaS):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@alexellisuk/walk-through-install-kubernetes-to-your-raspberry-pi-in-15-minutes-84a8492dc95a"&gt;https://medium.com/@alexellisuk/walk-through-install-kubernetes-to-your-raspberry-pi-in-15-minutes-84a8492dc95a&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To avoid confusion: in our current situation you should start from the ‘Get your CLI tools’ chapter. DO NOT flash the OS image, or you will lose all the progress from the previous steps.&lt;/p&gt;

&lt;p&gt;At the end of this you should have a working k3s cluster with OpenFaaS deployed, and you can verify it’s indeed ARM64 by running the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe nodes&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Or in short:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe nodes | grep "Architecture"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And confirm it’s indeed:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Architecture:               arm64&lt;/code&gt;&lt;/p&gt;
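&lt;p&gt;You can also check the current userland directly from the shell: on an ARM64 kernel with a 64-bit userland, &lt;code&gt;uname -m&lt;/code&gt; reports &lt;code&gt;aarch64&lt;/code&gt;, while a stock 32-bit Raspbian shell reports &lt;code&gt;armv7l&lt;/code&gt;:&lt;/p&gt;

```shell
# Machine architecture of the current shell's userland:
# expect aarch64 inside the 64-bit nspawn shell, armv7l in stock Raspbian.
uname -m
```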

&lt;h4&gt;
  
  
  Step #4: Installing MongoDB:
&lt;/h4&gt;

&lt;p&gt;Unfortunately, arkade doesn't support installing MongoDB on ARM64, and as of now the helm charts don't support ARM64 either. This means we’ll have to do some hacking.&lt;/p&gt;

&lt;p&gt;We’re going to use MongoDB from Docker’s Official Images library:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hub.docker.com/_/mongo"&gt;https://hub.docker.com/_/mongo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It supports ARM64. We’ll base our deployment on &lt;a href="https://github.com/bitnami/charts/tree/master/bitnami/mongodb"&gt;bitnami&lt;/a&gt;’s helm chart but make the necessary modifications, since the container image is different. Copy and paste the manifest below into a &lt;code&gt;mongodb.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Source: mongodb/templates/svc-standalone.yaml
apiVersion: v1
kind: Service
metadata:
  name: db-mongodb
  namespace: default
  labels:
    app: mongodb
    release: "db"
spec:
  type: ClusterIP
  ports:
    - name: mongodb
      port: 27017
      targetPort: mongodb
  selector:
    app: mongodb
    release: "db"
---
# Source: mongodb/templates/deployment-standalone.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-mongodb
  namespace: default
  labels:
    app: mongodb
    release: "db"
spec:
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: mongodb
      release: "db"
  template:
    metadata:
      labels:
        app: mongodb
        release: "db"
    spec:
      containers:
        - name: db-mongodb
          image: docker.io/library/mongo:latest
          imagePullPolicy: "IfNotPresent"
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: "root"
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: "example"
            - name: MONGODB_ENABLE_IPV6
              value: "no"
            - name: MONGODB_ENABLE_DIRECTORY_PER_DB
              value: "no"
          ports:
            - name: mongodb
              containerPort: 27017
          livenessProbe:
            exec:
              command:
                - mongo
                - --eval
                - "db.adminCommand('ping')"
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - mongo
                - --eval
                - "db.adminCommand('ping')"
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          volumeMounts:
            - name: data
              mountPath: /data/db
              subPath:
          resources:
            {}
      volumes:
        - name: data
          emptyDir: {}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;mongodb.yaml&lt;/code&gt; above is not designed for production deployment: it has no persistent storage settings and uses a hardcoded username and password, to name a couple of caveats. However, it’s good enough for our proof-of-concept.&lt;/p&gt;

&lt;p&gt;Deploying it is as easy as:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f mongodb.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And testing it (i.e., connecting to MongoDB) is as easy as:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl run --namespace default db-mongodb-client --rm --tty -i --restart='Never' --image docker.io/library/mongo:latest --command -- mongo admin --host db-mongodb --authenticationDatabase admin -u root -p example&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The latter is adapted from the NOTES.txt of bitnami’s MongoDB helm chart.&lt;/p&gt;

&lt;h4&gt;
  
  
  End-to-end Project:
&lt;/h4&gt;

&lt;p&gt;We have taken all the steps above in order to finally have storage for our OpenFaaS functions. Alex wrote an awesome tutorial on this topic here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.openfaas.com/blog/get-started-with-python-mongo/"&gt;https://www.openfaas.com/blog/get-started-with-python-mongo/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Again, to avoid confusion, you should follow the tutorial from the ‘Create the hello-python3 function’ chapter.&lt;/p&gt;

&lt;h4&gt;
  
  
  Conclusions:
&lt;/h4&gt;

&lt;p&gt;In this tutorial we’ve taken multiple steps to upgrade our cluster from 32-bit to 64-bit, quickly set up k3s with OpenFaaS, hacked a MongoDB helm chart, and arrived at a setup where our functions have storage.&lt;/p&gt;

&lt;p&gt;Big thanks to Alex Ellis for reviewing &amp;amp; contributing to this article!&lt;/p&gt;

</description>
      <category>openfaas</category>
      <category>mongodb</category>
      <category>kubernetes</category>
      <category>arm64</category>
    </item>
    <item>
      <title>Making HPA more responsive for resource-based scaling in OpenFaaS</title>
      <dc:creator>Itzik Kotler</dc:creator>
      <pubDate>Fri, 15 May 2020 18:17:08 +0000</pubDate>
      <link>https://dev.to/itzikkotler/making-hpa-more-responsive-for-resource-based-scaling-in-openfaas-312f</link>
      <guid>https://dev.to/itzikkotler/making-hpa-more-responsive-for-resource-based-scaling-in-openfaas-312f</guid>
      <description>&lt;p&gt;I've got a Raspberry Pi 4 cluster that is running Kubernetes. Thanks to &lt;a href="https://github.com/alexellis/k3sup"&gt;k3sup&lt;/a&gt; and &lt;a href="https://github.com/alexellis/arkade"&gt;arkade&lt;/a&gt; I got it up and running in jiffy. Here are a few tutorials that I've followed to get there:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://medium.com/@alexellisuk/five-years-of-raspberry-pi-clusters-77e56e547875"&gt;Five years of Raspberry Pi Clusters&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://medium.com/@alexellisuk/walk-through-install-kubernetes-to-your-raspberry-pi-in-15-minutes-84a8492dc95a"&gt;Walk-through — install Kubernetes to your Raspberry Pi in 15 minutes&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Having said that, k3s is a little bit different, and this post is about one such difference: how to change the Kubernetes Horizontal Pod Autoscaler's "cool down" period (aka. &lt;code&gt;--horizontal-pod-autoscaler-downscale-stabilization&lt;/code&gt;) in it.&lt;/p&gt;

&lt;p&gt;Recently I've started playing with HPAv2 (aka. Kubernetes Horizontal Pod Autoscaling) and &lt;a href="https://github.com/openfaas/faas"&gt;OpenFaaS&lt;/a&gt;. The reason is that my OpenFaaS functions tend to be CPU- and memory-intensive (as opposed to seeing high API hit rates, which is what the built-in OpenFaaS autoscaler/alertmanager focuses on). &lt;/p&gt;

&lt;p&gt;OpenFaaS and HPAv2 play nicely together, and I started by following the guide here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.openfaas.com/tutorials/kubernetes-hpa/"&gt;https://docs.openfaas.com/tutorials/kubernetes-hpa/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, as correctly stated in the document, the HPAv2 scale-down is a slow process (the default is 5 minutes). The reason for that is documented here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/"&gt;https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And to be more specific:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When managing the scale of a group of replicas using the Horizontal Pod Autoscaler, it is possible that the number of replicas keeps fluctuating frequently due to the dynamic nature of the metrics evaluated. This is sometimes referred to as thrashing.&lt;/p&gt;

&lt;p&gt;Starting from v1.6, a cluster operator can mitigate this problem by tuning the global HPA settings exposed as flags for the kube-controller-manager component:&lt;/p&gt;

&lt;p&gt;Starting from v1.12, a new algorithmic update removes the need for the upscale delay.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--horizontal-pod-autoscaler-downscale-stabilization&lt;/code&gt;: The value for this option is a duration that specifies how long the autoscaler has to wait before another downscale operation can be performed after the current one has completed. The default value is 5 minutes (5m0s).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, I'm using k3s (v1.17.2+k3s1), and in order to change it I need to pass a new value to kube-controller-manager. k3s is awesome, but it doesn't include the default path of:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/etc/kubernetes/manifests/kube-controller-manager.yaml&lt;/code&gt; on the master node&lt;/p&gt;

&lt;p&gt;So I did some poking and found that it's possible to pass this flag/value to k3s upon start. In other words: we need to stop k3s on the master node, edit the systemd unit of k3s, add the flag, and start it again.&lt;/p&gt;

&lt;p&gt;Here are the steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;SSH to your master node&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run: &lt;code&gt;sudo systemctl stop k3s.service&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edit &lt;code&gt;/etc/systemd/system/k3s.service&lt;/code&gt; with your favorite editor (e.g., &lt;code&gt;sudo vim /etc/systemd/system/k3s.service&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to the ExecStart section and append:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'--kube-controller-manager-arg' \
'horizontal-pod-autoscaler-downscale-stabilization=1m'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;to the command line options. Here's an example of my ExecStart entry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ExecStart=/usr/local/bin/k3s \
    server \
    '--tls-san' \
    '192.168.86.180' \
    '--no-deploy' \
    'servicelb' \
    '--no-deploy' \
    'traefik' \
    '--kube-controller-manager-arg' \
    'horizontal-pod-autoscaler-downscale-stabilization=1m'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;(Note: in my setup I've also disabled the built-in load balancer and the Traefik v1 ingress; k3sup did that for me when I passed &lt;code&gt;--no-extras&lt;/code&gt;.)&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;&lt;p&gt;Run: &lt;code&gt;sudo systemctl daemon-reload&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run: &lt;code&gt;sudo systemctl start k3s.service&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
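&lt;p&gt;Before restarting, a quick &lt;code&gt;grep&lt;/code&gt; confirms the flag actually made it into the unit file. The sketch below runs against a throwaway temp file so it's safe to try anywhere; on your real master node, point &lt;code&gt;UNIT&lt;/code&gt; at &lt;code&gt;/etc/systemd/system/k3s.service&lt;/code&gt; instead:&lt;/p&gt;

```shell
# Sanity check: does the unit file carry the new flag?
# Demonstrated on a temp copy; use UNIT=/etc/systemd/system/k3s.service for real.
UNIT=$(mktemp)
printf "'--kube-controller-manager-arg' 'horizontal-pod-autoscaler-downscale-stabilization=1m'\n" > "$UNIT"
if grep -q 'downscale-stabilization=1m' "$UNIT"; then
  echo "flag present"
fi
rm -f "$UNIT"
```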

&lt;p&gt;That's it! &lt;/p&gt;

&lt;p&gt;We have successfully changed the "cool down" period to one minute (i.e., 1m) in k3s. Now it's time to go back to our &lt;a href="https://github.com/openfaas/faas"&gt;OpenFaaS&lt;/a&gt; functions and enjoy more responsive resource-based scaling.&lt;/p&gt;

</description>
      <category>k3s</category>
      <category>kubernetes</category>
      <category>openfaas</category>
    </item>
  </channel>
</rss>
