<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Àlex Serra</title>
    <description>The latest articles on DEV Community by Àlex Serra (@bounteous17).</description>
    <link>https://dev.to/bounteous17</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F667476%2F7b605cc0-369c-4c61-96ea-c790b57eab1f.jpeg</url>
      <title>DEV Community: Àlex Serra</title>
      <link>https://dev.to/bounteous17</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bounteous17"/>
    <language>en</language>
    <item>
      <title>AWS EKS cost efficient 🤑</title>
      <dc:creator>Àlex Serra</dc:creator>
      <pubDate>Wed, 23 Oct 2024 17:32:25 +0000</pubDate>
      <link>https://dev.to/bounteous17/aws-eks-reserved-instances-51ab</link>
      <guid>https://dev.to/bounteous17/aws-eks-reserved-instances-51ab</guid>
      <description>&lt;p&gt;Running a &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; cluster at some of the &lt;a href="https://www.bunnyshell.com/blog/best-kubernetes-cloud-providers/" rel="noopener noreferrer"&gt;clouds providing&lt;/a&gt; that service has become a great option for lowering costs on those teams that have the capacity to develop and maintain such thing. In our case we have opted for &lt;a href="https://aws.amazon.com/eks/" rel="noopener noreferrer"&gt;EKS&lt;/a&gt;, the native implementation of Kubernetes for AWS.&lt;/p&gt;

&lt;p&gt;Before going deeper, it's important to understand the key concepts of how EKS manages the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/worker.html" rel="noopener noreferrer"&gt;nodes attached to the cluster&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One of the reasons Kubernetes has spread so quickly across the most used cloud providers is that these platforms developed integrations that let k8s adjust the number of nodes attached to the cluster based on performance metrics. For EKS, AWS offers &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html" rel="noopener noreferrer"&gt;two different controllers&lt;/a&gt; to which we can delegate the responsibility of adjusting the number of EC2 machines, and their &lt;a href="https://aws.amazon.com/ec2/instance-types/" rel="noopener noreferrer"&gt;instance class&lt;/a&gt;, that need to be running under the different preconfigured scenarios.&lt;/p&gt;
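
&lt;p&gt;As a sketch of what wiring up one of those controllers looks like, this is roughly how the Cluster Autoscaler can be installed with its official Helm chart; the cluster name and region here are placeholders for your own values, and the autoscaler will additionally need IAM permissions over the Auto Scaling groups:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;gt; helm repo add autoscaler https://kubernetes.github.io/autoscaler
&amp;gt; helm install cluster-autoscaler autoscaler/cluster-autoscaler \
    --namespace kube-system \
    --set autoDiscovery.clusterName=my-eks-cluster \
    --set awsRegion=eu-west-1
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;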

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0isqcceene2vc25821p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0isqcceene2vc25821p.jpg" alt="" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Enough chatter, let's spend the money. An interesting starting point is to create two types of node pools within our cluster: one made up of &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-on-demand-instances.html" rel="noopener noreferrer"&gt;ON_DEMAND&lt;/a&gt; nodes and the other using the &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html" rel="noopener noreferrer"&gt;SPOT&lt;/a&gt; type.&lt;/p&gt;
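
&lt;p&gt;With &lt;code&gt;eksctl&lt;/code&gt;, creating those two managed pools can be sketched like this (the cluster name, pool sizes and instance types are assumptions, adjust them to your workload):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;gt; eksctl create nodegroup --cluster my-eks-cluster \
    --name on-demand-pool --instance-types t4g.xlarge --nodes 2
&amp;gt; eksctl create nodegroup --cluster my-eks-cluster \
    --name spot-pool --instance-types t4g.xlarge,t4g.large --nodes 2 --spot
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Offering several instance types to the &lt;code&gt;SPOT&lt;/code&gt; pool gives AWS more capacity pools to pick from and reduces the chance of running out of spare capacity.&lt;/p&gt;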

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42b639amee1tx2vtu7r9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42b639amee1tx2vtu7r9.png" alt="Compute section inside EKS service" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To summarize, an EC2 instance pool in &lt;code&gt;SPOT&lt;/code&gt; mode gives us the desired instance type and count only while AWS has spare capacity: an instance can be reclaimed at any moment (with a two-minute warning) and replaced by an equivalent one. As you can imagine, this causes some disruption in our cluster, which has to move the workloads from the reclaimed node to those still available. That is why it is the cheapest option, with discounts of up to 90% off the On-Demand price. &lt;code&gt;ON_DEMAND&lt;/code&gt; instances, on the other hand, are guaranteed to keep running for as long as we want without being reclaimed.&lt;/p&gt;
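
&lt;p&gt;EKS labels each managed node with its capacity type, so you can check which pool a node belongs to, and pin interruption-tolerant workloads to the cheap pool with a &lt;code&gt;nodeSelector&lt;/code&gt; on that same label:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;gt; kubectl get nodes -L eks.amazonaws.com/capacityType  # adds a CAPACITYTYPE column: ON_DEMAND or SPOT
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;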

&lt;p&gt;This is where the main topic of the post comes in. &lt;code&gt;SPOT&lt;/code&gt; instances cannot be discounted any further, but the other type can. Since it is a pricier, guaranteed service, the &lt;code&gt;ON_DEMAND&lt;/code&gt; pool can be reserved, with discounts (up to 37% in our case) depending on the term and the payment option under which we reserve said instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsigi4cyrwre5dvwc2mu5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsigi4cyrwre5dvwc2mu5.png" alt="Actual saving plan for ON_DEMAND instances" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The current On-Demand price for &lt;code&gt;t4g.xlarge&lt;/code&gt; instances in our region is about &lt;code&gt;$0.15&lt;/code&gt; per hour, but with a one-year reservation we are paying &lt;code&gt;$0.0967&lt;/code&gt;. If we increased the reservation period to three years, or switched the payment type to &lt;strong&gt;upfront&lt;/strong&gt;, either partial or full, we would obtain an even greater discount.&lt;/p&gt;
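
&lt;p&gt;As a sanity check of that discount, the quick math:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;gt; awk 'BEGIN { printf "%.1f%% discount\n", (0.15 - 0.0967) / 0.15 * 100 }'
35.5% discount
&amp;gt; awk 'BEGIN { printf "$%.2f saved per instance per year\n", (0.15 - 0.0967) * 24 * 365 }'
$466.91 saved per instance per year
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;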




&lt;p&gt;But what happens if I change my mind after signing the contract? 😱&lt;/p&gt;

&lt;p&gt;There is an official market (no shady stuff), the Reserved Instance Marketplace, where we can sell a standard reservation we no longer need. Alternatively, at purchase time we can opt for a &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-convertible-exchange.html" rel="noopener noreferrer"&gt;convertible reservation&lt;/a&gt;, which gives us more room to maneuver later, but a less succulent discount.&lt;/p&gt;

&lt;p&gt;Another option to explore is &lt;a href="https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html" rel="noopener noreferrer"&gt;Savings Plans&lt;/a&gt;, which don't require specifying the instance type to be reserved at all, only an hourly spend commitment. Again, the greater the flexibility, the lower the discount.&lt;/p&gt;




&lt;p&gt;How can I see the discounts that have been applied to me?&lt;/p&gt;

&lt;p&gt;In the "Billing and Cost Management" section of the console you can find all the details of the savings achieved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmqbta0405bhwhpu3m9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmqbta0405bhwhpu3m9m.png" alt="Image description" width="800" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If replica autoscaling within the cluster is well configured and we reserve capacity for the baseline load at a minimum cost, we generate interesting savings with a time investment that makes up for it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I hope you found it useful, and if you want to donate some of the money you've saved, you can do so.&lt;/p&gt;

&lt;p&gt;Be good guys, bye!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>go</category>
      <category>ec2</category>
    </item>
    <item>
      <title>How I built my own SeedBox with K8S</title>
      <dc:creator>Àlex Serra</dc:creator>
      <pubDate>Sun, 28 Jul 2024 10:31:34 +0000</pubDate>
      <link>https://dev.to/bounteous17/how-i-built-my-own-seedbox-with-k8s-1gp7</link>
      <guid>https://dev.to/bounteous17/how-i-built-my-own-seedbox-with-k8s-1gp7</guid>
      <description>&lt;p&gt;We are going to build our self-hosted HA SeedBox for &lt;a href="https://www.qbittorrent.org/" rel="noopener noreferrer"&gt;Qbittorrent&lt;/a&gt;, one of the most popular decentralized network protocols in the world.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;You may have noticed that whenever the topic of a &lt;em&gt;P2P&lt;/em&gt; network comes up in conversation, it tends to raise allusions to piracy or undesirable content being shared without control. Ultimately, the technology is not to blame for its misuse. Keep in mind, though, that even on a decentralized network, the traffic can be blocked or throttled by &lt;a href="https://www.investopedia.com/terms/i/isp.asp" rel="noopener noreferrer"&gt;ISPs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Nice to have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.howtogeek.com/764504/what-is-a-seedbox-and-why-would-you-want-one/" rel="noopener noreferrer"&gt;A little more in-depth knowledge about a SeedBox&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://itnext.io/finally-a-viable-helm-replacement-388d538f9e1f" rel="noopener noreferrer"&gt;Werf, the viable Helm Replacement&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://minikube.sigs.k8s.io/docs/start/?arch=%2Flinux%2Fx86-64%2Fstable%2Fbinary+download" rel="noopener noreferrer"&gt;A Kubernetes cluster where we can create deployments&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://k9scli.io/" rel="noopener noreferrer"&gt;Terminal based UI to interact with your Kubernetes cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.howtogeek.com/742893/what-is-a-nas/" rel="noopener noreferrer"&gt;What is a NAS server?&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prepare the scenario
&lt;/h2&gt;

&lt;p&gt;You've probably already played the &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; game and know what it's all about, but if not, I highly recommend discovering the best container orchestrator in the world and playing with it first.&lt;/p&gt;

&lt;p&gt;You can take a look at &lt;a href="https://github.com/Bounteous17/helm-chart-qbittorrent" rel="noopener noreferrer"&gt;this public repository&lt;/a&gt; I have prepared, which makes available a &lt;a href="https://www.freecodecamp.org/news/what-is-a-helm-chart-tutorial-for-kubernetes-beginners/" rel="noopener noreferrer"&gt;Helm chart&lt;/a&gt; that can be customized to each of our needs. I'm following &lt;a href="https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221" rel="noopener noreferrer"&gt;this other guide&lt;/a&gt; as I write this to publish the repository as a chart package.&lt;/p&gt;

&lt;p&gt;I almost forgot to mention that we need a NAS server accessible from the network of our Kubernetes cluster. Mine runs on a &lt;a href="https://www.raspberrypi.com/products/raspberry-pi-3-model-b/" rel="noopener noreferrer"&gt;Raspberry Pi 3&lt;/a&gt; with an HDD attached, published on the network using &lt;a href="https://www.openmediavault.org/" rel="noopener noreferrer"&gt;OpenMediaVault&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; showmount &lt;span class="nt"&gt;-e&lt;/span&gt; 192.168.2.10 &lt;span class="c"&gt;# list exported paths&lt;/span&gt;
Export list &lt;span class="k"&gt;for &lt;/span&gt;192.168.2.10:
/export                      192.168.2.0/24
/export/home-lab-nas-runtime 192.168.2.0/24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sadly, the hardware used for the NAS server is a bottleneck: the local network transfers data much faster than the USB 2.0 port the disk is attached to.&lt;/p&gt;
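
&lt;p&gt;You can measure that bottleneck yourself by writing a file straight to the mounted export; the mount point here is an assumption, use your own. The last line of &lt;code&gt;dd&lt;/code&gt;'s output reports the effective write speed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;gt; dd if=/dev/zero of=/mnt/nas/throughput-test bs=1M count=256 oflag=direct
&amp;gt; rm /mnt/nas/throughput-test
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;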

&lt;h2&gt;
  
  
  First contact
&lt;/h2&gt;

&lt;p&gt;At the time of writing, &lt;a href="https://werf.io/getting_started/?usage=localDev&amp;amp;os=linux&amp;amp;buildBackend=docker" rel="noopener noreferrer"&gt;Werf&lt;/a&gt; does not yet implement all of Helm's functionality, which is why the tool bundles a separate &lt;a href="https://helm.sh/docs/intro/quickstart/#install-helm" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; installation to cover the missing pieces.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; helm version
version.BuildInfo&lt;span class="o"&gt;{&lt;/span&gt;Version:&lt;span class="s2"&gt;"v3.15.3"&lt;/span&gt;, GitCommit:&lt;span class="s2"&gt;"3bb50bbbdd9c946ba9989fbe4fb4104766302a64"&lt;/span&gt;, GitTreeState:&lt;span class="s2"&gt;"clean"&lt;/span&gt;, GoVersion:&lt;span class="s2"&gt;"go1.22.5"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; werf helm version                                                                                                                                                                                             
version.BuildInfo&lt;span class="o"&gt;{&lt;/span&gt;Version:&lt;span class="s2"&gt;"v3.14"&lt;/span&gt;, GitCommit:&lt;span class="s2"&gt;""&lt;/span&gt;, GitTreeState:&lt;span class="s2"&gt;""&lt;/span&gt;, GoVersion:&lt;span class="s2"&gt;"go1.21.6"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fortunately &lt;a href="https://werf.io/getting_started/?usage=localDev&amp;amp;os=linux&amp;amp;buildBackend=docker" rel="noopener noreferrer"&gt;Werf&lt;/a&gt; implements the &lt;a href="https://helm.sh/docs/intro/quickstart/#install-helm" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; specifications, so if you feel more comfortable using the &lt;code&gt;helm&lt;/code&gt; CLI you can continue doing so.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing the chart
&lt;/h2&gt;

&lt;p&gt;We have two ways to do it. Both methods read the values I preconfigured for my scenario, so you should adjust them to yours.&lt;/p&gt;

&lt;p&gt;Feel free to edit the &lt;code&gt;.helm/values.yaml&lt;/code&gt; file. You will primarily need to modify the volume-related section to match the directory structure you have configured on your NAS server.&lt;/p&gt;

&lt;p&gt;The chart parameters that can be modified have been documented &lt;a href="https://github.com/Bounteous17/helm-chart-qbittorrent?tab=readme-ov-file#configuration" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Werf converge (Recommended)
&lt;/h3&gt;

&lt;p&gt;We can clone the chart's &lt;em&gt;git&lt;/em&gt; &lt;a href="https://github.com/Bounteous17/helm-chart-qbittorrent" rel="noopener noreferrer"&gt;repository&lt;/a&gt; and deploy it in an orderly manner. The first joy that &lt;code&gt;werf&lt;/code&gt; gives us is a detailed, real-time output of what is happening with the deployment; with &lt;code&gt;helm&lt;/code&gt; this doesn't happen.&lt;/p&gt;

&lt;p&gt;If no working path is specified, &lt;code&gt;werf&lt;/code&gt; attempts the deployment from the default &lt;code&gt;.helm&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/werf/nelm" rel="noopener noreferrer"&gt;Nelm&lt;/a&gt; is the re-written implementation for &lt;code&gt;helm&lt;/code&gt;. Unfortunately this does not yet have a dedicated command, so the only way to use it is through the &lt;code&gt;werf&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; werf converge &lt;span class="nt"&gt;--dev&lt;/span&gt;                                                                                &lt;span class="o"&gt;[&lt;/span&gt;±master ✓]
Version: v2.6.4
Using werf config render file: /tmp/werf-config-render-2521405566
Starting release &lt;span class="s2"&gt;"qbittorrent"&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;namespace: &lt;span class="s2"&gt;"qbittorrent"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
Constructing release &lt;span class="nb"&gt;history
&lt;/span&gt;Constructing chart tree
Processing resources
Constructing new release
Constructing new deploy plan
Starting tracking
Executing deploy plan
┌ Progress status
│ RESOURCE &lt;span class="o"&gt;(&lt;/span&gt;→READY&lt;span class="o"&gt;)&lt;/span&gt;                    STATE    INFO
│ Deployment/qbittorrent               WAITING  Ready:0/1
│  • Pod/qbittorrent-648fd97cd7-fbrcw  CREATED  Status:ContainerCreating
│ Ingress/qbittorrent                  READY
│ Service/qbittorrent                  READY
└ Progress status

┌ Progress status
│ RESOURCE &lt;span class="o"&gt;(&lt;/span&gt;→READY&lt;span class="o"&gt;)&lt;/span&gt;                    STATE  INFO
│ Deployment/qbittorrent               READY  Ready:1/1
│  • Pod/qbittorrent-648fd97cd7-fbrcw  READY  Status:Running
└ Progress status

┌ Completed operations
│ Create resource: Deployment/qbittorrent
│ Create resource: Ingress/qbittorrent
│ Create resource: Service/qbittorrent
└ Completed operations

Succeeded release &lt;span class="s2"&gt;"qbittorrent"&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;namespace: &lt;span class="s2"&gt;"qbittorrent"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
Running &lt;span class="nb"&gt;time &lt;/span&gt;8.98 seconds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Helm install
&lt;/h3&gt;

&lt;p&gt;Perhaps the most classic way to deploy an application is by using &lt;code&gt;helm&lt;/code&gt;. There is nothing more to add here, except that, compared to the level of detail the previous option gives us, this option may be &lt;strong&gt;more tedious if we need to debug possible errors&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; werf helm repo add home-lab-qbittorrent https://bounteous17.github.io/helm-chart-qbittorrent
&lt;span class="s2"&gt;"home-lab-qbittorrent"&lt;/span&gt; has been added to your repositories
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; werf helm search repo home-lab-qbittorrent                                                                                                                                                                    
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION                
home-lab-qbittorrent/qbittorrent        0.1.0           4.6.5-r0-ls334  A Helm chart &lt;span class="k"&gt;for &lt;/span&gt;Kubernetes
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; werf helm &lt;span class="nb"&gt;install &lt;/span&gt;home-lab-qbittorrent home-lab-qbittorrent/qbittorrent
NAME: home-lab-qbittorrent
LAST DEPLOYED: Sun Jul 28 11:18:36 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Let's make torrents eternal :)
&lt;/h2&gt;

&lt;p&gt;Our application will be available under the ingress host that we have configured from the &lt;code&gt;ingress.host&lt;/code&gt; parameter.&lt;/p&gt;
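
&lt;p&gt;If you don't have DNS for that host on your LAN, a quick way to test it is to point the hostname at your ingress controller's address by hand; both the hostname and the IP below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;gt; echo "192.168.2.20 qbittorrent.home.lab" | sudo tee -a /etc/hosts
&amp;gt; curl -I http://qbittorrent.home.lab  # should answer with the qBittorrent web UI
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;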

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa59u083lw3n0yl601tem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa59u083lw3n0yl601tem.png" alt="Running Qbittorrent application" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will have seen that in the &lt;code&gt;.helm/values.yaml&lt;/code&gt; file of this chart, &lt;a href="https://github.com/Bounteous17/helm-chart-qbittorrent/blob/2017ff7009b611e286ae62ae7a8776c60b3d6274/.helm/values.yaml#L40" rel="noopener noreferrer"&gt;these&lt;/a&gt; are the default values that indicate the OS path where we store the data that should not disappear if our container is restarted within the cluster.&lt;/p&gt;

&lt;p&gt;Now that we have &lt;strong&gt;data persistence assured&lt;/strong&gt;, we will be able to access this NAS server from clients other than our cluster deployment to read the downloaded data.&lt;/p&gt;

&lt;p&gt;If you are using a Linux machine as a second client to access the NAS volume with downloaded content, be sure to check out &lt;a href="https://wiki.archlinux.org/title/NFS#Client_configuration" rel="noopener noreferrer"&gt;this guide&lt;/a&gt; for an optimal setup. Ultimately, the advantage of using this high-performance network protocol is that it is compatible with almost every operating system.&lt;/p&gt;
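
&lt;p&gt;For reference, mounting the export we listed earlier with &lt;code&gt;showmount&lt;/code&gt; from a Linux client is a one-liner, plus an optional &lt;code&gt;/etc/fstab&lt;/code&gt; entry to make it permanent; adjust the server IP and paths to your setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;gt; sudo mkdir -p /mnt/nas
&amp;gt; sudo mount -t nfs 192.168.2.10:/export/home-lab-nas-runtime /mnt/nas
# or permanently, via a line in /etc/fstab:
192.168.2.10:/export/home-lab-nas-runtime  /mnt/nas  nfs  defaults,_netdev  0  0
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;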

&lt;p&gt;Indeed, it would be really cool to set up a &lt;a href="https://jellyfin.org/" rel="noopener noreferrer"&gt;Jellyfin&lt;/a&gt; server now and connect it to the network volume to enjoy the content from the couch ;)&lt;/p&gt;

&lt;p&gt;I will be super happy to answer any questions (your skill level doesn't matter, don't be afraid) and to share solutions to any problems you may have encountered during this process.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>torrent</category>
      <category>devops</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
