<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Anas Alloush</title>
    <description>The latest articles on DEV Community by Anas Alloush (@anas_alloush).</description>
    <link>https://dev.to/anas_alloush</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2659471%2F35dbc225-de96-4f19-9a0f-9ce20e60c8be.jpeg</url>
      <title>DEV Community: Anas Alloush</title>
      <link>https://dev.to/anas_alloush</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/anas_alloush"/>
    <language>en</language>
    <item>
      <title>Balancing Flexibility &amp; Responsibility: The Shift to Openness in Telcos, At What Cost?</title>
      <dc:creator>Anas Alloush</dc:creator>
      <pubDate>Wed, 07 May 2025 19:38:05 +0000</pubDate>
      <link>https://dev.to/anas_alloush/balancing-flexibility-responsibility-the-shift-to-openness-in-telcos-at-what-cost-3peb</link>
      <guid>https://dev.to/anas_alloush/balancing-flexibility-responsibility-the-shift-to-openness-in-telcos-at-what-cost-3peb</guid>
      <description>&lt;h2&gt;
  
  
  The Past:
&lt;/h2&gt;

&lt;p&gt;For decades, Telecommunication operators built and ran their networks using closed, proprietary solutions, products, protocols, data schemas and practices from a few dominant Telco vendors. These systems were delivered as turnkey solutions, often described as "black boxes" where the operator had little visibility or control over the internal workings.&lt;/p&gt;

&lt;p&gt;For example, in the Core domain (2G/3G/4G), vendors provided the complete core (e.g., the EPC in 4G) as a monolithic system. Customization was minimal, and operators depended on the vendor for upgrades, integration, and even basic configuration.&lt;/p&gt;

&lt;p&gt;Another example is the RAN domain, where lock-in was even stronger. RAN solutions came as vertically integrated hardware-software bundles: Vendor X's baseband units (BBUs) worked only with its own radios and software, and there was no chance of integrating them with Vendor Y's equivalent solutions.&lt;/p&gt;

&lt;p&gt;This was the paradigm for the last four decades, and it had its pros and cons:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single vendor, E2E support. Simplicity for operators' teams.&lt;/li&gt;
&lt;li&gt;Lower expertise bar. Operators only operate; they don't self-develop, self-optimize or self-integrate.&lt;/li&gt;
&lt;li&gt;Clear product SLAs with robust definitions.&lt;/li&gt;
&lt;li&gt;The internals of the solution are well integrated, tested and optimized.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Operators had to adapt their operations to the vendor's products, not the other way around.&lt;/li&gt;
&lt;li&gt;Limited operators' ability to mix and match components or diversify their supply chain, which would have increased competition and lowered prices.&lt;/li&gt;
&lt;li&gt;Changes to the solution were often costly; a Change Request could sometimes cost more than the solution itself.&lt;/li&gt;
&lt;li&gt;Vendor lock-in, limited innovation, and slow adaptability to new technologies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7ucokza7gz4220e7llx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7ucokza7gz4220e7llx.png" alt="Image description" width="597" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Recent Openness Movement
&lt;/h2&gt;

&lt;p&gt;Today, a significant shift is sweeping the Telco industry: a multi-dimensional change taking shape across the classical Telco domains, from Core and RAN to configuration and automation. The whole Telco Ops mindset is changing, and it is GOOD!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disaggregation was/is the Motto of this revolution.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;5G standards came with drastic changes from what the Telco business was used to. HTTP? REST APIs? Completely alien to a Telco veteran who used to wrestle with SS7 and Diameter headers.&lt;/p&gt;

&lt;p&gt;A simultaneous shift was happening with HW/SW disaggregation. Why buy a specialized, costly piece of metal to run the MME if you can containerize it and run it on a COTS server at a sixth of the cost?&lt;/p&gt;

&lt;p&gt;Mixing &amp;amp; matching CNFs and VNFs from different vendors started to gain interest as well. More competition, less monopoly.&lt;/p&gt;

&lt;p&gt;And suddenly, Cloud Native technologies, with all their practices and methodologies like DevOps, GitOps, ZTP, etc., started finding their way into the Telco landscape.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39xjm303hmwu3itwywou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39xjm303hmwu3itwywou.png" alt="Image description" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Standards bodies and open initiatives started to flourish: OpenRAN, ONAP, ETSI NFV, etc. Many of them had existed for a while, but they were not as relevant in the closed-monolith era.&lt;/p&gt;

&lt;p&gt;Telco operators started leveraging the power of community-driven innovation (something IT has been doing for decades) and invested in the movement.&lt;/p&gt;

&lt;p&gt;Lower infrastructure CAPEX, smaller maintenance windows, faster TTM for new products, leveraging cloud powers: all loved and adopted by Telco operators.&lt;/p&gt;

&lt;p&gt;But was it all roses and butterflies? As with anything in life: no!&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges
&lt;/h2&gt;

&lt;p&gt;This newfound freedom came with challenges, mainly an increased responsibility that is new to the Telco business.&lt;/p&gt;

&lt;p&gt;Embracing open standards and open source demands a deeper understanding of the underlying technologies. Disaggregating means more vendors to deal with, and that requires - among other things - well-defined SLAs to avoid finger-pointing games.&lt;/p&gt;

&lt;p&gt;Here are some of the main challenges facing Telcos in the disaggregation era:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Integration Complexity:&lt;/strong&gt;&lt;br&gt;
Unlike turnkey proprietary solutions, open standards require operators to stitch together multiple components. Ensuring interoperability between vendors increases testing and validation efforts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Risk of Performance Bottlenecks:&lt;/strong&gt;&lt;br&gt;
Due to mismatched integrations and the long list of variables to be tweaked between different products, performance can easily suffer if the integration is not tight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Talent Shortage &amp;amp; Expertise Gaps&lt;/strong&gt;&lt;br&gt;
Open ecosystems require skills in cloud-native technologies, automation, orchestration and open-source tooling. Telco operators often lack in-house expertise, and retraining legacy staff on new technologies adds time and cost.&lt;/p&gt;

&lt;p&gt;Also, Competition with tech giants for software engineers, DevOps, and cloud specialists makes hiring difficult and expensive. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Impact on SLAs &amp;amp; Operational Responsibility&lt;/strong&gt;&lt;br&gt;
With proprietary systems, vendors took full accountability for performance and outages. Now operators own the integration they demanded through disaggregation, making them effectively responsible for E2E service reliability.&lt;/p&gt;

&lt;p&gt;Meeting strict Service-Level Agreements (SLAs) requires deeper control over the entire stack, increasing operational pressure. (Telcos like their multi-nines SLAs 😉)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Complex Troubleshooting&lt;/strong&gt;&lt;br&gt;
When issues arise across multi-vendor, open-source stacks and environments, troubleshooting requires multiple teams sitting together, reviewing the different stacks, correlating telemetry and connecting the dots to find the root cause. Again, this requires deep expertise and full cooperation between the different domains.&lt;/p&gt;

&lt;h2&gt;
  
  
  Suggestions
&lt;/h2&gt;

&lt;p&gt;Abandoning openness and disaggregation is not an option. The way forward is finding the right strategic balance: embracing the flexibility offered by open approaches while building the internal capabilities and processes necessary to manage the associated responsibilities.&lt;/p&gt;

&lt;p&gt;Should I mention AI here? 😀 Sure, AI is key to navigating the complexity of disaggregated multi-vendor networks, but there is still a long way to go before it can be utilized efficiently without harming the delicate balance of a Telco's annual TCO.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Closing words:&lt;/strong&gt; the transition is not easy, and Telcos need to be honest and fact-check themselves on how much disaggregation they can handle given their conditions.&lt;/p&gt;

&lt;p&gt;Each Telco is an individual case, and finding this sweet spot is key to fostering innovation, reducing costs, and ultimately delivering better services in the evolving digital landscape.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Run DeepSeek locally on BareMetal via K8s</title>
      <dc:creator>Anas Alloush</dc:creator>
      <pubDate>Sun, 09 Mar 2025 20:31:11 +0000</pubDate>
      <link>https://dev.to/anas_alloush/run-deepseek-locally-on-baremetal-via-k8s-3ng9</link>
      <guid>https://dev.to/anas_alloush/run-deepseek-locally-on-baremetal-via-k8s-3ng9</guid>
      <description>&lt;h2&gt;
  
  
  What is DeepSeek?
&lt;/h2&gt;

&lt;p&gt;Just as OpenAI has the ChatGPT chatbot, DeepSeek also has a similar chatbot, and it comes with two models: DeepSeek-V3 and DeepSeek-R1.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DeepSeek-V3&lt;/strong&gt; is the default model used when we interact with the DeepSeek app. It’s a versatile large language model (LLM) that stands out as a general-purpose tool that can handle a wide range of tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6u1mfuvv24b4uxbf8ew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6u1mfuvv24b4uxbf8ew.png" alt="Image description" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DeepSeek-R1&lt;/strong&gt; is a powerful reasoning model built for solving tasks that require advanced reasoning and deep problem-solving. It works great for coding challenges that go beyond regurgitating code that has been written thousands of times and logic-heavy questions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cogi1z2trircjjk39e8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cogi1z2trircjjk39e8.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What gave DeepSeek its blazing fame were the mathematical tricks used to enhance efficiency, reducing its dependence on Nvidia’s top-tier GPUs and on the performance of the Nvidia Collective Communications Library (NCCL).&lt;/p&gt;

&lt;p&gt;In short, DeepSeek used smart math to avoid needing expensive HW like Nvidia’s H100 GPUs for training on large datasets.&lt;br&gt;
This also affected how the model can be run and used; hence we can run it on local servers with normal (no-GPU) compute power.&lt;/p&gt;


&lt;h2&gt;
  
  
  DeepSeek’s Mathematical Tricks for Computational Efficiency:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Low-Rank Approximations for Faster Computation&lt;/strong&gt;&lt;br&gt;
One of DeepSeek’s key optimizations is low-rank matrix approximations, which reduce the number of operations needed in matrix multiplications. Instead of performing full-rank matrix multiplications, these methods approximate matrices with lower-dimensional representations, significantly reducing computational cost.&lt;/p&gt;
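&lt;p&gt;The idea can be sketched in plain Python (illustrative dimensions only, not DeepSeek’s actual ones): if a weight matrix factors as W = A·B with a small inner rank k, applying it to a vector costs roughly (m+n)·k multiply-adds instead of m·n:&lt;/p&gt;

```python
import random

random.seed(0)
m = n = 64
k = 4  # the "low rank": much smaller than m and n

# A weight matrix that is exactly rank k by construction: W = A @ B.
A = [[random.gauss(0, 1) for _ in range(k)] for _ in range(m)]   # m x k
B = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]   # k x n
W = [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(n)] for i in range(m)]

def matvec(M, v):
    # Plain matrix-vector product over lists of rows.
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

x = [random.gauss(0, 1) for _ in range(n)]
full = matvec(W, x)             # m*n = 4096 multiply-adds
factored = matvec(A, matvec(B, x))  # k*n + m*k = 512 multiply-adds (8x fewer)

# The two paths agree up to floating-point noise.
err = max(abs(a - b) for a, b in zip(full, factored))
saving = (m * n) // (k * n + m * k)  # 8x fewer operations
```

Here the factorization is exact by construction; for a real trained matrix a truncated SVD gives the best rank-k approximation, and the small residual is the accuracy traded for the 8x compute saving.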

&lt;p&gt;&lt;strong&gt;2. Grouped Query Attention (GQA) for Memory Savings&lt;/strong&gt;&lt;br&gt;
GQA restructures how attention is computed in transformer models, reducing the memory bandwidth required for attention operations. Instead of computing attention for each query separately, GQA allows multiple queries to share the same key-value pairs, leading to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lower memory consumption&lt;/li&gt;
&lt;li&gt;Faster inference speeds&lt;/li&gt;
&lt;li&gt;Reduced redundant computations&lt;/li&gt;
&lt;/ul&gt;
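&lt;p&gt;The memory saving can be estimated with back-of-the-envelope arithmetic (hypothetical head counts, not DeepSeek’s real configuration): the KV cache stores one key and one value vector per KV head per token, so sharing KV heads across groups of query heads shrinks it proportionally:&lt;/p&gt;

```python
# Per-layer KV-cache size, counted in stored values: 2 (K and V) * heads * head_dim per token.
seq_len, head_dim = 4096, 128
n_query_heads = 32
n_kv_heads = 4  # GQA: every group of 8 query heads shares one K/V pair

mha_cache = 2 * seq_len * n_query_heads * head_dim  # standard multi-head attention
gqa_cache = 2 * seq_len * n_kv_heads * head_dim     # grouped-query attention

ratio = mha_cache // gqa_cache  # 8x smaller KV cache
```

The ratio is simply n_query_heads / n_kv_heads, which is where the lower memory consumption and reduced bandwidth come from.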

&lt;p&gt;&lt;strong&gt;3. Mixed-Precision Training for Speed and Efficiency&lt;/strong&gt;&lt;br&gt;
DeepSeek utilizes mixed-precision training, where computations use FP16/BF16 instead of FP32, reducing memory footprint and accelerating training. However, to maintain numerical stability, loss scaling techniques are applied, ensuring that small gradients are not lost due to precision truncation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Quantization for Reduced Computational Complexity&lt;/strong&gt;&lt;br&gt;
Beyond mixed-precision, DeepSeek also benefits from quantization, where tensors are represented using lower-bit precision (e.g., INT8). This allows for faster matrix multiplications and reduced memory bandwidth consumption, making training more efficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Stochastic Rounding to Maintain Accuracy&lt;/strong&gt;&lt;br&gt;
When using lower-precision floating-point formats, stochastic rounding is employed to mitigate the accumulation of rounding errors, ensuring the model maintains high accuracy despite using reduced precision.&lt;/p&gt;
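&lt;p&gt;A toy Python sketch shows why: rounding 0.1 to the nearest integer always gives 0, a systematic bias that accumulates over many updates, while stochastic rounding is unbiased on average:&lt;/p&gt;

```python
import math
import random

random.seed(2)

def stochastic_round(x):
    # Adding uniform noise in [0, 1) before flooring rounds up with
    # probability equal to the fractional part, so E[result] == x.
    return math.floor(x + random.random())

samples = [stochastic_round(0.1) for _ in range(100_000)]
stochastic_mean = sum(samples) / len(samples)  # close to 0.1: no bias
nearest = round(0.1)                           # always 0: the update is lost
```

Each individual stochastic result is still 0 or 1, but their average preserves the true value, which is what keeps accumulated low-precision updates accurate.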

&lt;p&gt;DeepSeek’s mathematical optimizations allowed for cheaper training and lighter models that we can run on servers/PCs with minimum HW.&lt;/p&gt;


&lt;h2&gt;
  
  
  Why would someone run an LLM locally?
&lt;/h2&gt;

&lt;p&gt;Running a large language model (LLM) locally offers several advantages, depending on the use case and requirements. Here are some key reasons why someone might choose to run an LLM locally:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Data Privacy and Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sensitive Data: When working with confidential or sensitive information (e.g., medical, legal, or proprietary business data), running the model locally ensures that the data never leaves your environment, reducing the risk of exposure or breaches.&lt;/li&gt;
&lt;li&gt;Compliance: Local deployment can help meet regulatory requirements (e.g., GDPR, HIPAA) that mandate data to remain on-premises.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Control and Customization&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full Control: Running an LLM locally gives you complete control over the model, its configuration, and the infrastructure it runs on.&lt;/li&gt;
&lt;li&gt;Customization: You can fine-tune or modify the model to better suit specific needs, which might not be possible or cost-effective with cloud-based APIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Cost Efficiency&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduced API Costs: Cloud-based LLM services often charge based on usage (e.g., per token or API call). Running the model locally can be more cost-effective for high-volume or continuous usage.&lt;/li&gt;
&lt;li&gt;No Subscription Fees: Local deployment avoids recurring subscription costs associated with cloud-based LLM services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Performance and Latency&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lower Latency: Local deployment eliminates network latency, which is especially important for real-time applications or when low response times are critical.&lt;/li&gt;
&lt;li&gt;Predictable Performance: You can optimize the hardware and software stack to ensure consistent performance, without being affected by external factors like cloud service outages or throttling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Offline Accessibility&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No Internet Dependency: Running the model locally allows you to use it in environments without reliable internet access, such as remote locations or secure facilities.&lt;/li&gt;
&lt;li&gt;Disaster Recovery: Local deployment ensures that the model remains accessible even during internet outages or cloud service disruptions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;6. Transparency and Debugging&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Model Transparency: Running the model locally allows you to inspect its behavior, outputs, and intermediate steps, which can be crucial for debugging or understanding its decision-making process.&lt;/li&gt;
&lt;li&gt;Error Analysis: You can log and analyze errors or unexpected outputs more effectively when the model is under your control.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;7. Long-Term Sustainability&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid Vendor Lock-In: By running the model locally, you are not dependent on a specific cloud provider or service, reducing the risk of vendor lock-in.&lt;/li&gt;
&lt;li&gt;Future-Proofing: Local deployment ensures that you can continue using the model even if the cloud service changes its pricing, terms, or discontinues the service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;8. Research and Development&lt;/strong&gt;&lt;br&gt;
Researchers and developers can experiment with the model’s architecture, training data, or fine-tuning processes without restrictions imposed by cloud providers.&lt;/p&gt;


&lt;h2&gt;
  
  
  Hands-On Stuff:
&lt;/h2&gt;

&lt;p&gt;There are many ways you can run DeepSeek locally.&lt;br&gt;
As a fan of K8s and containers, I chose the containerized way.&lt;/p&gt;

&lt;p&gt;Here I’m running the DeepSeek-R1 model locally on a Kubernetes cluster. The cluster runs in a VM, which runs on a personal laptop. One might argue that such a setup is not really bare metal, which is correct, but the same K8s configuration can be used on bare metal directly. In my case I ran it within a VM for convenience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setup specifications:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 VM with 32GB RAM &amp;amp; 16 cores / Intel i9-9880H CPU @ 2.30GHz.&lt;/li&gt;
&lt;li&gt;No GPUs used.&lt;/li&gt;
&lt;li&gt;50 GB Storage allocated to the VM.&lt;/li&gt;
&lt;li&gt;Ubuntu 22.04.5 LTS.&lt;/li&gt;
&lt;li&gt;Minikube K8s.&lt;/li&gt;
&lt;li&gt;DeepSeek-r1 with 7-billion parameters (Ollama Docker Image).&lt;/li&gt;
&lt;li&gt;Open Web UI.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Practical Steps:&lt;/strong&gt;&lt;br&gt;
1- Install any K8s distribution you feel comfortable working with.&lt;br&gt;
Here I’m using Minikube K8s and allocating 14 vCPUs &amp;amp; 28GB Memory for the cluster.&lt;br&gt;
&lt;code&gt;minikube start --cpus=14 --memory=28672&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;2- Prepare K8s Persistent Volumes (PVs) of any type (static or dynamic), as they will later be consumed by two PersistentVolumeClaims (PVCs).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: If you are using a Minikube K8s, you can simply use the storage-provisioner-gluster add-on as explained &lt;a href="https://minikube.sigs.k8s.io/docs/handbook/addons/storage-provisioner-gluster/" rel="noopener noreferrer"&gt;here &lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;3- Go to &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;https://ollama.com/&lt;/a&gt; and choose the model you want. Here I’m choosing DeepSeek-R1 with 7 billion parameters. Choose a model with fewer parameters depending on your machine resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3w3plmhsm1ub7s5ljrp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3w3plmhsm1ub7s5ljrp.png" alt="Image description" width="529" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- Once you have a running K8s setup, apply the YAML configuration below via &lt;code&gt;kubectl apply -f&lt;/code&gt;.&lt;br&gt;
It will download and run the images:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DeepSeek-R1 7b model (pulled through Ollama).&lt;/li&gt;
&lt;li&gt;open-webui.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It will also prepare the volumes required, and expose the Open-Webui GUI via port 11434 to interact with the model..&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: open-webui-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ollama-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
# Service so other pods (open-webui, the pull job) can reach the Ollama API
apiVersion: v1
kind: Service
metadata:
  name: ollama
spec:
  selector:
    app: ollama
  ports:
    - port: 11434
      targetPort: 11434
---
# Service exposing the Open WebUI (it listens on 8080 inside the container)
apiVersion: v1
kind: Service
metadata:
  name: open-webui
spec:
  type: NodePort
  selector:
    app: open-webui
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: open-webui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: open-webui
  template:
    metadata:
      labels:
        app: open-webui
    spec:
      containers:
        - name: open-webui
          image: ghcr.io/open-webui/open-webui:latest
          env:
            - name: OLLAMA_BASE_URL
              value: "http://ollama:11434"   # the Ollama Service above; 127.0.0.1 would point inside this pod
          volumeMounts:
            - mountPath: /app/backend/data
              name: open-webui-storage
      volumes:
        - name: open-webui-storage
          persistentVolumeClaim:
            claimName: open-webui-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:latest
          volumeMounts:
            - mountPath: /root/.ollama
              name: ollama-storage
      volumes:
        - name: ollama-storage
          persistentVolumeClaim:
            claimName: ollama-storage
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ollama-pull-llama
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: ollama-pull-llama
          image: ollama/ollama:latest
          # Pull the model through the running Ollama server, reached via its Service
          command: ["/bin/sh", "-c", "sleep 3; OLLAMA_HOST=ollama:11434 ollama pull deepseek-r1:7b"]
          volumeMounts:
            - mountPath: /root/.ollama
              name: ollama-storage
      volumes:
        - name: ollama-storage
          persistentVolumeClaim:
            claimName: ollama-storage

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Depending on your internet connection, it will take around 2 minutes to pull all the required images (~4.5GB) and run them on your K8s cluster.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Before you start using the model, the Ollama pull job must be in Complete status, and all other pods must be in Running status.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrf9637ssho72ms1hm5h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrf9637ssho72ms1hm5h.png" alt="Image description" width="800" height="217"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Moment of Truth: Test, Test
&lt;/h2&gt;

&lt;p&gt;Now that we have a ready local setup, including a running model, its storage and an exposed port, let’s start testing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1st Question: What is a Transistor?&lt;/strong&gt;&lt;br&gt;
Thinking and writing the answer took around ~50 sec.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkahc53arg5mkrteri2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkahc53arg5mkrteri2o.png" alt="Image description" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2nd Question: How many (A) letters are there in the name (Anas)?&lt;/strong&gt;&lt;br&gt;
Some models struggle with such questions. In my local DeepSeek-R1 setup it took around ~80 sec to reason, think and answer correctly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwq5idke1t7e8dnvwuwc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwq5idke1t7e8dnvwuwc.png" alt="Image description" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is a snapshot from my Linux VM, with all the allocated CPUs going wild trying to run the model while answering the questions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwtvg6tukybi8i9146uy8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwtvg6tukybi8i9146uy8.png" alt="Image description" width="800" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy Learning!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Top Lightweight Open Source Kubernetes Distros for Learning and Testing</title>
      <dc:creator>Anas Alloush</dc:creator>
      <pubDate>Sun, 05 Jan 2025 14:31:59 +0000</pubDate>
      <link>https://dev.to/anas_alloush/top-lightweight-open-source-kubernetes-distros-for-learning-and-testing-57nb</link>
      <guid>https://dev.to/anas_alloush/top-lightweight-open-source-kubernetes-distros-for-learning-and-testing-57nb</guid>
      <description>&lt;p&gt;If you are a beginner looking to learn Kubernetes basics or an experienced developer, DevOps, SRE or a Platform engineer looking for a quick and “clean” setup to test new features or perform specific tests, choosing the right Kubernetes distribution is crucial.&lt;br&gt;
In this article, I’m listing the Best-in-Class free and open-source Kubernetes distributions available for learning and testing.&lt;br&gt;
Deploying K8s clusters via &lt;code&gt;kubeadm&lt;/code&gt;, is a good way to learn the outs &amp;amp; abouts of K8s itself, but it is also the hard way and the time-consuming way (+30 min for beginners to set-up a working cluster).&lt;br&gt;
Therefore, when I want to test something quickly, I just spin off a K8s cluster using one of the lightweight distributions mentioned in this article depending on the purpose and use-case I have in mind. Although for complex tests I have a fully automated &lt;code&gt;kubeadm&lt;/code&gt; based environment, but this is for another article.&lt;/p&gt;

&lt;p&gt;My criteria for choosing a K8s distribution for quick testing are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Time to set up a functioning cluster.&lt;/li&gt;
&lt;li&gt;Ease of setup and use.&lt;/li&gt;
&lt;li&gt;Resource requirements (CPU, RAM, Storage).&lt;/li&gt;
&lt;li&gt;Flexibility and customization options.&lt;/li&gt;
&lt;li&gt;Documentation Clarity&lt;/li&gt;
&lt;li&gt;Community Support.&lt;/li&gt;
&lt;li&gt;Suitability &amp;amp; Usage.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note1&lt;/strong&gt;: I included a “&lt;strong&gt;Quick Guide&lt;/strong&gt;” subsection in each K8s project section, but it is advisable to always check the official project installation page (link provided), as the installation steps tend to vary over time depending on the project’s evolution, your OS type and when you read this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note2&lt;/strong&gt;: You are advised to have a good read of the “known issues” section of each project, to avoid running in circles if you encounter an issue/bug/problem, before submitting a PR or asking ChatGPT ;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note3&lt;/strong&gt;: This is not an exhaustive list, nor an ordered best-to-worst kind of list. It is rather a randomly ordered list based on my own experience dealing with K8s almost daily for the past 6 years.&lt;/p&gt;


&lt;h2&gt;
  
  
  1- Minikube
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep911wps7e5ejblwzs7z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep911wps7e5ejblwzs7z.png" alt="Image description" width="370" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick glance:&lt;/strong&gt;&lt;br&gt;
A project maintained as part of the Kubernetes project under the Cloud Native Computing Foundation (CNCF), and considered the go-to K8s distro for absolute beginners.&lt;br&gt;
Minikube is a very lightweight K8s distribution designed to help developers and learners run a local Kubernetes cluster on their personal computers.&lt;br&gt;
The default setup creates a single-node Kubernetes cluster that combines the control-plane and worker roles in one instance. (Multi-node clusters can be configured as well.)&lt;br&gt;
Minikube creates a virtualized or containerized environment (depending on the driver used) that runs the Kubernetes components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time to Set Up:&lt;/strong&gt;&lt;br&gt;
~2–5 mins&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ease of setup and use:&lt;/strong&gt;&lt;br&gt;
Very straightforward and easy steps. Installation page &lt;a href="https://minikube.sigs.k8s.io/docs/start/?arch=%2Flinux%2Fx86-64%2Fstable%2Fbinary+download" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Setup Guide:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Install the latest stable Minikube release (assuming an x86-64 Linux machine and the Debian package):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -LO https://github.com/kubernetes/minikube/releases/latest/download/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt;Start your cluster&lt;br&gt;
From a terminal with administrator access (but not logged in as root), run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; Interact with your cluster&lt;br&gt;
Minikube downloads &lt;code&gt;kubectl&lt;/code&gt; for you if it is not already installed on your machine. Interact with your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube kubectl - get po -A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Resource Required &amp;amp; Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Minimum Resources: 2 CPUs, 2GB RAM, and ~4GB Storage.&lt;/p&gt;

&lt;p&gt;• As a prerequisite: Minikube requires a container or VM manager, such as Docker, QEMU, HyperKit, Hyper-V, KVM, Parallels, Podman, or VirtualBox&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flexibility &amp;amp; Customization Options:&lt;/strong&gt;&lt;br&gt;
Minikube Supports enabling add-ons that provide quick configurations for things like:&lt;br&gt;
• K8s dashboard&lt;br&gt;
• ingress&lt;br&gt;
• metrics-server&lt;br&gt;
• Loadbalancer&lt;br&gt;
• Persistent Volumes&lt;/p&gt;

&lt;p&gt;Such add-ons would otherwise take some time, and some K8s knowledge, to deploy &amp;amp; configure.&lt;br&gt;
It also supports all popular CNIs (Cilium, Calico, Flannel, etc.) and popular CRIs (Docker, containerd, and CRI-O)&lt;/p&gt;
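&lt;p&gt;As a quick sketch, enabling an add-on is a one-liner (add-on names are those reported by &lt;code&gt;minikube addons list&lt;/code&gt;; availability can vary by version):&lt;/p&gt;

```shell
# Show available add-ons and their current status
minikube addons list

# Enable the ingress controller and metrics-server add-ons
minikube addons enable ingress
minikube addons enable metrics-server

# Open the Kubernetes dashboard (prints a local URL)
minikube dashboard --url
```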

&lt;p&gt;&lt;strong&gt;Documentation:&lt;/strong&gt;&lt;br&gt;
Very user friendly, written in clear text and suitable for absolute beginners as it includes step-by-step guides for various use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Community Support:&lt;/strong&gt;&lt;br&gt;
• Large community support with +29.7k stars on GitHub&lt;br&gt;
• All created issues are actively followed and resolved by contributors.&lt;br&gt;
• Large collection of community-created tutorials and guides are available online in text and video forms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Suitability &amp;amp; Usage:&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Very useful for:&lt;/em&gt;&lt;br&gt;
• Spinning up a quick K8s cluster on your local machine&lt;br&gt;
• Learning the basics and following along with tutorials.&lt;br&gt;
• Testing small-scale and simple scenarios.&lt;br&gt;
• In general, it is highly recommended as a starting point for absolute beginners.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Not recommended for:&lt;/em&gt;&lt;br&gt;
• Complex networking scenarios&lt;br&gt;
• Performance testing or testing low-latency workloads&lt;br&gt;
• HA scenarios and load-balancing tests&lt;br&gt;
• Complex, large stateful applications&lt;/p&gt;


&lt;h2&gt;
  
  
  2- KinD (Kubernetes in Docker)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbp5zu3tsbn62g8eq27hv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbp5zu3tsbn62g8eq27hv.png" alt="Image description" width="395" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Glance:&lt;/strong&gt;&lt;br&gt;
KinD is a lightweight tool designed to run local Kubernetes clusters using Docker containers as “nodes”. This makes it simple to create multi-node clusters on a single machine without requiring virtualization or specialized hardware.&lt;br&gt;
Under the hood, KinD uses &lt;code&gt;kubeadm&lt;/code&gt; to bootstrap the Kubernetes cluster inside the containers, fully automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time to Set Up:&lt;/strong&gt;&lt;br&gt;
~5–10 min, depending on your internet connection speed, as creating a KinD cluster requires downloading multiple Docker images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ease of setup and use:&lt;/strong&gt;&lt;br&gt;
Very straightforward. Installation page &lt;a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Setup Guide:&lt;/strong&gt;&lt;br&gt;
Assuming you are using Linux AMD64 / x86_64:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt;Install the package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ $(uname -m) = x86_64 ] &amp;amp;&amp;amp; curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.26.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt;Create a Kubernetes cluster&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind create cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; Interact with the Cluster:&lt;br&gt;
By default, the cluster access configuration is stored in &lt;code&gt;${HOME}/.kube/config&lt;/code&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/em&gt;: KinD does not require &lt;code&gt;kubectl&lt;/code&gt; to interact with the cluster.&lt;br&gt;
If you want &lt;code&gt;kubectl&lt;/code&gt;, you need to install the &lt;code&gt;kubectl&lt;/code&gt; binaries yourself, as the KinD package does not install them for you. &lt;code&gt;kubectl&lt;/code&gt; installation page &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;br&gt;
All known project issues are clearly documented &lt;a href="https://kind.sigs.k8s.io/docs/user/known-issues" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
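&lt;p&gt;A minimal interaction sketch (assuming &lt;code&gt;kubectl&lt;/code&gt; is installed separately; KinD prefixes cluster contexts with &lt;code&gt;kind-&lt;/code&gt;):&lt;/p&gt;

```shell
# List clusters created by KinD (the default cluster is named "kind")
kind get clusters

# Point kubectl at the KinD cluster's context and inspect the nodes
kubectl cluster-info --context kind-kind
kubectl get nodes --context kind-kind
```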

&lt;p&gt;&lt;strong&gt;Resource Required &amp;amp; Prerequisites:&lt;/strong&gt;&lt;br&gt;
•Minimum Resources: 2 CPUs or more and 2GB of free memory, with ~4GB Storage. (Additional resources depend on the number of nodes.)&lt;br&gt;
•The only prerequisite to be installed is Docker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flexibility &amp;amp; Customization Options:&lt;/strong&gt;&lt;br&gt;
A KinD cluster can be fully defined via YAML files, making it highly customizable and portable.&lt;br&gt;
It supports all popular CNIs (Cilium, Calico, Flannel, etc.) and popular CRIs (Docker, containerd, and CRI-O).&lt;br&gt;
Note that there are no built-in add-ons like Minikube's to quickly deploy an Ingress controller or a LoadBalancer.&lt;br&gt;
KinD lowers the bar for some of these tasks, but you need to take care of the majority of the steps yourself.&lt;/p&gt;
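&lt;p&gt;A minimal example of such a YAML definition (field names per the &lt;code&gt;kind.x-k8s.io/v1alpha4&lt;/code&gt; API; the file name is arbitrary):&lt;/p&gt;

```yaml
# kind-config.yaml: one control-plane and two worker "nodes" (Docker containers)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

&lt;p&gt;Create a cluster from it with &lt;code&gt;kind create cluster --config kind-config.yaml&lt;/code&gt;&lt;/p&gt;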

&lt;p&gt;&lt;strong&gt;Documentation:&lt;/strong&gt;&lt;br&gt;
Clear, easy, and beginner-friendly documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Community support:&lt;/strong&gt;&lt;br&gt;
•Large community support with +13.7k stars on GitHub&lt;br&gt;
•All created issues are actively followed and resolved by contributors.&lt;br&gt;
•Also many community-created tutorials and guides are available online in text and video forms&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Suitability &amp;amp; Usage:&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Extremely useful for:&lt;/em&gt;&lt;br&gt;
• Testing CI/CD pipelines integration with K8s environment.&lt;br&gt;
• Testing multi-node Cluster configurations and HA scenarios.&lt;br&gt;
• Learning topics that are a bit above the basics&lt;br&gt;
&lt;em&gt;Not recommended for:&lt;/em&gt;&lt;br&gt;
• Networking Complexity: Advanced networking setups can be tricky to configure.&lt;br&gt;
• Performance testing or testing low latency workloads&lt;br&gt;
• Complex large Stateful applications&lt;br&gt;
• Production-Like environment deployments&lt;/p&gt;


&lt;h2&gt;
  
  
  3- K0s
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjiuwm3rejb0m55pamay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjiuwm3rejb0m55pamay.png" alt="Image description" width="364" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick glance:&lt;/strong&gt;&lt;br&gt;
Pronounced "kay-zero-ess", k0s is a zero-friction, lightweight Kubernetes distribution designed to simplify the deployment and management of Kubernetes clusters in environments where resource efficiency is critical.&lt;br&gt;
k0s bundles all the necessary Kubernetes components into a single binary. This binary includes the Kubernetes control-plane components (API server, scheduler, controller manager, etc.), the worker node components (kubelet, kube-proxy), and optional add-ons (e.g., CNI plugins, metrics server).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time to Set Up:&lt;/strong&gt;&lt;br&gt;
At ~2–3 min, K0s is one of the fastest Kubernetes distributions to set up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ease of setup and use:&lt;/strong&gt;&lt;br&gt;
Installation page &lt;a href="https://docs.k0sproject.io/stable/install/" rel="noopener noreferrer"&gt;here &lt;/a&gt;.. For a quick setup guide continue reading.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick setup Guide:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Download k0s&lt;br&gt;
Run the k0s download script to download the latest stable version of k0s and make it executable at &lt;code&gt;/usr/local/bin/k0s&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl - proto '=https' - tlsv1.2 -sSf https://get.k0s.sh | sudo sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt;Install k0s as a service&lt;br&gt;
Run the following command to install a single node k0s that includes the controller and worker functions with the default configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo k0s install controller - single
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; Start k0s as a service (and wait a couple of minutes)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo k0s start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4.&lt;/strong&gt;Access your cluster using kubectl (K0s installs it for you)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo k0s kubectl get nodes
NAME   STATUS   ROLES    AGE    VERSION
k0s    Ready    &amp;lt;none&amp;gt;   4m6s   v1.21.2-k0s1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note1:&lt;/em&gt; If you are running a multi-node setup, the previous command will not show the control-plane node; that does not mean your setup is broken.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note2:&lt;/em&gt; If you are not happy with the default installation method and would like to install K0s via &lt;code&gt;k0sctl&lt;/code&gt;, consider installing the &lt;code&gt;k0sctl&lt;/code&gt; tool on the jumphost machine, not on the machines that you want to run K8s on&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbzcwhx02dxk2r4n8neb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbzcwhx02dxk2r4n8neb.png" alt="Image description" width="542" height="225"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;k0sctl.yaml&lt;/code&gt; file example below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
name: k0s-cluster
spec:
hosts:
- role: controller
ssh:
address: 10.0.0.1 # replace with the controller's IP address
user: root
keyPath: ~/.ssh/id_rsa
- role: worker
ssh:
address: 10.0.0.2 # replace with the worker's IP address
user: root
keyPath: ~/.ssh/id_rsa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
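&lt;p&gt;With the &lt;code&gt;k0sctl.yaml&lt;/code&gt; in place, a typical flow from the jumphost looks like this (a sketch; verify the flags against your &lt;code&gt;k0sctl&lt;/code&gt; version):&lt;/p&gt;

```shell
# Deploy (or upgrade) the cluster described in k0sctl.yaml over SSH
k0sctl apply --config k0sctl.yaml

# Fetch an admin kubeconfig for the freshly deployed cluster
k0sctl kubeconfig --config k0sctl.yaml
```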



&lt;p&gt;Project known issues are documented &lt;a href="https://docs.k0sproject.io/stable/troubleshooting/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource Required &amp;amp; Prerequisites:&lt;/strong&gt;&lt;br&gt;
•Minimum Resources: 1 CPU, 1GB RAM, and ~2GB Storage per node for small clusters. (Additional resources depend on the number of nodes.)&lt;br&gt;
More detailed system requirements are explained in the official k0s documentation&lt;/p&gt;

&lt;p&gt;•As Prerequisite, k0s has ZERO external dependencies other than a compatible Linux OS. It does not require Docker or any other container runtime to be pre-installed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flexibility &amp;amp; Customization Options:&lt;/strong&gt;&lt;br&gt;
K0s is highly customizable. It lets users configure advanced networking, storage, and security settings, and enable or disable specific Kubernetes components.&lt;br&gt;
All of this can be done via the &lt;code&gt;k0sctl.yaml&lt;/code&gt; file and the &lt;code&gt;k0sctl&lt;/code&gt; tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation:&lt;/strong&gt;&lt;br&gt;
Beginner to Intermediate level of documentation clarity, some sections have excellent explanations with diagrams, others require some Pre-knowledge in some topics like storage or security.&lt;br&gt;
Also Includes guides for single-node, multi-node, and high-availability setups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Community support:&lt;/strong&gt;&lt;br&gt;
•The project is backed by Mirantis, a well-known Kubernetes contributor.&lt;br&gt;
•Their GitHub repo has +4.1k stars, which is relatively great considering the project's age.&lt;br&gt;
•K0s has fewer community-created tutorials compared to the previous options (KinD &amp;amp; Minikube), as it is the newest lightweight K8s distro of the bunch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Suitability &amp;amp; Usage:&lt;/strong&gt;&lt;br&gt;
K0s shines in edge and IoT deployments, as it is a very lightweight single binary.&lt;br&gt;
&lt;em&gt;Very recommended for:&lt;/em&gt;&lt;br&gt;
• Edge computing and IoT deployments&lt;br&gt;
• Air-gapped deployments&lt;br&gt;
• Testing CI/CD pipelines integration with K8s environment.&lt;br&gt;
• Testing multi-node Cluster configurations and HA scenarios.&lt;br&gt;
• Learning topics that are a bit above the basics&lt;br&gt;
&lt;em&gt;Not recommended for:&lt;/em&gt;&lt;br&gt;
• Networking Complexity: Advanced networking setups can be tricky to configure.&lt;br&gt;
• Performance testing or testing low latency workloads&lt;br&gt;
• Complex large Stateful applications&lt;br&gt;
• Big Production-Like environment deployments&lt;/p&gt;


&lt;h2&gt;
  
  
  4- MicroK8s
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk58yddjp0qd4a9r4cvpp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk58yddjp0qd4a9r4cvpp.png" alt="Image description" width="657" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick glance:&lt;/strong&gt;&lt;br&gt;
A very lightweight, minimalistic Kubernetes distribution developed by Canonical, the company behind Ubuntu.&lt;/p&gt;

&lt;p&gt;It packages all the essential components of Kubernetes into a single, easy-to-install package, allowing users to run a Kubernetes cluster on a single machine or across multiple nodes with minimal effort.&lt;/p&gt;

&lt;p&gt;It uses Dqlite (a lightweight distributed SQLite) for high availability in multi-node setups, reducing the resource overhead compared to traditional etcd-based clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time to Set Up:&lt;/strong&gt;&lt;br&gt;
~ 5–7 min for simple single Node clusters&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ease of setup and use:&lt;/strong&gt;&lt;br&gt;
Installation page &lt;a href="https://microk8s.io/docs/getting-started" rel="noopener noreferrer"&gt;here &lt;/a&gt;.. For a quick setup guide continue reading:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick setup Guide:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt;Install MicroK8s binaries&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo snap install microk8s - classic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt;Join MicroK8s group&lt;br&gt;
MicroK8s creates a Linux group to enable seamless usage of commands which require admin privilege. To add your current user to the group and gain access to the &lt;code&gt;.kube&lt;/code&gt; caching directory, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -a -G microk8s $USER
mkdir -p ~/.kube
chmod 0700 ~/.kube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Re-enter the session for the group update to take place:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;su - $USER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; Access your K8s cluster&lt;br&gt;
MicroK8s bundles its own version of &lt;code&gt;kubectl&lt;/code&gt; for accessing the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Resource Required &amp;amp; Prerequisites:&lt;/strong&gt;&lt;br&gt;
Minimum Resources: 2 CPUs, 4GB RAM, and ~2GB Storage per node for small clusters. (Additional resources depend on the number of nodes.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Prerequisites:&lt;/strong&gt;&lt;br&gt;
• An Ubuntu 22.04 LTS, 20.04 LTS, 18.04 LTS or 16.04 LTS environment to run the commands, or&lt;br&gt;
• another Linux distribution, as long as it supports &lt;code&gt;snapd&lt;/code&gt;&lt;br&gt;
• If you don’t have a Linux machine, you can use Multipass (see &lt;a href="https://microk8s.io/docs/install-multipass" rel="noopener noreferrer"&gt;Installing MicroK8s with Multipass&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flexibility &amp;amp; Customization Options:&lt;/strong&gt;&lt;br&gt;
•MicroK8s provides Add-ons like Helm, Istio, DNS, or MetalLB that can be enabled as needed.&lt;br&gt;
•It supports single-node and multi-node clusters, allowing users to test HA scenarios&lt;br&gt;
•Users can customize their clusters by enabling or disabling add-ons, configuring networking, and integrating with other tools and services.&lt;/p&gt;
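&lt;p&gt;As a sketch, add-ons are toggled by name (names per the MicroK8s docs; some add-ons take extra arguments):&lt;/p&gt;

```shell
# Wait until the node reports ready, then enable add-ons by name
microk8s status --wait-ready
microk8s enable dns
microk8s enable ingress

# Add-ons can be turned off just as easily
microk8s disable ingress
```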

&lt;p&gt;&lt;strong&gt;Documentation:&lt;/strong&gt;&lt;br&gt;
&lt;p&gt;•The project has comprehensive, well-organized documentation, making it easy for users to get started and troubleshoot issues. Absolute beginners may struggle a bit with certain pages, though, as a bit of Linux knowledge is required to customize or configure a MicroK8s cluster (which is a must anyway for anyone learning K8s)&lt;/p&gt;

&lt;p&gt;•Official documentation covers installation, add-ons, multi-node setups, and advanced configurations.&lt;br&gt;
•Tutorials and examples are provided for common use cases, such as deploying applications, enabling ingress, and setting up storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Community support:&lt;/strong&gt;&lt;br&gt;
•Large community support with +8.6K stars on GitHub&lt;br&gt;
•All created issues are actively followed and resolved by contributors.&lt;br&gt;
•Also many community-created tutorials and guides are available online in text and video forms&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Suitability &amp;amp; Usage:&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Very recommended for:&lt;/em&gt;&lt;br&gt;
• Edge computing and IoT deployments&lt;br&gt;
• Testing CI/CD pipelines integration with K8s environment.&lt;br&gt;
• Testing multi-node Cluster configurations and HA scenarios.&lt;br&gt;
• Learning topics for K8s beginners&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Not recommended for:&lt;/em&gt;&lt;br&gt;
• Big Production-Like environment deployments&lt;br&gt;
• Performance testing or testing low latency workloads&lt;br&gt;
• Complex large Stateful applications&lt;/p&gt;



&lt;h2&gt;
  
  
  5- K3s
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8y3wn8e3aashl8n8s7cx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8y3wn8e3aashl8n8s7cx.png" alt="Image description" width="199" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick glance:&lt;/strong&gt;&lt;br&gt;
K3s is developed and maintained by Rancher (now part of SUSE) and is ideal for edge, IoT, and resource-constrained environments.&lt;/p&gt;

&lt;p&gt;It is packaged as a single binary (~100 MB) that includes all necessary components (The API server, controller manager, scheduler, and kubelet)&lt;/p&gt;

&lt;p&gt;It is designed for running Kubernetes clusters on resource-constrained devices like Raspberry Pis, so it bundles lightweight alternatives like SQLite instead of etcd (though etcd, MySQL, and PostgreSQL are also supported for high-availability setups).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time to Set Up:&lt;/strong&gt;&lt;br&gt;
~ 5–7 min&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ease of setup and use:&lt;/strong&gt;&lt;br&gt;
K3s provides an installation script to install it as a service on &lt;code&gt;systemd&lt;/code&gt; or &lt;code&gt;openrc&lt;/code&gt; based systems.&lt;/p&gt;

&lt;p&gt;The K3s package includes tools like &lt;code&gt;kubectl&lt;/code&gt;, &lt;code&gt;crictl&lt;/code&gt;, and &lt;code&gt;ctr&lt;/code&gt; out of the box, reducing the need for additional installations.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note:&lt;/em&gt; It is not recommended to use snap-based Docker packages; the “known issues” section &lt;a href="https://docs.k3s.io/known-issues" rel="noopener noreferrer"&gt;here&lt;/a&gt; mentions this.&lt;/p&gt;

&lt;p&gt;Installation page &lt;a href="https://docs.k3s.io/quick-start" rel="noopener noreferrer"&gt;here&lt;/a&gt;. The default installation is summarized in the next section&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Setup Guide:&lt;/strong&gt;&lt;br&gt;
Get the installation script which installs (&lt;code&gt;kubectl&lt;/code&gt; , &lt;code&gt;crictl&lt;/code&gt;, &lt;code&gt;ctr&lt;/code&gt;, &lt;code&gt;k3s-killall.sh&lt;/code&gt;, and &lt;code&gt;k3s-uninstall.sh&lt;/code&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sfL https://get.k3s.io | sh -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A kubeconfig file will be written to &lt;code&gt;/etc/rancher/k3s/k3s.yaml&lt;/code&gt;, and the&lt;br&gt;
&lt;code&gt;kubectl&lt;/code&gt; tool installed by K3s will automatically use it.&lt;br&gt;
And you are READY TO GO&lt;/p&gt;
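&lt;p&gt;To use a separately installed &lt;code&gt;kubectl&lt;/code&gt; (or tools like Helm) with K3s, point it at that kubeconfig; a sketch:&lt;/p&gt;

```shell
# The bundled kubectl already knows about /etc/rancher/k3s/k3s.yaml
sudo k3s kubectl get nodes

# A separately installed kubectl needs KUBECONFIG pointed at the same file
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
```

&lt;p&gt;Note that the file is root-only by default; K3s's &lt;code&gt;--write-kubeconfig-mode&lt;/code&gt; option can relax that.&lt;/p&gt;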

&lt;p&gt;&lt;strong&gt;Resource Required &amp;amp; Prerequisites:&lt;/strong&gt;&lt;br&gt;
•Minimum Resources: 2 CPUs, 2GB RAM, with ~4GB Storage.&lt;br&gt;
•As a prerequisite, only Linux (e.g., openSUSE, Ubuntu, etc.) is needed&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flexibility &amp;amp; Customization Options:&lt;/strong&gt;&lt;br&gt;
•K3s is highly modular and customizable.&lt;br&gt;
•Users can customize components like the container runtime, networking, or storage to suit their needs.&lt;br&gt;
•Multi-node cluster mode is supported, but K3s is often used for single-node setups.&lt;br&gt;
•With a &lt;code&gt;config.yaml&lt;/code&gt; file created at &lt;code&gt;/etc/rancher/k3s/config.yaml&lt;/code&gt;, users can customize many deployment options, like the CNI, CRI, and storage options.&lt;/p&gt;
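&lt;p&gt;A hedged example of such a &lt;code&gt;config.yaml&lt;/code&gt; (keys mirror the &lt;code&gt;k3s server&lt;/code&gt; CLI flags; check the K3s docs for your version):&lt;/p&gt;

```yaml
# /etc/rancher/k3s/config.yaml
write-kubeconfig-mode: "0644"   # make the kubeconfig readable without sudo
disable:
  - traefik                     # skip the bundled ingress controller
node-label:
  - "env=dev"
```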

&lt;p&gt;&lt;strong&gt;Documentation:&lt;/strong&gt;&lt;br&gt;
K3s has well-organized and easy-to-follow documentation, with many explanations, FAQs, and examples for common use cases, such as deploying applications or setting up multi-node clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Suitability &amp;amp; Usage:&lt;/strong&gt;&lt;br&gt;
K3s is best known for its use on Raspberry Pis for edge and IoT use cases where resources are limited.&lt;br&gt;
&lt;em&gt;Very recommended for:&lt;/em&gt;&lt;br&gt;
• Edge computing and IoT deployments.&lt;br&gt;
• Air-gapped deployments.&lt;br&gt;
• Testing CI/CD pipelines integration with K8s environment.&lt;br&gt;
• Learning topics that are a bit above the basics.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Not recommended for:&lt;/em&gt;&lt;br&gt;
• Big Production-Like environment deployments.&lt;br&gt;
• Testing multi-node Cluster configurations and HA scenarios.&lt;br&gt;
• Networking Complexity: Advanced networking setups can be tricky to configure.&lt;br&gt;
• Performance testing or testing low latency workloads.&lt;br&gt;
• Complex large Stateful applications.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Happy Learning!&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Anas Alloush&lt;br&gt;
Telco Cloud Specialist&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
