<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ali Shaikh</title>
    <description>The latest articles on DEV Community by Ali Shaikh (@alishaikh).</description>
    <link>https://dev.to/alishaikh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F101994%2F889627d8-471f-4871-95f6-55b2baab77cb.png</url>
      <title>DEV Community: Ali Shaikh</title>
      <link>https://dev.to/alishaikh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alishaikh"/>
    <language>en</language>
    <item>
      <title>Secure Self-Hosted OpenClaw AI Assistant: Step-by-Step Proxmox Template Guide with Tailscale Integration</title>
      <dc:creator>Ali Shaikh</dc:creator>
      <pubDate>Sun, 01 Feb 2026 16:15:50 +0000</pubDate>
      <link>https://dev.to/alishaikh/secure-self-hosted-openclaw-ai-assistant-step-by-step-proxmox-template-guide-with-tailscale-ghj</link>
      <guid>https://dev.to/alishaikh/secure-self-hosted-openclaw-ai-assistant-step-by-step-proxmox-template-guide-with-tailscale-ghj</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16jx2yuvorw6t5cf0mwe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16jx2yuvorw6t5cf0mwe.jpg" alt="Secure Self-Hosted OpenClaw AI Assistant: Step-by-Step Proxmox Template Guide with Tailscale Integration" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the last few weeks, you could not escape seeing "Clawd" everywhere - the viral AI assistant that went from zero to 100,000 GitHub stars in just two months. Originally called Clawdbot, now rebranded as OpenClaw, it's an open-source AI that actually does things instead of just chatting. So this weekend, I set it up on my Proxmox homelab.&lt;/p&gt;

&lt;p&gt;Now I have a digital assistant that I can communicate with through WhatsApp and Telegram. I asked it to choose its own name and persona, and now it is called Lyra. I gave it a dedicated email and GitHub account.&lt;/p&gt;

&lt;p&gt;This guide walks through creating a reusable Proxmox template with all dependencies pre-installed. Clone the template, boot it, configure OpenClaw and Tailscale, and you're done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you get:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Debian 13 VM template with Node.js 24, OpenClaw CLI, and Tailscale pre-installed&lt;/li&gt;
&lt;li&gt;Automatic installation via cloud-init (no manual package installs)&lt;/li&gt;
&lt;li&gt;Secure HTTPS access via Tailscale Serve (no public exposure)&lt;/li&gt;
&lt;li&gt;Progress indicators and status helpers baked in&lt;/li&gt;
&lt;li&gt;~15 minutes from template creation to running AI assistant&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/Ali-Shaikh/proxmox-toolbox/blob/main/templates/openclaw/debian-13-openclaw-ready-template.sh?ref=alishaikh.me" rel="noopener noreferrer"&gt;OpenClaw Proxmox Template Script&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What is OpenClaw?
&lt;/h2&gt;

&lt;p&gt;OpenClaw is an autonomous AI assistant that actually executes tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Controls browsers (fill forms, scrape data, automate workflows)&lt;/li&gt;
&lt;li&gt;Manages files on your system&lt;/li&gt;
&lt;li&gt;Integrates with WhatsApp, Telegram, Discord&lt;/li&gt;
&lt;li&gt;Runs terminal commands&lt;/li&gt;
&lt;li&gt;All through a web UI or chat interfaces&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it as the plumbing layer that connects AI models (Claude, GPT-4o, Gemini) to real-world actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The catch:&lt;/strong&gt; It's powerful, which means it needs proper isolation. Hence the dedicated VM approach.&lt;/p&gt;




&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Your Browser
    │
    │ HTTPS (Tailscale Serve)
    ▼
OpenClaw VM (Proxmox)
    │
    ├─ Tailscale Serve → http://127.0.0.1:18789
    │
    ├─ OpenClaw Gateway
    │ ├─ Control UI
    │ └─ AI agent + channels
    │
    └─ Internet (outbound only)
        └─ WhatsApp/Telegram/AI APIs

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key design:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gateway binds to loopback (127.0.0.1) only&lt;/li&gt;
&lt;li&gt;Tailscale Serve proxies HTTPS from your Tailnet&lt;/li&gt;
&lt;li&gt;No public exposure, no firewall rules needed&lt;/li&gt;
&lt;li&gt;Messaging platforms connect over regular internet&lt;/li&gt;
&lt;li&gt;The VM is reachable only from devices on your Tailnet, keeping access private by default&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Proxmox VE 8.x (ensure it's updated: &lt;code&gt;apt update &amp;amp;&amp;amp; apt full-upgrade&lt;/code&gt; via SSH or web console)&lt;/li&gt;
&lt;li&gt;At least 4GB RAM and 20GB storage available per VM (adjust based on workload)&lt;/li&gt;
&lt;li&gt;SSH access to Proxmox (or use the web console as a fallback)&lt;/li&gt;
&lt;li&gt;A Tailscale account (free tier works)&lt;/li&gt;
&lt;li&gt;Your SSH public key&lt;/li&gt;
&lt;li&gt;Cloud-init support (built into Proxmox VE; the script attaches a cloud-init drive to the VM automatically)&lt;/li&gt;
&lt;/ul&gt;
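&lt;p&gt;The checklist above can be sanity-checked with a quick pre-flight script on the Proxmox host (a minimal sketch; the &lt;code&gt;check_tools&lt;/code&gt; helper is made up for illustration, and the tool list just mirrors what this guide uses):&lt;/p&gt;

```shell
#!/bin/sh
# Pre-flight sketch: verify the tools this guide relies on are on the PATH.
check_tools() {
  missing=0
  for tool in "$@"; do
    if command -v "$tool" >/dev/null; then
      echo "ok:      $tool"
    else
      echo "missing: $tool"
      missing=1
    fi
  done
  return "$missing"
}

# Tools used by the template workflow (qm/pvesm ship with Proxmox VE)
check_tools qm pvesm wget openssl jq || echo "Install the missing tools first."
```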

&lt;p&gt;&lt;strong&gt;Quick Tip:&lt;/strong&gt; If you're new to Tailscale, start with their quickstart guide: &lt;a href="https://tailscale.com/kb/1017/install?ref=alishaikh.me" rel="noopener noreferrer"&gt;Tailscale Docs&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 1: Create the Template
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Download the Script
&lt;/h3&gt;

&lt;p&gt;SSH into Proxmox and download the script directly to avoid copy-paste errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh root@your-proxmox-host
wget https://raw.githubusercontent.com/Ali-Shaikh/proxmox-toolbox/main/templates/openclaw/debian-13-openclaw-ready-template.sh -O debian-13-openclaw-ready-template.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure It
&lt;/h3&gt;

&lt;p&gt;Open the script and modify these values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano debian-13-openclaw-ready-template.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Change these:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;VMID=10010 # Template ID (pick any unused ID; check with `qm list`)
STORAGE="local-lvm" # Your Proxmox storage pool (verify with `pvesm list`)
MEMORY=4096 # 4GB RAM minimum
CORES=2 # CPU cores

CI_USER="debian"
CI_PASSWORD="$(openssl passwd -6 $(pwgen -s 16 1))" # Generate a strong password with pwgen or similar; install pwgen if needed: apt install pwgen

# Replace with YOUR SSH public key
echo "ssh-rsa AAAA... your-email@example.com" &amp;gt; ~/ssh.pub

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
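&lt;p&gt;If &lt;code&gt;pwgen&lt;/code&gt; isn't installed, &lt;code&gt;openssl rand&lt;/code&gt; works as a stand-in for generating the cloud-init password. This sketch (assumption: any strong random string is acceptable input) also sanity-checks the resulting SHA-512 crypt hash:&lt;/p&gt;

```shell
# Generate a random 16-char password and its SHA-512 crypt hash for CI_PASSWORD.
# openssl rand is used instead of pwgen so no extra package is needed.
PLAIN=$(openssl rand -base64 16 | tr -d '=+/' | cut -c1-16)
HASH=$(openssl passwd -6 "$PLAIN")

# SHA-512 crypt hashes always start with the $6$ prefix
case "$HASH" in
  '$6$'*) echo "hash looks valid" ;;
  *)      echo "unexpected hash format" ;;
esac

echo "Record the plaintext somewhere safe: $PLAIN"
```

&lt;p&gt;Paste the resulting hash into &lt;code&gt;CI_PASSWORD&lt;/code&gt; and keep the plaintext in your password manager.&lt;/p&gt;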



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Homebrew (Linuxbrew) is installed via cloud-init for certain dependencies; it's not native to Debian but works fine for this setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Run It
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x debian-13-openclaw-ready-template.sh
./debian-13-openclaw-ready-template.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download Debian 13 cloud image&lt;/li&gt;
&lt;li&gt;Create a VM with UEFI support&lt;/li&gt;
&lt;li&gt;Configure cloud-init to install Node.js, OpenClaw, Tailscale, pnpm, Homebrew&lt;/li&gt;
&lt;li&gt;Convert it to a template&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Time:&lt;/strong&gt; ~2-3 minutes&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 2: Deploy a VM
&lt;/h2&gt;

&lt;p&gt;Clone the template (replace IDs if needed; check for conflicts with &lt;code&gt;qm list&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;qm clone 10010 201 --name openclaw-prod --full
qm start 201

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Wait 5-7 minutes.&lt;/strong&gt; Cloud-init is installing everything. You can watch progress:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get the VM's IP (once it boots)
qm guest cmd 201 network-get-interfaces

# SSH in and watch the install
ssh debian@&amp;lt;vm-ip&amp;gt;
sudo tail -f /var/log/openclaw-install.log

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
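&lt;p&gt;If you'd rather script the wait than tail logs, a generic retry helper can poll until the guest agent answers (a sketch; only the commented-out &lt;code&gt;qm&lt;/code&gt; line comes from this guide):&lt;/p&gt;

```shell
# Retry helper sketch: run a command up to N times with a fixed delay between tries.
retry() {
  attempts=$1
  delay=$2
  shift 2
  n=1
  while ! "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      return 1
    fi
    n=$((n + 1))
    sleep "$delay"
  done
  return 0
}

# Example (on the Proxmox host): wait up to ~5 minutes for the guest agent
# retry 30 10 qm guest cmd 201 network-get-interfaces
```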



&lt;p&gt;When you see "Installation Complete", you're ready.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verify Installation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Quick check
check-install-status

# Or check versions manually
cat /root/install-versions.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see Node.js 24, npm, pnpm, OpenClaw, Tailscale, and Homebrew all installed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 3: Configure OpenClaw
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Connect to Tailscale
&lt;/h3&gt;

&lt;p&gt;Get an auth key from &lt;a href="https://login.tailscale.com/admin/settings/keys?ref=alishaikh.me" rel="noopener noreferrer"&gt;https://login.tailscale.com/admin/settings/keys&lt;/a&gt; (set to expire in 30-90 days for security).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tailscale up --authkey=tskey-auth-YOUR_KEY_HERE
tailscale status # Verify connection

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Set Up pnpm (Required for Skills)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pnpm setup
source ~/.bashrc

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Run OpenClaw Onboarding
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw onboard --install-daemon

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Critical configuration choices:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Question&lt;/th&gt;
&lt;th&gt;Answer&lt;/th&gt;
&lt;th&gt;Why?&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Setup type&lt;/td&gt;
&lt;td&gt;Local gateway (this machine)&lt;/td&gt;
&lt;td&gt;Keeps everything self-contained.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bind address&lt;/td&gt;
&lt;td&gt;Loopback (127.0.0.1) - &lt;strong&gt;MUST BE LOOPBACK&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Prevents direct exposure to LAN or internet; required for secure Tailscale proxying.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Authentication&lt;/td&gt;
&lt;td&gt;Token (Recommended)&lt;/td&gt;
&lt;td&gt;A shared token is easy to rotate and doesn't rely on network identity alone.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Expose via&lt;/td&gt;
&lt;td&gt;Tailscale Serve&lt;/td&gt;
&lt;td&gt;Secure Tailnet-only access without public exposure.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Never select "LAN (0.0.0.0)" for bind address&lt;/li&gt;
&lt;li&gt;Do not select "Funnel" (exposes to public internet)&lt;/li&gt;
&lt;li&gt;If you get a Tailscale binary warning, choose "No" and continue&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Save Your Token
&lt;/h3&gt;

&lt;p&gt;The wizard will display your gateway token. Save it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat ~/.openclaw/openclaw.json | jq -r '.gateway.auth.token'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output: &lt;code&gt;claw_abc123def456...&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Write this down.&lt;/strong&gt; You'll need it to access the UI.&lt;/p&gt;
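&lt;p&gt;To keep the token out of your shell history and clipboard, a small helper (hypothetical; the config path and JSON layout are the ones shown in this guide, and &lt;code&gt;save_gateway_token&lt;/code&gt; is not an OpenClaw command) can copy it into a permission-restricted file:&lt;/p&gt;

```shell
# Hypothetical helper: extract the gateway token from openclaw.json and
# save it to a file readable only by the current user.
save_gateway_token() {
  config=$1
  out=$2
  if [ ! -f "$config" ]; then
    echo "config not found: $config"
    return 1
  fi
  token=$(jq -r '.gateway.auth.token // empty' "$config")
  if [ -z "$token" ]; then
    echo "no token in $config"
    return 1
  fi
  printf '%s\n' "$token" > "$out"
  chmod 600 "$out"
  echo "token saved to $out"
}

save_gateway_token "$HOME/.openclaw/openclaw.json" "$HOME/.openclaw/gateway-token.txt" || true
```

&lt;p&gt;Later, read it back with &lt;code&gt;cat ~/.openclaw/gateway-token.txt&lt;/code&gt;.&lt;/p&gt;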

&lt;h3&gt;
  
  
  Verify Tailscale Serve
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tailscale serve status

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://openclaw-host.tail-scale.ts.net/ (Tailscale Serve)
|-- / proxy http://127.0.0.1:18789

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If empty, configure manually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tailscale serve --bg --https=443 http://127.0.0.1:18789

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Check Gateway Status
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw status
openclaw health
openclaw gateway logs # Watch for errors

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Part 4: Access the UI
&lt;/h2&gt;

&lt;p&gt;From any device on your Tailscale network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get all access info in one command
openclaw-access-info

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows your Tailscale URL and gateway token. Open the URL in your browser.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; &lt;code&gt;https://openclaw-host.tail-scale.ts.net/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Enter your gateway token when prompted. If &lt;code&gt;"allowTailscale": true&lt;/code&gt; is enabled in your config, you'll be auto-authenticated - see Configuration File for trade-offs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Based on common troubleshooting (though not explicitly in OpenClaw docs), when using Tailscale Serve, you may need to add &lt;code&gt;"controlUi": { "allowInsecureAuth": true }&lt;/code&gt; to your config file to avoid "pairing required" WebSocket errors. This skips device pairing for compatibility but reduces security slightly; use only if needed and combine with strong token auth.&lt;/p&gt;




&lt;h2&gt;
  
  
  Configuration File
&lt;/h2&gt;

&lt;p&gt;Your OpenClaw config lives at &lt;code&gt;~/.openclaw/openclaw.json&lt;/code&gt; (the &lt;code&gt;//&lt;/code&gt; comments below are annotations only - strict JSON does not allow comments, so omit them in your actual file):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "gateway": {
    "port": 18789,
    "bind": "127.0.0.1",
    "tailscale": {
      "mode": "serve"
    },
    "auth": {
      "mode": "token",
      "token": "your-gateway-token-here",
      "allowTailscale": false // Set to true for auto-auth via Tailscale identity (convenience trade-off: trust your Tailnet fully or use token-only for stricter security)
    },
    "controlUi": {
      "allowInsecureAuth": true // May be needed for Tailscale Serve to fix WebSocket errors; skips device pairing (use with caution)
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key settings:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;bind: "127.0.0.1"&lt;/code&gt; - Gateway only listens on loopback (required for Tailscale Serve)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;tailscale.mode: "serve"&lt;/code&gt; - Tailnet-only access&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;auth.allowTailscale: false&lt;/code&gt; - Prefer token auth; enable &lt;code&gt;true&lt;/code&gt; only if you fully trust your Tailnet (trade-off: easier access vs. potential risk if Tailnet is compromised)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;controlUi.allowInsecureAuth: true&lt;/code&gt; - &lt;strong&gt;May be needed for Tailscale Serve&lt;/strong&gt; - Skips device pairing to prevent "pairing required" WebSocket errors (trade-off: fixes compatibility but weakens pairing-based security; not explicitly required in OpenClaw docs but helps with common issues)&lt;/li&gt;
&lt;/ul&gt;
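&lt;p&gt;These settings can be audited mechanically. The sketch below is not an official OpenClaw tool; the field names simply match the sample config above, and it warns about the risky combinations:&lt;/p&gt;

```shell
# Audit sketch: flag risky gateway settings in an OpenClaw config file.
audit_config() {
  cfg=$1
  bind=$(jq -r '.gateway.bind // ""' "$cfg" 2>/dev/null)
  tailauth=$(jq -r '.gateway.auth.allowTailscale // false' "$cfg" 2>/dev/null)
  insecure=$(jq -r '.gateway.controlUi.allowInsecureAuth // false' "$cfg" 2>/dev/null)

  if [ "$bind" != "127.0.0.1" ]; then
    echo "WARN: gateway bind is '$bind' (expected 127.0.0.1)"
  fi
  if [ "$tailauth" = "true" ]; then
    echo "NOTE: allowTailscale=true trusts every device on your Tailnet"
  fi
  if [ "$insecure" = "true" ]; then
    echo "NOTE: allowInsecureAuth=true skips device pairing"
  fi
  echo "audit done"
}

audit_config "$HOME/.openclaw/openclaw.json"
```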

&lt;p&gt;View your config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat ~/.openclaw/openclaw.json | jq

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Add Chat Integrations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  WhatsApp
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw channels login
# Scan QR code with WhatsApp &amp;gt; Settings &amp;gt; Linked Devices (note: WhatsApp limits linked devices; check your account limits)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Telegram
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw configure --section channels.telegram
# Add bot token from @BotFather

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Discord
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw configure --section channels.discord
# Add bot token from Discord Developer Portal

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; For LLM integrations (e.g., Claude, GPT-4o keys), add them securely via &lt;code&gt;openclaw configure --section models&lt;/code&gt;. As a best practice, store keys in environment variables or a secrets manager.&lt;/p&gt;




&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Tailscale Serve Not Working
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Check Tailscale status:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tailscale status
sudo systemctl status tailscaled

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Manually configure Serve:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tailscale serve --bg --https=443 http://127.0.0.1:18789
tailscale serve status

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Ensure HTTPS is enabled for your Tailnet:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visit &lt;a href="https://login.tailscale.com/admin/dns?ref=alishaikh.me" rel="noopener noreferrer"&gt;https://login.tailscale.com/admin/dns&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Enable HTTPS if prompted&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Can't Access from Remote Device
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Verify both devices are on Tailscale:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tailscale status # Run on both devices

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Test connectivity:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ping openclaw-host
curl -k https://openclaw-host.tail-scale.ts.net/health

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  WebSocket Errors: "pairing required" (1008)
&lt;/h3&gt;

&lt;p&gt;Edit your config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano ~/.openclaw/openclaw.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add under the &lt;code&gt;gateway&lt;/code&gt; section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"controlUi": {
  "allowInsecureAuth": true
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Restart the gateway:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl --user restart openclaw-gateway

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This skips device pairing for Tailscale compatibility.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation Seems Stuck
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check cloud-init progress
sudo tail -100 /var/log/cloud-init-output.log

# Check OpenClaw install progress
sudo tail -100 /var/log/openclaw-install.log

# Current step
cat /var/run/openclaw-install-progress

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  pnpm Global Bin Error
&lt;/h3&gt;

&lt;p&gt;If you see &lt;code&gt;ERR_PNPM_NO_GLOBAL_BIN_DIR&lt;/code&gt; when installing skills:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pnpm setup
source ~/.bashrc
openclaw onboard --install-daemon # Retry

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Proxmox-Specific Issues
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;VM won't boot: Check UEFI settings in Proxmox VM hardware tab; ensure BIOS is set to OVMF (UEFI).&lt;/li&gt;
&lt;li&gt;No IP assigned: Verify DHCP in your Proxmox network bridge; restart VM if needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Reset Everything
&lt;/h3&gt;

&lt;p&gt;Start over if needed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw gateway stop
rm -rf ~/.openclaw/
openclaw onboard --install-daemon

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Diagnostic Commands
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw doctor # Comprehensive health check
openclaw status --all # Deep status
openclaw health # Quick health probe
openclaw gateway logs -f # Follow gateway logs

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Security Considerations
&lt;/h2&gt;

&lt;p&gt;OpenClaw can execute commands and access your system. Here's how to secure it:&lt;/p&gt;

&lt;h3&gt;
  
  
  Gateway Security
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Loopback binding only&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "gateway": { "bind": "127.0.0.1" } }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Token authentication&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "gateway": {
    "auth": {
      "mode": "token",
      "allowTailscale": false // Token-only is stricter; enable true only if Tailnet is highly trusted
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Rotate tokens regularly (every 90 days)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw configure --section gateway.auth

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Monitor access logs&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw gateway logs | grep -i "error\|unauthorised\|failed"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Tailscale Security
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use Serve (Tailnet-only), not Funnel (public).&lt;/li&gt;
&lt;li&gt;Generate auth keys with 30-90 day expiration; rotate them.&lt;/li&gt;
&lt;li&gt;Enable a host firewall (e.g., UFW: &lt;code&gt;sudo ufw allow in on tailscale0&lt;/code&gt;, allow SSH if you need it, then &lt;code&gt;sudo ufw enable&lt;/code&gt;) so inbound traffic is limited to your Tailnet.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  System Security
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Regular updates:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt upgrade -y
openclaw update

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Consider automating updates via cron (&lt;code&gt;crontab -e&lt;/code&gt;, then add &lt;code&gt;0 2 * * * /usr/bin/apt update &amp;amp;&amp;amp; /usr/bin/apt upgrade -y&lt;/code&gt;) or, more robustly, Debian's &lt;code&gt;unattended-upgrades&lt;/code&gt; package.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backups:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Take Proxmox snapshots before changes: &lt;code&gt;qm snapshot 201 pre-config --description 'Before updates'&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Also, backup config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -czf openclaw-backup-$(date +%Y%m%d).tar.gz ~/.openclaw/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
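&lt;p&gt;To stop archives piling up, a simple rotation sketch (the naming follows the &lt;code&gt;tar&lt;/code&gt; command above; the &lt;code&gt;rotate_backups&lt;/code&gt; helper itself is made up) keeps only the newest few backups:&lt;/p&gt;

```shell
# Backup rotation sketch: keep the newest N openclaw-backup archives in a directory.
rotate_backups() {
  dir=$1
  keep=$2
  # List newest-first, then delete everything past the first $keep entries.
  ls -1t "$dir"/openclaw-backup-*.tar.gz 2>/dev/null | tail -n +"$((keep + 1))" | while read -r old; do
    rm -f -- "$old"
    echo "removed $old"
  done
}

# Example: keep the five most recent backups in $HOME
rotate_backups "$HOME" 5
```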



&lt;p&gt;&lt;strong&gt;Best Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use token authentication&lt;/li&gt;
&lt;li&gt;Enable Tailscale Serve (not Funnel)&lt;/li&gt;
&lt;li&gt;Regular backups and updates&lt;/li&gt;
&lt;li&gt;Monitor logs for issues&lt;/li&gt;
&lt;li&gt;Rotate credentials periodically&lt;/li&gt;
&lt;li&gt;Never expose gateway on 0.0.0.0&lt;/li&gt;
&lt;li&gt;Never commit tokens to git&lt;/li&gt;
&lt;li&gt;Install monitoring tools like Fail2Ban for SSH: &lt;code&gt;sudo apt install fail2ban&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;For least privilege (e.g., running OpenClaw as a non-root user), we'll cover hardening in depth in a future article.&lt;/li&gt;
&lt;li&gt;Consider sandboxing with Docker inside the VM for extra isolation.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Testing and Validation
&lt;/h2&gt;

&lt;p&gt;After setup, run a smoke test:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Access the UI via Tailscale URL.&lt;/li&gt;
&lt;li&gt;Send a test message via WhatsApp/Telegram: "Hello, what's the weather in Dubai?" (assuming LLM integration).&lt;/li&gt;
&lt;li&gt;Verify browser control: Ask it to open a safe site and scrape non-sensitive data.&lt;/li&gt;
&lt;li&gt;Check logs for errors: &lt;code&gt;openclaw gateway logs&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If issues arise, use the troubleshooting section.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Template Script (Full Code)
&lt;/h2&gt;

&lt;p&gt;Here is the full code for reference (you can copy-paste this into a file if the wget fails, but using the raw URL is recommended):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# ===== PRODUCTION-READY OPENCLAW PROXMOX TEMPLATE =====
# This script creates a Debian 13 VM template with OpenClaw pre-installed
#
# Author: Ali Shaikh &amp;lt;alishaikh.me&amp;gt;
#
# Features:
# - Cloud-init progress indication
# - Login warnings during initialization
# - Automatic pnpm setup
# - Status check commands
# - Access info helper command
# - Proper error handling
#
# ===== CONFIGURABLE PARAMETERS =====

VMID=10010
VM_NAME="debian-13-openclaw-ready-template"
STORAGE="local-lvm"
MEMORY=4096
SOCKETS=1
CORES=2
DISK_ADDITIONAL_SIZE="+15G"

IMAGE_URL="https://cloud.debian.org/images/cloud/trixie/latest/debian-13-generic-amd64.qcow2"
IMAGE_PATH="/var/lib/vz/template/iso/$(basename ${IMAGE_URL})"

CI_USER="debian"
CI_PASSWORD="$(openssl passwd -6 debian)" # For production, use a stronger password

# Replace with your actual SSH public key
echo "ssh-rsa " &amp;gt; ~/ssh.pub

# Download Debian 13 Cloud Image
if [ ! -f "${IMAGE_PATH}" ]; then
  echo "Downloading Debian 13 Cloud Image..."
  wget -P /var/lib/vz/template/iso/ "${IMAGE_URL}"
else
  echo "Image already exists at ${IMAGE_PATH}"
fi

set -x

# Destroy existing VM if it exists
qm destroy $VMID 2&amp;gt;/dev/null || true

# Create VM
qm create ${VMID} \
  --name "${VM_NAME}" \
  --ostype l26 \
  --memory ${MEMORY} \
  --cores ${CORES} \
  --agent 1 \
  --bios ovmf --machine q35 --efidisk0 ${STORAGE}:0,pre-enrolled-keys=0 \
  --cpu host --sockets ${SOCKETS} \
  --vga serial0 --serial0 socket \
  --net0 virtio,bridge=vmbr0

# Import disk
qm importdisk ${VMID} "${IMAGE_PATH}" ${STORAGE}
qm set $VMID --scsihw virtio-scsi-pci --virtio0 $STORAGE:vm-${VMID}-disk-1,discard=on
qm resize ${VMID} virtio0 ${DISK_ADDITIONAL_SIZE}
qm set $VMID --boot order=virtio0
qm set $VMID --scsi1 $STORAGE:cloudinit

# Create snippets directory
mkdir -p /var/lib/vz/snippets/

# Create Cloud-Init configuration with proper YAML formatting
cat &amp;lt;&amp;lt; 'CLOUDCONFIG' | tee /var/lib/vz/snippets/debian13_openclaw_ready.yaml
#cloud-config

package_update: true
package_upgrade: true

packages:
  - curl
  - wget
  - git
  - build-essential
  - procps
  - file
  - ca-certificates
  - gnupg
  - qemu-guest-agent
  - htop
  - vim
  - tmux
  - jq
  - cron

write_files:
  # Login status checker - runs on every login to show progress
  - path: /etc/profile.d/cloud-init-status.sh
    permissions: '0755'
    owner: root:root
    content: |
      #!/bin/bash
      # Check cloud-init and installation status on login

      RED='\033[0;31m'
      GREEN='\033[0;32m'
      YELLOW='\033[1;33m'
      BLUE='\033[0;34m'
      NC='\033[0m' # No Color

      # Check if cloud-init is still running
      if cloud-init status 2&amp;gt;/dev/null | grep -q "running"; then
        echo ""
        echo -e "${YELLOW}========================================${NC}"
        echo -e "${YELLOW} SYSTEM INITIALISATION IN PROGRESS${NC}"
        echo -e "${YELLOW}========================================${NC}"
        echo ""
        echo -e "${BLUE}Cloud-init is still running. Please wait...${NC}"
        echo ""
        echo "Monitor progress:"
        echo " sudo tail -f /var/log/cloud-init-output.log"
        echo " sudo tail -f /var/log/openclaw-install.log"
        echo ""
        echo "Check status:"
        echo " cloud-init status"
        echo " cat /var/run/openclaw-install-progress"
        echo ""
        echo -e "${YELLOW}Some commands may not be available yet!${NC}"
        echo ""
      elif [ -f /var/run/openclaw-install-progress ]; then
        PROGRESS=$(cat /var/run/openclaw-install-progress 2&gt;/dev/null)
        if [ "$PROGRESS" != "COMPLETE" ]; then
          echo ""
          echo -e "${YELLOW}Installation in progress: $PROGRESS${NC}"
          echo "Run: sudo tail -f /var/log/openclaw-install.log"
          echo ""
        fi
      elif [ -f /root/.openclaw-ready ]; then
        # Only show ready message once per session
        if [ -z "$OPENCLAW_READY_SHOWN" ]; then
          echo ""
          echo -e "${GREEN}System is ready! Run: setup-openclaw.sh${NC}"
          echo ""
          export OPENCLAW_READY_SHOWN=1
        fi
      fi

  - path: /usr/local/bin/openclaw-access-info
    permissions: '0755'
    owner: root:root
    content: |
      #!/bin/bash
      echo "==========================================="
      echo "OpenClaw Access Information"
      echo "==========================================="
      echo ""
      echo "Tailscale Serve URL:"
      tailscale serve status 2&amp;gt;/dev/null | grep -o 'https://[^]*' || echo " Not configured - run: openclaw onboard --install-daemon"
      echo ""
      if [ -f ~/.openclaw/openclaw.json ]; then
        echo "Gateway Token:"
        cat ~/.openclaw/openclaw.json | jq -r '.gateway.auth.token' 2&amp;gt;/dev/null || echo " Not found"
        echo ""
        echo "Allow Tailscale Auth:"
        cat ~/.openclaw/openclaw.json | jq -r '.gateway.auth.allowTailscale' 2&amp;gt;/dev/null || echo " Not configured"
      else
        echo "OpenClaw not configured yet"
        echo "Run: openclaw onboard --install-daemon"
      fi
      echo "==========================================="

  - path: /usr/local/bin/setup-openclaw.sh
    permissions: '0755'
    owner: root:root
    content: |
      #!/bin/bash
      echo "======================================"
      echo "OpenClaw Setup Helper"
      echo "======================================"
      echo ""
      echo "Installed versions:"
      echo " Node.js: $(node --version 2&amp;gt;/dev/null || echo 'Not found')"
      echo " npm: $(npm --version 2&amp;gt;/dev/null || echo 'Not found')"
      echo " pnpm: $(pnpm --version 2&amp;gt;/dev/null || echo 'Not installed')"
      echo " OpenClaw: $(openclaw --version 2&amp;gt;/dev/null || echo 'Not installed - run: npm install -g openclaw@latest')"
      echo " Tailscale: $(tailscale version 2&amp;gt;/dev/null || echo 'Not found')"
      echo " Homebrew: $(brew --version 2&amp;gt;/dev/null | head -n1 || echo 'Not found - source ~/.bashrc first')"
      echo ""
      echo "Quick Start Guide:"
      echo "=================="
      echo ""
      echo "1. Connect to Tailscale:"
      echo " Get auth key: https://login.tailscale.com/admin/settings/keys"
      echo " sudo tailscale up --authkey=tskey-auth-YOUR_KEY"
      echo ""
      echo "2. Configure pnpm (required for skills):"
      echo " pnpm setup &amp;amp;&amp;amp; source ~/.bashrc"
      echo ""
      echo "3. Configure OpenClaw:"
      echo " openclaw onboard --install-daemon"
      echo ""
      echo " Configuration choices:"
      echo " - Setup: Local gateway (this machine)"
      echo " - Bind: Loopback (127.0.0.1) &amp;lt;- REQUIRED"
      echo " - Auth: Token (Recommended)"
      echo " - Tailscale: Serve (for HTTPS)"
      echo ""
      echo "4. Get access info:"
      echo " openclaw-access-info"
      echo ""
      echo "5. Verify:"
      echo " openclaw status"
      echo " openclaw health"
      echo " tailscale serve status"
      echo ""
      echo "Docs: https://docs.openclaw.ai"
      echo ""

  # Progress checker command
  - path: /usr/local/bin/check-install-status
    permissions: '0755'
    owner: root:root
    content: |
      #!/bin/bash
      echo "========================================"
      echo "Installation Status Check"
      echo "========================================"
      echo ""

      # Cloud-init status
      echo "Cloud-init status:"
      cloud-init status 2&amp;gt;/dev/null || echo " Unable to check"
      echo ""

      # Progress file (cleared after reboot, so may not exist)
      if [ -f /var/run/openclaw-install-progress ]; then
        echo "Current step: $(cat /var/run/openclaw-install-progress)"
      fi

      # Ready marker (use sudo to check /root)
      if sudo test -f /root/.openclaw-ready 2&amp;gt;/dev/null; then
        echo "Status: READY"
      else
        echo "Status: IN PROGRESS (or checking permissions...)"
      fi
      echo ""

      # Installed versions (use sudo to read /root)
      if sudo test -f /root/install-versions.txt 2&amp;gt;/dev/null; then
        echo "Installed versions:"
        sudo cat /root/install-versions.txt | sed 's/^/ /'
      fi
      echo ""

      echo "Logs:"
      echo " sudo tail -f /var/log/openclaw-install.log"

  - path: /root/install-openclaw.sh
    permissions: '0755'
    owner: root:root
    content: |
      #!/bin/bash
      # Don't use set -e - we want to continue even if some commands fail
      exec &amp;gt; /var/log/openclaw-install.log 2&amp;gt;&amp;amp;1

      # Progress update function
      update_progress() {
        echo "$1" &amp;gt; /var/run/openclaw-install-progress
        echo "&amp;gt;&amp;gt;&amp;gt; PROGRESS: $1"
      }

      echo "=== OpenClaw Installation Started at $(date) ==="
      update_progress "Starting installation..."

      # Install Tailscale
      update_progress "Installing Tailscale..."
      curl -fsSL https://tailscale.com/install.sh | sh
      if command -v tailscale &amp;amp;&amp;gt; /dev/null; then
        echo "Tailscale version: $(tailscale version)"
      else
        echo "WARNING: Tailscale installation may have failed"
      fi

      # Install Node.js 24
      update_progress "Installing Node.js 24..."
      curl -fsSL https://deb.nodesource.com/setup_24.x | bash -
      apt-get install -y nodejs
      echo "Node.js version: $(node --version)"
      echo "npm version: $(npm --version)"

      # Install pnpm globally
      update_progress "Installing pnpm..."
      npm install -g pnpm@latest
      echo "pnpm version: $(pnpm --version)"

      # Setup pnpm global bin directory (required for OpenClaw skills)
      pnpm setup
      source /root/.bashrc 2&amp;gt;/dev/null || true
      # Also setup for debian user
      sudo -u debian bash -c 'pnpm setup' 2&amp;gt;/dev/null || true

      # Install OpenClaw via npm
      update_progress "Installing OpenClaw..."
      npm install -g openclaw@latest

      # Verify installation and add to PATH if needed
      if command -v openclaw &amp;amp;&amp;gt; /dev/null; then
        echo "OpenClaw version: $(openclaw --version)"
      else
        echo "OpenClaw not in PATH, checking npm global bin..."
        # npm bin -g was removed in npm 9+; derive the global bin dir from the prefix
        NPM_BIN="$(npm prefix -g)/bin"
        if [ -f "$NPM_BIN/openclaw" ]; then
          echo "Found at $NPM_BIN/openclaw, creating symlink..."
          ln -sf "$NPM_BIN/openclaw" /usr/local/bin/openclaw
          echo "OpenClaw version: $(openclaw --version)"
        else
          echo "ERROR: OpenClaw installation failed"
          echo "NPM global bin: $NPM_BIN"
          ls -la "$NPM_BIN/" 2&amp;gt;/dev/null || echo "Cannot list npm bin directory"
        fi
      fi

      # Install Homebrew for debian user (for plugins)
      update_progress "Installing Homebrew (this may take a while)..."
      # Create the debian user's home if it doesn't exist
      mkdir -p /home/debian
      chown debian:debian /home/debian

      # Install Homebrew as debian user
      sudo -u debian bash -c 'NONINTERACTIVE=1 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"' || {
        echo "WARNING: Homebrew installation failed"
      }

      # Add Homebrew to debian user's PATH
      if [ -d "/home/linuxbrew/.linuxbrew" ]; then
        sudo -u debian bash -c 'echo "# Homebrew" &amp;gt;&amp;gt; ~/.bashrc'
        sudo -u debian bash -c 'echo "eval \"$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)\"" &amp;gt;&amp;gt; ~/.bashrc'
        echo "Homebrew installed successfully"
      else
        echo "WARNING: Homebrew directory not found after installation"
      fi

      update_progress "Finalising..."

      echo "=== Installation Complete at $(date) ==="
      echo "Node.js: $(node --version 2&amp;gt;/dev/null || echo 'FAILED')" &amp;gt; /root/install-versions.txt
      echo "npm: $(npm --version 2&amp;gt;/dev/null || echo 'FAILED')" &amp;gt;&amp;gt; /root/install-versions.txt
      echo "pnpm: $(pnpm --version 2&amp;gt;/dev/null || echo 'FAILED')" &amp;gt;&amp;gt; /root/install-versions.txt
      echo "OpenClaw: $(openclaw --version 2&amp;gt;/dev/null || echo 'FAILED')" &amp;gt;&amp;gt; /root/install-versions.txt
      echo "Tailscale: $(tailscale version 2&amp;gt;/dev/null || echo 'FAILED')" &amp;gt;&amp;gt; /root/install-versions.txt
      echo "Homebrew: $(sudo -u debian /home/linuxbrew/.linuxbrew/bin/brew --version 2&amp;gt;/dev/null | head -n1 || echo 'FAILED')" &amp;gt;&amp;gt; /root/install-versions.txt

      # Mark installation as complete
      update_progress "COMPLETE"
      touch /root/.openclaw-ready

      echo ""
      echo "Check /root/install-versions.txt for results"

runcmd:
  - systemctl enable qemu-guest-agent
  - systemctl start qemu-guest-agent
  - echo "INITIALISING" &amp;gt; /var/run/openclaw-install-progress
  - chmod +x /root/install-openclaw.sh
  - /root/install-openclaw.sh
  - chown debian:debian /usr/local/bin/setup-openclaw.sh
  - |
    cat &amp;gt;&amp;gt; /etc/motd &amp;lt;&amp;lt; 'MOTD'

    ========================================
    OpenClaw Ready Template
    ========================================
    Created by Ali Shaikh
    https://alishaikh.me

    Pre-installed (globally):
      - Node.js 24 (NodeSource)
      - pnpm (latest)
      - OpenClaw CLI
      - Tailscale (official)
      - Homebrew (for plugins)

    Check installation: cat /root/install-versions.txt
    Check status: check-install-status
    Setup guide: setup-openclaw.sh
    Access info: openclaw-access-info (after configuration)

    MOTD
  - apt-get clean
  - apt-get autoremove -y

power_state:
  mode: reboot
  message: "Rebooting after OpenClaw installation"
  timeout: 30
  condition: True

CLOUDCONFIG

# Apply Cloud-Init configuration
qm set $VMID --cicustom "vendor=local:snippets/debian13_openclaw_ready.yaml"
qm set $VMID --tags debian-template,openclaw-ready,nodejs24,production
qm set ${VMID} --ciuser ${CI_USER} --cipassword "${CI_PASSWORD}"
qm set $VMID --sshkeys ~/ssh.pub
qm set $VMID --ipconfig0 ip=dhcp

# Convert to template
qm template ${VMID}

echo ""
echo "=========================================="
echo "PRODUCTION TEMPLATE CREATED SUCCESSFULLY"
echo "=========================================="
echo "Template ID: ${VMID}"
echo "Template Name: ${VM_NAME}"
echo ""
echo "Globally installed:"
echo " - Node.js 24 (NodeSource)"
echo " - pnpm (latest)"
echo " - OpenClaw CLI"
echo " - Tailscale (official)"
echo " - Homebrew (for plugins)"
echo ""
echo "DEPLOYMENT:"
echo " qm clone ${VMID} 201 --name openclaw-prod --full"
echo " qm start 201"
echo ""
echo "IMPORTANT: Wait 5-7 minutes for installation to complete"
echo ""
echo "CHECK INSTALLATION STATUS:"
echo " ssh debian@&amp;lt;vm-ip&amp;gt;"
echo " check-install-status # Quick status check"
echo " tail -f /var/log/openclaw-install.log"
echo " cat /root/install-versions.txt"
echo ""

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You now have a production-ready AI assistant running on your own infrastructure, accessible from anywhere via WhatsApp, Telegram, or your browser, with zero public exposure. Your conversations and command history stay on your server, your files remain under your control, and you decide exactly what the AI can access. However, be clear about the privacy boundary: when you use cloud models like Claude, GPT-4o, or Gemini, your prompts and responses still go to Anthropic, OpenAI, or Google. Self-hosting OpenClaw gives you infrastructure control and data residency for everything except the LLM API calls. For true end-to-end privacy, you'd need to run local models via &lt;a href="https://alishaikh.me/how-to-run-local-llms-with-ollama-quick-setup-guide/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; or &lt;a href="https://alishaikh.me/run-a-local-llm-with-lm-studio-easy-2025-tutorial/" rel="noopener noreferrer"&gt;LM Studio&lt;/a&gt;, though at the cost of performance compared to cloud models.&lt;/p&gt;

&lt;p&gt;Before you start feeding your assistant sensitive data, remember what you've built: a system that can execute arbitrary commands, access files, and interact with external services. The security measures in this guide (loopback binding, token authentication, Tailscale isolation) are foundational, not exhaustive. Treat this like any other privileged service: rotate credentials regularly, monitor access logs, keep the system patched, and never expose the gateway publicly. The Proxmox snapshot feature is your friend - take one before major changes so you can roll back easily.&lt;/p&gt;

&lt;p&gt;The Proxmox template makes this repeatable: clone it whenever you need a fresh instance, whether for testing new configurations or deploying assistants for different use cases. OpenClaw is still evolving rapidly (remember, it went from zero to 100k stars in two months), so expect frequent updates and new capabilities. Consider this deployment a learning platform first, a production tool second. Test destructively in clones, understand the permission model, and gradually increase what you trust it to do. Report issues or contribute on the GitHub repo.&lt;/p&gt;

&lt;p&gt;Self-hosting means self-responsibility, but it also means you're not at the mercy of a vendor's API changes or pricing decisions. Join the community, experiment with custom skills, contribute back what you learn, and most importantly, keep your gateway token secure. Stay vigilant, stay curious, and enjoy having an AI that actually works for you. Happy automating! 🦞&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.openclaw.ai/?ref=alishaikh.me" rel="noopener noreferrer"&gt;OpenClaw Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openclaw/openclaw?ref=alishaikh.me" rel="noopener noreferrer"&gt;OpenClaw GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://tailscale.com/kb/?ref=alishaikh.me" rel="noopener noreferrer"&gt;Tailscale Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pve.proxmox.com/wiki/Cloud-Init_Support?ref=alishaikh.me" rel="noopener noreferrer"&gt;Proxmox Cloud-Init&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.debian.org/images/cloud/trixie/?ref=alishaikh.me" rel="noopener noreferrer"&gt;Debian Cloud Images&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>security</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Kubernetes 1.35: In-Place Pod Vertical Scaling Reaches GA</title>
      <dc:creator>Ali Shaikh</dc:creator>
      <pubDate>Mon, 29 Dec 2025 20:48:39 +0000</pubDate>
      <link>https://dev.to/alishaikh/kubernetes-135-in-place-pod-vertical-scaling-reaches-ga-4b06</link>
      <guid>https://dev.to/alishaikh/kubernetes-135-in-place-pod-vertical-scaling-reaches-ga-4b06</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbkqtp7ljy8o8mqfrdhs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbkqtp7ljy8o8mqfrdhs.png" alt="Kubernetes 1.35: In-Place Pod Vertical Scaling Reaches GA" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes v1.35, released on 17 December 2025, brings the long-awaited in-place pod vertical scaling feature to general availability (GA). This feature, officially called 'In-Place Pod Resize', allows administrators to adjust CPU and memory resources for running containers without recreating pods.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Traditional Approach to Resource Scaling
&lt;/h2&gt;

&lt;p&gt;Before this feature, scaling pod resources required deleting and recreating the pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Update deployment/pod specification with new resource values
2. Kubernetes terminates the existing pod
3. Scheduler creates a new pod with updated resources
4. Container runtime starts the new pod
5. Application initialises and becomes ready

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach causes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dropped network connections&lt;/li&gt;
&lt;li&gt;Loss of in-memory application state&lt;/li&gt;
&lt;li&gt;Service disruption during pod restart&lt;/li&gt;
&lt;li&gt;Extended downtime for applications with long startup times&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is In-Place Pod Vertical Scaling?
&lt;/h2&gt;

&lt;p&gt;In-Place Pod Vertical Scaling enables runtime modification of CPU and memory resource requests and limits without pod recreation. The Kubelet adjusts cgroup limits for running containers, allowing resource changes with minimal disruption.&lt;/p&gt;

&lt;h3&gt;
  
  
  Development Timeline
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;v1.27 (April 2023)&lt;/strong&gt;: Alpha release&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;v1.33 (May 2025)&lt;/strong&gt;: Beta release&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;v1.35 (December 2025)&lt;/strong&gt;: General availability (stable)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The feature required 6+ years of development to solve complex challenges, including container runtime coordination, scheduler synchronisation, and memory safety guarantees.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features in v1.35
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Memory Limit Decrease
&lt;/h3&gt;

&lt;p&gt;Previous versions prohibited memory limit decreases due to out-of-memory (OOM) concerns. Version 1.35 implements best-effort memory decrease with safety checks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubelet verifies current memory usage is below the new limit&lt;/li&gt;
&lt;li&gt;Resize fails gracefully if usage exceeds the new limit&lt;/li&gt;
&lt;li&gt;Not guaranteed to prevent OOM, but significantly safer than forced decrease&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Previously prohibited, now allowed
resources:
  requests:
    memory: "512Mi" # Decreased from 1Gi
  limits:
    memory: "1Gi" # Decreased from 2Gi

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Prioritised Resize Queue
&lt;/h3&gt;

&lt;p&gt;When node capacity is insufficient for all resize requests, Kubernetes prioritises them by:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;PriorityClass&lt;/strong&gt; value (higher first)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;QoS class&lt;/strong&gt; (Guaranteed &amp;gt; Burstable &amp;gt; BestEffort)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration deferred&lt;/strong&gt; (oldest first)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This ensures critical workloads receive resources before lower-priority pods.&lt;/p&gt;
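
&lt;p&gt;As a sketch, a PriorityClass can be attached to a workload so its resize requests win under contention; the class name, value, and pod below are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-resize          # illustrative name
value: 1000000
globalDefault: false
description: "Workloads whose resize requests should be served first"
---
apiVersion: v1
kind: Pod
metadata:
  name: payments-api             # illustrative pod
spec:
  priorityClassName: critical-resize
  containers:
  - name: app
    image: nginx:1.27
    resources:
      requests:
        cpu: "500m"
      limits:
        cpu: "1000m"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;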

&lt;h3&gt;
  
  
  3. Enhanced Observability
&lt;/h3&gt;

&lt;p&gt;New metrics and events improve resize operation tracking:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubelet Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;pod_resource_resize_requests_total&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pod_resource_resize_failures_total&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pod_resource_resize_duration_seconds&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pod Conditions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;PodResizePending&lt;/code&gt;: Request cannot be immediately granted&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PodResizeInProgress&lt;/code&gt;: Kubelet is applying changes&lt;/li&gt;
&lt;/ul&gt;
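
&lt;p&gt;A quick way to watch these signals (a sketch; assumes API-server access to the kubelet metrics proxy, and the node name is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Resize-related pod conditions
kubectl get pod example -o jsonpath='{.status.conditions[?(@.type=="PodResizePending")]}'

# Kubelet resize metrics via the API server proxy
NODE=worker-1   # placeholder node name
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/metrics" | grep pod_resource_resize

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;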

&lt;h3&gt;
  
  
  4. VPA Integration
&lt;/h3&gt;

&lt;p&gt;Vertical Pod Autoscaler (VPA) &lt;code&gt;InPlaceOrRecreate&lt;/code&gt; update mode graduated to beta, enabling automatic resource adjustment using in-place resize when possible.&lt;/p&gt;
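
&lt;p&gt;A minimal VPA manifest using this mode might look like the following (a sketch; assumes the VPA components are installed, and the target Deployment name is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: app-vpa                  # illustrative name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app                    # hypothetical Deployment
  updatePolicy:
    updateMode: "InPlaceOrRecreate"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;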

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Resize Policies
&lt;/h3&gt;

&lt;p&gt;Containers specify restart behaviour for each resource type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: nginx:1.27
    resources:
      requests:
        cpu: "500m"
        memory: "512Mi"
      limits:
        cpu: "1000m"
        memory: "1Gi"
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired # No restart for CPU changes
    - resourceName: memory
      restartPolicy: RestartContainer # Restart required for memory

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Policy Options:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;NotRequired&lt;/code&gt;: Apply changes without container restart (default for CPU)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RestartContainer&lt;/code&gt;: Restart container to apply changes (often needed for memory)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Applying Resource Changes
&lt;/h3&gt;

&lt;p&gt;Use the &lt;code&gt;--subresource=resize&lt;/code&gt; flag with kubectl (requires kubectl v1.32+):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl patch pod example --subresource=resize --patch '
{
  "spec": {
    "containers": [
      {
        "name": "app",
        "resources": {
          "requests": {"cpu": "800m"},
          "limits": {"cpu": "1600m"}
        }
      }
    ]
  }
}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Monitoring Resize Status
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Check desired vs actual resources:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Desired (spec)
kubectl get pod example -o jsonpath='{.spec.containers[0].resources}'

# Actual (status)
kubectl get pod example -o jsonpath='{.status.containerStatuses[0].resources}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;View resize conditions:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod example -o jsonpath='{.status.conditions[?(@.type=="PodResizeInProgress")]}'
kubectl describe pod example | grep -A 10 Events

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Peak Hour Resource Scaling
&lt;/h3&gt;

&lt;p&gt;Scale resources to match daily traffic patterns without service disruption. For example, scale up CPU/memory at 08:55 for business hours, then scale down at 18:05 after hours, eliminating the need for pod recreation.&lt;/p&gt;
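
&lt;p&gt;As a sketch, the two transitions could be driven from cron on an admin host (pod name and resource values are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 08:55 scale-up, e.g. cron: 55 8 * * 1-5
kubectl patch pod example --subresource=resize --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"2"},"limits":{"cpu":"2"}}}]}}'

# 18:05 scale-down, e.g. cron: 5 18 * * 1-5
kubectl patch pod example --subresource=resize --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"500m"},"limits":{"cpu":"1"}}}]}}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;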

&lt;h3&gt;
  
  
  2. Database Maintenance Operations
&lt;/h3&gt;

&lt;p&gt;Temporarily increase CPU allocation for intensive maintenance tasks like VACUUM or REINDEX operations, then restore normal allocation once complete. This avoids pod restart and preserves warm database caches.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. JIT Compilation Warmup
&lt;/h3&gt;

&lt;p&gt;Provide additional CPU during application startup for JIT compilation warmup, then reduce allocation once the application reaches steady state. Particularly beneficial for Java applications with large codebases.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Cost Optimisation
&lt;/h3&gt;

&lt;p&gt;Dynamically right-size resources to reduce waste by scaling down during low-traffic periods, reducing over-provisioning based on actual usage patterns, and implementing time-based scaling without pod recreation overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations and Constraints
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Resource Support:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only CPU and memory can be resized (GPU, ephemeral storage, hugepages remain immutable)&lt;/li&gt;
&lt;li&gt;Init and ephemeral containers cannot be resized&lt;/li&gt;
&lt;li&gt;QoS class cannot change during resize&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Platform Requirements:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Not supported on Windows nodes&lt;/li&gt;
&lt;li&gt;Requires compatible container runtime (containerd v2.0+, CRI-O v1.25+)&lt;/li&gt;
&lt;li&gt;Cannot resize with static CPU or Memory manager policies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Important Behaviours:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Most runtimes (Java, Python, Node.js) require restart for memory changes&lt;/li&gt;
&lt;li&gt;Updating Deployment specs does not auto-resize existing pods&lt;/li&gt;
&lt;li&gt;Cannot remove requests or limits entirely, only modify values&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Upgrading to Kubernetes v1.35
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes v1.34.x cluster&lt;/li&gt;
&lt;li&gt;kubectl v1.32+ client&lt;/li&gt;
&lt;li&gt;Compatible container runtime&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Upgrade Process
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Add v1.35 repository to all nodes:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# On each node
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.35/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-1.35-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-1.35-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes-1.35.list

sudo apt update

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Upgrade control plane:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-mark unhold kubeadm
sudo apt install -y kubeadm=1.35.0-1.1
sudo kubeadm upgrade apply v1.35.0 -y
sudo apt-mark unhold kubelet kubectl
sudo apt install -y kubelet=1.35.0-1.1 kubectl=1.35.0-1.1
sudo systemctl restart kubelet

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Upgrade workers (one at a time):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl drain &amp;lt;worker-name&amp;gt; --ignore-daemonsets --delete-emptydir-data --force

ssh &amp;lt;worker-node&amp;gt;
sudo apt-mark unhold kubeadm kubelet kubectl
sudo apt install -y kubeadm=1.35.0-1.1 kubelet=1.35.0-1.1 kubectl=1.35.0-1.1
sudo apt-mark hold kubeadm kubelet kubectl
sudo kubeadm upgrade node
sudo systemctl daemon-reload
sudo systemctl restart kubelet

kubectl uncordon &amp;lt;worker-name&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Verify upgrade:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
# All nodes should show v1.35.0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing In-Place Resize
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Basic CPU Resize Test
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: cpu-test
spec:
  containers:
  - name: nginx
    image: nginx:1.27
    resources:
      requests:
        cpu: "250m"
      limits:
        cpu: "500m"
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create pod and verify baseline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f cpu-test.yaml
kubectl get pod cpu-test -o jsonpath='{.status.containerStatuses[0].restartCount}'
# Output: 0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Resize CPU:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl patch pod cpu-test --subresource=resize --patch '
{
  "spec": {
    "containers": [{
      "name": "nginx",
      "resources": {
        "requests": {"cpu": "800m"},
        "limits": {"cpu": "800m"}
      }
    }]
  }
}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify no restart occurred:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod cpu-test -o jsonpath='{.status.containerStatuses[0].restartCount}'
# Output: 0 (unchanged)

kubectl get pod cpu-test -o jsonpath='{.spec.containers[0].resources.requests.cpu}'
# Output: 800m (updated)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Memory Decrease Test
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create pod with high memory
kubectl apply -f memory-test.yaml

# Increase memory first
kubectl patch pod memory-test --subresource=resize --patch '
{
  "spec": {
    "containers": [{
      "name": "app",
      "resources": {
        "requests": {"memory": "1Gi"},
        "limits": {"memory": "2Gi"}
      }
    }]
  }
}'

# Decrease memory (new in v1.35)
kubectl patch pod memory-test --subresource=resize --patch '
{
  "spec": {
    "containers": [{
      "name": "app",
      "resources": {
        "requests": {"memory": "512Mi"},
        "limits": {"memory": "1Gi"}
      }
    }]
  }
}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Best Practices
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Testing:&lt;/strong&gt; Verify application behaviour in non-production before enabling in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resize Policies:&lt;/strong&gt; Use &lt;code&gt;NotRequired&lt;/code&gt; for CPU (usually safe), &lt;code&gt;RestartContainer&lt;/code&gt; for memory (often requires restart).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitoring:&lt;/strong&gt; Track Kubelet metrics for resize operations, alert on persistent &lt;code&gt;PodResizePending&lt;/code&gt; conditions, monitor OOM events after decreases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource Changes:&lt;/strong&gt; Make incremental adjustments, avoid large jumps, verify current usage before decreasing limits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Priority:&lt;/strong&gt; Use PriorityClasses for critical workloads to ensure resize priority during capacity constraints.&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Resize Rejected (stuck in &lt;code&gt;PodResizePending&lt;/code&gt;):&lt;/strong&gt; Check node capacity (&lt;code&gt;kubectl describe node&lt;/code&gt;), verify QoS class compatibility, review kubelet configuration for static resource managers, ensure resize policy allows the change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unexpected Restarts:&lt;/strong&gt; Memory changes often require restart regardless of policy. Use &lt;code&gt;RestartContainer&lt;/code&gt; for memory, test application behaviour, monitor OOM events.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OOM After Memory Decrease:&lt;/strong&gt; Verify current usage first (&lt;code&gt;kubectl top pod&lt;/code&gt;), make gradual decreases, ensure application can release memory under pressure, consider using &lt;code&gt;RestartContainer&lt;/code&gt; to force release.&lt;/p&gt;
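
&lt;p&gt;A quick diagnostic pass for a stuck resize might look like this (pod and node names are placeholders; &lt;code&gt;kubectl top&lt;/code&gt; requires metrics-server):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Why is the resize pending? The condition message usually says.
kubectl get pod example -o jsonpath='{.status.conditions[?(@.type=="PodResizePending")].message}'

# Is there headroom on the node?
kubectl describe node worker-1 | grep -A 6 'Allocated resources'

# Current usage vs the limit you intend to set
kubectl top pod example

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;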

&lt;h2&gt;
  
  
  Anti-Patterns: When NOT to Use In-Place Resize
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; In-place pod resize is a powerful feature that can become an anti-pattern if misused. It should be reserved for specific edge cases, not used as a general scaling strategy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why It Can Be an Anti-Pattern
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Violates Immutable Infrastructure:&lt;/strong&gt; Kubernetes follows "cattle not pets" philosophy with ephemeral, replaceable pods. In-place resize makes pods mutable and long-lived, contradicting this design principle and reducing system resilience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Breaks GitOps:&lt;/strong&gt; Manual pod patching creates drift between Git repository and cluster reality. Next deployment overwrites manual changes, cluster state doesn't match version control, and you cannot reproduce environments from Git.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Configuration Drift:&lt;/strong&gt; Pods in the same Deployment can have different resource allocations, causing inconsistent behaviour across replicas, difficult debugging, and unpredictable load distribution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Reduces Failure Isolation:&lt;/strong&gt; Encourages keeping pods alive longer rather than quick replacement, which can hide underlying issues like memory leaks and delay necessary restarts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Better Alternatives Exist:&lt;/strong&gt; Traffic spikes should use HPA (scale out), resource changes should update Deployments (declarative), and environment differences should use Kustomize/Helm.&lt;/p&gt;
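&lt;p&gt;For the common traffic-spike case, a declarative HPA keeps scaling decisions in Git instead of in ad-hoc patches. A minimal sketch (the &lt;code&gt;web&lt;/code&gt; Deployment name is hypothetical):&lt;/p&gt;

```yaml
# Sketch only: scales a hypothetical "web" Deployment on CPU utilisation.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```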

&lt;h3&gt;
  
  
  Some Valid Use Cases
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Stateful Applications with Expensive Startup:&lt;/strong&gt; Databases with 15+ minute startup times (cache warming, index loading). Example: Scale up CPU for PostgreSQL VACUUM/ANALYZE operations, then scale down. Justified because recreation cost exceeds configuration drift cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Single-Instance Workloads:&lt;/strong&gt; Legacy applications with shared file locks, non-distributed state, or connection pooling limitations. Justified because horizontal scaling isn't possible without full rewrite.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Emergency Production Incidents:&lt;/strong&gt; Immediate relief for OOM conditions at 3am when proper solutions require hours of testing. Must be temporary (fix properly within 24 hours) and documented.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Predictable Temporary Spikes:&lt;/strong&gt; Monthly report generation requiring 4x CPU for 2 hours. Automate with CronJobs to scale up/down on schedule. Justified because running 4x resources 24/7 wastes 96% of allocation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Testing and Capacity Planning:&lt;/strong&gt; Non-production experimentation to measure performance at different resource levels and determine optimal production sizing.&lt;/p&gt;
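&lt;p&gt;The predictable-spike case (use case 4 above) can be sketched as a pair of CronJobs that patch resources on a schedule. All names here are placeholders, and the ServiceAccount needs RBAC permission to patch pods (not shown); a mirror job would scale back down after the run:&lt;/p&gt;

```yaml
# Sketch only: scale a report-generator pod up before the monthly run.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: report-scale-up
spec:
  schedule: "0 1 1 * *"   # 01:00 on the 1st of each month
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-resizer
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl
            command:
            - kubectl
            - patch
            - pod/report-generator
            - --subresource=resize
            - --patch
            - '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"4"}}}]}}'
```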

&lt;h2&gt;
  
  
  Future Enhancements
&lt;/h2&gt;

&lt;p&gt;Kubernetes SIG-Node is working on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Removing static CPU/Memory manager restrictions&lt;/li&gt;
&lt;li&gt;Support for additional resource types&lt;/li&gt;
&lt;li&gt;Improved safety for memory decreases (runtime-level checks)&lt;/li&gt;
&lt;li&gt;Resource pre-emption (evict lower-priority pods for high-priority resizes)&lt;/li&gt;
&lt;li&gt;Better integration with horizontal autoscaling&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In-Place Pod Vertical Scaling addresses a long-standing Kubernetes limitation, enabling resource adjustments without service disruption. The v1.35 GA release brings production-ready capability with important improvements including memory decrease support, prioritised queuing, and enhanced observability.&lt;/p&gt;

&lt;p&gt;While this feature enables dynamic resource management for stateful workloads and emergency scenarios, it should complement rather than replace traditional scaling patterns. Use HPA for traffic-based scaling, update Deployments for permanent changes, and reserve in-place resize for the specific edge cases where pod recreation cost genuinely exceeds operational complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/blog/2025/12/17/kubernetes-v1-35-release/?ref=alishaikh.me" rel="noopener noreferrer"&gt;Kubernetes v1.35 Release Announcement&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/blog/2025/12/19/kubernetes-v1-35-in-place-pod-resize-ga/?ref=alishaikh.me" rel="noopener noreferrer"&gt;In-Place Pod Resize GA Blog Post&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/?ref=alishaikh.me" rel="noopener noreferrer"&gt;Official Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1287-in-place-update-pod-resources?ref=alishaikh.me" rel="noopener noreferrer"&gt;KEP-1287: In-Place Update of Pod Resources&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.35.md?ref=alishaikh.me" rel="noopener noreferrer"&gt;Kubernetes v1.35 Release Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>homelab</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>HTTP Status Codes Explained: Unlock the Web’s Secret Signals</title>
      <dc:creator>Ali Shaikh</dc:creator>
      <pubDate>Fri, 21 Mar 2025 08:55:41 +0000</pubDate>
      <link>https://dev.to/alishaikh/http-status-codes-explained-unlock-the-webs-secret-signals-217o</link>
      <guid>https://dev.to/alishaikh/http-status-codes-explained-unlock-the-webs-secret-signals-217o</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb4k63c2y62n24ecaihu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb4k63c2y62n24ecaihu.png" alt="HTTP Status Codes Explained: Unlock the Web’s Secret Signals" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ever clicked a link and wondered what’s happening behind the scenes? Your browser and the server trade quick messages, and HTTP status codes are the server’s way of spilling the beans. These three-digit numbers might seem odd, but they’re packed with info. Let’s break them down and try a fun trick to see them in action!&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are HTTP Status Codes?
&lt;/h2&gt;

&lt;p&gt;HTTP status codes are short replies from web servers to your browser, summing up what happened with your request. Loading a page, submitting a form, or fetching data—they’ve got a code for it all. Grouped into five categories, they tell you if things went great, got redirected, or hit a snag.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Five Categories of Status Codes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1xx: Informational Responses
&lt;/h3&gt;

&lt;p&gt;Rarely spotted, these codes mean your request is still cooking.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;100 Continue&lt;/strong&gt; : "Got the first bit—send more!"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;101 Switching Protocols&lt;/strong&gt; : "Switching protocols as you asked (hello, WebSockets)."&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2xx: Success Responses
&lt;/h3&gt;

&lt;p&gt;Sweet success! These mean your request nailed it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;200 OK&lt;/strong&gt; : Perfection—your page loaded without a hitch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;201 Created&lt;/strong&gt; : You’ve added something new, like a comment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;204 No Content&lt;/strong&gt; : All good, but nothing to display.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3xx: Redirection Responses
&lt;/h3&gt;

&lt;p&gt;A detour ahead—these point you somewhere else.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;301 Moved Permanently&lt;/strong&gt; : "This page has a new home."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;302 Found&lt;/strong&gt; : "It’s hanging out elsewhere for now."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;304 Not Modified&lt;/strong&gt; : "No updates since last time."&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4xx: Client Errors
&lt;/h3&gt;

&lt;p&gt;Uh-oh—something’s funky with your request.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;400 Bad Request&lt;/strong&gt; : "I can’t make sense of that."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;401 Unauthorized&lt;/strong&gt; : "Log in first, please."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;403 Forbidden&lt;/strong&gt; : "No entry allowed here."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;404 Not Found&lt;/strong&gt; : The classic "where’d it go?" error.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;418 I’m a Teapot&lt;/strong&gt; : A goofy joke—servers don’t brew coffee!&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5xx: Server Errors
&lt;/h3&gt;

&lt;p&gt;Not your mess—the server’s struggling.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;500 Internal Server Error&lt;/strong&gt; : "We broke something."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;502 Bad Gateway&lt;/strong&gt; : "A middleman dropped the ball."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;503 Service Unavailable&lt;/strong&gt; : "Too busy or down for a tune-up."&lt;/li&gt;
&lt;/ul&gt;
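&lt;p&gt;The five categories boil down to the first digit of the code, which you can sketch in a few lines of Python:&lt;/p&gt;

```python
def describe_status(code: int) -> str:
    """Map an HTTP status code to its category using its first digit."""
    categories = {
        1: "informational",
        2: "success",
        3: "redirection",
        4: "client error",
        5: "server error",
    }
    return categories.get(code // 100, "unknown")

print(describe_status(200))  # success
print(describe_status(404))  # client error
print(describe_status(503))  # server error
```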

&lt;h2&gt;
  
  
  Why Status Codes Matter
&lt;/h2&gt;

&lt;p&gt;These tiny numbers power the web:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They tell your browser what’s next (reload, redirect, or error out).&lt;/li&gt;
&lt;li&gt;They help developers squash bugs.&lt;/li&gt;
&lt;li&gt;They shape your journey, from snappy loads to clear error pages.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It Out: Catch a Status Code in the Wild
&lt;/h2&gt;

&lt;p&gt;Want to peek at these codes yourself? Here’s how:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open Chrome or Firefox on your computer.&lt;/li&gt;
&lt;li&gt;Right-click any webpage and hit “Inspect” for Developer Tools.&lt;/li&gt;
&lt;li&gt;Click the “Network” tab, then reload the page (Ctrl+R or Cmd+R).&lt;/li&gt;
&lt;li&gt;Check the “Status” column—spot 200s, 304s, or a 404 if something’s AWOL.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For kicks, try a fake URL (like &lt;a href="https://alishaikh.me/oops/" rel="noopener noreferrer"&gt;&lt;code&gt;alishaikh.me/oops&lt;/code&gt;&lt;/a&gt;) and watch a 404 appear!&lt;/p&gt;
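&lt;p&gt;Prefer code over DevTools? Here's a small Python sketch (standard library only) that reports the status code for any URL, including error responses:&lt;/p&gt;

```python
from urllib import request, error

def fetch_status(url: str) -> int:
    """Return the HTTP status code for a URL; urllib raises 4xx/5xx as HTTPError."""
    try:
        with request.urlopen(url) as resp:
            return resp.status
    except error.HTTPError as exc:
        return exc.code
```

Point it at a made-up path on any site and you should see a 404 come back.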

&lt;h2&gt;
  
  
  Fun Fact
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;418 I’m a Teapot&lt;/strong&gt; code hatched from a 1998 April Fools’ prank. Proof the web’s got a playful side!&lt;/p&gt;

&lt;h2&gt;
  
  
  The Web’s Pulse: What Will You Discover?
&lt;/h2&gt;

&lt;p&gt;From a flawless 200 OK to a cheeky 418, HTTP status codes are the heartbeat of every click. Now that you’ve decoded their signals, you’re no longer just a web surfer—you’re a digital detective. Fire up that “Network” tab and hunt for clues. What’s the wildest code you’ll catch today?&lt;/p&gt;

</description>
      <category>http</category>
      <category>webdev</category>
      <category>errors</category>
    </item>
    <item>
      <title>Creating Simple AI Agents: A Beginner's Guide</title>
      <dc:creator>Ali Shaikh</dc:creator>
      <pubDate>Tue, 04 Mar 2025 08:33:02 +0000</pubDate>
      <link>https://dev.to/alishaikh/creating-simple-ai-agents-a-beginners-guide-2mbg</link>
      <guid>https://dev.to/alishaikh/creating-simple-ai-agents-a-beginners-guide-2mbg</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1516110833967-0b5716ca1387%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDJ8fHJvYm90JTIwYWl8ZW58MHx8fHwxNzQxMDM1MjMyfDA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1516110833967-0b5716ca1387%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDJ8fHJvYm90JTIwYWl8ZW58MHx8fHwxNzQxMDM1MjMyfDA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" alt="Creating Simple AI Agents: A Beginner's Guide" width="2000" height="1500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In today's rapidly evolving tech landscape, AI agents are becoming increasingly accessible to developers of all skill levels. This article introduces the concept of AI agents and walks through creating a simple yet practical example: a landing page generator powered by language models running locally on your system using Ollama.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are AI Agents?
&lt;/h2&gt;

&lt;p&gt;AI agents are software programs that can perform tasks with varying degrees of autonomy using artificial intelligence techniques. They typically:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Take input&lt;/strong&gt; from users or environment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Process information&lt;/strong&gt; using AI models&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Perform actions&lt;/strong&gt; based on this processing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provide output or feedback&lt;/strong&gt; to users&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The complexity of AI agents ranges from simple task-specific tools to sophisticated systems with complex reasoning capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites: Running Local LLMs with Ollama
&lt;/h2&gt;

&lt;p&gt;Before diving into our AI agent, make sure you have Ollama set up on your system. If you haven't installed it yet, I've written a comprehensive guide on &lt;a href="https://alishaikh.me/how-to-run-local-llms-with-ollama-quick-setup-guide/" rel="noopener noreferrer"&gt;how to run local LLMs with Ollama&lt;/a&gt;. This will walk you through the installation process and get you ready to build your first AI agent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Anatomy of a Simple AI Agent
&lt;/h2&gt;

&lt;p&gt;The landing page generator we're examining represents a basic AI agent with these key components:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. User Interface
&lt;/h3&gt;

&lt;p&gt;A command-line interface collecting user requirements and displaying results.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. AI Integration Layer
&lt;/h3&gt;

&lt;p&gt;Code that formats prompts and communicates with the AI model (Ollama's DeepSeek in this case).&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Processing Logic
&lt;/h3&gt;

&lt;p&gt;Functions for extracting relevant content from AI responses and saving the output.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Action Execution
&lt;/h3&gt;

&lt;p&gt;Code that opens the generated landing page and handles user workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  How This Landing Page Generator Works
&lt;/h2&gt;

&lt;p&gt;This agent follows a straightforward workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;User Input&lt;/strong&gt; : Collects a description of the desired landing page&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Instruction&lt;/strong&gt; : Sends carefully crafted prompts to the language model&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response Processing&lt;/strong&gt; : Extracts HTML/CSS code from the model's response&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output Generation&lt;/strong&gt; : Saves the code as an HTML file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Result Delivery&lt;/strong&gt; : Opens the file in a browser and offers to create another&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Key Design Principles
&lt;/h2&gt;

&lt;p&gt;Looking at the code, we can identify several principles for effective AI agent design:&lt;/p&gt;

&lt;h3&gt;
  
  
  Clear, Specific Prompts
&lt;/h3&gt;

&lt;p&gt;The system prompt defines the agent's role as a "skilled web developer" and provides specific constraints (using Tailwind CSS, delivering ready-to-use code).&lt;/p&gt;

&lt;h3&gt;
  
  
  Error Handling
&lt;/h3&gt;

&lt;p&gt;The code gracefully handles potential failures in AI generation, extraction, and file operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  User Experience Focus
&lt;/h3&gt;

&lt;p&gt;The interface includes progress updates, confirmation prompts, and clear navigation options.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Landing Page Generator Code
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import ollama
import os
import re
from pathlib import Path
from datetime import datetime

def generate_landing_page(description):
    """Use Ollama to generate HTML with Tailwind CSS for a landing page based on description."""
    messages = [
        {"role": "system",
         "content": "You are a skilled web developer who creates clean, responsive landing pages using HTML and Tailwind CSS. Your code should be complete, valid, and ready to use with just the Tailwind CDN."},
        {"role": "user",
         "content": f"Create a complete landing page based on this description: {description}. Use Tailwind CSS for styling (include the CDN link). Provide only the HTML code with Tailwind classes and minimal internal JavaScript if needed. The code should be complete and ready to be saved as an index.html file."}
    ]

    try:
        print("Generating landing page with Tailwind CSS... This may take a minute or two...")
        response = ollama.chat(
            model='deepseek-r1:8b',
            messages=messages
        )

        # Extract HTML code from response
        html_content = response['message']['content']

        # Extract code between ```html and ``` fences if present
        code_pattern = re.compile(r'```(?:html)?\s*([\s\S]*?)\s*```')
        match = code_pattern.search(html_content)

        if match:
            return match.group(1).strip()
        else:
            # If no code blocks found, try to extract just the HTML
            if "&amp;lt;html" in html_content and "&amp;lt;/html&amp;gt;" in html_content:
                start = html_content.find("&amp;lt;html")
                end = html_content.find("&amp;lt;/html&amp;gt;") + 7
                return html_content[start:end]
            return html_content

    except Exception as e:
        return f"Error generating landing page: {str(e)}"

def save_landing_page(html_content, output_dir="landing_pages"):
    """Save the generated HTML to a file."""
    # Ensure the output directory exists
    Path(output_dir).mkdir(exist_ok=True)

    # Create a filename with timestamp
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"{output_dir}/landing_page_{timestamp}.html"

    # Save the HTML content
    with open(filename, "w", encoding="utf-8") as f:
        f.write(html_content)

    return filename

def main():
    """Run the dedicated landing page generator."""
    print("==========================================")
    print(" TAILWIND LANDING PAGE GENERATOR (OLLAMA) ")
    print("==========================================")

    # Get description for the landing page
    print("\nDescribe the landing page you want (be specific about colors, layout, features):")
    description = input("&amp;gt; ")

    if not description:
        print("Error: You must provide a description for the landing page.")
        return

    # Generate the landing page HTML
    html_content = generate_landing_page(description)

    # Save the landing page
    filename = save_landing_page(html_content)

    print(f"\nSuccess! Landing page created and saved to: {filename}")

    # Ask if user wants to open the file
    open_file = input("Would you like to open the landing page now? (y/n): ")
    if open_file.lower() in ['y', 'yes']:
        # Try to open the file in the default browser
        try:
            import webbrowser
            file_path = os.path.abspath(filename)
            webbrowser.open('file://' + file_path)
            print(f"Opened {filename} in your default browser.")
        except Exception as e:
            print(f"Couldn't open the file automatically: {e}")
            print(f"Please open {filename} manually in your browser.")

    # Ask if user wants to create another landing page
    another = input("\nWould you like to create another landing page? (y/n): ")
    if another.lower() in ['y', 'yes']:
        main() # Recursive call
    else:
        print("Thank you for using the Tailwind Landing Page Generator. Goodbye!")

if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I am using &lt;strong&gt;deepseek-r1:8b&lt;/strong&gt; and I have also tried &lt;strong&gt;qwen2.5-coder:14b&lt;/strong&gt; which gave me a better result. Feel free to tweak the script to match your setup.&lt;/p&gt;

&lt;p&gt;You can visit my GitHub Repo where I will be adding more examples as I continue my journey in this exciting field of AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Ali-Shaikh/ai-playground?ref=alishaikh.me" rel="noopener noreferrer"&gt;GitHub - Ali-Shaikh/ai-playground: AI Playground&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Going Beyond: Enhancing Your AI Agents
&lt;/h2&gt;

&lt;p&gt;Again, this is a simple example to help you get up and running with creating an AI Agent. To create more sophisticated AI agents, consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multiple Model Integration&lt;/strong&gt; : Combine specialised models for different aspects of a task&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory and Context&lt;/strong&gt; : Maintain conversation history for more coherent interactions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User Preference Learning&lt;/strong&gt; : Store and apply user preferences over time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feedback Loops&lt;/strong&gt; : Collect user feedback to improve future outputs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validation Layers&lt;/strong&gt; : Add code to validate AI outputs before presenting them&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The landing page generator demonstrates how even simple AI agents can provide significant value. By combining clear prompts, effective processing, and thoughtful user experience design, developers can create tools that leverage AI to solve specific problems.&lt;/p&gt;

&lt;p&gt;As you build your own AI agents, remember that the quality of your prompts, the robustness of your error handling, and the thoughtfulness of your user experience will largely determine your success.&lt;/p&gt;

&lt;p&gt;In a nutshell, an AI agent is basically like calling an API with parameters, processing the result, shaping it to your requirements, and displaying the output. Anyone can get started with building an AI agent using Ollama or &lt;a href="https://alishaikh.me/run-a-local-llm-with-lm-studio-easy-2025-tutorial/" rel="noopener noreferrer"&gt;LM Studio&lt;/a&gt;. If you have a modest system, try a smaller model such as &lt;strong&gt;qwen2.5-coder:3b&lt;/strong&gt; or even &lt;strong&gt;qwen2.5-coder:1.5b&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Smaller models are a great starting point, and while they may not offer the same quality as larger models, they are perfect for building your skills. As you get more comfortable with creating AI agents, you’ll be ready to work with larger models to unlock even more powerful capabilities!&lt;/p&gt;

&lt;p&gt;Keep creating and innovating; I would love to see what you build. Please share any comments, questions, and examples of what you made in the comments below. If you need any help, feel free to reach out through my &lt;a href="https://www.linkedin.com/in/alishaikh1/?ref=alishaikh.me" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Run Local LLMs with Ollama: Quick Setup Guide</title>
      <dc:creator>Ali Shaikh</dc:creator>
      <pubDate>Sun, 02 Mar 2025 08:50:48 +0000</pubDate>
      <link>https://dev.to/alishaikh/how-to-run-local-llms-with-ollama-quick-setup-guide-5eoj</link>
      <guid>https://dev.to/alishaikh/how-to-run-local-llms-with-ollama-quick-setup-guide-5eoj</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1511885663737-eea53f6d6187%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDJ8fGxsYW1hJTIwd2lkZXxlbnwwfHx8fDE3NDA5MDQ5MTR8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1511885663737-eea53f6d6187%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDJ8fGxsYW1hJTIwd2lkZXxlbnwwfHx8fDE3NDA5MDQ5MTR8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" alt="How to Run Local LLMs with Ollama: Quick Setup Guide" width="2000" height="1500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you've already checked out my &lt;a href="https://alishaikh.me/run-a-local-llm-with-lm-studio-easy-2025-tutorial/" rel="noopener noreferrer"&gt;guide on running local LLMs with LM Studio&lt;/a&gt;, you might be interested in another excellent option: Ollama. This tool offers a streamlined, command-line focused approach to running LLMs locally.&lt;/p&gt;

&lt;p&gt;We can also use Ollama from our own code, or integrate it with &lt;strong&gt;Open WebUI&lt;/strong&gt; or the &lt;strong&gt;Continue&lt;/strong&gt; VS Code extension for a Cursor-like IDE experience. I will publish more on these in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Ollama?
&lt;/h2&gt;

&lt;p&gt;Ollama is a lightweight, command-line tool that makes running open-source LLMs locally incredibly simple. It handles model downloads, optimisation, and inference in a single package with minimal configuration needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Installation Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Windows
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Visit &lt;a href="https://ollama.com/download?ref=alishaikh.me" rel="noopener noreferrer"&gt;Ollama's official website&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Download and run the Windows installer&lt;/li&gt;
&lt;li&gt;Follow the installation prompts&lt;/li&gt;
&lt;li&gt;Once installed, Ollama runs as a background service&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  macOS
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Download the macOS app from &lt;a href="https://ollama.com/download?ref=alishaikh.me" rel="noopener noreferrer"&gt;Ollama's website&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Open the .dmg file&lt;/li&gt;
&lt;li&gt;Drag Ollama to your Applications folder&lt;/li&gt;
&lt;li&gt;Launch the app (it will run in the background)&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Linux
&lt;/h3&gt;

&lt;p&gt;Run this command in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://ollama.com/install.sh | sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Using Ollama (Basic Commands)
&lt;/h2&gt;

&lt;p&gt;Once installed, using Ollama is as simple as opening a terminal or command prompt and running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ollama run deepseek-r1:8b

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command downloads the deepseek-r1 8B model (if not already downloaded) and starts an interactive chat session.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftr2gg2tpklhb4c3gjqsa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftr2gg2tpklhb4c3gjqsa.png" alt="How to Run Local LLMs with Ollama: Quick Setup Guide" width="800" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If AI thinks this is funny, then there is a very slim chance of them taking over the world!&lt;/p&gt;

&lt;p&gt;You may notice I added a &lt;code&gt;--verbose&lt;/code&gt; flag to the run command; this prints stats at the end, such as tokens/s.&lt;/p&gt;

&lt;h2&gt;
  
  
  Popular Models to Try
&lt;/h2&gt;

&lt;p&gt;Ollama makes it easy to experiment with different models:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ollama run llama3 # Meta's latest Llama 3 8B model
ollama run gemma:2b # Google's lightweight Gemma model
ollama run codellama # Specialised for coding tasks
ollama run neural-chat # Optimised for conversation
ollama run deepseek-r1:8b # My go-to model
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To browse all available models, check the &lt;a href="https://ollama.com/library?ref=alishaikh.me" rel="noopener noreferrer"&gt;Ollama model library&lt;/a&gt;.&lt;/p&gt;



&lt;p&gt;To see models you've already downloaded:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ollama list

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Hardware Considerations
&lt;/h2&gt;

&lt;p&gt;As explained in my &lt;a href="https://alishaikh.me/run-a-local-llm-with-lm-studio-easy-2025-tutorial/" rel="noopener noreferrer"&gt;LM Studio article&lt;/a&gt;, your hardware still matters. The same considerations apply:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Smaller models (2B-7B) work on modest hardware&lt;/li&gt;
&lt;li&gt;Larger models (13B+) benefit greatly from a GPU&lt;/li&gt;
&lt;li&gt;At least 16GB RAM recommended for a smooth experience&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ollama will automatically optimise for your available hardware.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating with Other Tools
&lt;/h2&gt;

&lt;p&gt;Ollama works well with several companion tools:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Open WebUI&lt;/strong&gt; : A user-friendly chat interface for Ollama&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continue VS Code Extension&lt;/strong&gt; : For coding assistance&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Integrating with Code
&lt;/h2&gt;

&lt;p&gt;One of Ollama's strongest features is its API, making it easy to integrate into your applications:&lt;/p&gt;

&lt;h3&gt;
  
  
  Python Integration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

# Simple completion request
response = requests.post('http://localhost:11434/api/generate',
    json={
        'model': 'deepseek-r1:8b',
        'prompt': 'Explain quantum computing in simple terms',
        'stream': False  # return a single JSON object instead of a stream of chunks
    }
)

print(response.json()['response'])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  JavaScript/Node.js Integration
&lt;/h3&gt;

&lt;p&gt;The same request works from JavaScript or Node.js:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const response = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'deepseek-r1:8b',
    prompt: 'Write a function to calculate fibonacci numbers',
    stream: false  // return a single JSON object instead of NDJSON chunks
  })
});

const data = await response.json();
console.log(data.response);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can build complete chat applications, coding assistants, AI agents, or content generation tools using this simple API. I'll cover these in future posts.&lt;/p&gt;

&lt;h2&gt;
  
  
  For More Details
&lt;/h2&gt;

&lt;p&gt;For a deeper dive into running local LLMs, model selection, and optimisation techniques, check out my &lt;a href="https://alishaikh.me/run-a-local-llm-with-lm-studio-easy-2025-tutorial/" rel="noopener noreferrer"&gt;full guide on running LLMs with LM Studio&lt;/a&gt;. The same principles apply, though the interface is different.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Ollama offers a more developer-oriented approach to local LLMs compared to LM Studio's GUI-focused experience. It's perfect for those comfortable with the command line or looking to integrate local AI into their development workflows.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cli</category>
      <category>llm</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How AI Models Process Text: Understanding Tokens in AI and LLMs with TikTokenizer</title>
      <dc:creator>Ali Shaikh</dc:creator>
      <pubDate>Sat, 01 Mar 2025 07:21:51 +0000</pubDate>
      <link>https://dev.to/alishaikh/how-ai-models-process-text-understanding-tokens-in-ai-and-llms-with-tiktokenizer-5e</link>
      <guid>https://dev.to/alishaikh/how-ai-models-process-text-understanding-tokens-in-ai-and-llms-with-tiktokenizer-5e</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fu7dcpdrjywbk4e0y0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fu7dcpdrjywbk4e0y0r.png" alt="How AI Models Process Text: Understanding Tokens in AI and LLMs with TikTokenizer" width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the world of artificial intelligence (AI) and large language models (LLMs), &lt;strong&gt;tokens&lt;/strong&gt; are the essential building blocks that allow these systems to process and understand text. But what exactly are tokens, and how can a tool like &lt;a href="https://tiktokenizer.vercel.app/?ref=alishaikh.me" rel="noopener noreferrer"&gt;TikTokenizer&lt;/a&gt; help us explore them? Let’s dive in.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Tokens?
&lt;/h2&gt;

&lt;p&gt;Tokens are the smallest units of text that an AI language model works with. Depending on the tokenization method, a token could be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;word&lt;/strong&gt; (e.g., "love" or "AI"),&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;subword&lt;/strong&gt; (e.g., "play" and "##ing" for "playing"),&lt;/li&gt;
&lt;li&gt;Or even an &lt;strong&gt;individual character&lt;/strong&gt; (e.g., "A" or "!").&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For instance, the sentence "I love AI" might be tokenized as ["I", "love", "AI"]. Tokenization is the process of breaking down raw text into these units, transforming it into a format that a model can interpret. This step is critical because it determines how the model "sees" the text, influencing its ability to generate responses or understand meaning.&lt;/p&gt;
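
&lt;p&gt;You can get a feel for this with a few lines of Python. The sketch below is a toy illustration only; real LLM tokenizers (such as the byte-pair-encoding tokenizers OpenAI models use) learn their vocabulary from data, and the tiny vocabulary here is made up for the example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import re

def word_tokenize(text):
    """Toy word-level tokenizer: splits into words and punctuation marks."""
    return re.findall(r"\w+|[^\w\s]", text)

# Hypothetical subword vocabulary, made up for this example
VOCAB = {"I", "love", "AI", "play", "##ing"}

def subword_tokenize(word, vocab=VOCAB):
    """Greedy longest-match subword split, WordPiece-style."""
    pieces, rest = [], word
    while rest:
        for end in range(len(rest), 0, -1):
            piece = rest[:end] if not pieces else "##" + rest[:end]
            if piece in vocab:
                pieces.append(piece)
                rest = rest[end:]
                break
        else:
            return [word]  # unknown word: keep it whole
    return pieces

print(word_tokenize("I love AI"))   # ['I', 'love', 'AI']
print(subword_tokenize("playing"))  # ['play', '##ing']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Real tokenizers additionally map each piece to a numeric token ID, which is exactly what TikTokenizer visualises.&lt;/p&gt;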

&lt;h2&gt;
  
  
  Why Tokenization Matters
&lt;/h2&gt;

&lt;p&gt;Tokenization isn’t just a technical detail—it directly affects a model’s performance. The way text is split into tokens impacts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency:&lt;/strong&gt; Smaller tokens might mean more processing steps, while larger ones could simplify things.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy:&lt;/strong&gt; Misaligned token boundaries might confuse the model’s interpretation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Understanding tokenization is key for anyone working with LLMs, whether you're crafting prompts, debugging outputs, or optimizing text inputs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter TikTokenizer
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://tiktokenizer.vercel.app/?ref=alishaikh.me" rel="noopener noreferrer"&gt;TikTokenizer tool&lt;/a&gt; is a fantastic resource for visualizing and experimenting with tokenization. It’s interactive and user-friendly: simply type in any text, and the tool breaks it down into tokens, showing each one alongside its unique &lt;strong&gt;token ID&lt;/strong&gt; —a numerical value that models use internally. For example, input "I love AI!" and you’ll see how it’s split and numbered.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Use It
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Visit &lt;a href="https://tiktokenizer.vercel.app/?ref=alishaikh.me" rel="noopener noreferrer"&gt;TikTokenizer&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Enter a sentence—like "Hello, world!"—into the text box.&lt;/li&gt;
&lt;li&gt;Watch as it displays the tokens (e.g., ["Hello", ",", "world", "!"]) and their IDs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can play around with it by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adding punctuation (e.g., "Hello,world" vs. "Hello, world"),&lt;/li&gt;
&lt;li&gt;Changing capitalization (e.g., "hello" vs. "HELLO"),&lt;/li&gt;
&lt;li&gt;Or trying complex phrases to see how the tokenization adapts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This hands-on approach makes it easy to see how different choices affect the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why It’s Useful
&lt;/h2&gt;

&lt;p&gt;TikTokenizer isn’t just a fun toy—it’s a practical tool for anyone interested in AI and LLMs. Here’s why it’s worth exploring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prompt Engineering:&lt;/strong&gt; Learn to craft prompts that fit within token limits for better results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging:&lt;/strong&gt; Spot why a model might misread your input by checking how it’s tokenized.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning:&lt;/strong&gt; Gain a clearer grasp of NLP concepts if you’re new to the field.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimization:&lt;/strong&gt; Tweak your text preprocessing to improve model performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Wrap-Up
&lt;/h2&gt;

&lt;p&gt;Alright, let’s wrap this up! Tokens are the core of how AI language models make sense of text, and &lt;strong&gt;TikTokenizer&lt;/strong&gt; lets you peek under the hood. Type something in, and it breaks it down into tokens with their IDs—super straightforward and honestly kinda cool. It’s perfect whether you’re new to AI or a seasoned pro. Want to see how your words get “read” by a machine? Check out &lt;a href="https://tiktokenizer.vercel.app/?ref=alishaikh.me" rel="noopener noreferrer"&gt;TikTokenizer&lt;/a&gt; and give it a spin!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
    </item>
    <item>
      <title>Run a Local LLM with LM Studio</title>
      <dc:creator>Ali Shaikh</dc:creator>
      <pubDate>Fri, 28 Feb 2025 13:54:54 +0000</pubDate>
      <link>https://dev.to/alishaikh/run-a-local-llm-with-lm-studio-38pi</link>
      <guid>https://dev.to/alishaikh/run-a-local-llm-with-lm-studio-38pi</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1591696331111-ef9586a5b17a%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDl8fGFydGlmaWNpYWwlMjBpbnRlbGxpZ2VuY2V8ZW58MHx8fHwxNzQwNzQ1MzI4fDA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1591696331111-ef9586a5b17a%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDl8fGFydGlmaWNpYWwlMjBpbnRlbGxpZ2VuY2V8ZW58MHx8fHwxNzQwNzQ1MzI4fDA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" alt="Run a Local LLM with LM Studio" width="2000" height="1333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run Llama|Mistral|Phi|Gemma|DeepSeek|Qwen 2.5 locally on your computer!&lt;/p&gt;

&lt;p&gt;Picture this: your own AI buddy, running right on your computer, ready to chat, code, or solve problems—no internet, no fees, just you and your machine. That’s what running a Large Language Model (LLM) locally offers, and with LM Studio, it’s simpler than you’d think.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;So I have been tinkering with AI and wanted to set up an LLM such as DeepSeek-R1 locally. I came across LM Studio, which let me run an AI model fully hosted on my PC within 10 minutes. Pretty amazing! I am using my mini PC, a UM780 XTX with an AMD Ryzen 7 7840HS and 32GB of RAM.&lt;/p&gt;

&lt;p&gt;This guide will show you how to set up an LLM using LM Studio. I’ll also highlight some popular CPU setups to match your hardware and throw in a quick comparison of top models—including the DeepSeek-R1 distilled series. Let’s get started!&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Go Local with an LLM?
&lt;/h3&gt;

&lt;p&gt;Running an LLM on your device means privacy, no subscription costs, and offline access. Plus, you can play with open-source models like a kid in a candy store and it is pretty cool!&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s LM Studio?
&lt;/h3&gt;

&lt;p&gt;LM Studio is a free, all-in-one app for Windows, macOS, and Linux that lets you run LLMs without diving into code (unless you want to). It grabs models from &lt;a href="https://huggingface.co/?ref=alishaikh.me" rel="noopener noreferrer"&gt;Hugging Face&lt;/a&gt;, offers a chat interface, and can even mimic OpenAI’s API locally. It’s optimised for your CPU, making it perfect for folks like me with a Ryzen 7 and no NVIDIA GPU.&lt;/p&gt;
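
&lt;p&gt;That local OpenAI-style API is handy for scripting. Here's a rough sketch, assuming you've enabled the server on LM Studio's default port 1234 and using a placeholder model name:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

# Placeholder model name; use the identifier LM Studio shows for your model
payload = {
    'model': 'deepseek-r1-distill-qwen-7b',
    'messages': [{'role': 'user', 'content': 'Say hello in one sentence.'}],
}

try:
    r = requests.post('http://localhost:1234/v1/chat/completions',
                      json=payload, timeout=60)
    print(r.json()['choices'][0]['message']['content'])
except requests.exceptions.ConnectionError:
    print('LM Studio server is not running on localhost:1234')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because the request shape mirrors OpenAI's chat API, many existing tools can point at LM Studio with just a base-URL change.&lt;/p&gt;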

&lt;h2&gt;
  
  
  Step-by-Step: Running an LLM with LM Studio
&lt;/h2&gt;

&lt;p&gt;Here’s the playbook—I’ve tested it myself to keep it spot-on and straightforward.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Check Your Gear
&lt;/h3&gt;

&lt;p&gt;Your computer needs to be up to the task. You can check the full system requirements &lt;a href="https://lmstudio.ai/docs/system-requirements?ref=alishaikh.me" rel="noopener noreferrer"&gt;here&lt;/a&gt; but here’s the baseline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CPU&lt;/strong&gt; : Modern, with AVX2 support (e.g., Intel Core i5 or AMD Ryzen 5 from the last 8-10 years) or any Apple Silicon (M1-M4—macOS-ready).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAM&lt;/strong&gt; : 16GB minimum for small models; 32GB+ is ideal for wiggle room.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage&lt;/strong&gt; : SSD with 10-20GB free (models vary from a few GB to tens of GB).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU (Optional)&lt;/strong&gt;: NVIDIA with CUDA speeds things up, but CPU-only works fine—my Radeon 780M sits this one out 😢.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’ll list some CPU configs later as a reference to help you size up your rig.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Grab LM Studio
&lt;/h3&gt;

&lt;p&gt;Head to &lt;a href="https://lmstudio.ai/?ref=alishaikh.me" rel="noopener noreferrer"&gt;lmstudio.ai&lt;/a&gt;, download the ~400MB installer for your OS, and run it. Follow the steps, launch the app, and you’re in.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Choose and Download a Model
&lt;/h3&gt;

&lt;p&gt;Open LM Studio and hit the “Discover” tab (magnifying glass icon). It’ll suggest models your system can handle. Here are some stars to pick from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mistral 7B&lt;/strong&gt; : Fast and flexible for everyday tasks (~8-10GB RAM, quantized).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLaMA 3.1 8B&lt;/strong&gt; : A powerhouse for chat and reasoning (~10-12GB RAM).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DeepSeek-R1-Distill-Qwen-7B&lt;/strong&gt; : A distilled gem for math and logic (~8-12GB RAM).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phi-3 Mini (3.8B)&lt;/strong&gt;: Tiny but mighty, perfect for light setups (~5-6GB RAM).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8hf2sdtcrwtxiq7u2dn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8hf2sdtcrwtxiq7u2dn.png" alt="Run a Local LLM with LM Studio" width="800" height="632"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click a model, pick a quantized version (like “Q4_K_M” to save memory), and hit “Download.” Bigger models take a bit, so maybe make some 🍵.&lt;/p&gt;
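
&lt;p&gt;If you're wondering why a quantized model needs so much less memory, a back-of-the-envelope estimate helps. The 1.2x overhead factor below is a loose assumption for context cache and runtime overhead, not an exact figure:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def model_ram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough RAM estimate: weights x bytes-per-weight x overhead factor."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

print(round(model_ram_gb(7, 4), 1))   # 7B at 4-bit (Q4): ~4.2 GB
print(round(model_ram_gb(7, 16), 1))  # 7B at FP16: ~16.8 GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That's roughly why a 4-bit 7B model fits comfortably in 16GB of RAM while the full-precision version doesn't.&lt;/p&gt;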

&lt;h3&gt;
  
  
  4. Load It Up and Chat
&lt;/h3&gt;

&lt;p&gt;Switch to the “Chat” tab (speech bubble), select your model from the dropdown, and click “Load.” Your CPU will buzz for a sec as it loads into RAM. Once it’s ready, type something and watch it go.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrros4xc7ru0e7182lt8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrros4xc7ru0e7182lt8.png" alt="Run a Local LLM with LM Studio" width="800" height="632"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Have Fun!
&lt;/h3&gt;

&lt;p&gt;Boom—you’ve got an LLM at your command, offline and private. Test prompts, swap models, or use it to brainstorm. It’s your AI playground.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparing the Models
&lt;/h2&gt;

&lt;p&gt;Not sure which model to choose? Here’s a quick rundown of my top picks, including DeepSeek-R1 distilled, based on size, speed, and smarts:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;th&gt;RAM&lt;/th&gt;
&lt;th&gt;Speed&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Mistral 7B&lt;/td&gt;
&lt;td&gt;7B&lt;/td&gt;
&lt;td&gt;8-10GB&lt;/td&gt;
&lt;td&gt;8-12 t/s&lt;/td&gt;
&lt;td&gt;Writing, Q&amp;amp;A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LLaMA 3.1 8B&lt;/td&gt;
&lt;td&gt;8B&lt;/td&gt;
&lt;td&gt;10-12GB&lt;/td&gt;
&lt;td&gt;5-10 t/s&lt;/td&gt;
&lt;td&gt;Chat, Reasoning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DeepSeek-R1 (Qwen-7B)&lt;/td&gt;
&lt;td&gt;7B&lt;/td&gt;
&lt;td&gt;8-12GB&lt;/td&gt;
&lt;td&gt;6-12 t/s&lt;/td&gt;
&lt;td&gt;Math, Coding&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phi-3 Mini&lt;/td&gt;
&lt;td&gt;3.8B&lt;/td&gt;
&lt;td&gt;5-6GB&lt;/td&gt;
&lt;td&gt;10-15 t/s&lt;/td&gt;
&lt;td&gt;Light Tasks&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Winner?&lt;/strong&gt; If you’re into reasoning or coding, grab DeepSeek-R1-Distill-Qwen-7B. For everyday chatting, LLaMA 3.1 8B is your pal. Mistral’s a speed demon, and Phi-3’s the lightweight champ.&lt;/p&gt;

&lt;h2&gt;
  
  
  Popular CPU Configurations and What They Can Run
&lt;/h2&gt;

&lt;p&gt;Your CPU and RAM set the stage. Here’s what common setups can handle, checked against real benchmarks:&lt;/p&gt;

&lt;h3&gt;
  
  
  Entry-Level: Intel Core i5-12400 / AMD Ryzen 5 5600X / Apple M1 + 16GB RAM
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cores/Threads&lt;/strong&gt; : 6/12 (Intel/AMD), 8/8 (M1)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Models&lt;/strong&gt; : Phi-3 Mini, Mistral 7B (quantized)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt; : ~8-12 tokens/sec for 7B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notes&lt;/strong&gt; : M1’s efficiency rocks; 16GB caps bigger models.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Mid-Range: Intel Core i7-12700 / AMD Ryzen 7 5800X / Apple M2 + 32GB RAM
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cores/Threads&lt;/strong&gt; : 8/16 (Ryzen), 12/20 (Intel), 8/10 (M2)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Models&lt;/strong&gt; : Mistral 7B, LLaMA 3.1 8B, DeepSeek-R1-Distill-Qwen-7B, Qwen-14B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt; : ~5-10 tokens/sec for 7B-14B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notes&lt;/strong&gt; : M2’s neural engine kicks in; my Ryzen 7 7840HS (8/16) fits here—8B models fly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  High-End: Intel Core i9-13900K / AMD Ryzen 9 7950X / Apple M3 Max or M4 + 64GB RAM
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cores/Threads&lt;/strong&gt; : 16-24/32 (Intel/AMD), 14/14 (M3 Max), 10-12/10-12 (M4 base)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Models&lt;/strong&gt; : LLaMA 3.1 13B, DeepSeek-R1-Distill-Qwen-32B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt; : ~3-8 tokens/sec for 13B-32B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notes&lt;/strong&gt; : M3 Max and M4 (10-12 cores) soar with macOS tweaks; 64GB handles big models with ease.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tips to Nail It
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Quantize Smart&lt;/strong&gt; : Use 4-bit models (e.g., Q4_K_M) to fit more in RAM—they’re still sharp.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Free Up Space&lt;/strong&gt; : Shut down Chrome or games to give your LLM breathing room.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mix and Match&lt;/strong&gt; : Download a few models—DeepSeek-R1 for logic, LLaMA for chats.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Troubleshoot&lt;/strong&gt; : If it lags or crashes, check LM Studio’s logs for hints.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;So that’s how you run an LLM locally with LM Studio. It’s SUPER COOL! With my Ryzen 7 and 32GB RAM, I’m loving DeepSeek-R1-Distill-Qwen-7B and LLaMA 3.1 8B.&lt;/p&gt;

&lt;p&gt;Grab LM Studio and give DeepSeek-R1 a spin on your rig—I’d love to hear how it goes in the comments!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
    </item>
    <item>
      <title>How to Log In to a Remote Linux Machine Using SSH Keys</title>
      <dc:creator>Ali Shaikh</dc:creator>
      <pubDate>Sat, 22 Feb 2025 07:02:50 +0000</pubDate>
      <link>https://dev.to/alishaikh/how-to-log-in-to-a-remote-linux-machine-using-ssh-keys-2b2n</link>
      <guid>https://dev.to/alishaikh/how-to-log-in-to-a-remote-linux-machine-using-ssh-keys-2b2n</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Aw8VRw3mBtP5Y8ieY" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Aw8VRw3mBtP5Y8ieY" width="1024" height="683"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Jordan Harrison on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;SSH (Secure Shell) is a common way to securely connect to remote Linux machines. Instead of using passwords, SSH keys make logging in safer and easier. This guide will show you how to install an SSH server, create SSH keys, add your public key to the remote server and then connect using your SSH keys.&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Installing the SSH Server
&lt;/h3&gt;

&lt;p&gt;To enable SSH access, the remote Linux machine must have an SSH server installed.&lt;/p&gt;
&lt;h4&gt;
  
  
  For Debian-based Systems (e.g., Ubuntu)
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install openssh-server -y
sudo systemctl enable ssh
sudo systemctl start ssh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  For Red Hat-based Systems (e.g., CentOS, Rocky Linux)
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum update -y
sudo yum install -y openssh-server
sudo systemctl enable sshd
sudo systemctl start sshd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Verify SSH Server is Running
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status ssh # For Ubuntu/Debian
sudo systemctl status sshd # For CentOS/RHEL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If the server is running, you should see active (running) as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjyuxbmz74b1ioshhp33.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjyuxbmz74b1ioshhp33.png" width="669" height="174"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  2. Generating SSH Keys on your Local Machine
&lt;/h3&gt;

&lt;p&gt;SSH keys consist of a &lt;strong&gt;private key&lt;/strong&gt; (kept secure on your local machine) and a &lt;strong&gt;public key&lt;/strong&gt; (placed on the remote server). If you haven’t generated SSH keys before, check out this detailed guide:&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://ali-shaikh.medium.com/generating-ssh-keys-on-windows-linux-and-macos-3bfd55e12d6a" rel="noopener noreferrer"&gt;&lt;strong&gt;Generating SSH Keys on Windows, Linux, and macOS&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you’ve generated your SSH keys, proceed to the next step to configure the server for SSH key-based authentication by adding your public key to the remote server.&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Adding your Public Key to the Remote Machine
&lt;/h3&gt;

&lt;p&gt;Now we need to copy the public key to the remote server so it will accept your key for login. (On Linux and macOS, the ssh-copy-id username@remote_host command automates the steps below, but it's worth knowing the manual process.)&lt;/p&gt;
&lt;h4&gt;
  
  
  Copy your Public Key from the Local Machine
&lt;/h4&gt;

&lt;p&gt;On your local machine, display the public key by running cat ~/.ssh/id_rsa.pub or opening the .pub file in Notepad, and copy the contents.&lt;/p&gt;
&lt;h4&gt;
  
  
  Prepare the Remote Server
&lt;/h4&gt;

&lt;p&gt;On the remote machine, create the .ssh directory and authorized_keys file if they do not already exist.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Add the Public Key to Authorized Keys
&lt;/h4&gt;

&lt;p&gt;Paste the copied key into the ~/.ssh/authorized_keys file on the remote machine. If there's already another key in the file, make sure to add your key &lt;strong&gt;on a new line&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Alternatively, you can use the echo command on the remote machine to append it directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "PASTE_PUBLIC_KEY_HERE" &amp;gt;&amp;gt; ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Set the Correct Permissions
&lt;/h4&gt;

&lt;p&gt;Some systems will not allow you to use your SSH keys if the permissions are too open or incorrect. Run the following commands to set the recommended permissions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Connecting to the Remote Machine Using SSH Keys
&lt;/h3&gt;

&lt;p&gt;Now that the remote machine is ready for SSH-based authentication and your public key has successfully been added, you can connect from your local machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh username@remote_host
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your private key is stored in a non-default location, pass its path with the -i flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i /path/to/private_key username@remote_host
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
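
&lt;p&gt;If you connect often, you can save the key path and username in ~/.ssh/config so a plain ssh command picks them up automatically (the host alias and paths below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host myserver
    HostName remote_host
    User username
    IdentityFile /path/to/private_key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After that, ssh myserver is all you need.&lt;/p&gt;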



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;You have now set up your remote server for SSH access and learned how to log in securely using SSH keys, eliminating the need to enter a password each time. This method enhances security and convenience, making remote access more efficient. With SSH keys in place, you can now connect to your server seamlessly and focus on what matters most.&lt;/p&gt;

</description>
      <category>servers</category>
      <category>linux</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Cloning GitHub Repositories with SSH: A Step-by-Step Guide</title>
      <dc:creator>Ali Shaikh</dc:creator>
      <pubDate>Tue, 18 Feb 2025 04:01:41 +0000</pubDate>
      <link>https://dev.to/alishaikh/cloning-github-repositories-with-ssh-a-step-by-step-guide-5d1h</link>
      <guid>https://dev.to/alishaikh/cloning-github-repositories-with-ssh-a-step-by-step-guide-5d1h</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Ak_-DccTGq476Pwx4" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Ak_-DccTGq476Pwx4" width="1024" height="669"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Roman Synkevych on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the previous guide, we learnt how to generate SSH keys on Windows, Linux, and macOS; you can read it &lt;a href="https://ali-shaikh.medium.com/generating-ssh-keys-on-windows-linux-and-macos-3bfd55e12d6a" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll explain how to use your SSH keys to clone GitHub repositories — a common task for developers working on projects locally. And remember, these SSH keys aren’t just for cloning; they also secure your push and pull operations.&lt;/p&gt;
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before you begin, make sure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SSH Key Pair:&lt;/strong&gt; You have generated an SSH key pair, to learn how click &lt;a href="https://ali-shaikh.medium.com/generating-ssh-keys-on-windows-linux-and-macos-3bfd55e12d6a" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git Installed:&lt;/strong&gt; Git is installed on your system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internet Access:&lt;/strong&gt; You can access GitHub.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Adding Your SSH Key to GitHub
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Step 1: Copy Your Public Key
&lt;/h4&gt;

&lt;p&gt;Your public key is typically stored in a file like ~/.ssh/id_ed25519.pub (or id_rsa.pub if you’re using RSA). You can simply display the key and copy it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat ~/.ssh/id_rsa.pub

OR

cat ~/.ssh/id_ed25519.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 2: Log in to Your GitHub Account
&lt;/h4&gt;

&lt;p&gt;Open your web browser and navigate to &lt;a href="https://github.com" rel="noopener noreferrer"&gt;GitHub.com&lt;/a&gt;. Log in with your credentials.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Navigate to SSH Keys Settings
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Click your profile icon&lt;/strong&gt; in the top right corner and select &lt;strong&gt;‘Settings’&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In the left sidebar, click &lt;strong&gt;‘SSH and GPG keys’&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79qxb4foo5chfwz683bs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79qxb4foo5chfwz683bs.png" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 4: Add a New SSH Key
&lt;/h4&gt;

&lt;p&gt;Click the &lt;strong&gt;“New SSH key”&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Title:&lt;/strong&gt; Enter a descriptive title (e.g., ‘My Laptop Key’).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key:&lt;/strong&gt; Paste the copied public key into the key field.&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;“Add SSH key.”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftp7kjulkbnbvosorcqwm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftp7kjulkbnbvosorcqwm.png" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloning a Repository Using SSH
&lt;/h3&gt;

&lt;p&gt;Now you are ready to enter the realm of passwordless authentication!&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Verify Your SSH Connection
&lt;/h4&gt;

&lt;p&gt;Before cloning, verify that your SSH configuration is working:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -T git@github.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If it’s your first time connecting, you might be asked to confirm the authenticity of GitHub’s host. Type &lt;strong&gt;‘yes’&lt;/strong&gt; to continue.&lt;/p&gt;

&lt;p&gt;If all goes well, you will see the following success message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuek9x6tyj67so2qvujo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuek9x6tyj67so2qvujo.png" width="800" height="68"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This means you are good to go and can use SSH to clone GitHub repositories.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: Obtain the Repository’s SSH URL
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to your repository on GitHub.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;‘Code’&lt;/strong&gt; button.&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;‘SSH’&lt;/strong&gt; tab and copy the URL.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw1ce5keiios8a6b4n67.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw1ce5keiios8a6b4n67.png" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Clone the Repository
&lt;/h4&gt;

&lt;p&gt;In your terminal, change to the directory where you want the repository, then type &lt;strong&gt;git clone&lt;/strong&gt; followed by the &lt;strong&gt;SSH URL&lt;/strong&gt; you copied earlier and press Enter.&lt;/p&gt;
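<p>&lt;p&gt;For example, with a placeholder repository the command looks like this (substitute the SSH URL you copied):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The repository path below is a placeholder; use your own SSH URL
git clone git@github.com:your-username/your-repo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;</p>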

&lt;p&gt;Aaaand that's it! You have successfully learnt how to clone GitHub repositories without having to enter your username and password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8e1ymr377sl2taxe5iht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8e1ymr377sl2taxe5iht.png" width="718" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;By following these steps, you’ve not only added your SSH key to GitHub but also learned how to clone repositories securely over SSH.&lt;/p&gt;

&lt;p&gt;This method works seamlessly for all Git operations, including pushing new commits and pulling updates from remote repositories. You can follow the same steps to add your SSH keys in GitLab, Bitbucket, and more.&lt;/p&gt;

&lt;p&gt;This is a very convenient way of accessing Git repositories and will make your development workflows more efficient.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;In a future tutorial, we will learn how to SSH into a remote server.&lt;/p&gt;

</description>
      <category>git</category>
      <category>github</category>
      <category>engineering</category>
      <category>development</category>
    </item>
    <item>
      <title>Mastering Proxmox Automation: Build an Ubuntu 24.04 Cloud‑Init Template for Rapid VM Deployment</title>
      <dc:creator>Ali Shaikh</dc:creator>
      <pubDate>Mon, 17 Feb 2025 04:01:44 +0000</pubDate>
      <link>https://dev.to/alishaikh/mastering-proxmox-automation-build-an-ubuntu-2404-cloud-init-template-for-rapid-vm-deployment-2c31</link>
      <guid>https://dev.to/alishaikh/mastering-proxmox-automation-build-an-ubuntu-2404-cloud-init-template-for-rapid-vm-deployment-2c31</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AD4YBoZeeCgIQ16P4" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AD4YBoZeeCgIQ16P4" width="1024" height="768"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Gabriel Heinzer on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Recently I embarked on a journey to set up a Kubernetes cluster on my home lab running Proxmox. From my experience designing K8s architectures on AWS EKS and Azure AKS, I knew I would need to create multiple VMs, so instead of creating them one by one and repeating the same steps, I decided to leverage automation and Cloud‑Init enabled VM templates to streamline the process.&lt;/p&gt;
&lt;h3&gt;
  
  
  How Cloud‑Init and Proxmox Automation Help
&lt;/h3&gt;

&lt;p&gt;Cloud‑Init is a powerful tool designed for early initialisation of cloud instances. By leveraging Cloud‑Init with Proxmox, you can automate the initial configuration of your VMs, installing necessary packages, setting up SSH keys, and even pre-configuring network settings. Once the VM template is created, deploying new VMs becomes a matter of minutes rather than hours, ensuring that each instance is consistently configured and ready to run immediately.&lt;/p&gt;
&lt;h3&gt;
  
  
  More Automation: Cue Bash Scripting 😍
&lt;/h3&gt;

&lt;p&gt;Below is the Bash script that automates the entire process, streamlining the configuration and setup of your VM template. Why Type When You Can Script, Eh? 😏&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# VM and storage identifiers
VMID=10002
STORAGE="local-lvm" # Adjust if using a different storage backend

# Define the VM name and Ubuntu image parameters
VM_NAME="ubuntu-2404-cloudinit-template"
IMAGE_URL="https://cloud-images.ubuntu.com/releases/24.04/release/ubuntu-24.04-server-cloudimg-amd64.img"
IMAGE_PATH="/var/lib/vz/template/iso/$(basename ${IMAGE_URL})"

# Cloud‑Init credentials (consider stronger hashing for production)
CI_USER="ubuntu"
CI_PASSWORD="$(openssl passwd -6 ubuntu)"

# Create an SSH public key file for injecting into the VM
echo "ssh-rsa PASTE YOUR PUBLIC KEY HERE" &amp;gt; ssh.pub

# 1. Download the Ubuntu 24.04 Cloud Image if not already present
if [ ! -f "${IMAGE_PATH}" ]; then
  echo "Downloading Ubuntu 24.04 Cloud Image..."
  wget -P /var/lib/vz/template/iso/ "${IMAGE_URL}"
else
  echo "Image already exists at ${IMAGE_PATH}"
fi

set -x # Enable command echoing for debugging

# Destroy any existing VM with the same VMID to avoid conflicts
qm destroy $VMID 2&amp;gt;/dev/null || true

# Create a new VM with UEFI support and custom hardware configurations
echo "Creating VM ${VM_NAME} with VMID ${VMID}..."
qm create ${VMID} \
  --name "${VM_NAME}" \
  --ostype l26 \
  --memory 2048 \
  --cores 2 \
  --agent 1 \
  --bios ovmf --machine q35 --efidisk0 ${STORAGE}:0,pre-enrolled-keys=0 \
  --cpu host --sockets 1 \
  --vga serial0 --serial0 socket \
  --net0 virtio,bridge=vmbr0

# Import the downloaded disk image into Proxmox storage
echo "Importing disk image..."
qm importdisk ${VMID} "${IMAGE_PATH}" ${STORAGE}

# Attach the imported disk as a VirtIO disk using the SCSI controller
qm set $VMID --scsihw virtio-scsi-pci --virtio0 $STORAGE:vm-${VMID}-disk-1,discard=on

# Resize the disk by adding an additional 8G
qm resize ${VMID} virtio0 +8G

# Configure the VM to boot from the VirtIO disk
qm set $VMID --boot order=virtio0

# Attach a Cloud‑Init drive (recommended on UEFI systems using SCSI)
qm set $VMID --scsi1 $STORAGE:cloudinit

# Create a custom Cloud‑Init configuration snippet
cat &amp;lt;&amp;lt; EOF | tee /var/lib/vz/snippets/ubuntu.yaml
#cloud-config
runcmd:
    - apt-get update
    - apt-get install -y qemu-guest-agent htop
    - systemctl enable ssh
    - reboot
EOF

# Apply Cloud‑Init customizations and additional VM settings
qm set $VMID --cicustom "vendor=local:snippets/ubuntu.yaml"
qm set $VMID --tags ubuntu-template,noble,cloudinit
qm set ${VMID} --ciuser ${CI_USER} --cipassword "${CI_PASSWORD}"
qm set $VMID --sshkeys ssh.pub
qm set $VMID --ipconfig0 ip=dhcp

# Finally, convert the configured VM into a template for rapid future deployment
echo "Converting VM to template..."
qm template ${VMID}

echo "Template ${VM_NAME} (VMID: ${VMID}) created successfully."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Make It Your Own
&lt;/h3&gt;

&lt;p&gt;Before you run the script there are some things you would need to modify to tailor it to your environment. Here’s a quick rundown of the main components you might consider editing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# VM and storage identifiers
VMID=10002
STORAGE="local-lvm" # Adjust if using a different storage backend

# Define the VM name and Ubuntu image parameters
VM_NAME="ubuntu-2404-cloudinit-template"
IMAGE_URL="https://cloud-images.ubuntu.com/releases/24.04/release/ubuntu-24.04-server-cloudimg-amd64.img"
IMAGE_PATH="/var/lib/vz/template/iso/$(basename ${IMAGE_URL})"

# Cloud‑Init credentials (consider stronger hashing for production)
CI_USER="ubuntu"
CI_PASSWORD="$(openssl passwd -6 ubuntu)"

# Create an SSH public key file for injecting into the VM
echo "ssh-rsa PASTE YOUR PUBLIC KEY HERE" &amp;gt; ssh.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VM Identification &amp;amp; Naming:&lt;/strong&gt;
Customize the VMID and VM_NAME to assign a unique identifier and a descriptive name for your template.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage Settings:&lt;/strong&gt;
The STORAGE variable lets you define the backend storage (e.g., local-lvm, local-zfs). Adjust this parameter based on your storage setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image Source &amp;amp; Path:&lt;/strong&gt;
The IMAGE_URL points to the cloud image you wish to use, while IMAGE_PATH determines where the image will be stored locally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud‑Init Credentials:&lt;/strong&gt;
Parameters like CI_USER and CI_PASSWORD are set to pre-configure the default login credentials. Please modify these as per your requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSH Key Injection:&lt;/strong&gt;
The script injects an &lt;a href="https://medium.com/@ali-shaikh/generating-ssh-keys-on-windows-linux-and-macos-3bfd55e12d6a" rel="noopener noreferrer"&gt;SSH&lt;/a&gt; public key file for secure access to your VM, saving you the hassle of manually copying SSH keys after deployment. Please add your SSH key here echo "ssh-rsa PASTE YOUR PUBLIC KEY HERE" &amp;gt; ssh.pub. If you need to create a new SSH Key click &lt;a href="https://medium.com/@ali-shaikh/generating-ssh-keys-on-windows-linux-and-macos-3bfd55e12d6a" rel="noopener noreferrer"&gt;here&lt;/a&gt; to learn how.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How to Use This Script
&lt;/h3&gt;

&lt;p&gt;It’s really simple — just SSH into your Proxmox server, create a new file (e.g., create-ubuntu-2404-template.sh), paste the script into it, make it executable, and run it. The script handles everything for you, from downloading the Ubuntu image to converting the VM into a reusable template.&lt;/p&gt;

&lt;p&gt;Here’s a step-by-step guide:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;SSH into Your Proxmox Server:&lt;/strong&gt;
ssh your_user@proxmox-server-ip&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create the Script File:&lt;/strong&gt;
Use your preferred text editor to create a new file. For example, using Nano: nano create-ubuntu-2404-template.sh&lt;/li&gt;
&lt;li&gt;Paste the script into this file, then save and exit the editor.
&lt;em&gt;In Nano, press&lt;/em&gt; &lt;em&gt;Ctrl+O to save, then&lt;/em&gt; &lt;em&gt;Ctrl+X to exit.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Make the Script Executable:&lt;/strong&gt;
Change the file permissions to make it executable:
chmod +x create-ubuntu-2404-template.sh&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run the Script:&lt;/strong&gt;
Execute the script with:
./create-ubuntu-2404-template.sh&lt;/li&gt;
&lt;/ol&gt;
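<p>&lt;p&gt;The steps above can be condensed into the following terminal session (the server address and user are placeholders for your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. SSH into the Proxmox server (replace with your own user and host)
ssh root@proxmox-server-ip

# 2-3. Create the script file, paste in the script, save and exit
nano create-ubuntu-2404-template.sh

# 4. Make it executable
chmod +x create-ubuntu-2404-template.sh

# 5. Run it
./create-ubuntu-2404-template.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;</p>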

&lt;p&gt;That’s it! With these commands, your Proxmox server will automatically handle downloading the Ubuntu image, setting up the VM, integrating Cloud‑Init, and converting the VM into a reusable template.&lt;/p&gt;

&lt;h3&gt;
  
  
  Profit!
&lt;/h3&gt;

&lt;p&gt;Awesome! So now you have a shiny template ready to be used. You can use this template to create as many VMs as you want and you will have a brand new VM up and running in minutes. Now let's see how to use this template to create a new VM.&lt;/p&gt;

&lt;p&gt;Login to Proxmox and locate the template that you just created&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcpgt55wnoij8t6t56cd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcpgt55wnoij8t6t56cd.png" width="391" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, I will use the ‘ubuntu-2404-cloudinit-k8s-template’. Right-click on the template and click Clone. Fill in the VM ID and Name and click Clone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4unvqfcrvs21kn1vbrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4unvqfcrvs21kn1vbrg.png" width="604" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the cloning is complete, the new VM will appear in your list. You can modify the hardware settings if required, then click &lt;strong&gt;Start&lt;/strong&gt;. The VM will boot up, and Cloud‑Init will automatically perform the initial configuration tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ytlk1ztvakoaq8f6ga2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ytlk1ztvakoaq8f6ga2.png" width="748" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And voila, the VM is created.&lt;/p&gt;
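<p>&lt;p&gt;If you prefer the command line, the same clone-and-start can be done with &lt;code&gt;qm clone&lt;/code&gt; on the Proxmox host. The VM IDs and name below are examples; use your own template ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Full clone of template 10002 into a new VM 201, then start it
qm clone 10002 201 --name k8s-node-01 --full
qm start 201
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;</p>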

&lt;p&gt;In a future tutorial, we will look at how to configure the Cloud-Init configuration to set things like a new user, password, static IP etc. and also I will teach you how to create a Kubernetes-ready template so you can quickly set up as many nodes as you wish.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxe8ccr354deeeyz0l3dm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxe8ccr354deeeyz0l3dm.png" width="674" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to leave any feedback, suggestions or your thoughts down in the comments.&lt;/p&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>virtualization</category>
      <category>automation</category>
      <category>ubuntu</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Generating SSH Keys on Windows, Linux, and macOS</title>
      <dc:creator>Ali Shaikh</dc:creator>
      <pubDate>Sat, 15 Feb 2025 16:24:28 +0000</pubDate>
      <link>https://dev.to/alishaikh/generating-ssh-keys-on-windows-linux-and-macos-17ip</link>
      <guid>https://dev.to/alishaikh/generating-ssh-keys-on-windows-linux-and-macos-17ip</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2ARcL9hVVRvmKpo1fU" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2ARcL9hVVRvmKpo1fU" width="1024" height="683"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Jakub Żerdzicki on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;SSH keys are a secure way to authenticate when connecting to remote servers and services such as GitHub Repositories. In this article, we’ll walk through the process of generating SSH keys on three popular operating systems: Windows, Linux, and macOS.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why Use SSH Keys?
&lt;/h3&gt;

&lt;p&gt;Using SSH keys enhances security by replacing less secure password authentication. Keys are generated as a pair: a public key (which you share with remote servers) and a private key (which you keep secret). Once set up, you can log into your servers or repositories with ease.&lt;/p&gt;
&lt;h3&gt;
  
  
  Generating SSH Keys on Windows
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Install Git for Windows:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Download and install &lt;a href="https://gitforwindows.org/" rel="noopener noreferrer"&gt;Git for Windows&lt;/a&gt;, which includes Git Bash—a command-line environment that supports SSH.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Open Git Bash:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Launch Git Bash from the Start menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Generate the Key Pair:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Replace &lt;code&gt;email@cool-domain.com&lt;/code&gt; with your email address in the ssh-keygen command.&lt;br&gt;&lt;br&gt;
For example, you might run:&lt;br&gt;&lt;br&gt;
ssh-keygen -t rsa -b 4096 -C "&lt;a href="mailto:email@cool-domain.com"&gt;email@cool-domain.com&lt;/a&gt;"  &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-t rsa&lt;/code&gt;: Specifies the type of key to create.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-b 4096&lt;/code&gt;: Sets the key size to 4096 bits for enhanced security.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-C "email@cool-domain.com"&lt;/code&gt;: Adds a comment to help identify the key. This can be your email address, computer or device name, or any text that identifies what the key is for.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Follow the Prompts:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Press Enter to accept the default file location (typically &lt;code&gt;C:\Users\{USERNAME}\.ssh\id_rsa&lt;/code&gt;). Choose a secure passphrase when prompted, or press Enter to continue without a passphrase.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Complete the Process:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
After confirmation, your SSH key pair is created and stored in your user’s &lt;code&gt;.ssh&lt;/code&gt; directory. Your public key will have a &lt;code&gt;.pub&lt;/code&gt; extension. To view the public key, run:&lt;br&gt;&lt;br&gt;
cat ~/.ssh/id_rsa.pub&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Generating SSH Keys on Linux
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Using the Terminal
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Open a Terminal Window:&lt;/strong&gt;
Most Linux distributions include a built-in terminal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate the Key Pair:&lt;/strong&gt;
Enter the following command:
ssh-keygen -t rsa -b 4096 -C '&lt;a href="mailto:email@cool-domain.com"&gt;email@cool-domain.com&lt;/a&gt;'
The command options are identical to those used on Windows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Respond to Prompts:&lt;/strong&gt;
Press Enter to accept the default file location (typically ~/.ssh/id_rsa). Choose a secure passphrase when prompted, or press Enter again to continue without a passphrase.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Locate Your Keys:&lt;/strong&gt;
Your new SSH key pair will be stored in the ~/.ssh/ directory. You can view the public key using:
cat ~/.ssh/id_rsa.pub&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Generating SSH Keys on macOS
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Using the Terminal
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Open Terminal:&lt;/strong&gt;
Find Terminal in your Applications &amp;gt; Utilities folder, or use Spotlight to search for it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate the Key Pair:&lt;/strong&gt;
Run the following command in the terminal:
ssh-keygen -t rsa -b 4096 -C '&lt;a href="mailto:email@cool-domain.com"&gt;email@cool-domain.com&lt;/a&gt;'&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Follow the Prompts:&lt;/strong&gt;
Press Enter to accept the default file location (typically ~/.ssh/id_rsa). Choose a secure passphrase when prompted, or press Enter again to continue without a passphrase.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify Your SSH Keys:&lt;/strong&gt;
To display your public key, type: cat ~/.ssh/id_rsa.pub.&lt;/li&gt;
&lt;/ol&gt;
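<p>&lt;p&gt;On any of the three platforms you can also verify a key non-interactively by printing its fingerprint. The commands below create a throwaway demo key at a temporary path (an example location, not your real key) and then inspect it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Generate a demo key pair with no passphrase at a temporary path
ssh-keygen -t rsa -b 4096 -C "email@cool-domain.com" -N "" -f /tmp/demo_key

# Print the key's size, fingerprint, and comment
ssh-keygen -lf /tmp/demo_key.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;</p>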

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Congratulations, you have successfully learnt how to generate SSH key pairs. This is a fundamental skill for your IT toolbox. Remember to keep your private key safe and secure. In a later tutorial, we will learn different ways to use these keys to enhance your IT workflows.&lt;/p&gt;

&lt;p&gt;I encourage you to share your thoughts and ask questions in the comments!&lt;/p&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>macos</category>
      <category>linux</category>
      <category>ssh</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Upgrade AWS Elastic Beanstalk from PHP 7.4 to PHP 8.0</title>
      <dc:creator>Ali Shaikh</dc:creator>
      <pubDate>Tue, 11 May 2021 16:28:09 +0000</pubDate>
      <link>https://dev.to/alishaikh/upgrade-aws-elastic-beanstalk-from-php-7-4-to-php-8-0-536o</link>
      <guid>https://dev.to/alishaikh/upgrade-aws-elastic-beanstalk-from-php-7-4-to-php-8-0-536o</guid>
      <description>&lt;h1&gt;
  
  
  Upgrade AWS Elastic Beanstalk from PHP 7.4 to PHP 8.0
&lt;/h1&gt;

&lt;p&gt;The AWS Elastic Beanstalk Console currently only permits you to change between minor platform versions (e.g. from 64bit Amazon Linux 2 v3.1.6 running PHP 7.4 to 64bit Amazon Linux 2 v3.2.1 running PHP 7.4), but does not allow changes between major versions (e.g. from 64bit Amazon Linux 2 v3.2.1 running PHP 7.4 to 64bit Amazon Linux 2 v3.2.1 running PHP 8.0).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AC2YZik2va_hI-hlm29cyHQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AC2YZik2va_hI-hlm29cyHQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;But, fret not!!!&lt;/em&gt; It is possible to update to a major platform version using the &lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS Command Line Interface (CLI)&lt;/a&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws elasticbeanstalk update-environment --solution-stack-name "64bit Amazon Linux 2 v3.2.1 running PHP 8.0" --environment-id "x-xxxxxxxxxx" --region "us-west-1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To find the newest PHP platform version AWS Elastic Beanstalk supports view the &lt;a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.PHP" rel="noopener noreferrer"&gt;Elastic Beanstalk Supported Platforms&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For previous versions see the &lt;a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platform-history-php.html" rel="noopener noreferrer"&gt;PHP Platform History&lt;/a&gt;.&lt;/p&gt;
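<p>&lt;p&gt;You can also list the platform versions available in your region directly from the CLI and filter for PHP (the output will vary by region and over time):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws elasticbeanstalk list-available-solution-stacks --region "us-west-1" | grep "PHP"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;</p>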

</description>
      <category>aws</category>
      <category>elasticbeanstalk</category>
      <category>php</category>
      <category>laravel</category>
    </item>
  </channel>
</rss>
