<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alexei Ledenev</title>
    <description>The latest articles on DEV Community by Alexei Ledenev (@alexeiled).</description>
    <link>https://dev.to/alexeiled</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F632283%2F50c8249f-8550-427a-a48c-b759f41e0dd2.png</url>
      <title>DEV Community: Alexei Ledenev</title>
      <link>https://dev.to/alexeiled</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alexeiled"/>
    <language>en</language>
    <item>
      <title>CCGram - Control AI Coding Agents from Your Phone via Telegram and tmux</title>
      <dc:creator>Alexei Ledenev</dc:creator>
      <pubDate>Mon, 16 Mar 2026 12:27:11 +0000</pubDate>
      <link>https://dev.to/alexeiled/ccgram-control-ai-coding-agents-from-your-phone-via-telegram-and-tmux-4jjd</link>
      <guid>https://dev.to/alexeiled/ccgram-control-ai-coding-agents-from-your-phone-via-telegram-and-tmux-4jjd</guid>
      <description>&lt;p&gt;AI coding agents — Claude Code, Codex CLI, Gemini CLI — run in your terminal. When you step away from your desk, the session keeps working, but you lose visibility and control. Especially when the agent hits a permission prompt or needs your input.&lt;/p&gt;

&lt;p&gt;CCGram fixes this. It bridges Telegram to tmux so you can monitor and control your agents from your phone.&lt;/p&gt;

&lt;h2&gt;
  
  
  The key insight: operate on tmux, not on SDKs
&lt;/h2&gt;

&lt;p&gt;Your agent runs in a tmux window on your machine. CCGram reads its transcript output and forwards it to a Telegram Forum topic. You type in Telegram — keystrokes go to the agent's tmux pane. Walk away from your laptop, keep the conversation going. Come back, &lt;code&gt;tmux attach&lt;/code&gt;, full scrollback. Nothing lost.&lt;/p&gt;

&lt;p&gt;Other Telegram bots wrap agent SDKs to create isolated API sessions that can't be resumed in your terminal. CCGram is different — it's a thin control layer over tmux.&lt;/p&gt;
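&lt;p&gt;Conceptually (this is a sketch, not CCGram's actual source), such a bridge builds on two standard tmux primitives: reading a pane's contents and injecting keystrokes. The &lt;code&gt;agents:0&lt;/code&gt; target name here is hypothetical:&lt;/p&gt;

```shell
# Read the last 50 lines of the agent's pane to stdout
tmux capture-pane -t agents:0 -p -S -50

# Type into the agent's pane, as if from the local keyboard
tmux send-keys -t agents:0 'looks good, continue' Enter
```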

&lt;h2&gt;
  
  
  One topic, one window, one agent
&lt;/h2&gt;

&lt;p&gt;Each Telegram Forum topic binds to one tmux window running one agent session. You can run Claude Code, Codex CLI, and Gemini CLI in parallel across different topics.&lt;/p&gt;

&lt;p&gt;Creating a session: open a Telegram topic, send any message. A directory browser appears — pick your project directory, choose the agent (Claude, Codex, or Gemini), choose the mode (Standard or YOLO), and you're connected.&lt;/p&gt;

&lt;p&gt;Or create a tmux window manually, start an agent — CCGram auto-detects the provider and creates a matching topic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interactive prompts as inline keyboards
&lt;/h2&gt;

&lt;p&gt;When your agent asks for permission, approval, or input, CCGram renders the prompt as Telegram inline keyboard buttons. Tap to approve — no typing "yes" or copying options.&lt;/p&gt;

&lt;p&gt;This works across providers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Claude Code: AskUserQuestion, ExitPlanMode, permissions&lt;/li&gt;
&lt;li&gt;Codex: edit approvals (reformatted with compact summary + diff preview), selection prompts&lt;/li&gt;
&lt;li&gt;Gemini: action-required prompts from &lt;code&gt;@inquirer/select&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Claude Code: deepest integration
&lt;/h2&gt;

&lt;p&gt;Claude Code gets seven hook event types: SessionStart, Notification, Stop, SubagentStart, SubagentStop, TeammateIdle, and TaskCompleted. These provide instant session tracking and notifications instead of polling.&lt;/p&gt;

&lt;p&gt;Multi-pane support for agent teams: prompts blocking non-active panes are auto-surfaced as inline keyboard alerts, and the &lt;code&gt;/panes&lt;/code&gt; command shows all panes with their status and per-pane screenshot buttons.&lt;/p&gt;

&lt;h2&gt;
  
  
  Session management
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Recovery&lt;/strong&gt;: when a session dies, the bot offers Fresh (new session), Continue (last conversation), or Resume (pick from past sessions). Buttons adapt to provider capabilities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sessions dashboard&lt;/strong&gt; (&lt;code&gt;/sessions&lt;/code&gt;): overview of all active sessions with status and kill buttons.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Message history&lt;/strong&gt; (&lt;code&gt;/history&lt;/code&gt;): paginated browsing of past messages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terminal screenshots&lt;/strong&gt;: capture the current pane as a PNG image — useful for visual context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-close&lt;/strong&gt;: completed topics close after 30 minutes and dead sessions after 10; both timeouts are configurable and can be disabled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provider commands&lt;/strong&gt;: &lt;code&gt;/commands&lt;/code&gt; shows all slash commands available for that topic's agent. Menus auto-switch per provider context.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Operations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-instance&lt;/strong&gt;: run separate bots per Telegram group on the same machine, sharing a single bot token. Each instance has its own tmux session, config, and state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tmux auto-detection&lt;/strong&gt;: start ccgram inside an existing tmux session and it discovers all agent windows; no new session is created.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Emdash integration&lt;/strong&gt;: auto-discovers emdash-managed tmux sessions with zero configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Diagnostics&lt;/strong&gt;: &lt;code&gt;ccgram doctor&lt;/code&gt; validates setup, checks hooks, finds orphan processes. &lt;code&gt;ccgram doctor --fix&lt;/code&gt; auto-fixes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistent state&lt;/strong&gt;: thread bindings, read offsets, and window states survive restarts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run as service&lt;/strong&gt;: systemd unit, launchd plist, or detached tmux.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  uv tool &lt;span class="nb"&gt;install &lt;/span&gt;ccgram
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatives: &lt;code&gt;pipx install ccgram&lt;/code&gt;, &lt;code&gt;brew install alexei-led/tap/ccgram&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To set up: create a bot token via BotFather, enable Topics in your Telegram group, add the bot to that group, set your user ID in ~/.ccgram/.env, and run ccgram.&lt;/p&gt;
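&lt;p&gt;A minimal sketch of what &lt;code&gt;~/.ccgram/.env&lt;/code&gt; might contain. The variable names here are illustrative, not taken from the project; check the README for the exact keys:&lt;/p&gt;

```shell
# ~/.ccgram/.env -- illustrative only; the real variable names may differ
TELEGRAM_BOT_TOKEN=123456:ABC-token-from-botfather
TELEGRAM_USER_ID=111111111
```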

&lt;p&gt;For Claude Code, install the hooks for the best experience: &lt;code&gt;ccgram hook --install&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/alexei-led/ccgram" rel="noopener noreferrer"&gt;https://github.com/alexei-led/ccgram&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;PyPI: &lt;a href="https://pypi.org/project/ccgram" rel="noopener noreferrer"&gt;https://pypi.org/project/ccgram&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Documentation: &lt;a href="https://github.com/alexei-led/ccgram/blob/main/docs/guides.md" rel="noopener noreferrer"&gt;https://github.com/alexei-led/ccgram/blob/main/docs/guides.md&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MIT licensed and written in Python. If you run AI coding agents and want mobile access without losing your terminal workflow, give it a try. Contributions and feedback welcome.&lt;/p&gt;

&lt;p&gt;Hope you find this useful.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>coding</category>
      <category>cli</category>
    </item>
    <item>
      <title>Kubernetes 1.33: Resizing Pods Without the Drama (Finally!) 🎉</title>
      <dc:creator>Alexei Ledenev</dc:creator>
      <pubDate>Fri, 16 May 2025 13:32:12 +0000</pubDate>
      <link>https://dev.to/alexeiled/kubernetes-133-resizing-pods-without-the-drama-finally-3cm5</link>
      <guid>https://dev.to/alexeiled/kubernetes-133-resizing-pods-without-the-drama-finally-3cm5</guid>
      <description>&lt;p&gt;Ever found yourself in that classic Kubernetes predicament? You meticulously set up your pod resources, patted yourself on the back, deployed to production... and then reality hits. Your application is either gasping for resources like a marathon runner at mile 25, or it's hoarding CPU like it's preparing for the computing apocalypse.&lt;/p&gt;

&lt;p&gt;What happens next? In the dark ages of pre-1.33 Kubernetes, you'd have to &lt;em&gt;restart&lt;/em&gt; the poor pod! It was like performing open-heart surgery just to adjust someone's diet. &lt;/p&gt;

&lt;p&gt;But rejoice, fellow Kubernetes wranglers! Version 1.33 has arrived bearing gifts, and the showstopper is &lt;strong&gt;in-place pod vertical scaling&lt;/strong&gt; in beta and enabled by default! 🎁&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is This Sorcery? 🧙‍♂️
&lt;/h2&gt;

&lt;p&gt;In-place pod resizing allows you to change the CPU and memory allocations of running pods without restarting them. Let that sink in for a second.&lt;/p&gt;

&lt;p&gt;No restarts. No connection drops. No disruption. Just smooth resource adjustment while your application keeps humming along.&lt;/p&gt;

&lt;p&gt;It works by making the &lt;code&gt;resources.requests&lt;/code&gt; and &lt;code&gt;resources.limits&lt;/code&gt; in your pod spec mutable - they can be changed on the fly without triggering a pod recreation. This is made possible through the new resize subresource feature and kubelet's ability to dynamically reconfigure container cgroups.&lt;/p&gt;
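&lt;p&gt;You can see the cgroup v2 side of this on any Linux box: &lt;code&gt;/sys/fs/cgroup/cpu.max&lt;/code&gt; holds a "quota period" pair in microseconds, and the effective core limit is simply quota divided by period. A sketch of that arithmetic, using the values from this post's resize example:&lt;/p&gt;

```shell
# cgroup v2 cpu.max format: "quota period" (microseconds); cores = quota / period
# 100m CPU is "10000 100000"; after resizing to 200m it becomes "20000 100000"
cpu_max="20000 100000"
awk -v max="$cpu_max" 'BEGIN { split(max, f, " "); printf "%.3f cores\n", f[1] / f[2] }'
```

This is the same calculation the demo pod later in this post performs when it reports its own CPU limit.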

&lt;h2&gt;
  
  
  Why Should You Care? 🤔
&lt;/h2&gt;

&lt;p&gt;If you're thinking "neat trick, but why does it matter?", consider these scenarios:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stateful Applications&lt;/strong&gt; - Your database pod suddenly needs more memory during a heavy analytics query. Previously: restart required, connections dropped, cache flushed. Now: bump the memory, carry on!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Optimization&lt;/strong&gt; - Over-provisioning "just in case" becomes unnecessary. Start conservative, scale up only when needed, scale down when the traffic subsides.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance Tuning&lt;/strong&gt; - Experiment with different resource allocations to find the sweet spot, without annoying your users or operations teams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Spiky Workloads&lt;/strong&gt; - That batch job that needs tons of resources for 5 minutes every hour? No more choosing between wasting resources or constant restarts.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Under the Hood: How It Works 🔧
&lt;/h2&gt;

&lt;p&gt;When you resize a pod, here's what happens:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You update the pod's resource specification using the new &lt;code&gt;resize&lt;/code&gt; subresource&lt;/li&gt;
&lt;li&gt;Kubelet validates the request against node capacity&lt;/li&gt;
&lt;li&gt;If approved, kubelet instructs the container runtime to adjust cgroups&lt;/li&gt;
&lt;li&gt;Container runtime updates the limits without restarting&lt;/li&gt;
&lt;li&gt;The pod status is updated to reflect the new allocations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The pod can report its resize status through conditions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PodResizePending&lt;/strong&gt; - Can't resize right now (reasons include &lt;code&gt;Infeasible&lt;/code&gt; or &lt;code&gt;Deferred&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PodResizeInProgress&lt;/strong&gt; - Changes are being applied&lt;/li&gt;
&lt;/ul&gt;
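&lt;p&gt;You can watch these conditions while a resize is in flight with kubectl's JSONPath output (a sketch; assumes the &lt;code&gt;resize-demo&lt;/code&gt; pod used later in this post):&lt;/p&gt;

```shell
# Dump the pod's conditions; look for PodResizePending / PodResizeInProgress
kubectl get pod resize-demo -o jsonpath='{.status.conditions}'
```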

&lt;h2&gt;
  
  
  Show Me the Code! 💻
&lt;/h2&gt;

&lt;p&gt;Here's a simple example that demonstrates this feature in action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a resource-monitoring pod&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
  - name: resource-watcher
    image: ubuntu:22.04
    command:
    - "/bin/bash"
    - "-c"
    - |
      apt-get update &amp;amp;&amp;amp; apt-get install -y procps bc
      echo "=== Pod Started: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt; ==="

      # Functions to read container resource limits
      get_cpu_limit() {
        if [ -f /sys/fs/cgroup/cpu.max ]; then
          # cgroup v2
          local cpu_data=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /sys/fs/cgroup/cpu.max&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;
          local quota=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$cpu_data&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $1}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;
          local period=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$cpu_data&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $2}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;

          if [ "&lt;/span&gt;&lt;span class="nv"&gt;$quota&lt;/span&gt;&lt;span class="sh"&gt;" = "max" ]; then
            echo "unlimited"
          else
            echo "&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"scale=3; &lt;/span&gt;&lt;span class="nv"&gt;$quota&lt;/span&gt;&lt;span class="s2"&gt; / &lt;/span&gt;&lt;span class="nv"&gt;$period&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | bc&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt; cores"
          fi
        else
          # cgroup v1
          local quota=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /sys/fs/cgroup/cpu/cpu.cfs_quota_us&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;
          local period=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /sys/fs/cgroup/cpu/cpu.cfs_period_us&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;

          if [ "&lt;/span&gt;&lt;span class="nv"&gt;$quota&lt;/span&gt;&lt;span class="sh"&gt;" = "-1" ]; then
            echo "unlimited"
          else
            echo "&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"scale=3; &lt;/span&gt;&lt;span class="nv"&gt;$quota&lt;/span&gt;&lt;span class="s2"&gt; / &lt;/span&gt;&lt;span class="nv"&gt;$period&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | bc&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt; cores"
          fi
        fi
      }

      get_memory_limit() {
        if [ -f /sys/fs/cgroup/memory.max ]; then
          # cgroup v2
          local mem=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /sys/fs/cgroup/memory.max&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;
          if [ "&lt;/span&gt;&lt;span class="nv"&gt;$mem&lt;/span&gt;&lt;span class="sh"&gt;" = "max" ]; then
            echo "unlimited"
          else
            echo "&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;mem &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="m"&gt;1048576&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;&lt;span class="sh"&gt; MiB"
          fi
        else
          # cgroup v1
          local mem=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /sys/fs/cgroup/memory/memory.limit_in_bytes&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;
          echo "&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;mem &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="m"&gt;1048576&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;&lt;span class="sh"&gt; MiB"
        fi
      }

      # Print resource info every 5 seconds
      while true; do
        echo "---------- Resource Check: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt; ----------"
        echo "CPU limit: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;get_cpu_limit&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;"
        echo "Memory limit: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;get_memory_limit&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;"
        echo "Available memory: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;free &lt;span class="nt"&gt;-h&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;Mem | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $7}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;"
        sleep 5
      done
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: NotRequired
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "128Mi"
        cpu: "100m"
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After your pod is running, double the CPU:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch pod resize-demo &lt;span class="nt"&gt;--subresource&lt;/span&gt; resize &lt;span class="nt"&gt;--patch&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s1"&gt;'{"spec":{"containers":[{"name":"resource-watcher", "resources":{"requests":{"cpu":"200m"}, "limits":{"cpu":"200m"}}}]}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the pod logs, and you'll see the CPU limit magically change from 0.100 cores to 0.200 cores without any restart!&lt;/p&gt;
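&lt;p&gt;You can also confirm the resize from outside the pod. A sketch using JSONPath queries: the spec reflects the patch, and the restart count staying at 0 shows no container restart happened:&lt;/p&gt;

```shell
# New CPU limit from the pod spec -- expect 200m after the patch
kubectl get pod resize-demo -o jsonpath='{.spec.containers[0].resources.limits.cpu}'

# Restart count should still be 0: the resize happened in place
kubectl get pod resize-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
```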

&lt;h2&gt;
  
  
  Control the Resize Behavior with resizePolicy ⚙️
&lt;/h2&gt;

&lt;p&gt;Sometimes you &lt;em&gt;do&lt;/em&gt; want a restart when changing certain resources. For example, many applications can't dynamically adjust to memory changes without restarting. That's where &lt;code&gt;resizePolicy&lt;/code&gt; comes in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;resizePolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;resourceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cpu&lt;/span&gt;
  &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NotRequired&lt;/span&gt;     &lt;span class="c1"&gt;# Live CPU tweaks!&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;resourceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;memory&lt;/span&gt;
  &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RestartContainer&lt;/span&gt;  &lt;span class="c1"&gt;# Safer memory changes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this configuration, CPU changes happen in-place, but memory changes trigger a container restart.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fine Print: Limitations 📜
&lt;/h2&gt;

&lt;p&gt;While this feature is cooler than a penguin in sunglasses, it does have some limitations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Windows Support&lt;/strong&gt;: None yet, only Linux containers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Types&lt;/strong&gt;: Only CPU and memory can be resized in-place&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;QoS Class Immutability&lt;/strong&gt;: You can't change a pod's QoS class through resizing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory Decrease&lt;/strong&gt;: You can't decrease memory limits without a restart (unless using &lt;code&gt;RestartContainer&lt;/code&gt; policy)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Special Pods&lt;/strong&gt;: Pods with static CPU/memory management policies can't use this feature&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cluster Requirements&lt;/strong&gt;: Requires Kubernetes 1.33+ with a compatible container runtime&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What This Means for VPA 🤖
&lt;/h2&gt;

&lt;p&gt;The Vertical Pod Autoscaler has long been the awkward cousin at the Kubernetes scaling family reunion. Its main limitation? It had to recreate pods to adjust resources.&lt;/p&gt;

&lt;p&gt;While VPA doesn't yet support in-place resizing in Kubernetes 1.33, this feature lays the groundwork for future integration. Soon, we might see VPA attempting in-place resizes first, falling back to recreation only when necessary.&lt;/p&gt;

&lt;p&gt;Until then, you can use VPA in "Off" mode to get recommendations, then apply them manually via in-place resizing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: A Smoother Vertical Scaling Future 🚀
&lt;/h2&gt;

&lt;p&gt;In-place pod resizing in Kubernetes 1.33 is a game-changer for vertical scaling workflows. It removes one of the most significant barriers to efficient resource management - the disruption caused by pod recreation.&lt;/p&gt;

&lt;p&gt;Whether you're fine-tuning performance, responding to traffic spikes, or optimizing cloud costs, the ability to adjust resources without service interruption opens up new possibilities for Kubernetes-based applications.&lt;/p&gt;

&lt;p&gt;Give it a try in your non-production clusters, and prepare to say goodbye to the "restart to resize" era of Kubernetes!&lt;/p&gt;




&lt;p&gt;For the full blog post including more details, examples, and the future of vertical scaling in Kubernetes, check out &lt;a href="https://medium.com/@alexeiled/kubernetes-1-33-resizing-pods-without-the-drama-finally-88e4791be8d1" rel="noopener noreferrer"&gt;my article on Medium&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>vpa</category>
      <category>containers</category>
    </item>
    <item>
      <title>KubeIP v2: Assigning Static Public IPs to Kubernetes Nodes Across Cloud Providers</title>
      <dc:creator>Alexei Ledenev</dc:creator>
      <pubDate>Thu, 23 May 2024 14:55:38 +0000</pubDate>
      <link>https://dev.to/alexeiled/kubeip-v2-assigning-static-public-ips-to-kubernetes-nodes-across-cloud-providers-1a57</link>
      <guid>https://dev.to/alexeiled/kubeip-v2-assigning-static-public-ips-to-kubernetes-nodes-across-cloud-providers-1a57</guid>
      <description>&lt;p&gt;Kubernetes nodes can benefit from having dedicated static public IP addresses in certain scenarios. &lt;a href="https://github.com/doitintl/kubeip"&gt;KubeIP&lt;/a&gt;, an open-source utility, fulfills this need by assigning static public IPs to Kubernetes nodes. The latest version, KubeIP v2, extends support from Google Cloud's GKE to Amazon's EKS, with a design ready to accommodate other cloud providers. It operates as a DaemonSet, offering improved reliability, configuration flexibility, and user-friendliness over the previous Kubernetes controller method. KubeIP v2 supports assigning both IPv4 and IPv6 addresses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Gaming Applications
&lt;/h3&gt;

&lt;p&gt;In gaming scenarios, a console may need to connect directly to a cloud VM to minimize network hops and latency. Assigning a dedicated public IP to the gaming server's node allows the console to connect directly, improving the gaming experience by reducing latency and packet loss.&lt;/p&gt;

&lt;h3&gt;
  
  
  Whitelisting Agent IPs
&lt;/h3&gt;

&lt;p&gt;If you have multiple agents or services running on Kubernetes that require direct connections to an external server and that server needs to whitelist the agents' IP addresses, using KubeIP to assign stable public IPs to the nodes makes this easier to manage than allowing broader CIDR ranges. This is particularly useful when the external server has strict IP-based access controls.&lt;/p&gt;

&lt;h3&gt;
  
  
  Avoiding SNAT for Select Pods
&lt;/h3&gt;

&lt;p&gt;By default, pods are assigned private IPs from the VPC CIDR range. When they communicate with external IPv4 addresses, the Amazon VPC CNI plugin translates the pod's IP to the primary private IP of the node's network interface using SNAT (source network address translation). Sometimes, you may want to avoid SNAT for certain pods so that external services see the actual pod IPs. Assigning public IPs to nodes with KubeIP and setting &lt;code&gt;hostNetwork: true&lt;/code&gt; on the pod spec achieves this. The pod can communicate directly with external services using the node's public IP.&lt;/p&gt;
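&lt;p&gt;A minimal sketch of such a pod spec (the names are illustrative): with &lt;code&gt;hostNetwork: true&lt;/code&gt; the pod shares the node's network namespace, so its outbound traffic carries the node's KubeIP-assigned public IP:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: direct-egress        # illustrative name
spec:
  hostNetwork: true          # share the node's network namespace
  containers:
  - name: agent
    image: alpine:3.19
    command: ["sleep", "infinity"]
```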

&lt;h3&gt;
  
  
  Direct Inbound Connections and Custom Networking Scenarios
&lt;/h3&gt;

&lt;p&gt;Assigning public IPs to nodes with KubeIP enables a variety of networking scenarios. For instance, you can forward traffic directly to pods running on those nodes, which is useful when you need to expose services on the node to the internet without using a traditional load balancer. An example would be running a web server on a pod and forwarding traffic to it using the node's public IP.&lt;/p&gt;

&lt;p&gt;In addition, KubeIP can be used to implement custom networking scenarios that require public IPs on nodes. For example, you could create a custom load balancer that forwards traffic to specific nodes based on the public IP. This flexibility makes KubeIP a powerful tool for testing or deploying custom networking solutions in Kubernetes.&lt;/p&gt;

&lt;h3&gt;
  
  
  IPv6 Support
&lt;/h3&gt;

&lt;p&gt;KubeIP extends its functionality beyond IPv4 by supporting the assignment of static public IPv6 addresses to nodes. This feature is increasingly important as the internet continues transitioning towards IPv6 due to the exhaustion of IPv4 addresses. With KubeIP's IPv6 support, you can assign static public IPv6 addresses to your Kubernetes nodes, enabling them to communicate directly with external services over IPv6. This is particularly beneficial for applications that require IPv6 connectivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;KubeIP v2 is a powerful tool for assigning static public IPs to Kubernetes nodes across cloud providers. It enables a wide range of use cases, from gaming applications to custom networking scenarios, and supports both IPv4 and IPv6 addresses. The extensible design and simplified DaemonSet model make it easy to deploy and manage in your environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Involved
&lt;/h2&gt;

&lt;p&gt;As an open-source &lt;a href="https://github.com/doitintl/kubeip"&gt;project&lt;/a&gt;, we welcome contributions! Submit pull requests, open issues, help with documentation, or spread the word.&lt;/p&gt;

&lt;p&gt;For more details, check out the original &lt;a href="https://engineering.doit.com/kubeip-v2-assigning-static-public-ips-to-kubernetes-nodes-across-cloud-providers-0616f684ef28"&gt;Medium post&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>devops</category>
      <category>googlecloud</category>
    </item>
    <item>
      <title>spotinfo - cli for exploring AWS Spot instances</title>
      <dc:creator>Alexei Ledenev</dc:creator>
      <pubDate>Sun, 16 May 2021 15:17:23 +0000</pubDate>
      <link>https://dev.to/alexeiled/spotinfo-cli-for-exploring-aws-spot-instances-1o1c</link>
      <guid>https://dev.to/alexeiled/spotinfo-cli-for-exploring-aws-spot-instances-1o1c</guid>
      <description>&lt;p&gt;The &lt;a href="https://github.com/alexei-led/spotinfo"&gt;spotinfo&lt;/a&gt; is an open-source command-line tool you can use for exploring AWS Spot instances across multiple AWS regions.&lt;/p&gt;

&lt;p&gt;Under the hood, the tool uses the same data feeds as AWS Spot Advisor and AWS Spot Pricing.&lt;/p&gt;

&lt;p&gt;Compared with the &lt;a href="https://aws.amazon.com/ec2/spot/instance-advisor/"&gt;AWS Spot Advisor&lt;/a&gt;, &lt;code&gt;spotinfo&lt;/code&gt; adds advanced filtering and sorting (including regex), in-place spot prices, cross-region comparison, and multiple output formats (plain text, JSON, table, CSV).&lt;/p&gt;
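&lt;p&gt;A couple of example invocations. The flag names below are as I recall them from the project README and may have changed; run &lt;code&gt;spotinfo --help&lt;/code&gt; to confirm on your version:&lt;/p&gt;

```shell
# Exact instance type in one region, rendered as a table
spotinfo --type="m5\.xlarge" --region=us-east-1 --output=table

# Regex filter across all regions, exported as CSV
spotinfo --type="^t3" --region=all --output=csv
```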

&lt;p&gt;A short overview is available on the DoiT blog &lt;a href="https://blog.doit-intl.com/spotinfo-a-new-cli-for-aws-spot-a9748bbe338f"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
      <category>tooling</category>
    </item>
  </channel>
</rss>
