<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Cznorth</title>
    <description>The latest articles on DEV Community by Cznorth (@cznorth).</description>
    <link>https://dev.to/cznorth</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3911760%2Fc2fef756-8ced-43f2-b506-0de13cfff9d7.jpeg</url>
      <title>DEV Community: Cznorth</title>
      <link>https://dev.to/cznorth</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cznorth"/>
    <language>en</language>
    <item>
      <title>WinkTerm: AI That Shares Your Terminal Session (Not Just Command Suggestions)</title>
      <dc:creator>Cznorth</dc:creator>
      <pubDate>Mon, 04 May 2026 09:36:21 +0000</pubDate>
      <link>https://dev.to/cznorth/winkterm-ai-that-shares-your-terminal-session-not-just-command-suggestions-8p9</link>
      <guid>https://dev.to/cznorth/winkterm-ai-that-shares-your-terminal-session-not-just-command-suggestions-8p9</guid>
<description>&lt;h2&gt;The Problem with AI Terminals Today&lt;/h2&gt;

&lt;p&gt;Every AI terminal tool works the same way: you describe what you want, the AI suggests a command, you copy it, alt-tab, paste it, run it, check the output, alt-tab back, and describe the next step... rinse and repeat.&lt;/p&gt;

&lt;p&gt;There is a cognitive cost to every context switch. When you are debugging a production issue at 2 AM, those seconds add up.&lt;/p&gt;

&lt;h2&gt;A Different Approach: Shared PTY&lt;/h2&gt;

&lt;p&gt;WinkTerm takes a different approach. Instead of suggesting commands in a separate chat window, the AI writes directly into your shell input line inside the same PTY session. You press Enter to execute, Backspace to edit, or Ctrl+C to cancel.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# why is nginx returning 502?&lt;/span&gt;
&lt;span class="go"&gt;[WinkTerm] Let me check the nginx error logs...
[WinkTerm] I can see the upstream is unreachable. Try this:
&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;curl &lt;span class="nt"&gt;-I&lt;/span&gt; http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
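&lt;p&gt;The "writes into your input line" behavior can be sketched with Python's standard &lt;code&gt;pty&lt;/code&gt; module (a minimal illustration of the idea, not WinkTerm's actual code): writing to the PTY master side without a trailing newline leaves the command sitting on the shell's input line, pending until the user accepts, edits, or cancels it.&lt;/p&gt;

```python
import os
import pty
import tty

# Minimal sketch (NOT WinkTerm's real code) of injecting a suggested
# command into a PTY so it appears as pending input, exactly as if the
# user had typed it but not yet pressed Enter.
def inject_suggestion(master_fd: int, command: str) -> None:
    # No trailing b"\n": the shell sees an unfinished input line,
    # so nothing executes until the user presses Enter.
    os.write(master_fd, command.encode())

master_fd, slave_fd = pty.openpty()
tty.setraw(slave_fd)  # raw mode so the demo read below doesn't block on EOL

inject_suggestion(master_fd, "curl -I http://localhost:3000")

# The slave side (where a shell would read stdin) now holds the pending line.
pending = os.read(slave_fd, 1024).decode()
print(pending)
```

&lt;p&gt;In WinkTerm's model the slave side is attached to your real shell, so the injected text lands on your actual prompt; Backspace and Ctrl+C work because the bytes are ordinary terminal input.&lt;/p&gt;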



&lt;h2&gt;How It Works&lt;/h2&gt;

&lt;p&gt;When you type a line starting with &lt;code&gt;#&lt;/code&gt; in your terminal, it is intercepted by the LangGraph-based backend agent instead of being sent to the shell. The agent can read terminal context, execute commands, or write a command into your input line.&lt;/p&gt;
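&lt;p&gt;A toy version of that routing rule looks like this (&lt;code&gt;shell_write&lt;/code&gt; and &lt;code&gt;ask_agent&lt;/code&gt; are illustrative stand-ins, not WinkTerm's real API):&lt;/p&gt;

```python
# Toy sketch of the `#` interception rule: lines starting with `#` go
# to the AI agent, everything else passes through to the shell.
# `shell_write` and `ask_agent` are hypothetical stand-ins.
def route_line(line, shell_write, ask_agent):
    stripped = line.lstrip()
    if stripped.startswith("#"):
        question = stripped.lstrip("#").strip()
        return ("agent", ask_agent(question))
    shell_write(line + "\n")  # pass through to the shell's PTY untouched
    return ("shell", None)

sent = []
kind, reply = route_line("# why is nginx returning 502?",
                         shell_write=sent.append,
                         ask_agent=lambda q: f"[WinkTerm] investigating: {q}")
print(kind, reply)  # agent [WinkTerm] investigating: why is nginx returning 502?
```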

&lt;h2&gt;Tech Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Backend: Python 3.12 + FastAPI + LangGraph&lt;/li&gt;
&lt;li&gt;Frontend: Next.js 14 + TypeScript + xterm.js&lt;/li&gt;
&lt;li&gt;Deployment: Docker Compose or desktop app&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Quick Start&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 &lt;span class="nt"&gt;-p&lt;/span&gt; 8000:8000 &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-key ghcr.io/cznorth/winkterm:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;&lt;/p&gt;
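&lt;p&gt;For the Docker Compose deployment option mentioned above, an equivalent &lt;code&gt;docker-compose.yml&lt;/code&gt; might look like this (a sketch derived from the &lt;code&gt;docker run&lt;/code&gt; command; check the repo for the real file):&lt;/p&gt;

```yaml
# Hypothetical compose file mirroring the docker run command above;
# only the image, ports, and env var are taken from the post.
services:
  winkterm:
    image: ghcr.io/cznorth/winkterm:latest
    ports:
      - "3000:3000"
      - "8000:8000"
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
```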

&lt;h2&gt;Why WinkTerm?&lt;/h2&gt;

&lt;p&gt;Unlike other AI terminals (Warp, Tabby, Claude Code), WinkTerm lets the AI share your actual PTY session. No copy-paste, no context switching: the AI writes directly into your input line, and you decide whether to execute, edit, or cancel.&lt;/p&gt;

&lt;p&gt;MIT licensed, bring your own LLM, SSH support included.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Cznorth/winkterm" rel="noopener noreferrer"&gt;https://github.com/Cznorth/winkterm&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>opensource</category>
      <category>terminal</category>
    </item>
  </channel>
</rss>
