<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: MT</title>
    <description>The latest articles on DEV Community by MT (@ccbikai).</description>
    <link>https://dev.to/ccbikai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F90561%2Fb2f6dd75-6f1a-4ab5-a430-e42395b5ba8d.png</url>
      <title>DEV Community: MT</title>
      <link>https://dev.to/ccbikai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ccbikai"/>
    <language>en</language>
    <item>
      <title>hink - A short link system with just 10 lines of code.</title>
      <dc:creator>MT</dc:creator>
      <pubDate>Sun, 31 Aug 2025 04:50:55 +0000</pubDate>
      <link>https://dev.to/ccbikai/hink-a-short-link-system-with-just-10-lines-of-code-49bb</link>
      <guid>https://dev.to/ccbikai/hink-a-short-link-system-with-just-10-lines-of-code-49bb</guid>
      <description>&lt;p&gt;I’d like to share a small tool I recently created—&lt;a href="https://github.com/ccbikai/hink" rel="noopener noreferrer"&gt;&lt;strong&gt;hink&lt;/strong&gt;&lt;/a&gt;, a short link system built with fewer than 10 lines of code.&lt;/p&gt;

&lt;p&gt;This tool leverages Git and Serverless platforms to enable short link generation and access analytics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core Concept: The Clever Use of Git Commit Hashes
&lt;/h2&gt;

&lt;p&gt;The core idea behind hink is simple: it uses the hash value of an empty Git commit as a unique identifier for the short link, while storing the original long URL in the commit message. When a short link is accessed, the system retrieves the commit message via GitHub’s &lt;code&gt;.patch&lt;/code&gt; file interface, extracts the long URL, and redirects to it. Paired with a cloud platform’s WAF (Web Application Firewall) analytics dashboard, it also provides access statistics.&lt;/p&gt;
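
&lt;p&gt;Creating a short link is then just a matter of making an empty commit whose message is the long URL and pushing it. Roughly (the URL below is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Store the long URL in an empty commit's message
git commit --allow-empty -m "https://example.com/some/very/long/url"

# The commit hash (GitHub also accepts the abbreviated form) becomes the short link path
git rev-parse --short HEAD

git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;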

&lt;p&gt;This solution has been successfully tested on the following platforms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloudflare Workers/Snippets + Cloudflare WAF (Pro)&lt;/li&gt;
&lt;li&gt;Tencent Cloud EdgeOne (Free)&lt;/li&gt;
&lt;li&gt;Alibaba Cloud ESA (Free)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Code: Minimalist Implementation
&lt;/h2&gt;

&lt;p&gt;The code for hink is extremely concise, with the core logic boiled down to just a few lines. Below are the implementations for different platforms.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloudflare Workers / Alibaba Cloud ESA Version
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;GIT_REPO&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://github.com/ccbikai/hink&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;pathname&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;gitPatch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;GIT_REPO&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/commit&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.patch`&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;patch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;gitPatch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;cf&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;cacheEverything&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;cacheTtlByStatus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;200-299&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;86400&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}}).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;pathname&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;GIT_REPO&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;patch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/^Subject:&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt;&lt;span class="se"&gt;\[&lt;/span&gt;&lt;span class="sr"&gt;PATCH&lt;/span&gt;&lt;span class="se"&gt;\](&lt;/span&gt;&lt;span class="sr"&gt;.*&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="sr"&gt;$/m&lt;/span&gt;&lt;span class="p"&gt;)?.[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]?.&lt;/span&gt;&lt;span class="nf"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;redirect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;GIT_REPO&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Tencent Cloud EdgeOne Version
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;GIT_REPO&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://github.com/ccbikai/hink&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fetch&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;pathname&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;gitPatch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;GIT_REPO&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/commit&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.patch`&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;patch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;gitPatch&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;pathname&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;GIT_REPO&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;patch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/^Subject:&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt;&lt;span class="se"&gt;\[&lt;/span&gt;&lt;span class="sr"&gt;PATCH&lt;/span&gt;&lt;span class="se"&gt;\](&lt;/span&gt;&lt;span class="sr"&gt;.*&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="sr"&gt;$/m&lt;/span&gt;&lt;span class="p"&gt;)?.[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]?.&lt;/span&gt;&lt;span class="nf"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;respondWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;302&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;Location&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;GIT_REPO&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}))&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once deployed to a Serverless platform and bound to a domain, you’ll have your own short link service.&lt;/p&gt;
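
&lt;p&gt;You can sanity-check a deployment with a plain HTTP request; the domain and hash below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Should answer with a 302 and a Location header pointing at the long URL
curl -sI "https://s.example.com/abc1234" | grep -iE "^(HTTP|location)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;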

&lt;h2&gt;
  
  
  Why Build hink?
&lt;/h2&gt;

&lt;p&gt;Short link services aren’t new—there are plenty of tools out there. However, hink’s goal was to explore a minimalist and creative approach. By using Git commit hashes, it eliminates the need for database management, relies on GitHub for storage, and utilizes cloud platform WAF features for easy access analytics. For me, this was a technical experiment to solve a real problem with the least amount of code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo: Performance Across Three Platforms
&lt;/h2&gt;

&lt;p&gt;I’ve deployed hink on Cloudflare Workers, Alibaba Cloud ESA, and Tencent Cloud EdgeOne, and monitored access stats via their WAF dashboards. Below are screenshots of the performance on each platform:&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloudflare Workers
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic.miantiao.me%2Fshare%2F2025%2FePMX6q%2FZsb50p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic.miantiao.me%2Fshare%2F2025%2FePMX6q%2FZsb50p.png" alt="Cloudflare" width="800" height="2194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Alibaba Cloud ESA
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic.miantiao.me%2Fshare%2F2025%2FJiOSmY%2F6oDC36.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic.miantiao.me%2Fshare%2F2025%2FJiOSmY%2F6oDC36.png" alt="Alibaba" width="800" height="2108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Tencent Cloud EdgeOne
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic.miantiao.me%2Fshare%2F2025%2FdyzFzs%2FefOSKl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic.miantiao.me%2Fshare%2F2025%2FdyzFzs%2FefOSKl.png" alt="Tencent" width="800" height="2000"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Repository
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/ccbikai/hink" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.html.zone%2Fccbikai%2Fhink" alt="ccbikai/hink - GitHub" width="1200" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Demystifying SSH AI Chat: How It Works</title>
      <dc:creator>MT</dc:creator>
      <pubDate>Fri, 01 Aug 2025 14:07:03 +0000</pubDate>
      <link>https://dev.to/ccbikai/demystifying-ssh-ai-chat-how-it-works-14c5</link>
      <guid>https://dev.to/ccbikai/demystifying-ssh-ai-chat-how-it-works-14c5</guid>
      <description>&lt;p&gt;Hello everyone, I'm MT, and today I'd like to share my recent project - SSH AI Chat.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4n9slursnw50n2zkafsf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4n9slursnw50n2zkafsf.png" alt="SSH AI Chat" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Introduction
&lt;/h2&gt;

&lt;p&gt;SSH AI Chat is an AI chat application that you can connect to directly via SSH. Using it is incredibly simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh username@chat.aigc.ing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note⚠️: Replace username with your GitHub username&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's right, it's that simple! You don't need to install any client or open a browser. Just an SSH client is all you need to chat with AI.&lt;/p&gt;

&lt;p&gt;As a developer with a strong interest in TUI applications, I've always felt that chatting in the terminal is a really cool concept. I was initially amazed by itter.sh - a social network you could access via SSH! This made me realize that SSH isn't just for connecting to servers; it can be used for many interesting things.&lt;/p&gt;

&lt;p&gt;That's when I had this idea: how cool would it be to chat with AI via SSH! No software installation, no browser needed, just type &lt;code&gt;ssh yourname@chat.aigc.ing&lt;/code&gt; in your terminal to start chatting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Architecture
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Core Technology Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SSH Server&lt;/strong&gt;: Node.js + ssh2 module&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UI Framework&lt;/strong&gt;: React + Ink (for terminal rendering)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database&lt;/strong&gt;: PostgreSQL / PGLite (optional)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cache&lt;/strong&gt;: Redis / ioredis-mock (optional)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Integration&lt;/strong&gt;: Vercel AI SDK&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  System Architecture Diagram
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   SSH Client    │    │   SSH Server    │    │   React App     │
│                 │    │                 │    │                 │
│  ssh username@  │───▶│  Node.js +      │───▶│  Ink UI +       │
│  chat.aigc.ing  │    │  ssh2           │    │  React Hooks    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                               │
                               ▼
                       ┌─────────────────┐
                       │   AI Services   │
                       │                 │
                       │  OpenAI API     │
                       │  Gemini API     │
                       │  DeepSeek API   │
                       └─────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Core Module Analysis
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. SSH Server Module
&lt;/h3&gt;

&lt;p&gt;This is the core of the entire application, responsible for handling SSH connections and authentication. The system automatically handles key verification, GitHub public key authentication, login restrictions, and rate limiting.&lt;/p&gt;
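
&lt;p&gt;To make this concrete, here is a minimal ssh2 server sketch (heavily simplified, not the project's actual code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;const { Server } = require('ssh2')
const fs = require('fs')

// Accept a connection, check the offered public key, then hand the stream to the UI layer
const server = new Server({ hostKeys: [fs.readFileSync('host.key')] }, (client) =&gt; {
  client.on('authentication', (ctx) =&gt; {
    if (ctx.method === 'publickey') {
      // Real verification would compare ctx.key against the user's GitHub keys
      ctx.accept()
    } else {
      ctx.reject(['publickey'])
    }
  })
  client.on('ready', () =&gt; {
    client.on('session', (accept) =&gt; {
      const session = accept()
      session.on('pty', (accept) =&gt; accept())
      session.on('shell', (accept) =&gt; {
        const stream = accept() // the stream is where the terminal UI renders
        stream.end('Hello from the SSH server\n')
      })
    })
  })
})
server.listen(2222)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;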

&lt;h3&gt;
  
  
  2. Authentication System
&lt;/h3&gt;

&lt;p&gt;The cleverest design is the use of GitHub public key authentication. Users don't need to register; they can log in directly with their GitHub SSH keys. The system retrieves the user's public keys from GitHub for verification and caches them for 6 hours, making login both secure and efficient.&lt;/p&gt;
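
&lt;p&gt;This works because GitHub serves every user's SSH public keys as plain text at a stable URL. A simplified sketch of the lookup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// GitHub exposes a user's public keys at https://github.com/USERNAME.keys
async function githubKeys(username) {
  const res = await fetch(`https://github.com/${username}.keys`)
  if (!res.ok) return []
  // One key per line; drop empty lines
  return (await res.text()).split('\n').filter(Boolean)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;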

&lt;h3&gt;
  
  
  3. Terminal UI System
&lt;/h3&gt;

&lt;p&gt;The terminal UI uses the Ink framework to render React components in the terminal. Imagine the React components you usually write rendering not in a browser but in the terminal! It supports multilingual interfaces, real-time chat, history, model selection, and responsive layouts.&lt;/p&gt;
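
&lt;p&gt;For a feel of what "React in the terminal" means, here is a tiny Ink example (using React.createElement instead of JSX; not code from the project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;const React = require('react')
const { render, Text } = require('ink')

// Renders styled text to the terminal instead of the DOM
render(React.createElement(Text, { color: 'green' }, 'Hello from the terminal'))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;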

&lt;h3&gt;
  
  
  4. Chat System
&lt;/h3&gt;

&lt;p&gt;The chat system uses the Vercel AI SDK to handle AI conversations. When you type a message in the terminal, the system receives it, loads the conversation history, selects a model, streams the response in real time, and saves the conversation record. It supports streaming responses, multiple models, chain-of-thought display, and conversation history management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Challenges and Solutions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Terminal Rendering Challenges
&lt;/h3&gt;

&lt;p&gt;The biggest challenge was implementing a complex UI in the terminal. I used the Ink framework to render React components to the terminal and implemented a virtual PTY to handle terminal I/O. Rendering AI responses as Markdown in the terminal also requires a dedicated worker process to convert bold, italic, and code blocks into something the terminal can display.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. SSH Session Management
&lt;/h3&gt;

&lt;p&gt;Managing multiple SSH sessions and states requires creating independent React application instances for each connection, using Context API to manage global state, and implementing session lifecycle management.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Real-time Streaming Responses
&lt;/h3&gt;

&lt;p&gt;AI responses arrive as a stream, and if the interface repainted on every byte received, the terminal would freeze. Using the Vercel AI SDK's &lt;code&gt;streamText&lt;/code&gt; with throttled updates, refreshing at most every 300ms, keeps the output smooth without lag.&lt;/p&gt;
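
&lt;p&gt;The throttling idea can be sketched in a few lines (a simplified stand-in using the same 300ms interval; &lt;code&gt;rerender&lt;/code&gt; is a placeholder for the Ink update):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;let buffer = ''
let lastRender = 0

function onChunk(chunk) {
  buffer += chunk
  // Repaint at most every 300ms instead of on every byte
  if (Date.now() - lastRender &gt;= 300) {
    lastRender = Date.now()
    rerender(buffer)
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;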

&lt;h3&gt;
  
  
  4. Flexibility in Data Storage
&lt;/h3&gt;

&lt;p&gt;The project supports both PostgreSQL and PGLite databases, as well as Redis and in-memory caching, allowing the project to run independently or be deployed in production environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interesting Design Details
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. GitHub Authentication
&lt;/h3&gt;

&lt;p&gt;The coolest design! Users don't need to register; they can log in directly using their GitHub SSH keys, which is both convenient and secure.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Multi-model Support
&lt;/h3&gt;

&lt;p&gt;Supports multiple AI models, including DeepSeek-V3/DeepSeek-R1 and Gemini-2.5-Flash/Gemini-2.5-Pro, with chain-of-thought display. This requires handling the differences between the various model APIs while exposing a unified interface.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Internationalization Support
&lt;/h3&gt;

&lt;p&gt;Complete i18n support, automatically detecting user language preferences through the &lt;code&gt;LANG&lt;/code&gt; environment variable, with support for Chinese and English switching.&lt;/p&gt;
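
&lt;p&gt;The detection itself is a one-liner; the SSH client forwards &lt;code&gt;LANG&lt;/code&gt; through an environment request (a sketch, not the project's exact code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// e.g. LANG=zh_CN.UTF-8 selects Chinese; anything else falls back to English
const lang = (process.env.LANG || '').toLowerCase().startsWith('zh') ? 'zh' : 'en'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;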

&lt;h3&gt;
  
  
  4. Keyboard Shortcuts
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Ctrl+C&lt;/code&gt; to exit the application, &lt;code&gt;N&lt;/code&gt; for new conversation, &lt;code&gt;I&lt;/code&gt; to focus on input box, &lt;code&gt;?&lt;/code&gt; to view help. There are also some Easter egg features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Development Insights
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Possibilities of Terminal Applications
&lt;/h3&gt;

&lt;p&gt;This project showed me the enormous potential of terminal applications. Through the Ink framework, we can implement complex interactive interfaces in the terminal.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Creative Uses of SSH
&lt;/h3&gt;

&lt;p&gt;SSH is not just a remote management tool; it's also a powerful application platform. Through SSH, we can create cross-platform client applications without users needing to install any additional software.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Modern Technology Stack
&lt;/h3&gt;

&lt;p&gt;Although this is a terminal application, we used the most modern technology stack: React, TypeScript, Vercel AI SDK, etc. This proves that terminal applications can also be "modern."&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Pitfalls Encountered
&lt;/h3&gt;

&lt;p&gt;SSH terminal compatibility issues, performance problems with streaming output, and performance challenges in conversation history management were all difficulties that needed to be overcome during development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Future Outlook
&lt;/h2&gt;

&lt;p&gt;Plans include supporting more AI models, including local models like Ollama, and supporting the MCP (Model Context Protocol) to allow users to extend functionality through plugins.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;SSH AI Chat is an innovative project that integrates multiple technologies. It demonstrates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The modern possibilities of terminal applications&lt;/li&gt;
&lt;li&gt;Flexible applications of the SSH protocol&lt;/li&gt;
&lt;li&gt;React's adaptability across different platforms&lt;/li&gt;
&lt;li&gt;The popularization of AI technology&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project made me realize that technology isn't just for solving problems; it can also be fun. Combining SSH and AI created an unexpected experience.&lt;/p&gt;

&lt;p&gt;I hope this project brings some inspiration to everyone, and let's explore the boundaries of technology together!&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;p&gt;If you want to try it, you can use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh username@chat.aigc.ing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Project URL: &lt;a href="https://github.com/ccbikai/ssh-ai-chat" rel="noopener noreferrer"&gt;https://github.com/ccbikai/ssh-ai-chat&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you have any questions or suggestions about this project, feel free to discuss them on GitHub. You're also welcome to Star this project - your support is my motivation to continue developing!&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Run MCP Server in a Docker sandbox</title>
      <dc:creator>MT</dc:creator>
      <pubDate>Wed, 23 Apr 2025 14:01:48 +0000</pubDate>
      <link>https://dev.to/ccbikai/run-mcp-server-in-a-docker-sandbox-1opc</link>
      <guid>https://dev.to/ccbikai/run-mcp-server-in-a-docker-sandbox-1opc</guid>
      <description>&lt;p&gt;MCP is a hot protocol in the AI development industry this year, but its Client/Server (C/S) architecture requires users to run the MCP Server locally.&lt;/p&gt;

&lt;p&gt;Common ways to run an MCP Server include stdio transports such as npx (NPM ecosystem), uvx (Python ecosystem), and Docker, as well as HTTP (SSE/streaming). However, running commands with npx and uvx carries significant risk: accidentally executing a malicious package could expose sensitive data, a major security threat. For details, see Invariant's article &lt;a href="https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks" rel="noopener noreferrer"&gt;MCP Security Notification: Tool Poisoning Attacks&lt;/a&gt;.&lt;/p&gt;
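
&lt;p&gt;The sandboxing idea, in short, is to have the MCP client spawn &lt;code&gt;docker run&lt;/code&gt; instead of &lt;code&gt;npx&lt;/code&gt; or &lt;code&gt;uvx&lt;/code&gt;, so the server code only ever sees a disposable container (the image name below is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Instead of: npx some-mcp-server
# -i forwards stdio for the MCP transport; --rm throws the container away afterwards
docker run -i --rm some-mcp-server-image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;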

&lt;p&gt;As a software industry professional, I take security seriously. I asked ChatGPT to compile a list of NPM and PyPI supply chain attack incidents from the past 5 years, and the result was chilling.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Time&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Event&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Summary and Scope of Impact&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;February 2021&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;"Dependency Confusion" Vulnerability Disclosure&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Security researcher Alex Birsan utilized the &lt;strong&gt;Dependency Confusion&lt;/strong&gt; technique to upload packages to NPM/PyPI with the same names as internal libraries used by multiple companies, successfully infiltrating the internal servers of 35 major companies including Apple and Microsoft (&lt;a href="https://www.sonatype.com/blog/pypi-flooded-with-over-1200-dependency-confusion-packages#:~:text=Dependency%20confusion%3A%20Year%20in%20review" rel="noopener noreferrer"&gt;PyPI flooded with 1,275 dependency confusion packages&lt;/a&gt;). This demonstration sparked high concern within the industry regarding supply chain risks.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;October 2021&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;UAParser.js Library Hijacked&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The popular library &lt;em&gt;ua-parser-js&lt;/em&gt; on NPM, with over 7 million weekly downloads, was compromised by attackers via the maintainer's account to publish malicious versions (&lt;a href="https://www.sonatype.com/resources/vulnerability-timeline#:~:text=%23%23%20%20Popular%20%22ua,Attacked" rel="noopener noreferrer"&gt;A Timeline of SSC Attacks, Curated by Sonatype&lt;/a&gt;). Infected versions implanted &lt;strong&gt;password-stealing trojans&lt;/strong&gt; and &lt;strong&gt;cryptocurrency miners&lt;/strong&gt; upon installation, affecting a large number of developer systems.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;October 2021&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Poisoning via Fake Roblox Libraries&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Attackers uploaded multiple packages impersonating Roblox API on NPM (e.g., &lt;em&gt;noblox.js-proxy&lt;/em&gt;), containing obfuscated malicious code. These packages would implant &lt;strong&gt;trojans and ransomware&lt;/strong&gt; payloads after installation (&lt;a href="https://www.sonatype.com/resources/vulnerability-timeline#:~:text=,and%20has%20a%20Spooky%20Surprise" rel="noopener noreferrer"&gt;A Timeline of SSC Attacks, Curated by Sonatype&lt;/a&gt;). These packages were downloaded thousands of times, demonstrating attackers used &lt;strong&gt;typosquatting&lt;/strong&gt; to trick game developers.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;November 2021&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;COA and RC Libraries Successively Hijacked&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Popular libraries on NPM, &lt;em&gt;coa&lt;/em&gt; (millions of weekly downloads) and &lt;em&gt;rc&lt;/em&gt; (14 million weekly downloads), were successively compromised to publish malicious versions. The affected versions executed &lt;strong&gt;credential-stealing trojans&lt;/strong&gt; similar to the UAParser.js case, at one point causing build pipelines to break for numerous projects globally using frameworks like React (&lt;a href="https://www.sonatype.com/resources/vulnerability-timeline#:~:text=,js" rel="noopener noreferrer"&gt;A Timeline of SSC Attacks, Curated by Sonatype&lt;/a&gt;) (&lt;a href="https://www.sonatype.com/resources/vulnerability-timeline#:~:text=,Is%20Hijacked%2C%20Too" rel="noopener noreferrer"&gt;A Timeline of SSC Attacks, Curated by Sonatype&lt;/a&gt;). Official investigations determined the cause in both cases was compromised maintainer accounts.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;January 2022&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Colors/Faker Open Source Libraries Self-Sabotaged by Their Author&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The authors of the famous color formatting library &lt;em&gt;colors.js&lt;/em&gt; and test data generation library &lt;em&gt;faker.js&lt;/em&gt;, out of protest, injected destructive code like infinite loops in the latest versions, causing thousands of projects, including those at companies like Meta (Facebook) and Amazon, to crash (&lt;a href="https://www.sonatype.com/resources/vulnerability-timeline#:~:text=Thousands%20of%20open%20source%20projects,companies%20exploiting%20open%20source" rel="noopener noreferrer"&gt;A Timeline of SSC Attacks, Curated by Sonatype&lt;/a&gt;) (While not an external attack, it falls within the scope of supply chain poisoning).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;January 2022&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;PyPI: 1,275 Malicious Packages Deployed in Bulk&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A single user frantically published &lt;strong&gt;1,275 malicious packages&lt;/strong&gt; to PyPI in one day on January 23rd (&lt;a href="https://www.sonatype.com/resources/vulnerability-timeline#:~:text=,Than%201%2C200%20Dependency%20Confusion%20Packages" rel="noopener noreferrer"&gt;A Timeline of SSC Attacks, Curated by Sonatype&lt;/a&gt;). Most of these packages impersonated the names of well-known projects or companies (e.g., &lt;em&gt;xcryptography&lt;/em&gt;, &lt;em&gt;Sagepay&lt;/em&gt;, etc.). After installation, they collected fingerprint information like hostname, IP, etc., and exfiltrated it to the attackers via DNS/HTTP (&lt;a href="https://www.sonatype.com/blog/pypi-flooded-with-over-1200-dependency-confusion-packages#:~:text=The%20,of%20these%20components%20are%20installed" rel="noopener noreferrer"&gt;PyPI flooded with 1,275 dependency confusion packages&lt;/a&gt;) (&lt;a href="https://www.sonatype.com/blog/pypi-flooded-with-over-1200-dependency-confusion-packages#:~:text=For%20DNS%3A%20.sub.deliverycontent,online" rel="noopener noreferrer"&gt;PyPI flooded with 1,275 dependency confusion packages&lt;/a&gt;). PyPI administrators took down all related packages within an hour of receiving the report (&lt;a href="https://www.sonatype.com/blog/pypi-flooded-with-over-1200-dependency-confusion-packages#:~:text=All%20of%20the%201%2C275%20were,an%20hour%20of%20our%20report" rel="noopener noreferrer"&gt;PyPI flooded with 1,275 dependency confusion packages&lt;/a&gt;).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;March 2022&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Node-ipc "Protestware" Incident&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The author of &lt;em&gt;node-ipc&lt;/em&gt;, a library widely used by front-end build tools, added malicious code in versions v10.1.1–10.1.3: when it detected client IPs in Russia or Belarus, it would &lt;strong&gt;wipe the file system&lt;/strong&gt; and overwrite files with heart emojis (&lt;em&gt;Corrupted open-source software enters the Russian battlefield&lt;/em&gt;).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;October 2022&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;LofyGang Large-Scale Poisoning Campaign&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Security companies discovered a group named "LofyGang" distributed nearly &lt;strong&gt;200 malicious packages&lt;/strong&gt; on NPM (&lt;a href="https://thehackernews.com/2022/10/lofygang-distributed-200-malicious-npm.html#:~:text=Multiple%20campaigns%20that%20distributed%20trojanized,single%20threat%20actor%20dubbed%20LofyGang" rel="noopener noreferrer"&gt;LofyGang Distributed ~200 Malicious NPM Packages to Steal Credit Card Data&lt;/a&gt;). These packages implanted &lt;strong&gt;trojans&lt;/strong&gt; through &lt;strong&gt;typosquatting&lt;/strong&gt; and by impersonating common library names, stealing developers' credit card information, Discord accounts, and game service login credentials, accumulating thousands of installations (&lt;a href="https://thehackernews.com/2022/10/lofygang-distributed-200-malicious-npm.html#:~:text=Multiple%20campaigns%20that%20distributed%20trojanized,single%20threat%20actor%20dubbed%20LofyGang" rel="noopener noreferrer"&gt;LofyGang Distributed ~200 Malicious NPM Packages to Steal Credit Card Data&lt;/a&gt;). This was an organized cybercrime activity that lasted over a year.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;December 2022&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;PyTorch-nightly Dependency Chain Attack&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The well-known deep learning framework PyTorch disclosed that its nightly builds suffered a &lt;strong&gt;dependency confusion&lt;/strong&gt; supply chain attack between December 25 and 30 (&lt;em&gt;Malicious PyTorch dependency ‘torchtriton’ on PyPI&lt;/em&gt;).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;March 2023&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;"W4SP Stealer" Trojan Rampant on PyPI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Security researchers successively discovered a large number of malicious packages carrying the &lt;strong&gt;W4SP Stealer&lt;/strong&gt; information-stealing trojan appearing on PyPI (&lt;a href="https://thehackernews.com/2022/12/w4sp-stealer-discovered-in-multiple.html#:~:text=Threat%20actors%20have%20published%20yet,malware%20on%20compromised%20developer%20machines" rel="noopener noreferrer"&gt;W4SP Stealer Discovered in Multiple PyPI Packages Under Various Names&lt;/a&gt;). These trojans have many aliases (e.g., ANGEL Stealer, PURE Stealer, etc.) but essentially all belong to the W4SP family, specifically designed to steal information like user passwords, cryptocurrency wallets, and Discord tokens (&lt;a href="https://thehackernews.com/2022/12/w4sp-stealer-discovered-in-multiple.html#:~:text=Interestingly%2C%20while%20the%20malware%20goes,be%20copies%20of%20W4SP%20Stealer" rel="noopener noreferrer"&gt;W4SP Stealer Discovered in Multiple PyPI Packages Under Various Names&lt;/a&gt;). A single report revealed 16 such malicious packages (e.g., &lt;em&gt;modulesecurity&lt;/em&gt;, &lt;em&gt;easycordey&lt;/em&gt;, etc.) (&lt;a href="https://thehackernews.com/2022/12/w4sp-stealer-discovered-in-multiple.html#:~:text=The%2016%20rogue%20modules%20are,nowsys%2C%20upamonkws%2C%20captchaboy%2C%20and%20proxybooster" rel="noopener noreferrer"&gt;W4SP Stealer Discovered in Multiple PyPI Packages Under Various Names&lt;/a&gt;). PyPI initiated a cleanup targeting such trojans and strengthened upload detection.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;August 2023&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Lazarus Group Attacks PyPI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ReversingLabs reported that a subgroup of the North Korean hacking group Lazarus published over two dozen malicious packages disguised as popular libraries on PyPI (the "VMConnect" campaign) (&lt;a href="https://www.reversinglabs.com/blog/a-partial-history-of-software-supply-chain-attacks#:~:text=" rel="noopener noreferrer"&gt;Software Supply Chain Attacks: A (partial) History&lt;/a&gt;). These packages targeted users in specific industries (e.g., finance) to implant remote access trojans. The attack is reportedly linked to earlier, similar activity targeting NuGet, showing state-sponsored hackers' interest in the open-source supply chain.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2024 and Beyond&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Ongoing Supply Chain Threats&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Since 2024, new poisoning incidents continue to emerge on NPM and PyPI. For example, in early 2024, fake VS Code-related NPM packages were found to contain remote control spyware (&lt;a href="https://www.sonatype.com/resources/vulnerability-timeline#:~:text=,altered%20ScreenConnect%20utility%20as%20spyware" rel="noopener noreferrer"&gt;A Timeline of SSC Attacks, Curated by Sonatype&lt;/a&gt;), and PyPI packages impersonating Solana libraries to steal crypto wallet keys (&lt;a href="https://www.sonatype.com/resources/vulnerability-timeline#:~:text=%23%23%20%20Ideal%20typosquat%20%27solana,steals%20your%20crypto%20wallet%20keys" rel="noopener noreferrer"&gt;A Timeline of SSC Attacks, Curated by Sonatype&lt;/a&gt;) were discovered. This indicates that supply chain attacks have become a normalized threat, requiring the ecosystem to continuously raise vigilance and defense capabilities.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;While venting about this on Twitter, I came across a tweet from a friend who had just been hit by a supply chain attack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://x.com/tcdwww/status/1914202659210359108" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cocbt87kh4t4khzdfiu.jpeg" alt="Twitter" width="800" height="719"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fortunately, &lt;a href="https://x.com/TBXark" rel="noopener noreferrer"&gt;@TBXark&lt;/a&gt; recommended his &lt;strong&gt;MCP Proxy&lt;/strong&gt; project, which makes it very convenient to run MCP Server in Docker. His initial goal was to run MCP Server on a server to reduce client load and facilitate mobile client calls. However, Docker's inherent isolation features perfectly aligned with my requirement for a sandbox.&lt;/p&gt;

&lt;p&gt;MCP Proxy runs MCP Servers in Docker and converts the protocol to MCP SSE, allowing users to make all calls via the SSE protocol from the MCP client. This can significantly reduce the risk of arbitrary file reading caused by directly running npx and uvx. &lt;em&gt;If deployed on an overseas server, it can also help solve network issues.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;However, a malicious MCP Server can currently still read MCP Proxy's own &lt;code&gt;/config/config.json&lt;/code&gt; configuration file, though the risk is manageable. I have raised a feature request with the developer to set the config file to mode 400 and to run the npx and uvx commands as the nobody user. If implemented, this would fully close the arbitrary file reading hole.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running MCP Proxy
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/TBXark/mcp-proxy" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.html.zone%2FTBXark%2Fmcp-proxy" alt="MCP Proxy" width="1200" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have your own VPS with Docker deployed, you can use the following command to run MCP Proxy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 9090:9090 -v /path/to/config.json:/config/config.json ghcr.io/tbxark/mcp-proxy:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don't have your own VPS, you can use the free container service provided by &lt;a href="https://404.li/claw" rel="noopener noreferrer"&gt;&lt;strong&gt;claw.cloud&lt;/strong&gt;&lt;/a&gt; ($5 credit per month, GitHub registration must be older than 180 days).&lt;/p&gt;

&lt;p&gt;Because Claw limits the container's disk size, we need to set the following environment variables to relocate the npx and uvx cache directories, preventing the container from crashing when the caches fill up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;UV_CACHE_DIR=/cache/uv
npm_config_cache=/cache/npm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the same time, mount 10 GB of storage at the &lt;code&gt;/cache&lt;/code&gt; path. For reference, my configuration is 0.5 vCPU, 512 MB of memory, and a 10 GB disk.&lt;/p&gt;

&lt;p&gt;The final configuration is as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8flmjpefm4wl7jqozkfb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8flmjpefm4wl7jqozkfb.jpg" alt="Claw" width="800" height="1394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring MCP Proxy
&lt;/h2&gt;

&lt;p&gt;The configuration file needs to be mounted at the &lt;code&gt;/config/config.json&lt;/code&gt; path. For the complete configuration reference, see &lt;a href="https://github.com/TBXark/mcp-proxy?tab=readme-ov-file#configuration" rel="noopener noreferrer"&gt;https://github.com/TBXark/mcp-proxy?tab=readme-ov-file#configuration&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Below is my configuration, for your reference.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mcpProxy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"baseURL"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://mcp.miantiao.me"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"addr"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;":9090"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MCP Proxy"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"options"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"panicIfInvalid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"logEnabled"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"authTokens"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"miantiao.me"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"github"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"@modelcontextprotocol/server-github"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"GITHUB_PERSONAL_ACCESS_TOKEN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;YOUR_TOKEN&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"fetch"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"uvx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"mcp-server-fetch"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"amap"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://mcp.amap.com/sse?key=&amp;lt;YOUR_TOKEN&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
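&lt;p&gt;To see what this configuration gives you: MCP Proxy serves each entry under &lt;code&gt;mcpServers&lt;/code&gt; as its own SSE endpoint beneath &lt;code&gt;baseURL&lt;/code&gt;. The sketch below derives those endpoint URLs; the path scheme (server name followed by &lt;code&gt;/sse&lt;/code&gt;) is an assumption based on the project README, so double-check it against your deployment.&lt;/p&gt;

```javascript
// Derive the SSE endpoint URL for every configured MCP server.
// Assumption (not guaranteed by this post): mcp-proxy exposes each server
// at baseURL + "/" + serverName + "/sse".
function sseEndpoints(config) {
  const base = config.mcpProxy.baseURL.replace(/\/+$/, '')
  return Object.keys(config.mcpServers).map((name) => base + '/' + name + '/sse')
}

// With the configuration shown above:
const endpoints = sseEndpoints({
  mcpProxy: { baseURL: 'https://mcp.miantiao.me' },
  mcpServers: { github: {}, fetch: {}, amap: {} },
})
// endpoints now holds the three URLs an SSE-capable MCP client would use,
// e.g. 'https://mcp.miantiao.me/fetch/sse'
```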



&lt;h2&gt;
  
  
  Calling MCP Proxy
&lt;/h2&gt;

&lt;p&gt;Taking &lt;a href="https://404.li/chatwise" rel="noopener noreferrer"&gt;&lt;strong&gt;ChatWise&lt;/strong&gt;&lt;/a&gt; calling the fetch server as an example, simply configure the SSE endpoint directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft42f302419hjgqjzdhbq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft42f302419hjgqjzdhbq.jpg" alt="fetch" width="800" height="595"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Isn't that simple? Once &lt;a href="https://404.li/chatwise" rel="noopener noreferrer"&gt;&lt;strong&gt;ChatWise&lt;/strong&gt;&lt;/a&gt; releases its mobile version, this setup will work there just as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43k7sojzh76w1chhfkrb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43k7sojzh76w1chhfkrb.jpg" alt="ChatWise" width="800" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.statcounter.com%2F9823304%2F0%2F92a1d06c%2F1%2F" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.statcounter.com%2F9823304%2F0%2F92a1d06c%2F1%2F" alt="stat" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>mcp</category>
    </item>
    <item>
      <title>Use Cloudflare Workers to concat audio files</title>
      <dc:creator>MT</dc:creator>
      <pubDate>Sat, 19 Apr 2025 11:09:12 +0000</pubDate>
      <link>https://dev.to/ccbikai/use-cloudflare-workers-to-concat-audio-files-816</link>
      <guid>https://dev.to/ccbikai/use-cloudflare-workers-to-concat-audio-files-816</guid>
      <description>&lt;p&gt;I recently updated the &lt;a href="https://hacker-news.agi.li/" rel="noopener noreferrer"&gt;Hacker News Chinese Podcast&lt;/a&gt; to use a dual-speaker format. Since current speech synthesis models don't handle two-person dialogues very well, I needed a way to merge the audio files for each speaker.&lt;/p&gt;

&lt;p&gt;The project runs on the Cloudflare Workers runtime, which lacks many Node.js features and cannot call C++ extensions. Furthermore, Cloudflare Containers aren't generally available yet. This meant I had to use the Browser Rendering API for the audio merging task.&lt;/p&gt;

&lt;p&gt;FFmpeg is the standard tool for merging audio files, and fortunately, it can now run in the browser via WASM. So, the overall technical approach is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Use a Worker Binding to launch a browser instance (via the Browser Rendering API).&lt;/li&gt;
&lt;li&gt; Have the browser navigate to an audio merging page, perform the merge operation on the audio files, and return the result as a Blob.&lt;/li&gt;
&lt;li&gt; Receive the Blob back in the Worker and upload it to R2 storage.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The overall code footprint for this isn't large, but debugging was tricky because Browser Rendering runs remotely.&lt;/p&gt;

&lt;p&gt;Here's the final implementation code:&lt;/p&gt;

&lt;h3&gt;
  
  
  Browser-Side Audio Merging Code
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!doctype html&amp;gt;
&amp;lt;html lang="en"&amp;gt;
  &amp;lt;head&amp;gt;
    &amp;lt;meta charset="UTF-8" /&amp;gt;
    &amp;lt;meta name="viewport" content="width=device-width, initial-scale=1.0" /&amp;gt;
    &amp;lt;title&amp;gt;Audio&amp;lt;/title&amp;gt;
  &amp;lt;/head&amp;gt;
  &amp;lt;body&amp;gt;
    &amp;lt;script&amp;gt;
      const concatAudioFilesOnBrowser = async (audioFiles) =&amp;gt; {
        const script = document.createElement('script')
        script.src = 'https://unpkg.com/@ffmpeg/ffmpeg@0.11.6/dist/ffmpeg.min.js'
        document.head.appendChild(script)
        await new Promise((resolve) =&amp;gt; (script.onload = resolve))

        const { createFFmpeg, fetchFile } = FFmpeg
        const ffmpeg = createFFmpeg({ log: true })

        await ffmpeg.load()

        // Download and write each file to FFmpeg's virtual file system
        for (const [index, audioFile] of audioFiles.entries()) {
          const audioData = await fetchFile(audioFile)
          ffmpeg.FS('writeFile', `input${index}.mp3`, audioData)
        }

        // Create a file list for ffmpeg concat
        const fileList = audioFiles.map((_, i) =&amp;gt; `file 'input${i}.mp3'`).join('\n')
        ffmpeg.FS('writeFile', 'filelist.txt', fileList)

        // Execute FFmpeg command to concatenate files
        await ffmpeg.run(
          '-f',
          'concat',
          '-safe',
          '0',
          '-i',
          'filelist.txt',
          '-c:a',
          'libmp3lame',
          '-q:a',
          '5',
          'output.mp3',
        )

        // Read the output file
        const data = ffmpeg.FS('readFile', 'output.mp3')

        // Create a downloadable link
        const blob = new Blob([data.buffer], { type: 'audio/mp3' })

        // Clean up
        audioFiles.forEach((_, i) =&amp;gt; {
          ffmpeg.FS('unlink', `input${i}.mp3`)
        })
        ffmpeg.FS('unlink', 'filelist.txt')
        ffmpeg.FS('unlink', 'output.mp3')

        return blob
      }
    &amp;lt;/script&amp;gt;
  &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Worker Code
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Runs on Cloudflare Workers; BROWSER is a Browser Rendering binding.
import puppeteer from '@cloudflare/puppeteer'

export async function concatAudioFiles(audioFiles: string[], BROWSER: Fetcher, { workerUrl }: { workerUrl: string }) {
  const browser = await puppeteer.launch(BROWSER)
  const page = await browser.newPage()
  await page.goto(`${workerUrl}/audio`)

  console.info('start concat audio files', audioFiles)
  const fileUrl = await page.evaluate(async (audioFiles) =&amp;gt; {
    // JS runs here in the browser.
    // @ts-expect-error Objects in the browser
    const blob = await concatAudioFilesOnBrowser(audioFiles)

    const result = new Promise((resolve, reject) =&amp;gt; {
      const reader = new FileReader()
      reader.onloadend = () =&amp;gt; resolve(reader.result)
      reader.onerror = reject
      reader.readAsDataURL(blob)
    })
    return await result
  }, audioFiles) as string

  console.info('concat audio files result', fileUrl.substring(0, 100))

  await browser.close()

  const response = await fetch(fileUrl)
  return await response.blob()
}

// Usage, e.g. inside the Worker's fetch handler:
const audio = await concatAudioFiles(audioFiles, env.BROWSER, { workerUrl: env.HACKER_NEWS_WORKER_URL })
return new Response(audio)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
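&lt;p&gt;One non-obvious detail in the Worker code: &lt;code&gt;page.evaluate&lt;/code&gt; can only return serializable values, so the merged Blob is shipped out of the page as a base64 &lt;code&gt;data:&lt;/code&gt; URL and turned back into a Blob in the Worker via &lt;code&gt;fetch(fileUrl)&lt;/code&gt;. The round-trip looks roughly like this (a Node-flavored sketch; &lt;code&gt;Buffer&lt;/code&gt; stands in for the browser's &lt;code&gt;FileReader&lt;/code&gt;):&lt;/p&gt;

```javascript
// Encode raw bytes into a data: URL (what FileReader.readAsDataURL produces
// inside the page) and decode them back (what fetch(fileUrl) does in the
// Worker). Buffer is used here only to keep the sketch runnable in Node.
function toDataURL(bytes, mime) {
  return 'data:' + mime + ';base64,' + Buffer.from(bytes).toString('base64')
}

function fromDataURL(dataUrl) {
  const base64 = dataUrl.slice(dataUrl.indexOf(',') + 1)
  return new Uint8Array(Buffer.from(base64, 'base64'))
}

const original = new Uint8Array([73, 68, 51, 4]) // arbitrary sample bytes
const url = toDataURL(original, 'audio/mpeg')
const restored = fromDataURL(url)
```

&lt;p&gt;The payload comes back byte-for-byte, at the cost of base64's roughly 33% size overhead, which is usually acceptable for audio of this size.&lt;/p&gt;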



&lt;p&gt;The code above was basically written by Cursor; the final result can be viewed in the &lt;a href="https://github.com/ccbikai/hacker-news/tree/main/worker" rel="noopener noreferrer"&gt;Hacker News code repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.statcounter.com%2F9823304%2F0%2F92a1d06c%2F1%2F" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.statcounter.com%2F9823304%2F0%2F92a1d06c%2F1%2F" alt="stat" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloudflare</category>
      <category>cloudflareworkers</category>
      <category>browserrendering</category>
    </item>
    <item>
      <title>RSS.Beauty - Make Your RSS Beautiful!</title>
      <dc:creator>MT</dc:creator>
      <pubDate>Tue, 31 Dec 2024 14:50:48 +0000</pubDate>
      <link>https://dev.to/ccbikai/rssbeauty-make-your-rss-beautiful-4j8l</link>
      <guid>https://dev.to/ccbikai/rssbeauty-make-your-rss-beautiful-4j8l</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The tool that has been delayed for nearly half a year is finally completed.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://rss.beauty/" rel="noopener noreferrer"&gt;RSS.Beauty&lt;/a&gt; is an RSS beautification tool based on XSLT technology that transforms ordinary RSS/Atom feeds into elegant reading interfaces.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci8amg44t330iql15ea4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci8amg44t330iql15ea4.png" alt="RSS.Beauty" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🎨 Beautiful reading interface&lt;/li&gt;
&lt;li&gt;🔄 Support for RSS 2.0 and Atom 1.0&lt;/li&gt;
&lt;li&gt;📱 Responsive design, mobile-friendly&lt;/li&gt;
&lt;li&gt;🔌 One-click subscription to major RSS readers&lt;/li&gt;
&lt;li&gt;🖥 Self-hosting support&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Quick Start
&lt;/h2&gt;

&lt;p&gt;Visit &lt;a href="https://rss.beauty" rel="noopener noreferrer"&gt;RSS.Beauty&lt;/a&gt; and enter any RSS feed URL to try it out.&lt;/p&gt;

&lt;p&gt;Or visit &lt;a href="https://rss.beauty/rss?url=https%3A%2F%2Fgithub.com%2Fccbikai%2FRSS.Beauty%2Freleases.atom" rel="noopener noreferrer"&gt;https://rss.beauty/rss?url=https%3A%2F%2Fgithub.com%2Fccbikai%2FRSS.Beauty%2Freleases.atom&lt;/a&gt; to try it out.&lt;/p&gt;
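&lt;p&gt;The pattern behind that link is simple: RSS.Beauty renders any feed passed, URL-encoded, in the &lt;code&gt;url&lt;/code&gt; query parameter. A one-line helper (hypothetical, for illustration):&lt;/p&gt;

```javascript
// Build an RSS.Beauty view URL for an arbitrary feed by URL-encoding the
// feed address into the `url` query parameter.
function beautifyUrl(feedUrl) {
  return 'https://rss.beauty/rss?url=' + encodeURIComponent(feedUrl)
}

const pretty = beautifyUrl('https://github.com/ccbikai/RSS.Beauty/releases.atom')
// pretty matches the example link above
```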

&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://astro.build" rel="noopener noreferrer"&gt;Astro&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://tailwindcss.com" rel="noopener noreferrer"&gt;TailwindCSS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.w3.org/TR/xslt/" rel="noopener noreferrer"&gt;XSLT&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deployment
&lt;/h2&gt;

&lt;p&gt;A detailed deployment guide can be found in the &lt;a href="https://github.com/ccbikai/RSS.Beauty/blob/main/docs/deployment-guide.md" rel="noopener noreferrer"&gt;Deployment Guide&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Serverless
&lt;/h3&gt;

&lt;p&gt;Deployment to Cloudflare Pages, Vercel, Netlify, etc. is supported. After &lt;a href="https://github.com/ccbikai/RSS.Beauty/fork" rel="noopener noreferrer"&gt;forking&lt;/a&gt; this project, follow your platform's tutorial to deploy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull ghcr.io/ccbikai/rss.beauty:main
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; rss-beauty &lt;span class="nt"&gt;-p&lt;/span&gt; 4321:4321 ghcr.io/ccbikai/rss.beauty:main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Credits
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://html.tailus.io/" rel="noopener noreferrer"&gt;Tailus UI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Sponsor
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://404.li/kai" rel="noopener noreferrer"&gt;Follow me on 𝕏&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/sponsors/ccbikai" rel="noopener noreferrer"&gt;Sponsor me on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.statcounter.com%2F9823304%2F0%2F92a1d06c%2F1%2F" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.statcounter.com%2F9823304%2F0%2F92a1d06c%2F1%2F" alt="stat" width="1" height="1"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rss</category>
      <category>github</category>
    </item>
    <item>
      <title>Run Python programs easily in the browser.</title>
      <dc:creator>MT</dc:creator>
      <pubDate>Sat, 21 Dec 2024 09:57:34 +0000</pubDate>
      <link>https://dev.to/ccbikai/run-python-programs-easily-in-the-browser-5gal</link>
      <guid>https://dev.to/ccbikai/run-python-programs-easily-in-the-browser-5gal</guid>
      <description>&lt;p&gt;Microsoft recently open-sourced &lt;a href="https://github.com/microsoft/markitdown" rel="noopener noreferrer"&gt;MarkItDown&lt;/a&gt;, a program that converts Office files to Markdown format. The project quickly climbed to GitHub's trending list upon release.&lt;/p&gt;

&lt;p&gt;However, since MarkItDown is a Python program, it might be challenging for non-technical users to use. To address this issue, I thought of using WebAssembly technology to run Python code directly in the browser.&lt;/p&gt;

&lt;p&gt;Pyodide is an open-source program that runs Python in the browser, using WebAssembly to port CPython, so it supports all Python syntax. Cloudflare's Python Workers also use Pyodide.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Pyodide is a port of CPython to WebAssembly/Emscripten.&lt;/p&gt;

&lt;p&gt;Pyodide makes it possible to install and run Python packages in the browser using micropip. Any pure Python package with wheels available on PyPI is supported.&lt;/p&gt;

&lt;p&gt;Many packages with C extensions have also been ported for use with Pyodide. These include common packages like regex, PyYAML, lxml, and scientific Python packages including NumPy, pandas, SciPy, Matplotlib, and scikit-learn. Pyodide comes with a robust JavaScript ⟺ Python foreign function interface that allows you to freely mix these languages in your code with minimal friction. This includes comprehensive support for error handling, async/await, and more.&lt;/p&gt;

&lt;p&gt;When used in the browser, Python has full access to the Web APIs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Getting MarkItDown to run went surprisingly smoothly, which convinced me that WebAssembly really is the future of the browser.&lt;/p&gt;

&lt;p&gt;The main challenges faced and solutions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;File Transfer Issue&lt;/strong&gt;: How to pass user-selected files to the Python runtime in the Worker?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dependency Installation Issue&lt;/strong&gt;: Limited access to PyPI in mainland China.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
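&lt;p&gt;For the first issue, the selected file's bytes are read on the main thread and posted to the Web Worker as a &lt;code&gt;{ filename, buffer }&lt;/code&gt; message, which is the shape the worker code below reads from &lt;code&gt;event.data&lt;/code&gt;. A minimal sketch (the browser-only &lt;code&gt;Worker&lt;/code&gt; and &lt;code&gt;File&lt;/code&gt; wiring appears only as comments):&lt;/p&gt;

```javascript
// Package a user-selected file into the { filename, buffer } shape that the
// converter worker expects in its onmessage handler.
function toWorkerMessage(filename, arrayBuffer) {
  return { filename, buffer: new Uint8Array(arrayBuffer) }
}

// In the browser, roughly:
//   const f = fileInput.files[0]
//   const msg = toWorkerMessage(f.name, await f.arrayBuffer())
//   worker.postMessage(msg, [msg.buffer.buffer]) // transfer, no copy
const msg = toWorkerMessage('report.docx', new ArrayBuffer(8))
```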

&lt;p&gt;Eventually, we successfully implemented a MarkItDown tool that runs entirely in the browser. Feel free to try it out at &lt;a href="https://www.html.zone/markitdown/" rel="noopener noreferrer"&gt;Office File to Markdown&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.html.zone/markitdown/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9a3q3u1a10ip4hutwam.png" alt="Office File to Markdown" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's the core code for running Python in the Worker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// eslint-disable-next-line no-undef
importScripts('https://testingcf.jsdelivr.net/pyodide/v0.26.4/full/pyodide.js')


async function loadPyodideAndPackages() {
  // eslint-disable-next-line no-undef
  const pyodide = await loadPyodide()
  globalThis.pyodide = pyodide

  await pyodide.loadPackage('micropip')

  const micropip = pyodide.pyimport('micropip')

  // micropip.set_index_urls([
  // 'https://pypi.your.domains/pypi/simple',  
  // ])

  await micropip.install('markitdown==0.0.1a2')
}

const pyodideReadyPromise = loadPyodideAndPackages()

globalThis.onmessage = async (event) =&amp;gt; {
  await pyodideReadyPromise

  const file = event.data
  try {
    console.log('file', file)
    const startTime = Date.now()
    globalThis.pyodide.FS.writeFile(`/${file.filename}`, file.buffer)

    await globalThis.pyodide.runPythonAsync(`
from markitdown import MarkItDown

markitdown = MarkItDown()

result = markitdown.convert("/${file.filename}")
print(result.text_content)

with open("/${file.filename}.md", "w") as file:
  file.write(result.text_content)
`)
    globalThis.postMessage({
      filename: `${file.filename}.md`,
      content: globalThis.pyodide.FS.readFile(`/${file.filename}.md`, { encoding: 'utf8' }),
      time: Date.now() - startTime,
    })
  }
  catch (error) {
    globalThis.postMessage({ error: error.message || 'convert error', filename: file.filename })
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




</description>
      <category>python</category>
      <category>webassembly</category>
    </item>
    <item>
      <title>Use Cloudflare Snippets to set up a Docker Registry Mirror</title>
      <dc:creator>MT</dc:creator>
      <pubDate>Sat, 21 Dec 2024 08:17:40 +0000</pubDate>
      <link>https://dev.to/ccbikai/use-cloudflare-snippets-to-set-up-a-docker-registry-mirror-a0m</link>
      <guid>https://dev.to/ccbikai/use-cloudflare-snippets-to-set-up-a-docker-registry-mirror-a0m</guid>
      <description>&lt;p&gt;Using Cloudflare Workers to set up Docker image proxies works fine for personal use with low request volumes. However, if made public, high request volumes can incur significant costs.&lt;/p&gt;

&lt;p&gt;Cloudflare actually has an even lighter JavaScript runtime called Cloudflare Snippets, though it comes with stricter limits: 5 ms of CPU time, 2 MB of memory, and a 32 KB code size cap. That is still plenty for request rewriting.&lt;/p&gt;

&lt;p&gt;Unfortunately, Cloudflare Snippets isn't currently available for Free plans, although &lt;a href="https://blog.cloudflare.com/zh-cn/snippets-announcement/" rel="noopener noreferrer"&gt;their blog mentions that Free plans can create 5 Snippets&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you have a Pro plan, you can slightly modify the Cloudflare Workers code to run it. It supports Docker Hub, Google Container Registry, GitHub Container Registry, Amazon Elastic Container Registry, Kubernetes Container Registry, Quay, and Cloudsmith.&lt;/p&gt;

&lt;p&gt;Modified code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Original code: https://github.com/ciiiii/cloudflare-docker-proxy/blob/master/src/index.js

const CUSTOM_DOMAIN = 'your.domains'
const MODE = 'production'

const dockerHub = 'https://registry-1.docker.io'

const routes = {
    // production
    [`docker.${CUSTOM_DOMAIN}`]: dockerHub,
    [`quay.${CUSTOM_DOMAIN}`]: 'https://quay.io',
    [`gcr.${CUSTOM_DOMAIN}`]: 'https://gcr.io',
    [`k8s-gcr.${CUSTOM_DOMAIN}`]: 'https://k8s.gcr.io',
    [`k8s.${CUSTOM_DOMAIN}`]: 'https://registry.k8s.io',
    [`ghcr.${CUSTOM_DOMAIN}`]: 'https://ghcr.io',
    [`cloudsmith.${CUSTOM_DOMAIN}`]: 'https://docker.cloudsmith.io',
    [`ecr.${CUSTOM_DOMAIN}`]: 'https://public.ecr.aws',

    // staging
    [`docker-staging.${CUSTOM_DOMAIN}`]: dockerHub,
}

async function handleRequest(request) {
    const url = new URL(request.url)
    const upstream = routeByHosts(url.hostname)
    if (upstream === '') {
        return new Response(
            JSON.stringify({
                routes,
            }), {
                status: 404,
            },
        )
    }
    const isDockerHub = upstream === dockerHub
    const authorization = request.headers.get('Authorization')
    if (url.pathname === '/v2/') {
        const newUrl = new URL(`${upstream}/v2/`)
        const headers = new Headers()
        if (authorization) {
            headers.set('Authorization', authorization)
        }
        // check if need to authenticate
        const resp = await fetch(newUrl.toString(), {
            method: 'GET',
            headers,
            redirect: 'follow',
        })
        if (resp.status === 401) {
            return responseUnauthorized(url)
        }
        return resp
    }
    // get token
    if (url.pathname === '/v2/auth') {
        const newUrl = new URL(`${upstream}/v2/`)
        const resp = await fetch(newUrl.toString(), {
            method: 'GET',
            redirect: 'follow',
        })
        if (resp.status !== 401) {
            return resp
        }
        const authenticateStr = resp.headers.get('WWW-Authenticate')
        if (authenticateStr === null) {
            return resp
        }
        const wwwAuthenticate = parseAuthenticate(authenticateStr)
        let scope = url.searchParams.get('scope')
        // autocomplete repo part into scope for DockerHub library images
        // Example: repository:busybox:pull =&amp;gt; repository:library/busybox:pull
        if (scope &amp;amp;&amp;amp; isDockerHub) {
            const scopeParts = scope.split(':')
            if (scopeParts.length === 3 &amp;amp;&amp;amp; !scopeParts[1].includes('/')) {
                scopeParts[1] = `library/${scopeParts[1]}`
                scope = scopeParts.join(':')
            }
        }
        return await fetchToken(wwwAuthenticate, scope, authorization)
    }
    // redirect for DockerHub library images
    // Example: /v2/busybox/manifests/latest =&amp;gt; /v2/library/busybox/manifests/latest
    if (isDockerHub) {
        const pathParts = url.pathname.split('/')
        if (pathParts.length === 5) {
            pathParts.splice(2, 0, 'library')
            const redirectUrl = new URL(url)
            redirectUrl.pathname = pathParts.join('/')
            return Response.redirect(redirectUrl, 301)
        }
    }
    // forward requests
    const newUrl = new URL(upstream + url.pathname)
    const newReq = new Request(newUrl, {
        method: request.method,
        headers: request.headers,
        redirect: 'follow',
    })
    const resp = await fetch(newReq)
    if (resp.status === 401) {
        return responseUnauthorized(url)
    }
    return resp
}

function routeByHosts(host) {
    if (host in routes) {
        return routes[host]
    }
    if (MODE === 'debug') {
        return dockerHub
    }
    return ''
}

function parseAuthenticate(authenticateStr) {
    // sample: Bearer realm="https://auth.ipv6.docker.com/token",service="registry.docker.io"
    // match strings after =" and before "
    const re = /(?&amp;lt;==")(?:\\.|[^"\\])*(?=")/g
    const matches = authenticateStr.match(re)
    if (matches == null || matches.length &amp;lt; 2) {
        throw new Error(`invalid Www-Authenticate Header: ${authenticateStr}`)
    }
    return {
        realm: matches[0],
        service: matches[1],
    }
}

async function fetchToken(wwwAuthenticate, scope, authorization) {
    const url = new URL(wwwAuthenticate.realm)
    if (wwwAuthenticate.service.length) {
        url.searchParams.set('service', wwwAuthenticate.service)
    }
    if (scope) {
        url.searchParams.set('scope', scope)
    }
    const headers = new Headers()
    if (authorization) {
        headers.set('Authorization', authorization)
    }
    return await fetch(url, {
        method: 'GET',
        headers
    })
}

function responseUnauthorized(url) {
    const headers = new Headers()
    if (MODE === 'debug') {
        headers.set(
            'Www-Authenticate',
            `Bearer realm="http://${url.host}/v2/auth",service="cloudflare-docker-proxy"`,
        )
    } else {
        headers.set(
            'Www-Authenticate',
            `Bearer realm="https://${url.hostname}/v2/auth",service="cloudflare-docker-proxy"`,
        )
    }
    return new Response(JSON.stringify({
        message: 'UNAUTHORIZED'
    }), {
        status: 401,
        headers,
    })
}

export default {
    fetch: handleRequest,
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
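Once DNS for subdomains like `docker.your.domains` points at your Cloudflare zone, Docker Hub pulls can go through the proxy by listing it as a registry mirror. A sketch of `/etc/docker/daemon.json`, assuming `CUSTOM_DOMAIN` is `your.domains` as in the code above:

```json
{
  "registry-mirrors": ["https://docker.your.domains"]
}
```

The other registries are addressed by prefix instead, e.g. `docker pull ghcr.your.domains/owner/image`.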




</description>
      <category>cloudflare</category>
      <category>docker</category>
    </item>
    <item>
      <title>Cloudflare PyPI Mirror</title>
      <dc:creator>MT</dc:creator>
      <pubDate>Wed, 18 Dec 2024 13:06:14 +0000</pubDate>
      <link>https://dev.to/ccbikai/cloudflare-pypi-mirror-45p3</link>
      <guid>https://dev.to/ccbikai/cloudflare-pypi-mirror-45p3</guid>
      <description>&lt;p&gt;&lt;a href="https://micropip.pyodide.org/en/stable/index.html" rel="noopener noreferrer"&gt;Pyodide&lt;/a&gt; is a library that runs Python in WebAssembly, using &lt;a href="https://micropip.pyodide.org/en/stable/index.html" rel="noopener noreferrer"&gt;Micropip&lt;/a&gt; to install packages from PyPI. Due to WebAssembly's requirements for CORS and PEP 691 when running in browsers, and the fact that Tsinghua's TUNA mirror doesn't support CORS, this creates some challenges.&lt;/p&gt;

&lt;p&gt;PyPI is not directly accessible in mainland China, but there are many mirrors available. Institutions like Tsinghua University, Alibaba Cloud, Tencent Cloud, and Huawei Cloud provide mirror services. However, except for Tsinghua's TUNA mirror, none of them support the JSON-based Simple API for Python (&lt;a href="https://peps.python.org/pep-0691/" rel="noopener noreferrer"&gt;PEP 691&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Since Micropip running in the browser requires both CORS support and PEP 691 compliance, and Tsinghua's TUNA mirror doesn't support CORS, there is effectively no usable PyPI mirror in mainland China for Micropip.&lt;/p&gt;

&lt;p&gt;Given this situation, I've set up a Cloudflare-based mirror that supports both PEP 691 and CORS.&lt;/p&gt;
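For context, here is a sketch of what a PEP 691 request looks like from the client side (the URL is a placeholder, not the mirror's address; the mirror must answer with this content type plus an `Access-Control-Allow-Origin` header for browser code like micropip to read the response):

```javascript
// PEP 691 content type that a compliant index must serve on request
const PEP691_JSON = 'application/vnd.pypi.simple.v1+json'

function indexUrl(base, project) {
  // PEP 503/691 require lowercase, dash-normalized project names
  const normalized = project.toLowerCase().replace(/[-_.]+/g, '-')
  return `${base}/${normalized}/`
}

async function fetchProjectIndex(base, project) {
  const res = await fetch(indexUrl(base, project), {
    headers: { Accept: PEP691_JSON },
  })
  if (!res.ok) throw new Error(`index responded ${res.status}`)
  return res.json() // { name, files: [{ filename, url, hashes }], ... }
}
```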

&lt;p&gt;You can build this using either Workers or Snippets, each with their own advantages and disadvantages:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://workers.cloudflare.com/" rel="noopener noreferrer"&gt;Workers&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Pros: Available with the free plan.&lt;/p&gt;

&lt;p&gt;Cons: Generates many Worker requests, which might exceed free plan limits and require payment or become unusable.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://developers.cloudflare.com/rules/snippets/" rel="noopener noreferrer"&gt;Snippets&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Pros: Doesn't generate Worker requests, so it supports high traffic volumes.&lt;/p&gt;

&lt;p&gt;Cons: Currently only available on Pro plans and above, not on the Free tier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;The corresponding code has been open-sourced and is available at:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ccbikai/cloudflare-pypi-mirror" rel="noopener noreferrer"&gt;https://github.com/ccbikai/cloudflare-pypi-mirror&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ccbikai/cloudflare-pypi-mirror" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.html.zone%2Fccbikai%2Fcloudflare-pypi-mirror" alt="Cloudflare PyPI Mirror" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;


</description>
      <category>cloudflare</category>
      <category>pypi</category>
      <category>mirror</category>
    </item>
    <item>
      <title>Minimal Docker Image Packaging for Vite SSR Projects</title>
      <dc:creator>MT</dc:creator>
      <pubDate>Sat, 31 Aug 2024 05:14:00 +0000</pubDate>
      <link>https://dev.to/ccbikai/minimal-docker-image-packaging-for-vite-ssr-projects-1m7p</link>
      <guid>https://dev.to/ccbikai/minimal-docker-image-packaging-for-vite-ssr-projects-1m7p</guid>
      <description>&lt;p&gt;Recently, I've been preparing to migrate projects hosted on Cloudflare, Vercel, and Netlify to my own VPS to run via Docker. I revisited Docker image packaging. However, even a small project ended up being packaged into a 1.05GB image, which is clearly unacceptable. So, I researched minimal Docker image packaging for Node.js projects, reducing the image size from 1.06GB to 135MB.&lt;/p&gt;

&lt;p&gt;The example project is an Astro project using Vite as the build tool, running in SSR mode.&lt;/p&gt;

&lt;h2&gt;
  
  
  Version 0
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;The main idea is to use a minimal system image, opting for the Alpine Linux image.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Following the &lt;a href="https://docs.astro.build/en/recipes/docker/#ssr" rel="noopener noreferrer"&gt;Astro official documentation for Server-Side Rendering (SSR)&lt;/a&gt;, I replaced the base image with &lt;code&gt;node:lts-alpine&lt;/code&gt; and switched from NPM to PNPM. The resulting image was 1.06GB, the worst-case baseline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:lts-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;base&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PNPM_HOME="/pnpm"&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PATH="$PNPM_HOME:$PATH"&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;corepack &lt;span class="nb"&gt;enable&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pnpm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--frozen-lockfile&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;export&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; .env.example&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; pnpm run build

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; HOST=0.0.0.0&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PORT=4321&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 4321&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; node ./dist/server/entry.mjs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t v0 .
[+] Building 113.8s (11/11) FINISHED                                                                                                                                        docker:orbstack
 =&amp;gt; [internal] load build definition from Dockerfile                                                                                                                                   0.0s
 =&amp;gt; =&amp;gt; transferring dockerfile: 346B                                                                                                                                                   0.0s
 =&amp;gt; [internal] load metadata for docker.io/library/node:lts-alpine                                                                                                                     1.1s
 =&amp;gt; [internal] load .dockerignore                                                                                                                                                      0.0s
 =&amp;gt; =&amp;gt; transferring context: 89B                                                                                                                                                       0.0s
 =&amp;gt; [1/6] FROM docker.io/library/node:lts-alpine@sha256:1a526b97cace6b4006256570efa1a29cd1fe4b96a5301f8d48e87c5139438a45                                                               0.0s
 =&amp;gt; [internal] load build context                                                                                                                                                      0.2s
 =&amp;gt; =&amp;gt; transferring context: 240.11kB                                                                                                                                                  0.2s
 =&amp;gt; CACHED [2/6] RUN corepack enable                                                                                                                                                   0.0s
 =&amp;gt; CACHED [3/6] WORKDIR /app                                                                                                                                                          0.0s
 =&amp;gt; [4/6] COPY . .                                                                                                                                                                     2.0s
 =&amp;gt; [5/6] RUN pnpm install --frozen-lockfile                                                                                                                                          85.7s
 =&amp;gt; [6/6] RUN export $(cat .env.example) &amp;amp;&amp;amp; pnpm run build                                                                                                      11.1s
 =&amp;gt; exporting to image                                                                                                                                                                13.4s
 =&amp;gt; =&amp;gt; exporting layers                                                                                                                                                               13.4s
 =&amp;gt; =&amp;gt; writing image sha256:653236defcbb8d99d83dc550f1deb55e48b49d7925a295049806ebac8c104d4a                                                                                           0.0s
 =&amp;gt; =&amp;gt; naming to docker.io/library/v0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Version 1
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;The main idea is to first install production dependencies, creating the first layer. Then install all dependencies, package to generate JavaScript artifacts, creating the second layer. Finally, copy the production dependencies and JavaScript artifacts to the runtime environment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Following the &lt;a href="https://docs.astro.build/en/recipes/docker/#multi-stage-build-using-ssr" rel="noopener noreferrer"&gt;multi-stage build (using SSR)&lt;/a&gt; approach, I reduced the image size to 306MB. This is a significant reduction, but the drawback is that &lt;strong&gt;it requires explicitly specifying production dependencies; if any are missed, runtime errors will occur&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:lts-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;base&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PNPM_HOME="/pnpm"&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PATH="$PNPM_HOME:$PATH"&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;corepack &lt;span class="nb"&gt;enable&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json pnpm-lock.yaml ./&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;base&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;prod-deps&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,id&lt;span class="o"&gt;=&lt;/span&gt;pnpm,target&lt;span class="o"&gt;=&lt;/span&gt;/pnpm/store pnpm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--prod&lt;/span&gt; &lt;span class="nt"&gt;--frozen-lockfile&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;base&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build-deps&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,id&lt;span class="o"&gt;=&lt;/span&gt;pnpm,target&lt;span class="o"&gt;=&lt;/span&gt;/pnpm/store pnpm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--frozen-lockfile&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build-deps&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;export&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; .env.example&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; pnpm run build

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;base&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;runtime&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=prod-deps /app/node_modules ./node_modules&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/dist ./dist&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; HOST=0.0.0.0&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PORT=4321&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 4321&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; node ./dist/server/entry.mjs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t v1 .
[+] Building 85.5s (15/15) FINISHED                                                                                                                                         docker:orbstack
 =&amp;gt; [internal] load build definition from Dockerfile                                                                                                                                   0.1s
 =&amp;gt; =&amp;gt; transferring dockerfile: 680B                                                                                                                                                   0.0s
 =&amp;gt; [internal] load metadata for docker.io/library/node:lts-alpine                                                                                                                     1.8s
 =&amp;gt; [internal] load .dockerignore                                                                                                                                                      0.0s
 =&amp;gt; =&amp;gt; transferring context: 89B                                                                                                                                                       0.0s
 =&amp;gt; [base 1/4] FROM docker.io/library/node:lts-alpine@sha256:1a526b97cace6b4006256570efa1a29cd1fe4b96a5301f8d48e87c5139438a45                                                          0.0s
 =&amp;gt; [internal] load build context                                                                                                                                                      0.3s
 =&amp;gt; =&amp;gt; transferring context: 240.44kB                                                                                                                                                  0.2s
 =&amp;gt; CACHED [base 2/4] RUN corepack enable                                                                                                                                              0.0s
 =&amp;gt; CACHED [base 3/4] WORKDIR /app                                                                                                                                                     0.0s
 =&amp;gt; [base 4/4] COPY package.json pnpm-lock.yaml ./                                                                                                                                     0.2s
 =&amp;gt; [prod-deps 1/1] RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --prod --frozen-lockfile                                                                           35.1s
 =&amp;gt; [build-deps 1/1] RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --frozen-lockfile                                                                                 65.5s
 =&amp;gt; [runtime 1/2] COPY --from=prod-deps /app/node_modules ./node_modules                                                                                                               5.9s
 =&amp;gt; [build 1/2] COPY . .                                                                                                                                                               0.8s
 =&amp;gt; [build 2/2] RUN export $(cat .env.example) &amp;amp;&amp;amp; pnpm run build                                                                                                                       7.5s
 =&amp;gt; [runtime 2/2] COPY --from=build /app/dist ./dist                                                                                                                                   0.1s
 =&amp;gt; exporting to image                                                                                                                                                                 4.2s
 =&amp;gt; =&amp;gt; exporting layers                                                                                                                                                                4.1s
 =&amp;gt; =&amp;gt; writing image sha256:8ae6b2bddf0a7ac5f8ad45e6abb7d36a633e384cf476e45fb9132bdf70ed0c5f                                                                                           0.0s
 =&amp;gt; =&amp;gt; naming to docker.io/library/v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Version 2
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;The main idea is to inline node_modules into the JavaScript files, ultimately copying only the JavaScript files to the runtime environment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;While looking into Next.js, I remembered that node_modules can be inlined into the JavaScript bundle, removing the need to ship node_modules at all. Vite SSR turns out to support this as well, so I enabled inlining for the Docker build: instead of copying node_modules, only the final dist artifacts are copied into the runtime stage, reducing the image size to 135MB.&lt;/p&gt;

&lt;p&gt;Changes to the packaging script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;vite&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;ssr&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;noExternal&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DOCKER&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="o"&gt;!!&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DOCKER&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
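The ternary reads a little oddly, but it amounts to: when the `DOCKER` environment variable is set, set `ssr.noExternal` to `true` (bundle every dependency into the server output); otherwise leave it `undefined` so local builds keep Vite's default externalization. A simplified sketch of that decision (the helper name is mine, not from the project):

```javascript
// Sketch: decide Vite's ssr.noExternal value from the environment.
// true bundles all node_modules into the SSR output (the Docker build);
// undefined keeps Vite's default behavior (local development builds).
function noExternalFor(env) {
  return env.DOCKER ? true : undefined
}
```

In the Dockerfile, the `export DOCKER=true` before `pnpm run build` is what flips this on.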



&lt;p&gt;&lt;strong&gt;The final Dockerfile is as follows&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:lts-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;base&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PNPM_HOME="/pnpm"&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PATH="$PNPM_HOME:$PATH"&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;corepack &lt;span class="nb"&gt;enable&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json pnpm-lock.yaml ./&lt;/span&gt;

&lt;span class="c"&gt;# FROM base AS prod-deps&lt;/span&gt;
&lt;span class="c"&gt;# RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --prod --frozen-lockfile&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;base&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build-deps&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,id&lt;span class="o"&gt;=&lt;/span&gt;pnpm,target&lt;span class="o"&gt;=&lt;/span&gt;/pnpm/store pnpm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--frozen-lockfile&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build-deps&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;export&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; .env.example&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DOCKER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; pnpm run build

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;base&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;runtime&lt;/span&gt;
&lt;span class="c"&gt;# COPY --from=prod-deps /app/node_modules ./node_modules&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/dist ./dist&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; HOST=0.0.0.0&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PORT=4321&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 4321&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; node ./dist/server/entry.mjs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; docker build -t v2 .
[+] Building 24.9s (13/13) FINISHED                                                                                                                                         docker:orbstack
 =&amp;gt; [internal] load build definition from Dockerfile                                                                                                                                   0.0s
 =&amp;gt; =&amp;gt; transferring dockerfile: 708B                                                                                                                                                   0.0s
 =&amp;gt; [internal] load metadata for docker.io/library/node:lts-alpine                                                                                                                     1.7s
 =&amp;gt; [internal] load .dockerignore                                                                                                                                                      0.0s
 =&amp;gt; =&amp;gt; transferring context: 89B                                                                                                                                                       0.0s
 =&amp;gt; [base 1/4] FROM docker.io/library/node:lts-alpine@sha256:1a526b97cace6b4006256570efa1a29cd1fe4b96a5301f8d48e87c5139438a45                                                          0.0s
 =&amp;gt; [internal] load build context                                                                                                                                                      0.3s
 =&amp;gt; =&amp;gt; transferring context: 240.47kB                                                                                                                                                  0.2s
 =&amp;gt; CACHED [base 2/4] RUN corepack enable                                                                                                                                              0.0s
 =&amp;gt; CACHED [base 3/4] WORKDIR /app                                                                                                                                                     0.0s
 =&amp;gt; CACHED [base 4/4] COPY package.json pnpm-lock.yaml ./                                                                                                                              0.0s
 =&amp;gt; CACHED [build-deps 1/1] RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --frozen-lockfile                                                                           0.0s
 =&amp;gt; [build 1/2] COPY . .                                                                                                                                                               1.5s
 =&amp;gt; [build 2/2] RUN export $(cat .env.example) &amp;amp;&amp;amp; export DOCKER=true &amp;amp;&amp;amp; pnpm run build                                                                                                15.0s
 =&amp;gt; [runtime 1/1] COPY --from=build /app/dist ./dist                                                                                                                                   0.1s
 =&amp;gt; exporting to image                                                                                                                                                                 0.1s
 =&amp;gt; =&amp;gt; exporting layers                                                                                                                                                                0.1s
 =&amp;gt; =&amp;gt; writing image sha256:0ed5c10162d1faf4208f5ea999fbcd133374acc0e682404c8b05220b38fd1eaf                                                                                           0.0s
 =&amp;gt; =&amp;gt; naming to docker.io/library/v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the end, the image size was reduced from 1.06GB to 135MB, and the build time dropped from 113.8s to 24.9s.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images
REPOSITORY                         TAG         IMAGE ID       CREATED          SIZE
v2                                 latest      0ed5c10162d1   5 minutes ago    135MB
v1                                 latest      8ae6b2bddf0a   6 minutes ago    306MB
v0                                 latest      653236defcbb   11 minutes ago   1.06GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The example project is open-source and can be viewed on &lt;a href="https://github.com/ccbikai/BroadcastChannel/pkgs/container/broadcastchannel" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ccbikai/BroadcastChannel" rel="noopener noreferrer"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vA-hBi8q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.html.zone/ccbikai/BroadcastChannel" alt="BroadcastChannel" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Fh28RQrY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://c.statcounter.com/9823304/0/92a1d06c/1/" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Fh28RQrY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://c.statcounter.com/9823304/0/92a1d06c/1/" alt="stat" width="1" height="1"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>vite</category>
      <category>node</category>
      <category>docker</category>
    </item>
    <item>
      <title>BroadcastChannel - Turn your Telegram Channel into a MicroBlog</title>
      <dc:creator>MT</dc:creator>
      <pubDate>Sun, 11 Aug 2024 09:18:12 +0000</pubDate>
      <link>https://dev.to/ccbikai/broadcastchannel-turn-your-telegram-channel-into-a-microblog-5gjc</link>
      <guid>https://dev.to/ccbikai/broadcastchannel-turn-your-telegram-channel-into-a-microblog-5gjc</guid>
      <description>&lt;p&gt;I have been sharing some interesting tools on &lt;a href="https://x.com/ccbikai" rel="noopener noreferrer"&gt;X&lt;/a&gt; and also synchronizing them to my Telegram Channel. I saw that &lt;a href="https://x.com/austinit/status/1817832660758081651" rel="noopener noreferrer"&gt;Austin mentioned he is preparing to create a website&lt;/a&gt; to compile all the shared content. This reminded me of a template I recently came across called &lt;a href="https://github.com/Planetable/SiteTemplateSepia" rel="noopener noreferrer"&gt;Sepia&lt;/a&gt;, and I thought about converting the Telegram Channel into a microblog.&lt;/p&gt;

&lt;p&gt;It wasn't particularly difficult; I completed the main functionality over a weekend. Along the way I ended up with a browser-side implementation that ships zero JavaScript, and I'd like to share some interesting technical points:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The anti-spoiler mode and the show/hide toggle for the mobile search box were implemented with the CSS &lt;code&gt;:checked&lt;/code&gt; pseudo-class and the &lt;code&gt;+&lt;/code&gt; adjacent sibling combinator. &lt;a href="https://www.tpisoftware.com/tpu/articleDetails/2744" rel="noopener noreferrer"&gt;Reference&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The transition animations utilized CSS View Transitions. &lt;a href="https://liruifengv.com/posts/zero-js-view-transitions/" rel="noopener noreferrer"&gt;Reference&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The image lightbox used the HTML popover attribute. &lt;a href="https://developer.mozilla.org/zh-CN/docs/Web/HTML/Global_attributes/popover" rel="noopener noreferrer"&gt;Reference&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The show/hide behavior of the "back to top" button was implemented with CSS &lt;code&gt;animation-timeline&lt;/code&gt;, which requires Chrome 115 or later. &lt;a href="https://developer.mozilla.org/zh-CN/docs/Web/CSS/animation-timeline/view" rel="noopener noreferrer"&gt;Reference&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The multi-image masonry layout was achieved using grid layout. &lt;a href="https://www.smashingmagazine.com/native-css-masonry-layout-css-grid/" rel="noopener noreferrer"&gt;Reference&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Visit statistics were tracked with a 1px transparent image used as the logo background, an old-school technique that few analytics services still support.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;JavaScript execution in the browser was blocked with the Content-Security-Policy directive &lt;code&gt;script-src 'none'&lt;/code&gt;. &lt;a href="https://developer.mozilla.org/zh-CN/docs/Web/HTTP/Headers/Content-Security-Policy/script-src" rel="noopener noreferrer"&gt;Reference&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
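&lt;p&gt;As a minimal sketch of the first technique (the class and id names here are illustrative, not taken from the project): hide the search box by default, then let a hidden checkbox reveal it through the sibling chain.&lt;/p&gt;

```css
/* Assumed markup order: input#search-toggle, then its label, then .search-box. */
.search-box { display: none; }

/* When the (visually hidden) checkbox is :checked, the adjacent sibling
   combinator walks from it to the label and on to the search box. */
#search-toggle:checked + label + .search-box { display: block; }
```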

&lt;p&gt;After completing the project, I open-sourced it, and I was pleasantly surprised by the number of people who liked it; I received over 800 stars in just a week.&lt;/p&gt;

&lt;p&gt;If you're interested, you can check it out on GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ccbikai/BroadcastChannel" rel="noopener noreferrer"&gt;https://github.com/ccbikai/BroadcastChannel&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ccbikai/BroadcastChannel" rel="noopener noreferrer"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vA-hBi8q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.html.zone/ccbikai/BroadcastChannel" alt="BroadcastChannel repository on GitHub" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>telegram</category>
      <category>astro</category>
      <category>microblog</category>
      <category>github</category>
    </item>
    <item>
      <title>How to Replace Google Safe Browsing with Cloudflare Zero Trust</title>
      <dc:creator>MT</dc:creator>
      <pubDate>Sun, 07 Jul 2024 14:48:53 +0000</pubDate>
      <link>https://dev.to/ccbikai/how-to-replace-google-safe-browsing-with-cloudflare-zero-trust-edo</link>
      <guid>https://dev.to/ccbikai/how-to-replace-google-safe-browsing-with-cloudflare-zero-trust-edo</guid>
      <description>&lt;p&gt;So, get this, right? I built the first version of &lt;a href="https://loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo.ong/" rel="noopener noreferrer"&gt;L(O*62).ONG&lt;/a&gt; using server-side redirects, but Google slapped me with a security warning the very next day. Talk about a buzzkill! I had to scramble and switch to local redirects with a warning message before sending folks on their way. Then came the fun part – begging Google for forgiveness.&lt;/p&gt;

&lt;p&gt;Now, the smart money would've been on using Google Safe Browsing for redirects. But here's the catch: Safe Browsing's got a daily limit – 10,000 calls, and that's it. Plus, no custom lists. And since I'm all about keeping things simple and sticking with Cloudflare, Safe Browsing was a no-go.&lt;/p&gt;

&lt;p&gt;Fast forward to a while back, I was chewing the fat with someone online, and bam! It hit me like a bolt of lightning. Why not use a secure DNS server with built-in filters for adult content and all that shady stuff to check if a domain's on the up-and-up?  Figured I'd give &lt;a href="https://blog.cloudflare.com/zh-cn/introducing-1-1-1-1-for-families-zh-cn/" rel="noopener noreferrer"&gt;Family 1.1.1.1&lt;/a&gt; a shot, and guess what? It actually worked!  Problem was, no custom lists there either.  Then I remembered messing around with Cloudflare Zero Trust Gateway back in my &lt;a href="https://www.awesome-homelab.com/" rel="noopener noreferrer"&gt;HomeLab&lt;/a&gt; days.  Turns out, that was the golden ticket – a solution so good, it's almost criminal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's the deal: Cloudflare Zero Trust's Gateway comes packing a built-in DNS (DoH) server and lets you set up firewall rules like a boss. You can block stuff based on how risky a domain is, what kind of content it has, and even use your own custom naughty-and-nice lists. And get this – it pulls data from Cloudflare's own stash, over 30 open intelligence sources, fancy machine learning models, and even feedback from the community. Talk about covering all the bases! Want the nitty-gritty?  Hit up the &lt;a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/domain-categories/#docs-content" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, I went ahead and blocked all the high-risk categories – adult stuff, gambling sites, government domains, anything NSFW, newly registered domains, you name it. Plus, I've got my own little blacklists and whitelists that I keep nice and tidy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R8vxA77n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static.miantiao.me/share/2024/ROJmki/CleanShot%25202024-07-07%2520at%252022.22.25.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R8vxA77n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static.miantiao.me/share/2024/ROJmki/CleanShot%25202024-07-07%2520at%252022.22.25.png" alt="Risk List" width="800" height="1318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once I was done tweaking the settings, I got myself a shiny new DoH address:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mBleGIIo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static.miantiao.me/share/2024/iY5dK8/CleanShot%25202024-07-07%2520at%252022.26.23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mBleGIIo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static.miantiao.me/share/2024/iY5dK8/CleanShot%25202024-07-07%2520at%252022.26.23.png" alt="DoH" width="423" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To hook it up to my project, I used this handy-dandy code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function isSafeUrl(
  url,
  DoH = "https://family.cloudflare-dns.com/dns-query"
) {
  let safe = false;
  try {
    const { hostname } = new URL(url);
    const res = await fetch(`${DoH}?type=A&amp;amp;name=${hostname}`, {
      headers: {
        accept: "application/dns-json",
      },
      cf: {
        cacheEverything: true,
        cacheTtlByStatus: { "200-299": 86400 },
      },
    });
    const dnsResult = await res.json();
    if (dnsResult &amp;amp;&amp;amp; Array.isArray(dnsResult.Answer)) {
      const isBlock = dnsResult.Answer.some(
        answer =&amp;gt; answer.data === "0.0.0.0"
      );
      safe = !isBlock;
    }
  } catch (e) {
    console.warn("isSafeUrl fail: ", url, e);
  }
  return safe;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
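&lt;p&gt;If you want to poke at the blocking check without touching the network, the parsing step boils down to a tiny helper (the name &lt;code&gt;isBlockedAnswer&lt;/code&gt; is mine, not from the project):&lt;/p&gt;

```javascript
// Gateway signals a blocked domain by answering 0.0.0.0 for the A record,
// so deciding "blocked or not" needs only the Answer array from dns-json.
// isBlockedAnswer is a hypothetical helper name used for illustration.
function isBlockedAnswer(answers) {
  if (!Array.isArray(answers)) return false;
  return answers.some((answer) => answer.data === "0.0.0.0");
}

console.log(isBlockedAnswer([{ type: 1, data: "0.0.0.0" }])); // true
console.log(isBlockedAnswer([{ type: 1, data: "104.16.132.229" }])); // false
```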



&lt;p&gt;And here's the kicker: Cloudflare Zero Trust's management panel has this sweet visualization interface that lets you see what's getting blocked and what's not. You can see for yourself – it's put the kibosh on some adult sites and those brand-spanking-new domains.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oJMbilKt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static.miantiao.me/share/2024/5hOp5X/CleanShot%25202024-07-07%2520at%252022.30.36.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oJMbilKt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static.miantiao.me/share/2024/5hOp5X/CleanShot%25202024-07-07%2520at%252022.30.36.png" alt="Visualization Interface" width="800" height="1068"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Oh, and if a domain ends up on the wrong side of the tracks, you can always check the log to see what went down.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qyo5mb4F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static.miantiao.me/share/2024/EmRMB3/52WCkd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qyo5mb4F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static.miantiao.me/share/2024/EmRMB3/52WCkd.png" alt="Log" width="406" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Fh28RQrY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://c.statcounter.com/9823304/0/92a1d06c/1/" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Fh28RQrY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://c.statcounter.com/9823304/0/92a1d06c/1/" alt="stat" width="1" height="1"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>google</category>
      <category>cloudflare</category>
      <category>safebrowsing</category>
      <category>zerotrust</category>
    </item>
    <item>
      <title>Browser locally uses AI to remove image backgrounds</title>
      <dc:creator>MT</dc:creator>
      <pubDate>Sun, 07 Jul 2024 13:28:51 +0000</pubDate>
      <link>https://dev.to/ccbikai/browser-locally-uses-ai-to-remove-image-backgrounds-481e</link>
      <guid>https://dev.to/ccbikai/browser-locally-uses-ai-to-remove-image-backgrounds-481e</guid>
      <description>&lt;p&gt;Yo, so I've been digging into this whole AI thing for front-end development lately, and stumbled upon this cool Transformers.js example.  Turned it into a sweet little tool, check it out!&lt;/p&gt;

&lt;p&gt;Basically, it uses Transformers.js in a Web Worker to tap into WebGPU and run the RMBG-1.4 model. Long story short, you can now use AI to nuke image backgrounds right in your browser. And get this, it only takes half a second to process a 4K image on my M1 Pro!&lt;/p&gt;
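&lt;p&gt;For a rough idea of the flow, here's an illustrative sketch of the worker side (names follow the linked Transformers.js example but are approximate; check the repo for the exact API):&lt;/p&gt;

```javascript
// Runs inside a Web Worker so the UI thread stays responsive.
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';

// Load the RMBG-1.4 background-removal model once; WebGPU support
// requires Transformers.js v3.
const model = await AutoModel.from_pretrained('briaai/RMBG-1.4');
const processor = await AutoProcessor.from_pretrained('briaai/RMBG-1.4');

self.onmessage = async (event) => {
  const image = await RawImage.fromURL(event.data.url);
  const { pixel_values } = await processor(image);         // resize + normalize
  const { output } = await model({ input: pixel_values }); // alpha mask
  self.postMessage({ mask: output });
};
```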

&lt;p&gt;Here's the link to the tool: &lt;a href="https://html.zone/background-remover" rel="noopener noreferrer"&gt;https://html.zone/background-remover&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://html.zone/background-remover" rel="noopener noreferrer"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5ZmNRBcz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://og-image.html.zone/https://html.zone/background-remover" alt="AI background remover" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Wanna build it yourself?  Head over to &lt;a href="https://github.com/xenova/transformers.js/tree/main/examples/remove-background-client" rel="noopener noreferrer"&gt;https://github.com/xenova/transformers.js/tree/main/examples/remove-background-client&lt;/a&gt; for the source code.  Oh, and heads up, you gotta be on Transformers.js V3 to mess with WebGPU. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Fh28RQrY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://c.statcounter.com/9823304/0/92a1d06c/1/" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Fh28RQrY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://c.statcounter.com/9823304/0/92a1d06c/1/" alt="stat" width="1" height="1"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>browser</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
