<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kohei Aoki</title>
    <description>The latest articles on DEV Community by Kohei Aoki (@coa00).</description>
    <link>https://dev.to/coa00</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F673225%2Fae98e6d3-9db1-4220-ae81-e6c82476e859.jpg</url>
      <title>DEV Community: Kohei Aoki</title>
      <link>https://dev.to/coa00</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/coa00"/>
    <language>en</language>
    <item>
      <title>How a Custom Docker Image Made My AWS Amplify Builds 10–20% Faster (and Killed Flaky Build Failures)</title>
      <dc:creator>Kohei Aoki</dc:creator>
      <pubDate>Tue, 14 Apr 2026 07:54:35 +0000</pubDate>
      <link>https://dev.to/coa00/how-a-custom-docker-image-made-my-aws-amplify-builds-10-20-faster-and-killed-flaky-build-failures-49n9</link>
      <guid>https://dev.to/coa00/how-a-custom-docker-image-made-my-aws-amplify-builds-10-20-faster-and-killed-flaky-build-failures-49n9</guid>
      <description>&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;If you keep running an app on AWS Amplify for long enough, you'll hit this problem: &lt;strong&gt;your &lt;code&gt;amplify.yml&lt;/code&gt; quietly gets more complex and your builds quietly get slower&lt;/strong&gt;. Every new feature or tool swap adds one more line to &lt;code&gt;preBuild&lt;/code&gt;, and before you know it, no one on the team has the whole build graph in their head. Builds creep up, external downloads pile up, and flaky failures start showing up at 3 a.m.&lt;/p&gt;

&lt;p&gt;This is the kind of thing you have to &lt;strong&gt;revisit periodically&lt;/strong&gt;, and this post is about one of those revisits.&lt;/p&gt;

&lt;p&gt;Concretely: I swapped our AWS Amplify Gen2 build environment for a &lt;strong&gt;custom Docker image&lt;/strong&gt;, and our staging build time dropped from a median of &lt;strong&gt;9m43s to 8m45s&lt;/strong&gt; — about &lt;strong&gt;10% per build, roughly 1 minute&lt;/strong&gt;. Depending on your framework, the realistic range is &lt;strong&gt;10–20%, and Next.js apps can hit 50–70%&lt;/strong&gt;. On top of the raw speed, &lt;strong&gt;download-caused build failures dropped to zero&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This post walks through the design, implementation, real-world measurements, and operational tips from running this in production. If you operate a monorepo on AWS Amplify Gen2 and you're thinking "our builds are slower than they should be" or "I want to kill these flaky download-caused failures," this is for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I built a custom build image
&lt;/h2&gt;

&lt;p&gt;Our app is a pnpm workspace monorepo: &lt;code&gt;apps/web-app&lt;/code&gt; (Vite + React) plus &lt;code&gt;packages/gen2-shared-backend&lt;/code&gt; (Amplify Gen2 backend, Hono Lambda, CDK custom resources). With the default Amplify build image, every single build was paying these costs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;nvm install 22&lt;/code&gt; to reinstall Node.js: ~15s&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;corepack enable &amp;amp;&amp;amp; corepack prepare pnpm@10.28.1 --activate&lt;/code&gt;: ~10s&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;curl&lt;/code&gt;'ing a 64 MB Chromium Lambda Layer ZIP from GitHub Releases: ~20–40s&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;pnpm install --frozen-lockfile&lt;/code&gt; from an empty store: 60–90s&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's 2–3 minutes of pure setup every build. And as I'll show, the raw time wasn't even the biggest problem — &lt;strong&gt;stability was&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The trigger: a midnight build failure
&lt;/h3&gt;

&lt;p&gt;The real push came on &lt;code&gt;2026-03-22&lt;/code&gt; when our staging backend build broke. Digging in, I found the &lt;code&gt;curl&lt;/code&gt;'d Chromium Layer ZIP was corrupt — &lt;code&gt;Could not unzip uploaded file&lt;/code&gt;. GitHub Releases was apparently having a bad minute, and our &lt;code&gt;curl&lt;/code&gt; didn't even have &lt;code&gt;--fail&lt;/code&gt;, so the HTTP error body had been dutifully written to disk as a "ZIP file."&lt;/p&gt;

&lt;p&gt;We filed it in our build-fix logs as "P4: silent failure on external asset download." It's a pattern we've hit multiple times.&lt;/p&gt;

&lt;p&gt;At that point I had two choices:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Patch the &lt;code&gt;amplify.yml&lt;/code&gt;: add &lt;code&gt;--fail --retry 3 --retry-delay 5&lt;/code&gt;, add file size validation, make &lt;code&gt;curl&lt;/code&gt; careful.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bake the Chromium Layer into the build image itself and remove the download entirely.&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Option 1 still leaves 20–40 seconds of downloading every build, and the network-failure risk never really goes to zero. Option 2 burns the file in once, at image build time, and also lets us delete the &lt;code&gt;nvm&lt;/code&gt; and &lt;code&gt;pnpm&lt;/code&gt; setup steps. I picked option 2.&lt;/p&gt;
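&lt;p&gt;For the record, a hardened version of option 1 would have looked something like this. This is a sketch, not our production script: the release URL is the real Sparticuz layer, but the 1 MB threshold and the function layout are illustrative.&lt;/p&gt;

```shell
#!/usr/bin/env sh
# Fail unless the file is at least min_bytes. A few-KB HTTP error page
# masquerading as a "ZIP" will not pass this check.
validate_zip() {
  file="$1"
  min_bytes="${2:-1000000}"
  size=$(wc -c "$file" | awk '{ print $1 }')
  [ "$size" -ge "$min_bytes" ]
}

# Download with --fail so HTTP errors exit non-zero instead of writing
# the error body to disk, plus retries for transient outages.
download_layer() {
  version="$1"
  out="$2"
  curl --fail --silent --show-error --location \
       --retry 3 --retry-delay 5 \
       -o "$out" \
       "https://github.com/Sparticuz/chromium/releases/download/${version}/chromium-${version}-layer.zip"
  validate_zip "$out"
}
```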

&lt;h2&gt;
  
  
  The multi-stage Dockerfile design
&lt;/h2&gt;

&lt;p&gt;I split &lt;code&gt;docker/Dockerfile&lt;/code&gt; into multi-stage, producing two images from one base:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Stage&lt;/th&gt;
&lt;th&gt;Tag&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;build&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;latest&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Amplify custom build + GitHub Actions lint/build&lt;/td&gt;
&lt;td&gt;~800 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;e2e&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;e2e&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;GitHub Actions Playwright E2E&lt;/td&gt;
&lt;td&gt;~1.4 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  The base image mistake I made
&lt;/h3&gt;

&lt;p&gt;I started with &lt;code&gt;node:22-bookworm-slim&lt;/code&gt;. Debian-based, lightweight, official Node image — seemed like the obvious safe pick.&lt;/p&gt;

&lt;p&gt;Except builds got flaky once Amplify ran it. The root cause: &lt;strong&gt;Amplify's default build image is Amazon Linux 2 (now 2023)&lt;/strong&gt;. Different glibc versions mean &lt;code&gt;ampx&lt;/code&gt; and &lt;code&gt;aws-cdk&lt;/code&gt; pull different native binaries, and the difference can bite you in non-obvious ways.&lt;/p&gt;

&lt;p&gt;So I switched the base image to &lt;code&gt;amazonlinux:2023&lt;/code&gt;. &lt;strong&gt;Matching Amplify's default OS is the simplest, safest choice.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;amazonlinux:2023&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PNPM_HOME="/root/.local/share/pnpm" \&lt;/span&gt;
    PNPM_STORE_DIR="/root/.local/share/pnpm/store" \
    BUN_INSTALL="/root/.bun" \
    NVM_DIR="/root/.nvm" \
    CHROMIUM_LAYER_DIR="/opt/chromium-layer" \
    CHROMIUM_LAYER_VERSION="v127.0.0"

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PATH="$PNPM_HOME:$BUN_INSTALL/bin:$PATH"&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;dnf update &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; dnf &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    git openssh-clients bash jq &lt;span class="nb"&gt;tar &lt;/span&gt;wget zip unzip &lt;span class="nb"&gt;gzip&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    which findutils procps-ng ca-certificates &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; dnf clean all &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/cache/dnf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What's in the image
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Node.js 22 LTS&lt;/strong&gt; (via nvm)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pnpm 10.28.1&lt;/strong&gt; (via corepack — still pnpm for workspace management)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;bun&lt;/strong&gt; — we can't fully migrate (bun has no &lt;code&gt;--filter&lt;/code&gt;, and CDK/ampx compatibility is unclear), but we use bun for the Hono Lambda's isolated install and for tsx script execution&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CLI v2&lt;/strong&gt; — required for &lt;code&gt;ampx&lt;/code&gt;, &lt;code&gt;appsync&lt;/code&gt;, &lt;code&gt;ssm&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;aws-cdk&lt;/strong&gt; (global)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chromium Lambda Layer v127.0.0&lt;/strong&gt; — pre-baked to &lt;code&gt;/opt/chromium-layer/&lt;/code&gt;. We validate the file size at image build time and hard-fail if it's under 1 MB:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CHROMIUM_LAYER_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl &lt;span class="nt"&gt;-fSL&lt;/span&gt; &lt;span class="nt"&gt;--retry&lt;/span&gt; 3 &lt;span class="nt"&gt;--retry-delay&lt;/span&gt; 5 &lt;span class="se"&gt;\
&lt;/span&gt;       &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CHROMIUM_LAYER_DIR&lt;/span&gt;&lt;span class="s2"&gt;/chromium-&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CHROMIUM_LAYER_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-layer.zip"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;       &lt;span class="s2"&gt;"https://github.com/Sparticuz/chromium/releases/download/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CHROMIUM_LAYER_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/chromium-&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CHROMIUM_LAYER_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-layer.zip"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nv"&gt;LAYER_SIZE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;stat&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt;%s &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CHROMIUM_LAYER_DIR&lt;/span&gt;&lt;span class="s2"&gt;/chromium-&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CHROMIUM_LAYER_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-layer.zip"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LAYER_SIZE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-lt&lt;/span&gt; 1000000 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;         &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ERROR: Chromium layer is only &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;LAYER_SIZE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; bytes"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;       &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The trick is running &lt;strong&gt;&lt;code&gt;curl -fSL --retry 3&lt;/code&gt; + size validation at image build time&lt;/strong&gt;, not production build time. If something fails here, that image version just never makes it to ECR — our production Amplify builds never see the failure. We cut off the "3 a.m. build break" path at the root.&lt;/p&gt;
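&lt;p&gt;Building and publishing the two stages is unremarkable. A minimal sketch (the ECR Public alias is the one from this article; everything else is standard Docker and AWS CLI, and ECR Public auth tokens are always issued from &lt;code&gt;us-east-1&lt;/code&gt;):&lt;/p&gt;

```shell
# Build both stages from the one Dockerfile
docker build -f docker/Dockerfile --target build -t amplify-build:latest .
docker build -f docker/Dockerfile --target e2e -t amplify-build:e2e .

# Log in to ECR Public (auth region is always us-east-1)
aws ecr-public get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin public.ecr.aws

docker tag amplify-build:latest public.ecr.aws/j9g5b1t3/amplify-build:latest
docker tag amplify-build:e2e public.ecr.aws/j9g5b1t3/amplify-build:e2e
docker push public.ecr.aws/j9g5b1t3/amplify-build:latest
docker push public.ecr.aws/j9g5b1t3/amplify-build:e2e
```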

&lt;h3&gt;
  
  
  Lock down the pnpm store path across every context
&lt;/h3&gt;

&lt;p&gt;This is the single biggest speedup. pnpm install speed is determined by whether the tarball is already in the content-addressable store:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Empty store → 60–90 seconds (full download every time)&lt;/li&gt;
&lt;li&gt;Store populated → &lt;strong&gt;5–15 seconds&lt;/strong&gt; (hardlinks only)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You need the same &lt;code&gt;PNPM_STORE_DIR&lt;/code&gt; in &lt;strong&gt;Dockerfile, Amplify build, and GitHub Actions&lt;/strong&gt;, and you need to list that path in Amplify's &lt;code&gt;cache.paths&lt;/code&gt;. If they don't agree, your second build still pays the full download cost.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$NVM_DIR&lt;/span&gt;&lt;span class="s2"&gt;/nvm.sh"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; corepack &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; corepack prepare pnpm@10.28.1 &lt;span class="nt"&gt;--activate&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; pnpm config &lt;span class="nb"&gt;set &lt;/span&gt;store-dir &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PNPM_STORE_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; pnpm &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Simplifying amplify.yml
&lt;/h2&gt;

&lt;p&gt;The clearest before/after is in the &lt;code&gt;amplify.yml&lt;/code&gt; itself.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;phases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;preBuild&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;nvm install 22 &amp;amp;&amp;amp; nvm use &lt;/span&gt;&lt;span class="m"&gt;22&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;corepack enable &amp;amp;&amp;amp; corepack prepare pnpm@10.28.1 --activate&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;curl -L \&lt;/span&gt;
            &lt;span class="s"&gt;-o packages/gen2-shared-backend/layer/chromium-v127.0.0-layer.zip \&lt;/span&gt;
            &lt;span class="s"&gt;https://github.com/Sparticuz/chromium/releases/download/v127.0.0/chromium-v127.0.0-layer.zip&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;pnpm install --frozen-lockfile&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every build: reinstall Node, set up pnpm, download a 64 MB ZIP from GitHub Releases, &lt;em&gt;then&lt;/em&gt; finally run the real install. No &lt;code&gt;--fail&lt;/code&gt;, no retries, no size check on that &lt;code&gt;curl&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  After
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;phases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;preBuild&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Custom image already has pnpm. Fallback just in case.&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;command -v pnpm &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 || { nvm install &amp;amp;&amp;amp; nvm use &amp;amp;&amp;amp; corepack enable &amp;amp;&amp;amp; corepack prepare pnpm@10.28.1 --activate; }&lt;/span&gt;
        &lt;span class="c1"&gt;# Chromium Layer is baked in — just copy it&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;LAYER_DIR=/opt/chromium-layer&lt;/span&gt;
          &lt;span class="s"&gt;if [ -f "$LAYER_DIR/chromium-v127.0.0-layer.zip" ]; then&lt;/span&gt;
            &lt;span class="s"&gt;cp "$LAYER_DIR/chromium-v127.0.0-layer.zip" packages/gen2-shared-backend/layer/&lt;/span&gt;
          &lt;span class="s"&gt;fi&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;pnpm install --frozen-lockfile&lt;/span&gt;
&lt;span class="na"&gt;cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/root/.local/share/pnpm/store&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;node_modules/.pnpm&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;apps/web-app/.build-cache&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;packages/gen2-shared-backend/.build-cache&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I kept the &lt;code&gt;command -v pnpm&lt;/code&gt; fallback intentionally: &lt;strong&gt;the amplify.yml should still work even if the custom image isn't in effect&lt;/strong&gt;. I learned this the hard way — one PR removed the fallback, Amplify quietly fell back to the standard image for a branch, and every build died with &lt;code&gt;pnpm: command not found&lt;/code&gt;. Design the amplify.yml so custom image = fast, no custom image = still works.&lt;/p&gt;

&lt;h2&gt;
  
  
  The numbers
&lt;/h2&gt;

&lt;p&gt;I pulled staging build times via &lt;code&gt;aws amplify list-jobs&lt;/code&gt;, filtered to successful jobs, and compared the windows before and after the image switch:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Window&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;Median&lt;/th&gt;
&lt;th&gt;Trimmed mean (trim top/bottom 10%)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Before&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;38&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;9m43s&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;9m48s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;After&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;57&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;8m45s&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;8m53s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;~58 seconds faster at the median, a 10.0% cut&lt;/strong&gt;. A minute per build doesn't sound like much, but at ~50 builds a week that's roughly 50 minutes of lead time saved every week.&lt;/p&gt;
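&lt;p&gt;If you want to reproduce the measurement, the only non-obvious part is computing a median in the shell. A small helper; the &lt;code&gt;aws&lt;/code&gt;/&lt;code&gt;jq&lt;/code&gt; pipeline in the comment is a sketch, and exact timestamp handling depends on your CLI output format:&lt;/p&gt;

```shell
# Median of newline-separated numbers on stdin.
median() {
  sort -n | awk '{ a[NR] = $1 }
    END {
      if (NR % 2) print a[(NR + 1) / 2]
      else printf "%g\n", (a[NR / 2] + a[NR / 2 + 1]) / 2
    }'
}

# Usage sketch: durations (seconds) of successful jobs, piped into median.
#   aws amplify list-jobs --app-id "$APP_ID" --branch-name staging --output json |
#     jq -r '.jobSummaries[] | select(.status == "SUCCEED")
#            | (.endTime | fromdateiso8601) - (.startTime | fromdateiso8601)' |
#     median
```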

&lt;h3&gt;
  
  
  Your framework caps your ceiling
&lt;/h3&gt;

&lt;p&gt;The most important thing to understand: &lt;strong&gt;the upper bound of this speedup is set by your framework&lt;/strong&gt;, not by how clever your Dockerfile is. Here's the table I put together when designing the migration, with expected speedup per framework:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;First build&lt;/th&gt;
&lt;th&gt;Subsequent builds&lt;/th&gt;
&lt;th&gt;Speedup&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Next.js (SSR)&lt;/td&gt;
&lt;td&gt;90–150s&lt;/td&gt;
&lt;td&gt;30–60s&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;50–70%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;React Router v7 / Remix&lt;/td&gt;
&lt;td&gt;30–60s&lt;/td&gt;
&lt;td&gt;25–50s&lt;/td&gt;
&lt;td&gt;10–20%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Vite (our current setup)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;60–90s&lt;/td&gt;
&lt;td&gt;50–80s&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10–15%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Our measured 10.0% landed right in the middle of the Vite band. &lt;strong&gt;Figuring out which band you're in tells you, up front, whether this project is worth the effort.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vite doesn't benefit much because &lt;strong&gt;Vite has no persistent production build cache&lt;/strong&gt; (see &lt;code&gt;vitejs/vite#15092&lt;/code&gt;). &lt;code&gt;node_modules/.vite/&lt;/code&gt; is just the dev-server dependency pre-bundle — it does nothing for &lt;code&gt;pnpm build&lt;/code&gt;. So the only win for Vite projects is the pnpm store cache.&lt;/p&gt;

&lt;p&gt;Next.js is a different story entirely. &lt;code&gt;.next/cache/webpack&lt;/code&gt; persists compiled Webpack/SWC chunks — that's a real production build cache, and the second-build speedup is an order of magnitude bigger. &lt;strong&gt;If you're on Next.js, this is unambiguously worth doing.&lt;/strong&gt;&lt;/p&gt;
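&lt;p&gt;For a Next.js app, the corresponding &lt;code&gt;amplify.yml&lt;/code&gt; cache section would look something like this (a sketch; the &lt;code&gt;apps/next-app&lt;/code&gt; path is illustrative, so adjust it to your monorepo layout):&lt;/p&gt;

```yaml
cache:
  paths:
    - /root/.local/share/pnpm/store   # pnpm content-addressable store
    - node_modules/.pnpm
    - apps/next-app/.next/cache       # persisted Webpack/SWC build cache
```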

&lt;h2&gt;
  
  
  Three wins bigger than the minute of speedup
&lt;/h2&gt;

&lt;p&gt;Honestly, the one-minute median speedup wasn't even the main win. These three side effects mattered more:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Build failures dropped to zero
&lt;/h3&gt;

&lt;p&gt;This is the biggest one. Before the image, we were losing builds a few times a month to &lt;code&gt;curl&lt;/code&gt; failures on the Chromium Layer or GitHub Releases rate limits. You know the drill: stop the release at midnight, retry it before standup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Since the switch, zero failures from external downloads.&lt;/strong&gt; The ZIP is baked into the image, and the image build itself validates the size, so by the time a production Amplify build runs, "the file exists and is the right size" has already been verified once, at image build time.&lt;/p&gt;

&lt;p&gt;That peace of mind is worth more than the minute to me.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The "which Node version are we on?" question went away
&lt;/h3&gt;

&lt;p&gt;Amplify's standard image updates on its own schedule. One day Node's minor version jumps, a preinstalled tool changes, and suddenly something that passed locally fails in CI. We've been bitten by this multiple times.&lt;/p&gt;

&lt;p&gt;With the custom image, &lt;strong&gt;&lt;code&gt;amazonlinux:2023&lt;/code&gt; + &lt;code&gt;Node.js 22 LTS&lt;/code&gt; + &lt;code&gt;pnpm 10.28.1&lt;/code&gt; are pinned everywhere&lt;/strong&gt; — local dev, CI, and all four Amplify environments (prod / staging + two regional variants) all run the same runtime derived from the same Dockerfile. Quietly huge for reproducibility.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. The next project gets the payoff
&lt;/h3&gt;

&lt;p&gt;Our next project is going to be Next.js. Same image, and by design it should land in the 50–70% band. The 10% on the Vite app is really the down payment — the real ROI compounds as we add more projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you run multiple monorepos, the horizontal expansion cost is ~zero after the first one.&lt;/strong&gt; That matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus: it works in GitHub Actions too
&lt;/h2&gt;

&lt;p&gt;The image isn't Amplify-specific. &lt;strong&gt;You can drop it straight into GitHub Actions via &lt;code&gt;container:&lt;/code&gt;&lt;/strong&gt;, which removes &lt;code&gt;setup-node&lt;/code&gt; and &lt;code&gt;setup-pnpm&lt;/code&gt; from your workflow entirely. That's another 30–60 seconds shaved off CI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;e2e&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;public.ecr.aws/j9g5b1t3/amplify-build:e2e&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v6&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/cache@v5&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/root/.local/share/pnpm/store&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pnpm-store-${{ hashFiles('pnpm-lock.yaml') }}&lt;/span&gt;
          &lt;span class="na"&gt;restore-keys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pnpm-store-&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pnpm install --frozen-lockfile&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pnpm test:e2e&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;e2e&lt;/code&gt; stage also has Playwright + Chromium pre-baked, so &lt;code&gt;npx playwright install&lt;/code&gt; is gone too. The really nice structural property here is that &lt;strong&gt;"Amplify and CI ran with slightly different runtimes" is now impossible&lt;/strong&gt; — they're literally the same image. Point &lt;code&gt;actions/cache&lt;/code&gt; at the same store path you list in Amplify's &lt;code&gt;cache.paths&lt;/code&gt; and the entire caching layer is unified too.&lt;/p&gt;

&lt;p&gt;If you've been putting off writing a Dockerfile "just for Amplify," the honest framing is: &lt;strong&gt;you're writing it for CI too&lt;/strong&gt;, and that makes the investment much easier to justify.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operational tips
&lt;/h2&gt;

&lt;p&gt;A few things I learned the hard way.&lt;/p&gt;

&lt;h3&gt;
  
  
  The NODE_OPTIONS trap
&lt;/h3&gt;

&lt;p&gt;I initially baked &lt;code&gt;ENV NODE_OPTIONS="--max-old-space-size=6144"&lt;/code&gt; into the Dockerfile. Even on Amplify's STANDARD_8GB compute, the OOM killer still came for us. Turns out that when pnpm, Vite, and tsx spike simultaneously, peak memory can exceed 6 GB — and &lt;code&gt;NODE_OPTIONS&lt;/code&gt; inherits into every Node process.&lt;/p&gt;

&lt;p&gt;I dropped it to 4096, then eventually &lt;strong&gt;removed &lt;code&gt;NODE_OPTIONS&lt;/code&gt; from the Dockerfile entirely&lt;/strong&gt;. Amplify's runtime controls this when it needs to; pinning it in the image just overrode that logic. &lt;strong&gt;When in doubt, leave the image "plain" and let Amplify manage runtime constraints.&lt;/strong&gt;&lt;/p&gt;
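&lt;p&gt;If one step genuinely needs a larger heap, scoping it to that single command in &lt;code&gt;amplify.yml&lt;/code&gt; is safer than baking a global &lt;code&gt;ENV&lt;/code&gt; into the image. A sketch; the 4096 value and the &lt;code&gt;web-app&lt;/code&gt; filter are illustrative:&lt;/p&gt;

```yaml
build:
  commands:
    # Scoped to this one process; nothing else inherits it
    - NODE_OPTIONS=--max-old-space-size=4096 pnpm --filter web-app build
```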

&lt;h3&gt;
  
  
  Assume &lt;code&gt;rsync&lt;/code&gt; doesn't exist
&lt;/h3&gt;

&lt;p&gt;I had a postBuild step doing &lt;code&gt;rsync --exclude='*.map' dist/ out/&lt;/code&gt; to strip source maps. It failed in Amplify's default environment because &lt;code&gt;rsync&lt;/code&gt; isn't preinstalled.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Before&lt;/span&gt;
rsync &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;--exclude&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'*.map'&lt;/span&gt; dist/ out/

&lt;span class="c"&gt;# After (find + cp alternative)&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; out &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;dist &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; find &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-type&lt;/span&gt; f &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s1"&gt;'*.map'&lt;/span&gt; &lt;span class="nt"&gt;-exec&lt;/span&gt; &lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;--parents&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt; ../out/ &lt;span class="se"&gt;\;&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I could have just added &lt;code&gt;rsync&lt;/code&gt; to the custom image, but the principle here matters: &lt;strong&gt;the amplify.yml should work even when the custom image isn't active&lt;/strong&gt;. If you lean too hard on "but my custom image has X," you make your deployment fragile. Keep the amplify.yml executable on vanilla Amplify.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try the public image yourself
&lt;/h2&gt;

&lt;p&gt;The actual image from this article is on ECR Public — no auth needed, pull it directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# build stage (for Amplify custom build / CI lint / build)&lt;/span&gt;
docker pull public.ecr.aws/j9g5b1t3/amplify-build:latest

&lt;span class="c"&gt;# e2e stage (includes Playwright + Chromium)&lt;/span&gt;
docker pull public.ecr.aws/j9g5b1t3/amplify-build:e2e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gallery&lt;/strong&gt;: &lt;a href="https://gallery.ecr.aws/j9g5b1t3/amplify-build" rel="noopener noreferrer"&gt;https://gallery.ecr.aws/j9g5b1t3/amplify-build&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contains&lt;/strong&gt;: Amazon Linux 2023 / Node.js 22 / pnpm 10.28.1 / bun / AWS CLI v2 / CDK / Chromium Lambda Layer v127.0.0&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Caveat&lt;/strong&gt;: This registry is published for the article. For long-term production use, fork the image into your own ECR so you control tags and lifecycle.&lt;/li&gt;
&lt;/ul&gt;
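&lt;p&gt;A quick sanity check before wiring the image into Amplify is to print the pinned toolchain versions straight out of it. One assumption here: because Node is installed via nvm, it may only be on &lt;code&gt;PATH&lt;/code&gt; in a login shell, hence &lt;code&gt;bash -lc&lt;/code&gt;:&lt;/p&gt;

```shell
# Should report Node 22.x, pnpm 10.28.1, and the baked-in Chromium layer ZIP
docker pull public.ecr.aws/j9g5b1t3/amplify-build:latest
docker run --rm public.ecr.aws/j9g5b1t3/amplify-build:latest \
  bash -lc 'node --version; pnpm --version; ls /opt/chromium-layer'
```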

&lt;h2&gt;
  
  
  Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Custom Docker image for AWS Amplify Gen2 monorepo (Vite): ~1 minute (10%) faster on median.&lt;/strong&gt; Realistic range by framework: &lt;strong&gt;10–20% for most, 50–70% for Next.js.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Framework characteristics set the ceiling.&lt;/strong&gt; Vite: 10–15%. React Router v7 / Remix: 10–20%. Next.js: 50–70%. Decide whether to invest based on which band you're in, before you start.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The speedup isn't the main prize.&lt;/strong&gt; Zero download-caused build failures, unified runtime across all environments, and near-zero cost to roll this out to the next project — those are what actually matter day to day.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep the amplify.yml backward-compatible.&lt;/strong&gt; &lt;code&gt;command -v pnpm&lt;/code&gt; fallback + existence check for &lt;code&gt;/opt/chromium-layer/&lt;/code&gt; = your production doesn't die if the custom image ever isn't in effect.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Build-time improvements tend to get framed as "make it faster," but in practice, &lt;strong&gt;"make it not fail"&lt;/strong&gt; is just as important — maybe more. A custom Docker image gets you both for one piece of work. That's a good trade.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/Sparticuz/chromium" rel="noopener noreferrer"&gt;Sparticuz/chromium&lt;/a&gt; — the Chromium Lambda Layer&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pnpm.io/symlinked-node-modules-structure" rel="noopener noreferrer"&gt;How pnpm's store works&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/vitejs/vite/issues/15092" rel="noopener noreferrer"&gt;Vite issue tracking persistent production build cache (vitejs/vite#15092)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>amplify</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>How I Use 4 Terminal Setups with Claude Code Agent Teams</title>
      <dc:creator>Kohei Aoki</dc:creator>
      <pubDate>Tue, 31 Mar 2026 13:58:16 +0000</pubDate>
      <link>https://dev.to/coa00/how-i-use-4-terminal-setups-with-claude-code-agent-teams-52i9</link>
      <guid>https://dev.to/coa00/how-i-use-4-terminal-setups-with-claude-code-agent-teams-52i9</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I stopped opening my IDE. Claude Code is my development environment now — VSCode is only for the occasional visual check.&lt;/p&gt;

&lt;p&gt;One feature I use daily is &lt;a href="https://code.claude.com/docs/agent-teams" rel="noopener noreferrer"&gt;Agent Teams&lt;/a&gt;: multiple Claude Code instances working as a coordinated team, with one leader assigning tasks and members working independently in their own context windows. It's especially effective for parallel investigation, code review, and debugging.&lt;/p&gt;

&lt;p&gt;Agent Teams support two display modes: &lt;strong&gt;in-process mode&lt;/strong&gt; (any terminal) and &lt;strong&gt;split-pane mode&lt;/strong&gt; (each member gets its own pane). Split-pane mode requires &lt;strong&gt;tmux or iTerm2&lt;/strong&gt;, which sent me on a search for the best terminal setup on Mac.&lt;/p&gt;

&lt;p&gt;I tested four environments, found that each has trade-offs, and ended up building &lt;a href="https://github.com/coa00/ghostty-session-picker" rel="noopener noreferrer"&gt;an fzf session picker&lt;/a&gt; that lets me choose the right one every time I open a window.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Need from a Terminal
&lt;/h2&gt;

&lt;p&gt;When Claude Code is your primary development tool, terminal requirements expand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clickable URLs and file paths&lt;/strong&gt; — Claude outputs links constantly (PR URLs, docs, deploy URLs)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://code.claude.com/docs/agent-teams" rel="noopener noreferrer"&gt;Agent Teams&lt;/a&gt; split-pane mode&lt;/strong&gt; — run multiple agents in parallel with full visibility&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session persistence&lt;/strong&gt; — stop work today, resume tomorrow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Japanese input&lt;/strong&gt; — I write prompts in Japanese daily (relevant for CJK users)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Comparing 4 Setups
&lt;/h2&gt;

&lt;p&gt;Legend: ◎ = built-in, ○ = requires config/plugin, △ = limited, × = not supported&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Ghostty&lt;/th&gt;
&lt;th&gt;iTerm2&lt;/th&gt;
&lt;th&gt;Ghostty + tmux&lt;/th&gt;
&lt;th&gt;Ghostty + zellij&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Rendering speed&lt;/td&gt;
&lt;td&gt;◎&lt;/td&gt;
&lt;td&gt;△ slow&lt;/td&gt;
&lt;td&gt;◎&lt;/td&gt;
&lt;td&gt;◎&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Open URLs / files&lt;/td&gt;
&lt;td&gt;◎ Cmd+Click&lt;/td&gt;
&lt;td&gt;◎&lt;/td&gt;
&lt;td&gt;◎ Cmd+Shift+Click&lt;/td&gt;
&lt;td&gt;◎ Cmd+Shift+Click&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CJK input&lt;/td&gt;
&lt;td&gt;◎&lt;/td&gt;
&lt;td&gt;◎&lt;/td&gt;
&lt;td&gt;○ Ghostty config&lt;/td&gt;
&lt;td&gt;◎&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Shift+Enter newline&lt;/td&gt;
&lt;td&gt;◎&lt;/td&gt;
&lt;td&gt;◎&lt;/td&gt;
&lt;td&gt;○ Ghostty config&lt;/td&gt;
&lt;td&gt;◎&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Session persistence&lt;/td&gt;
&lt;td&gt;×&lt;/td&gt;
&lt;td&gt;×&lt;/td&gt;
&lt;td&gt;○ plugin&lt;/td&gt;
&lt;td&gt;◎ built-in&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Status line display&lt;/td&gt;
&lt;td&gt;×&lt;/td&gt;
&lt;td&gt;×&lt;/td&gt;
&lt;td&gt;○ tmux config&lt;/td&gt;
&lt;td&gt;×&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent Teams split pane&lt;/td&gt;
&lt;td&gt;×&lt;/td&gt;
&lt;td&gt;○ it2 CLI + Python API&lt;/td&gt;
&lt;td&gt;◎&lt;/td&gt;
&lt;td&gt;×&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ease of use&lt;/td&gt;
&lt;td&gt;◎&lt;/td&gt;
&lt;td&gt;◎&lt;/td&gt;
&lt;td&gt;△ many keybindings&lt;/td&gt;
&lt;td&gt;◎ UI guides&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Ghostty — Fastest, Zero Config
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://ghostty.org/" rel="noopener noreferrer"&gt;Ghostty&lt;/a&gt; (v1.3.1) is a GPU-accelerated cross-platform terminal emulator. On macOS it uses Metal, and startup feels under 0.1 seconds.&lt;/p&gt;

&lt;p&gt;The killer feature for Claude Code is &lt;strong&gt;clickable URLs&lt;/strong&gt;. With &lt;code&gt;link-url&lt;/code&gt;, Cmd+Click opens any URL in your browser. Since Claude Code constantly outputs PR links, doc URLs, and deploy URLs, this is a significant productivity boost.&lt;/p&gt;
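&lt;p&gt;For reference, the relevant option is a single line (URL matching is on by default in current Ghostty, so this is illustrative rather than required):&lt;/p&gt;

```conf
# ~/.config/ghostty/config
link-url = true
```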

&lt;p&gt;&lt;strong&gt;Weakness&lt;/strong&gt;: No session management. Agent Teams only work in in-process mode — no split panes.&lt;/p&gt;

&lt;h3&gt;
  
  
  iTerm2 — Feature-Rich and Stable
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://iterm2.com/" rel="noopener noreferrer"&gt;iTerm2&lt;/a&gt; is the classic macOS terminal. It's stable, feature-rich, and supports Agent Teams split-pane mode (requires &lt;a href="https://github.com/mkusaka/it2" rel="noopener noreferrer"&gt;&lt;code&gt;it2&lt;/code&gt; CLI&lt;/a&gt; installation and enabling the Python API).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Weakness&lt;/strong&gt;: Noticeably slower rendering compared to Ghostty. When Claude Code outputs long responses, iTerm2 shows visible lag that Ghostty doesn't.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ghostty + tmux — Extensible with Config and Plugins
&lt;/h3&gt;

&lt;p&gt;Combining &lt;a href="https://github.com/tmux/tmux/wiki/Installing" rel="noopener noreferrer"&gt;tmux&lt;/a&gt; with Ghostty unlocks Agent Teams split-pane mode, plus:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Session persistence&lt;/strong&gt;: &lt;a href="https://github.com/tmux-plugins/tmux-resurrect" rel="noopener noreferrer"&gt;tmux-resurrect&lt;/a&gt; + &lt;a href="https://github.com/tmux-plugins/tmux-continuum" rel="noopener noreferrer"&gt;tmux-continuum&lt;/a&gt; save and restore window layouts, pane arrangements, and working directories&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://code.claude.com/docs/status-line" rel="noopener noreferrer"&gt;Status line display&lt;/a&gt;&lt;/strong&gt;: Show Claude Code's project name, model, and context usage in tmux's status bar&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Some Ghostty features are restricted through tmux. URL clicking works with Cmd+Shift+Click, and Shift+Enter requires a Ghostty config tweak. See the configuration section below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ghostty + zellij — Built-in Features
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://zellij.dev/" rel="noopener noreferrer"&gt;zellij&lt;/a&gt; (v0.43.1) is a Rust-based terminal workspace. It displays UI guides, so you don't need to memorize keybindings like tmux. Session persistence is built in.&lt;/p&gt;

&lt;p&gt;Overall, zellij is more intuitive than tmux and easier to pick up if you're new to terminal multiplexers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Weakness&lt;/strong&gt;: Agent Teams split-pane mode is not supported. URL clicking works with Cmd+Shift+Click, same as tmux.&lt;/p&gt;

&lt;h2&gt;
  
  
  Recommendations by Use Case
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Use case&lt;/th&gt;
&lt;th&gt;Recommendation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Daily Claude Code usage&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Ghostty&lt;/strong&gt; (fast, clickable URLs)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent Teams with split panes&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Ghostty + tmux&lt;/strong&gt; (Shift+Enter fix via config)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Session persistence + intuitive UX&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Ghostty + zellij&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stability, minimal parallel work&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;iTerm2&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Configuration for Ghostty + tmux/zellij
&lt;/h2&gt;

&lt;h3&gt;
  
  
  tmux: Shift+Enter Fix
&lt;/h3&gt;

&lt;p&gt;When running through tmux, Shift+Enter doesn't reach Claude Code as a newline. Add this to your Ghostty config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="c"&gt;# ~/.config/ghostty/config
&lt;/span&gt;&lt;span class="n"&gt;keybind&lt;/span&gt; = &lt;span class="n"&gt;shift&lt;/span&gt;+&lt;span class="n"&gt;enter&lt;/span&gt;=&lt;span class="n"&gt;text&lt;/span&gt;:\&lt;span class="n"&gt;x1b&lt;/span&gt;[&lt;span class="m"&gt;13&lt;/span&gt;;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="n"&gt;u&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  tmux: Session Persistence
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# tmux.conf&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @plugin &lt;span class="s1"&gt;'tmux-plugins/tmux-resurrect'&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @plugin &lt;span class="s1"&gt;'tmux-plugins/tmux-continuum'&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @resurrect-capture-pane-contents &lt;span class="s1"&gt;'on'&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @continuum-restore &lt;span class="s1"&gt;'on'&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @continuum-save-interval &lt;span class="s1"&gt;'15'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  tmux: Claude Code Status Line
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# tmux.conf&lt;/span&gt;
&lt;span class="c"&gt;# Left: project name&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; status-left &lt;span class="s2"&gt;"#[fg=#89b4fa,bold] #(cat ~/.claude/tmux-status-left.txt 2&amp;gt;/dev/null || echo '#S') "&lt;/span&gt;
&lt;span class="c"&gt;# Right: model, context usage, rate limit, cost&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; status-right &lt;span class="s2"&gt;"#[fg=#a6e3a1]#(cat ~/.claude/tmux-status-right.txt 2&amp;gt;/dev/null)#[default]"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
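&lt;p&gt;Those two files need something to write them. One way (an assumption; the article doesn't show its exact script) is a Claude Code status line command that extracts fields from the JSON Claude Code pipes to it on stdin:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Sketch of a status-line command (hypothetical; wire it up via the
# statusLine setting in Claude Code's settings.json). Claude Code sends
# status JSON on stdin; we pull out fields and emit the strings that
# would be written to the files tmux polls.
render_status() {
  local input model dir
  input=$(cat)
  model=$(printf '%s' "$input" | jq -r '.model.display_name')
  dir=$(basename "$(printf '%s' "$input" | jq -r '.workspace.current_dir')")
  printf '%s\n' "$dir"    # destined for ~/.claude/tmux-status-left.txt
  printf '%s\n' "$model"  # destined for ~/.claude/tmux-status-right.txt
}

# Example with the kind of JSON Claude Code sends:
echo '{"model":{"display_name":"Claude"},"workspace":{"current_dir":"/tmp/myproj"}}' | render_status
```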



&lt;h3&gt;
  
  
  zellij: Session Persistence
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// zellij config.kdl
session_serialization true
serialize_pane_viewport true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Ghostty: Keybinding Adjustments (Shared)
&lt;/h3&gt;

&lt;p&gt;When using tmux or zellij, Ghostty keybindings can conflict:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="c"&gt;# ~/.config/ghostty/config
&lt;/span&gt;&lt;span class="n"&gt;keybind&lt;/span&gt; = &lt;span class="n"&gt;super&lt;/span&gt;+&lt;span class="n"&gt;t&lt;/span&gt;=&lt;span class="n"&gt;unbind&lt;/span&gt;
&lt;span class="n"&gt;keybind&lt;/span&gt; = &lt;span class="n"&gt;super&lt;/span&gt;+&lt;span class="n"&gt;n&lt;/span&gt;=&lt;span class="n"&gt;unbind&lt;/span&gt;
&lt;span class="n"&gt;keybind&lt;/span&gt; = &lt;span class="n"&gt;super&lt;/span&gt;+&lt;span class="n"&gt;w&lt;/span&gt;=&lt;span class="n"&gt;unbind&lt;/span&gt;
&lt;span class="n"&gt;keybind&lt;/span&gt; = &lt;span class="n"&gt;super&lt;/span&gt;+&lt;span class="n"&gt;c&lt;/span&gt;=&lt;span class="n"&gt;copy_to_clipboard&lt;/span&gt;
&lt;span class="n"&gt;keybind&lt;/span&gt; = &lt;span class="n"&gt;super&lt;/span&gt;+&lt;span class="n"&gt;v&lt;/span&gt;=&lt;span class="n"&gt;paste_from_clipboard&lt;/span&gt;
&lt;span class="n"&gt;keybind&lt;/span&gt; = &lt;span class="n"&gt;super&lt;/span&gt;+&lt;span class="n"&gt;q&lt;/span&gt;=&lt;span class="n"&gt;quit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  fzf Session Picker
&lt;/h2&gt;

&lt;p&gt;Since the best setup depends on what I'm doing, I built a script that lets me choose every time I open a new Ghostty window.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌──────────────────────────────────────┐
│ session &amp;gt;                            │
│ Select a session or create new       │
│──────────────────────────────────────│
│ [zellij] my-project                  │
│ [tmux] claude-team                   │
│ [tmux] dev-server                    │
│ [new] Ghostty (plain shell)          │
│ [new] tmux                           │
│ [new] zellij                         │
└──────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Ghostty's &lt;code&gt;command&lt;/code&gt; config launches the script on window open&lt;/li&gt;
&lt;li&gt;The script lists existing zellij and tmux sessions&lt;/li&gt;
&lt;li&gt;fzf lets you pick — attach to an existing session or create a new one&lt;/li&gt;
&lt;li&gt;Esc/Ctrl+C cancels and drops you into a plain shell&lt;/li&gt;
&lt;/ol&gt;
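&lt;p&gt;The core selection logic can be sketched in a few lines of shell (a simplified illustration, not the actual script; the real picker handles more edge cases):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Build the candidate list: existing sessions first, then "new" entries.
# Missing tools fail silently, so the picker degrades gracefully.
list_sessions() {
  tmux ls -F '[tmux] #S' 2>/dev/null
  zellij list-sessions -n 2>/dev/null | awk '{print "[zellij] " $1}'
  printf '%s\n' '[new] Ghostty (plain shell)' '[new] tmux' '[new] zellij'
}

# The real script pipes this list to fzf and dispatches on the choice,
# roughly: choice=$(list_sessions | fzf) || exec "$SHELL"
list_sessions
```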

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;The script is on GitHub: &lt;a href="https://github.com/coa00/ghostty-session-picker" rel="noopener noreferrer"&gt;ghostty-session-picker&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.githubusercontent.com/coa00/ghostty-session-picker/main/ghostty-session-picker &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-o&lt;/span&gt; ~/.local/bin/ghostty-session-picker
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x ~/.local/bin/ghostty-session-picker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add one line to your Ghostty config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="c"&gt;# ~/.config/ghostty/config
&lt;/span&gt;&lt;span class="n"&gt;command&lt;/span&gt; = /&lt;span class="n"&gt;Users&lt;/span&gt;/&lt;span class="n"&gt;you&lt;/span&gt;/.&lt;span class="n"&gt;local&lt;/span&gt;/&lt;span class="n"&gt;bin&lt;/span&gt;/&lt;span class="n"&gt;zellij&lt;/span&gt;-&lt;span class="n"&gt;sessionizer&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To enable working directory restore, add this to &lt;code&gt;~/.zshrc&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;chpwd&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ~/.last_working_dir
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Wrap Up
&lt;/h2&gt;

&lt;p&gt;There's no single "best terminal for Claude Code." The right choice depends on what you're doing — so rather than picking one, I built a system to choose every time.&lt;/p&gt;

&lt;p&gt;Personally, after investing time in tmux configuration, I now primarily use &lt;strong&gt;Ghostty + tmux&lt;/strong&gt;. Beyond Agent Teams split panes, tmux handles session persistence and status line display — consolidating everything into one setup. That said, this comes down to how much you invest in configuration and what you prioritize. The comparison should help you find the right fit for your workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/coa00/ghostty-session-picker" rel="noopener noreferrer"&gt;ghostty-session-picker&lt;/a&gt; — fzf session selector&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ghostty.org/" rel="noopener noreferrer"&gt;Ghostty&lt;/a&gt; — GPU-accelerated terminal&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://iterm2.com/" rel="noopener noreferrer"&gt;iTerm2&lt;/a&gt; — macOS terminal&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/tmux/tmux" rel="noopener noreferrer"&gt;tmux&lt;/a&gt; — terminal multiplexer&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://zellij.dev/" rel="noopener noreferrer"&gt;zellij&lt;/a&gt; — Rust-based terminal workspace&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/junegunn/fzf" rel="noopener noreferrer"&gt;fzf&lt;/a&gt; — command-line fuzzy finder&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; — Anthropic's official CLI&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://code.claude.com/docs/agent-teams" rel="noopener noreferrer"&gt;Agent Teams&lt;/a&gt; — Claude Code's parallel agent feature&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://code.claude.com/docs/status-line" rel="noopener noreferrer"&gt;Status Line&lt;/a&gt; — Claude Code status display&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/tmux-plugins/tmux-resurrect" rel="noopener noreferrer"&gt;tmux-resurrect&lt;/a&gt; — session save/restore plugin&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/tmux-plugins/tmux-continuum" rel="noopener noreferrer"&gt;tmux-continuum&lt;/a&gt; — automatic session saving&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/mkusaka/it2" rel="noopener noreferrer"&gt;&lt;code&gt;it2&lt;/code&gt; CLI&lt;/a&gt; — iTerm2 split-pane CLI tool&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>claudecode</category>
      <category>terminal</category>
      <category>productivity</category>
      <category>ai</category>
    </item>
    <item>
      <title>I Enabled Aurora Data API and My AI Agent Started Querying the Database Directly</title>
      <dc:creator>Kohei Aoki</dc:creator>
      <pubDate>Mon, 02 Mar 2026 11:23:07 +0000</pubDate>
      <link>https://dev.to/coa00/i-enabled-aurora-data-api-and-my-ai-agent-started-querying-the-database-directly-2gl3</link>
      <guid>https://dev.to/coa00/i-enabled-aurora-data-api-and-my-ai-agent-started-querying-the-database-directly-2gl3</guid>
      <description>&lt;p&gt;I see Data API less as an infrastructure improvement and more as &lt;strong&gt;a tool that changes how the team works&lt;/strong&gt;. Whether your AI tools can directly access the database makes an order-of-magnitude difference in debugging and data verification speed.&lt;/p&gt;

&lt;p&gt;We enabled one AWS feature — Aurora Data API — and our AI coding tools could suddenly query the database. No bastion host, no port forwarding, no copy-pasting query results. Here's what that actually looks like in practice, and what you need to enable it.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data API&lt;/strong&gt; is an HTTPS-based SQL execution API built into Aurora. Enabling it costs virtually nothing&lt;/li&gt;
&lt;li&gt;It &lt;strong&gt;complements&lt;/strong&gt; SSM bastion hosts — use both&lt;/li&gt;
&lt;li&gt;Once enabled, &lt;strong&gt;Claude Code / Cursor can query your DB directly&lt;/strong&gt; via shell commands or MCP&lt;/li&gt;
&lt;li&gt;Reduces team learning curve, simplifies bastion operations, and accelerates automation&lt;/li&gt;
&lt;li&gt;If you're running Aurora, there's no reason &lt;em&gt;not&lt;/em&gt; to enable it&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AI Coding Tool Integration: How Data API Changes Team Development
&lt;/h2&gt;

&lt;p&gt;Let me start with the payoff — this is where Data API had the biggest impact for us.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shell-Based Tool Use: Claude Code Runs AWS CLI Directly
&lt;/h3&gt;

&lt;p&gt;With the traditional bastion host setup — a dedicated EC2 instance you SSH into (or tunnel through) to reach your private database — AI tools accessing the database was effectively impossible. SSH port forwarding + MySQL client connection is a workflow designed for humans operating manually.&lt;/p&gt;

&lt;p&gt;With Data API, a single shell command — &lt;code&gt;aws rds-data execute-statement&lt;/code&gt; — executes SQL over HTTPS. That means &lt;strong&gt;Claude Code can run this command through its built-in Bash tool&lt;/strong&gt; (which lets it execute shell commands on your behalf) to query and modify your database directly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example: what Claude Code actually runs&lt;/span&gt;
aws rds-data execute-statement &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--resource-arn&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:rds:ap-northeast-1:xxx:cluster:dev-cluster"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--secret-arn&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:secretsmanager:ap-northeast-1:xxx:secret:dbsecret"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--database&lt;/span&gt; &lt;span class="s2"&gt;"my_database"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--sql&lt;/span&gt; &lt;span class="s2"&gt;"SHOW TABLES"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--profile&lt;/span&gt; dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the AI agent using a &lt;strong&gt;shell tool&lt;/strong&gt; — it constructs and executes CLI commands, then parses the text output. It works, and it unlocks workflows like:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debugging — a multi-step agentic loop:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;"Check the staging &lt;code&gt;users&lt;/code&gt; table for records where &lt;code&gt;status&lt;/code&gt; is null"&lt;/li&gt;
&lt;li&gt;Claude Code queries via Data API → finds 12 orphaned records&lt;/li&gt;
&lt;li&gt;It reasons about the cause, issues a follow-up query on the &lt;code&gt;user_sessions&lt;/code&gt; table&lt;/li&gt;
&lt;li&gt;Identifies a race condition in the cleanup job → suggests a fix with a migration SQL&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Migration authoring:&lt;/strong&gt;&lt;br&gt;
"Look at the current schema in dev and write a migration SQL for this spec" → auto-fetches current table definitions → generates diff SQL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data checks:&lt;/strong&gt;&lt;br&gt;
"How many records are in the production &lt;code&gt;offices&lt;/code&gt; table, and when was the last update?" → instant answer.&lt;/p&gt;

&lt;p&gt;Previously, a developer had to manually query through the bastion, copy-paste the results back to the AI, wait for analysis... &lt;strong&gt;Data API eliminates the human as the bottleneck&lt;/strong&gt; in this loop.&lt;/p&gt;
&lt;h3&gt;
  
  
  MCP-Based Structured Tool Use: Natural Language DB Access
&lt;/h3&gt;

&lt;p&gt;The second integration pattern is more powerful. MCP (Model Context Protocol) lets AI tools like Cursor and Claude Code connect to external data sources through typed, schema-defined interfaces. Instead of parsing free-form CLI output, the agent receives structured data with column names and types — making its actions more reliable and predictable.&lt;/p&gt;

&lt;p&gt;The official &lt;strong&gt;MySQL MCP Server from AWS Labs&lt;/strong&gt; uses Data API internally. Here's how to wire it up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"awslabs.mysql-mcp-server"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"uvx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"awslabs.mysql-mcp-server@latest"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"--resource_arn"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;cluster-arn&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"--secret_arn"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;secret-arn&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"--database"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;db-name&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"--region"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ap-northeast-1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"--readonly"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"True"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"AWS_PROFILE"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"dev"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"AWS_REGION"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ap-northeast-1"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Change &lt;code&gt;ap-northeast-1&lt;/code&gt; to your AWS region.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once configured, you can query from Cursor or Claude Code using natural language:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"How many support tickets came in this month?"&lt;/li&gt;
&lt;li&gt;"Show me all active users created in the last 30 days"&lt;/li&gt;
&lt;li&gt;"Which records were updated in the last 7 days?"&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Guardrails for Production Use
&lt;/h3&gt;

&lt;p&gt;With AI tools querying your database, safety matters. Here's our approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;--readonly True&lt;/code&gt; in MCP config&lt;/strong&gt;: Restricts the MCP server to SELECT-only queries. This is enforced at the MCP server level, but should not be your only line of defense.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read-only DB user&lt;/strong&gt;: Create a dedicated database user with SELECT-only privileges and store those credentials in a separate Secrets Manager secret. Use this secret for AI tool access — not the application's read-write credentials.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM policy scoping&lt;/strong&gt;: Restrict the IAM role to &lt;code&gt;rds-data:ExecuteStatement&lt;/code&gt; and &lt;code&gt;secretsmanager:GetSecretValue&lt;/code&gt; on the specific read-only secret ARN.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudTrail audit trail&lt;/strong&gt;: Every Data API call is logged in CloudTrail, including the SQL statement. This gives you post-hoc observability of everything the agent executed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human-in-the-loop for writes&lt;/strong&gt;: Removing the human from read queries is a productivity win. For write operations, the human-as-bottleneck is actually the safety mechanism. Keep approval workflows for any non-read access.&lt;/li&gt;
&lt;/ul&gt;
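&lt;p&gt;As a sketch, the scoped IAM policy might look like this (the account ID and ARNs are placeholders; tighten further to match your environment):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["rds-data:ExecuteStatement"],
      "Resource": "arn:aws:rds:ap-northeast-1:111111111111:cluster:dev-cluster"
    },
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue"],
      "Resource": "arn:aws:secretsmanager:ap-northeast-1:111111111111:secret:readonly-db-secret-*"
    }
  ]
}
```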

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note on prompt injection&lt;/strong&gt;: If your database contains user-generated content, be aware that query results could include text designed to manipulate the agent's next action. Use parameterized queries through the Data API to mitigate injection risks.&lt;/p&gt;
&lt;/blockquote&gt;
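&lt;p&gt;Concretely, the Data API's built-in parameter binding keeps untrusted values out of the SQL string (ARNs are placeholders, as in the earlier example):&lt;/p&gt;

```shell
# Values are passed via --parameters and bound to :id server-side,
# never spliced into the SQL text
aws rds-data execute-statement \
  --resource-arn "arn:aws:rds:ap-northeast-1:xxx:cluster:dev-cluster" \
  --secret-arn "arn:aws:secretsmanager:ap-northeast-1:xxx:secret:readonly-dbsecret" \
  --database "my_database" \
  --sql "SELECT id, email FROM users WHERE id = :id" \
  --parameters '[{"name": "id", "value": {"longValue": 42}}]'
```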

&lt;h3&gt;
  
  
  Team Learning Curve Drops Dramatically
&lt;/h3&gt;

&lt;p&gt;This one matters if you're leading a team.&lt;/p&gt;

&lt;p&gt;What team members previously had to learn for DB access:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AWS SSO login workflow&lt;/li&gt;
&lt;li&gt;AWS Systems Manager (SSM) Session Manager installation and configuration&lt;/li&gt;
&lt;li&gt;Port forwarding concepts and execution&lt;/li&gt;
&lt;li&gt;MySQL client installation and connection setup&lt;/li&gt;
&lt;li&gt;Retrieving passwords from Secrets Manager&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With Data API + AI tools:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AWS CLI profile setup (most engineers already know this)&lt;/li&gt;
&lt;li&gt;"Show me the data for X" → ask the AI&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Five steps become effectively one.&lt;/strong&gt; For onboarding, it's now "Log in with AWS SSO, then just ask Claude" — and that's it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The barrier for non-infrastructure engineers and frontend developers who just want to "quickly check some data" drops dramatically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reduced Operational Burden
&lt;/h3&gt;

&lt;p&gt;For small teams, the potential to eliminate bastion host maintenance is a significant win.&lt;/p&gt;

&lt;p&gt;Complete bastion elimination depends on your team's needs — bulk data exports and complex investigations still benefit from SSM. But &lt;strong&gt;the majority of day-to-day "let me quickly check this data" or "update a few records" use cases can be handled by Data API&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;As bastion usage drops, you can justify switching from always-on to on-demand — further reducing costs and maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is RDS Data API?
&lt;/h2&gt;

&lt;p&gt;RDS Data API lets you &lt;strong&gt;execute SQL against Aurora via HTTPS REST calls&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Traditional database connections rely on the MySQL wire protocol over TCP/IP. Data API replaces that with standard HTTP requests through the AWS SDK or CLI. No VPC required — which also means faster Lambda cold starts, simpler networking, and no ENI provisioning delays.&lt;/p&gt;

&lt;h3&gt;
  
  
  Supported Engines and Limitations
&lt;/h3&gt;

&lt;p&gt;Data API is &lt;strong&gt;Aurora-only&lt;/strong&gt;. Standard RDS is not supported.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Engine&lt;/th&gt;
&lt;th&gt;Data API Support&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Aurora MySQL (v3.07+)&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Aurora PostgreSQL (v13.12+, 14.9+, 15.4+)&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Standard RDS MySQL / PostgreSQL&lt;/td&gt;
&lt;td&gt;Not supported&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Key limitations:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Constraint&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Response size&lt;/td&gt;
&lt;td&gt;1 MB max per request&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Row size&lt;/td&gt;
&lt;td&gt;64 KB max per row&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Timeout&lt;/td&gt;
&lt;td&gt;45 seconds max per request&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-statement&lt;/td&gt;
&lt;td&gt;Not supported on MySQL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Target&lt;/td&gt;
&lt;td&gt;Writer instance only&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The 1 MB response limit means Data API isn't suited for bulk SELECTs or data exports. That's where the SSM bastion still earns its keep.&lt;/p&gt;
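&lt;p&gt;If a query might blow past the 1 MB ceiling, keyset pagination keeps each request small. A hypothetical sketch (column and table names are illustrative):&lt;/p&gt;

```typescript
// Page through a large table in id order so each Data API response stays
// under the 1 MB limit; pass the last id of the previous page as :afterId.
function nextPageSql(table: string, pageSize: number): string {
  // The table name must come from your own code (an allowlist), never from
  // user input; only the cursor value is bound as a parameter.
  return `SELECT * FROM ${table} WHERE id > :afterId ORDER BY id LIMIT ${pageSize}`;
}
```

&lt;p&gt;Still, once you're hand-rolling pagination for an export, the bastion route is usually less work.&lt;/p&gt;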

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note on multi-statement&lt;/strong&gt;: If your CI/CD migration tool (Flyway, Liquibase, etc.) uses multi-statement SQL files, you'll need to split statements or use a wrapper for MySQL.&lt;/p&gt;
&lt;/blockquote&gt;
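&lt;p&gt;A naive splitter covers simple migration files. This sketch ignores semicolons inside string literals and stored-procedure bodies, so treat it as a stopgap, not a SQL parser:&lt;/p&gt;

```typescript
// Split a migration file into single statements for MySQL, where the
// Data API rejects multi-statement SQL. Semicolons inside string literals
// or procedure bodies are NOT handled; those files need a real parser.
function splitStatements(sqlFile: string): string[] {
  return sqlFile
    .split(';')
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}
```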

&lt;h3&gt;
  
  
  Pricing
&lt;/h3&gt;

&lt;p&gt;Enabling is free. Usage costs &lt;strong&gt;$0.35 per million requests&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;1,000 SQL executions per month = &lt;strong&gt;$0.00035&lt;/strong&gt;. Not exactly a budget consideration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data API vs SSM Session Manager
&lt;/h2&gt;

&lt;p&gt;This was the decision point I wrestled with most. The answer: &lt;strong&gt;it's not either/or — it's both.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Case Breakdown
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;th&gt;Data API&lt;/th&gt;
&lt;th&gt;SSM Bastion&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Lambda → DB operations&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Ideal&lt;/strong&gt; (no VPC needed)&lt;/td&gt;
&lt;td&gt;Not possible&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CI/CD migrations&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Good fit&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Possible but complex&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scheduled batch scripts&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Good fit&lt;/strong&gt; (IAM auth)&lt;/td&gt;
&lt;td&gt;Requires session management&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Application integration&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Ideal&lt;/strong&gt; (pure HTTP)&lt;/td&gt;
&lt;td&gt;Poor fit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Manual data investigation&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Ideal&lt;/strong&gt; (GUI tools)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bulk data export&lt;/td&gt;
&lt;td&gt;Poor fit (1 MB limit)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Ideal&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Emergency manual UPDATEs&lt;/td&gt;
&lt;td&gt;Possible but clunky&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Ideal&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Operational Cost Comparison
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Factor&lt;/th&gt;
&lt;th&gt;Data API&lt;/th&gt;
&lt;th&gt;SSM Bastion&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Initial setup&lt;/td&gt;
&lt;td&gt;Near-zero (one console click)&lt;/td&gt;
&lt;td&gt;EC2 + SSM + security group config&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Monthly cost&lt;/td&gt;
&lt;td&gt;~$0&lt;/td&gt;
&lt;td&gt;$3–8/mo (t4g.nano always-on)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maintenance&lt;/td&gt;
&lt;td&gt;None (fully managed)&lt;/td&gt;
&lt;td&gt;OS patches, SSM Agent updates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Audit logging&lt;/td&gt;
&lt;td&gt;CloudTrail auto-records all queries&lt;/td&gt;
&lt;td&gt;Dual management: CloudTrail + DB audit logs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Connection management&lt;/td&gt;
&lt;td&gt;None (no persistent connections)&lt;/td&gt;
&lt;td&gt;max_connections tuning required&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The annual cost of a bastion host ($36–96) looks small on paper, but factor in &lt;strong&gt;the human cost of OS patching, SSM Agent updates, and recovery when the instance goes down&lt;/strong&gt; — it adds up fast, especially for small teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Comparison
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Factor&lt;/th&gt;
&lt;th&gt;Data API&lt;/th&gt;
&lt;th&gt;SSM Bastion&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Attack surface&lt;/td&gt;
&lt;td&gt;HTTPS (IAM-protected)&lt;/td&gt;
&lt;td&gt;SSM only (no inbound ports)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Credential management&lt;/td&gt;
&lt;td&gt;Secrets Manager (auto-rotation)&lt;/td&gt;
&lt;td&gt;DB password stored locally&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Access control&lt;/td&gt;
&lt;td&gt;IAM policies control API calls&lt;/td&gt;
&lt;td&gt;Dual management: SSM permissions + DB user permissions&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Data API delegates credentials entirely to Secrets Manager, meaning &lt;strong&gt;developers never need to know the DB password&lt;/strong&gt;. That's a significant security win.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Enable It
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Requirement&lt;/th&gt;
&lt;th&gt;Details&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DB engine&lt;/td&gt;
&lt;td&gt;Aurora MySQL v3.07+ or Aurora PostgreSQL v13.12+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Instance class&lt;/td&gt;
&lt;td&gt;Any class except T instances (for provisioned; Serverless v2 is unaffected)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Secrets Manager&lt;/td&gt;
&lt;td&gt;DB credentials must be stored&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you're on standard RDS, Data API isn't available. Whether to migrate to Aurora for Data API alone depends on the other benefits (Serverless v2 autoscaling, etc.).&lt;/p&gt;

&lt;h3&gt;
  
  
  CLI
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# For Serverless v2 / Provisioned clusters&lt;/span&gt;
aws rds enable-http-endpoint &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--resource-arn&lt;/span&gt; &amp;lt;cluster-arn&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--profile&lt;/span&gt; &amp;lt;profile&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: For Serverless v2 / Provisioned clusters, use &lt;code&gt;enable-http-endpoint&lt;/code&gt;. The &lt;code&gt;modify-db-cluster --enable-http-endpoint&lt;/code&gt; command is for Serverless v1 only (now end-of-life). The docs are confusing on this — watch out.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws rds-data execute-statement &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--resource-arn&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;cluster-arn&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--secret-arn&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;secret-arn&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--database&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;db-name&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--sql&lt;/span&gt; &lt;span class="s2"&gt;"SELECT 1 AS test"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--profile&lt;/span&gt; &amp;lt;profile&amp;gt;

&lt;span class="c"&gt;# → {"records": [[{"longValue": 1}]], "numberOfRecordsUpdated": 0}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
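&lt;p&gt;Note the response shape: each cell comes back as a typed wrapper such as &lt;code&gt;{"longValue": 1}&lt;/code&gt;. A small helper can flatten a row into plain values:&lt;/p&gt;

```typescript
// Each Data API cell is a typed wrapper like { longValue: 1 } or
// { stringValue: 'x' }; flatten one row of wrappers into plain JS values.
type Cell = { [typeKey: string]: unknown };

function flattenRow(row: Cell[]): unknown[] {
  return row.map((cell) => ('isNull' in cell ? null : Object.values(cell)[0]));
}

// flattenRow([{ longValue: 1 }, { stringValue: 'ok' }]) -> [1, 'ok']
```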



&lt;h3&gt;
  
  
  Preventing CDK Drift
&lt;/h3&gt;

&lt;p&gt;If you enable via CLI without updating your CDK code, the next deploy might reset it to &lt;code&gt;false&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cluster&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;rds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DatabaseCluster&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Cluster&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;rds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DatabaseClusterEngine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;auroraMysql&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;rds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AuroraMysqlEngineVersion&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VER_3_08_0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="c1"&gt;// ... existing config ...&lt;/span&gt;
  &lt;span class="na"&gt;enableDataApi&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// ← Add this&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Always update your CDK code alongside the CLI enablement. If CDK drift reverts the setting, any MCP servers or automation scripts that depend on Data API will break simultaneously.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Risk Assessment
&lt;/h3&gt;

&lt;p&gt;Enabling Data API carries virtually zero risk.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No impact on existing applications or database connections&lt;/li&gt;
&lt;li&gt;Even when enabled, access requires IAM permissions&lt;/li&gt;
&lt;li&gt;Can be disabled instantly with a single command&lt;/li&gt;
&lt;li&gt;Cost is effectively zero&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The only caveat: &lt;strong&gt;prevent CDK drift&lt;/strong&gt;. If you forget to add &lt;code&gt;enableDataApi: true&lt;/code&gt; to your code, the next deploy reverts the setting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Should You Enable It?
&lt;/h2&gt;

&lt;p&gt;If you're running Aurora, enabling Data API is one of those &lt;strong&gt;"no reason not to" improvements&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Near-zero risk and cost to enable&lt;/li&gt;
&lt;li&gt;Complements SSM bastion for full use-case coverage&lt;/li&gt;
&lt;li&gt;AI coding tool integration accelerates the entire team&lt;/li&gt;
&lt;li&gt;Directly reduces learning curve and operational overhead&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;"You don't need to know the DB password. You don't need to set up port forwarding. You just ask Claude."&lt;/strong&gt; — That's the new onboarding experience.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you have Aurora clusters that haven't enabled it yet, start with your dev environment. One command, five seconds.&lt;/p&gt;

&lt;p&gt;Are you already using Data API with AI tools? Or still running bastion hosts for everything? I'd love to hear what your team's setup looks like — drop a comment below.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/09/amazon-aurora-mysql-rds-data-api/" rel="noopener noreferrer"&gt;Amazon Aurora MySQL now supports RDS Data API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html" rel="noopener noreferrer"&gt;Using the Amazon RDS Data API - Amazon Aurora&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.enabling.html" rel="noopener noreferrer"&gt;Enabling the Amazon RDS Data API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.limitations.html" rel="noopener noreferrer"&gt;Limitations for the Amazon RDS Data API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/awslabs/mcp/blob/main/src/mysql-mcp-server/README.md" rel="noopener noreferrer"&gt;AWS Labs MySQL MCP Server&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>ai</category>
      <category>claudecode</category>
    </item>
    <item>
      <title>How I Solved DynamoDB + Amplify Search Problem with OpenSearch for $27/Month Instead of Migrating to Aurora</title>
      <dc:creator>Kohei Aoki</dc:creator>
      <pubDate>Sat, 28 Feb 2026 09:05:22 +0000</pubDate>
      <link>https://dev.to/coa00/how-i-solved-dynamodbs-search-problem-with-opensearch-for-27month-instead-of-migrating-to-aurora-300f</link>
      <guid>https://dev.to/coa00/how-i-solved-dynamodbs-search-problem-with-opensearch-for-27month-instead-of-migrating-to-aurora-300f</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: One OpenSearch instance. $27/month. DynamoDB search and multi-tenancy — solved.&lt;/p&gt;

&lt;h2&gt;
  
  
  We Were About to Migrate to Aurora
&lt;/h2&gt;

&lt;p&gt;I had the Aurora estimate ready. I was planning to rewrite our Amplify Data models into SQL schemas.&lt;/p&gt;

&lt;p&gt;We had two problems. The first was &lt;strong&gt;search&lt;/strong&gt; — once filter conditions exceeded 5 fields, DynamoDB's Query/Scan couldn't keep up. Full-text search was simply impossible. The second was &lt;strong&gt;multi-tenancy&lt;/strong&gt; — we needed to safely isolate data across multiple products and multiple environments (staging / prod / sandbox). With Amplify Gen2's sandbox spinning up per-developer environments, naively adding a search engine would multiply instances and blow up costs.&lt;/p&gt;

&lt;p&gt;In the end, &lt;strong&gt;we didn't migrate to Aurora&lt;/strong&gt;. We added OpenSearch as a "search layer" alongside DynamoDB, and used index naming conventions to consolidate multiple products and environments into a single instance. The additional cost: &lt;strong&gt;$27/month&lt;/strong&gt; — a 94% reduction from the $432/month we would have spent on per-environment instances. Both search and multi-tenancy, solved.&lt;/p&gt;

&lt;p&gt;This article shares how we implemented this in a real-world real estate tech product running on Amplify Gen2.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pain Points of DynamoDB-Only
&lt;/h2&gt;

&lt;p&gt;We're running a real estate tech product built on Amplify Gen2. Property and lot data lives in DynamoDB, with search and listing features for users. Multiple products share the same AWS account, each with staging, prod, and per-developer sandbox environments.&lt;/p&gt;

&lt;p&gt;Early on, DynamoDB Query with GSIs (Global Secondary Indexes) was fine. But as the product grew, we hit walls on both &lt;strong&gt;search&lt;/strong&gt; and &lt;strong&gt;multi-tenancy&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Search Problems
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Complex Filter Conditions Don't Scale
&lt;/h4&gt;

&lt;p&gt;Real estate search means combining "area + price range + floor size + walking distance to station + building age" — easily 5+ conditions. DynamoDB Query only filters on partition key and sort key.&lt;/p&gt;

&lt;p&gt;You can add FilterExpression for extra conditions, but &lt;strong&gt;filters apply after the Query runs&lt;/strong&gt;. So you might fetch 1,000 records just to filter down to 10. That's wasteful.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. No Full-Text Search
&lt;/h4&gt;

&lt;p&gt;Searching for "Shibuya office pet-friendly" is impossible in DynamoDB. The &lt;code&gt;contains&lt;/code&gt; operator exists but requires a full Scan — not practical at scale.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Inflexible Sorting and Pagination
&lt;/h4&gt;

&lt;p&gt;DynamoDB sorting depends on the sort key. Switching between "sort by price," "sort by size," and "sort by newest" each requires a separate GSI. The GSI limit is 20, but once you factor in filter combinations, it's nowhere near enough.&lt;/p&gt;

&lt;p&gt;Pagination is cursor-based only (&lt;code&gt;ExclusiveStartKey&lt;/code&gt;) — you can't jump directly to page 3.&lt;/p&gt;
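&lt;p&gt;OpenSearch, by contrast, takes a plain offset (&lt;code&gt;from&lt;/code&gt;/&lt;code&gt;size&lt;/code&gt;), so jumping to an arbitrary page is one multiplication. A minimal sketch:&lt;/p&gt;

```typescript
// DynamoDB pagination is cursor-only; OpenSearch's from/size lets you jump
// straight to page N. (Deep pages are capped by index.max_result_window,
// 10,000 by default.)
function pageToOffset(page: number, pageSize: number): { from: number; size: number } {
  return { from: (page - 1) * pageSize, size: pageSize };
}

// Page 3 at 20 results per page:
// pageToOffset(3, 20) -> { from: 40, size: 20 }
```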

&lt;h3&gt;
  
  
  Multi-Tenancy Problems
&lt;/h3&gt;

&lt;h4&gt;
  
  
  4. Data Isolation Across Products and Environments
&lt;/h4&gt;

&lt;p&gt;Our products share Amplify backend patterns. DynamoDB naturally isolates data by table, so cross-product contamination isn't an issue there.&lt;/p&gt;

&lt;p&gt;The problem emerges &lt;strong&gt;when you add a search engine&lt;/strong&gt;. If you create a separate search instance per product and environment, instances multiply fast: 2 products × 3 environments (staging / prod / sandbox) = 6 instances, and with 5 developers the per-developer sandboxes add another 10. That's 16 instances at $27 each: &lt;strong&gt;$432/month&lt;/strong&gt; — just for adding search.&lt;/p&gt;

&lt;h4&gt;
  
  
  5. Amplify Sandbox Proliferation
&lt;/h4&gt;

&lt;p&gt;Amplify Gen2's &lt;code&gt;npx ampx sandbox&lt;/code&gt; creates isolated environments per developer. This works perfectly for DynamoDB, but it's devastating for always-on services like OpenSearch. Each sandbox creates an instance, takes minutes to boot, and if someone forgets to shut it down, costs pile up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keep DynamoDB, Add a Search Layer
&lt;/h2&gt;

&lt;p&gt;"If search is the problem, just migrate to an RDB" — we considered it seriously. Aurora Serverless v2 is the strongest managed RDB option on AWS. But even with Aurora, &lt;strong&gt;multi-tenancy still needs separate design work&lt;/strong&gt;, and we didn't see enough upside to justify abandoning DynamoDB + Amplify's developer experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost Comparison
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;DynamoDB + OpenSearch&lt;/th&gt;
&lt;th&gt;Aurora Serverless v2&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Monthly cost (min config)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;DynamoDB: pay-per-use (~$5) + OpenSearch t3.small: &lt;strong&gt;~$27&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Min 0.5 ACU: &lt;strong&gt;~$60–70&lt;/strong&gt; (Tokyo region)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$30–50/month&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$60–100/month&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Storage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;DynamoDB: $0.25/GB + OpenSearch: $0.122/GB (gp3)&lt;/td&gt;
&lt;td&gt;$0.12/GB (Aurora storage)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scaling&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;DynamoDB: auto-scale, OpenSearch: fixed instance&lt;/td&gt;
&lt;td&gt;ACU-based auto-scaling&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Aurora Serverless v2 charges even at minimum 0.5 ACU. In Tokyo region, that's ~$0.20/hour × 0.5 ACU × 730 hours = &lt;strong&gt;~$73/month&lt;/strong&gt; — even when idle.&lt;/p&gt;

&lt;p&gt;DynamoDB is pay-per-use (nearly zero when idle), and OpenSearch t3.small.search runs at &lt;strong&gt;~$27/month&lt;/strong&gt;. Combined, it's less than half of Aurora.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amplify Compatibility
&lt;/h3&gt;

&lt;p&gt;Amplify Gen2 uses DynamoDB as its default data store. Define your schema with &lt;code&gt;defineData()&lt;/code&gt; and GraphQL APIs + DynamoDB tables are auto-generated. This developer experience is excellent.&lt;/p&gt;

&lt;p&gt;Migrating to Aurora means giving that up: self-managed SQL schemas, migrations, ORM configuration. &lt;strong&gt;Keeping DynamoDB as the primary data store and adding OpenSearch as a search layer minimizes architectural changes.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Search Performance
&lt;/h3&gt;

&lt;p&gt;OpenSearch is purpose-built for search: full-text search, faceted search, geo search, scoring — capabilities Aurora's &lt;code&gt;LIKE&lt;/code&gt; queries and full-text indexes can't match. For complex real estate search with many filter combinations, OpenSearch is the clear winner.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture: DynamoDB Streams + Lambda + OpenSearch
&lt;/h2&gt;

&lt;p&gt;The architecture is straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[DynamoDB] → DynamoDB Streams → [Lambda] → [OpenSearch]
                                                ↑
                                        Search API reads from here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Data Flow
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Write&lt;/strong&gt;: App writes to DynamoDB as before (Amplify Data models unchanged)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sync&lt;/strong&gt;: DynamoDB Streams detects changes and triggers a Lambda function&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Index&lt;/strong&gt;: Lambda upserts/deletes documents in OpenSearch&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search&lt;/strong&gt;: Search API queries OpenSearch and returns results&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The key benefit: &lt;strong&gt;zero changes to the existing write path&lt;/strong&gt;. DynamoDB CRUD goes through Amplify's GraphQL API as always — only search reads from OpenSearch.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sync Lambda (Conceptual)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// DynamoDB Streams → Lambda → OpenSearch&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DynamoDBStreamEvent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;record&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Records&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tableName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;extractTableName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eventSourceARN&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;indexName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;tableToIndex&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tableName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Table name → index name mapping&lt;/span&gt;

    &lt;span class="k"&gt;switch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eventName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;INSERT&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;MODIFY&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;opensearchClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;index&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
          &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;indexName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dynamodb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Keys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;S&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;unmarshall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dynamodb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NewImage&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;REMOVE&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;opensearchClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
          &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;indexName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dynamodb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Keys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;S&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
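&lt;p&gt;The &lt;code&gt;tableToIndex&lt;/code&gt; mapping above is where the index naming convention lives. A hypothetical sketch (the Amplify-generated table-name format shown here is an assumption; check what your own sandbox actually creates):&lt;/p&gt;

```typescript
// Map an Amplify-generated DynamoDB table name to the product/env-scoped
// OpenSearch index name. Amplify table names typically start with the model
// name; 'Property-abc123-NONE' below is an assumed example, not a real table.
function tableToIndex(tableName: string, product: string, env: string): string {
  const model = tableName.split('-')[0].toLowerCase();
  return `${product}-${env}-${model}`;
}

// tableToIndex('Property-abc123-NONE', 'projectA', 'prod') -> 'projectA-prod-property'
```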



&lt;h3&gt;
  
  
  Search Query: Before / After
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Before: DynamoDB (multi-condition filtering)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// DynamoDB: Partition key + FilterExpression workaround&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;dynamoClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;TableName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Properties&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;KeyConditionExpression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#area = :area&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;FilterExpression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#price BETWEEN :minPrice AND :maxPrice AND #size &amp;gt;= :minSize&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;ExpressionAttributeValues&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;:area&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Shibuya&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;:minPrice&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5000000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;:maxPrice&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10000000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;:minSize&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="c1"&gt;// → Fetches 1,000 records, filters down to 10. Inefficient.&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;After: OpenSearch (same conditions + full-text search + sort)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// OpenSearch: Combine any conditions + full-text + sort in one query&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;opensearchClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;projectA-prod-properties&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;must&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pet-friendly office&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;  &lt;span class="c1"&gt;// Full-text search&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;term&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;area&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Shibuya&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;range&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;gte&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5000000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;lte&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10000000&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;range&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;gte&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;asc&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;  &lt;span class="c1"&gt;// Sort by any field&lt;/span&gt;
    &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;         &lt;span class="c1"&gt;// Pagination&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="c1"&gt;// → Returns exactly 20 matching records. 100–200ms response time.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Full-text search + multi-condition filtering + sorting + pagination — all in one query. This was impossible with DynamoDB alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Gotcha: Japanese Full-Text Search Doesn't Work by Default
&lt;/h2&gt;

&lt;p&gt;The first thing that bit us after adding OpenSearch: &lt;strong&gt;Japanese full-text search returned zero results&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Searching for "Shibuya office" returned nothing. The root cause: OpenSearch's default analyzer (standard analyzer) doesn't support Japanese morphological analysis. "渋谷オフィス" (Shibuya Office) was treated as a single token, so partial matches failed.&lt;/p&gt;

&lt;p&gt;The fix: configure a custom analyzer built on the &lt;strong&gt;kuromoji tokenizer&lt;/strong&gt; when creating the index.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"settings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"analysis"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"analyzer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"japanese_analyzer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"custom"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"tokenizer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"kuromoji_tokenizer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"filter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"kuromoji_baseform"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"lowercase"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mappings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"analyzer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"japanese_analyzer"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The moment we added this config, Japanese keyword search started working correctly. &lt;strong&gt;OpenSearch's Japanese search doesn't work out of the box&lt;/strong&gt; — not knowing this was our single biggest time sink.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note for English-language deployments&lt;/strong&gt;: If your data is English-only, the default standard analyzer works fine. The kuromoji tokenizer is specifically needed for CJK (Chinese/Japanese/Korean) text segmentation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Operational Considerations
&lt;/h2&gt;

&lt;p&gt;DynamoDB Streams + Lambda sync comes with a few operational caveats:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sync latency&lt;/strong&gt;: Streams → Lambda → OpenSearch propagation typically takes a few hundred ms to a few seconds. For screens requiring real-time data, add a fallback that reads directly from DynamoDB immediately after writes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda error retries&lt;/strong&gt;: If the Streams-triggered Lambda fails repeatedly, records can expire and be lost. Set up a DLQ (Dead Letter Queue) to catch failures. &lt;strong&gt;Note: DynamoDB Streams has a 24-hour retention window&lt;/strong&gt; — if your Lambda is down for longer than that, events are permanently lost&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Index rebuilds&lt;/strong&gt;: When you change OpenSearch mappings, existing data needs re-indexing. Keep a script ready that does a full DynamoDB Scan → OpenSearch Bulk Insert&lt;/li&gt;
&lt;/ul&gt;
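&lt;p&gt;For the re-index caveat above, the core of such a script is turning each page of Scan results into an OpenSearch &lt;code&gt;_bulk&lt;/code&gt; NDJSON body. A minimal sketch (it assumes each item carries an &lt;code&gt;id&lt;/code&gt; attribute; that detail and the index name are illustrative, not from the article):&lt;/p&gt;

```typescript
// Sketch: one page of DynamoDB items -> one OpenSearch _bulk request body.
// The `id` attribute used as the document _id is an assumption.
interface Item {
  id: string;
  [key: string]: unknown;
}

function toBulkBody(indexName: string, items: Item[]): string {
  const lines: string[] = [];
  for (const item of items) {
    // _bulk alternates an action line and the document itself
    lines.push(JSON.stringify({ index: { _index: indexName, _id: item.id } }));
    lines.push(JSON.stringify(item));
  }
  return lines.join("\n") + "\n"; // _bulk bodies must end with a newline
}

const body = toBulkBody("projectA-prod-properties", [
  { id: "p1", area: "Shibuya", price: 6500000 },
]);
// body now holds two NDJSON lines: the action line, then the document
```

&lt;p&gt;The full script would loop over paginated &lt;code&gt;Scan&lt;/code&gt; calls and POST each body to the domain's &lt;code&gt;_bulk&lt;/code&gt; endpoint.&lt;/p&gt;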

&lt;h2&gt;
  
  
  Multi-Tenancy Design: 1 Instance × Index Isolation
&lt;/h2&gt;

&lt;p&gt;Search was solved. Next: multi-tenancy — how to safely isolate multiple products × multiple environments while keeping costs down.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution: Share One Instance, Isolate by Index
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Instead of creating an OpenSearch instance per environment, we share a single instance across all environments and isolate by index name.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OpenSearch Domain (1 instance)
├── projectA-stg-properties     ← Product A staging properties
├── projectA-stg-sections       ← Product A staging lots
├── projectA-prod-properties    ← Product A prod properties
├── projectA-prod-sections      ← Product A prod lots
├── projectB-stg-properties     ← Product B staging properties
├── projectB-prod-properties    ← Product B prod properties
└── sandbox-{user}-properties   ← Sandbox (only when needed)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key decisions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No OpenSearch resources for sandbox&lt;/strong&gt; — In CDK/Amplify backend definitions, skip OpenSearch resource creation for sandbox environments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sandbox Lambda points to the shared stg OpenSearch&lt;/strong&gt; — Inject the OpenSearch endpoint and index name via environment variables. Sandbox either uses the stg instance or falls back to direct DynamoDB reads&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Index names include project and environment&lt;/strong&gt; — &lt;code&gt;{project}-{env}-{entity}&lt;/code&gt; naming convention prevents collisions&lt;/li&gt;
&lt;/ul&gt;
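&lt;p&gt;The naming rule itself is worth centralizing in one helper so no code path hand-builds an index name. A minimal sketch (the env var usage in the comment is hypothetical; the article only says endpoint and index name are injected via environment variables):&lt;/p&gt;

```typescript
// {project}-{env}-{entity} naming convention from the article
function indexName(project: string, env: string, entity: string): string {
  return [project, env, entity].join("-");
}

// In the sync Lambda, project/env would come from injected env vars, e.g.:
// indexName(process.env.PROJECT ?? "projectA", process.env.ENV ?? "stg", "properties")
const idx = indexName("projectA", "prod", "properties");
// idx is "projectA-prod-properties"
```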

&lt;p&gt;This approach keeps &lt;strong&gt;OpenSearch instances at the absolute minimum (1 shared)&lt;/strong&gt; while safely isolating data across multiple products and environments. What would have been $432/month with 16 instances is now &lt;strong&gt;$27/month&lt;/strong&gt; with one.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Gained
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Search
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Before (DynamoDB only)&lt;/th&gt;
&lt;th&gt;After (DynamoDB + OpenSearch)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Filtering&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;GSI + FilterExpression (2–3 conditions max)&lt;/td&gt;
&lt;td&gt;Combine any number of conditions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Full-text search&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Not possible&lt;/td&gt;
&lt;td&gt;Japanese morphological analysis supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Sorting&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Sort key dependent (needs GSI)&lt;/td&gt;
&lt;td&gt;Sort by any field&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Pagination&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cursor-based only&lt;/td&gt;
&lt;td&gt;from/size for direct page jumps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Response time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Scan could take seconds&lt;/td&gt;
&lt;td&gt;Complex queries in 100–200ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Multi-Tenancy
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Before (separate instances)&lt;/th&gt;
&lt;th&gt;After (1 instance × index isolation)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Instances&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1 per product × environment&lt;/td&gt;
&lt;td&gt;1 total&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Monthly cost&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Up to $432/month (16 instances)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$27/month (1 instance)&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Sandbox startup&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Wait for OpenSearch boot (minutes)&lt;/td&gt;
&lt;td&gt;Skip (uses shared stg instance)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data isolation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Instance-level&lt;/td&gt;
&lt;td&gt;Index naming convention (logical)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Getting Started: 5 Steps
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create an OpenSearch domain&lt;/strong&gt; — t3.small.search, single instance. ~$27/month&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create indexes with Japanese analyzer&lt;/strong&gt; — Configure kuromoji tokenizer. Skip this and Japanese search won't work&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable DynamoDB Streams&lt;/strong&gt; — Set &lt;code&gt;NEW_AND_OLD_IMAGES&lt;/code&gt; on target tables&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy sync Lambda&lt;/strong&gt; — Streams-triggered Lambda that upserts/deletes to OpenSearch. Don't forget the DLQ&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add a search API&lt;/strong&gt; — Query OpenSearch from a new endpoint. Keep existing DynamoDB CRUD APIs untouched&lt;/li&gt;
&lt;/ol&gt;
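&lt;p&gt;For step 4, the heart of the sync Lambda is deciding, per Streams record, whether to upsert or delete. A simplified sketch (the record shape is flattened for illustration; a real handler would first unmarshall the Streams &lt;code&gt;AttributeValue&lt;/code&gt; maps, e.g. with &lt;code&gt;@aws-sdk/util-dynamodb&lt;/code&gt;):&lt;/p&gt;

```typescript
// Simplified Streams record: eventName plus already-unmarshalled images
interface StreamRecord {
  eventName: "INSERT" | "MODIFY" | "REMOVE";
  keys: { id: string };
  newImage?: object;
}

type SyncAction =
  | { op: "upsert"; id: string; doc: object }
  | { op: "delete"; id: string };

function routeRecord(r: StreamRecord): SyncAction {
  if (r.eventName === "REMOVE") {
    return { op: "delete", id: r.keys.id };
  }
  // INSERT and MODIFY both become upserts, so retried batches stay idempotent
  return { op: "upsert", id: r.keys.id, doc: r.newImage ?? {} };
}
```

&lt;p&gt;Treating INSERT and MODIFY identically is what keeps Lambda retries safe: replaying a batch just re-applies the same upserts.&lt;/p&gt;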

&lt;p&gt;When your GSI count exceeds 5, or your FilterExpression hit rate drops below 10% — that's when to consider adding OpenSearch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;We were ready to migrate to Aurora. We had the estimate. But in the end, &lt;strong&gt;we didn't need to abandon DynamoDB&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;One OpenSearch instance solved both search and multi-tenancy. Writes go to DynamoDB, search reads from OpenSearch. DynamoDB Streams + Lambda keeps them in sync. Index naming (&lt;code&gt;{project}-{env}-{entity}&lt;/code&gt;) consolidates multiple products and environments into a single instance. Amplify Gen2's DynamoDB-centric developer experience stays intact, and search capabilities scale independently.&lt;/p&gt;

&lt;p&gt;The beauty of this "patch the weakness" approach: &lt;strong&gt;you don't have to rewrite your architecture all at once&lt;/strong&gt;. Start with DynamoDB. When search requirements grow, add OpenSearch. When products multiply, add indexes. At $27/month, there was no reason to migrate to Aurora.&lt;/p&gt;

&lt;h3&gt;
  
  
  When NOT to Use This Approach
&lt;/h3&gt;

&lt;p&gt;This pattern isn't universal. Consider a full RDB migration if you need complex relational queries across entities, ACID transactions spanning multiple tables, or your search requirements involve heavy aggregations that OpenSearch alone can't serve back to the write path. If your product is SQL-first from the start, don't force DynamoDB into the picture just to save a few dollars.&lt;/p&gt;

&lt;p&gt;But if you're already running DynamoDB and your pain is search — try patching the weakness with OpenSearch before rewriting everything. The best architecture isn't always the newest one. Sometimes it's the cheapest complement to what you already have.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/opensearch-service/pricing/" rel="noopener noreferrer"&gt;Amazon OpenSearch Service Pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/rds/aurora/pricing/" rel="noopener noreferrer"&gt;Amazon Aurora Pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html" rel="noopener noreferrer"&gt;Change Data Capture with DynamoDB Streams&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.amplify.aws/" rel="noopener noreferrer"&gt;AWS Amplify Gen2 Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>dynamodb</category>
      <category>architecture</category>
    </item>
    <item>
      <title>How I Built a Mobile Approval System for Claude Code So I Can Finally Leave My Desk</title>
      <dc:creator>Kohei Aoki</dc:creator>
      <pubDate>Sat, 28 Feb 2026 06:01:30 +0000</pubDate>
      <link>https://dev.to/coa00/how-i-built-a-mobile-approval-system-for-claude-code-so-i-can-finally-leave-my-desk-1ida</link>
      <guid>https://dev.to/coa00/how-i-built-a-mobile-approval-system-for-claude-code-so-i-can-finally-leave-my-desk-1ida</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: Claude Code hooks + ntfy.sh = approve/deny permissions from your phone. 60 lines of bash, 3-minute setup, &lt;a href="https://github.com/coa00/claude-push" rel="noopener noreferrer"&gt;open source&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem Every AI Coding Agent User Faces
&lt;/h2&gt;

&lt;p&gt;I came back with my coffee and found Claude Code had been frozen for 15 minutes.&lt;/p&gt;

&lt;p&gt;As AI coding agents gain autonomy — writing code, running builds, modifying files — the question of human oversight becomes critical. You want the agent to keep working, but you also want to know what it's doing. The permission prompt is the checkpoint, but it's also the bottleneck.&lt;/p&gt;

&lt;p&gt;Sure, you can add commands to the &lt;code&gt;allowlist&lt;/code&gt; to reduce prompts, but blanket-approving unknown commands and file writes is risky. On the other hand, staying glued to your terminal isn't realistic either.&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;&lt;a href="https://github.com/coa00/claude-push" rel="noopener noreferrer"&gt;claude-push&lt;/a&gt;&lt;/strong&gt; — an async human-in-the-loop approval layer for Claude Code. It uses PermissionRequest hooks and &lt;a href="https://ntfy.sh" rel="noopener noreferrer"&gt;ntfy.sh&lt;/a&gt; to send Allow/Deny push notifications straight to your phone. Open source, 3-minute setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Dilemma: Stay at Your Desk or Allow Everything
&lt;/h2&gt;

&lt;p&gt;When you delegate code generation and refactoring to Claude Code, you'll inevitably see prompts like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Claude wants to run: rm -rf dist &amp;amp;&amp;amp; npm run build
Allow? (y/n)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every time this pops up, you have to go back to the terminal and hit &lt;code&gt;y&lt;/code&gt;. If you're deep in focus (or just grabbing coffee), you miss it and Claude Code sits there doing nothing.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Pros&lt;/th&gt;
&lt;th&gt;Cons&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Manual approval in terminal&lt;/td&gt;
&lt;td&gt;Safe&lt;/td&gt;
&lt;td&gt;Can't leave your desk&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Allowlist everything&lt;/td&gt;
&lt;td&gt;Convenient&lt;/td&gt;
&lt;td&gt;Unknown commands get through&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Mobile push notifications&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Away from desk + safe&lt;/td&gt;
&lt;td&gt;Requires setup&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;claude-push&lt;/code&gt; makes option 3 a reality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prior Art
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/konsti-web/claude_push" rel="noopener noreferrer"&gt;konsti-web/claude_push&lt;/a&gt; tackled this same problem, but it's Windows + PowerShell + keystroke injection — doesn't work on macOS or Linux. I had the same pain point, so I rebuilt the concept from scratch using &lt;strong&gt;bash + PermissionRequest hooks&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works: PermissionRequest Hook + ntfy.sh
&lt;/h2&gt;

&lt;p&gt;Claude Code has a &lt;a href="https://docs.anthropic.com/en/docs/claude-code/hooks" rel="noopener noreferrer"&gt;Hooks&lt;/a&gt; system that lets you run external scripts on specific events. By registering a hook on the &lt;code&gt;PermissionRequest&lt;/code&gt; event, you can intercept the permission prompt and return your own decision.&lt;/p&gt;

&lt;p&gt;Here's the full flow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Claude Code requests permission
  → Hook script fires
    → Sends notification with Allow/Deny buttons to ntfy.sh
      → Phone receives the push notification
        → You tap Allow or Deny
          → Response received via ntfy.sh SSE
            → Hook returns allow/deny JSON
              → Claude Code continues or stops
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://ntfy.sh" rel="noopener noreferrer"&gt;ntfy.sh&lt;/a&gt; is a free, HTTP-based push notification service. No account required — if you know the topic name, you can send and receive notifications. Unlike Pushover or LINE Notify, you can send a notification with a single &lt;code&gt;curl&lt;/code&gt; command, which makes it a perfect fit for hook scripts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Details
&lt;/h2&gt;

&lt;p&gt;The hook script is about 60 lines of bash. Here are the three key design decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Using ntfy.sh HTTP Actions for Buttons
&lt;/h3&gt;

&lt;p&gt;ntfy.sh supports &lt;a href="https://docs.ntfy.sh/publish/#action-buttons" rel="noopener noreferrer"&gt;Action Buttons&lt;/a&gt; on notifications. I'm using &lt;code&gt;http&lt;/code&gt; actions so that tapping a button POSTs to a separate response topic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;jq &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--arg&lt;/span&gt; topic &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TOPIC&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--arg&lt;/span&gt; title &lt;span class="s2"&gt;"[&lt;/span&gt;&lt;span class="nv"&gt;$PROJECT&lt;/span&gt;&lt;span class="s2"&gt;] &lt;/span&gt;&lt;span class="nv"&gt;$TOOL_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--arg&lt;/span&gt; message &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TOOL_INPUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--arg&lt;/span&gt; allow_url &lt;span class="s2"&gt;"https://ntfy.sh/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RESPONSE_TOPIC&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--arg&lt;/span&gt; allow_body &lt;span class="s2"&gt;"allow|&lt;/span&gt;&lt;span class="nv"&gt;$REQ_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--arg&lt;/span&gt; deny_url &lt;span class="s2"&gt;"https://ntfy.sh/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RESPONSE_TOPIC&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--arg&lt;/span&gt; deny_body &lt;span class="s2"&gt;"deny|&lt;/span&gt;&lt;span class="nv"&gt;$REQ_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s1"&gt;'{
      topic: $topic, title: $title, message: $message,
      priority: 4, tags: ["lock"],
      actions: [
        {action:"http",label:"Allow",url:$allow_url,method:"POST",body:$allow_body},
        {action:"http",label:"Deny",url:$deny_url,method:"POST",body:$deny_body}
      ]
    }'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"https://ntfy.sh/"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key detail: the notification topic and the response topic are separate channels. This prevents the SSE stream from being polluted by your own outgoing notifications.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. SSE + Request ID for Response Matching
&lt;/h3&gt;

&lt;p&gt;After sending the notification, the hook waits for a response on the ntfy.sh SSE endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;REQ_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%s&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;$$&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nv"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; line&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$line&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; data:&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nv"&gt;DATA&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;line&lt;/span&gt;&lt;span class="p"&gt;#data&lt;/span&gt;:&lt;span class="p"&gt; &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nv"&gt;MSG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DATA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.message // empty'&lt;/span&gt; 2&amp;gt;/dev/null&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$MSG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="s2"&gt;"|&lt;/span&gt;&lt;span class="nv"&gt;$REQ_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
      &lt;/span&gt;&lt;span class="nv"&gt;DECISION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;MSG&lt;/span&gt;&lt;span class="p"&gt;%%|*&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      &lt;span class="nb"&gt;break
    &lt;/span&gt;&lt;span class="k"&gt;fi
  fi
done&lt;/span&gt; &amp;lt; &amp;lt;&lt;span class="o"&gt;(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-N&lt;/span&gt; &lt;span class="nt"&gt;--max-time&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WAIT_TIMEOUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Accept: text/event-stream"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"https://ntfy.sh/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RESPONSE_TOPIC&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/sse"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;REQ_ID&lt;/code&gt; (timestamp + PID) is embedded in the notification body and matched against the response. This ensures that &lt;strong&gt;even when multiple permission requests fire simultaneously, each response is matched to the correct request&lt;/strong&gt;. Without this, you could accidentally apply a previous notification's response to a new one.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Timeout Fallback to Terminal
&lt;/h3&gt;

&lt;p&gt;If no response comes within the timeout window, the script outputs nothing and exits with code 0.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DECISION&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"allow"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;jq &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s1"&gt;'{hookSpecificOutput:{...decision:{behavior:"allow"}}}'&lt;/span&gt;
&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DECISION&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"deny"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;jq &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s1"&gt;'{hookSpecificOutput:{...decision:{behavior:"deny"}}}'&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="c"&gt;# Timeout: no output → falls back to interactive prompt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In Claude Code's hook specification, no output + exit 0 means "the hook didn't make a decision," which falls back to the standard terminal prompt. So even if you're not looking at your phone, you can still handle it from the terminal after the timeout. No permissions are silently granted.&lt;/p&gt;
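
&lt;p&gt;The contract boils down to "output JSON to decide, output nothing to abstain." A hypothetical helper makes that explicit; note the real hook emits the full &lt;code&gt;hookSpecificOutput&lt;/code&gt; JSON shown above, not this simplified shape:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Hypothetical wrapper: illustrates the output contract, not the full schema
emit_decision() {
  case "$1" in
    allow|deny) printf '{"behavior":"%s"}\n' "$1" ;;  # decision made
    *)          : ;;   # timeout: no output; exit 0 = "no decision"
  esac
}
emit_decision ""      # no output, so Claude Code falls back to the prompt
emit_decision allow   # prints {"behavior":"allow"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;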

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;macOS or Linux&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;bash&lt;/code&gt;, &lt;code&gt;jq&lt;/code&gt;, &lt;code&gt;curl&lt;/code&gt; installed&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ntfy.sh" rel="noopener noreferrer"&gt;ntfy app&lt;/a&gt; installed on your phone&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/coa00/claude-push.git
&lt;span class="nb"&gt;cd &lt;/span&gt;claude-push
bash install.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The installer walks you through:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Dependency check&lt;/strong&gt; — verifies &lt;code&gt;jq&lt;/code&gt; and &lt;code&gt;curl&lt;/code&gt; are available&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Topic name input&lt;/strong&gt; — generates a random one if left blank (e.g. &lt;code&gt;claude-push-a1b2c3d4&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Config file creation&lt;/strong&gt; — writes to &lt;code&gt;~/.config/claude-push/config&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hook deployment&lt;/strong&gt; — places script at &lt;code&gt;~/.local/share/claude-push/hooks/claude-push.sh&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude settings registration&lt;/strong&gt; — safely merges the hook into &lt;code&gt;~/.claude/settings.json&lt;/code&gt; using &lt;code&gt;jq&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test notification&lt;/strong&gt; — sends a test push to verify everything works&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After installation, subscribe to your topic in the ntfy app and you're good to go.&lt;/p&gt;
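
&lt;p&gt;Step 5 deserves a note: rather than overwriting &lt;code&gt;~/.claude/settings.json&lt;/code&gt;, the installer merges the hook entry in with &lt;code&gt;jq&lt;/code&gt;. The pattern looks roughly like this (the key names below are simplified stand-ins, not the exact hook schema):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Simplified stand-in for the safe-merge pattern; real key names may differ
SETTINGS='{"hooks":{"PreToolUse":[]}}'
NEW_HOOK='{"command":"~/.local/share/claude-push/hooks/claude-push.sh"}'
printf '%s' "$SETTINGS" | jq --argjson h "$NEW_HOOK" \
  '.hooks.PreToolUse = ((.hooks.PreToolUse // []) + [$h])'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;// []&lt;/code&gt; default means the merge works whether or not the key already exists, and existing entries are preserved.&lt;/p&gt;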

&lt;h3&gt;
  
  
  Configuration
&lt;/h3&gt;

&lt;p&gt;Edit &lt;code&gt;~/.config/claude-push/config&lt;/code&gt;. No reinstallation needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Topic name (acts as a shared secret for your ntfy.sh channel)&lt;/span&gt;
&lt;span class="nv"&gt;CLAUDE_PUSH_TOPIC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"my-unique-topic"&lt;/span&gt;

&lt;span class="c"&gt;# Timeout in seconds (falls back to terminal prompt after this)&lt;/span&gt;
&lt;span class="nv"&gt;CLAUDE_PUSH_TIMEOUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;90
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Send a test notification with Allow/Deny buttons&lt;/span&gt;
bash scripts/test.sh test-notify

&lt;span class="c"&gt;# Check installation status&lt;/span&gt;
bash scripts/test.sh status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;status&lt;/code&gt; command checks Config / Hook / Settings / Dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=== claude-push status ===
[OK] Config: ~/.config/claude-push/config
     Topic: my-unique-topic
     Timeout: 90s
[OK] Hook: ~/.local/share/claude-push/hooks/claude-push.sh
[OK] Settings: hook registered in ~/.claude/settings.json
[OK] Dependency: jq
[OK] Dependency: curl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Uninstall
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bash uninstall.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Removes the hook from Claude settings and cleans up all installed files.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Looks Like in Practice
&lt;/h2&gt;

&lt;p&gt;After setup, my daily workflow looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Tell Claude Code to refactor something, then leave my desk&lt;/li&gt;
&lt;li&gt;A few minutes later, my phone buzzes: &lt;code&gt;[myproject] Bash: npm run build&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;I check the command and tap &lt;strong&gt;Allow&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;When I get back to my desk, the build is already done&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I can handle permission requests during meetings, walks, or coffee runs — as long as I have my phone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Before / After
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Approval method&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;y&lt;/code&gt;/&lt;code&gt;n&lt;/code&gt; in terminal&lt;/td&gt;
&lt;td&gt;Allow/Deny on phone&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;While away&lt;/td&gt;
&lt;td&gt;Claude Code stops and waits&lt;/td&gt;
&lt;td&gt;Handle via push notification&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Timeout&lt;/td&gt;
&lt;td&gt;None (waits forever)&lt;/td&gt;
&lt;td&gt;Falls back to terminal after 90s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security&lt;/td&gt;
&lt;td&gt;Allowlist or manual check&lt;/td&gt;
&lt;td&gt;Case-by-case approval per notification&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Setup cost&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;bash install.sh&lt;/code&gt; in 3 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Security Considerations
&lt;/h2&gt;

&lt;p&gt;The ntfy.sh topic name acts as a &lt;strong&gt;shared secret&lt;/strong&gt;. Anyone who knows the topic name can send notifications or forge responses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use a random, hard-to-guess string for your topic name (the installer generates one by default)&lt;/li&gt;
&lt;li&gt;For stricter control, configure &lt;a href="https://docs.ntfy.sh/config/#access-control" rel="noopener noreferrer"&gt;ntfy.sh access control&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;For dangerous commands (&lt;code&gt;rm -rf&lt;/code&gt;, etc.), keep explicit deny rules in your Claude Code permission settings as an extra safety layer&lt;/li&gt;
&lt;/ul&gt;
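
&lt;p&gt;If you want to generate such a topic yourself, any source of randomness works. For example (the installer's own method may differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Generate a hard-to-guess topic name (8 random hex characters)
TOPIC="claude-push-$(head -c4 /dev/urandom | od -An -tx1 | tr -d ' \n')"
echo "$TOPIC"   # e.g. claude-push-a1b2c3d4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;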

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Claude Code's permission system used to be a binary choice: either babysit your terminal or allowlist everything. &lt;code&gt;claude-push&lt;/code&gt; adds a &lt;strong&gt;third option — mobile approval&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The implementation is deliberately boring: bash + ntfy.sh HTTP Actions + SSE. The hook itself is about 60 lines. What made this possible is Claude Code's well-designed Hooks API — you just return a JSON object with &lt;code&gt;allow&lt;/code&gt; or &lt;code&gt;deny&lt;/code&gt; and you're done.&lt;/p&gt;

&lt;p&gt;The broader pattern here applies to any AI coding agent, not just Claude Code: as agents get more capable, we need lightweight, async approval mechanisms that don't require us to sit and watch. Mobile push notifications hit the sweet spot — fast enough to keep the agent moving, visible enough to maintain oversight.&lt;/p&gt;

&lt;p&gt;The best developer tool is the one that lets you stop staring at a screen for 5 minutes. &lt;code&gt;claude-push&lt;/code&gt; is that tool.&lt;/p&gt;

&lt;p&gt;How do you handle AI agent permissions? Whether you're an allowlist person or a manual-check person, I'd love for you to give this a try.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/coa00/claude-push" rel="noopener noreferrer"&gt;https://github.com/coa00/claude-push&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.anthropic.com/en/docs/claude-code/hooks" rel="noopener noreferrer"&gt;Claude Code Hooks Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ntfy.sh" rel="noopener noreferrer"&gt;ntfy.sh&lt;/a&gt; — HTTP-based push notification service&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.ntfy.sh/publish/#action-buttons" rel="noopener noreferrer"&gt;ntfy.sh Action Buttons&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/konsti-web/claude_push" rel="noopener noreferrer"&gt;konsti-web/claude_push&lt;/a&gt; — Original inspiration (Windows/PowerShell version)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>claudecode</category>
      <category>productivity</category>
      <category>bash</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
