<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rodrigo Burgos</title>
    <description>The latest articles on DEV Community by Rodrigo Burgos (@burgossrodrigo).</description>
    <link>https://dev.to/burgossrodrigo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1002670%2F60f222b8-72cf-4cc2-b952-fb06fe5a8a85.png</url>
      <title>DEV Community: Rodrigo Burgos</title>
      <link>https://dev.to/burgossrodrigo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/burgossrodrigo"/>
    <language>en</language>
    <item>
      <title>Running Anchor Tests on GitHub Actions Without Losing Your Mind</title>
      <dc:creator>Rodrigo Burgos</dc:creator>
      <pubDate>Tue, 10 Mar 2026 06:00:41 +0000</pubDate>
      <link>https://dev.to/burgossrodrigo/running-anchor-tests-on-github-actions-without-losing-your-mind-20j0</link>
      <guid>https://dev.to/burgossrodrigo/running-anchor-tests-on-github-actions-without-losing-your-mind-20j0</guid>
      <description>&lt;p&gt;Setting up anchor test on GitHub Actions sounds like a weekend task. It took me a full week of debugging to get it right. Here's&lt;br&gt;
  everything I found so far, so you don't have to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The context&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'm building an open source cross-chain bridge between Ethereum and Solana. The Solana side uses Anchor 0.31.1. At some point I&lt;br&gt;
  needed CI — integration tests running automatically on every push, no manual anchor test on my machine.&lt;/p&gt;

&lt;p&gt;Simple enough, right?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem 1: Agave 3.x and io_uring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first approach: install the Solana CLI inside the CI runner directly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  - run: sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-sSfL&lt;/span&gt; https://release.anza.xyz/stable/install&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;stable at the time of writing resolves to Agave 3.1.x. The install completes fine. Then anchor test starts the local validator and&lt;br&gt;
   immediately panics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  thread 'main' panicked at fs/src/dirs.rs:27:9:
  assertion failed: io_uring_supported()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Agave 3.x added a hard dependency on io_uring, a Linux kernel feature for async I/O. GitHub Actions runners run on kernels where&lt;br&gt;
  io_uring is either disabled or not available.&lt;/p&gt;
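&lt;p&gt;Before pinning a version, you can probe the runner yourself: io_uring shipped in kernel 5.1, and newer kernels expose an &lt;code&gt;io_uring_disabled&lt;/code&gt; sysctl (0 = enabled, 1 = restricted, 2 = disabled). A quick diagnostic step, not part of the original workflow:&lt;/p&gt;

```shell
# Print the kernel version and, where the sysctl exists, the io_uring policy.
uname -r
if [ -r /proc/sys/kernel/io_uring_disabled ]; then
  echo "io_uring_disabled sysctl: $(cat /proc/sys/kernel/io_uring_disabled)"
else
  echo "no io_uring_disabled sysctl; support depends on the kernel build"
fi
```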

&lt;p&gt;&lt;strong&gt;Fix: use Agave 2.x. v2.1.21 is the latest in the 2.1 series and works without io_uring.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;
  ENV SOLANA_VERSION=v2.1.21
  RUN sh -c "$(curl -sSfL https://release.anza.xyz/${SOLANA_VERSION}/install)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Problem 2: GLIBC mismatch&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next issue: the anchor binary itself.&lt;/p&gt;

&lt;p&gt;avm install 0.31.1 downloads a pre-built binary from GitHub releases. That binary was compiled on a system with GLIBC 2.38/2.39.&lt;br&gt;
  Ubuntu 22.04 ships with GLIBC 2.35.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   /root/.avm/bin/anchor-0.31.1: /lib/x86_64-linux-gnu/libm.so.6:
     version `GLIBC_2.38' not found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
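&lt;p&gt;You can see a mismatch like this coming by comparing the GLIBC the system provides with the symbol versions a binary requires. A diagnostic sketch (the anchor path is illustrative):&lt;/p&gt;

```shell
# GLIBC the system provides:
ldd --version 2>&1 | head -n 1

# Highest GLIBC symbol versions a binary requires (path is illustrative):
objdump -T "$HOME/.avm/bin/anchor-0.31.1" 2>/dev/null \
  | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n 3
```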



&lt;p&gt;&lt;strong&gt;Two fixes needed:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Switch the base image to Ubuntu 24.04 (ships with GLIBC 2.39)&lt;/li&gt;
&lt;li&gt;Compile anchor from source instead of using avm. A binary compiled inside the container will always be compatible with the
container's GLIBC.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;FROM ubuntu:24.04&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  RUN cargo &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--git&lt;/span&gt; https://github.com/coral-xyz/anchor &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--tag&lt;/span&gt; v0.31.1 anchor-cli &lt;span class="nt"&gt;--locked&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes the build slower (adds ~5 min to the image build), but the image is built once and cached. Runtime CI stays fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem 3: Container environment quirks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When GitHub Actions runs a job inside a container, it overrides some environment variables. Two things break silently:&lt;/p&gt;

&lt;p&gt;cargo can't find its toolchain:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  error: rustup could not choose a version of cargo to run,
  because one wasn't specified explicitly, and no default is configured.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Fix: explicitly set CARGO_HOME and RUSTUP_HOME in the job env:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;CARGO_HOME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/root/.cargo&lt;/span&gt;
    &lt;span class="na"&gt;RUSTUP_HOME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/root/.rustup&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;solana-keygen writes to the wrong place:&lt;/p&gt;

&lt;p&gt;The keypair must exist at ~/.config/solana/id.json. Inside a GitHub Actions container, HOME is /github/home, not /root. If you&lt;br&gt;
  hardcode /root/.config/solana/id.json it won't be found by Anchor.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Generate keypair&lt;/span&gt;
    &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;mkdir -p $HOME/.config/solana&lt;/span&gt;
      &lt;span class="s"&gt;solana-keygen new \&lt;/span&gt;
        &lt;span class="s"&gt;--outfile $HOME/.config/solana/id.json \&lt;/span&gt;
        &lt;span class="s"&gt;--no-bip39-passphrase --force&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Problem 4: Test validator startup time&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The local validator takes longer to start in a CI runner than on a dev machine. Without configuring a startup wait, anchor test&lt;br&gt;
  will fail with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Unable to get latest blockhash. Test validator does not look started.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add this to your Anchor.toml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;  &lt;span class="nn"&gt;[test]&lt;/span&gt;
  &lt;span class="py"&gt;startup_wait&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;60000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Problem 5: Event listeners hang forever&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was the hardest one to diagnose. Tests that use program.addEventListener work fine locally but hang indefinitely in CI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;eventPromise&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;listenerId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;program&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;TokenSent&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;program&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;removeEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;listenerId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;program&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;methods&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bridgeSend&lt;/span&gt;&lt;span class="p"&gt;(...).&lt;/span&gt;&lt;span class="nf"&gt;rpc&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;eventPromise&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// hangs forever in CI&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The reason: addEventListener opens a WebSocket subscription to the local validator. In containerized environments, this&lt;br&gt;
  subscription is established but log notifications are never delivered. The promise never resolves. Anchor's default test script sets&lt;br&gt;
  Mocha's timeout to 1,000,000 ms, so the test runner just sits there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix: fetch the transaction logs directly after rpc() and parse the events from them.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;  &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;BorshCoder&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@coral-xyz/anchor&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;IDL&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../target/idl/bridge.json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;eventCoder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;BorshCoder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;IDL&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;events&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getConfirmedTx&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getTransaction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sig&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;commitment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;confirmed&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;maxSupportedTransactionVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;tx&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Transaction &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;sig&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; not found after retries`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;parseEvents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;logMessages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;logMessages&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startsWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Program data: &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;eventCoder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Program data: &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Boolean&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then in the test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;program&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;methods&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bridgeSend&lt;/span&gt;&lt;span class="p"&gt;(...).&lt;/span&gt;&lt;span class="nf"&gt;rpc&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getConfirmedTx&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sig&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;events&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;parseEvents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;logMessages&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;TokenSent&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nx"&gt;assert&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;TokenSent event not found&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;assert&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;equal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toNumber&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two notes on this approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;getTransaction can return null briefly after rpc() returns, even after confirmation — hence the retry loop.&lt;/li&gt;
&lt;li&gt;Use new BorshCoder(IDL) directly rather than program.coder. In some container configurations program.coder.events.decode returns
null even when the discriminator matches. Loading the IDL directly is reliable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also add "resolveJsonModule": true to your tsconfig.json to enable the JSON import.&lt;/p&gt;
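&lt;p&gt;For reference, a minimal &lt;code&gt;tsconfig.json&lt;/code&gt; fragment with that flag enabled (a sketch; merge it into your existing config):&lt;/p&gt;

```json
{
  "compilerOptions": {
    "resolveJsonModule": true
  }
}
```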

&lt;p&gt;&lt;strong&gt;The pre-built image&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To avoid reinstalling all of this on every CI run, I packaged it into a Docker image:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker pull burgossrodrigo/anchor-build:0.31.1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Full integration.yml example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;solana&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;burgossrodrigo/anchor-build:0.31.1&lt;/span&gt;

    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;CARGO_HOME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/root/.cargo&lt;/span&gt;
      &lt;span class="na"&gt;RUSTUP_HOME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/root/.rustup&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cache build artifacts&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/cache@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;contracts/solana/target&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ runner.os }}-anchor-${{ hashFiles('contracts/solana/Cargo.lock') }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Generate keypair&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;mkdir -p $HOME/.config/solana&lt;/span&gt;
          &lt;span class="s"&gt;solana-keygen new \&lt;/span&gt;
            &lt;span class="s"&gt;--outfile $HOME/.config/solana/id.json \&lt;/span&gt;
            &lt;span class="s"&gt;--no-bip39-passphrase --force&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Fix blake3 compatibility&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;contracts/solana&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cargo update -p blake3 --precise 1.8.2&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Fix indexmap compatibility&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;contracts/solana&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cargo update -p indexmap --precise 2.11.4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run tests&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;contracts/solana&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;anchor test&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The blake3 and indexmap pins are required because cargo build-sbf uses a bundled Rust (1.79) that is incompatible with their&lt;br&gt;
  latest versions — blake3 1.8.3+ introduced edition2024, and indexmap 2.12+ has an MSRV higher than 1.79.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The full project — bridge contracts, backend relayer, CI setup — is open source:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/burgossrodrigo/token_bridge"&gt;github.com/burgossrodrigo/token_bridge&lt;/a&gt;&lt;/p&gt;

</description>
      <category>solana</category>
      <category>anchor</category>
      <category>docker</category>
      <category>ci</category>
    </item>
    <item>
      <title>How to Make Your Rust Tests Run Faster in CI (A Practical Guide)</title>
      <dc:creator>Rodrigo Burgos</dc:creator>
      <pubDate>Thu, 05 Mar 2026 00:22:03 +0000</pubDate>
      <link>https://dev.to/burgossrodrigo/how-to-make-your-rust-tests-run-faster-in-ci-a-practical-guide-3937</link>
      <guid>https://dev.to/burgossrodrigo/how-to-make-your-rust-tests-run-faster-in-ci-a-practical-guide-3937</guid>
      <description>&lt;p&gt;Slow CI pipelines are often blamed on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Heavy test suites&lt;/li&gt;
&lt;li&gt;Complex integrations&lt;/li&gt;
&lt;li&gt;Rust compilation time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But in many cases, the real issue is much simpler: your tests are not fully using the CPU available in the CI runner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1 — Understand How cargo test Uses Threads&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Rust’s test harness runs tests in parallel by default. However, in CI environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU limits may restrict available cores&lt;/li&gt;
&lt;li&gt;Containers may expose fewer threads&lt;/li&gt;
&lt;li&gt;The harness may default to 1 thread in constrained setups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Never assume your CI is using all available CPUs. Instead, verify it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2 — Check How Many CPUs Your Runner Has&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Inside your CI job, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nproc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means the environment has 2 logical CPUs available. If you don’t explicitly configure thread usage, your tests might not use both.&lt;/p&gt;
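&lt;p&gt;The same number is visible from inside Rust via &lt;code&gt;std::thread::available_parallelism&lt;/code&gt;, which is essentially what the test harness consults for its default; it accounts for process affinity, so it can report fewer cores than the host actually has. A small sketch:&lt;/p&gt;

```rust
use std::thread;

/// Logical CPUs this process may use; accounts for affinity (and, on
/// newer toolchains, cgroup limits), so it can be lower than the host count.
fn visible_cpus() -> usize {
    thread::available_parallelism().map(|n| n.get()).unwrap_or(1)
}

fn main() {
    println!("logical CPUs visible to this process: {}", visible_cpus());
}
```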

&lt;p&gt;&lt;strong&gt;Step 3 — Explicitly Set --test-threads&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A typical CI script runs several test invocations back to back:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;script:
  - cargo test -p my_crate module_a
  - cargo test -p my_crate module_b
  - cargo test -p my_crate module_c
  - cargo test -p my_crate module_d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each invocation runs sequentially. To ensure each test run uses all available CPU cores, capture the number of CPUs dynamically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;script:
  - THREADS=$(nproc)
  - echo "Running tests with ${THREADS} threads"
  - cargo test -p my_crate module_a -- --test-threads=${THREADS}
  - cargo test -p my_crate module_b -- --test-threads=${THREADS}
  - cargo test -p my_crate module_c -- --test-threads=${THREADS}
  - cargo test -p my_crate module_d -- --test-threads=${THREADS}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why the Double Dash &lt;code&gt;(--)&lt;/code&gt; Is Important&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;--&lt;/code&gt; separator is critical. Everything before &lt;code&gt;--&lt;/code&gt; is interpreted by cargo. Everything after &lt;code&gt;--&lt;/code&gt; is passed to the test binary (Rust’s test harness).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--test-threads&lt;/code&gt; is a test harness argument, not a cargo argument. If you forget the separator, the flag won’t work.&lt;/p&gt;
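&lt;p&gt;The split can be illustrated with a toy wrapper (the &lt;code&gt;harness&lt;/code&gt; and &lt;code&gt;run&lt;/code&gt; functions here are hypothetical, just to show the forwarding):&lt;/p&gt;

```shell
# Stand-in for the test binary: prints whatever arguments reach it.
harness() { echo "harness got: $*"; }

# Stand-in for cargo: consumes its own flags, forwards everything after --.
run() {
  while [ "$1" != "--" ]; do shift; done
  shift
  harness "$@"
}

run --release -- --test-threads=4   # prints: harness got: --test-threads=4
```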

&lt;p&gt;&lt;strong&gt;Step 4 — Why Use $(nproc) Instead of a Fixed Number?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You could hardcode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--test-threads=2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But that creates a hidden maintenance issue. If the runner changes from 2 CPUs to 4, your CI won’t scale automatically.&lt;/p&gt;

&lt;p&gt;Using: &lt;code&gt;THREADS=$(nproc)&lt;/code&gt; ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic adaptation&lt;/li&gt;
&lt;li&gt;No future edits required&lt;/li&gt;
&lt;li&gt;Better portability between environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 5 — Make Sure Your Tests Are Safe to Parallelize&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Parallel test execution requires test isolation. Your tests should:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid global mutable state&lt;/li&gt;
&lt;li&gt;Avoid shared in-memory singletons&lt;/li&gt;
&lt;li&gt;Avoid reusing the same database instance&lt;/li&gt;
&lt;li&gt;Avoid mutating global environment variables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A safe pattern is to instantiate dependencies per test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn create_test_repository() -&amp;gt; InMemoryRepository {
    InMemoryRepository::new()
}

[tokio::test]
fn example_test() {
    let repo = create_test_repository();
    // test logic here
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each test gets its own isolated state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When You Should Disable Parallelism&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If a test suite depends on shared external state (for example, a real database instance), you may need to force sequential execution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cargo test -- --test-threads=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply this only to specific test groups that require it. Do not disable parallelism globally unless necessary.&lt;/p&gt;
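
&lt;p&gt;One way to scope this, assuming the shared-state tests live in their own integration-test target (here hypothetically named &lt;code&gt;db_tests&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# run only the shared-state suite sequentially...
cargo test --test db_tests -- --test-threads=1
# ...and keep everything else parallel
cargo test --lib -- --test-threads=$(nproc)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;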

&lt;p&gt;&lt;strong&gt;Optional Optimization: Avoid Repeating Expensive Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sometimes slow tests are caused by repeated expensive operations (for example, hashing, cryptographic setup, or large fixture generation). You can cache computed values safely using &lt;code&gt;OnceLock&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use std::sync::OnceLock;

static PRECOMPUTED_VALUE: OnceLock&amp;lt;String&amp;gt; = OnceLock::new();

fn get_precomputed_value() -&amp;gt; String {
    PRECOMPUTED_VALUE
        .get_or_init(|| {
            expensive_operation()
        })
        .clone()
}

fn expensive_operation() -&amp;gt; String {
    // Simulate heavy work
    "computed_result".to_string()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The expensive operation runs only once&lt;/li&gt;
&lt;li&gt;Tests remain deterministic&lt;/li&gt;
&lt;li&gt;No unsafe global mutation occurs&lt;/li&gt;
&lt;/ul&gt;
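
&lt;p&gt;A quick, self-contained sketch to convince yourself of the once-only behavior (the atomic counter exists only to observe initialization):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use std::sync::OnceLock;
use std::sync::atomic::{AtomicUsize, Ordering};

static INIT_CALLS: AtomicUsize = AtomicUsize::new(0);
static VALUE: OnceLock&amp;lt;String&amp;gt; = OnceLock::new();

fn get_value() -&amp;gt; String {
    VALUE
        .get_or_init(|| {
            // counts how many times the init closure actually runs
            INIT_CALLS.fetch_add(1, Ordering::SeqCst);
            "computed_result".to_string()
        })
        .clone()
}

fn main() {
    let a = get_value();
    let b = get_value();
    assert_eq!(a, b);
    // the closure ran exactly once, no matter how many callers there were
    assert_eq!(INIT_CALLS.load(Ordering::SeqCst), 1);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;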

&lt;p&gt;However, always evaluate tradeoffs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does it significantly reduce runtime?&lt;/li&gt;
&lt;li&gt;Does it add unnecessary complexity?&lt;/li&gt;
&lt;li&gt;Is parallelism alone sufficient?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Often, proper thread configuration already solves most CI performance issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expected Impact&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your CI was running tests effectively single-threaded on a multi-core runner, explicitly configuring &lt;code&gt;--test-threads&lt;/code&gt; can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduce test stage time dramatically&lt;/li&gt;
&lt;li&gt;Improve resource utilization&lt;/li&gt;
&lt;li&gt;Avoid unnecessary infrastructure upgrades&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In many cases, improvements of 2–3x are realistic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Checklist&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your Rust CI feels slow, verify the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How many CPUs does the runner expose? (&lt;code&gt;nproc&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Are tests running in parallel?&lt;/li&gt;
&lt;li&gt;Is &lt;code&gt;--test-threads&lt;/code&gt; explicitly configured?&lt;/li&gt;
&lt;li&gt;Are tests properly isolated?&lt;/li&gt;
&lt;li&gt;Are expensive operations unnecessarily repeated?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before rewriting your test suite or scaling infrastructure, make sure you are actually using the hardware available to you.&lt;/p&gt;

</description>
      <category>rust</category>
      <category>testing</category>
      <category>cicd</category>
      <category>qa</category>
    </item>
    <item>
      <title>Rust API with Pulumi IaC, k8s, GKE, DNS and CI on GitLab</title>
      <dc:creator>Rodrigo Burgos</dc:creator>
      <pubDate>Fri, 26 Sep 2025 15:32:16 +0000</pubDate>
      <link>https://dev.to/burgossrodrigo/api-rust-com-pulumi-iac-k8s-gke-dns-e-ci-no-gitlab-4268</link>
      <guid>https://dev.to/burgossrodrigo/api-rust-com-pulumi-iac-k8s-gke-dns-e-ci-no-gitlab-4268</guid>
      <description>&lt;p&gt;Today's little project is to build a Rust API, deployed to GKE using Pulumi as IaC. Our first step will be creating our IaC with Pulumi.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IaC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first resource is the ApiDeployment. It is not the Service itself, but it holds some of the pod's specifications, such as machine resources, ports (service and observability) and environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func ApiDeployment(ctx *pulumi.Context, provider *kubernetes.Provider, apiLabels pulumi.StringMap, secret *corev1.Secret, image pulumi.StringInput) error {
    _, err := appsv1.NewDeployment(ctx, "api", &amp;amp;appsv1.DeploymentArgs{
        Metadata: &amp;amp;metav1.ObjectMetaArgs{
            Name: pulumi.String("api"),
        },
        Spec: &amp;amp;appsv1.DeploymentSpecArgs{
            Replicas: pulumi.Int(1),
            Selector: &amp;amp;metav1.LabelSelectorArgs{
                MatchLabels: apiLabels,
            },
            Template: &amp;amp;corev1.PodTemplateSpecArgs{
                Metadata: &amp;amp;metav1.ObjectMetaArgs{
                    Labels: apiLabels,
                },
                Spec: &amp;amp;corev1.PodSpecArgs{
                    Containers: corev1.ContainerArray{
                        &amp;amp;corev1.ContainerArgs{
                            Name:  pulumi.String("api"),
                            Image: image,
                            Ports: corev1.ContainerPortArray{
                                &amp;amp;corev1.ContainerPortArgs{ContainerPort: pulumi.Int(8080)},
                                &amp;amp;corev1.ContainerPortArgs{ContainerPort: pulumi.Int(3001), Name: pulumi.String("metrics")},
                            },
                            Resources: &amp;amp;corev1.ResourceRequirementsArgs{
                                Requests: pulumi.StringMap{"cpu": pulumi.String("100m")},
                                Limits:   pulumi.StringMap{"cpu": pulumi.String("500m")},
                            },
                            Env: corev1.EnvVarArray{
                                &amp;amp;corev1.EnvVarArgs{
                                    Name: pulumi.String("TEST"),
                                    ValueFrom: &amp;amp;corev1.EnvVarSourceArgs{
                                        SecretKeyRef: &amp;amp;corev1.SecretKeySelectorArgs{
                                            Name: secret.Metadata.Name(),
                                            Key:  pulumi.String("TEST"),
                                        },
                                    },
                                },
                            },
                        },
                    },
                },
            },
        },
    }, pulumi.Provider(provider))
    return err
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next resource is the Service itself, more focused on the load balancer (pay attention to keeping the exposed ports consistent).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func ApiLb(ctx *pulumi.Context, provider *kubernetes.Provider, apiLabels pulumi.StringMap) (*corev1.Service, error) {
    apiService, err := corev1.NewService(ctx, "api-lb", &amp;amp;corev1.ServiceArgs{
        Metadata: &amp;amp;metav1.ObjectMetaArgs{
            Name:   pulumi.String("api-lb"),
            Labels: apiLabels,
            Annotations: pulumi.StringMap{
                "cloud.google.com/neg":            pulumi.String(`{"ingress": true}`),
                "cloud.google.com/backend-config": pulumi.String(`{"default":"api-backendconfig"}`),
                "pulumi.com/skipAwait":            pulumi.String("true"),
            },
        },
        Spec: &amp;amp;corev1.ServiceSpecArgs{
            Selector: apiLabels,
            Ports: corev1.ServicePortArray{
                &amp;amp;corev1.ServicePortArgs{
                    Port:       pulumi.Int(80),
                    TargetPort: pulumi.Int(8080),
                },
            },
        },
    }, pulumi.Provider(provider))
    if err != nil {
        return nil, err
    }

    return apiService, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This resource concerns the Ingress: the ingress class, the certificate manager, the public static IP name and the domain are all managed here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func ApiIngress(
    ctx *pulumi.Context,
    provider *kubernetes.Provider,
    apiService *corev1.Service,
    cert *apiextensions.CustomResource,
    ip *compute.GlobalAddress, // CHANGED: receives the whole resource
) (*networkingv1.Ingress, error) {

    ingress, err := networkingv1.NewIngress(ctx, "ingress", &amp;amp;networkingv1.IngressArgs{
        Metadata: &amp;amp;metav1.ObjectMetaArgs{
            Name: pulumi.String("ingress"),
            Annotations: pulumi.StringMap{
                "networking.gke.io/managed-certificates": pulumi.String("managed-cert"),
                "kubernetes.io/ingress.class":            pulumi.String("gce"),
                "pulumi.com/skipAwait":                   pulumi.String("true"),
                // The NAME of the GlobalAddress goes here:
                "kubernetes.io/ingress.global-static-ip-name": ip.Name,
            },
        },
        Spec: &amp;amp;networkingv1.IngressSpecArgs{
            IngressClassName: pulumi.StringPtr("gce"),
            Rules: networkingv1.IngressRuleArray{
                &amp;amp;networkingv1.IngressRuleArgs{
                    Host: pulumi.String("subdominio.dominio.com"),
                    Http: &amp;amp;networkingv1.HTTPIngressRuleValueArgs{
                        Paths: networkingv1.HTTPIngressPathArray{
                            &amp;amp;networkingv1.HTTPIngressPathArgs{
                                Path:     pulumi.String("/"),
                                PathType: pulumi.String("Prefix"),
                                Backend: &amp;amp;networkingv1.IngressBackendArgs{
                                    Service: &amp;amp;networkingv1.IngressServiceBackendArgs{
                                        Name: pulumi.String("api-lb"),
                                        Port: &amp;amp;networkingv1.ServiceBackendPortArgs{
                                            Number: pulumi.Int(80),
                                        },
                                    },
                                },
                            },
                        },
                    },
                },
            },
        },
    }, pulumi.Provider(provider), pulumi.DependsOn([]pulumi.Resource{apiService, cert, ip}))

    if err != nil {
        return nil, err
    }
    return ingress, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This resource is another layer of certificate management, with GCP-specific details.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func ManagedCert(ctx *pulumi.Context, provider *kubernetes.Provider) (*apiextensions.CustomResource, error) {
    cert, err := apiextensions.NewCustomResource(ctx, "managed-cert", &amp;amp;apiextensions.CustomResourceArgs{
        ApiVersion: pulumi.String("networking.gke.io/v1"),
        Kind:       pulumi.String("ManagedCertificate"),
        Metadata: metav1.ObjectMetaArgs{
            Name:      pulumi.String("managed-cert"),
            Namespace: pulumi.String("default"),
        },
        OtherFields: kubernetes.UntypedArgs{
            "spec": pulumi.Map{
                "domains": pulumi.StringArray{
                    pulumi.String("subdominio.dominio.com"),
                },
            },
        },
    }, pulumi.Provider(provider))

    if err != nil {
        return nil, err
    }

    return cert, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Third layer of resource management, the GCP-specific DNS record set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func NewRecordSet(ctx *pulumi.Context, managedZone *dns.ManagedZone, addr pulumi.StringInput) error {
    _, err := dns.NewRecordSet(ctx, "nomeclatura-arbitraria", &amp;amp;dns.RecordSetArgs{
        ManagedZone: managedZone.Name,
        Name:        pulumi.String("subdominio.dominio.com."),
        Type:        pulumi.String("A"),
        Ttl:         pulumi.Int(300),
        Rrdatas:     pulumi.StringArray{addr},
    })
    return err
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fourth layer of DNS management, the managed zone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func ManagedZone(ctx *pulumi.Context) (*dns.ManagedZone, error) {
    managedZone, err := dns.NewManagedZone(ctx, "nomeclatura-arbitraria-para-zona", &amp;amp;dns.ManagedZoneArgs{
        DnsName:     pulumi.String("subdominio.dominio.com."),
        Description: pulumi.String("Managed zone for GKE ingress"),
    })
    if err != nil {
        return nil, err
    }
    return managedZone, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Declaring the public IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func IngressIP(ctx *pulumi.Context) (*compute.GlobalAddress, error) {
    ip, err := compute.NewGlobalAddress(ctx, "nomeclatura-arbitraria-referenciando-ip", &amp;amp;compute.GlobalAddressArgs{
        AddressType: pulumi.String("EXTERNAL"),
        IpVersion:   pulumi.String("IPV4"),
        Name:        pulumi.String("nomeclatura-arbitraria-referenciando-ip"),
    })
    if err != nil {
        return nil, err
    }
    return ip, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our main function with all the declarations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        cfg := config.New(ctx, "gcp")
        region := cfg.Require("region")
        image := config.Require(ctx, "apiImage")

        cluster, err := infra.Cluster(ctx, region)
        if err != nil {
            return err
        }

        node, err := infra.Node(region, ctx, cluster)
        if err != nil {
            return err
        }

        kubeconfig := utils.Kubeconfig(cluster)

        provider, err := infra.Provider(ctx, node, &amp;amp;kubeconfig)
        if err != nil {
            return err
        }

        apiLabels := service.ApiLabels()

        secret, err := utils.CreateAppSecret(ctx, provider)
        if err != nil {
            return err
        }

        err = service.ApiDeployment(ctx, provider, apiLabels, secret, pulumi.String(image))
        if err != nil {
            return err
        }

        apiLb, err := service.ApiLb(ctx, provider, apiLabels)
        if err != nil {
            return err
        }

        // Create the ManagedCertificate and store it to be used as a dependency.
        managedCert, err := service.ManagedCert(ctx, provider)
        if err != nil {
            return err
        }

        // The Ingress now receives the ManagedCert as an argument,
        // guaranteeing the correct creation order.
        // _, err = service.ApiIngress(ctx, provider, apiLb, managedCert)
        // if err != nil {
        //  return err
        // }
        //

        IP, err := utils.IngressIP(ctx)
        if err != nil {
            return err
        }

        _, err = service.ApiIngress(ctx, provider, apiLb, managedCert, IP)
        if err != nil {
            return err
        }

        err = utils.BackendConfig(ctx, provider)
        if err != nil {
            return err
        }

        // ingressIP := service.IngressIP(apiIngress)

        // ingressIP.ApplyT(func(ip string) error {
        //  ctx.Log.Info("Ingress IP: "+ip, nil)
        //  return nil
        // })

        managedZone, err := service.ManagedZone(ctx)
        if err != nil {
            return err
        }

        err = service.NewRecordSet(ctx, managedZone, IP.Address)
        if err != nil {
            return err
        }

        return nil

    })

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Dockerfile&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I won't go too deep into the API itself; here is a performant Dockerfile with cache management:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ===== Base (Rust + deps de build) =====
ARG RUST_VERSION=1.83-bookworm
FROM rust:${RUST_VERSION} AS rust-base
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y --no-install-recommends \
    pkg-config libssl-dev ca-certificates &amp;amp;&amp;amp; \
    rm -rf /var/lib/apt/lists/*
WORKDIR /app

# ===== Planner (generates the cargo-chef recipe) =====
FROM rust-base AS planner
RUN cargo install cargo-chef --locked
COPY Cargo.* ./
COPY src ./src
RUN cargo chef prepare --recipe-path recipe.json

# ===== Dependency cache (cooks the dependencies) =====
FROM rust-base AS cacher
RUN cargo install cargo-chef --locked
COPY --from=planner /app/recipe.json /app/recipe.json
RUN cargo chef cook --release --recipe-path /app/recipe.json

# ===== Final binary build =====
FROM rust-base AS builder
# Reuse the dependencies cooked by cargo-chef in the cacher stage
COPY --from=cacher /app/target /app/target
COPY --from=cacher /usr/local/cargo /usr/local/cargo
COPY . .
# Optional: ensure a lockfile exists if it isn't committed
RUN [ -f Cargo.lock ] || cargo generate-lockfile
RUN cargo build --release

# ===== Runtime (slim, with OpenSSL 3) =====
FROM debian:bookworm-slim
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y --no-install-recommends \
    libssl3 ca-certificates &amp;amp;&amp;amp; \
    apt-get clean &amp;amp;&amp;amp; rm -rf /var/lib/apt/lists/*
WORKDIR /app

COPY --from=builder /app/target/release/test_api /app/test_api

ENV PORT=8080
EXPOSE 8080
CMD ["/app/test_api"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Gitlab CI/CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pushing the image to GCP's Artifact Registry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;push_test_api:
    stage: push_test_api
    tags: [temp]
    environment: { name: sandbox }
    rules:
        - if: '$CI_COMMIT_BRANCH == "sandbox" &amp;amp;&amp;amp; $CI_PIPELINE_SOURCE == "push"'
          changes: ["test_api/**/*"]
          when: always
        - when: never
    before_script:
        - *before_gcp
    script: |
        set -euo pipefail
        IMAGE_TAG="${CI_COMMIT_SHORT_SHA}"
        REPO="us-central1-docker.pkg.dev/projeto/registry"
        IMAGE_NAME="test-api"
        IMAGE_URI="${REPO}/${IMAGE_NAME}:${IMAGE_TAG}"

        docker build -t "${IMAGE_URI}" -f test_api/Dockerfile test_api
        docker push "${IMAGE_URI}"

        # Pin by digest (optional, recommended)
        DIGEST=$(gcloud artifacts docker images describe "${IMAGE_URI}" --format="value(image_summary.digest)")
        echo "IMAGE_URI=${IMAGE_URI}"   &amp;gt;  image_env.txt
        echo "IMAGE_DIGEST=${DIGEST}"   &amp;gt;&amp;gt; image_env.txt
    artifacts:
        reports:
            dotenv: image_env.txt
    after_script:
        - rm -f gcloud-key.json || true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Updating the resource through Pulumi:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy_api_pulumi:
    stage: deploy_api_pulumi
    needs: ["push_test_api"]
    tags: [temp]
    environment: { name: sandbox }
    rules:
        - if: '$CI_COMMIT_BRANCH == "branch_name" &amp;amp;&amp;amp; $CI_PIPELINE_SOURCE == "push"'
          changes: ["test_api/**/*"]
          when: always
        - when: never
    before_script:
        - *before_gcp
        # Pulumi + Go
        - curl -fsSL https://get.pulumi.com | sh
        - export PATH="$HOME/.pulumi/bin:$PATH"
        - mkdir -p "$CI_PROJECT_DIR/.tmp/go"
        - curl -sSL "https://go.dev/dl/go1.23.0.linux-amd64.tar.gz" | tar -xz -C "$CI_PROJECT_DIR/.tmp/go" --strip-components=1
        - export PATH="$CI_PROJECT_DIR/.tmp/go/bin:$PATH"
        # GKE auth plugin + kubectl
        - gcloud components install gke-gcloud-auth-plugin kubectl --quiet || true
        - export USE_GKE_GCLOUD_AUTH_PLUGIN=True
        - kubectl version --client=true
        # kubeconfig
        - gcloud container clusters get-credentials seu-cluster --region us-east1 --project seu-projeto
    script: |
        set -euo pipefail
        cd iac_secret_manager

        # uses the backend from Pulumi.yaml (gs://seu-bucket-gerenciamento-state)
        pulumi login

        export PULUMI_CONFIG_PASSPHRASE=""

        pulumi stack select dev

        # point at the immutable image produced by the previous job
        pulumi config set apiImage "${IMAGE_URI}"

        # ---- (optional) inject configs/secrets your app uses ----
        # Examples (define the envs in GitLab CI and mark them as masked/protected):
        # pulumi config set --secret TEST "$TEST"
        # pulumi config set --secret AUTH_TOKEN_KEY "$AUTH_TOKEN_KEY_STAGING"
        # ------------------------------------------------------------

        pulumi up --yes
        kubectl rollout status deploy/api -n default --timeout=120s
    after_script:
        - rm -f gcloud-key.json || true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure you have an SA (service account) with the permissions needed for this operation. Create the GCS bucket to manage the state and, if your GitLab is self-hosted, prepare a VM to run these operations (with plenty of space to store the cache, or with some periodic cleanup mechanism), and you will have a secure, performant CI/CD for your project :)&lt;/p&gt;
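
&lt;p&gt;For reference, creating the state bucket and pointing Pulumi at it can look like this sketch (the bucket name and region are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create the GCS bucket that will hold the Pulumi state
gsutil mb -l us-central1 gs://seu-bucket-gerenciamento-state

# log the Pulumi CLI into that backend
pulumi login gs://seu-bucket-gerenciamento-state
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;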

</description>
      <category>kubernetes</category>
      <category>gke</category>
      <category>gitlab</category>
      <category>rust</category>
    </item>
    <item>
      <title>Deploying a Rust API to Cloud Run through a GitLab pipeline</title>
      <dc:creator>Rodrigo Burgos</dc:creator>
      <pubDate>Mon, 08 Sep 2025 21:21:10 +0000</pubDate>
      <link>https://dev.to/burgossrodrigo/deploy-de-uma-api-em-rust-em-cloudrun-atraves-de-pipeline-com-gitlab-na6</link>
      <guid>https://dev.to/burgossrodrigo/deploy-de-uma-api-em-rust-em-cloudrun-atraves-de-pipeline-com-gitlab-na6</guid>
      <description>&lt;p&gt;A genuine post, written by me (GPT proofread it hehehe). I will try to be as objective as possible, but feel free to discuss approaches or ask questions via DM or in the comments.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;API&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The approach I used for this API is DDD (&lt;em&gt;domain-driven design&lt;/em&gt;). I won't dwell on the development of the API itself; I'll focus on &lt;code&gt;main.rs&lt;/code&gt; and the file tree.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use test_api::infra::handler::health::get_health_handler;
use tokio;
use warp::Filter;

#[tokio::main]
async fn main() {
    // Define the route
    let get_health_route = warp::path("health")
        .and(warp::get())
        .and_then(get_health_handler);

    // Start the warp server
    warp::serve(get_health_route)
        .run(([0, 0, 0, 0], 8080))
        .await;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Cargo.toml
├── Dockerfile
├── REAME.md
├── src
│   ├── domain
│   │   ├── entities
│   │   └── use_case
│   ├── infra
│   │   └── handler
│   ├── lib.rs
│   ├── main.rs
│   └── use_case
│       └── get_health
└── target
    ├── CACHEDIR.TAG
    └── debug
        ├── build
        ├── deps
        ├── examples
        └── incremental

15 directories, 6 files

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An important point to focus on is the parameters passed to warp::serve's run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;.run(([0, 0, 0, 0], 8080))&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Binding to 0.0.0.0 lets the container be reached externally.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Dockerfile
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Etapa 1: Builder com Rust + OpenSSL
FROM rust:latest AS builder

RUN apt-get update &amp;amp;&amp;amp; apt-get install -y pkg-config libssl-dev

WORKDIR /usr/src/test_api

# Copy the crate files
COPY Cargo.toml ./
COPY src ./src

# Generate a lockfile and fetch dependencies
RUN cargo generate-lockfile
RUN cargo build --release

# Stage 2: Final image with glibc compatibility
FROM debian:bookworm-slim

# Install OpenSSL and minimal runtime dependencies
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y libssl3 ca-certificates &amp;amp;&amp;amp; \
    apt-get clean &amp;amp;&amp;amp; rm -rf /var/lib/apt/lists/*

WORKDIR /app

COPY --from=builder /usr/src/test_api/target/release/test_api .

ENV PORT=8080
EXPOSE 8080

CMD ["./test_api"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Gitlab
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
    - deploy_cloud_run

deploy_cloud_run:
    stage: deploy_cloud_run
    tags: [temp]
    environment:
        name: dev
    rules:
        - if: '$CI_COMMIT_BRANCH == "dev" &amp;amp;&amp;amp; $CI_PIPELINE_SOURCE == "push"'
          changes:
              - "**/*"
          when: always
        - when: never
    before_script:
        - mkdir -p "$CI_PROJECT_DIR/.tmp/google-cloud-sdk"
        - curl -sSL "https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-438.0.0-linux-x86_64.tar.gz" | tar -xz -C "$CI_PROJECT_DIR/.tmp/google-cloud-sdk" --strip-components=1
        - export PATH="$CI_PROJECT_DIR/.tmp/google-cloud-sdk/bin:$PATH"
        - echo "$SERVICE_ACCOUNT_JSON" | base64 -d &amp;gt; gcloud-key.json
        - gcloud auth activate-service-account --key-file=gcloud-key.json
        - export GCP_PROJECT_ID="seu-projeto"
        - gcloud config set project "$GCP_PROJECT_ID"
        - gcloud auth configure-docker us-central1-docker.pkg.dev --quiet
    script:
        - |
            IMAGE_TAG="$(git rev-parse --short HEAD)"
            IMAGE_URI="us-central1-docker.pkg.dev/seu-repo/api/test-api:${IMAGE_TAG}"

            docker build \
              -t "$IMAGE_URI" \
              -f test_api/Dockerfile \
              test_api

            docker push "$IMAGE_URI"

            gcloud run deploy api-dev \
              --image "$IMAGE_URI" \
              --platform managed \
              --region us-central1 \
              --allow-unauthenticated \
              --project "$GCP_PROJECT_ID" \
              --set-env-vars "ENVIRONMENT=staging"
    after_script:
        - rm -f gcloud-key.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Permissions the service account needs to run this job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud projects add-iam-policy-binding seu-projeto \ --member="serviceAccount:service-account@project.iam.gserviceaccount.com" \ --role="roles/run.admin"

gcloud projects add-iam-policy-binding seu-projeto \ --member="serviceAccount:service-account@project.iam.gserviceaccount.com" \ --role="roles/viewer"

gcloud projects add-iam-policy-binding seu-projeto \ --member="serviceAccount:service-account@project.iam.gserviceaccount.com" \ --role="roles/iam.serviceAccountUser"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The SA's JSON key must be stored as a GitLab secret (a masked CI/CD variable).&lt;/p&gt;
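
&lt;p&gt;Since the pipeline decodes it with &lt;code&gt;base64 -d&lt;/code&gt;, store the base64-encoded key. A sketch (the key file name is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# encode without line wrapping, then paste the output into the
# SERVICE_ACCOUNT_JSON variable (marked masked/protected)
base64 -w0 gcloud-key.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;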

&lt;p&gt;Inside your GitLab dashboard, in the jobs section, there should be an output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Service [api-dev] revision [sua-api-00000-pez] has been deployed and is serving 100 percent of traffic.
Service URL: https://sua-api-hash-uc.a.run.app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything worked, use curl to hit the health endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://sua-api-hash-uc.a.run.app/health
{"health_status":{"status":"healthy","message":"API is working fine"}}%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The API repository is &lt;a href="https://github.com/burgossrodrigo/test_api" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I hope that, should you ever find yourself stuck on a procedure like this one (or similar), this post helps you get unstuck (:&lt;/p&gt;

</description>
      <category>rust</category>
      <category>gcp</category>
      <category>gitlab</category>
      <category>docker</category>
    </item>
    <item>
      <title>Deploying Redis on GCP with Helm, Kubernetes, and Pulumi</title>
      <dc:creator>Rodrigo Burgos</dc:creator>
      <pubDate>Tue, 27 May 2025 03:36:16 +0000</pubDate>
      <link>https://dev.to/burgossrodrigo/deploying-redis-on-gcp-with-helm-kubernetes-and-pulumi-2gj3</link>
      <guid>https://dev.to/burgossrodrigo/deploying-redis-on-gcp-with-helm-kubernetes-and-pulumi-2gj3</guid>
      <description>&lt;p&gt;In this tutorial, you’ll learn how to deploy a highly available Redis instance on Google Kubernetes Engine (GKE) using Pulumi, Helm, and Kubernetes. We'll cover:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Prerequisites&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Setting up GCP &amp;amp; GKE via Pulumi&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploying Redis with the Bitnami Helm chart&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Exposing Redis with a LoadBalancer&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Validating the Deployment&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;1. Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before you begin, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A GCP project with billing enabled.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;gcloud CLI installed and authenticated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pulumi CLI installed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A local Kubernetes context (you'll dynamically generate one via Pulumi).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helm installed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go ≥1.18 (or your preferred Pulumi language runtime).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Setting up GCP &amp;amp; GKE via Pulumi&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a new Pulumi project (e.g. pulumi new go), then install the GCP and Kubernetes providers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go get github.com/pulumi/pulumi-gcp/sdk/v6/go/gcp/container
go get github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, implement a provider package to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Provision a GKE cluster&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generate a kubeconfig&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Instantiate a Kubernetes provider&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// provider/provider.go
package provider

import (
  "fmt"
  "os/exec"
  "strings"

  "github.com/pulumi/pulumi-gcp/sdk/v6/go/gcp/container"
  k8s "github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes"
  "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func SetupProvider(ctx *pulumi.Context) (*k8s.Provider, error) {
  // Provision a GKE cluster
  cluster, err := container.NewCluster(ctx, "my-gke-cluster", &amp;amp;container.ClusterArgs{
    Location:         pulumi.String("&amp;lt;YOUR_REGION&amp;gt;"),
    InitialNodeCount: pulumi.Int(1),
    NodeConfig: &amp;amp;container.ClusterNodeConfigArgs{
      MachineType: pulumi.String("e2-standard-2"),
      OauthScopes: pulumi.StringArray{
        pulumi.String("https://www.googleapis.com/auth/cloud-platform"),
      },
    },
  })
  if err != nil {
    return nil, fmt.Errorf("creating GKE cluster: %w", err)
  }

  // Build kubeconfig dynamically
  kubeconfig := pulumi.All(cluster.Name, cluster.Endpoint, cluster.MasterAuth).ApplyT(
    func(args []interface{}) (string, error) {
      name := args[0].(string)
      endpoint := args[1].(string)
      auth := args[2].(container.ClusterMasterAuth)

      // Retrieve a short-lived access token
      out, err := exec.Command("gcloud", "auth", "print-access-token").Output()
      if err != nil {
        return "", fmt.Errorf("gcloud auth: %w", err)
      }
      token := strings.TrimSpace(string(out))

      return fmt.Sprintf(`
apiVersion: v1
kind: Config
clusters:
- name: %s
  cluster:
    certificate-authority-data: %s
    server: https://%s
contexts:
- name: %s
  context:
    cluster: %s
    user: %s
current-context: %s
users:
- name: %s
  user:
    token: %s
`, name, *auth.ClusterCaCertificate, endpoint,
        name, name, name, name, name, token), nil
    }).(pulumi.StringOutput)

  // Create the Pulumi Kubernetes provider
  k8sProvider, err := k8s.NewProvider(ctx, "k8s", &amp;amp;k8s.ProviderArgs{
    Kubeconfig: kubeconfig,
  })
  if err != nil {
    return nil, fmt.Errorf("creating Kubernetes provider: %w", err)
  }

  return k8sProvider, nil
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, call this from your main.go:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// main.go
package main

import (
  "iac_staging/provider"
  "iac_staging/redis"

  "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
  pulumi.Run(func(ctx *pulumi.Context) error {
    // Setup GKE + Kubernetes provider
    k8sProvider, err := provider.SetupProvider(ctx)
    if err != nil {
      return err
    }

    // Deploy Redis
    if err := redis.Install(ctx, k8sProvider); err != nil {
      return err
    }

    return nil
  })
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Deploying Redis with the Bitnami Helm Chart&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a redis package that uses Pulumi’s Helm integration to install the Bitnami Redis chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// redis/redis.go
package redis

import (
  "github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes/helm/v3"
  k8s "github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes"
  "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func Install(ctx *pulumi.Context, provider *k8s.Provider) error {
  // Replace &amp;lt;YOUR_REDIS_PASSWORD&amp;gt; with a Pulumi secret or config
  redisPassword := pulumi.String("&amp;lt;YOUR_REDIS_PASSWORD&amp;gt;")

  _, err := helm.NewChart(ctx, "redis", helm.ChartArgs{
    Chart:     pulumi.String("redis"),
    Version:   pulumi.String("17.3.11"),
    Namespace: pulumi.String("default"),
    FetchArgs: helm.FetchArgs{
      Repo: pulumi.String("https://charts.bitnami.com/bitnami"),
    },
    Values: pulumi.Map{
      "auth": pulumi.Map{
        "enabled":  pulumi.Bool(true),
        "password": redisPassword,
      },
      "master": pulumi.Map{
        "service": pulumi.Map{
          // Type LoadBalancer will provision a GCP Network LB
          "type": pulumi.String("LoadBalancer"),
        },
        "resources": pulumi.Map{
          "requests": pulumi.Map{
            "cpu":    pulumi.String("100m"),
            "memory": pulumi.String("128Mi"),
          },
          "limits": pulumi.Map{
            "cpu":    pulumi.String("250m"),
            "memory": pulumi.String("256Mi"),
          },
        },
      },
      "replica": pulumi.Map{
        "replicaCount": pulumi.Int(1),
        "resources": pulumi.Map{
          "requests": pulumi.Map{
            "cpu":    pulumi.String("100m"),
            "memory": pulumi.String("128Mi"),
          },
          "limits": pulumi.Map{
            "cpu":    pulumi.String("250m"),
            "memory": pulumi.String("256Mi"),
          },
        },
      },
    },
  }, pulumi.Provider(provider))
  return err
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Exposing Redis with a LoadBalancer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By setting the service.type to LoadBalancer, GKE will automatically provision a GCP Network Load Balancer with an external IP. You can retrieve this IP with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl --context $(pulumi stack output kubeconfig) \
  get svc redis-master --namespace default -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tip: You can also configure a static IP in GCP and annotate the chart to use it by adding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;master:
  service:
    annotations:
      networking.gke.io/load-balancer-type: "External"
    loadBalancerIP: "YOUR_RESERVED_STATIC_IP"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Validating the Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Obtain the External IP:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXTERNAL_IP=$(kubectl get svc redis-master \
  --namespace default \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Redis external IP: $EXTERNAL_IP"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Test Connectivity with redis-cli:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;redis-cli -h $EXTERNAL_IP -a $REDIS_PASSWORD PING
# Expected: PONG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You’ve now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Provisioned a GKE cluster on GCP with Pulumi.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Installed Redis using the Bitnami Helm chart via Pulumi.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Exposed Redis publicly (or privately, if you choose a different service.type).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Validated the deployment with redis-cli.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach lets you manage your infrastructure and application in a single Pulumi program, ensuring repeatable, version-controlled deployments.&lt;/p&gt;

&lt;p&gt;Feel free to drop any questions or improvements in the comments!&lt;/p&gt;

</description>
      <category>redis</category>
      <category>kubernetes</category>
      <category>gcp</category>
    </item>
    <item>
      <title>Building a Rust API to retrieve Your DEV.to Posts</title>
      <dc:creator>Rodrigo Burgos</dc:creator>
      <pubDate>Tue, 22 Apr 2025 11:55:38 +0000</pubDate>
      <link>https://dev.to/burgossrodrigo/building-a-rust-api-to-retrieve-your-devto-posts-55be</link>
      <guid>https://dev.to/burgossrodrigo/building-a-rust-api-to-retrieve-your-devto-posts-55be</guid>
      <description>&lt;p&gt;Have you ever wanted a lightweight, self‑hosted API that serves your own DEV.to articles in JSON? &lt;/p&gt;

&lt;p&gt;In this tutorial I’ll walk you through the core ideas behind a small Rust service that fetches your posts directly from the DEV.to &lt;code&gt;/api/articles/me/all&lt;/code&gt; endpoint and exposes them at &lt;code&gt;GET /posts&lt;/code&gt; on &lt;code&gt;localhost:3030&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;First, we use Warp as our web server and routing library. Warp’s filter system makes it trivial to mount a route, handle query parameters, and return JSON. In &lt;code&gt;main.rs&lt;/code&gt; we set up a single filter for &lt;code&gt;warp::path("posts")&lt;/code&gt;, combine it with &lt;code&gt;warp::get()&lt;/code&gt;, and dispatch to a handler function.&lt;/p&gt;

&lt;p&gt;The handler uses a “use case” module powered by Reqwest and Serde. When you ping &lt;code&gt;/posts&lt;/code&gt;, the service reads your &lt;code&gt;API_KEY&lt;/code&gt; (from a .env file), builds an HTTP client, and issues a &lt;code&gt;GET&lt;/code&gt; to &lt;code&gt;https://dev.to/api/articles/me/all&lt;/code&gt; with the required headers: &lt;/p&gt;

&lt;p&gt;• api-key: &amp;lt;YOUR_API_KEY&amp;gt;&lt;br&gt;
• Accept: application/vnd.forem.api-v1+json&lt;br&gt;
• User-Agent: reqwest&lt;/p&gt;

&lt;p&gt;The JSON response is deserialized into a vector of structs defined with &lt;code&gt;#[serde(rename_all = "snake_case")]&lt;/code&gt; and fields matching the API (&lt;code&gt;id&lt;/code&gt;, &lt;code&gt;title&lt;/code&gt;, &lt;code&gt;description&lt;/code&gt;, &lt;code&gt;slug&lt;/code&gt;, &lt;code&gt;url&lt;/code&gt;, &lt;code&gt;tag_list&lt;/code&gt;, etc.). Finally, we wrap that vector in a &lt;code&gt;GetApiCallOutputType&lt;/code&gt; and return it as JSON via Warp.&lt;/p&gt;
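&lt;p&gt;To make the shapes concrete, here is a std-only sketch; the fields follow the list above, but the names are mine and the real repo may differ (the reqwest/warp wiring is only commented):&lt;/p&gt;

```rust
// Std-only sketch of the data flow described above -- names are mine.
// With serde enabled, Post would carry #[derive(Deserialize)] and
// #[serde(rename_all = "snake_case")].
#[allow(dead_code)]
struct Post {
    id: u64,
    title: String,
    description: String,
    slug: String,
    url: String,
    tag_list: Vec<String>,
}

// The three headers the Forem API expects, per the article.
fn required_headers(api_key: &str) -> Vec<(String, String)> {
    vec![
        ("api-key".into(), api_key.into()),
        ("Accept".into(), "application/vnd.forem.api-v1+json".into()),
        ("User-Agent".into(), "reqwest".into()),
    ]
}

fn main() {
    // With reqwest, the request itself would look roughly like:
    //   client.get("https://dev.to/api/articles/me/all")
    //       .header("api-key", key)...send().await
    for (name, value) in required_headers("YOUR_API_KEY") {
        println!("{name}: {value}");
    }
}
```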

&lt;p&gt;Because it’s pure Rust, this service is fast, compiles to a single small binary, and is easy to containerize. You can build it in release mode (&lt;code&gt;cargo build --release&lt;/code&gt;) or package it in Docker with a multi‑stage Dockerfile. Once running, everything is served on port 3030.&lt;/p&gt;

&lt;p&gt;Next steps could include adding pagination support, in‑memory or Redis caching, request throttling, or even basic auth to protect your local endpoint. Give it a spin, tweak it to your needs, and let me know how you extend it!&lt;/p&gt;

&lt;p&gt;You can check the Git repository &lt;a href="https://github.com/burgossrodrigo/devto_api" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>rust</category>
      <category>devto</category>
    </item>
    <item>
      <title>Access AWS ElastiCache from Localhost Using a Bastion Host and SSM</title>
      <dc:creator>Rodrigo Burgos</dc:creator>
      <pubDate>Wed, 09 Apr 2025 05:42:20 +0000</pubDate>
      <link>https://dev.to/burgossrodrigo/access-aws-elasticache-from-localhost-using-a-bastion-host-and-ssm-5b60</link>
      <guid>https://dev.to/burgossrodrigo/access-aws-elasticache-from-localhost-using-a-bastion-host-and-ssm-5b60</guid>
      <description>&lt;p&gt;When managing cloud infrastructure, you often want to securely access services like Redis (ElastiCache) that live in private subnets. Direct access from your machine is usually not allowed — and that’s a good thing. But what if you still want to test or inspect your Redis cluster?&lt;/p&gt;

&lt;p&gt;One approach is to set up a bastion host with SSM (Session Manager) and use it to tunnel requests to ElastiCache from your localhost — without exposing anything to the public internet.&lt;/p&gt;

&lt;p&gt;Let’s break down how to do it using Terraform and the AWS CLI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s what we’re setting up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A bastion EC2 instance in a public subnet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Redis ElastiCache cluster in private subnets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security Groups allowing just enough access to get the job done.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SSM Session Manager to connect to the bastion host without SSH or public key management.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Terraform Resources&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;EC2 Bastion host
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "bastion_host" {
  ami                         = "ami-02f3f602d23f1659d"
  instance_type               = "t3.micro"
  subnet_id                   = var.subnet_id_a
  associate_public_ip_address = true
  key_name                    = "your_key"
  iam_instance_profile        = aws_iam_instance_profile.ssm_profile.name
  vpc_security_group_ids      = [aws_security_group.bastion_sg.id]

  root_block_device {
    volume_size = 10
  }

  user_data = file("user_data.sh")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;IAM Role for SSM
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_role" "ssm_role" {
  name = "ssm-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Effect = "Allow",
      Principal = {
        Service = "ec2.amazonaws.com"
      },
      Action = "sts:AssumeRole"
    }]
  })

  managed_policy_arns = [
    "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  ]
}

resource "aws_iam_instance_profile" "ssm_profile" {
  name = "ssm-profile"
  role = aws_iam_role.ssm_role.name
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Bastion Host Security Group
Allow only egress to required ports (Redis, HTTP, HTTPS):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "bastion_sg" {
  name   = "bastion-sg"
  vpc_id = var.vpc_id

  egress {
    from_port   = 6379
    to_port     = 6379
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"] # restrict to your VPC range
  }

  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;ElastiCache Redis Cluster
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_elasticache_cluster" "redis" {
  cluster_id           = "redis-dev"
  engine               = "redis"
  node_type            = "cache.t2.micro"
  num_cache_nodes      = 1
  port                 = 6379
  subnet_group_name    = aws_elasticache_subnet_group.redis_subnets.name
  security_group_ids   = [aws_security_group.redis_sg.id]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
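&lt;p&gt;Note that the cluster references a subnet group that isn't defined in this snippet; a minimal definition might look like this (subnet IDs are placeholders):&lt;/p&gt;

```hcl
# Hypothetical subnet group backing aws_elasticache_cluster.redis above --
# substitute private subnets from your own VPC.
resource "aws_elasticache_subnet_group" "redis_subnets" {
  name       = "redis-subnets"
  subnet_ids = ["subnet-aaaaaaaa", "subnet-bbbbbbbb"]
}
```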



&lt;ol start="5"&gt;
&lt;li&gt;Redis Security Group&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Allow ingress only from the bastion host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "redis_sg" {
  name   = "redis-sg"
  vpc_id = var.vpc_id

  ingress {
    from_port       = 6379
    to_port         = 6379
    protocol        = "tcp"
    security_groups = [aws_security_group.bastion_sg.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Accessing Redis from Localhost&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start the SSM session
On your local machine, use the AWS CLI to start a session:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ssm start-session \
  --target i-xxxxxxxxxxxxxxxxx \
  --profile your-aws-profile \
  --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
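&lt;p&gt;Alternatively, SSM can port-forward straight to Redis through the bastion, so redis-cli runs on your local machine. A sketch using the AWS-managed port-forwarding document (instance ID and endpoint are placeholders, so the command is printed rather than executed):&lt;/p&gt;

```shell
# Placeholders -- substitute your own instance ID and ElastiCache endpoint.
INSTANCE_ID="i-xxxxxxxxxxxxxxxxx"
REDIS_HOST="redis-dev.xxxxxx.use1.cache.amazonaws.com"

# AWS-StartPortForwardingSessionToRemoteHost tunnels a remote host:port
# to a local port over the SSM session.
CMD="aws ssm start-session \
  --target $INSTANCE_ID \
  --document-name AWS-StartPortForwardingSessionToRemoteHost \
  --parameters host=$REDIS_HOST,portNumber=6379,localPortNumber=6379"

# Printed rather than run, since the IDs above are placeholders:
echo "$CMD"
# Once the tunnel is up, locally: redis-cli -h 127.0.0.1 -p 6379
```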



&lt;ol start="2"&gt;
&lt;li&gt;Connect to Redis from the bastion
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;redis-cli -h redis-dev.xxxxxx.use1.cache.amazonaws.com -p 6379
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;redis-dev.xxxxxx.use1.cache.amazonaws.com:6379&amp;gt; ping
PONG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nice — Redis is alive 🎉&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bonus: Security Best Practices&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;✅ Never use 0.0.0.0/0 unless you’re testing — always restrict IPs or CIDRs.&lt;/p&gt;

&lt;p&gt;✅ Use private subnets for ElastiCache.&lt;/p&gt;

&lt;p&gt;✅ Limit egress/ingress traffic via security groups.&lt;/p&gt;

&lt;p&gt;✅ Prefer SSM over SSH for bastion access (no exposed ports).&lt;/p&gt;

&lt;p&gt;✅ Remove public IPs once everything is connected through VPC peering or VPNs.&lt;/p&gt;

</description>
      <category>redis</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deploying Redis and an ECS Service with websocket on AWS with Terraform</title>
      <dc:creator>Rodrigo Burgos</dc:creator>
      <pubDate>Wed, 02 Apr 2025 20:55:58 +0000</pubDate>
      <link>https://dev.to/burgossrodrigo/deploying-redis-and-an-ecs-service-with-websocket-on-aws-with-terraform-1bc</link>
      <guid>https://dev.to/burgossrodrigo/deploying-redis-and-an-ecs-service-with-websocket-on-aws-with-terraform-1bc</guid>
      <description>&lt;p&gt;Amazon Web Services (AWS) provides a robust infrastructure for deploying scalable applications. In this tutorial, we will walk through setting up an Amazon ElastiCache Redis cluster and an ECS (Fargate) service using Terraform.&lt;/p&gt;

&lt;p&gt;This setup includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Redis cluster for caching.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An ECS service running an API container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security groups to control network access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CloudWatch logging for monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before you begin, ensure you have the following installed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Terraform (latest version)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS CLI (configured with credentials)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A VPC with available subnets&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An ECR repository for your application image&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;1. Configuring the Redis Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, create an ElastiCache parameter group to customize Redis settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_elasticache_parameter_group" "custom_redis_parameter_group_dev" {
  name   = "custom-redis-parameter-group-dev"
  family = "redis7"

  parameter {
    name  = "maxmemory-policy"
    value = "allkeys-lru"
  }

  parameter {
    name  = "timeout"
    value = "3600"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, define a subnet group for Redis:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_elasticache_subnet_group" "redis_public_subnet_group_dev" {
  name       = "redis-public-subnet-group-dev"
  subnet_ids = [var.subnet_id_a, var.subnet_id_b]

  tags = {
    Name = "redis-public-subnet-group-dev"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create the Redis cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_elasticache_cluster" "redis_cluster_dev" {
  cluster_id           = "redis-cluster-dev"
  engine               = "redis"
  node_type            = "cache.t2.micro"
  num_cache_nodes      = 1
  parameter_group_name = aws_elasticache_parameter_group.custom_redis_parameter_group_dev.name
  port                 = 6379
  security_group_ids   = [aws_security_group.redis_dev_sg.id]
  subnet_group_name    = aws_elasticache_subnet_group.redis_public_subnet_group_dev.name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Configuring the ECS Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Define the ECS Task Definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecs_task_definition" "game_api_task" {
  family                   = "game-api-task-dev"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn
  task_role_arn            = aws_iam_role.ecs_task_role.arn

  container_definitions = jsonencode([
    {
      name      = "game_api_container_dev"
      image     = "${aws_ecr_repository.game_api_ecr_dev.repository_url}:latest"
      cpu       = 256
      memory    = 512
      essential = true

      portMappings = [
        {
          containerPort = 3000
          hostPort      = 3000
          protocol      = "tcp"
        }
      ]

      logConfiguration = {
        logDriver = "awslogs"
        options = {
          awslogs-group         = aws_cloudwatch_log_group.game_api_dev_log_group.name
          awslogs-region        = "us-east-1"
          awslogs-stream-prefix = "ecs"
        }
      }
    }
  ])
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a CloudWatch log group for logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_log_group" "game_api_dev_log_group" {
  name              = "/ecs/game-api-task-dev"
  retention_in_days = 7
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Configuring Security Groups&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Define the security group for the API Load Balancer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "api_load_balancer_dev_sg" {
  name        = "api-sg-dev"
  description = "Allow HTTP traffic"
  vpc_id      = var.vpc_id

  ingress {
    description = "Allow HTTP traffic"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allow HTTPS traffic"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allow API traffic"
    from_port   = 3000
    to_port     = 3000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "Allow all outbound traffic"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Define the security group for Redis:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "redis_dev_sg" {
  name        = "redis-sg-dev"
  description = "Security group for Redis cluster"
  vpc_id      = var.vpc_id

  ingress {
    description     = "Allow Redis access from ECS"
    from_port       = 6379
    to_port         = 6379
    protocol        = "tcp"
    # Restrict the source to your ECS service's security group
    security_groups = [aws_security_group.ecs_service_sg.id]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Implementing a WebSocket Gateway in NestJS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In real-time applications, WebSockets enable bidirectional communication between clients and servers. NestJS provides a built-in WebSocket module to facilitate this. Below, we define a WebSocket gateway using the @nestjs/websockets package.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting Up the WebSocket Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The MatchGateway class listens for specific messages and manages client connections.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {
  SubscribeMessage,
  WebSocketGateway,
  WebSocketServer,
  MessageBody,
  ConnectedSocket
} from "@nestjs/websockets";
import { Server, Socket } from "socket.io";
import { PlayerConnectionDto } from "./dto/player.dto";
import { SocketMessagesEnum } from "@/domain/enum/socket-messages";
import { AddPlayerToRoundUseCase } from "@/use-cases/player/add-player-to-round";

@WebSocketGateway({
  cors: {
    origin: "*",
    methods: ["GET", "POST"]
  },
  transports: ["websocket"]
})
export class MatchGateway {
  @WebSocketServer() server: Server;

  constructor(
    private readonly addPlayerToRoundUseCase: AddPlayerToRoundUseCase
  ) {}

  handleConnection(client: Socket): void {
    console.log("Client connected:", client.id);
  }

  @SubscribeMessage(SocketMessagesEnum.PLAYER_CONNECTION)
  async handlePlayerOn(
    @MessageBody() player: PlayerConnectionDto,
    @ConnectedSocket() client: Socket
  ): Promise&amp;lt;void&amp;gt; {
    try {
      const playerData = { id: player.id };
      client.emit(SocketMessagesEnum.PLAYER_CONNECTION, playerData);
    } catch (error) {
      console.error("Error in WebSocket handler:", error);
      client.emit(SocketMessagesEnum.ERROR, {
        message: `Error processing player connection: ${error.message}`
      });
    }
  }

  @SubscribeMessage(SocketMessagesEnum.CHART_DATA)
  async handleChartData(): Promise&amp;lt;void&amp;gt; {
    try {
      // Handle real-time chart data
    } catch (error) {
      console.error("Error in chart data handler:", error);
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Setting Up the WebSocket Adapter&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To ensure the WebSocket server runs smoothly within the NestJS application, an adapter is needed. This can be added in the bootstrap function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { IoAdapter } from "@nestjs/platform-socket.io";
import { NestFactory } from "@nestjs/core";
import { AppModule } from "./modules/app.module";

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useWebSocketAdapter(new IoAdapter(app));
  app.enableCors({ origin: "*", methods: ["GET", "POST"] });

  const port = process.env.PORT || 3000;
  await app.listen(port);
  console.log(`🚀 Server running on port ${port}`);
}

bootstrap();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
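&lt;p&gt;On the client side, the connection options have to match the gateway's &lt;code&gt;transports&lt;/code&gt; setting. A dependency-free sketch (the socket.io-client calls are only commented, and the names are mine):&lt;/p&gt;

```typescript
// Hypothetical client-side counterpart to the gateway above.
const clientOptions = {
  // Must match the gateway's `transports` setting, otherwise the
  // handshake falls back to long-polling and may be rejected.
  transports: ["websocket"],
};

// With socket.io-client installed, the connection would look like:
//   import { io } from "socket.io-client";
//   const socket = io("http://localhost:3000", clientOptions);
//   socket.emit("player_connection", { id: "player-1" });

console.log(clientOptions.transports.join(","));
```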



&lt;p&gt;&lt;strong&gt;Automating Deployment with GitHub Actions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To automate deployments, create a GitHub Actions workflow (.github/workflows/deploy.yml) with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy API to ECS

on:
  push:
    branches:
      - dev

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout the repository
        uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v3
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: "us-east-1"

      - name: Login to Amazon ECR
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push the Docker image to Amazon ECR
        env:
          # Define ECR_REPOSITORY (the full repository URI) in your
          # repository's Actions secrets or variables
          ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REPOSITORY:$IMAGE_TAG

      - name: Deploy to ECS
        env:
          # Likewise, define the cluster and service names as secrets/variables
          ECS_CLUSTER: ${{ secrets.ECS_CLUSTER }}
          ECS_SERVICE: ${{ secrets.ECS_SERVICE }}
        run: |
          aws ecs update-service --cluster $ECS_CLUSTER --service $ECS_SERVICE --force-new-deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>redis</category>
      <category>terraform</category>
      <category>ecs</category>
      <category>nestjs</category>
    </item>
    <item>
      <title>Why Deploying NestJS on AWS Lambda with Docker is a Nightmare</title>
      <dc:creator>Rodrigo Burgos</dc:creator>
      <pubDate>Tue, 25 Feb 2025 00:03:32 +0000</pubDate>
      <link>https://dev.to/burgossrodrigo/why-deploying-nestjs-on-aws-lambda-with-docker-is-a-nightmare-4gd8</link>
      <guid>https://dev.to/burgossrodrigo/why-deploying-nestjs-on-aws-lambda-with-docker-is-a-nightmare-4gd8</guid>
      <description>&lt;p&gt;🚨Do not even try to deploy your NestJS/Javascript app's on lambda through docker🚨 &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. AWS Lambda Was Not Designed for Heavy Docker Images&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;✅ What Lambda was designed for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Small, lightweight Node.js functions that start up in milliseconds.&lt;/li&gt;
&lt;li&gt;Short-lived execution with minimal dependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;❌ Why Docker breaks this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Even the most optimized NestJS Docker image is HUGE (~500MB+).&lt;/li&gt;
&lt;li&gt;Lambda pulls the container on every cold start, which can take seconds to minutes.&lt;/li&gt;
&lt;li&gt;Containerized apps have a higher startup time, defeating the purpose of Lambda's serverless scalability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. AWS Lambda + Prisma is a Disaster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;✅ How a normal NestJS server works:&lt;/p&gt;

&lt;p&gt;It runs continuously and keeps its dependencies in &lt;code&gt;node_modules&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;❌ Why Lambda + Docker makes cold starts worse:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every cold start initializes a fresh Docker container (~5–15s startup).&lt;/li&gt;
&lt;li&gt;Lambda must pull the container from AWS ECR, which adds even more latency.&lt;/li&gt;
&lt;li&gt;NestJS bootstrapping is slow because of its dependency injection system.&lt;/li&gt;
&lt;li&gt;If inside a VPC, cold starts can exceed 10–30 seconds (!).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Special point:&lt;/p&gt;

&lt;p&gt;The directory the &lt;code&gt;.prisma&lt;/code&gt; package ends up in is somewhat unpredictable. For some weird reason, this is where mine landed after running &lt;code&gt;prisma generate&lt;/code&gt; inside the &lt;code&gt;Dockerfile&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./node_modules/.pnpm/@prisma+client@6.2.1_prisma@6.2.1/node_modules/@prisma/client /app/node_modules/.prisma/client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I was only able to spot this because I was echoing the directory the library ended up in from GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. AWS Lambda’s Container Runtime Has Many Restrictions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;✅ How a regular NestJS Docker container works:&lt;/p&gt;

&lt;p&gt;You control everything: OS, networking, environment.&lt;/p&gt;

&lt;p&gt;❌ Why Lambda makes this impossible:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda’s container runtime restricts system calls (you can't run background tasks).&lt;/li&gt;
&lt;li&gt;The filesystem is read-only (except &lt;code&gt;/tmp&lt;/code&gt;), so Prisma’s binary caching breaks.&lt;/li&gt;
&lt;li&gt;Lambda has no long-lived storage unless you use EFS (which adds latency and complexity).&lt;/li&gt;
&lt;li&gt;Can’t run extra processes inside the container (e.g., you can't run a worker process in the background).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Debugging is a Nightmare&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;✅ Normal Docker deployments (ECS, Kubernetes) have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Persistent logs&lt;/li&gt;
&lt;li&gt;SSH access to debug issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;❌ Why AWS Lambda with Docker is a black box:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No real-time debugging (you can only check CloudWatch logs after execution).&lt;/li&gt;
&lt;li&gt;Lambda crashes without logs (Runtime.ExitError with no explanation).&lt;/li&gt;
&lt;li&gt;You have to log in to ECS and then access the container to get the logs inside it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cold starts kill performance.&lt;/li&gt;
&lt;li&gt;Prisma and databases don’t work well in Lambda.&lt;/li&gt;
&lt;li&gt;Docker images are too big for Lambda’s fast-scaling model.&lt;/li&gt;
&lt;li&gt;Debugging is painful.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have a huge NestJS app (and basically everything is huge in NestJS apps), put it behind an ALB on a container service or, if possible, break it into microservices. Avoid Prisma or any other huge library; you also can’t use some native cryptography libraries such as bcrypt.&lt;/p&gt;

</description>
      <category>nestjs</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Automate Testing and Release of npm Packages with GitHub Actions</title>
      <dc:creator>Rodrigo Burgos</dc:creator>
      <pubDate>Mon, 17 Feb 2025 12:16:52 +0000</pubDate>
      <link>https://dev.to/burgossrodrigo/automate-testing-and-release-of-npm-packages-with-github-actions-3jei</link>
      <guid>https://dev.to/burgossrodrigo/automate-testing-and-release-of-npm-packages-with-github-actions-3jei</guid>
      <description>&lt;p&gt;Still some working to be done, such as release control within github itself. Maybe i'll update this post later with these features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Creating an npm Authentication Token&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To publish your package from GitHub Actions, you need an authentication token:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to npmjs.com.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Log in and navigate to Access Tokens in your profile settings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Generate New Token with at least Publish access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy the generated token.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In your GitHub repository, go to Settings &amp;gt; Secrets and variables &amp;gt; Actions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click New repository secret, name it NPM_TOKEN, and paste the token.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Setting Up GitHub Actions Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a .github/workflows/test-and-publish.yml file in your repository with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Test and Publish

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout repository
      uses: actions/checkout@v3

    - name: Set up Node.js
      uses: actions/setup-node@v3
      with:
        node-version: '20'

    - name: Install dependencies
      run: yarn install --frozen-lockfile

    - name: Run tests
      run: yarn test

  publish:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main' &amp;amp;&amp;amp; github.event_name == 'push'

    steps:
    - name: Checkout repository
      uses: actions/checkout@v3

    - name: Set up Node.js
      uses: actions/setup-node@v3
      with:
        node-version: '20'
        registry-url: 'https://registry.npmjs.org/'

    - name: Install dependencies
      run: yarn install --frozen-lockfile

    - name: Publish to npm
      run: yarn publish --access public
      env:
        NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
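&lt;p&gt;One caveat: publishing fails when the version in package.json is already on npm. A guard along these lines (a sketch; the helper name and the version lookup are assumptions, not part of the workflow above) lets the job skip instead of fail:&lt;/p&gt;

```shell
# Sketch: compare the local version against the published one before publishing.
# In CI you would obtain the two values roughly like:
#   local_version=$(node -p "require('./package.json').version")
#   published_version=$(npm view "$pkg" version)
should_publish() {
  local_version="$1"
  published_version="$2"
  # publish only when the versions differ
  [ "$local_version" != "$published_version" ]
}
```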



&lt;p&gt;&lt;strong&gt;Explanation of Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Test job:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Runs on every push and pull request to main.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Checks out the repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sets up Node.js version 20.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Installs dependencies using yarn install --frozen-lockfile to ensure dependency consistency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Runs tests using yarn test.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Publish job:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Runs only if tests pass and the event is a push to main.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensures a clean checkout of the repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sets up Node.js with the npm registry.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Installs dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Publishes the package using yarn publish --access public.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Committing and Running the Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Commit and push the .github/workflows/test-and-publish.yml file to your repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub Actions will trigger a test run.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the tests pass and the branch is main, the package will be published to npm automatically.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>node</category>
      <category>githubactions</category>
      <category>npm</category>
    </item>
    <item>
      <title>Automating AWS API Gateway Deployment with GitHub Actions and OpenAPI 3.0</title>
      <dc:creator>Rodrigo Burgos</dc:creator>
      <pubDate>Fri, 14 Feb 2025 18:57:32 +0000</pubDate>
      <link>https://dev.to/burgossrodrigo/automating-aws-api-gateway-deployment-with-github-actions-and-openapi-30-33gd</link>
      <guid>https://dev.to/burgossrodrigo/automating-aws-api-gateway-deployment-with-github-actions-and-openapi-30-33gd</guid>
      <description>&lt;p&gt;In this guide, we'll walk through setting up a CI/CD pipeline using GitHub Actions to deploy an AWS API Gateway with an OpenAPI 3.0 specification. This automation will streamline updates to your API Gateway whenever changes are pushed to the dev branch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before proceeding, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An AWS account with API Gateway permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An API Gateway REST API already created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An OpenAPI 3.0 definition file (openapi.json) in your repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub Secrets configured with:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;AWS_ACCESS_KEY_ID&lt;/li&gt;
&lt;li&gt;AWS_SECRET_ACCESS_KEY&lt;/li&gt;
&lt;li&gt;AWS_REGION&lt;/li&gt;
&lt;li&gt;API_GATEWAY_ID (referenced by the workflow below)&lt;/li&gt;
&lt;/ol&gt;
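&lt;p&gt;A missing secret otherwise surfaces as a confusing AWS CLI error late in the job. A sketch of a fail-fast check you could add as the first step (the helper name is illustrative, not part of any AWS tooling):&lt;/p&gt;

```shell
# Sketch: fail early if a required variable is empty or unset,
# instead of letting the AWS CLI fail mid-deploy.
require_env() {
  for name in "$@"; do
    eval "val=\${$name:-}"
    if [ -z "$val" ]; then
      echo "missing required env: $name"
      return 1
    fi
  done
}
```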

&lt;p&gt;&lt;strong&gt;GitHub Actions Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a new workflow file in your repository at .github/workflows/deploy-api-gateway.yml and add the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy API Gateway

on:
  push:
    branches:
      - dev
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v3
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Update API Gateway with OpenAPI definition
        run: |
          aws apigateway put-rest-api \
            --rest-api-id ${{ secrets.API_GATEWAY_ID }} \
            --mode overwrite \
            --body fileb://openapi.json

      - name: Deploy API Gateway
        run: |
          aws apigateway create-deployment \
            --rest-api-id ${{ secrets.API_GATEWAY_ID }} \
            --stage-name dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation of Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Trigger Conditions: The workflow runs on a push to the dev branch or when manually triggered.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Checkout Repository: Clones the repository so we can access the OpenAPI file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure AWS Credentials: Uses the GitHub Secrets to authenticate with AWS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update API Gateway: Uploads the latest OpenAPI 3.0 definition to API Gateway.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy API Gateway: Creates a new deployment in the dev stage.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
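&lt;p&gt;A cheap local sanity check on openapi.json before the deploy step fails faster than a rejected AWS call. A minimal grep-based sketch (a dedicated OpenAPI validator would be stricter; this only catches files missing the top-level keys API Gateway needs):&lt;/p&gt;

```shell
# Sketch: minimal structural check of openapi.json before calling AWS.
# This only verifies the two required top-level keys are present;
# use a real OpenAPI validator for full schema validation.
validate_openapi() {
  file="$1"
  grep -q '"openapi"' "$file" || return 1
  grep -q '"paths"' "$file"
}
```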

&lt;p&gt;&lt;strong&gt;Benefits of Automation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensures API Gateway stays in sync with the OpenAPI definition.&lt;/li&gt;
&lt;li&gt;Eliminates manual updates, reducing errors.&lt;/li&gt;
&lt;li&gt;Provides a clear version history through GitHub Actions logs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By implementing this workflow, your API updates will be deployed automatically, allowing for a more efficient development process.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Systems Manager (SSM) to perform Prisma operations on a closed RDS instance on github actions</title>
      <dc:creator>Rodrigo Burgos</dc:creator>
      <pubDate>Wed, 12 Feb 2025 19:12:05 +0000</pubDate>
      <link>https://dev.to/burgossrodrigo/aws-systems-manager-ssm-to-perform-prisma-operations-on-a-closed-rds-instance-on-github-actions-1hoe</link>
      <guid>https://dev.to/burgossrodrigo/aws-systems-manager-ssm-to-perform-prisma-operations-on-a-closed-rds-instance-on-github-actions-1hoe</guid>
      <description>&lt;p&gt;Prerequisites&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS RDS Instance: Your RDS instance must accept connections from the bastion EC2 instance’s security group; once the tunnel is up, the database is reached via localhost on the forwarded port.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SSM Agent: Ensure that your EC2 instance (acting as the bastion host) has the SSM agent installed and running.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IAM Roles: The IAM role associated with your EC2 instance must have the necessary permissions to use AWS Systems Manager (SSM) and access RDS resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;VPC Security Group: Your EC2 instance should have the right security group and routing configured to connect to the RDS instance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Prepare your environment with Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Make sure your EC2 instance has the required IAM role attached with the necessary permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_role" "role_acesso_ssm" {
  assume_role_policy    = "{\"Statement\":[{\"Action\":\"sts:AssumeRole\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"ec2.amazonaws.com\"}}],\"Version\":\"2012-10-17\"}"
  managed_policy_arns   = [
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  ]
  name                  = "role-acesso-ssm"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This role ensures the EC2 instance can perform operations on SSM and connect to the necessary resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Enable port forwarding with SSM on GitHub Actions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once your EC2 instance has the necessary IAM roles and SSM agent installed, you'll set up port forwarding using AWS Systems Manager. Port forwarding allows you to connect to a closed RDS instance through the bastion host without opening its security group.&lt;/p&gt;

&lt;p&gt;Start an SSM Session to forward the port (e.g., port 5432 for PostgreSQL) from the bastion host to the RDS instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSTANCE_ID=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=my-bastion-host" --query "Reservations[0].Instances[0].InstanceId" --output text)

aws ssm start-session --target $INSTANCE_ID \
  --document-name AWS-StartPortForwardingSessionToRemoteHost \
  --parameters '{"host":["my-rds-instance.rds.amazonaws.com"],"portNumber":["5432"],"localPortNumber":["5432"]}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command establishes a secure tunnel through the EC2 instance to RDS, so the machine running the session (your workstation or the CI runner) can reach the database at localhost:5432.&lt;/p&gt;
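&lt;p&gt;In a GitHub Actions job, the session has to run in the background, and the next step must wait for the tunnel to come up. A sketch of what that step could look like (the runner also needs the AWS Session Manager plugin installed; the instance tag and host names are the illustrative ones from above):&lt;/p&gt;

```yaml
- name: Open SSM tunnel to RDS
  run: |
    INSTANCE_ID=$(aws ec2 describe-instances \
      --filters "Name=tag:Name,Values=my-bastion-host" \
      --query "Reservations[0].Instances[0].InstanceId" --output text)
    aws ssm start-session --target "$INSTANCE_ID" \
      --document-name AWS-StartPortForwardingSessionToRemoteHost \
      --parameters '{"host":["my-rds-instance.rds.amazonaws.com"],"portNumber":["5432"],"localPortNumber":["5432"]}' &amp;amp;
    # wait until the forwarded port accepts connections (up to 30s)
    for i in $(seq 1 30); do
      if nc -z localhost 5432; then break; fi
      sleep 1
    done
```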

&lt;p&gt;&lt;strong&gt;Step 3: Set up environment variables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You’ll need a database connection string stored in your GitHub secrets so Prisma can reach the RDS instance through the tunnel. Note that the host is localhost, because the SSM session forwards the port locally.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;"postgresql://username:password@localhost:5432/my_database"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Perform Prisma operations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that port forwarding is in place, you can perform Prisma operations against the closed RDS instance. Build-time steps, such as generating the client, go in your Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Generate Prisma Client
RUN pnpm prisma generate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important Notes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security: Ensure your IAM roles and permissions are scoped tightly to avoid unnecessary exposure of sensitive resources.&lt;/li&gt;
&lt;li&gt;Port Forwarding: If the RDS instance is closed, port forwarding via SSM is a great way to establish a secure tunnel without exposing the database publicly.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>github</category>
      <category>ssm</category>
      <category>prisma</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
