DEV Community

Rodrigo Burgos

Running Anchor Tests on GitHub Actions Without Losing Your Mind

Setting up anchor test on GitHub Actions sounds like a weekend task. It took me a full week of debugging to get it right. Here's
everything I ran into and how I fixed it, so you don't have to.

The context

I'm building an open source cross-chain bridge between Ethereum and Solana. The Solana side uses Anchor 0.31.1. At some point I
needed CI — integration tests running automatically on every push, no manual anchor test on my machine.

Simple enough, right?

Problem 1: Agave 3.x and io_uring

The first approach: install the Solana CLI inside the CI runner directly.

  - run: sh -c "$(curl -sSfL https://release.anza.xyz/stable/install)"

stable at the time of writing resolves to Agave 3.1.x. The install completes fine. Then anchor test starts the local validator and
immediately panics:

  thread 'main' panicked at fs/src/dirs.rs:27:9:
  assertion failed: io_uring_supported()

Agave 3.x added a hard dependency on io_uring, a Linux kernel feature for async I/O. GitHub Actions runners run on kernels where
io_uring is either disabled or not available.
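Before committing to a version, you can probe whether the runner's kernel exposes io_uring at all. A minimal sketch (the io_uring_disabled sysctl only exists on kernels 6.6+, so its absence alone doesn't prove support either way):

```shell
# Probe io_uring availability. On kernels >= 6.6 the sysctl reports:
# 0 = enabled, 1 = disabled for unprivileged processes, 2 = fully disabled.
if [ -r /proc/sys/kernel/io_uring_disabled ]; then
  echo "io_uring_disabled = $(cat /proc/sys/kernel/io_uring_disabled)"
else
  echo "io_uring_disabled sysctl not present (kernel < 6.6, or io_uring compiled out)"
fi
```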

Fix: use Agave 2.x. v2.1.21 is the latest in the 2.1 series and works without io_uring.


  ENV SOLANA_VERSION=v2.1.21
  RUN sh -c "$(curl -sSfL https://release.anza.xyz/${SOLANA_VERSION}/install)"

Problem 2: GLIBC mismatch

Next issue: the anchor binary itself.

avm install 0.31.1 downloads a pre-built binary from GitHub releases. That binary was compiled on a system with GLIBC 2.38/2.39.
Ubuntu 22.04 ships with GLIBC 2.35.

   /root/.avm/bin/anchor-0.31.1: /lib/x86_64-linux-gnu/libm.so.6:
     version `GLIBC_2.38' not found

Two fixes needed:

  1. Switch the base image to Ubuntu 24.04 (ships with GLIBC 2.39)
  2. Compile anchor from source instead of using avm. A binary compiled inside the container will always be compatible with the container's GLIBC.

  FROM ubuntu:24.04

  RUN cargo install \
    --git https://github.com/coral-xyz/anchor \
    --tag v0.31.1 anchor-cli --locked

This makes the build slower (adds ~5 min to the image build), but the image is built once and cached. Runtime CI stays fast.
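When you hit this class of error, it helps to confirm the mismatch from inside the container. A quick check (the commented objdump line is optional and assumes binutils is installed):

```shell
# Which GLIBC does the running image ship? A prebuilt anchor binary that
# references a newer GLIBC_* symbol version than this one will fail to load.
ldd --version | head -n 1

# To list the symbol versions a specific binary needs (requires binutils):
#   objdump -T ~/.avm/bin/anchor-0.31.1 | grep -o 'GLIBC_[0-9.]*' | sort -uV
```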

Problem 3: Container environment quirks

When GitHub Actions runs a job inside a container, it overrides some environment variables. Two things break silently:

cargo can't find its toolchain:

error: rustup could not choose a version of cargo to run,
because one wasn't specified explicitly, and no default is configured.

Fix: explicitly set CARGO_HOME and RUSTUP_HOME in the job env:

  env:
    CARGO_HOME: /root/.cargo
    RUSTUP_HOME: /root/.rustup

solana-keygen writes to the wrong place:

The keypair must exist at ~/.config/solana/id.json. Inside a GitHub Actions container, HOME is /github/home, not /root. If you
hardcode /root/.config/solana/id.json it won't be found by Anchor.

  - name: Generate keypair
    run: |
      mkdir -p $HOME/.config/solana
      solana-keygen new \
        --outfile $HOME/.config/solana/id.json \
        --no-bip39-passphrase --force

Problem 4: Test validator startup time

The local validator takes longer to start in a CI runner than on a dev machine. Without configuring a startup wait, anchor test
will fail with:

Unable to get latest blockhash. Test validator does not look started.

Add this to your Anchor.toml:

  [test]
  startup_wait = 60000
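If you'd rather not guess at a startup_wait value, an alternative is to poll the validator's RPC health endpoint yourself before running tests. A sketch, assuming curl is available and the validator listens on the default port 8899 (wait_for_validator is my own helper, not an Anchor feature):

```shell
# Poll the local validator's /health RPC endpoint until it answers,
# instead of trusting a fixed startup wait.
wait_for_validator() {
  url="${1:-http://127.0.0.1:8899/health}"
  tries="${2:-60}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -sf "$url" >/dev/null 2>&1; then
      echo "validator ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "validator did not come up within ${tries}s" >&2
  return 1
}
```

This only applies if you start solana-test-validator yourself; when anchor test manages the validator, startup_wait stays the right knob.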

Problem 5: Event listeners hang forever

This was the hardest one to diagnose. Tests that use program.addEventListener work fine locally but hang indefinitely in CI:

  const eventPromise = new Promise<any>((resolve) => {
    const listenerId = program.addEventListener("TokenSent", (event) => {
      program.removeEventListener(listenerId);
      resolve(event);
    });
  });

  await program.methods.bridgeSend(...).rpc();
  const event = await eventPromise; // hangs forever in CI

The reason: addEventListener opens a WebSocket subscription to the local validator. In containerized environments the
subscription is established, but log notifications are never delivered, so the promise never resolves. Anchor's scaffolded test
script runs Mocha with a 1,000,000 ms timeout, so the test runner just sits there.

Fix: fetch the transaction logs directly after rpc() and parse the events from them.

  import { BorshCoder } from "@coral-xyz/anchor";
  import IDL from "../target/idl/bridge.json";

  const eventCoder = new BorshCoder(IDL as any).events;

  async function getConfirmedTx(sig: string) {
    for (let i = 0; i < 10; i++) {
      const tx = await provider.connection.getTransaction(sig, {
        commitment: "confirmed",
        maxSupportedTransactionVersion: 0,
      });
      if (tx) return tx;
      await new Promise((r) => setTimeout(r, 1000));
    }
    throw new Error(`Transaction ${sig} not found after retries`);
  }

  function parseEvents(logMessages: string[]) {
    return logMessages
      .filter((log) => log.startsWith("Program data: "))
      .map((log) => {
        try {
          return eventCoder.decode(log.slice("Program data: ".length));
        } catch {
          return null;
        }
      })
      .filter(Boolean);
  }

Then in the test:

  const sig = await program.methods.bridgeSend(...).rpc();
  const tx = await getConfirmedTx(sig);
  const events = parseEvents(tx.meta.logMessages);
  const event = events.find((e) => e.name === "TokenSent")?.data;

  assert.ok(event, "TokenSent event not found");
  assert.equal(event.amount.toNumber(), 100_000);

Two notes on this approach:

  • getTransaction can return null briefly after rpc() returns, even after confirmation — hence the retry loop.
  • Use new BorshCoder(IDL) directly rather than program.coder. In some container configurations program.coder.events.decode returns null even when the discriminator matches. Loading the IDL directly is reliable.

Also add "resolveJsonModule": true to your tsconfig.json to enable the JSON import.
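For reference, that flag sits under compilerOptions; the rest of your tsconfig stays as-is:

```json
{
  "compilerOptions": {
    "resolveJsonModule": true
  }
}
```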

The pre-built image

To avoid reinstalling all of this on every CI run, I packaged it into a Docker image:

docker pull burgossrodrigo/anchor-build:0.31.1

Full integration.yml example:

name: Integration
on: [push]

jobs:
  solana:
    runs-on: ubuntu-latest
    container:
      image: burgossrodrigo/anchor-build:0.31.1

    env:
      CARGO_HOME: /root/.cargo
      RUSTUP_HOME: /root/.rustup

    steps:
      - uses: actions/checkout@v4

      - name: Cache build artifacts
        uses: actions/cache@v4
        with:
          path: contracts/solana/target
          key: ${{ runner.os }}-anchor-${{ hashFiles('contracts/solana/Cargo.lock') }}

      - name: Generate keypair
        run: |
          mkdir -p $HOME/.config/solana
          solana-keygen new \
            --outfile $HOME/.config/solana/id.json \
            --no-bip39-passphrase --force

      - name: Fix blake3 compatibility
        working-directory: contracts/solana
        run: cargo update -p blake3 --precise 1.8.2

      - name: Fix indexmap compatibility
        working-directory: contracts/solana
        run: cargo update -p indexmap --precise 2.11.4

      - name: Run tests
        working-directory: contracts/solana
        run: anchor test


The blake3 and indexmap pins are required because cargo build-sbf uses a bundled Rust (1.79) that is incompatible with their
latest versions — blake3 1.8.3+ introduced edition2024, and indexmap 2.12+ has an MSRV higher than 1.79.
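To catch a lockfile that has drifted back off the pins (for example after a later blanket cargo update), a small grep-based sanity check works. check_pin is my own helper; it relies on Cargo.lock always placing the version line directly after the name line:

```shell
# Verify a crate is pinned at the expected version in Cargo.lock.
# Cargo.lock entries always put `version = "..."` on the line after
# `name = "..."`, so a two-line grep window is enough.
check_pin() {
  crate="$1"
  want="$2"
  if grep -A 1 "name = \"$crate\"" Cargo.lock | grep -q "version = \"$want\""; then
    echo "$crate pinned at $want"
  else
    echo "$crate is NOT at $want" >&2
    return 1
  fi
}

# Usage, from contracts/solana after the pin steps:
#   check_pin blake3 1.8.2
#   check_pin indexmap 2.11.4
```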

Source

The full project — bridge contracts, backend relayer, CI setup — is open source:

github.com/burgossrodrigo/token_bridge
