Hassann • Originally published at apidog.com

Ghostty Is Leaving GitHub: What It Means for Developer-Tool Builders

On April 28, 2026, Mitchell Hashimoto announced that Ghostty, his open-source terminal emulator, is leaving GitHub. Hashimoto is GitHub user 1299; he joined in February 2008 and has used the platform almost daily for more than 18 years. The trigger was not pricing, features, or politics. It was reliability: he had been keeping a journal in which "almost every day has an X" for GitHub outages, and on the announcement day a GitHub Actions failure blocked his PR reviews for two hours.

If you build developer tools, this is worth treating as an implementation warning. Hashimoto co-founded HashiCorp on top of GitHub and shipped tools like Terraform, Vagrant, Vault, Consul, and Boundary through it. When that type of user moves a project off GitHub because the platform blocks daily work, the lesson is not “pick a different forge.” The lesson is: design your own toolchain so one provider outage does not stop your users from shipping.

For broader context on AI-era developer tooling around GitHub workflows, see how to write AGENTS.md files, GitHub Copilot usage and billing API for teams, and the Clawsweeper triage bot writeup.

TL;DR

  • Mitchell Hashimoto announced on April 28, 2026 that Ghostty will leave GitHub for an unnamed forge.
  • His reason was chronic GitHub Actions and platform outages, including a two-hour Actions failure that blocked PR reviews on the announcement day.
  • Ghostty will keep a read-only GitHub mirror. Active development will move incrementally.
  • His other projects stay on GitHub for now.
  • The important takeaway is reliability, not forge preference.
  • If your developer tool depends on GitHub, another API provider, or a single CI platform, build and test a fallback path before you need it.
  • For API teams, the practical pattern is: mock dependencies, test against multiple providers, and keep release infrastructure portable. Apidog supports that style of workflow.

What Hashimoto actually announced

In the announcement post, Hashimoto says Ghostty is leaving GitHub because the platform had become too unreliable for his workflow.

The key details:

  • He had been tracking GitHub outages in a personal journal.
  • The journal showed a pattern, not a one-off incident.
  • On the morning of the post, GitHub Actions blocked PR reviews for two hours.
  • The April 27, 2026 outage affected Actions, Packages, and API surfaces.
  • The Ghostty repository will remain on GitHub as a read-only mirror.
  • Issues, pull requests, CI, and active development will move elsewhere.
  • Hashimoto had not yet named the destination forge.
  • The migration will be incremental.

The important omission: he did not complain about missing features. He did not frame this as a pricing problem. He framed it as an operational dependency that stopped working too often.

Why this matters for developer-tool builders

If your product sits in a developer’s critical path, reliability eventually beats feature velocity.

A developer can tolerate a missing dashboard. They cannot tolerate a tool that blocks:

  • PR review
  • CI execution
  • release publishing
  • API testing
  • package deployment
  • incident response
  • customer hotfixes

The Hashimoto post is a useful stress test for your own product:

Could a power user keep a private journal showing that your tool blocks them several times per week?

If yes, you have a reliability problem even if your monthly uptime looks acceptable. Even 99.9% monthly uptime permits roughly 43 minutes of downtime, and if those minutes keep landing in the middle of someone's release window, the tool is effectively broken for them.

Run this reliability audit on your own stack

Use the announcement as a prompt to audit your own platform.

1. List every critical dependency

Create a table like this:

| Dependency | Used for | If unavailable for 4 hours | Fallback exists? |
| --- | --- | --- | --- |
| GitHub Actions | CI/release | Cannot ship | Partial |
| GitHub API | PR automation | Bot fails | No |
| Package registry | Distribution | Cannot publish | No |
| LLM provider | AI feature | Feature degraded | Yes |
| Payment API | Billing | New purchases fail | Manual |

Focus on services that block developers from continuing work.

2. Identify hard imports

Look for code like this:

import { Octokit } from "@octokit/rest";

const github = new Octokit({ auth: process.env.GITHUB_TOKEN });

export async function createIssue(title: string, body: string) {
  return github.rest.issues.create({
    owner: "org",
    repo: "repo",
    title,
    body,
  });
}

This is easy to write, but it couples your product directly to one provider.

Wrap it behind an interface instead:

export interface IssueTracker {
  createIssue(input: {
    title: string;
    body: string;
    labels?: string[];
  }): Promise<{ id: string; url: string }>;
}

Then implement provider adapters:

export class GitHubIssueTracker implements IssueTracker {
  constructor(private client: Octokit) {}

  async createIssue(input: {
    title: string;
    body: string;
    labels?: string[];
  }) {
    const res = await this.client.rest.issues.create({
      owner: process.env.GITHUB_OWNER!,
      repo: process.env.GITHUB_REPO!,
      title: input.title,
      body: input.body,
      labels: input.labels,
    });

    return {
      id: String(res.data.id),
      url: res.data.html_url,
    };
  }
}

Later, you can add a GitLab, Forgejo, or internal adapter without rewriting product logic.
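
To make the swap operational, one option is a small factory that selects the adapter from configuration, so core logic never names a forge. A minimal sketch, where ISSUE_TRACKER is a hypothetical environment variable:

import { Octokit } from "@octokit/rest";

// Hypothetical factory: pick the adapter from configuration so product
// code depends only on the IssueTracker interface.
export function createIssueTracker(): IssueTracker {
  const provider = process.env.ISSUE_TRACKER ?? "github";
  switch (provider) {
    case "github":
      return new GitHubIssueTracker(
        new Octokit({ auth: process.env.GITHUB_TOKEN })
      );
    // case "forgejo": return new ForgejoIssueTracker(...); // added later
    default:
      throw new Error(`Unknown issue tracker provider: ${provider}`);
  }
}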

3. Measure reliability against user workflow, not uptime

A service can have good aggregate uptime and still be unusable during critical windows.

Track:

  • failed CI runs per workday
  • blocked release attempts
  • delayed PR reviews
  • API latency during peak customer usage
  • queue backlog during business hours
  • manual retries required by users

For developer tools, “available” should mean:

The user can complete the workflow they opened the tool to complete.
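
A minimal sketch of that metric, with illustrative event names rather than anything from the announcement:

// Workflow-level availability: the fraction of started workflows the
// user actually finished, computed per workflow type.
type WorkflowEvent = {
  workflow: string;   // e.g. "pr-review", "release"
  startedAt: number;  // epoch milliseconds
  completed: boolean; // did the user finish what they came to do?
};

export function workflowAvailability(
  events: WorkflowEvent[],
  workflow: string
): number {
  const relevant = events.filter((e) => e.workflow === workflow);
  if (relevant.length === 0) return 1; // no attempts, nothing was blocked
  const completed = relevant.filter((e) => e.completed).length;
  return completed / relevant.length;
}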

Lock-in is bigger than Git history

Git repositories are portable. Developer workflows are not.

A full GitHub-native workflow may include:

  • repositories
  • issues
  • pull requests
  • review history
  • Actions workflows
  • secrets
  • environments
  • releases
  • packages
  • Discussions
  • CODEOWNERS
  • branch protection rules
  • GitHub OAuth identity
  • Marketplace distribution

You can clone a repository with:

git clone --mirror git@github.com:org/project.git

But that does not migrate issue history, PR metadata, CI secrets, or release automation.

That is the lock-in surface.
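
You can shrink part of it incrementally. As one partial mitigation, the GitHub CLI can snapshot issue metadata to JSON; the field list below is one reasonable selection, not a complete export:

# Partial mitigation: snapshot issue metadata (does not capture PR review
# threads, CI secrets, or release automation)
gh issue list --repo org/project --state all --limit 1000 \
  --json number,title,body,labels,state,createdAt > issues-backup.json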

Add a second-forge mirror

A low-cost starting point is to mirror active repositories to another forge.

Example:

git clone --mirror git@github.com:org/project.git
cd project.git
git remote add backup git@codeberg.org:org/project.git
git push --mirror backup

To automate this in CI:

name: Mirror repository

on:
  schedule:
    - cron: "0 */12 * * *"   # every 12 hours
  workflow_dispatch:

jobs:
  mirror:
    runs-on: ubuntu-latest
    steps:
      # Assumes an SSH key with read access to the primary and push access
      # to the backup remote has been loaded onto the runner (for example
      # with an ssh-agent action and a repository secret).
      - name: Clone GitHub repo
        run: |
          git clone --mirror git@github.com:org/project.git repo.git

      - name: Push mirror
        run: |
          cd repo.git
          git remote add backup git@codeberg.org:org/project.git
          git push --mirror backup

This does not solve issue or PR migration, but it gives you a working source backup when your primary forge is unavailable.
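
It is worth verifying the mirror on a schedule rather than assuming the job succeeded. A quick check:

# Sanity check: confirm the backup remote is reachable and compare its
# HEAD against the primary
git ls-remote git@codeberg.org:org/project.git HEAD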

Keep your release path portable

If your release pipeline only runs on one hosted CI system, that system owns your ability to ship.

At minimum, document a manual path:

npm ci
npm test
npm run build
npm version patch
npm publish

For compiled projects:

make clean
make test
make build
make release

Then make sure required credentials are available outside the primary CI provider.
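
For npm, that can be as simple as a maintainer-held token written to a local .npmrc at publish time. A sketch, assuming the token lives in a password manager rather than only in a CI secret store:

# Manual-publish path: the token comes from a password manager
# (1Password CLI shown as one example; any secret store works)
export NPM_TOKEN="$(op read 'op://releases/npm/token')"
echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" >> ~/.npmrc
npm publish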

A practical release-readiness checklist:

  • Can a maintainer build locally from a clean checkout?
  • Can tests run without GitHub Actions?
  • Can artifacts be signed outside hosted CI?
  • Can packages be published manually?
  • Are release credentials stored somewhere other than one CI secret store?
  • Is the manual process documented and tested quarterly?

If the answer is no, your release pipeline is not resilient.

Forge alternatives to evaluate

Hashimoto did not name Ghostty’s destination. The credible options as of late April 2026 include:

  • Forgejo: FOSS forge, hard fork of Gitea, maintained by Codeberg e.V.
  • Codeberg: hosted Forgejo instance run as a nonprofit.
  • GitLab: mature CI/CD and broad GitHub-like feature coverage.
  • Sourcehut: minimalist, email-driven workflow.
  • Self-hosted Forgejo or Gitea: maximum control with operational burden.
  • Radicle: peer-to-peer model, promising but still early for many public projects.

None is a perfect drop-in replacement for GitHub’s full surface area. That is the point: the more one platform owns, the harder it is to exit.

What API teams should do differently

For API teams, replace “GitHub Actions” with any upstream API your product depends on.

The question is the same:

Can your users keep working when an upstream service is unavailable?

Three patterns help.

1. Mock upstream APIs

Your local development loop and CI should not require every upstream provider to be online.

For example, instead of calling a live API in every test:

const res = await fetch("https://api.example.com/v1/messages", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ message: "hello" }),
});

Run the same request against a mock in development:

const baseUrl =
  process.env.API_BASE_URL ?? "http://localhost:4010";

const res = await fetch(`${baseUrl}/v1/messages`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.API_KEY ?? "mock-key"}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ message: "hello" }),
});

Apidog supports mock servers from API definitions, so the same schema used for documentation and testing can also power local development.

For a related multi-provider API example, see how to use the GPT-5.5 API.

2. Test against multiple providers

If your product wraps several similar APIs, run the same contract tests against each provider.

Example configuration:

# Provider profile: live upstream (API_KEY elided)
PROVIDER=openai
API_BASE_URL=https://api.openai.com/v1
API_KEY=...

# Provider profile: local mock
PROVIDER=mock
API_BASE_URL=http://localhost:4010
API_KEY=mock-key

Your test should not care which provider is active:

describe("chat completion contract", () => {
  it("returns a message", async () => {
    const res = await fetch(`${process.env.API_BASE_URL}/chat/completions`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "default",
        messages: [{ role: "user", content: "Say hello" }],
      }),
    });

    expect(res.status).toBeLessThan(500);

    const body = await res.json();
    expect(body).toHaveProperty("choices");
  });
});

The goal is not perfect provider equivalence. The goal is to detect breakage before customers do.
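
In CI, a matrix keeps this cheap: the same suite runs once per provider profile. A sketch in GitHub Actions syntax, assuming the test runner loads the matching environment profile (the .env file convention is hypothetical):

name: Contract tests

on: [push]

jobs:
  contract:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false   # a failing provider should not mask the others
      matrix:
        provider: [openai, mock]
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      # Assumes the test runner reads .env.<provider> to populate
      # API_BASE_URL and API_KEY, matching the profiles above.
      - run: PROVIDER=${{ matrix.provider }} npm test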

3. Separate API design from API availability

A resilient API workflow should support:

  • schema design
  • documentation
  • mock responses
  • contract testing
  • environment switching
  • live debugging

without requiring the production upstream to be online.

That way, an upstream outage degrades integration testing but does not stop all development.
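
A sketch of that switch in code, assuming a /healthz endpoint on the upstream (hypothetical) and a schema-driven mock on port 4010:

// Degradation switch: prefer the real upstream, fall back to the local
// mock when the health check fails or times out.
async function resolveBaseUrl(): Promise<string> {
  const upstream = process.env.API_BASE_URL ?? "https://api.example.com";
  try {
    const res = await fetch(`${upstream}/healthz`, {
      signal: AbortSignal.timeout(2000), // Node 17.3+ / modern runtimes
    });
    if (res.ok) return upstream;
  } catch {
    // unreachable or timed out; fall through to the mock
  }
  return "http://localhost:4010";
}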

An Apidog-style resilient API workflow

A practical workflow looks like this:

  1. Download Apidog.
  2. Create one project per upstream API surface.
  3. Define request and response schemas once.
  4. Generate a mock server from those schemas.
  5. Store credentials as environment-scoped variables.
  6. Run the same request against dev, staging, and prod.
  7. Add contract tests for the API behaviors your product requires.
  8. Run those tests before every release.
  9. When an upstream is degraded, switch development and CI to the mock environment.
  10. Keep shipping non-blocked work.

The same pattern applies whether the upstream is an AI provider, payments API, Git hosting API, or internal service.

Practical checklist for your own stack

Use this as a short implementation plan.

  • Mirror important repositories to a second forge.
  • Wrap provider SDKs behind interfaces.
  • Avoid hard-coding GitHub, GitLab, or provider-specific assumptions in core logic.
  • Store release credentials outside a single CI platform.
  • Keep a documented manual release path.
  • Mock every external API used in tests.
  • Run contract tests against mock, staging, and production-like environments.
  • Track reliability by user workflow, not just aggregate uptime.
  • Review incident trends monthly.
  • Test your fallback path before an outage forces you to use it.
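
One way to make the last item routine is a scheduled CI job that exercises the fallback path on a calendar, not during an incident. A sketch (Prism is shown as one example of a schema-driven mock; openapi.yaml is an assumed schema file):

name: Fallback drill

on:
  schedule:
    - cron: "0 9 1 * *"   # first of each month
  workflow_dispatch:

jobs:
  drill:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: |
          # Start the mock in the background and give it time to boot
          npx @stoplight/prism-cli mock openapi.yaml --port 4010 &
          sleep 5
      - run: API_BASE_URL=http://localhost:4010 API_KEY=mock-key npm test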

For an API-tooling-specific example, see building durable workflows that survive provider outages, which applies the same pattern with DeepSeek and OpenAI as dual-provider examples.

FAQ

Where is Ghostty moving?

Hashimoto did not name the destination in the announcement. He said discussions are ongoing with multiple commercial and FOSS providers. The migration will be incremental, and the current GitHub repository will remain as a read-only mirror.

Is GitHub unreliable enough to justify leaving?

GitHub’s headline uptime can still be competitive while specific workflows suffer repeated disruption. Hashimoto’s complaint was about the accumulated pattern of partial outages, especially around Actions and related developer workflows.

Should every project leave GitHub?

Not necessarily. Most teams should start with mirroring and fallback planning before making a primary-forge migration. Moving issues, PR history, CI, secrets, and release automation is real work.

Does this affect GitHub Actions or Copilot?

The immediate trigger in Hashimoto’s post was a GitHub Actions failure. Copilot is a separate product surface. If your team uses Copilot, see GitHub Copilot usage and billing API for teams.

What does this mean for tools built on GitHub APIs?

Review bots, issue triage tools, MCP servers, and release automation inherit GitHub’s reliability profile. Cache aggressively, fail open where possible, and mock GitHub API behavior in tests. The Clawsweeper triage bot writeup shows one implementation pattern.

What is the main lesson?

If developers rely on your tool to ship, reliability is a product feature. Design every critical dependency as replaceable, test the replacement path, and do it before your users start keeping outage journals.
