When I want to see what developers are actually shipping, GitHub's trending page is a reliable signal. Scrolling through the Go monthly trending board this month surfaced four projects that each deserve a closer look. All four are written in Go, and each tackles a very different problem.
Here's the current monthly ranking for the Go language:
| # | Repository | Stars | Stars This Month | What It Does |
|---|---|---|---|---|
| 1 | QuantumNous/new-api | 28,276 | 5,970 | Unified AI model hub |
| 2 | Wei-Shaw/sub2api | 14,394 | 6,822 | AI API subscription sharing |
| 3 | steipete/wacli | 2,034 | 1,348 | WhatsApp CLI |
| 4 | maximhq/bifrost | 4,169 | 1,076 | Enterprise AI gateway |
Here's what each project does and why it's climbing this month.
1. Bifrost: An Enterprise AI Gateway Built in Go
Repository: maximhq/bifrost
Stars: 4,169 | Forks: 485 | License: Apache 2.0
Bifrost is a high-throughput AI gateway, written in Go, that exposes a single OpenAI-compatible API fronting more than 15 LLM providers. What pulled it onto my radar were the performance figures. Per-request overhead sits at roughly 11 microseconds, and the gateway sustains 5,000 RPS. That puts it around 50x faster than Python-based alternatives like LiteLLM, a gap that teams evaluating gateway options tend to notice quickly.
Key features:
- Multi-provider routing covering automatic failover and weighted load balancing
- Semantic caching using a dual-layer design (exact-hash match plus semantic similarity through Weaviate)
- MCP integration featuring Code Mode, which the project's published benchmarks report can cut token usage by as much as 92.8% at scale
- Budget hierarchy enforced at four levels: Customer, Team, Virtual Key, and Provider Config
- Zero-config deployment through `npx @maximhq/bifrost` or Docker
The architectural decisions matter here. Choosing Go gives the gateway predictable tail latency: Go's concurrent garbage collector keeps pauses short even under sustained load. Three deployment shapes are supported: an HTTP gateway, a Go SDK, or a drop-in SDK replacement for existing OpenAI or Anthropic client libraries.
Who it is for: Teams operating multiple LLM providers in production that need lightweight routing, cost controls, and observability without stacking extra latency into every call.
Links: Docs | GitHub | Website
2. new-api: A Self-Hosted Model Hub
Repository: QuantumNous/new-api
Stars: 28,276 | Forks: 5,915 | License: AGPLv3
new-api tops the monthly Go board by star count. It works as a centralized gateway that aggregates multiple LLM vendors (OpenAI, Azure, Claude, Gemini, DeepSeek, Qwen, and more) and exposes them through standardized relay interfaces.
Key features:
- Bidirectional format translation across OpenAI, Claude, and Gemini APIs
- Token grouping with model-level restrictions and role-based access control
- A dashboard for real-time usage analytics and billing
- Docker deployment that works against SQLite, MySQL, or PostgreSQL backends
- A multi-language UI covering Chinese, English, French, and Japanese
- Redis support for distributed deployments
Development velocity is high: the project has more than 5,600 commits. Streaming APIs are supported with configurable timeouts, and reasoning-model handling is built in.
Who it is for: Teams that want a self-hosted LLM proxy paired with a full admin dashboard and cross-vendor format conversion.
Links: GitHub
3. sub2api: Sharing AI API Subscriptions Across Users
Repository: Wei-Shaw/sub2api
Stars: 14,394 | Forks: 2,488
sub2api approaches the problem from a different angle. Rather than simply proxying API requests, it is designed around pooling and sharing paid AI subscriptions (Claude, OpenAI, Gemini) behind a unified access layer that includes billing.
Key features:
- Multi-account management with OAuth and API key authentication
- API key distribution and lifecycle handling
- Token-level billing that calculates cost with precision
- Account scheduling with sticky sessions
- Per-user and per-account concurrency limits
- A built-in payment system covering Alipay, WeChat Pay, and Stripe
- An administrative dashboard
The stack is Go 1.25 with the Gin framework and the Ent ORM on the backend, and Vue 3, Vite, and TailwindCSS on the frontend. PostgreSQL 15+ and Redis 7+ are required.
Who it is for: Organizations that want to share paid AI subscriptions across multiple users with fine-grained billing and access policies.
Links: GitHub
4. wacli: A WhatsApp Command-Line Interface
Repository: steipete/wacli
Stars: 2,034 | Forks: 241
This one stands out from the rest. wacli is a full command-line client for WhatsApp, built on top of the whatsmeow library that implements the WhatsApp Web protocol. It was created by Peter Steinberger, a widely known iOS developer.
Key features:
- Local message history sync with continuous capture
- Offline search backed by SQLite with FTS5 full-text indexing
- Sending text, quoted replies, and files with captions
- Contact and group management
- Human-readable table output by default, with JSON available for scripting
- A read-only mode that prevents accidental mutations
- QR-code authentication
Installation is simple: grab it from Homebrew or build from source with `go build -tags sqlite_fts5`.
Who it is for: Developers and power users who want programmatic WhatsApp access from the terminal for automation, search, or scripting tasks.
Links: GitHub
Why Go Keeps Showing Up at the Top
That all four of these projects are written in Go is not a coincidence. The language's concurrency model, single-binary deployment, compact memory footprint, and predictable behavior under load make it a natural fit for infrastructure tooling.
Nowhere is that clearer than in AI gateway workloads. When you are proxying thousands of LLM calls every second, every microsecond of added overhead compounds. Python-based options like LiteLLM add milliseconds of latency per request; Go-based gateways such as Bifrost keep that overhead in the microsecond range, as their published performance benchmarks document in detail.
The CNCF project ecosystem reflects the same pattern. Kubernetes, Prometheus, and the majority of cloud-native infrastructure tools are built in Go; even Envoy, itself written in C++, is typically driven by Go-based control planes.
Wrapping Up
Two clear patterns surface from this month's Go trending list. First, AI infrastructure dominates the category. Second, Go continues to be the language developers reach for when they build it. Whether the need is an enterprise AI gateway with sub-millisecond overhead, a self-hosted model hub, a subscription sharing platform, or a WhatsApp CLI, the Go ecosystem now ships a production-ready option in each category.
If you want the full list, the GitHub Go trending page is one click away.
Data sourced from GitHub Trending as of April 22, 2026. Star counts and rankings change daily.