Siddharth Singh

Posted on • Originally published at arvoai.ca

Open-Source AI SRE: Aurora vs HolmesGPT vs K8sGPT (2026)

Key Takeaways

  • Three credible open-source AI SREs exist in 2026: Aurora (Arvo AI), HolmesGPT (Robusta + Microsoft, CNCF Sandbox), and K8sGPT (CNCF Sandbox). All three are Apache 2.0.
  • Only one is a true multi-step agent. HolmesGPT runs an iterative ReAct loop. K8sGPT is a rule-based scanner that uses an LLM only to explain findings. Aurora is a multi-step LangGraph agent with cross-cloud execution.
  • Only Aurora handles multi-cloud out of the box (AWS, Azure, GCP, OVH, Scaleway, plus Kubernetes). HolmesGPT covers Kubernetes plus 30+ observability integrations. K8sGPT is Kubernetes-only.
  • Both Aurora and HolmesGPT can open remediation pull requests: Aurora against GitHub and Bitbucket behind a human approval gate, HolmesGPT via its Operator mode. K8sGPT is strictly read-only with no write actions.
  • All three support BYO LLM, including local inference via Ollama for air-gapped deployments — the differentiator over commercial AI SREs.

Of the 46+ companies offering "AI SRE" products in 2026, only a handful are open source, and only three are credible enough to deploy in production: Aurora, HolmesGPT, and K8sGPT. They get lumped together in marketing, but architecturally these three are different products solving different parts of the incident response problem.

This guide compares them on the things that actually matter: agent architecture, execution model, integration scope, and where you can deploy them. By the end, you should be able to pick the right one for your stack — or know whether you need all three.

What is an open-source AI SRE?

An open-source AI SRE is an AI agent that performs site reliability engineering work — alert triage, incident investigation, root cause analysis, remediation — under a permissive license that allows self-hosting, source-code audit, and modification. Three properties are non-negotiable:

  1. License: Apache 2.0, MIT, or equivalent. Source-available licenses (BSL, SSPL) do not count for most production teams.
  2. Self-hostable: runs entirely inside your environment without phoning home to a vendor.
  3. LLM-driven: uses large language models, not just static rules or regex. (This is what separates "AI SRE" from older AIOps tools.)

The reason this category matters: incident data is some of the most sensitive telemetry an organization produces. Self-hosted, auditable AI is the only model that works for regulated industries, air-gapped environments, or any team that doesn't want production telemetry leaving their perimeter.

Why open source matters for AI SRE

Three reasons buyers in 2026 are explicitly asking for open-source AI SRE:

  • Data sovereignty. Incident telemetry includes log lines, configuration values, deployment IDs, and sometimes payloads. SaaS AI SREs send all of it to their backend and to a third-party LLM. Self-hosted means it stays in your VPC.
  • Audit transparency. Regulators and security teams want to know exactly what the agent does on production systems. Source code answers that question; vendor marketing does not.
  • Cost predictability. Per-user or per-incident pricing can balloon quickly. Open-source costs scale with infrastructure and LLM tokens — and Ollama-local inference can flatten the LLM bill entirely.

The trade-off is real: you operate the system yourself. For teams already operating Kubernetes and observability stacks, that's marginal effort. For teams without that operational maturity, a commercial AI SRE is often the right call.

How the three compare

This is the only table you need. Verified from each project's GitHub repo, official docs, and source as of May 2026.

| Dimension | Aurora | HolmesGPT | K8sGPT |
|---|---|---|---|
| License | Apache 2.0 | Apache 2.0 | Apache 2.0 |
| GitHub stars | 201 | 2,366 | 7,737 |
| Latest release | v1.1.1 (Mar 2026) | 0.26.0 (Apr 2026) | v0.4.32 (Apr 2026) |
| CNCF status | Independent | Sandbox (Oct 2025) | Sandbox |
| Built by | Arvo AI | Robusta + Microsoft | k8sgpt-ai community |
| Agent architecture | LangGraph supervisor + sub-agents | ReAct loop (ToolCallingLLM) | Rule-based scanner + LLM explainer |
| Multi-step reasoning | Yes | Yes | No (single-shot per analyzer) |
| Cloud providers | AWS, Azure, GCP, OVH, Scaleway | Kubernetes + AWS via MCP | Kubernetes only |
| Kubernetes execution | kubectl in sandboxed pods | Read-only kubectl get/describe | Read-only via Kube API |
| Other integrations | 22+ (PagerDuty, Datadog, Grafana, Slack, Confluence, Bitbucket, Jenkins, etc.) | 30+ toolsets (Prometheus, Grafana, Datadog, Loki, Jira, etc.) | None (Kubernetes-only by design) |
| Knowledge base / RAG | Weaviate vector search over runbooks + postmortems | Yes (via toolsets) | No |
| Dependency graph | Memgraph (cross-cloud blast radius) | No | No |
| Postmortem generation | Yes, exports to Confluence | Investigation reports only | No |
| Pull request remediation | GitHub + Bitbucket with human approval gate | GitHub PRs in Operator mode | None (strictly read-only) |
| MCP server | Yes (340+ endpoints, 6 named tools) | Yes (consumes MCP servers) | No |
| LLM providers | OpenAI, Anthropic, Google, Vertex, OpenRouter, Ollama | OpenAI, Anthropic, Azure OpenAI, Bedrock, Gemini, Vertex, Ollama | OpenAI, Azure, Cohere, Bedrock, SageMaker, Gemini, Vertex, HuggingFace, WatsonX, LocalAI, Ollama |
| Air-gapped support | Yes (Ollama + image tarballs) | Yes (Ollama) | Yes (LocalAI / Ollama) |
| Deployment | Docker Compose or Helm | Binary, API server, K8s Operator, Python SDK | Go binary, K8s operator |

The OSS AI SRE Maturity Spectrum

A useful way to position these tools is on a four-level spectrum of agent capability. Each level is strictly more capable than the one below — and each requires more architectural work to deploy safely.

| Level | What the agent does | Tools at this level |
|---|---|---|
| L1 — Diagnostic Explainer | Reads system state, finds anomalies via deterministic rules, uses an LLM only to explain findings in natural language. No multi-step reasoning. Strictly read-only. | K8sGPT |
| L2 — Read-Only Investigator | Runs an iterative ReAct loop. Picks tools dynamically. Investigates across multiple data sources (metrics, logs, traces, K8s state). Read-only by design. | HolmesGPT |
| L3 — Investigation + Suggestion | Everything in L2, plus opens pull requests with suggested fixes. Humans review and merge. No autonomous writes to infrastructure. | HolmesGPT (Operator mode), Aurora |
| L4 — Investigation + Approved Remediation | Everything in L3, plus can execute approved remediation actions (rollbacks, restarts, scale changes) inside guardrails, typically a sandboxed runtime with explicit human approval for destructive operations. | Aurora (human approval gate for destructive actions) |

No open-source tool today operates as a fully autonomous L5 (closed-loop remediation without human approval) — and that's by design. Most serious teams want explicit gates before agents touch production.

Aurora vs HolmesGPT — which should you choose?

Aurora and HolmesGPT are the two genuinely agentic options. The choice depends on your blast radius.

Pick HolmesGPT when:

  • Your stack is heavily Kubernetes + Prometheus + Grafana and your incidents live there.
  • You want a tool that already integrates with 30+ observability sources, including Loki, AlertManager, NewRelic, Datadog APM, OpsGenie, and Slack.
  • You value CNCF governance and a fast-moving ecosystem.
  • You don't need out-of-the-box reasoning across cloud providers (AWS APIs, Azure resources, GCP services).

Pick Aurora when:

  • You operate across multiple clouds (AWS + Azure, GCP + AWS, etc.) and need an agent that can correlate incidents across providers.
  • You want auto-generated postmortems exported to Confluence.
  • You want the agent to draft remediation PRs against your codebase.
  • You need a graph-based blast radius model (Memgraph) for dependency analysis.
  • You want an MCP server so your IDE assistants (Cursor, Claude Desktop, Windsurf) can query live incident state.

In practice, some teams run both: HolmesGPT for in-cluster Kubernetes triage, Aurora for cross-cloud investigation and postmortem generation.

Aurora vs K8sGPT — which should you choose?

This is closer to "which tool category do you need?" than a head-to-head.

Pick K8sGPT when:

  • You want the absolute simplest entry point to AI for Kubernetes — a single Go binary you can install with Homebrew and run as k8sgpt analyze --explain.
  • Your needs stop at "explain why this pod is broken" rather than multi-step incident investigation.
  • You want the maturity of a 7.7k-star CNCF Sandbox project with rule-based analyzers that won't hallucinate causes (because they are deterministic before the LLM ever sees them).
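That entry point really is small. A sketch of a first run, assuming Homebrew and a kubeconfig already pointing at the target cluster (the model choice here is illustrative, not a recommendation):

```shell
# Install the single-binary CLI
brew install k8sgpt

# Register an LLM backend once (prompts for the API key)
k8sgpt auth add --backend openai --model gpt-4o-mini

# Scan the cluster and have the LLM explain each finding
k8sgpt analyze --explain

# Optionally mask resource names and labels before they reach the LLM
k8sgpt analyze --explain --anonymize
```

The `--anonymize` flag is the privacy feature discussed later in this article: findings are masked before leaving the cluster.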

Pick Aurora when:

  • You need agentic investigation, not just diagnostic explanation.
  • You operate beyond Kubernetes — cloud APIs, Terraform, monitoring tools, runbooks.
  • You want auto-generated postmortems and remediation PRs.

These two are complements, not competitors. Many teams run K8sGPT as a lightweight first-line scanner and Aurora (or HolmesGPT) for full incident investigation.

HolmesGPT vs K8sGPT — head-to-head

Despite both being CNCF Sandbox projects targeting Kubernetes, these are different categories.

| Aspect | HolmesGPT | K8sGPT |
|---|---|---|
| What it is | Multi-step AI agent | Rule-based scanner with LLM explanations |
| When it shines | Investigating an alert end-to-end across signals | Diagnosing why a specific resource is unhealthy |
| Latency | Seconds to minutes (multi-step) | Sub-second per analyzer |
| LLM cost | Higher (multiple calls per investigation) | Lower (one explanation per finding) |
| Hallucination risk | Higher (agent reasons across signals) | Lower (deterministic before LLM) |
| Best fit | On-call engineers handling alerts | Platform teams running periodic cluster audits |

K8sGPT's anonymization feature (which masks resource names and labels before sending to the LLM) is a meaningful privacy advantage that HolmesGPT does not match.

When NOT to use open-source AI SRE

Honest take: open-source AI SRE is the right answer for most engineering-led, security-conscious teams. It's the wrong answer when:

  • You don't have the operational capacity to run another stateful service in production.
  • You want vendor support with SLAs and a phone number to call at 3 AM.
  • Your team is small enough that the LLM-API bill of an investigation-heavy agent will exceed the per-seat price of a SaaS AI SRE.
  • You need certifications (SOC2, ISO 27001) at the AI-vendor layer rather than at the cloud-provider layer.

How to pilot an open-source AI SRE in your team

A six-step, low-risk pilot for any of the three tools:

  1. Pick one cluster and one observability source. Don't try to cover everything at once.
  2. Install in read-only mode first. All three tools default to read-only — keep it that way for the first two weeks.
  3. Connect one alert source. PagerDuty, Datadog, or Grafana — pick the one that's already firing real alerts.
  4. Run for two weeks alongside human on-call. Compare the agent's RCA conclusions to what your engineers determined. Track accuracy and time-to-RCA.
  5. Feed it your historical context. Aurora and HolmesGPT both support runbook + postmortem ingestion. Agents become dramatically more useful with organizational memory.
  6. Expand carefully. Add more clusters, then enable remediation suggestions, then (only after trust) approved automated actions for specific low-risk patterns.
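For step 2, a read-only first install can be as small as a single Helm release. A sketch using the K8sGPT operator (chart repo and namespace follow the k8sgpt docs; adjust for your cluster):

```shell
# Add the K8sGPT chart repository and install the operator
# into its own namespace
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm install k8sgpt-operator k8sgpt/k8sgpt-operator \
  --namespace k8sgpt-operator-system --create-namespace
```

The operator only watches and reports at this point; nothing writes to your workloads, which is exactly the posture you want for the two-week shadow period.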

Getting started with Aurora

Aurora is the multi-cloud, multi-tool option among open-source AI SREs. To run it:

```shell
git clone https://github.com/Arvo-AI/aurora.git
cd aurora
make init
make prod-prebuilt
```

Aurora supports any LLM provider — OpenAI, Anthropic, Google, OpenRouter, or local models via Ollama for air-gapped deployments.
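For fully local inference, a typical Ollama setup looks like the sketch below (the model name is illustrative; point Aurora's Ollama provider at the resulting endpoint per its docs):

```shell
# Start the local Ollama server if it is not already running as a service
ollama serve &

# Pull a model while you still have network access; for air-gapped
# deployments, copy the model into the isolated environment afterwards
ollama pull llama3.1

# Smoke-test the native REST endpoint (default port 11434)
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.1", "prompt": "healthcheck", "stream": false}'
```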

For the technical side of running an agent that executes kubectl against production, see the companion piece on AI agent kubectl safety and sandboxed execution.

