How to Install Codex CLI: The Complete Official Setup Guide
TL;DR: Installing Codex CLI takes one command (`npm install -g @openai/codex`), but configuration via `~/.codex/config.toml` is where the real complexity lies: model selection, approval modes, sandbox levels, MCP servers, and per-project AGENTS.md files that shape prompts. This guide covers installation paths (npm, Homebrew, binary release), authentication (ChatGPT vs. API key), configuration essentials, and routing to OfoxAI or compatible providers.
What Codex CLI Is in May 2026
Codex CLI is OpenAI's terminal-native coding agent. You type natural-language instructions; it reads the repository, plans changes, and edits files in place—comparable to Claude Code but built around the GPT-5.3-Codex model family. A recent rewrite moved the core from Node.js to a Rust binary distributed through an npm wrapper, cutting cold-start time to roughly one second and moving configuration into a dedicated TOML file.
Three critical considerations for new installations:
- Default model is GPT-5.3-Codex, a coding-optimized variant of GPT-5.3 listed in OfoxAI's model catalog alongside the GPT-5.4 family. For non-coding tasks—planning, documentation rewrites, prompt engineering—switch via `/model` to GPT-5.4 or GPT-5.4 Pro.
- Sandbox is enabled by default. Codex 0.12x starts in `workspace-write` mode, permitting file edits within the repository but blocking writes to home directories and network-egress shells without approval. This is a sound security posture; do not disable it just because a tutorial tells you to.
- AGENTS.md is the leverage point. A concise, specific `AGENTS.md` file at the repository root changes model behavior more than any `--model` flag.
For comparisons with other terminal agents—Claude Code, Cursor, Cline—consult the coding tools comparison guide.
System Requirements
| Requirement | Minimum |
|---|---|
| Node.js | 22 LTS (24 recommended) |
| OS | macOS 13+, Linux (glibc 2.31+ or musl), Windows via WSL2 |
| Disk | ~80 MB for binary plus node_modules wrapper |
| Auth | ChatGPT account (Plus/Pro/Business/Edu/Enterprise) or OpenAI-compatible API key |
Native Windows is not officially supported as of this writing. Use WSL2 with an Ubuntu or Debian distribution—most CI examples in the documentation assume a POSIX shell.
Verify Node version before installing:
node --version
# v22.x.x or higher
If yours is older, upgrade via nvm or your distro's package manager. The wrapper refuses to install on Node 18 or below.
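If you script your setup, the same check can gate the install step. A minimal POSIX-sh sketch (the `nvm install 22` hint is just one way to upgrade; use whatever version manager you prefer):

```shell
# Gate installation on the Node major version the npm wrapper requires.
REQUIRED_MAJOR=22
if command -v node >/dev/null 2>&1; then
  NODE_MAJOR="$(node --version | sed 's/^v\([0-9]*\).*/\1/')"
else
  NODE_MAJOR=0   # node not installed at all
fi

if [ "$NODE_MAJOR" -ge "$REQUIRED_MAJOR" ]; then
  echo "Node major version $NODE_MAJOR - OK to install"
else
  echo "Node too old or missing - try: nvm install 22 && nvm use 22"
fi
```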
Installation Methods
1. npm (Recommended for Most Users)
npm install -g @openai/codex
codex --version
This is the canonical install path documented at developers.openai.com/codex/cli. It downloads the platform-appropriate Rust binary during postinstall, so the apparent "node module" is mostly a launcher. Updates flow through npm:
npm install -g @openai/codex@latest
If `codex: command not found` appears after install, the global npm bin directory is not on your PATH. Run `npm prefix -g` to locate the global install root—the codex binary lives in `<prefix>/bin`—and add that directory to your shell profile.
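That fix can be scripted. A sketch that falls back to `/usr/local` when npm is absent, purely so it runs anywhere:

```shell
# Put the global npm bin directory on PATH for the current session.
# (Persist it by appending the export line to your shell profile.)
if command -v npm >/dev/null 2>&1; then
  NPM_PREFIX="$(npm prefix -g)"
else
  NPM_PREFIX="/usr/local"   # fallback so this sketch runs without npm
fi
export PATH="$NPM_PREFIX/bin:$PATH"
echo "global binaries (including codex) resolve from: $NPM_PREFIX/bin"
```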
2. Homebrew (macOS Cask)
brew install --cask codex
codex --version
The cask wraps the same Rust binary OpenAI publishes on its GitHub releases page. Choose this route if you already manage tools through Homebrew and don't need a Node toolchain. Updates: `brew upgrade --cask codex`.
3. Binary Release (No Toolchain)
For CI runners, restricted environments, or air-gapped systems, obtain prebuilt binary from github.com/openai/codex/releases/latest:
- macOS Apple Silicon: `codex-aarch64-apple-darwin.tar.gz`
- macOS Intel: `codex-x86_64-apple-darwin.tar.gz`
- Linux x86_64 musl: `codex-x86_64-unknown-linux-musl.tar.gz`
- Linux arm64 musl: `codex-aarch64-unknown-linux-musl.tar.gz`
Extract the archive, move `codex` into `/usr/local/bin` or `~/.local/bin`, run `chmod +x` on it, and you're done. No Node installation required.
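Those steps look like this as a script. To stay runnable anywhere, this sketch builds a stand-in tarball locally instead of downloading; on a real machine you would first fetch the matching asset from the releases page:

```shell
set -e
WORKDIR="$(mktemp -d)"
BIN_DIR="$WORKDIR/bin"        # stand-in for ~/.local/bin
mkdir -p "$BIN_DIR"

# Stand-in for the downloaded codex-<target>.tar.gz release asset.
printf '#!/bin/sh\necho codex stand-in\n' > "$WORKDIR/codex"
tar -czf "$WORKDIR/codex.tar.gz" -C "$WORKDIR" codex
rm "$WORKDIR/codex"

# The actual install steps: extract, copy into a PATH directory, mark executable.
tar -xzf "$WORKDIR/codex.tar.gz" -C "$WORKDIR"
install -m 755 "$WORKDIR/codex" "$BIN_DIR/codex"
"$BIN_DIR/codex"              # prints: codex stand-in
```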
First-Run Authentication
The first `codex` invocation prompts for authentication. There are two options.
Option A: Sign In With ChatGPT
codex
Codex opens a browser tab; you log into ChatGPT; the CLI receives an OAuth token cached at `~/.codex/auth.json`. For existing ChatGPT Plus, Pro, Business, Edu, or Enterprise subscribers this is the most economical path—Codex usage draws on the subscription, up to OpenAI's published monthly limit for the tier.
This approach requires no API key and is OpenAI's official recommendation for individual developers.
Option B: API Key
export OPENAI_API_KEY=sk-...
codex
Use this when:
- You want to route through a different provider (see "Routing Through OfoxAI" below).
- A CI environment prevents interactive browser login.
- You need per-call billing precision instead of subscription usage.
The CLI can also read `OPENAI_API_KEY` via `~/.codex/config.toml` if you'd rather not keep the secret in your shell environment.
The Config File Most Quickstarts Skip
Codex looks for `~/.codex/config.toml` on every run. If the file is absent, it falls back to defaults. Here is a working starter file with the keys that matter:
# ~/.codex/config.toml
model = "gpt-5.3-codex"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
[model_providers.openai]
name = "OpenAI"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"]
Three controls warrant explanation.
approval_policy — Three Values
- `untrusted` — Codex prompts before every shell command and file edit. Maximum safety, slower pace.
- `on-request` — Codex auto-runs sandboxed commands and requests approval only for sandbox escapes (network egress, writes outside the workspace). The default; appropriate for most repositories.
- `never` — No prompts. Pair with a restrictive sandbox or accept the risk. Note that certain 0.12x builds do not fully honor `never` in MCP tool paths—verify on your version before relying on it for unattended execution.
sandbox_mode — Three Levels
- `read-only` — Codex reads files and runs no-op commands but cannot edit. Suitable for "explain this codebase" sessions.
- `workspace-write` — The default. Edits within the current workspace; no network, no writes outside the repository.
- `danger-full-access` — Bypasses all sandboxing. Codex may execute `rm -rf /` if it chooses. Reserve for throwaway containers or VMs.
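The two controls compose. One conservative pairing for unattended "explain the codebase" runs, using only keys already present in the starter config above:

```toml
# ~/.codex/config.toml — no prompts, but Codex can only read
approval_policy = "never"
sandbox_mode    = "read-only"
```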
MCP Servers
Codex implements the Model Context Protocol, so you can connect filesystem servers, search tools, database connectors, and anything else in the MCP ecosystem. Each server gets its own `[mcp_servers.NAME]` table with a `command` and `args`. MCP servers configured for Claude Code work identically—see the Claude Code configuration guide for examples.
AGENTS.md — The Highest-Leverage File in Your Repo
Put AGENTS.md at the project root. Codex reads it every session and treats it as a persistent system-level instruction.
# Project: payments-service
- TypeScript strict mode. No `any`.
- Tests live next to source: `foo.ts` + `foo.test.ts`.
- Run `pnpm test` before suggesting a commit.
- Never touch `migrations/*.sql` — those are reviewed manually.
- DB queries go through `src/db/client.ts`. Do not import `pg` directly elsewhere.
Two practical guidelines:
- Keep it concise — under ~150 lines. Codex reads it every turn; sprawling files dilute the signal and burn context budget.
- Write rules, not lore. "Never touch migrations" is a rule. "We adopted hexagonal architecture in 2024" is lore—put it in the README.
Global rules can also live at `~/.codex/AGENTS.md`, applying across every project (e.g., "always use double quotes in shell", "prefer ripgrep over grep"). A project-level AGENTS.md supersedes the global file where they conflict.
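Bootstrapping the global file is a one-liner; a sketch using the example rules above:

```shell
# Create a minimal global AGENTS.md that applies to every project.
mkdir -p "$HOME/.codex"
cat > "$HOME/.codex/AGENTS.md" <<'EOF'
- Always use double quotes in shell.
- Prefer ripgrep (rg) over grep.
EOF
echo "wrote $HOME/.codex/AGENTS.md"
```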
Routing Through OfoxAI (or Any Compatible Provider)
To share one API key across Codex CLI, Claude Code, Cursor, and personal scripts—or to bypass direct OpenAI billing in regions where it is unreliable—point Codex at an OpenAI-compatible gateway.
Two env vars in shell profile:
export OPENAI_API_KEY=<your_ofoxai_key>
export OPENAI_BASE_URL=https://api.ofox.ai/v1
Alternatively, declare as named provider in ~/.codex/config.toml:
model = "openai/gpt-5.3-codex"
model_provider = "ofox"
[model_providers.ofox]
name = "OfoxAI"
base_url = "https://api.ofox.ai/v1"
env_key = "OFOX_API_KEY"
Then `export OFOX_API_KEY=...`; any existing `OPENAI_API_KEY` stays intact for non-Codex tools.
Why do this? Three reasons developers cite:
- One key, every model. The same key works for GPT-5.3-Codex, Claude Opus 4.7, Gemini 3.1 Pro, and DeepSeek V4—useful if you switch models often. See the API aggregation overview.
- Regional reach. OfoxAI maintains working endpoints where direct api.openai.com access proves intermittent.
- Cost shape. Pay per token across providers instead of per-vendor subscriptions. The cost reduction guide covers the math.
For advanced routing setup (env vs. config, model prefixes, billing dashboard), consult the Codex CLI API configuration guide.
Verifying the Install
Quick smoke test sequence:
# 1. CLI is on PATH
codex --version
# 2. Auth is wired
codex login status
# 3. Sandbox works
cd /tmp && mkdir codex-test && cd codex-test
codex "create a Python script that prints fibonacci numbers up to 100"
ls -la
If `codex login status` reports "Not logged in" after a previous successful authentication, `~/.codex/auth.json` is missing or unreadable—typically a permissions issue from running codex as root and then again as a regular user.
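A quick way to confirm the permissions theory before re-authenticating; this read-only sketch classifies the state of the cached token without touching it:

```shell
# Classify the cached token file; present-but-unreadable usually means
# root ownership or stripped permissions from a mixed sudo/user history.
if [ -r "$HOME/.codex/auth.json" ]; then
  AUTH_STATE="readable"
elif [ -e "$HOME/.codex/auth.json" ]; then
  AUTH_STATE="present-but-unreadable"   # fix ownership/permissions
else
  AUTH_STATE="missing"                  # run 'codex' to authenticate again
fi
echo "auth.json: $AUTH_STATE"
```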
Updating and Uninstalling
Update:
npm install -g @openai/codex@latest # npm path
brew upgrade --cask codex # Homebrew path
Pin a version (recommended for CI):
npm install -g @openai/[email protected]
Uninstall:
npm uninstall -g @openai/codex
rm -rf ~/.codex # removes auth, config, history
~/.codex directory contains auth token, config, session history, and cached MCP server state. Delete only for clean slate.
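If there is any chance you'll want the auth token or session history back, archive the directory before wiping it. A sketch (it creates the directory first purely so the demo runs anywhere):

```shell
# Archive ~/.codex before a clean wipe.
mkdir -p "$HOME/.codex"    # ensure it exists for the demo
tar -czf "$HOME/codex-backup.tar.gz" -C "$HOME" .codex
echo "saved $HOME/codex-backup.tar.gz"
```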
Troubleshooting the First Hour
"Command not found: codex" — npm global bin lacks PATH inclusion. Run npm prefix -g and add <that path>/bin to shell profile (npm bin -g was removed in npm 9+).
"Cannot find module …" on first run — Node version too old. Upgrade to Node 22+.
Browser auth loop — the default browser is not opening the OpenAI consent page. Run `codex login --device-auth` to switch to the OAuth device-code flow: Codex prints a URL and a one-time code you can paste into any browser (including on a different machine), with no local browser handoff.
Invalid API key after switching providers — `OPENAI_API_KEY` still points at OpenAI while `OPENAI_BASE_URL` points at OfoxAI (or vice versa). Both env vars must match the same provider, or use a named `[model_providers.X]` config block to avoid the conflict entirely.
Model not found errors — when routing through a non-OpenAI provider, model IDs need a vendor prefix: use `openai/gpt-5.3-codex` instead of `gpt-5.3-codex`. The same applies when pointing Codex at non-OpenAI models like Claude or Gemini through a gateway.
approval_policy ignored — several 0.12x builds have open bugs around approval-policy and sandbox enforcement. Pin to the latest stable release and report regressions upstream rather than disabling the sandbox.
What to Do Next
With a working Codex CLI, three follow-ups are worth reading:
- Daily workflow — The Codex CLI real-world coding workflow guide covers the AGENTS.md + plan-mode + git-worktree loop senior developers use daily.
- Model picking — Codex defaults to GPT-5.3-Codex, but some tasks call for different models. The model comparison guide frames the GPT vs Claude vs Gemini decision.
- Multi-tool setup — Running Cursor, Cline, or Claude Code alongside? The unified custom API setup guide shows how to wire all four against one gateway.
One thing every Codex quickstart omits: the dangerous default is not `sandbox_mode = "danger-full-access"`—that one is at least honest about itself. The dangerous default is leaving `OPENAI_API_KEY` exported in your shell while running codex in a repo whose AGENTS.md says "feel free to read secrets from .env to debug." Treat the install as the first ten percent of setup; the config file and AGENTS.md are the remaining ninety.
Originally published on ofox.ai/blog.