## TL;DR
OpenAI redefined Codex from a coding assistant to an always-on productivity agent: 3M weekly users, 50% of them already using it for non-coding tasks. Six new features, including self-scheduling automations and background computer use, push Codex and Anthropic's Claude Code toward near-identical architectures (MCP, Skills, plugins). If you build with either, you need to understand both.
## What actually shipped on 2026-04-16
OpenAI announced the "superapp" update, and the framing matters: this isn't a feature pack, it's a repositioning. Coming off the recent $122B funding round, OpenAI is presenting Codex as the realization of its superapp ambition.
Three numbers tell the story:
- 3M weekly active developers
- 50% already using Codex for non-coding work
- 90+ bundled plugins (Skills + App Integrations + MCP Servers)
Below is each feature, what it actually does, and what it breaks.
### 1. Background Computer Use
Codex now controls your desktop with its own cursor — click, type, read screen. The twist: it runs in the background without stealing focus. You keep working in other apps while multiple Codex agents run in parallel.
- Platform: macOS only (initial), EU/UK later
- Tech: macOS accessibility APIs (Sky acquisition)
- Use cases: legacy apps without APIs, frontend iteration, UI testing
Compare this to Anthropic's Computer Use in Claude: same capability category, but Claude's version does take focus. OpenAI's "background" framing is the UX differentiator.
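Neither vendor publishes the internal API for this, so as a mental model only: computer use in either product boils down to an observe-decide-act loop over the screen. The sketch below fakes the desktop in memory (every class and method name here is invented for illustration); in the real product, the "policy" step is the model deciding what to click.

```python
from dataclasses import dataclass, field

@dataclass
class FakeDesktop:
    """In-memory stand-in for a background desktop session (invented API)."""
    screen: str = "login form: [username] [password] [Submit]"
    actions: list = field(default_factory=list)

    def read_screen(self) -> str:
        return self.screen

    def type_text(self, target: str, text: str) -> None:
        self.actions.append(("type", target, text))

    def click(self, target: str) -> None:
        self.actions.append(("click", target))
        if target == "Submit":
            self.screen = "dashboard"   # fake app reacts to the click

def run_agent(desktop: FakeDesktop, goal: str = "dashboard") -> list:
    """Tiny observe-decide-act loop: the core shape of computer use."""
    for _ in range(10):                 # hard cap so the loop can't run forever
        view = desktop.read_screen()    # observe
        if goal in view:                # goal state reached, stop
            break
        # Decide/act: in Codex this is the model; here it's hardcoded.
        desktop.type_text("username", "demo")
        desktop.click("Submit")
    return desktop.actions

desktop = FakeDesktop()
log = run_agent(desktop)
```

The "background" part is orthogonal to this loop: it's about driving the accessibility layer without grabbing window focus, which is a platform concern (hence macOS-only at launch), not an agent-loop concern.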
### 2. In-App Browser
A browser inside the Codex app. You can annotate directly on the page to give the agent precise instructions. Right now it only works with localhost — your local dev server — but the roadmap is full web.
- Current: localhost dev servers only
- Planned: full web commands, screenshots, user flow execution
- Origin: OpenAI's Atlas browser tech integrated into Codex
The killer workflow: edit React code → Codex renders it in its own browser → you annotate the bug on screen → Codex fixes it. No context switching.
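OpenAI hasn't published what an on-page annotation actually carries, but the idea is easy to make concrete: a visual scribble flattens into a textual instruction with coordinates and a target URL. Every field name below is invented.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """Hypothetical shape of an in-browser annotation (all fields invented)."""
    url: str      # localhost page the agent rendered
    x: int        # coordinates of the circled region
    y: int
    note: str     # the instruction you scribbled on screen

    def to_instruction(self) -> str:
        """Flatten the visual annotation into a textual agent instruction."""
        return f"On {self.url}, at ({self.x}, {self.y}): {self.note}"

a = Annotation("http://localhost:3000/cart", 420, 118, "button overflows on mobile")
instruction = a.to_instruction()
```

The win over pasting a screenshot into a chat is precision: coordinates plus a live page the agent can re-render, instead of a static image it has to re-locate elements in.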
### 3. gpt-image-1.5 (native image generation)
Image generation inside Codex using the new gpt-image-1.5 model. The point isn't the model — it's the single-window workflow:
screenshot → code → generated image → back to code
Product concepts, frontend mockups, game assets — all without leaving the agent.
### 4. Memory (preview)
Codex now remembers your preferences, edit patterns, and accumulated context across sessions. This effectively replaces "custom instructions" as the personalization mechanism. Not yet available on Enterprise or Edu plans, or in the EU and UK.
### 5. Automations — self-scheduling
This is the paradigm shift most people will miss on first read.
Codex can now schedule its own future work and reuse existing conversation threads so context accumulates rather than resets. Heartbeat automations let an agent stay on a task across days or weeks, waking itself up to continue.
```
# Conceptual pattern
#   Previous model: agent completes a task, then stops.
#   New model: agent finishes a task, schedules its own follow-up,
#   and wakes itself on a condition (PR merged, Slack replied, file changed).
```
Target use cases:
- Open PR follow-through
- Slack / Gmail / Notion thread tracking
- Long-running data collection
If you've ever built an "always-on agent" out of cron jobs and state files, this is that pattern, made native.
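To make the shift concrete, here is a minimal sketch of the cron-plus-state version of self-scheduling: a finished task enqueues its own follow-up with a wake condition, so context carries over instead of resetting. Everything here (class names, the `pr_merged` flag) is invented for illustration, not Codex's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Followup:
    name: str
    wake_when: Callable[[dict], bool]   # wake condition, e.g. "PR merged?"
    run: Callable[[dict], None]         # work to do when woken

class Agent:
    def __init__(self) -> None:
        self.context: dict = {"pr_merged": False, "log": []}   # shared state
        self.queue: list[Followup] = []

    def finish_task(self, name: str) -> None:
        """Old model stopped here; new model schedules its own follow-up."""
        self.context["log"].append(f"done: {name}")
        self.queue.append(Followup(
            name=f"follow up on {name}",
            wake_when=lambda ctx: ctx["pr_merged"],
            run=lambda ctx: ctx["log"].append(f"followed up on {name}"),
        ))

    def heartbeat(self) -> None:
        """Periodic wake-up: a cron tick, or Codex's native heartbeat."""
        ready = [f for f in self.queue if f.wake_when(self.context)]
        for f in ready:
            self.queue.remove(f)
            f.run(self.context)

agent = Agent()
agent.finish_task("open PR")
agent.heartbeat()                   # PR not merged yet: nothing fires
agent.context["pr_merged"] = True   # external event arrives
agent.heartbeat()                   # follow-up wakes and runs
```

The native version removes the parts you'd otherwise own yourself: persisting the queue, running the heartbeat, and detecting the external events that flip the wake conditions.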
### 6. 90+ Plugins (Skills + Apps + MCP)
The plugin bundle includes:
Direct integrations:
- Atlassian Rovo (JIRA)
- CircleCI, GitLab Issues, CodeRabbit
- Microsoft Suite (Office 365)
- Neon (Databricks), Remotion, Render, Superpowers
Third-party: Box, Figma, Linear, Notion, Sentry, Slack, Gmail, Hugging Face
Here's the thing: the structure is nearly identical to Anthropic's Claude plugins, and MCP is emerging as the shared standard.
### Developer-only additions
| Feature | Description |
|---|---|
| GitHub PR reviews | Reply to review comments from inside Codex |
| Multi-terminal tabs | Multiple terminals in one session |
| Remote devbox SSH | Alpha — SSH into remote dev env |
| File sidebar | PDF / spreadsheet / slides / doc preview |
| Summary pane | Agent plan, sources, outputs in one view |
| Fast frontend iteration | Browser + computer use + image gen combined |
## The strategic read for developers
Anthropic Claude Code and OpenAI Codex are converging to nearly identical stacks: plugins, MCP, Skills, memory, computer use. Which you pick matters less than understanding both, because:
- MCP means plugins port across vendors fast
- Pricing structure is the same (monthly subscription to the host app)
- The winning axis isn't intelligence — it's ecosystem velocity
For solo developers and 1-person companies, the self-scheduling automations feature is the one to study first. Overnight builds, PR follow-ups, and recurring data collection become genuinely autonomous workflows.
## What to do this week
- Check which of your repetitive tasks could move to self-scheduling agents
- Scan the 90-plugin catalog even if you're on Claude — the integrations will cross-port
- If you're on macOS, try Background Computer Use on a legacy app you hate
- Benchmark in-app browser against your current dev → review → fix loop
Are you on Claude Code, Codex, or both? Which of these six features matters most to your workflow?