OpenAI shipped Codex CLI to general availability this week, graduating from the research preview that has been running since late 2025. The GA release includes stable tool-calling, streaming completions, and a local sandbox mode that runs entirely on-device for sensitive codebases.

AGNT's fleet layer has had a codex_local adapter since v0.2. The adapter implements the same four-method interface as every other fleet backend: complete(), stream(), supports_tools(), and health_check(). With the GA release, the Codex API surface is now stable enough that we are promoting codex_local from experimental to supported status.
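The four-method interface can be sketched as a Python protocol. This is an illustrative reconstruction, not AGNT's actual source: the method names come from the post, but the signatures, parameter names, and the CodexLocalAdapter stub are assumptions.

```python
from typing import Iterator, Protocol


class FleetAdapter(Protocol):
    """The four-method fleet backend interface (signatures are illustrative)."""

    def complete(self, prompt: str) -> str: ...          # one-shot completion
    def stream(self, prompt: str) -> Iterator[str]: ...  # incremental tokens
    def supports_tools(self) -> bool: ...                # tool-calling capability
    def health_check(self) -> bool: ...                  # backend liveness


class CodexLocalAdapter:
    """Hypothetical minimal backend satisfying the FleetAdapter protocol."""

    def complete(self, prompt: str) -> str:
        # A real adapter would call the Codex backend here.
        return f"completion for: {prompt}"

    def stream(self, prompt: str) -> Iterator[str]:
        # Naive streaming: yield the completion token by token.
        yield from self.complete(prompt).split()

    def supports_tools(self) -> bool:
        return True

    def health_check(self) -> bool:
        return True
```

Because the fleet layer only depends on these four methods, any object implementing them is interchangeable, which is what makes the hot-swapping described below possible.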

What this means in practice: any AGNT fleet agent can be hot-swapped from claude_local to codex_local via a single PATCH call to the fleet API. The swap preserves the working directory, the instructions file path, and all agent state. We documented the full swap procedure in /guides/ecosystem/swap-codex-as-fleet-adapter and tested it live against three fleet agents without restarts.
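The core of such a swap is that the PATCH only touches the adapter field while everything else on the agent record carries over. A minimal sketch of that semantics, assuming a dict-shaped agent record with hypothetical field names (the real record shape is documented in the swap guide):

```python
def swap_adapter(agent: dict, new_adapter: str) -> dict:
    """Return a patched copy of the agent record with only the adapter changed.

    Working directory, instructions path, and agent state all carry over
    untouched -- the PATCH-style update replaces a single field.
    """
    patched = dict(agent)  # shallow copy; original record is left as-is
    patched["adapter"] = new_adapter
    return patched


# Hypothetical agent record before the swap.
agent = {
    "adapter": "claude_local",
    "working_dir": "/srv/fleet/pr-reviewer",
    "instructions_path": "AGENTS.md",
    "state": {"open_tasks": 3},
}

patched = swap_adapter(agent, "codex_local")
```

After the call, `patched["adapter"]` is codex_local while `working_dir`, `instructions_path`, and `state` are unchanged, mirroring the preservation guarantee described above.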

The interesting architectural question is where Codex fits in the routing hierarchy. Claude Sonnet remains the default for complex multi-tool reasoning. Gemma handles lightweight local tasks. Codex occupies the middle ground — cloud-capable but with strong code-specific performance that makes it a natural fit for developer-facing fleet agents like the PR reviewer, the migration writer, and the test generator.
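The three-tier routing described above can be sketched as a simple dispatch function. The tier boundaries and the task-profile keys here are illustrative assumptions, not AGNT's actual routing logic:

```python
def route(task: dict) -> str:
    """Pick a fleet adapter by task profile (keys and tiers are illustrative)."""
    if task.get("needs_multi_tool_reasoning"):
        return "claude_local"   # default for complex multi-tool reasoning
    if task.get("kind") == "code":
        return "codex_local"    # middle ground: code-specific strength
    return "gemma_local"        # lightweight local tasks
```

Under this scheme a PR-review or test-generation task with `kind="code"` lands on codex_local, while anything flagged for multi-tool reasoning still falls through to the Claude Sonnet default.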

For builders who want to try: set CODEX_API_KEY in your fleet env, declare codex_local as the preferred adapter on any agent, and restart. The adapter handles the rest. Full walkthrough at /stack/codex.
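The three setup steps can be sketched in a few lines. CODEX_API_KEY is the variable named in the post; the placeholder value, the config shape, and the `preferred_adapter` key are assumptions for illustration (see /stack/codex for the real walkthrough):

```python
import os

# Step 1: the fleet env must carry the Codex key (placeholder value shown).
os.environ.setdefault("CODEX_API_KEY", "sk-example-placeholder")

# Step 2: declare codex_local as the preferred adapter on an agent
# (hypothetical config shape -- the real schema lives in the fleet docs).
agent_config = {
    "name": "test-generator",
    "preferred_adapter": "codex_local",
}

# Step 3: restart the agent; a pre-flight check like this catches a missing key early.
ready = bool(os.environ.get("CODEX_API_KEY")) and agent_config["preferred_adapter"] == "codex_local"
```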