Commit Graph

69 Commits

Author SHA1 Message Date
jif-oai
774892c6d7 feat: add auto-approval for codex exec (#5043) 2025-10-15 19:03:54 +01:00
pakrym-oai
60f9e85c16 Set codex SDK TypeScript originator (#4894)
## Summary
- ensure the TypeScript SDK sets CODEX_INTERNAL_ORIGINATOR_OVERRIDE to
codex_sdk_ts when spawning the Codex CLI
- extend the responses proxy test helper to capture request headers for
assertions
- add coverage that verifies Codex threads launched from the TypeScript
SDK send the codex_sdk_ts originator header

## Testing
- Not Run (not requested)


------
https://chatgpt.com/codex/tasks/task_i_68e561b125248320a487f129093d16e7
2025-10-07 14:06:41 -07:00
Dylan
4764fc1ee7 feat: Freeform apply_patch with simple shell output (#4718)
## Summary
This PR is an alternative approach to #4711; instead of changing our
storage, it parses out shell calls in the client and reserializes them on
the fly before we send them out as part of the request.

What this changes:
1. Adds additional serialization logic when
`ApplyPatchToolType::Freeform` is in use.
2. Adds a `--custom-apply-patch` flag to enable this setting on a
session-by-session basis.

This change is delicate, but is not meant to be permanent. It is meant
to be the first step in a migration:
1. (This PR) Add in-flight serialization with config
2. Update model_family default
3. Update serialization logic to store turn outputs in a structured
format, with logic to serialize based on model_family setting.
4. Remove this rewrite in-flight logic.
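
A minimal sketch of what the step 1 rewrite could look like, assuming the stored shell call has the canonical `bash -lc "apply_patch …"` shape (the function name and types here are hypothetical, not the PR's actual code):

```rust
/// Hypothetical sketch: when a stored turn output is a shell invocation of
/// apply_patch, extract the patch body so it can be re-serialized as a
/// freeform apply_patch call before the request goes out.
fn extract_shell_apply_patch(command: &[String]) -> Option<String> {
    match command {
        // Match the canonical `bash -lc "apply_patch <patch>"` shape.
        [bash, flag, script] if bash == "bash" && flag == "-lc" => script
            .trim_start()
            .strip_prefix("apply_patch")
            .map(|patch| patch.trim().to_string()),
        _ => None,
    }
}
```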

## Test Plan
- [x] Additional unit tests added
- [x] Integration tests added
- [x] Tested locally
2025-10-04 19:16:36 -07:00
Fouad Matin
a5b7675e42 add(core): managed config (#3868)
## Summary

- Factor `load_config_as_toml` into `core::config_loader` so config
loading is reusable across callers.
- Layer `~/.codex/config.toml`, optional `~/.codex/managed_config.toml`,
and macOS managed preferences (base64) with recursive table merging and
scoped threads per source.

## Config Flow

```
Managed prefs (macOS profile: com.openai.codex/config_toml_base64)
                               ▲
                               │
~/.codex/managed_config.toml   │  (optional file-based override)
                               ▲
                               │
                ~/.codex/config.toml (user-defined settings)
```

- The loader searches under the resolved `CODEX_HOME` directory
(defaults to `~/.codex`).
- Managed configs let administrators ship fleet-wide overrides via
device profiles, which is useful for enforcing settings such as sandbox
or approval defaults.
- For nested hash tables, overlays merge recursively: child tables are
merged key-by-key, while scalar or array values replace the prior layer
entirely. This lets admins add or tweak individual fields without
clobbering unrelated user settings (see the sketch below).
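
As a rough illustration, the recursive merge rule could look like the following sketch over `toml::Value` (assumes the `toml` crate; this is not the actual `config_loader` code):

```rust
use toml::Value;

/// Merge `overlay` (the higher-precedence layer) into `base`: child tables
/// merge key-by-key, while scalars and arrays replace the prior layer.
fn merge_toml(base: &mut Value, overlay: Value) {
    match (base, overlay) {
        (Value::Table(base_table), Value::Table(overlay_table)) => {
            for (key, overlay_value) in overlay_table {
                match base_table.get_mut(&key) {
                    Some(base_value) => merge_toml(base_value, overlay_value),
                    None => {
                        base_table.insert(key, overlay_value);
                    }
                }
            }
        }
        // Scalars and arrays from the overlay win outright.
        (base_slot, overlay_value) => *base_slot = overlay_value,
    }
}
```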
2025-10-03 13:02:26 -07:00
Michael Bolin
042d4d55d9 feat: codex exec writes only the final message to stdout (#4644)
This updates `codex exec` so that, by default, the agent's activity is
written to stderr and only the final agent message is written to stdout.
This makes it easier to pipe `codex exec` into another tool without extra
filtering.

I introduced `#![deny(clippy::print_stdout)]` to help enforce this
change and renamed the `ts_println!()` macro to `ts_msg()` because (1)
it no longer calls `println!()` and (2) `ts_eprintln!()` seemed too long
a name.

While here, this also adds `-o` as an alias for `--output-last-message`.

Fixes https://github.com/openai/codex/issues/1670
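
A minimal sketch of the stdout/stderr discipline this enables (illustrative only; `run` and its arguments are hypothetical):

```rust
#![deny(clippy::print_stdout)] // rejects stray `println!` calls at compile time

use std::io::Write;

fn run(activity: &[&str], final_message: &str) -> std::io::Result<()> {
    // Intermediate agent activity goes to stderr, where it will not
    // pollute a pipeline reading from stdout.
    for line in activity {
        eprintln!("{line}");
    }
    // Only the final agent message reaches stdout, so
    // `codex exec "..." | other-tool` sees just the answer.
    writeln!(std::io::stdout(), "{final_message}")
}
```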
2025-10-03 16:22:12 +00:00
jif-oai
33d3ecbccc chore: refactor tool handling (#4510)
# Tool System Refactor

- Centralizes tool definitions and execution in `core/src/tools/*`:
specs (`spec.rs`), handlers (`handlers/*`), router (`router.rs`),
registry/dispatch (`registry.rs`), and shared context (`context.rs`).
One registry now builds the model-visible tool list and binds handlers.
- Router converts model responses to tool calls; Registry dispatches
with consistent telemetry via `codex-rs/otel` and unified error
handling. Function, Local Shell, MCP, and experimental `unified_exec`
all flow through this path; legacy shell aliases still work.
- Rationale: reduce per‑tool boilerplate, keep spec/handler in sync, and
make adding tools predictable and testable.

Example: `read_file`
- Spec: `core/src/tools/spec.rs` (see `create_read_file_tool`,
registered by `build_specs`).
- Handler: `core/src/tools/handlers/read_file.rs` (absolute `file_path`,
1‑indexed `offset`, `limit`, `L#: ` prefixes, safe truncation).
- E2E test: `core/tests/suite/read_file.rs` validates the tool returns
the requested lines.
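
In sketch form, the spec/handler/registry split could look like this (the type and method names here are hypothetical stand-ins for the real ones):

```rust
use std::collections::HashMap;

struct ToolCall {
    name: String,
    arguments: String,
}

struct ToolOutput(String);

trait ToolHandler {
    fn handle(&self, arguments: &str) -> Result<ToolOutput, String>;
}

struct ToolRegistry {
    handlers: HashMap<String, Box<dyn ToolHandler>>,
}

impl ToolRegistry {
    /// Function, Local Shell, and MCP calls all dispatch through one path,
    /// which is where shared telemetry and error handling can live.
    fn dispatch(&self, call: &ToolCall) -> Result<ToolOutput, String> {
        self.handlers
            .get(&call.name)
            .ok_or_else(|| format!("unknown tool: {}", call.name))?
            .handle(&call.arguments)
    }
}
```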

## Next steps:
- Decompose `handle_container_exec_with_params` 
- Add parallel tool calls
2025-10-03 13:21:06 +01:00
pakrym-oai
1d94b9111c Use supports_color in codex exec (#4633)
It knows how to detect GitHub Actions.
2025-10-03 01:15:03 +00:00
pakrym-oai
f895d4cbb3 Minor cleanup of codex exec output (#4585)
<img width="850" height="723" alt="image"
src="https://github.com/user-attachments/assets/2ae067bf-ba6b-47bf-9ffe-d1c3f3aa1870"
/>
<img width="872" height="547" alt="image"
src="https://github.com/user-attachments/assets/9058be24-6513-4423-9dae-2d5fd4cbf162"
/>
2025-10-02 14:17:42 -07:00
pakrym-oai
4c566d484a Separate interactive and non-interactive sessions (#4612)
Do not show exec sessions in the VS Code/TUI selector.
2025-10-02 13:06:21 -07:00
pakrym-oai
2f6fb37d72 Support CODEX_API_KEY for codex exec (#4615)
Allows setting an API key per invocation of `codex exec`.
2025-10-02 09:59:45 -07:00
pakrym-oai
7fc3edf8a7 Remove legacy codex exec --json format (#4525)
`codex exec --json` now maps to the behavior of `codex exec
--experimental-json` with new event and item shapes.

Thread events:
- thread.started
- turn.started
- turn.completed
- turn.failed
- item.started
- item.updated
- item.completed

Item types: 
- assistant_message
- reasoning
- command_execution
- file_change
- mcp_tool_call
- web_search
- todo_list
- error

Sample output:

<details>
`codex exec "list my assigned github issues"  --json | jq`

```
{
  "type": "thread.started",
  "thread_id": "01999ce5-f229-7661-8570-53312bd47ea3"
}
{
  "type": "turn.started"
}
{
  "type": "item.completed",
  "item": {
    "id": "item_0",
    "item_type": "reasoning",
    "text": "**Planning to list assigned GitHub issues**"
  }
}
{
  "type": "item.started",
  "item": {
    "id": "item_1",
    "item_type": "mcp_tool_call",
    "server": "github",
    "tool": "search_issues",
    "status": "in_progress"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "item_1",
    "item_type": "mcp_tool_call",
    "server": "github",
    "tool": "search_issues",
    "status": "completed"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "item_2",
    "item_type": "reasoning",
    "text": "**Organizing final message structure**"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "item_3",
    "item_type": "assistant_message",
    "text": "**Assigned Issues**\n- openai/codex#3267 – “stream error: stream disconnected before completion…” (bug) – last update 2025-09-08\n- openai/codex#3257 – “You've hit your usage limit. Try again in 4 days 20 hours 9 minutes.” – last update 2025-09-23\n- openai/codex#3054 – “reqwest SSL panic (library has no ciphers)” (bug) – last update 2025-09-03\n- openai/codex#3051 – “thread 'main' panicked at linux-sandbox/src/linux_run_main.rs:53:5:” (bug) – last update 2025-09-10\n- openai/codex#3004 – “Auto-compact when approaching context limit” (enhancement) – last update 2025-09-26\n- openai/codex#2916 – “Feature request: Add OpenAI service tier support for cost optimization” – last update 2025-09-12\n- openai/codex#1581 – “stream error: stream disconnected before completion: stream closed before response.complete; retrying...” (bug) – last update 2025-09-17"
  }
}
{
  "type": "turn.completed",
  "usage": {
    "input_tokens": 34785,
    "cached_input_tokens": 12544,
    "output_tokens": 560
  }
}
```

</details>
2025-09-30 17:21:37 -07:00
pakrym-oai
c09e131653 Set originator for codex exec (#4485)
Distinct from the main CLI.
2025-09-29 20:59:19 -07:00
vishnu-oai
04c1782e52 OpenTelemetry events (#2103)
## otel

Codex can emit [OpenTelemetry](https://opentelemetry.io/) **log events**
that describe each run: outbound API requests, streamed responses, user
input, tool-approval decisions, and the result of every tool invocation.
Export is **disabled by default** so local runs remain self-contained.
Opt in by adding an `[otel]` table and choosing an exporter.

```toml
[otel]
environment = "staging"   # defaults to "dev"
exporter = "none"          # defaults to "none"; set to otlp-http or otlp-grpc to send events
log_user_prompt = false    # defaults to false; redact prompt text unless explicitly enabled
```

Codex tags every exported event with `service.name = "codex-cli"`, the
CLI version, and an `env` attribute so downstream collectors can
distinguish dev/staging/prod traffic. Only telemetry produced inside the
`codex_otel` crate—the events listed below—is forwarded to the exporter.

### Event catalog

Every event shares a common set of metadata fields: `event.timestamp`,
`conversation.id`, `app.version`, `auth_mode` (when available),
`user.account_id` (when available), `terminal.type`, `model`, and
`slug`.

With OTEL enabled, Codex emits the following event types (in addition to
the metadata above):

- `codex.api_request`
  - `cf_ray` (optional)
  - `attempt`
  - `duration_ms`
  - `http.response.status_code` (optional)
  - `error.message` (failures)
- `codex.sse_event`
  - `event.kind`
  - `duration_ms`
  - `error.message` (failures)
  - `input_token_count` (completion only)
  - `output_token_count` (completion only)
  - `cached_token_count` (completion only, optional)
  - `reasoning_token_count` (completion only, optional)
  - `tool_token_count` (completion only)
- `codex.user_prompt`
  - `prompt_length`
  - `prompt` (redacted unless `log_user_prompt = true`)
- `codex.tool_decision`
  - `tool_name`
  - `call_id`
  - `decision` (`approved`, `approved_for_session`, `denied`, or `abort`)
  - `source` (`config` or `user`)
- `codex.tool_result`
  - `tool_name`
  - `call_id`
  - `arguments`
  - `duration_ms` (execution time for the tool)
  - `success` (`"true"` or `"false"`)
  - `output`

### Choosing an exporter

Set `otel.exporter` to control where events go:

- `none` – leaves instrumentation active but skips exporting. This is the
  default.
- `otlp-http` – posts OTLP log records to an OTLP/HTTP collector. Specify
  the endpoint, protocol, and headers your collector expects:

  ```toml
  [otel]
  exporter = { otlp-http = {
    endpoint = "https://otel.example.com/v1/logs",
    protocol = "binary",
    headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
  }}
  ```

- `otlp-grpc` – streams OTLP log records over gRPC. Provide the endpoint
  and any metadata headers:

  ```toml
  [otel]
  exporter = { otlp-grpc = {
    endpoint = "https://otel.example.com:4317",
    headers = { "x-otlp-meta" = "abc123" }
  }}
  ```

If the exporter is `none`, nothing is written anywhere; otherwise you
must run or point to your own collector. All exporters run on a
background batch worker that is flushed on shutdown.

If you build Codex from source, the OTEL crate is still behind an `otel`
feature flag; the official prebuilt binaries ship with the feature
enabled. When the feature is disabled, the telemetry hooks become no-ops
so the CLI continues to function without the extra dependencies.
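
The no-op fallback can be pictured as a pair of `cfg`-gated functions (a sketch that assumes the `tracing` crate; the real hook names and signatures differ):

```rust
/// Hypothetical telemetry hook: in `otel` builds, hand the event to the
/// exporter pipeline.
#[cfg(feature = "otel")]
pub fn record_tool_result(tool_name: &str, success: bool) {
    tracing::info!(tool_name, success, "codex.tool_result");
}

/// Without the `otel` feature the hook compiles to nothing, so the CLI
/// keeps working with no extra dependencies.
#[cfg(not(feature = "otel"))]
pub fn record_tool_result(_tool_name: &str, _success: bool) {}
```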

---------

Co-authored-by: Anton Panasenko <apanasenko@openai.com>
2025-09-29 11:30:55 -07:00
pakrym-oai
cc1b21e47f Add turn started/completed events and correct exit code on error (#4309)
Adds new event for session completed that includes usage. Also ensures
we return 1 on failures.
```
{
  "type": "session.created",
  "session_id": "019987a7-93e7-7b20-9e05-e90060e411ea"
}
{
  "type": "turn.started"
}
...
{
  "type": "turn.completed",
  "usage": {
    "input_tokens": 78913,
    "cached_input_tokens": 65280,
    "output_tokens": 1099
  }
}
```
2025-09-26 16:21:50 -07:00
pakrym-oai
344d4a1d68 Add explicit codex exec events (#4177)
This pull request adds a new experimental JSON output format.

You can try it using `codex exec --experimental-json`.

The design takes a lot of inspiration from Responses API items and the
streaming format.

# Session and items
Each invocation of `codex exec` starts or resumes a session. 

A session contains multiple high-level item types:
1. Assistant message 
2. Assistant thinking 
3. Command execution 
4. File changes
5. To-do lists
6. etc.

# Events 
Sessions and items go through life cycles, which are represented by
events.

A session is `session.created` or `session.resumed`.
Items are `item.added`, `item.updated`, `item.completed`, or
`item.require_approval` (with other event types like `item.output_delta`
when we need streaming).

So a typical session can look like:

<details>

```
{
  "type": "session.created",
  "session_id": "01997dac-9581-7de3-b6a0-1df8256f2752"
}
{
  "type": "item.completed",
  "item": {
    "id": "itm_0",
    "item_type": "assistant_message",
    "text": "I’ll locate the top-level README and remove its first line. Then I’ll show a quick summary of what changed."
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "itm_1",
    "item_type": "command_execution",
    "command": "bash -lc ls -la | sed -n '1,200p'",
    "aggregated_output": "pyenv: cannot rehash: /Users/pakrym/.pyenv/shims isn't writable\ntotal 192\ndrwxr-xr-x@  33 pakrym  staff   1056 Sep 24 14:36 .\ndrwxr-xr-x   41 pakrym  staff   1312 Sep 24 09:17 ..\n-rw-r--r--@   1 pakrym  staff      6 Jul  9 16:16 .codespellignore\n-rw-r--r--@   1 pakrym  staff    258 Aug 13 09:40 .codespellrc\ndrwxr-xr-x@   5 pakrym  staff    160 Jul 23 08:26 .devcontainer\n-rw-r--r--@   1 pakrym  staff   6148 Jul 22 10:03 .DS_Store\ndrwxr-xr-x@  15 pakrym  staff    480 Sep 24 14:38 .git\ndrwxr-xr-x@  12 pakrym  staff    384 Sep  2 16:00 .github\n-rw-r--r--@   1 pakrym  staff    778 Jul  9 16:16 .gitignore\ndrwxr-xr-x@   3 pakrym  staff     96 Aug 11 09:37 .husky\n-rw-r--r--@   1 pakrym  staff    104 Jul  9 16:16 .npmrc\n-rw-r--r--@   1 pakrym  staff     96 Sep  2 08:52 .prettierignore\n-rw-r--r--@   1 pakrym  staff    170 Jul  9 16:16 .prettierrc.toml\ndrwxr-xr-x@   5 pakrym  staff    160 Sep 14 17:43 .vscode\ndrwxr-xr-x@   2 pakrym  staff     64 Sep 11 11:37 2025-09-11\n-rw-r--r--@   1 pakrym  staff   5505 Sep 18 09:28 AGENTS.md\n-rw-r--r--@   1 pakrym  staff     92 Sep  2 08:52 CHANGELOG.md\n-rw-r--r--@   1 pakrym  staff   1145 Jul  9 16:16 cliff.toml\ndrwxr-xr-x@  11 pakrym  staff    352 Sep 24 13:03 codex-cli\ndrwxr-xr-x@  38 pakrym  staff   1216 Sep 24 14:38 codex-rs\ndrwxr-xr-x@  18 pakrym  staff    576 Sep 23 11:01 docs\n-rw-r--r--@   1 pakrym  staff   2038 Jul  9 16:16 flake.lock\n-rw-r--r--@   1 pakrym  staff   1434 Jul  9 16:16 flake.nix\n-rw-r--r--@   1 pakrym  staff  10926 Jul  9 16:16 LICENSE\ndrwxr-xr-x@ 465 pakrym  staff  14880 Jul 15 07:36 node_modules\n-rw-r--r--@   1 pakrym  staff    242 Aug  5 08:25 NOTICE\n-rw-r--r--@   1 pakrym  staff    578 Aug 14 12:31 package.json\n-rw-r--r--@   1 pakrym  staff    498 Aug 11 09:37 pnpm-lock.yaml\n-rw-r--r--@   1 pakrym  staff     58 Aug 11 09:37 pnpm-workspace.yaml\n-rw-r--r--@   1 pakrym  staff   2402 Jul  9 16:16 PNPM.md\n-rw-r--r--@   1 pakrym  staff   4393 Sep 12 14:36 README.md\ndrwxr-xr-x@   4 pakrym  staff    128 Sep 18 09:28 scripts\ndrwxr-xr-x@   2 pakrym  staff     64 Sep 11 11:34 tmp\n",
    "exit_code": 0,
    "status": "completed"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "itm_2",
    "item_type": "reasoning",
    "text": "**Reviewing README.md file**\n\nI've located the README.md file at the root, and it’s 4393 bytes. Now, I need to remove the first line, but first, I should check its content to make sure I’m patching it correctly. I’ll use sed to display the first 20 lines. By reviewing those lines, I can determine exactly what needs to be removed before I proceed with the editing. Let's do this carefully!"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "itm_3",
    "item_type": "command_execution",
    "command": "bash -lc sed -n '1,40p' README.md",
    "aggregated_output": "<h1 align=\"center\">OpenAI Codex CLI</h1>\n\n<p align=\"center\"><code>npm i -g @openai/codex</code><br />or <code>brew install codex</code></p>\n\n<p align=\"center\"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.\n</br>\n</br>If you want Codex in your code editor (VS Code, Cursor, Windsurf), <a href=\"https://developers.openai.com/codex/ide\">install in your IDE</a>\n</br>If you are looking for the <em>cloud-based agent</em> from OpenAI, <strong>Codex Web</strong>, go to <a href=\"https://chatgpt.com/codex\">chatgpt.com/codex</a></p>\n\n<p align=\"center\">\n  <img src=\"./.github/codex-cli-splash.png\" alt=\"Codex CLI splash\" width=\"80%\" />\n  </p>\n\n---\n\n## Quickstart\n\n### Installing and running Codex CLI\n\nInstall globally with your preferred package manager. If you use npm:\n\n```shell\nnpm install -g @openai/codex\n```\n\nAlternatively, if you use Homebrew:\n\n```shell\nbrew install codex\n```\n\nThen simply run `codex` to get started:\n\n```shell\ncodex\n```\n\n<details>\n<summary>You can also go to the <a href=\"https://github.com/openai/codex/releases/latest\">latest GitHub Release</a> and download the appropriate binary for your platform.</summary>\n\n",
    "exit_code": 0,
    "status": "completed"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "itm_4",
    "item_type": "reasoning",
    "text": "**Updating README file**\n\nThe user wants to update the README by removing the first line, which is an HTML h1 tag. I'll proceed to delete that line while keeping the rest intact. To do this, I need to use the apply_patch tool and create a patch in the diff format. I’ll take care to include context lines around the change. After removing the header, I’m considering whether to delete the following blank line as well to keep it tidy."
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "itm_5",
    "item_type": "reasoning",
    "text": "**Crafting the patch**\n\nThe user has asked to remove just the first line, without mentioning the blank lines. Technically, leaving a blank line at the start is acceptable, but it's not ideal. Ideally, I'd prefer to remove the first line while maintaining the subsequent blank line. However, if I can also remove that blank line, it could tidy things up even more. Still, since the user's request is strictly to remove only the first line, I'll stick to that and craft the patch accordingly."
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "itm_6",
    "item_type": "assistant_message",
    "text": "Applying the change by removing the first line from `README.md`."
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "itm_7",
    "item_type": "file_change",
    "changes": [
      {
        "path": "/Users/pakrym/code/codex/README.md",
        "kind": "update"
      }
    ],
    "status": "completed"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "itm_8",
    "item_type": "reasoning",
    "text": "**Validating README Update**\n\nI updated the README.md and now I need to validate that the first line has been successfully removed. To do this, I'll check the first 10 lines of the document to ensure everything looks correct. It’s always good to double-check these kinds of changes, so I’m going to take that extra step to make sure the update was executed properly. Let's see what those initial lines say!"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "itm_9",
    "item_type": "command_execution",
    "command": "bash -lc sed -n '1,20p' README.md",
    "aggregated_output": "\n<p align=\"center\"><code>npm i -g @openai/codex</code><br />or <code>brew install codex</code></p>\n\n<p align=\"center\"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.\n</br>\n</br>If you want Codex in your code editor (VS Code, Cursor, Windsurf), <a href=\"https://developers.openai.com/codex/ide\">install in your IDE</a>\n</br>If you are looking for the <em>cloud-based agent</em> from OpenAI, <strong>Codex Web</strong>, go to <a href=\"https://chatgpt.com/codex\">chatgpt.com/codex</a></p>\n\n<p align=\"center\">\n  <img src=\"./.github/codex-cli-splash.png\" alt=\"Codex CLI splash\" width=\"80%\" />\n  </p>\n\n---\n\n## Quickstart\n\n### Installing and running Codex CLI\n\nInstall globally with your preferred package manager. If you use npm:\n\n",
    "exit_code": 0,
    "status": "completed"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "itm_10",
    "item_type": "assistant_message",
    "text": "- Updated `README.md:1` by removing the opening H1 line.\n- The README now starts with the install instructions paragraph.\n\nWant me to also remove the leading blank line at the top?"
  }
}
```

</details>

The idea is to give users fully formatted items they can use directly in
their rendering/application logic and to avoid having them build up
items manually from events (unless they want to for streaming).

This PR implements only the `item.completed` payload for some event
types; more event types and item types are to come.
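
For example, a consumer that follows the item lifecycle could be sketched like this (assumes `serde`/`serde_json`; the field handling is illustrative, not a pinned schema):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct ExecEvent {
    #[serde(rename = "type")]
    kind: String,
    item: Option<serde_json::Value>,
}

fn main() -> std::io::Result<()> {
    // Read one JSON event per line from `codex exec --experimental-json`.
    for line in std::io::stdin().lines() {
        let event: ExecEvent = serde_json::from_str(&line?).expect("valid event JSON");
        // Each item arrives fully formed, so no manual accumulation is needed.
        if event.kind == "item.completed" {
            if let Some(item) = event.item {
                println!("completed item: {}", item["item_type"]);
            }
        }
    }
    Ok(())
}
```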

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-09-25 17:47:09 +00:00
Dylan
594248f415 [exec] add include-plan-tool flag and print it nicely (#3461)
### Summary
Sometimes in exec runs, we want to allow the model to use the
`update_plan` tool, but that's not easily configurable. This change adds
a feature flag for this and formats the output so it's human-readable.

## Test Plan
<img width="1280" height="354" alt="Screenshot 2025-09-11 at 12 39
44 AM"
src="https://github.com/user-attachments/assets/72e11070-fb98-47f5-a784-5123ca7333d9"
/>
2025-09-23 16:50:59 -07:00
pakrym-oai
fdb8dadcae Add exec output-schema parameter (#4079)
Adds structured output to `exec` via the `--structured-output`
parameter.
2025-09-23 13:59:16 -07:00
Ahmed Ibrahim
a30e5e40ee enable-resume (#3537)
Adding the ability to resume conversations.
We have one verb, `resume`.

Behavior:

`tui`:
- `codex resume`: opens the session picker
- `codex resume --last`: continues the last conversation
- `codex resume <session id>`: continues the conversation with `<session id>`

`exec`:
- `codex resume --last`: continues the last conversation
- `codex resume <session id>`: continues the conversation with `<session id>`

Implementation:
- I added a function to find the path in `~/.codex/sessions/` by `UUID`,
which is helpful when resuming by session id (see the sketch below).
- Added the above-mentioned flags.
- Added lots of testing.
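
A rough sketch of that lookup (a hypothetical helper; the real code may also account for nested date directories):

```rust
use std::path::{Path, PathBuf};

/// Find the recorded session file under ~/.codex/sessions/ whose filename
/// contains the given session UUID.
fn find_session_path(sessions_dir: &Path, session_id: &str) -> Option<PathBuf> {
    std::fs::read_dir(sessions_dir)
        .ok()?
        .flatten()
        .map(|entry| entry.path())
        .find(|path| {
            path.file_name()
                .and_then(|name| name.to_str())
                .is_some_and(|name| name.contains(session_id))
        })
}
```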
2025-09-14 19:33:19 -04:00
dedrisian-oai
90a0fd342f Review Mode (Core) (#3401)
## 📝 Review Mode -- Core

This PR introduces the Core implementation for Review mode:

- New op `Op::Review { prompt: String }`: spawns a child review task
with isolated context, a review‑specific system prompt, and a
`Config.review_model`.
- `EnteredReviewMode`: emitted when the child review session starts.
Every event from this point onwards reflects the review session.
- `ExitedReviewMode(Option<ReviewOutputEvent>)`: emitted when the review
finishes or is interrupted, with optional structured findings:

```json
{
  "findings": [
    {
      "title": "<≤ 80 chars, imperative>",
      "body": "<valid Markdown explaining *why* this is a problem; cite files/lines/functions>",
      "confidence_score": <float 0.0-1.0>,
      "priority": <int 0-3>,
      "code_location": {
        "absolute_file_path": "<file path>",
        "line_range": {"start": <int>, "end": <int>}
      }
    }
  ],
  "overall_correctness": "patch is correct" | "patch is incorrect",
  "overall_explanation": "<1-3 sentence explanation justifying the overall_correctness verdict>",
  "overall_confidence_score": <float 0.0-1.0>
}
```
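
A minimal serde sketch of that payload (the struct names are hypothetical; the fields mirror the schema above):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct ReviewOutputEvent {
    findings: Vec<Finding>,
    overall_correctness: String,
    overall_explanation: String,
    overall_confidence_score: f32,
}

#[derive(Deserialize)]
struct Finding {
    title: String,
    body: String,
    confidence_score: f32,
    priority: u8,
    code_location: CodeLocation,
}

#[derive(Deserialize)]
struct CodeLocation {
    absolute_file_path: String,
    line_range: LineRange,
}

#[derive(Deserialize)]
struct LineRange {
    start: u32,
    end: u32,
}
```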

## Questions

### Why separate out its own message history?

We want the review thread to match the training of our review models as
much as possible -- that means using a custom prompt, removing user
instructions, and starting a clean chat history.

We also want to make sure the review thread doesn't leak into the parent
thread.

### Why do this as a mode, vs. sub-agents?

1. We want review to be a synchronous task, so it's fine for now to do a
bespoke implementation.
2. We're still unclear about the final structure for sub-agents. We'd
prefer to land this quickly and then refactor into sub-agents without
rushing that implementation.
2025-09-12 23:25:10 +00:00
Eric Traut
e13b35ecb0 Simplify auth flow and reconcile differences between ChatGPT and API Key auth (#3189)
This PR does the following:
* Adds the ability to paste or type an API key.
* Removes the `preferred_auth_method` config option. The last login
method is always persisted in auth.json, so this isn't needed.
* If OPENAI_API_KEY env variable is defined, the value is used to
prepopulate the new UI. The env variable is otherwise ignored by the
CLI.
* Adds a new MCP server entry point "login_api_key" so we can implement
this same API key behavior for the VS Code extension.
<img width="473" height="140" alt="Screenshot 2025-09-04 at 3 51 04 PM"
src="https://github.com/user-attachments/assets/c11bbd5b-8a4d-4d71-90fd-34130460f9d9"
/>
<img width="726" height="254" alt="Screenshot 2025-09-04 at 3 51 32 PM"
src="https://github.com/user-attachments/assets/6cc76b34-309a-4387-acbc-15ee5c756db9"
/>
2025-09-11 09:16:34 -07:00
Gabriel Peal
5eab4c7ab4 Replace config.responses_originator_header_internal_override with CODEX_INTERNAL_ORIGINATOR_OVERRIDE_ENV_VAR (#3388)
The previous config approach had a few issues:
1. It is part of the config but not designed to be used externally.
2. It had to be wired through many places (look at the +/- on this PR).
3. It wasn't guaranteed to be set consistently everywhere because we
don't have a well-defined way that configs stack. For example, the
extension would configure it during newConversation, but anything that
happened outside of that (like login) wouldn't get it.

This env var approach is cleaner and also creates one less thing we have
to deal with when coming up with a better holistic story around configs.

One downside is that I removed the unit test for the override because I
don't want to deal with setting the global env or spawning child
processes and introspecting their originator header. The new code is
sufficiently simple, and I tested it end-to-end, so I feel this is still
worth it.
2025-09-09 17:23:23 -04:00
pakrym-oai
5775174ec2 Never store requests (#3212)
When item ids are sent to the Responses API, it loads them from the
database, ignoring the provided values. This adds extra latency.

Dropping the mode that stores requests also lets us simplify the code.

## Breaking change

The `disable_response_storage` configuration option is removed.
2025-09-05 10:41:47 -07:00
pakrym-oai
c636f821ae Add a common way to create HTTP client (#3110)
Ensure User-Agent and originator are always sent.
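
A sketch of the idea with `reqwest` (the header names follow the PR description; the user-agent value here is a placeholder):

```rust
use reqwest::header::{HeaderMap, HeaderValue, USER_AGENT};

/// Build the one shared HTTP client so User-Agent and originator are
/// always attached, rather than set ad hoc at each call site.
fn create_client(originator: &str) -> reqwest::Result<reqwest::Client> {
    let mut headers = HeaderMap::new();
    // Placeholder user-agent string for illustration.
    headers.insert(USER_AGENT, HeaderValue::from_static("codex-cli"));
    headers.insert(
        "originator",
        HeaderValue::from_str(originator).expect("valid header value"),
    );
    reqwest::Client::builder().default_headers(headers).build()
}
```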
2025-09-03 10:11:02 -07:00
pakrym-oai
03e2796ca4 Move CodexAuth and AuthManager to the core crate (#3074)
Fix a long standing layering issue.
2025-09-02 18:36:19 -07:00
Jeremy Rose
e442ecedab rework message styling (#2877)
https://github.com/user-attachments/assets/cf07f62b-1895-44bb-b9c3-7a12032eb371
2025-09-02 17:29:58 +00:00
dedrisian-oai
4e9ad23864 Add "View Image" tool (#2723)
Adds a "View Image" tool so Codex can find and see images by itself:

<img width="1772" height="420" alt="Screenshot 2025-08-26 at 10 40
04 AM"
src="https://github.com/user-attachments/assets/7a459c7b-0b86-4125-82d9-05fbb35ade03"
/>
2025-08-27 17:41:23 -07:00
Reuben Narad
363636f5eb Add web search tool (#2371)
Adds a `web_search` tool, enabling the model to use the Responses API
web_search tool.
- Disabled by default; enabled by the `--search` flag.
- When `--search` is passed, exposes a `web_search_request` function tool
to the model, which triggers user approval. When approved, the model can
use the web_search tool for the remainder of the turn.

![image](https://github.com/user-attachments/assets/62ac6563-b946-465c-ba5d-9325af28b28f)

---------

Co-authored-by: easong-openai <easong@openai.com>
2025-08-23 22:58:56 -07:00
Eric Traut
dc42ec0eb4 Add AuthManager and enhance GetAuthStatus command (#2577)
This PR adds a central `AuthManager` struct that manages the auth
information used across conversations and the MCP server. Prior to this,
each conversation and the MCP server got their own private snapshots of
the auth information, and changes to one (such as a logout or token
refresh) were not seen by others.

This is especially problematic when multiple instances of the CLI are
run. For example, consider the case where you start CLI 1 and log in to
ChatGPT account X and then start CLI 2 and log out and then log in to
ChatGPT account Y. The conversation in CLI 1 is still using account X,
but if you create a new conversation, it will suddenly (and
unexpectedly) switch to account Y.

With the `AuthManager`, auth information is read from disk at the time
the `ConversationManager` is constructed, and it is cached in memory.
All new conversations use this same auth information, as do any token
refreshes.

The `AuthManager` is also used by the MCP server's GetAuthStatus
command, which now returns the auth method currently used by the MCP
server.

This PR also includes an enhancement to the GetAuthStatus command. It
now accepts two new (optional) input parameters: `include_token` and
`refresh_token`. Callers can use this to request the in-use auth token
and can optionally request to refresh the token.

The PR also adds tests for the login and auth APIs that I recently added
to the MCP server.
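
Conceptually, the shared manager can be sketched like this (the types are simplified stand-ins for the real `AuthManager`):

```rust
use std::sync::{Arc, RwLock};

struct CodexAuth {
    token: String,
}

struct AuthManager {
    // Auth is read from disk once, then cached here in memory.
    cached: RwLock<Option<CodexAuth>>,
}

impl AuthManager {
    fn token(&self) -> Option<String> {
        self.cached.read().unwrap().as_ref().map(|auth| auth.token.clone())
    }

    /// A logout or token refresh through the shared handle is immediately
    /// visible to every conversation holding the same `Arc<AuthManager>`.
    fn replace(&self, auth: Option<CodexAuth>) {
        *self.cached.write().unwrap() = auth;
    }
}

fn new_shared(initial: Option<CodexAuth>) -> Arc<AuthManager> {
    Arc::new(AuthManager {
        cached: RwLock::new(initial),
    })
}
```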
2025-08-22 13:10:11 -07:00
Michael Bolin
712bfa04ac chore: move mcp-server/src/wire_format.rs to protocol/src/mcp_protocol.rs (#2423)
The existing `wire_format.rs` should share more types with the
`codex-protocol` crate (like `AskForApproval` instead of maintaining a
parallel `CodexToolCallApprovalPolicy` enum), so this PR moves
`wire_format.rs` into `codex-protocol`, renaming it as
`mcp-protocol.rs`. We also de-dupe types, where appropriate.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2423).
* #2424
* __->__ #2423
2025-08-18 09:36:57 -07:00
Dylan
6df8e35314 [tools] Add apply_patch tool (#2303)
## Summary
We've been seeing a number of issues and reports with our synthetic
`apply_patch` tool, e.g. #802. Let's make this a real tool - in my
anecdotal testing, it's critical for GPT-OSS models, but I'd like to
make it the standard across GPT-5 and codex models as well.

## Testing
- [x] Tested locally
- [x] Integration test
2025-08-15 11:55:53 -04:00
Michael Bolin
08ed618f72 chore: introduce ConversationManager as a clearinghouse for all conversations (#2240)
This PR does two things because after I got deep into the first one I
started pulling on the thread to the second:

- Makes `ConversationManager` the place where all in-memory
conversations are created and stored. Previously, `MessageProcessor` in
the `codex-mcp-server` crate was doing this via its `session_map`, but
this is something that should be done in `codex-core`.
- It unwinds the `ctrl_c: tokio::sync::Notify` that was threaded
throughout our code. I think this made sense at one time, but now that
we handle Ctrl-C within the TUI and have a proper `Op::Interrupt` event,
I don't think this was quite right, so I removed it. For `codex exec`
and `codex proto`, we now use `tokio::signal::ctrl_c()` directly, but we
no longer make `Notify` a field of `Codex` or `CodexConversation`.

Changes of note:

- Adds the files `conversation_manager.rs` and `codex_conversation.rs`
to `codex-core`.
- `Codex` and `CodexSpawnOk` are no longer exported from `codex-core`:
other crates must use `CodexConversation` instead (which is created via
`ConversationManager`).
- `core/src/codex_wrapper.rs` has been deleted in favor of
`ConversationManager`.
- `ConversationManager::new_conversation()` returns `NewConversation`,
which is in line with the `new_conversation` tool we want to add to the
MCP server. Note `NewConversation` includes `SessionConfiguredEvent`, so
we eliminate checks in cases like `codex-rs/core/tests/client.rs` to
verify `SessionConfiguredEvent` is the first event because that is now
internal to `ConversationManager`.
- Quite a bit of code was deleted from
`codex-rs/mcp-server/src/message_processor.rs` since it no longer has to
manage multiple conversations itself: it goes through
`ConversationManager` instead.
- `core/tests/live_agent.rs` has been deleted because I had to update a
bunch of tests, all the tests in that file were ignored, and I don't
think anyone ever ran them, so this was just technical debt at this point.
- Removed `notify_on_sigint()` from `util.rs` (and in a follow-up, I
hope to refactor the blandly-named `util.rs` into more descriptive
files).
- In general, I started renaming local variables from `codex` to
`conversation`, where appropriate, though admittedly I didn't do it
through all the integration tests because that would have added a lot of
noise to this PR.




---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2240).
* #2264
* #2263
* __->__ #2240
2025-08-13 13:38:18 -07:00
Dylan
bc28b87c7b [config] Onboarding flow with persistence (#1929)
## Summary
In collaboration with @gpeal: upgrade the onboarding flow, and persist
user settings.

---------

Co-authored-by: Gabriel Peal <gabriel@openai.com>
2025-08-07 09:27:38 -07:00
Gabriel Peal
8a990b5401 Migrate GitWarning to OnboardingScreen (#1915)
This paves the way to do per-directory approval settings
(https://github.com/openai/codex/pull/1912).

This also lets us pass in a Config/ChatWidgetArgs into onboarding which
can then mutate it and emit the ChatWidgetArgs it wants at the end which
may be modified by the said approval settings.

<img width="1180" height="428" alt="CleanShot 2025-08-06 at 19 30 55"
src="https://github.com/user-attachments/assets/4dcfda42-0f5e-4b6d-a16d-2597109cc31c"
/>
2025-08-06 22:39:07 -04:00
Michael Bolin
afa8f0d617 fix: exit cleanly when ShutdownComplete is received (#1864)
Previous to this PR, `ShutdownComplete` was not being handled correctly
in `codex exec`, so it always ended up printing the following to stderr:

```
ERROR codex_exec: Error receiving event: InternalAgentDied
```

Because we were not breaking out of the loop for `ShutdownComplete`,
inevitably `codex.next_event()` would get called again and
`rx_event.recv()` would fail and the error would get mapped to
`InternalAgentDied`:


ea7d3f27bd/codex-rs/core/src/codex.rs (L190-L197)

For reference, https://github.com/openai/codex/pull/1647 introduced the
`ShutdownComplete` variant.
2025-08-05 19:19:36 -07:00
Michael Bolin
42bd73e150 chore: remove unnecessary default_ prefix (#1854)
This prefix is not inline with the other fields on the `ConfigOverrides`
struct.
2025-08-05 14:42:49 -07:00
Michael Bolin
d365cae077 fix: when using --oss, ensure correct configuration is threaded through correctly (#1859)
This PR started as an investigation with the goal of eliminating the use
of `unsafe { std::env::set_var() }` in `ollama/src/client.rs`, as
setting environment variables in a multithreaded context is indeed
unsafe, and these tests were observed to be flaky as a result.

Though as I dug deeper into the issue, I discovered that the logic for
instantiating `OllamaClient` under test scenarios was not quite right.
In this PR, I aimed to:

- share more code between the two creation codepaths,
`try_from_oss_provider()` and `try_from_provider_with_base_url()`
- use the values from `Config` when setting up Ollama, as we have
various mechanisms for overriding config values, so we should be sure
that we are always using the ultimate `Config` for things such as the
`ModelProviderInfo` associated with the `oss` id

Once this was in place,
`OllamaClient::try_from_provider_with_base_url()` could be used in unit
tests for `OllamaClient` so it was possible to create a properly
configured client without having to set environment variables.
2025-08-05 13:55:32 -07:00
easong-openai
9285350842 Introduce --oss flag to use gpt-oss models (#1848)
This adds support for easily running Codex backed by a local Ollama
instance running our new open source models. See
https://github.com/openai/gpt-oss for details.

If you pass in `--oss` you'll be prompted to install/launch ollama, and
it will automatically download the 20b model and attempt to use it.

We'll likely want to expand this with some options later to make the
experience smoother for users who can't run the 20b or want to run the
120b.

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-08-05 11:31:11 -07:00
Gabriel Peal
8828f6f082 Add an experimental plan tool (#1726)
This adds a tool the model can call to update a plan. The tool doesn't
actually _do_ anything but it gives clients a chance to read and render
the structured plan. We will likely iterate on the prompt and tools
exposed for planning over time.
2025-07-29 14:22:02 -04:00
easong-openai
f8fcaaaf6f Relative instruction file (#1722)
Passing in an instruction file with a bad path led to silent failures,
also instruction relative paths were handled in an unintuitive fashion.
2025-07-29 10:06:05 -07:00
Michael Bolin
2405c40026 chore: update Codex::spawn() to return a struct instead of a tuple (#1677)
Also update `init_codex()` to return a `struct` instead of a tuple, as well.
2025-07-27 20:01:35 -07:00
aibrahim-oai
b4ab7c1b73 Flaky CI fix (#1647)
Flushing before sending `TaskCompleteEvent` and ending the submission
loop to avoid race conditions.
2025-07-23 15:03:26 -07:00
pakrym-oai
6d82907082 Add support for custom base instructions (#1645)
Allows providing custom instructions file as a config parameter and
custom instruction text via MCP tool call.
2025-07-22 09:42:22 -07:00
Dylan
18b2b30841 [mcp-server] Add reply tool call (#1643)
## Summary
Adds a new mcp tool call, `codex-reply`, so we can continue existing
sessions. This is a first draft and does not yet support sessions from
previous processes.

## Testing
- [x] tested with mcp client
2025-07-21 21:01:56 -07:00
Michael Bolin
e16657ca45 feat: add --json flag to codex exec (#1603)
This is designed to facilitate programmatic use of Codex in a more
lightweight way than using `codex mcp`.

Passing `--json` to `codex exec` will print each event as a line of JSON
to stdout. Note that it does not print the individual tokens as they are
streamed, only full messages, as this is aimed at programmatic use
rather than at powering a UI.

<img width="1348" height="1307" alt="image"
src="https://github.com/user-attachments/assets/fc7908de-b78d-46e4-a6ff-c85de28415c7"
/>

I changed the existing `EventProcessor` into a trait and moved the
implementation to `EventProcessorWithHumanOutput`. Then I introduced an
alternative implementation, `EventProcessorWithJsonOutput`. The `--json`
flag determines which implementation to use.
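
In outline, the trait split looks something like this (a simplified sketch; the real `Event` type carries much more):

```rust
struct Event {
    kind: String,
    text: String,
}

trait EventProcessor {
    fn process_event(&mut self, event: &Event);
}

struct EventProcessorWithHumanOutput;
struct EventProcessorWithJsonOutput;

impl EventProcessor for EventProcessorWithHumanOutput {
    fn process_event(&mut self, event: &Event) {
        println!("[{}] {}", event.kind, event.text);
    }
}

impl EventProcessor for EventProcessorWithJsonOutput {
    fn process_event(&mut self, event: &Event) {
        // One JSON object per line; real code would serialize with serde
        // rather than formatting by hand.
        println!("{{\"kind\":{:?},\"text\":{:?}}}", event.kind, event.text);
    }
}

fn make_processor(json: bool) -> Box<dyn EventProcessor> {
    if json {
        Box::new(EventProcessorWithJsonOutput)
    } else {
        Box::new(EventProcessorWithHumanOutput)
    }
}
```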
2025-07-17 15:10:15 -07:00
aibrahim-oai
643ab1f582 Add streaming to exec and tui (#1594)
Added support for streaming in `tui`
Added support for streaming in `exec`


https://github.com/user-attachments/assets/4215892e-d940-452c-a1d0-416ed0cf14eb
2025-07-16 22:26:31 -07:00
Michael Bolin
e0c08cea4f feat: add support for --sandbox flag (#1476)
On a high-level, we try to design `config.toml` so that you don't have
to "comment out a lot of stuff" when testing different options.

Previously, defining a sandbox policy was somewhat at odds with this
principle because you would define the policy as attributes of
`[sandbox]` like so:

```toml
[sandbox]
mode = "workspace-write"
writable_roots = [ "/tmp" ]
```

but if you wanted to temporarily change to a read-only sandbox, you
might feel compelled to modify your file to be:

```toml
[sandbox]
mode = "read-only"
# mode = "workspace-write"
# writable_roots = [ "/tmp" ]
```

Technically, commenting out `writable_roots` would not be strictly
necessary, as `mode = "read-only"` would ignore `writable_roots`, but
it's still a reasonable thing to do to keep things tidy.

Currently, the various values for `mode` do not support that many
attributes, so this is not that hard to maintain, but one could imagine
this becoming more complex in the future.

In this PR, we change Codex CLI so that it no longer recognizes
`[sandbox]`. Instead, it introduces a top-level option, `sandbox_mode`,
and `[sandbox_workspace_write]` is used to further configure the sandbox
when `sandbox_mode = "workspace-write"` is used:

```toml
sandbox_mode = "workspace-write"

[sandbox_workspace_write]
writable_roots = [ "/tmp" ]
```

This feels a bit more future-proof in that it is less tedious to
configure different sandboxes:

```toml
sandbox_mode = "workspace-write"

[sandbox_read_only]
# read-only options here...

[sandbox_workspace_write]
writable_roots = [ "/tmp" ]

[sandbox_danger_full_access]
# danger-full-access options here...
```

In this scheme, you never need to comment out the configuration for an
individual sandbox type: you only need to redefine `sandbox_mode`.

Relatedly, previous to this change, a user had to do `-c
sandbox.mode=read-only` to change the mode on the command line. With
this change, things are arguably a bit cleaner because the equivalent
option is `-c sandbox_mode=read-only` (and now `-c
sandbox_workspace_write=...` can be set separately).

Though more importantly, we introduce the `-s/--sandbox` option to the
CLI, which maps directly to `sandbox_mode` in `config.toml`, making
config override behavior easier to reason about. Moreover, as you can
see in the updates to the various Markdown files, it is much easier to
explain how to configure sandboxing when things like `--sandbox
read-only` can be used as an example.

Relatedly, this cleanup also made it straightforward to add support for
a `sandbox` option for Codex when used as an MCP server (see the changes
to `mcp-server/src/codex_tool_config.rs`).

Fixes https://github.com/openai/codex/issues/1248.
2025-07-07 22:31:30 -07:00
Michael Bolin
50924101d2 feat: add --dangerously-bypass-approvals-and-sandbox (#1384)
This PR reworks `assess_command_safety()` so that the combination of
`AskForApproval::Never` and `SandboxPolicy::DangerFullAccess` ensures
that commands are run without _any_ sandbox and the user should never be
prompted. In turn, it adds support for a new
`--dangerously-bypass-approvals-and-sandbox` flag (that cannot be used
with `--approval-policy` or `--full-auto`) that sets both of those
options.
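
A sketch of the combination this PR special-cases (the enum and function shapes are illustrative, not the exact `assess_command_safety()` signature):

```rust
enum AskForApproval { Never, UnlessTrusted }
enum SandboxPolicy { ReadOnly, WorkspaceWrite, DangerFullAccess }

enum SafetyCheck {
    AutoApprove { sandboxed: bool },
    AskUser,
}

fn assess_command_safety(approval: AskForApproval, sandbox: SandboxPolicy) -> SafetyCheck {
    match (approval, sandbox) {
        // --dangerously-bypass-approvals-and-sandbox sets exactly this pair:
        // run the command with no sandbox and never prompt the user.
        (AskForApproval::Never, SandboxPolicy::DangerFullAccess) => {
            SafetyCheck::AutoApprove { sandboxed: false }
        }
        (AskForApproval::Never, _) => SafetyCheck::AutoApprove { sandboxed: true },
        _ => SafetyCheck::AskUser,
    }
}
```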

Fixes https://github.com/openai/codex/issues/1254
2025-06-25 12:36:10 -07:00
Michael Bolin
0776d78357 feat: redesign sandbox config (#1373)
This is a major redesign of how sandbox configuration works and aims to
fix https://github.com/openai/codex/issues/1248. Specifically, it
replaces `sandbox_permissions` in `config.toml` (and the
`-s`/`--sandbox-permission` CLI flags) with a "table" with effectively
three variants:

```toml
# Safest option: full disk is read-only, but writes and network access are disallowed.
[sandbox]
mode = "read-only"

# The cwd of the Codex task is writable, as well as $TMPDIR on macOS.
# writable_roots can be used to specify additional writable folders.
[sandbox]
mode = "workspace-write"
writable_roots = []  # Optional, defaults to the empty list.
network_access = false  # Optional, defaults to false.

# Disable sandboxing: use at your own risk!!!
[sandbox]
mode = "danger-full-access"
```

This should make sandboxing easier to reason about. While we have
dropped support for `-s`, the way it works now is:

- no flags => `read-only`
- `--full-auto` => `workspace-write`
- currently, there is no way to specify `danger-full-access` via a CLI
flag, but we will revisit that as part of
https://github.com/openai/codex/issues/1254

Outstanding issue:

- As noted in the `TODO` on `SandboxPolicy::is_unrestricted()`, we are
still conflating sandbox preferences with approval preferences in that
case, which needs to be cleaned up.
2025-06-24 16:59:47 -07:00
Michael Bolin
e81327e5f4 feat: add hide_agent_reasoning config option (#1181)
This PR introduces a `hide_agent_reasoning` config option (that defaults
to `false`) that users can enable to make the output less verbose by
suppressing reasoning output.

To test, verified that this includes agent reasoning in the output:

```
echo hello | just exec
```

whereas this does not:

```
echo hello | just exec --config hide_agent_reasoning=true
```
2025-05-30 23:14:56 -07:00
Michael Bolin
cf1d070538 feat: grab-bag of improvements to exec output (#1179)
Fixes:

* Instantiate `EventProcessor` earlier in `lib.rs` so
`print_config_summary()` can be an instance method of it and leverage
its various `Style` fields to ensure it honors `with_ansi` properly.
* After printing the config summary, print out user's prompt with the
heading `User instructions:`. As noted in the comment, now that we can
read the instructions via stdin as of #1178, it is helpful to the user
to ensure they know what instructions were given to Codex.
* Use same colors/bold/italic settings for headers as the TUI, making
the output a bit easier to read.
2025-05-30 16:22:10 -07:00