# Config
Codex configuration gives you fine-grained control over the model, execution environment, and integrations available to the CLI. Use this guide alongside the workflows in [`codex exec`](./exec.md), the guardrails in [Sandbox & approvals](./sandbox.md), and project guidance from [AGENTS.md discovery](./agents_md.md).
## Quick navigation
- [Feature flags](#feature-flags)
- [Model selection](#model-selection)
- [Execution environment](#execution-environment)
- [MCP integration](#mcp-integration)
- [Observability and telemetry](#observability-and-telemetry)
- [Profiles and overrides](#profiles-and-overrides)
- [Reference table](#config-reference)
Codex supports several mechanisms for setting config values:

- Config-specific command-line flags, such as `--model o3` (highest precedence).
- A generic `-c`/`--config` flag that takes a `key=value` pair, such as `--config model="o3"`.
  - The key can contain dots to set a value deeper than the root, e.g. `--config model_providers.openai.wire_api="chat"`.
  - For consistency with `config.toml`, values are a string in TOML format rather than JSON format, so use `key='{a = 1, b = 2}'` rather than `key='{"a": 1, "b": 2}'`.
    - The quotes around the value are necessary: without them, your shell would split the config argument on spaces, and `codex` would receive `-c key={a` with the (invalid) additional arguments `=`, `1,`, `b`, `=`, `2}`.
  - Values can contain any TOML object, such as `--config shell_environment_policy.include_only='["PATH", "HOME", "USER"]'`.
  - If `value` cannot be parsed as a valid TOML value, it is treated as a string value. This means that `-c model='"o3"'` and `-c model=o3` are equivalent.
    - In the first case, the value is the TOML string `"o3"`, while in the second the value is `o3`, which is not valid TOML and is therefore treated as the TOML string `"o3"`.
  - Because quotes are interpreted by your shell, `-c key="true"` will be correctly interpreted in TOML as `key = true` (a boolean) and not `key = "true"` (a string). If for some reason you needed the string `"true"`, you would use `-c key='"true"'` (note the two sets of quotes).
- The `$CODEX_HOME/config.toml` configuration file, where the `CODEX_HOME` environment variable defaults to `~/.codex`. (Note `CODEX_HOME` is also where logs and other Codex-related information are stored.)
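
For example, combining several of these mechanisms in a single invocation (a sketch reusing the values above):

```shell
# Dedicated flag for the model, plus generic -c/--config overrides.
# The single quotes keep the TOML array intact as one shell argument.
codex --model o3 \
  -c model_providers.openai.wire_api="chat" \
  -c shell_environment_policy.include_only='["PATH", "HOME", "USER"]'
```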
Both the `--config` flag and the `config.toml` file support the following options:
## Feature flags
Optional and experimental capabilities are toggled via the `[features]` table in `$CODEX_HOME/config.toml`. If you see a deprecation notice mentioning a legacy key (for example `experimental_use_exec_command_tool`), move the setting into `[features]` or pass `--enable <feature>`.
```toml
[features]
streamable_shell = true # enable the streamable exec tool
web_search_request = true # allow the model to request web searches
# view_image_tool defaults to true; omit to keep defaults
```
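
Equivalently, a feature can be switched on for a single run from the command line; `streamable_shell` below stands in for whichever feature key you need:

```shell
codex --enable streamable_shell
```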
Supported features:
| Key | Default | Stage | Description |
| ----------------------------------------- | :-----: | ------------ | ---------------------------------------------------- |
| `unified_exec` | false | Experimental | Use the unified PTY-backed exec tool |
| `streamable_shell` | false | Experimental | Use the streamable exec-command/write-stdin pair |
| `rmcp_client`                              | false   | Experimental | Enable OAuth support for streamable HTTP MCP servers  |
| `apply_patch_freeform` | false | Beta | Include the freeform `apply_patch` tool |
| `view_image_tool` | true | Stable | Include the `view_image` tool |
| `web_search_request` | false | Stable | Allow the model to issue web searches |
| `experimental_sandbox_command_assessment` | false | Experimental | Enable model-based sandbox risk assessment |
| `ghost_commit` | false | Experimental | Create a ghost commit each turn |
| `enable_experimental_windows_sandbox` | false | Experimental | Use the Windows restricted-token sandbox |
Notes:
- Omit a key to accept its default.
- Legacy booleans such as `experimental_use_exec_command_tool`, `experimental_use_unified_exec_tool`, `include_apply_patch_tool`, and similar `experimental_use_*` keys are deprecated; setting the corresponding `[features].<key>` avoids repeated warnings. A migration sketch follows below.
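
As a migration sketch (assuming the legacy `experimental_use_exec_command_tool` key corresponds to the `streamable_shell` feature above):

```toml
# Deprecated top-level key; emits a warning on every run:
# experimental_use_exec_command_tool = true

# Preferred replacement:
[features]
streamable_shell = true
```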
## Model selection
### model
The model that Codex should use.
```toml
model = "gpt-5" # overrides the default ("gpt-5-codex" on macOS/Linux, "gpt-5" on Windows)
```
### model_providers
This option lets you add to the default set of model providers bundled with Codex. The map key becomes the value you use with `model_provider` to select the provider.
> [!NOTE]
> Built-in providers are not overwritten when you reuse their key. Entries you add only take effect when the key is **new**; for example `[model_providers.openai]` leaves the original OpenAI definition untouched. To customize the bundled OpenAI provider, prefer the dedicated knobs (for example the `OPENAI_BASE_URL` environment variable) or register a new provider key and point `model_provider` at it.
For example, if you wanted to add a provider that uses the OpenAI 4o model via the chat completions API, then you could add the following configuration:
```toml
# Recall that in TOML, root keys must be listed before tables.
model = "gpt-4o"
model_provider = "openai-chat-completions"
[model_providers.openai-chat-completions]
# Name of the provider that will be displayed in the Codex UI.
name = "OpenAI using Chat Completions"
# The path `/chat/completions` will be appended to this URL to make the POST
# request for the chat completions.
base_url = "https://api.openai.com/v1"
# If `env_key` is set, identifies an environment variable that must be set when
# using Codex with this provider. The value of the environment variable must be
# non-empty and will be used in the `Bearer TOKEN` HTTP header for the POST request.
env_key = "OPENAI_API_KEY"
# Valid values for wire_api are "chat" and "responses". Defaults to "chat" if omitted.
wire_api = "chat"
# If necessary, extra query params that need to be added to the URL.
# See the Azure example below.
query_params = {}
```
Note this makes it possible to use Codex CLI with non-OpenAI models, so long as they use a wire API that is compatible with the OpenAI chat completions API. For example, you could define the following provider to use Codex CLI with Ollama running locally:
```toml
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
```
Or a third-party provider (using a distinct environment variable for the API key):
```toml
[model_providers.mistral]
name = "Mistral"
base_url = "https://api.mistral.ai/v1"
env_key = "MISTRAL_API_KEY"
```
It is also possible to configure a provider to include extra HTTP headers with a request. These can be hardcoded values (`http_headers`) or values read from environment variables (`env_http_headers`):
```toml
[model_providers.example]
# name, base_url, ...
# This will add the HTTP header `X-Example-Header` with value `example-value`
# to each request to the model provider.
http_headers = { "X-Example-Header" = "example-value" }
# This will add the HTTP header `X-Example-Features` with the value of the
# `EXAMPLE_FEATURES` environment variable to each request to the model provider
# _if_ the environment variable is set and its value is non-empty.
env_http_headers = { "X-Example-Features" = "EXAMPLE_FEATURES" }
```
#### Azure model provider example
Note that Azure requires `api-version` to be passed as a query parameter, so be sure to specify it as part of `query_params` when defining the Azure provider:
```toml
[model_providers.azure]
name = "Azure"
# Make sure you set the appropriate subdomain for this URL.
base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY" # Or "OPENAI_API_KEY", whichever you use.
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
```
Export your key before launching Codex: `export AZURE_OPENAI_API_KEY=…`
#### Per-provider network tuning
The following optional settings control retry behaviour and streaming idle timeouts **per model provider**. They must be specified inside the corresponding `[model_providers.<id>]` block in `config.toml`. (Older releases accepted top-level keys; those are now ignored.)
Example:
```toml
[model_providers.openai]
name = "OpenAI"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
# network tuning overrides (all optional; falls back to built-in defaults)
request_max_retries = 4 # retry failed HTTP requests
stream_max_retries = 10 # retry dropped SSE streams
stream_idle_timeout_ms = 300000 # 5m idle timeout
```
##### request_max_retries
How many times Codex will retry a failed HTTP request to the model provider. Defaults to `4`.
##### stream_max_retries
Number of times Codex will attempt to reconnect when a streaming response is interrupted. Defaults to `5`.
##### stream_idle_timeout_ms
How long Codex will wait for activity on a streaming response before treating the connection as lost. Defaults to `300_000` (5 minutes).
### model_provider
Identifies which provider to use from the `model_providers` map. Defaults to `"openai"`. You can override the `base_url` for the built-in `openai` provider via the `OPENAI_BASE_URL` environment variable.
Note that if you override `model_provider`, then you likely want to override
`model` as well. For example, if you are running Ollama with Mistral locally,
then you would need to add the following to your config in addition to the new entry in the `model_providers` map:
```toml
model_provider = "ollama"
model = "mistral"
```
### model_reasoning_effort
If the selected model is known to support reasoning (for example: `o3`, `o4-mini`, `codex-*`, `gpt-5`, `gpt-5-codex`), reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#get-started-with-reasoning), this can be set to:
- `"minimal"`
- `"low"`
- `"medium"` (default)
- `"high"`
Note: to minimize reasoning, choose `"minimal"`.
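
For example, to request more thorough reasoning:

```toml
model_reasoning_effort = "high"
```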
### model_reasoning_summary
If the model name starts with `"o"` (as in `"o3"` or `"o4-mini"`) or `"codex"`, reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#reasoning-summaries), this can be set to:
- `"auto"` (default)
- `"concise"`
- `"detailed"`
To disable reasoning summaries, set `model_reasoning_summary` to `"none"` in your config:
```toml
model_reasoning_summary = "none" # disable reasoning summaries
```
### model_verbosity
Controls output length/detail on GPT-5 family models when using the Responses API. Supported values:
- `"low"`
- `"medium"` (default when omitted)
- `"high"`
When set, Codex includes a `text` object in the request payload with the configured verbosity, for example: `"text": { "verbosity": "low" }`.
Example:
```toml
model = "gpt-5"
model_verbosity = "low"
```
Note: This applies only to providers using the Responses API. Chat Completions providers are unaffected.
### model_supports_reasoning_summaries
By default, `reasoning` is only set on requests to OpenAI models that are known to support it. To force `reasoning` to be set on requests to the current model, add the following to `config.toml`:
```toml
model_supports_reasoning_summaries = true
```
### model_context_window
The size of the context window for the model, in tokens.
In general, Codex knows the context window for the most common OpenAI models, but if you are using a new model with an old version of the Codex CLI, then you can use `model_context_window` to tell Codex what value to use to determine how much context is left during a conversation.
### model_max_output_tokens
This is analogous to `model_context_window`, but for the maximum number of output tokens for the model.
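
Both options take a token count. The numbers below are placeholders; use the limits published for your model:

```toml
model_context_window = 200000    # total tokens the model can attend to
model_max_output_tokens = 100000 # upper bound on generated tokens
```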
> See also [`codex exec`](./exec.md) for how these model settings influence non-interactive runs.
## Execution environment
### approval_policy
Determines when the user should be prompted to approve a command before Codex executes it:
```toml
# Codex has hardcoded logic that defines a set of "trusted" commands.
# Setting the approval_policy to `untrusted` means that Codex will prompt the
# user before running a command not in the "trusted" set.
#
# See https://github.com/openai/codex/issues/1260 for the plan to enable
# end-users to define their own trusted commands.
approval_policy = "untrusted"
```
If you want to be notified whenever a command fails, use "on-failure":
```toml
# If the command fails when run in the sandbox, Codex asks for permission to
# retry the command outside the sandbox.
approval_policy = "on-failure"
```
If you want the model to run until it decides that it needs to ask you for escalated permissions, use "on-request":
```toml
# The model decides when to escalate
approval_policy = "on-request"
```
Alternatively, you can have the model run until it is done, and never ask to run a command with escalated permissions:
```toml
# The user is never prompted: if a command fails, Codex automatically tries
# something else. Note the `exec` subcommand always uses this mode.
approval_policy = "never"
```
### sandbox_mode
Codex executes model-generated shell commands inside an OS-level sandbox.
In most cases you can pick the desired behaviour with a single option:
```toml
# same as `--sandbox read-only`
sandbox_mode = "read-only"
```
The default policy is `read-only` , which means commands can read any file on
disk, but attempts to write a file or access the network will be blocked.
A more relaxed policy is `workspace-write`. When specified, the current working directory for the Codex task will be writable (as well as `$TMPDIR` on macOS). Note that the CLI defaults to using the directory where it was spawned as `cwd`, though this can be overridden using `--cwd/-C`.
On macOS (and soon Linux), all writable roots (including `cwd`) that contain a `.git/` folder _as an immediate child_ will have that `.git/` folder configured as read-only while the rest of the Git repository remains writable. This means that commands like `git commit` will fail by default (as they entail writing to `.git/`) and will require Codex to ask for permission.
```toml
# same as `--sandbox workspace-write`
sandbox_mode = "workspace-write"
# Extra settings that only apply when `sandbox_mode = "workspace-write"`.
[sandbox_workspace_write]
# By default, the cwd for the Codex session will be writable as well as $TMPDIR
# (if set) and /tmp (if it exists). Setting the respective options to `true`
# will override those defaults.
exclude_tmpdir_env_var = false
exclude_slash_tmp = false
# Optional list of _additional_ writable roots beyond $TMPDIR and /tmp.
writable_roots = ["/Users/YOU/.pyenv/shims"]
# Allow the command being run inside the sandbox to make outbound network
# requests. Disabled by default.
network_access = false
```
To disable sandboxing altogether, specify `danger-full-access` like so:
```toml
# same as `--sandbox danger-full-access`
sandbox_mode = "danger-full-access"
```
This is reasonable to use if Codex is running in an environment that provides its own sandboxing (such as a Docker container), making further sandboxing unnecessary.
Using this option may also be necessary if you run Codex in environments where its native sandboxing mechanisms are unsupported, such as older Linux kernels or on Windows.
### tools.\*
Use the optional `[tools]` table to toggle built-in tools that the agent may call. `web_search` stays off unless you opt in, while `view_image` is now enabled by default:
```toml
[tools]
web_search = true # allow Codex to issue first-party web searches without prompting you (deprecated)
view_image = false # disable image uploads (they're enabled by default)
```
`web_search` is deprecated; use the `web_search_request` feature flag instead.
The `view_image` toggle is useful when you want to include screenshots or diagrams from your repo without pasting them manually. Codex still respects sandboxing: it can only attach files inside the workspace roots you allow.
### approval_presets
Codex provides three main Approval Presets:
- Read Only: Codex can read files and answer questions; edits, running commands, and network access require approval.
- Auto: Codex can read files, make edits, and run commands in the workspace without approval; asks for approval outside the workspace or for network access.
- Full Access: Full disk and network access without prompts; extremely risky.
You can further customize how Codex runs at the command line using the `--ask-for-approval` and `--sandbox` options.
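
For instance, something close to the Auto preset can be spelled out with explicit flags (an approximation, not an exact mapping):

```shell
codex --ask-for-approval on-request --sandbox workspace-write
```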
> See also [Sandbox & approvals](./sandbox.md) for in-depth examples and platform-specific behaviour.
### shell_environment_policy
Codex spawns subprocesses (e.g. when executing a `local_shell` tool-call suggested by the assistant). By default it now passes **your full environment** to those subprocesses. You can tune this behavior via the **`shell_environment_policy`** block in `config.toml`:
```toml
[shell_environment_policy]
# inherit can be "all" (default), "core", or "none"
inherit = "core"
# set to true to *skip* the filter for `"*KEY*"` and `"*TOKEN*"`
ignore_default_excludes = false
# exclude patterns (case-insensitive globs)
exclude = ["AWS_*", "AZURE_*"]
# force-set / override values
set = { CI = "1" }
# if provided, *only* vars matching these patterns are kept
include_only = ["PATH", "HOME"]
```
| Field | Type | Default | Description |
| ------------------------- | -------------------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| `inherit`                 | string               | `all`   | Starting template for the environment:<br>`all` (clone full parent env), `core` (`HOME`, `PATH`, `USER`, …), or `none` (start empty). |
| `ignore_default_excludes` | boolean              | `false` | When `false`, Codex removes any var whose **name** contains `KEY`, `SECRET`, or `TOKEN` (case-insensitive) before other rules run.    |
| `exclude`                 | array<string>        | `[]`    | Case-insensitive glob patterns to drop after the default filter.<br>Examples: `"AWS_*"`, `"AZURE_*"`.                                 |
| `set`                     | table<string,string> | `{}`    | Explicit key/value overrides or additions – always win over inherited values.                                                         |
| `include_only`            | array<string>        | `[]`    | If non-empty, a whitelist of patterns; only variables that match _one_ pattern survive the final step. (Generally used with `inherit = "all"`.) |
The patterns are **glob style**, not full regular expressions: `*` matches any
number of characters, `?` matches exactly one, and character classes like
`[A-Z]`/`[^0-9]` are supported. Matching is always **case-insensitive**. This
syntax is documented in code as `EnvironmentVariablePattern` (see
`core/src/config_types.rs`).
If you just need a clean slate with a few custom entries you can write:
```toml
[shell_environment_policy]
inherit = "none"
set = { PATH = "/usr/bin", MY_FLAG = "1" }
```
Currently, `CODEX_SANDBOX_NETWORK_DISABLED=1` is also added to the environment whenever network access is disabled. This is not configurable.
## MCP integration
### mcp_servers
You can configure Codex to use [MCP servers](https://modelcontextprotocol.io/about) to give Codex access to external applications, resources, or services.
#### Server configuration
##### STDIO
[STDIO servers](https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#stdio) are MCP servers that you can launch directly via commands on your computer.
```toml
# The top-level table name must be `mcp_servers`
# The sub-table name (`server_name` in this example) can be anything you would like.
[mcp_servers.server_name]
command = "npx"
# Optional
args = ["-y", "mcp-server"]
# Optional: propagate additional env vars to the MCP server.
# A default whitelist of env vars will be propagated to the MCP server.
# https://github.com/openai/codex/blob/main/codex-rs/rmcp-client/src/utils.rs#L82
2025-05-29 16:59:35 -07:00
env = { "API_KEY" = "value" }
2025-10-06 08:43:50 -07:00
# or
[mcp_servers.server_name.env]
API_KEY = "value"
2025-10-16 21:24:43 -07:00
# Optional: Additional list of environment variables that will be whitelisted in the MCP server's environment.
env_vars = ["API_KEY2"]
# Optional: cwd that the command will be run from
cwd = "/Users/<user>/code/my-server"
2025-09-29 23:44:16 -07:00
```
##### Streamable HTTP

[Streamable HTTP servers](https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#streamable-http) let Codex talk to resources accessed via an HTTP URL (either on localhost or another domain).
```toml
[mcp_servers.figma]
url = "https://mcp.figma.com/mcp"

# Optional environment variable containing a bearer token to use for auth
bearer_token_env_var = "ENV_VAR"

# Optional map of headers with hard-coded values.
http_headers = { "HEADER_NAME" = "HEADER_VALUE" }

# Optional map of headers whose values will be read from the named environment variables.
env_http_headers = { "HEADER_NAME" = "ENV_VAR" }
```
Streamable HTTP connections always use the experimental Rust MCP client under the hood, so expect occasional rough edges. OAuth login flows are additionally gated behind the `experimental_use_rmcp_client` flag (the legacy spelling of the `rmcp_client` feature):

```toml
experimental_use_rmcp_client = true
```

After enabling it, run `codex mcp login <server-name>` for servers that support OAuth.
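
For the `figma` server configured above, that looks like:

```shell
codex mcp login figma
```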
#### Other configuration options

These options apply per server, under the corresponding `[mcp_servers.<id>]` table:

```toml
[mcp_servers.server_name]
# Optional: override the default 10s startup timeout
startup_timeout_sec = 20

# Optional: override the default 60s per-tool timeout
tool_timeout_sec = 30

# Optional: disable a server without removing it
enabled = false

# Optional: only expose a subset of tools from this server
enabled_tools = ["search", "summarize"]

# Optional: hide specific tools (applied after `enabled_tools`, if set)
disabled_tools = ["search"]
```
When both `enabled_tools` and `disabled_tools` are specified, Codex first restricts the server to the allow-list and then removes any tools that appear in the deny-list; with the example above, only `summarize` remains available.
#### Experimental RMCP client

This flag enables OAuth support for streamable HTTP servers.

```toml
experimental_use_rmcp_client = true

[mcp_servers.server_name]
…
```
#### MCP CLI commands

```shell
# List all available commands
codex mcp --help

# Add a server (env can be repeated; `--` separates the launcher command)
codex mcp add docs -- docs-server --port 4000

# List configured servers (pretty table or JSON)
codex mcp list
codex mcp list --json

# Show one server (table or JSON)
codex mcp get docs
codex mcp get docs --json

# Remove a server
codex mcp remove docs

# Log in to a streamable HTTP server that supports OAuth
codex mcp login SERVER_NAME

# Log out from a streamable HTTP server that supports OAuth
codex mcp logout SERVER_NAME
```
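To pass environment variables to a stdio server when adding it, repeat the env flag before the `--` separator (`--env KEY=VALUE` here is an assumption based on the comment above; confirm the exact flag with `codex mcp add --help`):

```shell
codex mcp add docs --env API_KEY=value --env LOG_LEVEL=debug -- docs-server --port 4000
```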
### Examples of useful MCPs

There is an ever-growing list of MCP servers that can be helpful while you are working with Codex. Some of the most common ones we've seen are:

- [Context7](https://github.com/upstash/context7) — connect to a wide range of up-to-date developer documentation
- Figma [Local](https://developers.figma.com/docs/figma-mcp-server/local-server-installation/) and [Remote](https://developers.figma.com/docs/figma-mcp-server/remote-server-installation/) — access to your Figma designs
- [Playwright](https://www.npmjs.com/package/@playwright/mcp) — control and inspect a browser using Playwright
- [Chrome Developer Tools](https://github.com/ChromeDevTools/chrome-devtools-mcp/) — control and inspect a Chrome browser
- [Sentry](https://docs.sentry.io/product/sentry-mcp/#codex) — access to your Sentry logs
- [GitHub](https://github.com/github/github-mcp-server) — control over your GitHub account beyond what git allows (PRs, issues, etc.)
## Observability and telemetry

### otel
Codex can emit [OpenTelemetry](https://opentelemetry.io/) **log events** that describe each run: outbound API requests, streamed responses, user input, tool-approval decisions, and the result of every tool invocation. Export is **disabled by default** so local runs remain self-contained. Opt in by adding an `[otel]` table and choosing an exporter.
```toml
[otel]
environment = "staging" # defaults to "dev"
exporter = "none" # defaults to "none"; set to otlp-http or otlp-grpc to send events
log_user_prompt = false # defaults to false; redact prompt text unless explicitly enabled
```
Codex tags every exported event with `service.name = $ORIGINATOR` (the same value sent in the `originator` header, `codex_cli_rs` by default), the CLI version, and an `env` attribute so downstream collectors can distinguish dev/staging/prod traffic. Only telemetry produced inside the `codex_otel` crate—the events listed below—is forwarded to the exporter.

### Event catalog

Every event shares a common set of metadata fields: `event.timestamp`, `conversation.id`, `app.version`, `auth_mode` (when available), `user.account_id` (when available), `user.email` (when available), `terminal.type`, `model`, and `slug`.

With OTEL enabled, Codex emits the following event types (in addition to the metadata above):

- `codex.conversation_starts`
  - `provider_name`
  - `reasoning_effort` (optional)
  - `reasoning_summary`
  - `context_window` (optional)
  - `max_output_tokens` (optional)
  - `auto_compact_token_limit` (optional)
  - `approval_policy`
  - `sandbox_policy`
  - `mcp_servers` (comma-separated list)
  - `active_profile` (optional)
- `codex.api_request`
  - `attempt`
  - `duration_ms`
  - `http.response.status_code` (optional)
  - `error.message` (failures)
- `codex.sse_event`
  - `event.kind`
  - `duration_ms`
  - `error.message` (failures)
  - `input_token_count` (responses only)
  - `output_token_count` (responses only)
  - `cached_token_count` (responses only, optional)
  - `reasoning_token_count` (responses only, optional)
  - `tool_token_count` (responses only)
- `codex.user_prompt`
  - `prompt_length`
  - `prompt` (redacted unless `log_user_prompt = true`)
- `codex.tool_decision`
  - `tool_name`
  - `call_id`
  - `decision` (`approved`, `approved_for_session`, `denied`, or `abort`)
  - `source` (`config` or `user`)
- `codex.tool_result`
  - `tool_name`
  - `call_id` (optional)
  - `arguments` (optional)
  - `duration_ms` (execution time for the tool)
  - `success` (`"true"` or `"false"`)
  - `output`

These event shapes may change as we iterate.
### Choosing an exporter

Set `otel.exporter` to control where events go:

- `none` – leaves instrumentation active but skips exporting. This is the default.
- `otlp-http` – posts OTLP log records to an OTLP/HTTP collector. Specify the endpoint, protocol, and headers your collector expects:

```toml
[otel.exporter.otlp-http]
endpoint = "https://otel.example.com/v1/logs"
protocol = "binary"
headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
```
- `otlp-grpc` – streams OTLP log records over gRPC. Provide the endpoint and any metadata headers:

```toml
[otel.exporter.otlp-grpc]
endpoint = "https://otel.example.com:4317"
headers = { "x-otlp-meta" = "abc123" }
```
If the exporter is `none`, nothing is written anywhere; otherwise you must run or point to your own collector. All exporters run on a background batch worker that is flushed on shutdown.

If you build Codex from source, the OTEL crate is still behind an `otel` feature flag; the official prebuilt binaries ship with the feature enabled. When the feature is disabled, the telemetry hooks become no-ops, so the CLI continues to function without the extra dependencies.
### notify
Specify a program that will be executed to get notified about events generated by Codex. The program receives the notification payload as a single JSON string argument, e.g.:

```json
{
  "type": "agent-turn-complete",
  "thread-id": "b5f6c1c2-1111-2222-3333-444455556666",
  "turn-id": "12345",
  "cwd": "/Users/alice/projects/example",
  "input-messages": ["Rename `foo` to `bar` and update the callsites."],
  "last-assistant-message": "Rename complete and verified `cargo build` succeeds."
}
```
The `"type"` property will always be set. Currently, `"agent-turn-complete"` is the only notification type that is supported.
2025-10-14 11:55:10 -07:00
`"thread-id"` contains a string that identifies the Codex session that produced the notification; you can use it to correlate multiple turns that belong to the same task.
2025-10-20 16:53:03 -07:00
`"cwd"` reports the absolute working directory for the session so scripts can disambiguate which project triggered the notification.
2025-05-29 16:59:35 -07:00
As an example, here is a Python script that parses the JSON and decides whether to show a desktop push notification using [terminal-notifier](https://github.com/julienXX/terminal-notifier) on macOS:

```python
#!/usr/bin/env python3

import json
import subprocess
import sys


def main() -> int:
    if len(sys.argv) != 2:
        print("Usage: notify.py <NOTIFICATION_JSON>")
        return 1

    try:
        notification = json.loads(sys.argv[1])
    except json.JSONDecodeError:
        return 1

    match notification_type := notification.get("type"):
        case "agent-turn-complete":
            assistant_message = notification.get("last-assistant-message")
            if assistant_message:
                title = f"Codex: {assistant_message}"
            else:
                title = "Codex: Turn Complete!"

            input_messages = notification.get("input-messages", [])
            message = " ".join(input_messages)
            title += message
        case _:
            print(f"not sending a push notification for: {notification_type}")
            return 0

    thread_id = notification.get("thread-id", "")
    subprocess.check_output(
        [
            "terminal-notifier",
            "-title",
            title,
            "-message",
            message,
            "-group",
            "codex-" + thread_id,
            "-ignoreDnD",
            "-activate",
            "com.googlecode.iterm2",
        ]
    )

    return 0


if __name__ == "__main__":
    sys.exit(main())
```
To have Codex use this script for notifications, you would configure it via `notify` in `~/.codex/config.toml` using the appropriate path to `notify.py` on your computer:
```toml
notify = ["python3", "/Users/mbolin/.codex/notify.py"]
```
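You can test the hook by invoking the script directly with a sample payload (adjust the path to wherever you saved `notify.py`):

```shell
python3 ~/.codex/notify.py '{"type": "agent-turn-complete", "last-assistant-message": "Done!"}'
```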
> [!NOTE]
> Use `notify` for automation and integrations: Codex invokes your external program with a single JSON argument for each event, independent of the TUI. If you only want lightweight desktop notifications while using the TUI, prefer `tui.notifications`, which uses terminal escape codes and requires no external program. You can enable both; `tui.notifications` covers in-TUI alerts (e.g., approval prompts), while `notify` is best for system-level hooks or custom notifiers. Currently, `notify` emits only `agent-turn-complete`, whereas `tui.notifications` supports `agent-turn-complete` and `approval-requested` with optional filtering.
### hide_agent_reasoning

Codex intermittently emits "reasoning" events that show the model's internal "thinking" before it produces a final answer. Some users may find these events distracting, especially in CI logs or minimal terminal output.

Setting `hide_agent_reasoning` to `true` suppresses these events in **both** the TUI and the headless `exec` subcommand:

```toml
hide_agent_reasoning = true # defaults to false
```
### show_raw_agent_reasoning

Surfaces the model's raw chain-of-thought ("raw reasoning content") when available.

Notes:

- Only takes effect if the selected model/provider actually emits raw reasoning content. Many models do not. When unsupported, this option has no visible effect.
- Raw reasoning may include intermediate thoughts or sensitive context. Enable only if acceptable for your workflow.

Example:

```toml
show_raw_agent_reasoning = true # defaults to false
```
## Profiles and overrides

### profiles

A _profile_ is a collection of configuration values that can be set together. Multiple profiles can be defined in `config.toml`, and you can specify the one you want to use at runtime via the `--profile` flag.

Here is an example of a `config.toml` that defines multiple profiles:

```toml
model = "o3"
approval_policy = "untrusted"

# Setting `profile` is equivalent to specifying `--profile o3` on the command
# line, though the `--profile` flag can still be used to override this value.
profile = "o3"

[model_providers.openai-chat-completions]
name = "OpenAI using Chat Completions"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"

[profiles.o3]
model = "o3"
model_provider = "openai"
approval_policy = "never"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"

[profiles.gpt3]
model = "gpt-3.5-turbo"
model_provider = "openai-chat-completions"

[profiles.zdr]
model = "o3"
model_provider = "openai"
approval_policy = "on-failure"
```
Users can specify config values at multiple levels. Order of precedence is as follows:

1. custom command-line argument, e.g., `--model o3`
2. as part of a profile, where the profile is selected via the `--profile` CLI flag (or via `profile` in the config file itself)
3. as an entry in `config.toml`, e.g., `model = "o3"`
4. the default value that comes with Codex CLI (i.e., Codex CLI defaults to `gpt-5-codex`)

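For example, with the `config.toml` above:

```shell
# Uses profiles.gpt3, so the model is gpt-3.5-turbo
codex --profile gpt3

# A command-line flag outranks the profile, so the model is o3
codex --profile gpt3 --model o3
```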
### history

By default, Codex CLI records messages sent to the model in `$CODEX_HOME/history.jsonl`. Note that on UNIX, the file permissions are set to `0o600`, so it should only be readable and writable by the owner.

To disable this behavior, configure `[history]` as follows:

```toml
[history]
persistence = "none" # "save-all" is the default value
```
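On UNIX you can verify the permissions yourself (assuming the default `CODEX_HOME` of `~/.codex`):

```shell
ls -l ~/.codex/history.jsonl
# -rw-------  ...  (0o600: owner read/write only)
```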
### file_opener

Identifies the editor/URI scheme to use for hyperlinking citations in model output. If set, citations to files in the model output will be hyperlinked using the specified URI scheme so they can be ctrl/cmd-clicked from the terminal to open them.

For example, if the model output includes a reference such as `【F:/home/user/project/main.py†L42-L50】`, then this would be rewritten to link to the URI `vscode://file/home/user/project/main.py:42`.

Note this is **not** a general editor setting (like `$EDITOR`), as it only accepts a fixed set of values:
- `"vscode"` (default)
- `"vscode-insiders"`
- `"windsurf"`
- `"cursor"`
- `"none"` to explicitly disable this feature

Currently, `"vscode"` is the default, though Codex does not verify VS Code is installed. As such, `file_opener` may default to `"none"` or something else in the future.

### project_doc_max_bytes
Maximum number of bytes to read from an `AGENTS.md` file to include in the instructions sent with the first turn of a session. Defaults to 32 KiB.
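
For example, to double the limit to 64 KiB:

```toml
project_doc_max_bytes = 65536 # 64 KiB
```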
### project_doc_fallback_filenames

Ordered list of additional filenames to look for when `AGENTS.md` is missing at a given directory level. The CLI always checks `AGENTS.md` first; the configured fallbacks are tried in the order provided. This lets monorepos that already use alternate instruction files (for example, `CLAUDE.md`) work out of the box while you migrate to `AGENTS.md` over time.
```toml
project_doc_fallback_filenames = ["CLAUDE.md", ".exampleagentrules.md"]
```
We recommend migrating instructions to `AGENTS.md`; other filenames may reduce model performance.

> See also [AGENTS.md discovery](./agents_md.md) for how Codex locates these files during a session.

### tui

Options that are specific to the TUI.
```toml
[tui]
# Send desktop notifications when approvals are required or a turn completes.
# Defaults to false.
notifications = true

# You can optionally filter to specific notification types instead.
# Available types are "agent-turn-complete" and "approval-requested".
# notifications = [ "agent-turn-complete", "approval-requested" ]
```
> [!NOTE]
> Codex emits desktop notifications using terminal escape codes. Not all terminals support these: notably, macOS Terminal.app and VS Code's terminal do not support custom notifications, while iTerm2, Ghostty, and WezTerm do.

> [!NOTE]
> `tui.notifications` is built-in and limited to the TUI session. For programmatic or cross-environment notifications—or to integrate with OS-specific notifiers—use the top-level `notify` option to run an external program that receives event JSON. The two settings are independent and can be used together.
## Authentication and authorization

### Forcing a login method

To force users on a given machine to use a specific login method or workspace, use [managed configurations](https://developers.openai.com/codex/security#managed-configuration) together with either or both of the following fields:

```toml
# Force the user to log in with ChatGPT or with an API key.
forced_login_method = "chatgpt" # or "api"

# When logging in with ChatGPT, only the specified workspace ID will be presented during the login
# flow, and the ID will be validated during the OAuth callback as well as every time Codex starts.
forced_chatgpt_workspace_id = "00000000-0000-0000-0000-000000000000"
```

If the active credentials don't match the config, the user will be logged out and Codex will exit. If `forced_chatgpt_workspace_id` is set but `forced_login_method` is not, API key login will still work.
### Control where login credentials are stored

```toml
cli_auth_credentials_store = "keyring"
```

Valid values:

- `file` (default) – Store credentials in `auth.json` under `$CODEX_HOME`.
- `keyring` – Store credentials in the operating system keyring via the [`keyring` crate](https://crates.io/crates/keyring); the CLI reports an error if secure storage is unavailable. Backends by OS:
  - macOS: macOS Keychain
  - Windows: Windows Credential Manager
  - Linux: DBus-based Secret Service, the kernel keyutils, or a combination
  - FreeBSD/OpenBSD: DBus-based Secret Service
- `auto` – Save credentials to the operating system keyring when available; otherwise, fall back to `auth.json` under `$CODEX_HOME`.
## Config reference
| Key                                              | Type / Values                                                      | Notes                                                                                                                        |
| ------------------------------------------------ | ------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------- |
| `model`                                          | string                                                             | Model to use (e.g., `gpt-5-codex`).                                                                                            |
| `model_provider`                                 | string                                                             | Provider id from `model_providers` (default: `openai`).                                                                        |
| `model_context_window`                           | number                                                             | Context window tokens.                                                                                                         |
| `model_max_output_tokens`                        | number                                                             | Max output tokens.                                                                                                             |
| `approval_policy`                                | `untrusted` \| `on-failure` \| `on-request` \| `never`             | When to prompt for approval.                                                                                                   |
| `sandbox_mode`                                   | `read-only` \| `workspace-write` \| `danger-full-access`           | OS sandbox policy.                                                                                                             |
| `sandbox_workspace_write.writable_roots`         | array<string>                                                      | Extra writable roots in workspace-write.                                                                                       |
| `sandbox_workspace_write.network_access`         | boolean                                                            | Allow network in workspace-write (default: false).                                                                             |
| `sandbox_workspace_write.exclude_tmpdir_env_var` | boolean                                                            | Exclude `$TMPDIR` from writable roots (default: false).                                                                        |
| `sandbox_workspace_write.exclude_slash_tmp`      | boolean                                                            | Exclude `/tmp` from writable roots (default: false).                                                                           |
| `notify`                                         | array<string>                                                      | External program for notifications.                                                                                            |
| `instructions`                                   | string                                                             | Currently ignored; use `experimental_instructions_file` or `AGENTS.md`.                                                        |
| `features.<feature-flag>`                        | boolean                                                            | See [feature flags](#feature-flags) for details.                                                                               |
| `mcp_servers.<id>.command`                       | string                                                             | MCP server launcher command (stdio servers only).                                                                              |
| `mcp_servers.<id>.args`                          | array<string>                                                      | MCP server args (stdio servers only).                                                                                          |
| `mcp_servers.<id>.env`                           | map<string,string>                                                 | MCP server env vars (stdio servers only).                                                                                      |
| `mcp_servers.<id>.url`                           | string                                                             | MCP server URL (streamable HTTP servers only).                                                                                 |
| `mcp_servers.<id>.bearer_token_env_var`          | string                                                             | Environment variable containing a bearer token to use for auth (streamable HTTP servers only).                                 |
| `mcp_servers.<id>.enabled`                       | boolean                                                            | When false, Codex skips starting the server (default: true).                                                                   |
| `mcp_servers.<id>.startup_timeout_sec`           | number                                                             | Startup timeout in seconds (default: 10). Applied both to initializing the MCP server and to initially listing tools.          |
| `mcp_servers.<id>.tool_timeout_sec`              | number                                                             | Per-tool timeout in seconds (default: 60). Accepts fractional values; omit to use the default.                                 |
| `mcp_servers.<id>.enabled_tools`                 | array<string>                                                      | Restrict the server to the listed tool names.                                                                                  |
| `mcp_servers.<id>.disabled_tools`                | array<string>                                                      | Remove the listed tool names after applying `enabled_tools`, if any.                                                           |
| `model_providers.<id>.name`                      | string                                                             | Display name.                                                                                                                  |
| `model_providers.<id>.base_url`                  | string                                                             | API base URL.                                                                                                                  |
| `model_providers.<id>.env_key`                   | string                                                             | Env var for API key.                                                                                                           |
| `model_providers.<id>.wire_api`                  | `chat` \| `responses`                                              | Protocol used (default: `chat`).                                                                                               |
| `model_providers.<id>.query_params`              | map<string,string>                                                 | Extra query params (e.g., Azure `api-version`).                                                                                |
| `model_providers.<id>.http_headers`              | map<string,string>                                                 | Additional static headers.                                                                                                     |
| `model_providers.<id>.env_http_headers`          | map<string,string>                                                 | Headers sourced from env vars.                                                                                                 |
| `model_providers.<id>.request_max_retries`       | number                                                             | Per-provider HTTP retry count (default: 4).                                                                                    |
| `model_providers.<id>.stream_max_retries`        | number                                                             | SSE stream retry count (default: 5).                                                                                           |
| `model_providers.<id>.stream_idle_timeout_ms`    | number                                                             | SSE idle timeout (ms) (default: 300000).                                                                                       |
| `project_doc_max_bytes`                          | number                                                             | Max bytes to read from `AGENTS.md`.                                                                                            |
| `profile`                                        | string                                                             | Active profile name.                                                                                                           |
| `profiles.<name>.*`                              | various                                                            | Profile-scoped overrides of the same keys.                                                                                     |
| `history.persistence`                            | `save-all` \| `none`                                               | History file persistence (default: `save-all`).                                                                                |
| `history.max_bytes`                              | number                                                             | Currently ignored (not enforced).                                                                                              |
| `file_opener`                                    | `vscode` \| `vscode-insiders` \| `windsurf` \| `cursor` \| `none`  | URI scheme for clickable citations (default: `vscode`).                                                                        |
| `tui`                                            | table                                                              | TUI-specific options.                                                                                                          |
| `tui.notifications`                              | boolean \| array<string>                                           | Enable desktop notifications in the TUI (default: false).                                                                      |
| `hide_agent_reasoning`                           | boolean                                                            | Hide model reasoning events.                                                                                                   |
| `show_raw_agent_reasoning`                       | boolean                                                            | Show raw reasoning (when available).                                                                                           |
| `model_reasoning_effort`                         | `minimal` \| `low` \| `medium` \| `high`                           | Responses API reasoning effort.                                                                                                |
| `model_reasoning_summary`                        | `auto` \| `concise` \| `detailed` \| `none`                        | Reasoning summaries.                                                                                                           |
| `model_verbosity`                                | `low` \| `medium` \| `high`                                        | GPT-5 text verbosity (Responses API).                                                                                          |
| `model_supports_reasoning_summaries`             | boolean                                                            | Force-enable reasoning summaries.                                                                                              |
| `model_reasoning_summary_format`                 | `none` \| `experimental`                                           | Force reasoning summary format.                                                                                                |
| `chatgpt_base_url`                               | string                                                             | Base URL for ChatGPT auth flow.                                                                                                |
| `experimental_instructions_file`                 | string (path)                                                      | Replace built-in instructions (experimental).                                                                                  |
| `experimental_use_exec_command_tool`             | boolean                                                            | Use experimental exec command tool.                                                                                            |
| `projects.<path>.trust_level`                    | string                                                             | Mark project/worktree as trusted (only `"trusted"` is recognized).                                                             |
| `tools.web_search`                               | boolean                                                            | Enable web search tool (deprecated) (default: false).                                                                          |
| `tools.view_image`                               | boolean                                                            | Enable or disable the `view_image` tool so Codex can attach local image files from the workspace (default: true).              |
| `forced_login_method`                            | `chatgpt` \| `api`                                                 | Only allow Codex to be used with ChatGPT or API keys.                                                                          |
| `forced_chatgpt_workspace_id`                    | string (uuid)                                                      | Only allow Codex to be used with the specified ChatGPT workspace.                                                              |
| `cli_auth_credentials_store`                     | `file` \| `keyring` \| `auto`                                      | Where to store CLI login credentials (default: `file`).                                                                        |