docs: improve overall documentation (#5354)
Update FAQ, improve general structure for config, add more links across the documentation sections, remove out-of-date and duplicate content, and better explain certain concepts such as approvals and sandboxing.
Commit c127062b40 (parent 1d9b27387b), committed via GitHub.

@@ -1,6 +1,12 @@
## Advanced

If you already lean on Codex every day and just need a little more control, this page collects the knobs you are most likely to reach for: tweak defaults in [Config](./config.md), add extra tools through [Model Context Protocol support](./advanced.md#model-context-protocol), and script full runs with [`codex exec`](./exec.md). Jump to the section you need and keep building.

## Config quickstart {#config-quickstart}

Most day-to-day tuning lives in `config.toml`: set approval + sandbox presets, pin model defaults, and add MCP server launchers. The [Config guide](./config.md) walks through every option and provides copy-paste examples for common setups.

## Tracing / verbose logging {#tracing-verbose-logging}

Because Codex is written in Rust, it honors the `RUST_LOG` environment variable to configure its logging behavior.

@@ -14,15 +20,15 @@ By comparison, the non-interactive mode (`codex exec`) defaults to `RUST_LOG=err

See the Rust documentation on [`RUST_LOG`](https://docs.rs/env_logger/latest/env_logger/#enabling-logging) for more information on the configuration options.

## Model Context Protocol (MCP) {#model-context-protocol}

The Codex CLI and IDE extension is an MCP client, which means that it can be configured to connect to MCP servers. For more information, refer to the [`config docs`](./config.md#connecting-to-mcp-servers).

## Using Codex as an MCP Server {#mcp-server}

The Codex CLI can also be run as an MCP _server_ via `codex mcp-server`. For example, you can use `codex mcp-server` to make Codex available as a tool inside of a multi-agent framework like the OpenAI [Agents SDK](https://platform.openai.com/docs/guides/agents). Use `codex mcp` separately to add/list/get/remove MCP server launchers in your configuration.

### Codex MCP Server Quickstart {#mcp-server-quickstart}

You can launch a Codex MCP server with the [Model Context Protocol Inspector](https://modelcontextprotocol.io/legacy/tools/inspector):

@@ -53,7 +59,7 @@ Send a `tools/list` request and you will see that there are two tools available:
| **`prompt`** (required) | string | The next user prompt to continue the Codex conversation. |
| **`conversationId`** (required) | string | The id of the conversation to continue. |

### Trying it Out {#mcp-server-trying-it-out}

> [!TIP]
> Codex often takes a few minutes to run. To accommodate this, adjust the MCP inspector's Request and Total timeouts to 600000ms (10 minutes) under ⛭ Configuration.

docs/config.md

@@ -1,5 +1,16 @@
# Config

Codex configuration gives you fine-grained control over the model, execution environment, and integrations available to the CLI. Use this guide alongside the workflows in [`codex exec`](./exec.md), the guardrails in [Sandbox & approvals](./sandbox.md), and project guidance from [AGENTS.md discovery](./agents_md.md).

## Quick navigation

- [Model selection](#model-selection)
- [Execution environment](#execution-environment)
- [MCP integration](#mcp-integration)
- [Observability and telemetry](#observability-and-telemetry)
- [Profiles and overrides](#profiles-and-overrides)
- [Reference table](#config-reference)

Codex supports several mechanisms for setting config values:

- Config-specific command-line flags, such as `--model o3` (highest precedence).

@@ -15,15 +26,17 @@ Codex supports several mechanisms for setting config values:

Both the `--config` flag and the `config.toml` file support the following options:

## Model selection

### model

The model that Codex should use.

```toml
model = "gpt-5" # overrides the default of "gpt-5-codex"
```

### model_providers

This option lets you override and amend the default set of model providers bundled with Codex. This value is a map where the key is the value to use with `model_provider` to select the corresponding provider.

@@ -84,7 +97,7 @@ http_headers = { "X-Example-Header" = "example-value" }
env_http_headers = { "X-Example-Features" = "EXAMPLE_FEATURES" }
```

#### Azure model provider example

Note that Azure requires `api-version` to be passed as a query parameter, so be sure to specify it as part of `query_params` when defining the Azure provider:

@@ -100,7 +113,7 @@ wire_api = "responses"

Export your key before launching Codex: `export AZURE_OPENAI_API_KEY=…`

#### Per-provider network tuning

The following optional settings control retry behaviour and streaming idle timeouts **per model provider**. They must be specified inside the corresponding `[model_providers.<id>]` block in `config.toml`. (Older releases accepted top-level keys; those are now ignored.)

@@ -117,19 +130,19 @@ stream_max_retries = 10 # retry dropped SSE streams
stream_idle_timeout_ms = 300000 # 5m idle timeout
```

##### request_max_retries

How many times Codex will retry a failed HTTP request to the model provider. Defaults to `4`.

##### stream_max_retries

Number of times Codex will attempt to reconnect when a streaming response is interrupted. Defaults to `5`.

##### stream_idle_timeout_ms

How long Codex will wait for activity on a streaming response before treating the connection as lost. Defaults to `300_000` (5 minutes).

### model_provider

Identifies which provider to use from the `model_providers` map. Defaults to `"openai"`. You can override the `base_url` for the built-in `openai` provider via the `OPENAI_BASE_URL` environment variable.

@@ -142,7 +155,73 @@ model_provider = "ollama"
model = "mistral"
```

### model_reasoning_effort

If the selected model is known to support reasoning (for example: `o3`, `o4-mini`, `codex-*`, `gpt-5`, `gpt-5-codex`), reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#get-started-with-reasoning), this can be set to:

- `"minimal"`
- `"low"`
- `"medium"` (default)
- `"high"`

Note: to minimize reasoning, choose `"minimal"`.
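
For example, to raise the effort for a complex task (a minimal sketch; `"high"` is one of the values listed above):

```toml
model_reasoning_effort = "high" # default is "medium"
```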

### model_reasoning_summary

If the model name starts with `"o"` (as in `"o3"` or `"o4-mini"`) or `"codex"`, reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#reasoning-summaries), this can be set to:

- `"auto"` (default)
- `"concise"`
- `"detailed"`

To disable reasoning summaries, set `model_reasoning_summary` to `"none"` in your config:

```toml
model_reasoning_summary = "none" # disable reasoning summaries
```

### model_verbosity

Controls output length/detail on GPT-5 family models when using the Responses API. Supported values:

- `"low"`
- `"medium"` (default when omitted)
- `"high"`

When set, Codex includes a `text` object in the request payload with the configured verbosity, for example: `"text": { "verbosity": "low" }`.

Example:

```toml
model = "gpt-5"
model_verbosity = "low"
```

Note: This applies only to providers using the Responses API. Chat Completions providers are unaffected.

### model_supports_reasoning_summaries

By default, `reasoning` is only set on requests to OpenAI models that are known to support it. To force `reasoning` to be set on requests to the current model, add the following to `config.toml`:

```toml
model_supports_reasoning_summaries = true
```

### model_context_window

The size of the context window for the model, in tokens.

In general, Codex knows the context window for the most common OpenAI models, but if you are using a new model with an old version of the Codex CLI, then you can use `model_context_window` to tell Codex what value to use to determine how much context is left during a conversation.

### model_max_output_tokens

This is analogous to `model_context_window`, but for the maximum number of output tokens for the model.
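
For example (illustrative numbers, not defaults; substitute your model's real limits):

```toml
model_context_window = 200000     # tokens available for the whole conversation
model_max_output_tokens = 100000  # cap on tokens generated per response
```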

> See also [`codex exec`](./exec.md) for how these model settings influence non-interactive runs.

## Execution environment

### approval_policy

Determines when the user should be prompted to approve whether Codex can execute a command:

@@ -179,104 +258,7 @@ Alternatively, you can have the model run until it is done, and never ask to run
approval_policy = "never"
```

### sandbox_mode

Codex executes model-generated shell commands inside an OS-level sandbox.
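
A quick sketch of pinning the mode in `config.toml`. `read-only` and `danger-full-access` appear elsewhere in these docs; `workspace-write` is an assumption inferred from the Workspace Write preset named in the FAQ:

```toml
sandbox_mode = "workspace-write" # assumed value; "read-only" is the most restrictive
```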

@@ -325,7 +307,7 @@ This is reasonable to use if Codex is running in an environment that provides it

Using this option may also be necessary if you run Codex in environments where its native sandboxing mechanisms are unsupported, such as older Linux kernels or Windows.

### approval_presets

Codex provides three main Approval Presets:

@@ -335,123 +317,9 @@ Codex provides three main Approval Presets:

You can further customize how Codex runs at the command line using the `--ask-for-approval` and `--sandbox` options.
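
Each preset pairs an `approval_policy` with a `sandbox_mode`. A sketch of pinning one such combination in `config.toml`, using values that appear elsewhere in this guide (the exact preset mappings may differ):

```toml
approval_policy = "untrusted" # ask before most commands
sandbox_mode = "read-only"    # baseline isolation
```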

> See also [Sandbox & approvals](./sandbox.md) for in-depth examples and platform-specific behaviour.

### shell_environment_policy

Codex spawns subprocesses (e.g. when executing a `local_shell` tool-call suggested by the assistant). By default it now passes **your full environment** to those subprocesses. You can tune this behavior via the **`shell_environment_policy`** block in `config.toml`:

@@ -493,7 +361,127 @@ set = { PATH = "/usr/bin", MY_FLAG = "1" }

Currently, `CODEX_SANDBOX_NETWORK_DISABLED=1` is also added to the environment, assuming network is disabled. This is not configurable.
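
A minimal sketch of a policy block. Only `set` is confirmed by the excerpt above; `inherit` and `exclude` are assumptions, so verify them against the full option table:

```toml
[shell_environment_policy]
inherit = "all"                # assumed key: start from the full environment
exclude = ["AWS_*", "AZURE_*"] # assumed key: drop matching variables
set = { PATH = "/usr/bin", MY_FLAG = "1" }
```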

## MCP integration

### mcp_servers

You can configure Codex to use [MCP servers](https://modelcontextprotocol.io/about) to give Codex access to external applications, resources, or services.

#### Server configuration

##### STDIO

[STDIO servers](https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#stdio) are MCP servers that you can launch directly via commands on your computer.

```toml
# The top-level table name must be `mcp_servers`
# The sub-table name (`server_name` in this example) can be anything you would like.
[mcp_servers.server_name]
command = "npx"
# Optional
args = ["-y", "mcp-server"]
# Optional: propagate additional env vars to the MCP server.
# A default whitelist of env vars will be propagated to the MCP server.
# https://github.com/openai/codex/blob/main/codex-rs/rmcp-client/src/utils.rs#L82
env = { "API_KEY" = "value" }
# or
[mcp_servers.server_name.env]
API_KEY = "value"
# Optional: additional list of environment variables that will be whitelisted in the MCP server's environment.
env_vars = ["API_KEY2"]

# Optional: cwd that the command will be run from
cwd = "/Users/<user>/code/my-server"
```

##### Streamable HTTP

[Streamable HTTP servers](https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#streamable-http) enable Codex to talk to resources that are accessed via an HTTP URL (either on localhost or another domain).

```toml
# Streamable HTTP requires the experimental rmcp client
experimental_use_rmcp_client = true
[mcp_servers.figma]
url = "https://mcp.linear.app/mcp"
# Optional environment variable containing a bearer token to use for auth
bearer_token_env_var = "<token>"
# Optional map of headers with hard-coded values.
http_headers = { "HEADER_NAME" = "HEADER_VALUE" }
# Optional map of headers whose values will be replaced with the environment variable.
env_http_headers = { "HEADER_NAME" = "ENV_VAR" }
```

For OAuth login, you must enable `experimental_use_rmcp_client = true` and then run `codex mcp login server_name`.

#### Other configuration options

```toml
# Optional: override the default 10s startup timeout
startup_timeout_sec = 20
# Optional: override the default 60s per-tool timeout
tool_timeout_sec = 30
# Optional: disable a server without removing it
enabled = false
```

#### Experimental RMCP client

Codex is transitioning to the [official Rust MCP SDK](https://github.com/modelcontextprotocol/rust-sdk).

The `experimental_use_rmcp_client` flag enables OAuth support for streamable HTTP servers and uses a new STDIO client implementation.

Please try the new client and report any issues. To enable it, add this to the top level of your `config.toml`:

```toml
experimental_use_rmcp_client = true

[mcp_servers.server_name]
…
```

#### MCP CLI commands

```shell
# List all available commands
codex mcp --help

# Add a server (env can be repeated; `--` separates the launcher command)
codex mcp add docs -- docs-server --port 4000

# List configured servers (pretty table or JSON)
codex mcp list
codex mcp list --json

# Show one server (table or JSON)
codex mcp get docs
codex mcp get docs --json

# Remove a server
codex mcp remove docs

# Log in to a streamable HTTP server that supports OAuth
codex mcp login SERVER_NAME

# Log out from a streamable HTTP server that supports OAuth
codex mcp logout SERVER_NAME
```

### Examples of useful MCPs

There is an ever-growing list of useful MCP servers that can be helpful while you are working with Codex.

Some of the most common MCPs we've seen are:

- [Context7](https://github.com/upstash/context7) — connect to a wide range of up-to-date developer documentation
- Figma [Local](https://developers.figma.com/docs/figma-mcp-server/local-server-installation/) and [Remote](https://developers.figma.com/docs/figma-mcp-server/remote-server-installation/) — access to your Figma designs
- [Playwright](https://www.npmjs.com/package/@playwright/mcp) — control and inspect a browser using Playwright
- [Chrome Developer Tools](https://github.com/ChromeDevTools/chrome-devtools-mcp/) — control and inspect a Chrome browser
- [Sentry](https://docs.sentry.io/product/sentry-mcp/#codex) — access to your Sentry logs
- [GitHub](https://github.com/github/github-mcp-server) — control over your GitHub account beyond what git allows (like managing PRs, issues, etc.)

## Observability and telemetry

### otel

Codex can emit [OpenTelemetry](https://opentelemetry.io/) **log events** that
describe each run: outbound API requests, streamed responses, user input,

@@ -604,7 +592,7 @@ flag; the official prebuilt binaries ship with the feature enabled. When the
feature is disabled the telemetry hooks become no-ops so the CLI continues to
function without the extra dependencies.

### notify

Specify a program that will be executed to get notified about events generated by Codex. Note that the program will receive the notification argument as a string of JSON, e.g.:

@@ -689,7 +677,79 @@ notify = ["python3", "/Users/mbolin/.codex/notify.py"]
> [!NOTE]
> Use `notify` for automation and integrations: Codex invokes your external program with a single JSON argument for each event, independent of the TUI. If you only want lightweight desktop notifications while using the TUI, prefer `tui.notifications`, which uses terminal escape codes and requires no external program. You can enable both; `tui.notifications` covers in-TUI alerts (e.g., approval prompts), while `notify` is best for system-level hooks or custom notifiers. Currently, `notify` emits only `agent-turn-complete`, whereas `tui.notifications` supports `agent-turn-complete` and `approval-requested` with optional filtering.

### hide_agent_reasoning

Codex intermittently emits "reasoning" events that show the model's internal "thinking" before it produces a final answer. Some users may find these events distracting, especially in CI logs or minimal terminal output.

Setting `hide_agent_reasoning` to `true` suppresses these events in **both** the TUI as well as the headless `exec` sub-command:

```toml
hide_agent_reasoning = true # defaults to false
```

### show_raw_agent_reasoning

Surfaces the model's raw chain-of-thought ("raw reasoning content") when available.

Notes:

- Only takes effect if the selected model/provider actually emits raw reasoning content. Many models do not. When unsupported, this option has no visible effect.
- Raw reasoning may include intermediate thoughts or sensitive context. Enable only if acceptable for your workflow.

Example:

```toml
show_raw_agent_reasoning = true # defaults to false
```

## Profiles and overrides

### profiles

A _profile_ is a collection of configuration values that can be set together. Multiple profiles can be defined in `config.toml` and you can specify the one you want to use at runtime via the `--profile` flag.

Here is an example of a `config.toml` that defines multiple profiles:

```toml
model = "o3"
approval_policy = "untrusted"

# Setting `profile` is equivalent to specifying `--profile o3` on the command
# line, though the `--profile` flag can still be used to override this value.
profile = "o3"

[model_providers.openai-chat-completions]
name = "OpenAI using Chat Completions"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"

[profiles.o3]
model = "o3"
model_provider = "openai"
approval_policy = "never"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"

[profiles.gpt3]
model = "gpt-3.5-turbo"
model_provider = "openai-chat-completions"

[profiles.zdr]
model = "o3"
model_provider = "openai"
approval_policy = "on-failure"
```

Users can specify config values at multiple levels. Order of precedence is as follows:

1. custom command-line argument, e.g., `--model o3`
2. as part of a profile, where the `--profile` is specified via the CLI (or in the config file itself)
3. as an entry in `config.toml`, e.g., `model = "o3"`
4. the default value that comes with the Codex CLI (i.e., the Codex CLI defaults to `gpt-5-codex`)

### history

By default, Codex CLI records messages sent to the model in `$CODEX_HOME/history.jsonl`. Note that on UNIX, the file permissions are set to `o600`, so it should only be readable and writable by the owner.

@@ -700,7 +760,7 @@ To disable this behavior, configure `[history]` as follows:
persistence = "none" # "save-all" is the default value
```

### file_opener

Identifies the editor/URI scheme to use for hyperlinking citations in model output. If set, citations to files in the model output will be hyperlinked using the specified URI scheme so they can be ctrl/cmd-clicked from the terminal to open them.

@@ -716,46 +776,11 @@ Note this is **not** a general editor setting (like `$EDITOR`), as it only accep

Currently, `"vscode"` is the default, though Codex does not verify VS Code is installed. As such, `file_opener` may default to `"none"` or something else in the future.
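
For example, to turn citation hyperlinking off (`"none"` is implied as a valid value by the note above):

```toml
file_opener = "none"
```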

### project_doc_max_bytes

Maximum number of bytes to read from an `AGENTS.md` file to include in the instructions sent with the first turn of a session. Defaults to 32 KiB.
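
For example, to double the limit (the value is illustrative):

```toml
project_doc_max_bytes = 65536 # read up to 64 KiB of AGENTS.md
```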

### project_doc_fallback_filenames

Ordered list of additional filenames to look for when `AGENTS.md` is missing at a given directory level. The CLI always checks `AGENTS.md` first; the configured fallbacks are tried in the order provided. This lets monorepos that already use alternate instruction files (for example, `CLAUDE.md`) work out of the box while you migrate to `AGENTS.md` over time.

@@ -765,7 +790,9 @@ project_doc_fallback_filenames = ["CLAUDE.md", ".exampleagentrules.md"]

We recommend migrating instructions to AGENTS.md; other filenames may reduce model performance.

> See also [AGENTS.md discovery](./agents_md.md) for how Codex locates these files during a session.

### tui

Options that are specific to the TUI.
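
A sketch of enabling desktop notifications, based on the `tui.notifications` behavior described under `notify`; the boolean form shown here is an assumption:

```toml
[tui]
notifications = true # assumed toggle; see `notify` above for event coverage
```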

docs/faq.md

@@ -1,23 +1,44 @@
## FAQ

This FAQ highlights the most common questions and points you to the right deep-dive guides in `docs/`.

### OpenAI released a model called Codex in 2021 - is this related?

In 2021, OpenAI released Codex, an AI system designed to generate code from natural language prompts. That original Codex model was deprecated as of March 2023 and is separate from the CLI tool.

### Which models are supported?

We recommend using Codex with GPT-5 Codex, our best coding model. The default reasoning level is medium, and you can upgrade to high for complex tasks with the `/model` command.

You can also use older models by using API-based auth and launching codex with the `--model` flag.

### How do approvals and sandbox modes work together?

Approvals are the mechanism Codex uses to ask before running a tool call with elevated permissions - typically to leave the sandbox or re-run a failed command without isolation. Sandbox mode provides the baseline isolation (`Read Only`, `Workspace Write`, or `Danger Full Access`; see [Sandbox & approvals](./sandbox.md)).

### Can I automate tasks without the TUI?

Yes. [`codex exec`](./exec.md) runs Codex in non-interactive mode with streaming logs, JSONL output, and structured schema support. The command respects the same sandbox and approval settings you configure in the [Config guide](./config.md).

### How do I stop Codex from editing my files?

By default, Codex can modify files in your current working directory (Auto mode). To prevent edits, run `codex` in read-only mode with the CLI flag `--sandbox read-only`. Alternatively, you can change the approval level mid-conversation with `/approvals`.

### How do I connect Codex to MCP servers?

Configure MCP servers through your `config.toml` using the examples in [Config -> Connecting to MCP servers](./config.md#connecting-to-mcp-servers).

### I'm having trouble logging in. What should I check?

Confirm your setup in two steps:

1. Walk through the auth flows in [Authentication](./authentication.md) to ensure the correct credentials are present in `~/.codex/auth.json`.
2. If you're on a headless or remote machine, make sure port-forwarding is configured as described in [Authentication -> Connecting on a "Headless" Machine](./authentication.md#connecting-on-a-headless-machine).

### Does it work on Windows?

Running Codex directly on Windows may work, but is not officially supported. We recommend using [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install).

### Where should I start after installation?

Follow the quick setup in [Install & build](./install.md) and then jump into [Getting started](./getting-started.md) for interactive usage tips, prompt examples, and AGENTS.md guidance.

@@ -1,8 +1,3 @@
## Platform sandboxing

This content now lives alongside the rest of the sandbox guidance. See [Sandbox mechanics by platform](./sandbox.md#platform-sandboxing-details) for up-to-date details.

@@ -1,8 +1,10 @@
## Sandbox & approvals

What Codex is allowed to do is governed by a combination of **sandbox modes** (what Codex may do without supervision) and **approval policies** (when you must confirm an action). This page explains the options, how they interact, and how the sandbox behaves on each platform.

### Approval policies

We've chosen a powerful default for how Codex works on your computer: `Auto`. Under this approval policy, Codex can read files, make edits, and run commands in the working directory automatically. However, Codex will need your approval to work outside the working directory or access the network.

When you just want to chat, or if you want to plan before diving in, you can switch to `Read Only` mode with the `/approvals` command.

@@ -63,9 +65,18 @@ approval_policy = "never"
sandbox_mode = "read-only"
```

### Sandbox mechanics by platform {#platform-sandboxing-details}

The mechanism Codex uses to enforce the sandbox policy depends on your OS:

- **macOS 12+** uses **Apple Seatbelt**. Codex invokes `sandbox-exec` with a profile that corresponds to the selected `--sandbox` mode, constraining filesystem and network access at the OS level.
- **Linux** combines **Landlock** and **seccomp** APIs to approximate the same guarantees. Kernel support is required; older kernels may not expose the necessary features.

In containerized Linux environments (for example Docker), sandboxing may not work when the host or container configuration does not expose Landlock/seccomp. In those cases, configure the container to provide the isolation you need and run Codex with `--sandbox danger-full-access` (or the shorthand `--dangerously-bypass-approvals-and-sandbox`) inside that container.

### Experimenting with the Codex Sandbox

To test how commands behave under Codex's sandbox, use the CLI helpers:

```
# macOS

@@ -78,12 +89,3 @@ codex sandbox linux [--full-auto] [COMMAND]...
codex debug seatbelt [--full-auto] [COMMAND]...
codex debug landlock [--full-auto] [COMMAND]...
```