Because Codex is written in Rust, it honors the `RUST_LOG` environment variable to configure its logging behavior.
The TUI defaults to `RUST_LOG=codex_core=info,codex_tui=info` and log messages are written to `~/.codex/log/codex-tui.log`, so you can leave the following running in a separate terminal to monitor log messages as they are written:
```
tail -F ~/.codex/log/codex-tui.log
```
By comparison, the non-interactive mode (`codex exec`) defaults to `RUST_LOG=error`, but messages are printed inline, so there is no need to monitor a separate file.
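For example, to capture more detail from a one-off non-interactive run, you can override the default filter on the command line (a quick sketch; the prompt and the module filter are only illustrative):
```bash
# Raise the log level for a single `codex exec` run (the filter syntax is env_logger's).
RUST_LOG=codex_core=debug codex exec "explain the failing test"
```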
See the Rust documentation on [`RUST_LOG`](https://docs.rs/env_logger/latest/env_logger/#enabling-logging) for more information on the configuration options.
## Model Context Protocol (MCP)
The Codex CLI can be configured to leverage MCP servers by defining an [`mcp_servers`](./config.md#mcp_servers) section in `~/.codex/config.toml`. It is intended to mirror how tools such as Claude and Cursor define `mcpServers` in their respective JSON config files, though the Codex format is slightly different since it uses TOML rather than JSON, e.g.:
```toml
# IMPORTANT: the top-level key is `mcp_servers` rather than `mcpServers`.
[mcp_servers.server-name]
command = "npx"
args = ["-y", "mcp-server"]
env = { "API_KEY" = "value" }
```
The Codex CLI can also be run as an MCP _server_ via `codex mcp`. For example, you can use `codex mcp` to make Codex available as a tool inside of a multi-agent framework like the OpenAI [Agents SDK](https://platform.openai.com/docs/guides/agents).
### Codex MCP Server Quickstart
You can launch a Codex MCP server with the [Model Context Protocol Inspector](https://modelcontextprotocol.io/legacy/tools/inspector):
```bash
npx @modelcontextprotocol/inspector codex mcp
```
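If you prefer to stay in the terminal, the Inspector also ships a CLI mode; the sketch below assumes its `--cli` and `--method` flags, so check `npx @modelcontextprotocol/inspector --help` if your version differs:
```bash
# List the tools exposed by the Codex MCP server without opening the Inspector UI.
npx @modelcontextprotocol/inspector --cli codex mcp --method tools/list
```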
Send a `tools/list` request and you will see that there are two tools available:
**`codex`** - Run a Codex session. Accepts configuration parameters matching the Codex Config struct. The `codex` tool takes the following properties:

| Property | Type | Description |
| --- | --- | --- |
| `prompt` (required) | string | The initial user prompt to start the Codex conversation. |
| `approval-policy` | string | Approval policy for shell commands generated by the model: `untrusted`, `on-failure`, or `never`. |
| `base-instructions` | string | The set of instructions to use instead of the default ones. |
| `config` | object | Individual [config settings](https://github.com/openai/codex/blob/main/docs/config.md#config) that override what is in `$CODEX_HOME/config.toml`. |
| `cwd` | string | Working directory for the session. If relative, it is resolved against the server process's current directory. |
| `include-plan-tool` | boolean | Whether to include the plan tool in the conversation. |
| `model` | string | Optional override for the model name (e.g. `o3`, `o4-mini`). |
| `profile` | string | Configuration profile from `config.toml` used to set default options. |
| `sandbox` | string | Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`. |
**`codex-reply`** - Continue an existing Codex session by providing the conversation id and the next user prompt.
> Codex often takes a few minutes to run. To accommodate this, adjust the MCP inspector's Request and Total timeouts to 600000ms (10 minutes) under ⛭ Configuration.
Use the MCP inspector and `codex mcp` to build a simple tic-tac-toe game with the following settings:
- **approval-policy:** never
- **prompt:** Implement a simple tic-tac-toe game with HTML, JavaScript, and CSS. Write the game in a single file called index.html.
- **sandbox:** workspace-write
Click "Run Tool" and you should see a list of events emitted from the Codex MCP server as it builds the game.
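If you would rather script the same run, the Inspector's CLI mode can send the `tools/call` request directly. The following is a sketch that assumes the `--tool-name` and `--tool-arg key=value` flags described in the Inspector's documentation:
```bash
# Invoke the `codex` tool with the same settings as the walkthrough above.
npx @modelcontextprotocol/inspector --cli codex mcp \
  --method tools/call \
  --tool-name codex \
  --tool-arg approval-policy=never \
  --tool-arg sandbox=workspace-write \
  --tool-arg prompt="Implement a simple tic-tac-toe game with HTML, JavaScript, and CSS. Write the game in a single file called index.html."
```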