Commit Graph

23 Commits

Dylan
197f45a3be [mcp-server] Expose fuzzy file search in MCP (#2677)
## Summary
Expose a simple fuzzy file search implementation for MCP clients to work
with.

## Testing
- [x] Tested locally
2025-09-29 12:19:09 -07:00
Michael Bolin
cb2f952143 fix: remove unnecessary flush() calls (#2873)
Because we are writing to a pipe, these `flush()` calls are unnecessary,
so removing them saves us one syscall per write in these two cases.
2025-08-28 22:41:10 -07:00
Michael Bolin
970e466ab3 fix: switch to unbounded channel (#2874)
#2747 encouraged me to audit our codebase for similar issues, as now I
am particularly suspicious that our flaky tests are due to a racy
deadlock.

I asked Codex to audit our code, and one of its suggestions was this:

> **High-Risk Patterns**
>
> All `send_*` methods await on a bounded
`mpsc::Sender<OutgoingMessage>`. If the writer blocks, the channel fills
and the processor task blocks on send, stops draining incoming requests,
and stdin reader eventually blocks on its send. This creates a
backpressure deadlock cycle across the three tasks.
>
> **Recommendations**
> * Server outgoing path: break the backpressure cycle
> * Option A (minimal risk): Change `OutgoingMessageSender` to use an
unbounded channel to decouple producer from stdout. Add rate logging so
floods are visible.
> * Option B (bounded + drop policy): Change `send_*` to try_send and
drop messages (or coalesce) when the queue is full, logging a warning.
This prevents processor stalls at the cost of losing messages under
extreme backpressure.
> * Option C (two-stage buffer): Keep bounded channel, but have a
dedicated “egress” task that drains an unbounded internal queue, writing
to stdout with retries and a shutdown timeout. This centralizes
backpressure policy.

So this PR is Option A.

Indeed, we previously used a bounded channel with a capacity of `128`,
but as we discovered recently with #2776, there are certainly cases
where we can get flooded with events.
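
As a rough sketch (with a placeholder message type), the change amounts to:

```rust
use tokio::sync::mpsc;

struct OutgoingMessage; // placeholder for the real outgoing message type

// Before: mpsc::channel::<OutgoingMessage>(128), where `send().await` blocks
// once 128 messages queue up, feeding the backpressure cycle described above.
// After: sends are synchronous and never wait on a full queue.
fn make_outgoing_channel() -> (
    mpsc::UnboundedSender<OutgoingMessage>,
    mpsc::UnboundedReceiver<OutgoingMessage>,
) {
    mpsc::unbounded_channel()
}
```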

That said, `test_shell_command_approval_triggers_elicitation` just
failed on one build when I put up this PR, so clearly we are not out of
the woods yet...

**Update:** I think I found the true source of the deadlock! See
https://github.com/openai/codex/pull/2876
2025-08-28 22:20:10 -07:00
unship
f7cb2f87a0 Bug fix: clone of incoming_tx can lead to deadlock (#2747)
Proof-of-concept code demonstrating the deadlock:

```rust
use tokio::sync::mpsc;
use std::time::Duration;

#[tokio::main]
async fn main() {
    println!("=== Test 1: Simulating original MCP server pattern ===");
    test_original_pattern().await;
}

async fn test_original_pattern() {
    println!("Testing the original pattern from MCP server...");
    
    // Create channel - this simulates the original incoming_tx/incoming_rx
    let (tx, mut rx) = mpsc::channel::<String>(10);
    
    // Task 1: Simulates stdin reader that will naturally terminate
    let stdin_task = tokio::spawn({
        let tx_clone = tx.clone();
        async move {
            println!("  stdin_task: Started, will send 3 messages then exit");
            for i in 0..3 {
                let msg = format!("Message {}", i);
                if tx_clone.send(msg.clone()).await.is_err() {
                    println!("  stdin_task: Receiver dropped, exiting");
                    break;
                }
                println!("  stdin_task: Sent {}", msg);
                tokio::time::sleep(Duration::from_millis(300)).await;
            }
            println!("  stdin_task: Finished (simulating EOF)");
            // tx_clone is dropped here
        }
    });
    
    // Task 2: Simulates message processor
    let processor_task = tokio::spawn(async move {
        println!("  processor_task: Started, waiting for messages");
        while let Some(msg) = rx.recv().await {
            println!("  processor_task: Processing {}", msg);
            tokio::time::sleep(Duration::from_millis(100)).await;
        }
        println!("  processor_task: Finished (channel closed)");
    });
    
    // Task 3: Simulates stdout writer or other background task
    let background_task = tokio::spawn(async move {
        for i in 0..2 {
            tokio::time::sleep(Duration::from_millis(500)).await;
            println!("  background_task: Tick {}", i);
        }
        println!("  background_task: Finished");
    });
    
    println!("  main: Original tx is still alive here");
    println!("  main: About to call tokio::join! - will this deadlock?");
    
    // This is the pattern from the original code. The original `tx` is still
    // alive in this scope, so the channel never closes, `rx.recv()` never
    // returns `None`, processor_task never exits, and join! hangs forever.
    let _ = tokio::join!(stdin_task, processor_task, background_task);
}

```

---------

Co-authored-by: Michael Bolin <bolinfest@gmail.com>
2025-08-28 19:28:17 -07:00
Eric Traut
dacff9675a Added new auth-related methods and events to mcp server (#2496)
This PR adds the following:
* A getAuthStatus method on the mcp server. This returns the auth method
currently in use (chatgpt or apikey) or none if the user is not
authenticated. It also returns the "preferred auth method" which
reflects the `preferred_auth_method` value in the config.
* A logout method on the mcp server. If called, it logs out the user and
deletes the `auth.json` file, matching the behavior of the CLI's `/logout`
command.
* An `authStatusChange` event notification that is sent when the auth
status changes due to successful login or logout operations.
* Logic to pass command-line config overrides to the mcp server at
startup time. This allows use cases like `codex mcp -c
preferred_auth_method=apikey`.
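
For illustration, here is a hedged sketch of what a `getAuthStatus` exchange
might look like on the wire, following the JSON-RPC convention used elsewhere
in this server; the field names are assumptions, not the exact schema:

```rust
use serde_json::json;

fn main() {
    // Request: ask the server which auth method is currently in use.
    let request = json!({
        "jsonrpc": "2.0",
        "method": "getAuthStatus",
        "id": 1,
        "params": {}
    });

    // Response: the method in use (chatgpt, apikey, or none) plus the
    // configured preference; field names are illustrative only.
    let response = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "result": {
            "authMethod": "chatgpt",
            "preferredAuthMethod": "apikey"
        }
    });

    println!("{request}\n{response}");
}
```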
2025-08-20 20:36:34 -07:00
Michael Bolin
712bfa04ac chore: move mcp-server/src/wire_format.rs to protocol/src/mcp_protocol.rs (#2423)
The existing `wire_format.rs` should share more types with the
`codex-protocol` crate (like `AskForApproval` instead of maintaining a
parallel `CodexToolCallApprovalPolicy` enum), so this PR moves
`wire_format.rs` into `codex-protocol`, renaming it as
`mcp-protocol.rs`. We also de-dupe types, where appropriate.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2423).
* #2424
* __->__ #2423
2025-08-18 09:36:57 -07:00
Michael Bolin
a269754668 remove mcp-server/src/mcp_protocol.rs and the code that depends on it (#2360) 2025-08-18 00:29:18 -07:00
Michael Bolin
e7bad650ff feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls: instead of
having to send something like this:

```json
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "id": 42,
  "params": {
    "name": "newConversation",
    "arguments": {
      "model": "gpt-5",
      "approvalPolicy": "on-request"
    }
  }
}
```

we can send something like this:


```json
{
  "jsonrpc": "2.0",
  "method": "newConversation",
  "id": 42,
  "params": {
    "model": "gpt-5",
    "approvalPolicy": "on-request"
  }
}
```

Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)

To start, this introduces four request types:

- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`

The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.

The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.

Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.

For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.



---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
Michael Bolin
37fc4185ef fix: update OutgoingMessageSender::send_response() to take Serialize (#2263)
This makes `send_response()` easier to work with.
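
A minimal sketch of the idea, with the surrounding types simplified to
placeholders: callers pass any `Serialize` payload instead of building a JSON
value by hand.

```rust
use serde::Serialize;

struct OutgoingMessageSender; // fields elided for the sketch

impl OutgoingMessageSender {
    // Accept any serializable payload; the sender converts it to JSON
    // before wrapping it in a JSON-RPC response and writing it out.
    async fn send_response<T: Serialize>(&self, id: i64, payload: T) {
        let result = serde_json::to_value(payload).expect("payload serializes");
        // ...wrap `result` in a JSON-RPC response carrying `id` and send it.
        let _ = (id, result);
    }
}
```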
2025-08-13 14:29:13 -07:00
aibrahim-oai
97ab8fb610 MCP: add conversation.create tool [Stack 2/2] (#1783)
Introduce conversation.create handler (handle_create_conversation) and
wire it into MessageProcessor.

Stack:
Top: #1783 
Bottom: #1784

---------

Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
2025-08-01 22:18:36 +00:00
aibrahim-oai
f918198bbb Introduce a new function to just send user message [Stack 3/3] (#1686)
- MCP server: add send-user-message tool to send user input to a running
Codex session
- Added integration tests for the happy and sad paths

Changes:
- Add tool definition and schema (a wire-shape sketch follows below).
- Expose tool in capabilities.
- Route and handle tool requests with validation.
- Tests for success, bad UUID, and missing session.

Follow-ups:
- Listen path not implemented yet; the tool is present but marked "don't
use yet" in code comments.
- Session run flag reset: clear running_session_id_set appropriately
after turn completion/errors.

This is the third PR in a stack.
Stack:
Final: #1686
Intermediate: #1751
First: #1750
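
For illustration, a hedged sketch of what a send-user-message tool call might
look like on the wire; the argument names are assumptions, not the exact
schema:

```rust
use serde_json::json;

fn main() {
    let request = json!({
        "jsonrpc": "2.0",
        "method": "tools/call",
        "id": 1,
        "params": {
            "name": "send-user-message",
            "arguments": {
                // Placeholder UUID identifying the running Codex session.
                "session_id": "00000000-0000-0000-0000-000000000000",
                "message": "Please run the tests again."
            }
        }
    });
    println!("{request}");
}
```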
2025-08-01 17:04:12 +00:00
aibrahim-oai
ad0295b893 MCP server: route structured tool-call requests and expose mcp_protocol [Stack 2/3] (#1751)
- Expose mcp_protocol from mcp-server for reuse in tests and callers.
- In MessageProcessor, detect structured ToolCallRequestParams in
tools/call and forward to a new handler.
- Add handle_new_tool_calls scaffold (returns error for now).
- Test helper: add send_send_user_message_tool_call to McpProcess to
send ConversationSendMessage requests.

This is the second PR in a stack.
Stack:
Final: #1686
Intermediate: #1751
First: #1750
2025-08-01 02:46:04 +00:00
aibrahim-oai
3823b32b7a Mcp protocol (#1715)
- Add typed MCP protocol surface in
`codex-rs/mcp-server/src/mcp_protocol.rs` for `requests`, `responses`,
and `notifications`
- Requests: `NewConversation`, `Connect`, `SendUserMessage`,
`GetConversations`
- Message content parts: `Text`, `Image` (`ImageUrl`/`FileId`, optional
`ImageDetail`), `File` (`Url`/`Id`/inline `Data`)
- Responses: `ToolCallResponseEnvelope` with optional `isError` and
`structuredContent` variants (`NewConversation`, `Connect`,
`SendUserMessageAccepted`, `GetConversations`)
- Notifications: `InitialState`, `ConnectionRevoked`, `CodexEvent`,
`Cancelled`
- Uniform `_meta` on `notifications` via `NotificationMeta`
(`conversationId`, `requestId`)
- Unit tests validate JSON wire shapes for key
`requests`/`responses`/`notifications`
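
A hedged sketch of the uniform `_meta` block on a notification; the method
name and payload are placeholders for illustration:

```rust
use serde_json::json;

fn main() {
    let notification = json!({
        "jsonrpc": "2.0",
        "method": "codexEvent", // method name assumed for the sketch
        "params": {
            "_meta": {
                "conversationId": "00000000-0000-0000-0000-000000000000",
                "requestId": 42
            }
            // ...event payload fields go here
        }
    });
    println!("{notification}");
}
```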
2025-07-29 20:14:41 -07:00
pakrym-oai
7ee87123a6 Optionally run using user profile (#1678) 2025-07-25 11:45:23 -07:00
aibrahim-oai
01c0896f0f Adding interrupt Support to MCP (#1646) 2025-07-22 20:33:49 +00:00
Gabriel Peal
710f728124 Add an elicitation for approve patch and refactor tool calls (#1642)
1. Added an elicitation for `approve-patch` which is very similar to
`approve-exec`.
2. Extracted both elicitations to their own files to prevent
`codex_tool_runner` from blowing up in size.
2025-07-22 02:58:41 -04:00
Michael Bolin
d49d802b06 test: add integration test for MCP server (#1633)
This PR introduces a single integration test for `codex mcp`, though it
also introduces a number of reusable components so that it should be
easier to introduce more integration tests going forward.

The new test is introduced in `codex-rs/mcp-server/tests/elicitation.rs`
and the reusable pieces are in `codex-rs/mcp-server/tests/common`.

The test itself verifies new functionality around elicitations
introduced in https://github.com/openai/codex/pull/1623 (and the fix
introduced in https://github.com/openai/codex/pull/1629) by doing the
following:

- starts a mock model provider with canned responses for
`/v1/chat/completions`
- starts the MCP server with a `config.toml` to use that model provider
(and `approval_policy = "untrusted"`)
- sends the `codex` tool call which causes the mock model provider to
request a shell call for `git init`
- the MCP server sends an elicitation to the client to approve the
request
- the client replies to the elicitation with `"approved"`
- the MCP server runs the command and re-samples the model, getting a
`"finish_reason": "stop"`
- in turn, the MCP server sends the final response to the original
`codex` tool call
- verifies that `git init` ran as expected

To test:

```
cargo test shell_command_approval_triggers_elicitation
```

In writing this test, I discovered that `ExecApprovalResponse` does not
conform to `ElicitResult`, so I added a TODO to fix that, since I think
that should be updated in a separate PR. As it stands, this PR does not
update any business logic, though it does make a number of members of
the `mcp-server` crate `pub` so they can be used in the test.

One additional learning from this PR is that
`std::process::Command::cargo_bin()` from the `assert_cmd` trait is only
available for `std::process::Command`, but we really want to use
`tokio::process::Command` so that everything is async and we can
leverage utilities like `tokio::time::timeout()`. The trick I came up
with was to use `cargo_bin()` to locate the program, and then to use
`std::process::Command::get_program()` when constructing the
`tokio::process::Command`.
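
A sketch of that trick, assuming the binary is named `codex` and that the test
harness uses `anyhow` for errors:

```rust
use assert_cmd::cargo::CommandCargoExt; // adds cargo_bin() to std::process::Command

fn codex_command() -> anyhow::Result<tokio::process::Command> {
    // Use assert_cmd only to locate the compiled binary under test...
    let std_cmd = std::process::Command::cargo_bin("codex")?;
    // ...then build the async tokio command from the resolved program path,
    // so the test can use tokio::time::timeout() and friends.
    Ok(tokio::process::Command::new(std_cmd.get_program()))
}
```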
2025-07-21 10:27:07 -07:00
Michael Bolin
018003e52f feat: leverage elicitations in the MCP server (#1623)
This updates the MCP server so that if it receives an
`ExecApprovalRequest` from the `Codex` session, it in turn sends an [MCP
elicitation](https://modelcontextprotocol.io/specification/draft/client/elicitation)
to the client to ask for the approval decision. Upon getting a response,
it forwards the client's decision via `Op::ExecApproval`.

Admittedly, we should be doing the same thing for
`ApplyPatchApprovalRequest`, but this is our first time experimenting
with elicitations, so I'm inclined to defer wiring that code path up
until we feel good about how this one works.
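
For illustration, a hedged sketch of roughly what such an elicitation might
look like on the wire; the message text and requested schema here are
assumptions, not what the server actually sends:

```rust
use serde_json::json;

fn main() {
    let elicitation = json!({
        "jsonrpc": "2.0",
        "method": "elicitation/create",
        "id": 7,
        "params": {
            "message": "Allow Codex to run this command?",
            "requestedSchema": {
                "type": "object",
                "properties": {
                    "decision": {
                        "type": "string",
                        "enum": ["approved", "denied"]
                    }
                }
            }
        }
    });
    println!("{elicitation}");
}
```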

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/1623).
* __->__ #1623
* #1622
* #1621
* #1620
2025-07-19 01:32:03 -04:00
Michael Bolin
11fd3123be chore: introduce OutgoingMessageSender (#1622)
Prior to this change, `MessageProcessor` had a
`tokio::sync::mpsc::Sender<JSONRPCMessage>` as an abstraction for server
code to send a message down to the MCP client. Because `Sender` is cheap
to `clone()`, it was straightforward to make it available to tasks
scheduled with `tokio::task::spawn()`.

This worked well when we were only sending notifications or responses
back down to the client, but we want to add support for sending
elicitations in #1623, which means that we need to be able to send
_requests_ to the client, and now we need a bit of centralization to
ensure all request ids are unique.

To that end, this PR introduces `OutgoingMessageSender`, which houses
the existing `Sender<OutgoingMessage>` as well as an `AtomicI64` to mint
out new, unique request ids. It has methods like `send_request()` and
`send_response()` so that callers do not have to deal with
`JSONRPCMessage` directly, as having to set the `jsonrpc` for each
message was a bit tedious (this cleans up `codex_tool_runner.rs` quite a
bit).

We do not have `OutgoingMessageSender` implement `Clone` because it is
important that the `AtomicI64` is shared across all users of
`OutgoingMessageSender`. As such, `Arc<OutgoingMessageSender>` must be
used instead, as it is frequently shared with new tokio tasks.

As part of this change, we update `message_processor.rs` to embrace
`await`, though we must be careful that no individual handler blocks the
main loop and prevents other messages from being handled.
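
A condensed sketch of the shape described above (simplified, not the exact
implementation):

```rust
use std::sync::atomic::{AtomicI64, Ordering};
use tokio::sync::mpsc::Sender;

struct OutgoingMessage; // placeholder for the real message type

pub struct OutgoingMessageSender {
    // Mints unique request ids. Because the counter must be shared rather
    // than copied, the sender is handed around as Arc<OutgoingMessageSender>
    // instead of implementing Clone.
    next_request_id: AtomicI64,
    sender: Sender<OutgoingMessage>,
}

impl OutgoingMessageSender {
    pub async fn send_request(&self, message: OutgoingMessage) -> i64 {
        let id = self.next_request_id.fetch_add(1, Ordering::SeqCst);
        let _ = self.sender.send(message).await;
        id
    }
}
```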

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/1622).
* #1623
* __->__ #1622
* #1621
* #1620
2025-07-19 00:30:56 -04:00
Michael Bolin
e78ec00e73 chore: support MCP schema 2025-06-18 (#1621)
This updates the schema in `generate_mcp_types.py` from `2025-03-26` to
`2025-06-18`, regenerates `mcp-types/src/lib.rs`, and then updates all
the code that uses `mcp-types` to honor the changes.

Ran

```
npx @modelcontextprotocol/inspector just codex mcp
```

and verified that I was able to invoke the `codex` tool, as expected.


---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/1621).
* #1623
* #1622
* __->__ #1621
2025-07-19 00:09:34 -04:00
Michael Bolin
d60f350cf8 feat: add support for -c/--config to override individual config items (#1137)
This PR introduces support for `-c`/`--config` so users can override
individual config values on the command line using `--config
name=value`. Example:

```
codex --config model=o4-mini
```

Making it possible to set arbitrary config values on the command line
results in a more flexible configuration scheme and makes it easier to
provide single-line examples that can be copy-pasted from documentation.

Effectively, it means there are four levels of configuration for some
values:

- Default value (e.g., `model` currently defaults to `o4-mini`)
- Value in `config.toml` (e.g., user could override the default to be
`model = "o3"` in their `config.toml`)
- Specifying `-c` or `--config` to override `model` (e.g., user can
include `-c model=o3` in their list of args to Codex)
- If available, a config-specific flag can be used, which takes
precedence over `-c` (e.g., user can specify `--model o3` in their list
of args to Codex)

Now that it is possible to specify anything that could be configured in
`config.toml` on the command line using `-c`, we do not need to have a
custom flag for every possible config option (which can clutter the
output of `--help`). To that end, as part of this PR, we drop support
for the `--disable-response-storage` flag, as users can now specify `-c
disable_response_storage=true` to get the equivalent functionality.

Under the hood, this works by loading the `config.toml` into a
`toml::Value`. Then for each `key=value`, we create a small synthetic
TOML file with `value` so that we can run the TOML parser to get the
equivalent `toml::Value`. We then parse `key` to determine the point in
the original `toml::Value` to do the insert/replace. Once all of the
overrides from `-c` args have been applied, the `toml::Value` is
deserialized into a `ConfigToml` and then the `ConfigOverrides` are
applied, as before.
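
A minimal sketch of that mechanism, assuming dotted keys address nested tables
(error handling simplified):

```rust
use toml::Value;

// Apply a single `-c key=value` override to an already-parsed config.toml.
fn apply_override(root: &mut Value, key: &str, raw: &str) -> Result<(), toml::de::Error> {
    // Parse the value by wrapping it in a tiny synthetic TOML document.
    let doc: Value = format!("x = {raw}").parse()?;
    let parsed = doc.get("x").cloned().expect("x was just parsed");

    // Walk the dotted key path, creating intermediate tables as needed,
    // then insert or replace the final entry.
    let parts: Vec<&str> = key.split('.').collect();
    let mut cursor = root;
    for part in &parts[..parts.len() - 1] {
        cursor = cursor
            .as_table_mut()
            .expect("intermediate config values must be tables")
            .entry((*part).to_string())
            .or_insert_with(|| Value::Table(toml::map::Map::new()));
    }
    cursor
        .as_table_mut()
        .expect("parent must be a table")
        .insert(parts[parts.len() - 1].to_string(), parsed);
    Ok(())
}
```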
2025-05-27 23:11:44 -07:00
Michael Bolin
d1de7bb383 feat: add codex_linux_sandbox_exe: Option<PathBuf> field to Config (#1089)
https://github.com/openai/codex/pull/1086 is a work-in-progress to make
Linux sandboxing work more like Seatbelt where, for the command we want
to sandbox, we build up the command and then hand it, and some sandbox
configuration flags, to another command to set up the sandbox and then
run it.

In the case of Seatbelt, macOS provides this helper binary and provides
it at `/usr/bin/sandbox-exec`. For Linux, we have to build our own and
pass it through (which is what #1086 does), so this makes the new
`codex_linux_sandbox_exe` available on `Config` so that it will later be
available in `exec.rs` when we need it in #1086.
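
In sketch form (surrounding fields elided):

```rust
use std::path::PathBuf;

pub struct Config {
    // Path to the helper executable that sets up the Linux sandbox before
    // running the sandboxed command, playing the role that
    // /usr/bin/sandbox-exec plays for Seatbelt on macOS; None when no such
    // helper is available.
    pub codex_linux_sandbox_exe: Option<PathBuf>,
    // ...other configuration fields
}
```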
2025-05-22 21:52:28 -07:00
Michael Bolin
497c5396c0 feat: add mcp subcommand to CLI to run Codex as an MCP server (#934)
Previously, running Codex as an MCP server required a standalone binary
in our Cargo workspace, but this PR makes it available as a subcommand
(`mcp`) of the main CLI.

Ran this with:

```
RUST_LOG=debug npx @modelcontextprotocol/inspector cargo run --bin codex -- mcp
```

and verified it worked as expected in the inspector at
`http://127.0.0.1:6274/`.
2025-05-14 13:15:41 -07:00