// llmx/codex-rs/mcp-server/src/message_processor.rs
use std::collections::HashMap;
use std::path::PathBuf;
use crate::codex_message_processor::CodexMessageProcessor;
use crate::codex_tool_config::CodexToolCallParam;
use crate::codex_tool_config::CodexToolCallReplyParam;
use crate::codex_tool_config::create_tool_for_codex_tool_call_param;
use crate::codex_tool_config::create_tool_for_codex_tool_call_reply_param;
use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use crate::outgoing_message::OutgoingMessageSender;
use codex_protocol::mcp_protocol::ClientRequest;
use codex_core::ConversationManager;
use codex_core::config::Config;
use codex_core::protocol::Submission;
use codex_login::AuthManager;
use mcp_types::CallToolRequestParams;
use mcp_types::CallToolResult;
use mcp_types::ClientRequest as McpClientRequest;
use mcp_types::ContentBlock;
use mcp_types::JSONRPCError;
use mcp_types::JSONRPCErrorError;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCRequest;
use mcp_types::JSONRPCResponse;
use mcp_types::ListToolsResult;
use mcp_types::ModelContextProtocolRequest;
use mcp_types::RequestId;
use mcp_types::ServerCapabilitiesTools;
use mcp_types::ServerNotification;
use mcp_types::TextContent;
use serde_json::json;
use std::sync::Arc;
use tokio::sync::Mutex;
use tokio::task;
use uuid::Uuid;
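
/// Routes JSON-RPC traffic between the MCP client and Codex: requests that
/// match the Codex-specific `ClientRequest` wire format are delegated to
/// `CodexMessageProcessor`, while standard MCP requests are handled here.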
pub(crate) struct MessageProcessor {
    codex_message_processor: CodexMessageProcessor,
    outgoing: Arc<OutgoingMessageSender>,
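    /// True once the client has completed the `initialize` handshake.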
    initialized: bool,
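    /// Path to the `codex-linux-sandbox` helper executable, if one is
    /// available in this build; forwarded to Codex for sandboxed execution.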
    codex_linux_sandbox_exe: Option<PathBuf>,
    conversation_manager: Arc<ConversationManager>,
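    /// Maps the JSON-RPC request ID of each in-flight `codex` tool call to
    /// the UUID of the conversation serving it, so later messages such as
    /// cancellations can be routed to the right session.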
    running_requests_id_to_codex_uuid: Arc<Mutex<HashMap<RequestId, Uuid>>>,
}

impl MessageProcessor {
    /// Create a new `MessageProcessor`, retaining a handle to the outgoing
    /// `Sender` so handlers can enqueue messages to be written to stdout.
    pub(crate) fn new(
        outgoing: OutgoingMessageSender,
        codex_linux_sandbox_exe: Option<PathBuf>,
        config: Arc<Config>,
    ) -> Self {
        let outgoing = Arc::new(outgoing);
        let auth_manager =
            AuthManager::shared(config.codex_home.clone(), config.preferred_auth_method);
        let conversation_manager = Arc::new(ConversationManager::new(auth_manager.clone()));
        let codex_message_processor = CodexMessageProcessor::new(
            auth_manager,
            conversation_manager.clone(),
            outgoing.clone(),
            codex_linux_sandbox_exe.clone(),
            config,
        );

        Self {
            codex_message_processor,
            outgoing,
            initialized: false,
            codex_linux_sandbox_exe,
            conversation_manager,
            running_requests_id_to_codex_uuid: Arc::new(Mutex::new(HashMap::new())),
        }
    }
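
    /// Process a single JSON-RPC request from the client.
    ///
    /// The request is first tried against the Codex wire format, i.e. plain
    /// JSON-RPC methods such as `newConversation` rather than MCP
    /// `tools/call` envelopes, for example:
    ///
    /// ```json
    /// { "jsonrpc": "2.0", "method": "newConversation", "id": 42,
    ///   "params": { "model": "gpt-5", "approvalPolicy": "on-request" } }
    /// ```
    ///
    /// If it parses as a `ClientRequest`, it is delegated to
    /// `CodexMessageProcessor`; otherwise it is dispatched below as a
    /// standard MCP request.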
    pub(crate) async fn process_request(&mut self, request: JSONRPCRequest) {
        if let Ok(request_json) = serde_json::to_value(request.clone())
            && let Ok(codex_request) = serde_json::from_value::<ClientRequest>(request_json)
        {
            // If the request is a Codex request, handle it with the Codex
            // message processor.
            self.codex_message_processor
                .process_request(codex_request)
                .await;
            return;
        }

        // Hold on to the ID so we can respond.
        let request_id = request.id.clone();

        let client_request = match McpClientRequest::try_from(request) {
            Ok(client_request) => client_request,
            Err(e) => {
                tracing::warn!("Failed to convert request: {e}");
                return;
            }
        };

        // Dispatch to a dedicated handler for each request type.
        match client_request {
            McpClientRequest::InitializeRequest(params) => {
                self.handle_initialize(request_id, params).await;
            }
            McpClientRequest::PingRequest(params) => {
                self.handle_ping(request_id, params).await;
            }
            McpClientRequest::ListResourcesRequest(params) => {
                self.handle_list_resources(params);
            }
            McpClientRequest::ListResourceTemplatesRequest(params) => {
                self.handle_list_resource_templates(params);
            }
            McpClientRequest::ReadResourceRequest(params) => {
                self.handle_read_resource(params);
            }
            McpClientRequest::SubscribeRequest(params) => {
                self.handle_subscribe(params);
            }
            McpClientRequest::UnsubscribeRequest(params) => {
                self.handle_unsubscribe(params);
            }
            McpClientRequest::ListPromptsRequest(params) => {
                self.handle_list_prompts(params);
            }
            McpClientRequest::GetPromptRequest(params) => {
                self.handle_get_prompt(params);
            }
            McpClientRequest::ListToolsRequest(params) => {
                self.handle_list_tools(request_id, params).await;
            }
            McpClientRequest::CallToolRequest(params) => {
                self.handle_call_tool(request_id, params).await;
            }
            McpClientRequest::SetLevelRequest(params) => {
                self.handle_set_level(params);
            }
            McpClientRequest::CompleteRequest(params) => {
                self.handle_complete(params);
            }
        }
    }

    /// Handle a standalone JSON-RPC response originating from the peer.
    pub(crate) async fn process_response(&mut self, response: JSONRPCResponse) {
        tracing::info!("<- response: {:?}", response);
        let JSONRPCResponse { id, result, .. } = response;
        self.outgoing.notify_client_response(id, result).await
    }

    /// Handle a fire-and-forget JSON-RPC notification.
    pub(crate) async fn process_notification(&mut self, notification: JSONRPCNotification) {
        let server_notification = match ServerNotification::try_from(notification) {
            Ok(n) => n,
            Err(e) => {
                tracing::warn!("Failed to convert notification: {e}");
                return;
            }
        };

        // Similar to requests, route each notification type to its own stub
        // handler so additional logic can be implemented incrementally.
        match server_notification {
            ServerNotification::CancelledNotification(params) => {
                self.handle_cancelled_notification(params).await;
            }
            ServerNotification::ProgressNotification(params) => {
                self.handle_progress_notification(params);
            }
            ServerNotification::ResourceListChangedNotification(params) => {
                self.handle_resource_list_changed(params);
            }
            ServerNotification::ResourceUpdatedNotification(params) => {
                self.handle_resource_updated(params);
            }
            ServerNotification::PromptListChangedNotification(params) => {
                self.handle_prompt_list_changed(params);
            }
            ServerNotification::ToolListChangedNotification(params) => {
                self.handle_tool_list_changed(params);
            }
            ServerNotification::LoggingMessageNotification(params) => {
                self.handle_logging_message(params);
            }
        }
    }

    /// Handle an error object received from the peer.
    pub(crate) fn process_error(&mut self, err: JSONRPCError) {
        tracing::error!("<- error: {:?}", err);
    }
    async fn handle_initialize(
        &mut self,
        id: RequestId,
        params: <mcp_types::InitializeRequest as ModelContextProtocolRequest>::Params,
    ) {
        tracing::info!("initialize -> params: {:?}", params);
        if self.initialized {
            // Already initialized: send a JSON-RPC error response.
            let error = JSONRPCErrorError {
                code: INVALID_REQUEST_ERROR_CODE,
message: "initialize called more than once".to_string(),
data: None,
};
self.outgoing.send_error(id, error).await;
return;
}
self.initialized = true;
// Build a minimal InitializeResult: advertise only tool support (with
// list_changed notifications) and echo the client's protocol version.
let result = mcp_types::InitializeResult {
capabilities: mcp_types::ServerCapabilities {
completions: None,
experimental: None,
logging: None,
prompts: None,
resources: None,
tools: Some(ServerCapabilitiesTools {
list_changed: Some(true),
}),
},
instructions: None,
protocol_version: params.protocol_version.clone(),
server_info: mcp_types::Implementation {
name: "codex-mcp-server".to_string(),
version: env!("CARGO_PKG_VERSION").to_string(),
title: Some("Codex".to_string()),
},
};
self.send_response::<mcp_types::InitializeRequest>(id, result)
.await;
}
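/// Forward a typed response to the client via the shared
/// `OutgoingMessageSender`. The `T` parameter ties `result` to the request
/// type from `mcp_types` (e.g. `send_response::<mcp_types::PingRequest>`),
/// so a handler cannot pair a request id with the wrong result shape.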
async fn send_response<T>(&self, id: RequestId, result: T::Result)
where
T: ModelContextProtocolRequest,
{
self.outgoing.send_response(id, result).await;
}
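/// MCP `ping` is a lightweight liveness check; the reply is an empty JSON
/// object.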
async fn handle_ping(
&self,
id: RequestId,
params: <mcp_types::PingRequest as mcp_types::ModelContextProtocolRequest>::Params,
) {
tracing::info!("ping -> params: {:?}", params);
let result = json!({});
self.send_response::<mcp_types::PingRequest>(id, result)
.await;
}
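// The resource/prompt handlers below are stubs: the InitializeResult above
// advertises no `resources` or `prompts` capabilities, so these requests
// are only logged for debugging.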
fn handle_list_resources(
&self,
params: <mcp_types::ListResourcesRequest as mcp_types::ModelContextProtocolRequest>::Params,
) {
tracing::info!("resources/list -> params: {:?}", params);
}
fn handle_list_resource_templates(
&self,
params:
<mcp_types::ListResourceTemplatesRequest as mcp_types::ModelContextProtocolRequest>::Params,
) {
tracing::info!("resources/templates/list -> params: {:?}", params);
}
fn handle_read_resource(
&self,
params: <mcp_types::ReadResourceRequest as mcp_types::ModelContextProtocolRequest>::Params,
) {
tracing::info!("resources/read -> params: {:?}", params);
}
fn handle_subscribe(
&self,
params: <mcp_types::SubscribeRequest as mcp_types::ModelContextProtocolRequest>::Params,
) {
tracing::info!("resources/subscribe -> params: {:?}", params);
}
fn handle_unsubscribe(
&self,
params: <mcp_types::UnsubscribeRequest as mcp_types::ModelContextProtocolRequest>::Params,
) {
tracing::info!("resources/unsubscribe -> params: {:?}", params);
}
fn handle_list_prompts(
&self,
params: <mcp_types::ListPromptsRequest as mcp_types::ModelContextProtocolRequest>::Params,
) {
tracing::info!("prompts/list -> params: {:?}", params);
}
fn handle_get_prompt(
&self,
params: <mcp_types::GetPromptRequest as mcp_types::ModelContextProtocolRequest>::Params,
) {
tracing::info!("prompts/get -> params: {:?}", params);
}
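/// Advertise the two tools this server exposes: `codex`, which starts a new
/// Codex session, and `codex-reply`, which continues an existing one.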
async fn handle_list_tools(
&self,
id: RequestId,
params: <mcp_types::ListToolsRequest as mcp_types::ModelContextProtocolRequest>::Params,
) {
tracing::trace!("tools/list -> {params:?}");
let result = ListToolsResult {
tools: vec![
create_tool_for_codex_tool_call_param(),
create_tool_for_codex_tool_call_reply_param(),
],
next_cursor: None,
};
self.send_response::<mcp_types::ListToolsRequest>(id, result)
.await;
}
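/// Dispatch `tools/call` by tool name. An unknown name is reported as a
/// `CallToolResult` with `is_error: Some(true)` rather than a JSON-RPC
/// protocol error, so the client still receives a normal tool result.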
async fn handle_call_tool(
&self,
id: RequestId,
params: <mcp_types::CallToolRequest as mcp_types::ModelContextProtocolRequest>::Params,
) {
tracing::info!("tools/call -> params: {:?}", params);
let CallToolRequestParams { name, arguments } = params;
match name.as_str() {
"codex" => self.handle_tool_call_codex(id, arguments).await,
"codex-reply" => {
self.handle_tool_call_codex_session_reply(id, arguments)
.await
}
_ => {
let result = CallToolResult {
content: vec![ContentBlock::TextContent(TextContent {
r#type: "text".to_string(),
text: format!("Unknown tool '{name}'"),
annotations: None,
})],
is_error: Some(true),
structured_content: None,
};
self.send_response::<mcp_types::CallToolRequest>(id, result)
.await;
}
}
}
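/// Handle a `codex` tool call: deserialize `arguments` into
/// `CodexToolCallParam`, turn it into a session `Config`, and spawn the
/// session on a background task. A well-formed request carries at least a
/// `prompt`, e.g. (illustrative): `{"prompt": "Fix the failing test"}`.
/// Every failure mode short-circuits with an `is_error` tool result.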
async fn handle_tool_call_codex(&self, id: RequestId, arguments: Option<serde_json::Value>) {
let (initial_prompt, config): (String, Config) = match arguments {
Some(json_val) => match serde_json::from_value::<CodexToolCallParam>(json_val) {
Ok(tool_cfg) => match tool_cfg.into_config(self.codex_linux_sandbox_exe.clone()) {
Ok(cfg) => cfg,
Err(e) => {
let result = CallToolResult {
content: vec![ContentBlock::TextContent(TextContent {
r#type: "text".to_owned(),
text: format!(
"Failed to load Codex configuration from overrides: {e}"
),
annotations: None,
})],
is_error: Some(true),
structured_content: None,
};
self.send_response::<mcp_types::CallToolRequest>(id, result)
.await;
return;
}
},
Err(e) => {
let result = CallToolResult {
content: vec![ContentBlock::TextContent(TextContent {
r#type: "text".to_owned(),
text: format!("Failed to parse configuration for Codex tool: {e}"),
annotations: None,
})],
is_error: Some(true),
structured_content: None,
};
self.send_response::<mcp_types::CallToolRequest>(id, result)
.await;
return;
}
},
None => {
let result = CallToolResult {
content: vec![ContentBlock::TextContent(TextContent {
r#type: "text".to_string(),
text:
"Missing arguments for codex tool-call; the `prompt` field is required."
.to_string(),
annotations: None,
})],
is_error: Some(true),
structured_content: None,
};
self.send_response::<mcp_types::CallToolRequest>(id, result)
.await;
return;
}
};
// Clone the shared handles that need to move into the spawned task.
let outgoing = self.outgoing.clone();
let conversation_manager = self.conversation_manager.clone();
let running_requests_id_to_codex_uuid = self.running_requests_id_to_codex_uuid.clone();
// Spawn an async task to handle the Codex session so that we do not
// block the synchronous message-processing loop.
task::spawn(async move {
// Run the Codex session and stream events back to the client.
crate::codex_tool_runner::run_codex_tool_session(
id,
initial_prompt,
config,
outgoing,
conversation_manager,
running_requests_id_to_codex_uuid,
)
.await;
});
}
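/// Handles a `codex-reply` tool call, which sends a follow-up `prompt` to an
/// existing Codex session identified by `session_id`. The arguments are
/// expected to deserialize into `CodexToolCallReplyParam`, e.g. (assuming the
/// on-the-wire field names match the struct fields, as the error messages
/// below suggest):
///
/// ```json
/// { "session_id": "67e55044-10b1-426f-9247-bb680e5fe0c8", "prompt": "Now add tests." }
/// ```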
async fn handle_tool_call_codex_session_reply(
&self,
request_id: RequestId,
arguments: Option<serde_json::Value>,
) {
tracing::info!("tools/call -> params: {:?}", arguments);
// Parse the `session_id` and `prompt` arguments.
let CodexToolCallReplyParam { session_id, prompt } = match arguments {
Some(json_val) => match serde_json::from_value::<CodexToolCallReplyParam>(json_val) {
Ok(params) => params,
Err(e) => {
tracing::error!("Failed to parse Codex tool call reply parameters: {e}");
let result = CallToolResult {
content: vec![ContentBlock::TextContent(TextContent {
r#type: "text".to_owned(),
text: format!("Failed to parse configuration for Codex tool: {e}"),
annotations: None,
})],
is_error: Some(true),
structured_content: None,
};
self.send_response::<mcp_types::CallToolRequest>(request_id, result)
.await;
return;
}
},
None => {
tracing::error!(
"Missing arguments for codex-reply tool-call; the `session_id` and `prompt` fields are required."
);
let result = CallToolResult {
content: vec![ContentBlock::TextContent(TextContent {
r#type: "text".to_owned(),
text: "Missing arguments for codex-reply tool-call; the `session_id` and `prompt` fields are required.".to_owned(),
annotations: None,
})],
is_error: Some(true),
structured_content: None,
};
self.send_response::<mcp_types::CallToolRequest>(request_id, result)
.await;
return;
}
};
let session_id = match Uuid::parse_str(&session_id) {
Ok(id) => id,
Err(e) => {
tracing::error!("Failed to parse session_id: {e}");
let result = CallToolResult {
content: vec![ContentBlock::TextContent(TextContent {
r#type: "text".to_owned(),
text: format!("Failed to parse session_id: {e}"),
annotations: None,
})],
is_error: Some(true),
structured_content: None,
};
self.send_response::<mcp_types::CallToolRequest>(request_id, result)
.await;
return;
}
};
// Clone the handles that the spawned reply task needs to own.
let outgoing = self.outgoing.clone();
let running_requests_id_to_codex_uuid = self.running_requests_id_to_codex_uuid.clone();
let codex = match self.conversation_manager.get_conversation(session_id).await {
Ok(c) => c,
Err(_) => {
tracing::warn!("Session not found for session_id: {session_id}");
let result = CallToolResult {
content: vec![ContentBlock::TextContent(TextContent {
r#type: "text".to_owned(),
text: format!("Session not found for session_id: {session_id}"),
annotations: None,
})],
is_error: Some(true),
structured_content: None,
};
outgoing.send_response(request_id, result).await;
return;
}
};
// Spawn the long-running reply handler.
tokio::spawn({
let codex = codex.clone();
let outgoing = outgoing.clone();
let prompt = prompt.clone();
let running_requests_id_to_codex_uuid = running_requests_id_to_codex_uuid.clone();
async move {
crate::codex_tool_runner::run_codex_tool_session_reply(
codex,
outgoing,
request_id,
prompt,
running_requests_id_to_codex_uuid,
session_id,
)
.await;
}
});
}
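/// Records the requested logging level. Adjusting the active tracing filter
/// in response is not implemented here.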
fn handle_set_level(
&self,
params: <mcp_types::SetLevelRequest as mcp_types::ModelContextProtocolRequest>::Params,
) {
tracing::info!("logging/setLevel -> params: {:?}", params);
}
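/// Records `completion/complete` requests; no completion values are produced.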
fn handle_complete(
&self,
params: <mcp_types::CompleteRequest as mcp_types::ModelContextProtocolRequest>::Params,
) {
tracing::info!("completion/complete -> params: {:?}", params);
}
// ---------------------------------------------------------------------
// Notification handlers
// ---------------------------------------------------------------------
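/// Handles an MCP `notifications/cancelled` notification by interrupting the
/// Codex conversation tied to the cancelled request: the request id is mapped
/// to its session `Uuid`, `Op::Interrupt` is submitted to that conversation,
/// and the entry is removed from the in-flight request map.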
async fn handle_cancelled_notification(
&self,
params: <mcp_types::CancelledNotification as mcp_types::ModelContextProtocolNotification>::Params,
) {
let request_id = params.request_id;
// Create a stable string form early for logging and submission id.
let request_id_string = match &request_id {
RequestId::String(s) => s.clone(),
RequestId::Integer(i) => i.to_string(),
};
// Look up the session_id while briefly holding the map lock, releasing it before any awaits.
let session_id = {
let map_guard = self.running_requests_id_to_codex_uuid.lock().await;
match map_guard.get(&request_id) {
Some(id) => *id, // Uuid is Copy
None => {
tracing::warn!("Session not found for request_id: {}", request_id_string);
return;
}
}
};
tracing::info!("session_id: {session_id}");
// Obtain the Codex conversation from the server.
let codex_arc = match self.conversation_manager.get_conversation(session_id).await {
Ok(c) => c,
Err(_) => {
tracing::warn!("Session not found for session_id: {session_id}");
return;
}
};
// Submit interrupt to Codex.
let result = codex_arc
.submit_with_id(Submission {
id: request_id_string,
op: codex_core::protocol::Op::Interrupt,
})
.await;
if let Err(e) = result {
tracing::error!("Failed to submit interrupt to Codex: {e}");
return;
}
// Unregister the request id so it does not linger in the map.
self.running_requests_id_to_codex_uuid
.lock()
.await
.remove(&request_id);
}
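// The remaining notification handlers currently just log their payloads.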
fn handle_progress_notification(
&self,
params: <mcp_types::ProgressNotification as mcp_types::ModelContextProtocolNotification>::Params,
) {
tracing::info!("notifications/progress -> params: {:?}", params);
}
fn handle_resource_list_changed(
&self,
params: <mcp_types::ResourceListChangedNotification as mcp_types::ModelContextProtocolNotification>::Params,
) {
tracing::info!(
"notifications/resources/list_changed -> params: {:?}",
params
);
}
fn handle_resource_updated(
&self,
params: <mcp_types::ResourceUpdatedNotification as mcp_types::ModelContextProtocolNotification>::Params,
) {
tracing::info!("notifications/resources/updated -> params: {:?}", params);
}
fn handle_prompt_list_changed(
&self,
params: <mcp_types::PromptListChangedNotification as mcp_types::ModelContextProtocolNotification>::Params,
) {
tracing::info!("notifications/prompts/list_changed -> params: {:?}", params);
}
fn handle_tool_list_changed(
&self,
params: <mcp_types::ToolListChangedNotification as mcp_types::ModelContextProtocolNotification>::Params,
) {
tracing::info!("notifications/tools/list_changed -> params: {:?}", params);
}
fn handle_logging_message(
&self,
params: <mcp_types::LoggingMessageNotification as mcp_types::ModelContextProtocolNotification>::Params,
) {
tracing::info!("notifications/message -> params: {:?}", params);
}
}