Files
llmx/codex-rs/app-server-protocol/src/protocol.rs
(1327 lines, 44 KiB, Rust)

feat: support traditional JSON-RPC request/response in MCP server (#2264)

This introduces a new set of request types that our `codex mcp` supports. Note that these do not conform to MCP tool calls, so instead of having to send something like this:

```json
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "id": 42,
  "params": {
    "name": "newConversation",
    "arguments": { "model": "gpt-5", "approvalPolicy": "on-request" }
  }
}
```

we can send something like this:

```json
{
  "jsonrpc": "2.0",
  "method": "newConversation",
  "id": 42,
  "params": { "model": "gpt-5", "approvalPolicy": "on-request" }
}
```

Admittedly, this new format is not a valid MCP tool call, but we are OK with that right now. (That is, not everything we might want to request of `codex mcp` is something that is appropriate for an autonomous agent to do.) To start, this introduces four request types:

- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`

The new `mcp-server/tests/codex_message_processor_flow.rs` shows how these can be used. The types are defined on the `CodexRequest` enum, so we introduce a new `CodexMessageProcessor` that is responsible for dealing with requests from this enum. The top-level `MessageProcessor` has been updated so that when `process_request()` is called, it first checks whether the request conforms to `CodexRequest` and dispatches it to `CodexMessageProcessor` if so.

Note that I also decided to use `camelCase` for the on-the-wire format, as that seems to be the convention for MCP. For the moment, the new protocol is defined in `wire_format.rs` within the `mcp-server` crate, but in a subsequent PR, I will probably move it to its own crate to ensure the protocol has minimal dependencies and that we can codegen a schema from it.

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264

2025-08-13 17:36:29 -07:00
use std::collections::HashMap;
use std::path::PathBuf;
use crate::JSONRPCNotification;
use crate::JSONRPCRequest;
use crate::RequestId;
use codex_protocol::ConversationId;
use codex_protocol::account::PlanType;
Add forced_chatgpt_workspace_id and forced_login_method configuration options (#5303)

This PR adds support for configs to specify a forced login method (chatgpt or api) as well as a forced chatgpt workspace id. This lets enterprises use [managed configs](https://developers.openai.com/codex/security#managed-configuration) to force all employees to use their company's workspace instead of their own or any other. When a workspace id is set, a query param is sent to the login flow which auto-selects the given workspace, or errors if the user isn't a member of it. This PR is large, but a large percentage of it is tests, wiring, and required formatting changes.

Screenshots (omitted): API login with chatgpt forced; ChatGPT login with api forced; onboarding with api forced; onboarding with ChatGPT forced; logging in with the wrong workspace.

2025-10-20 08:50:54 -07:00
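A hypothetical sketch of how the two options described above might look in a managed `config.toml`. The key names are taken from the PR title; the value shown for the workspace id and the exact file placement are assumptions for illustration only:

```toml
# Force all employees to log in via ChatGPT...
forced_login_method = "chatgpt"
# ...and only into this workspace (placeholder id).
forced_chatgpt_workspace_id = "ws-example-12345"
```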
use codex_protocol::config_types::ForcedLoginMethod;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::RateLimitSnapshot;
use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxCommandAssessment;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::TurnAbortReason;
use paste::paste;
use schemars::JsonSchema;
use serde::Deserialize;
use serde::Serialize;
use strum_macros::Display;
use ts_rs::TS;
use uuid::Uuid;
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, JsonSchema, TS)]
#[ts(type = "string")]
pub struct GitSha(pub String);
impl GitSha {
    pub fn new(sha: &str) -> Self {
        Self(sha.to_string())
    }
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, Display, JsonSchema, TS)]
#[serde(rename_all = "lowercase")]
pub enum AuthMode {
    ApiKey,
    ChatGPT,
}
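With `#[serde(rename_all = "lowercase")]`, the two `AuthMode` variants serialize to lowercased variant names on the wire. A sketch of the resulting strings, shown as a bare JSON list for illustration:

```json
["apikey", "chatgpt"]
```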
/// Generates an `enum ClientRequest` where each variant is a request that the
/// client can send to the server. Each variant has associated `params` and
/// `response` types. Also generates a `export_client_responses()` function to
/// export all response types to TypeScript.
macro_rules! client_request_definitions {
    (
        $(
            $(#[$variant_meta:meta])*
            $variant:ident {
                params: $(#[$params_meta:meta])* $params:ty,
                response: $response:ty,
            }
        ),* $(,)?
    ) => {
        /// Request from the client to the server.
        #[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
        #[serde(tag = "method", rename_all = "camelCase")]
        pub enum ClientRequest {
            $(
                $(#[$variant_meta])*
                $variant {
                    #[serde(rename = "id")]
                    request_id: RequestId,
                    $(#[$params_meta])*
                    params: $params,
                },
            )*
        }

        pub fn export_client_responses(
            out_dir: &::std::path::Path,
        ) -> ::std::result::Result<(), ::ts_rs::ExportError> {
            $(
                <$response as ::ts_rs::TS>::export_all_to(out_dir)?;
            )*
            Ok(())
        }

        pub fn export_client_response_schemas(
            out_dir: &::std::path::Path,
        ) -> ::anyhow::Result<()> {
            $(
                crate::export::write_json_schema::<$response>(out_dir, stringify!($response))?;
            )*
            Ok(())
        }
    };
}
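A sketch of the wire shape these serde attributes produce: `#[serde(tag = "method", ...)]` makes the enum internally tagged, the tag value is the camelCased (or explicitly renamed) variant name, and `request_id` is renamed to `id`. For a `NewConversation` request this matches the example from the #2264 commit message above (the `jsonrpc` envelope field is added by the transport layer, not this enum):

```json
{
  "jsonrpc": "2.0",
  "method": "newConversation",
  "id": 42,
  "params": { "model": "gpt-5", "approvalPolicy": "on-request" }
}
```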
client_request_definitions! {
    /// NEW APIs
    #[serde(rename = "model/list")]
    #[ts(rename = "model/list")]
    ListModels {
        params: ListModelsParams,
        response: ListModelsResponse,
    },
    #[serde(rename = "account/login")]
    #[ts(rename = "account/login")]
    LoginAccount {
        params: LoginAccountParams,
        response: LoginAccountResponse,
    },
    #[serde(rename = "account/logout")]
    #[ts(rename = "account/logout")]
    LogoutAccount {
        params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
        response: LogoutAccountResponse,
    },
    #[serde(rename = "account/rateLimits/read")]
    #[ts(rename = "account/rateLimits/read")]
    GetAccountRateLimits {
        params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
        response: GetAccountRateLimitsResponse,
    },
    #[serde(rename = "feedback/upload")]
    #[ts(rename = "feedback/upload")]
    UploadFeedback {
        params: UploadFeedbackParams,
        response: UploadFeedbackResponse,
    },
    #[serde(rename = "account/read")]
    #[ts(rename = "account/read")]
    GetAccount {
        params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
        response: GetAccountResponse,
    },
    /// DEPRECATED APIs below
fix: separate `codex mcp` into `codex mcp-server` and `codex app-server` (#4471)

This is a very large PR with some non-backwards-compatible changes. Historically, `codex mcp` (or `codex mcp serve`) started a JSON-RPC-ish server that had two overlapping responsibilities:

- Running an MCP server, providing some basic tool calls.
- Running the app server used to power experiences such as the VS Code extension.

This PR aims to separate these into distinct concepts:

- `codex mcp-server` for the MCP server
- `codex app-server` for the "application server"

Note `codex mcp` still exists because it already has its own subcommands for MCP management (`list`, `add`, etc.). The MCP logic continues to live in `codex-rs/mcp-server`, whereas the refactored app server logic is in the new `codex-rs/app-server` folder. Note that most of the existing integration tests in `codex-rs/mcp-server/tests/suite` were actually for the app server, so all the tests have been moved, with the exception of `codex-rs/mcp-server/tests/suite/mod.rs`. Because this is already a large diff, I tried not to change more than I had to, so `codex-rs/app-server/tests/common/mcp_process.rs` still uses the name `McpProcess` for now, but I will do some mechanical renamings to things like `AppServer` in subsequent PRs.

While `mcp-server` and `app-server` share some overlapping functionality (like reading streams of JSONL and dispatching based on message types) and some differences (completely different message types), I ended up doing a bit of copypasta between the two crates, as both have somewhat similar `message_processor.rs` and `outgoing_message.rs` files for now, though I expect them to diverge more in the near future.

One material change is the initialize handshake for `codex app-server`, as we no longer use the MCP types for that handshake. Instead, we update `codex-rs/protocol/src/mcp_protocol.rs` to add an `Initialize` variant to `ClientRequest`, which takes the `ClientInfo` object we need to update the `USER_AGENT_SUFFIX` in `codex-rs/app-server/src/message_processor.rs`. One other material change is in `codex-rs/app-server/src/codex_message_processor.rs`, where I eliminated a use of the `send_event_as_notification()` method I am generally trying to deprecate (because it blindly maps an `EventMsg` into a `JSONNotification`) in favor of `send_server_notification()`, which takes a `ServerNotification`, as that is intended to be a custom enum of all notification types supported by the app server. To make this update, I had to introduce a new variant of `ServerNotification`, `SessionConfigured`, which is a non-backwards-compatible change from the old `codex mcp`, and clients will have to be updated after the next release that contains this PR. Note that `codex-rs/app-server/tests/suite/list_resume.rs` also had to be updated to reflect this change. I introduced `codex-rs/utils/json-to-toml/src/lib.rs` as a small utility crate to avoid some of the copying between `mcp-server` and `app-server`.

2025-09-30 00:06:18 -07:00
    Initialize {
        params: InitializeParams,
        response: InitializeResponse,
    },
    NewConversation {
        params: NewConversationParams,
        response: NewConversationResponse,
    },
    GetConversationSummary {
        params: GetConversationSummaryParams,
        response: GetConversationSummaryResponse,
    },
    /// List recorded Codex conversations (rollouts) with optional pagination and search.
    ListConversations {
        params: ListConversationsParams,
        response: ListConversationsResponse,
    },
    /// Resume a recorded Codex conversation from a rollout file.
    ResumeConversation {
        params: ResumeConversationParams,
        response: ResumeConversationResponse,
    },
    ArchiveConversation {
        params: ArchiveConversationParams,
        response: ArchiveConversationResponse,
    },
    SendUserMessage {
        params: SendUserMessageParams,
        response: SendUserMessageResponse,
    },
    SendUserTurn {
        params: SendUserTurnParams,
        response: SendUserTurnResponse,
    },
    InterruptConversation {
        params: InterruptConversationParams,
        response: InterruptConversationResponse,
    },
    AddConversationListener {
        params: AddConversationListenerParams,
        response: AddConversationSubscriptionResponse,
    },
RemoveConversationListener {
params: RemoveConversationListenerParams,
response: RemoveConversationSubscriptionResponse,
},
GitDiffToRemote {
params: GitDiffToRemoteParams,
response: GitDiffToRemoteResponse,
},
LoginApiKey {
params: LoginApiKeyParams,
response: LoginApiKeyResponse,
},
LoginChatGpt {
params:
#[ts(type = "undefined")]
#[serde(skip_serializing_if = "Option::is_none")]
Option<()>,
response: LoginChatGptResponse,
},
CancelLoginChatGpt {
params: CancelLoginChatGptParams,
response: CancelLoginChatGptResponse,
},
LogoutChatGpt {
params:
#[ts(type = "undefined")]
#[serde(skip_serializing_if = "Option::is_none")]
Option<()>,
response: LogoutChatGptResponse,
},
GetAuthStatus {
params: GetAuthStatusParams,
response: GetAuthStatusResponse,
},
GetUserSavedConfig {
params:
#[ts(type = "undefined")]
#[serde(skip_serializing_if = "Option::is_none")]
Option<()>,
response: GetUserSavedConfigResponse,
},
SetDefaultModel {
params: SetDefaultModelParams,
response: SetDefaultModelResponse,
},
GetUserAgent {
params:
#[ts(type = "undefined")]
#[serde(skip_serializing_if = "Option::is_none")]
Option<()>,
response: GetUserAgentResponse,
},
UserInfo {
params:
#[ts(type = "undefined")]
#[serde(skip_serializing_if = "Option::is_none")]
Option<()>,
response: UserInfoResponse,
},
FuzzyFileSearch {
params: FuzzyFileSearchParams,
response: FuzzyFileSearchResponse,
},
/// Execute a command (argv vector) under the server's sandbox.
ExecOneOffCommand {
params: ExecOneOffCommandParams,
response: ExecOneOffCommandResponse,
},
}
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
pub enum Account {
#[serde(rename = "apiKey", rename_all = "camelCase")]
#[ts(rename = "apiKey", rename_all = "camelCase")]
ApiKey { api_key: String },
#[serde(rename = "chatgpt", rename_all = "camelCase")]
#[ts(rename = "chatgpt", rename_all = "camelCase")]
ChatGpt {
email: Option<String>,
plan_type: PlanType,
},
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetAccountResponse {
pub account: Account,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct InitializeParams {
pub client_info: ClientInfo,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ClientInfo {
pub name: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub title: Option<String>,
pub version: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct InitializeResponse {
pub user_agent: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct NewConversationParams {
/// Optional override for the model name (e.g. "o3", "o4-mini").
#[serde(skip_serializing_if = "Option::is_none")]
pub model: Option<String>,
/// Override the model provider to use for this session.
#[serde(skip_serializing_if = "Option::is_none")]
pub model_provider: Option<String>,
/// Configuration profile from config.toml to specify default options.
#[serde(skip_serializing_if = "Option::is_none")]
pub profile: Option<String>,
/// Working directory for the session. If relative, it is resolved against
/// the server process's current working directory.
#[serde(skip_serializing_if = "Option::is_none")]
pub cwd: Option<String>,
/// Approval policy for shell commands generated by the model:
/// `untrusted`, `on-failure`, `on-request`, `never`.
#[serde(skip_serializing_if = "Option::is_none")]
pub approval_policy: Option<AskForApproval>,
/// Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`.
#[serde(skip_serializing_if = "Option::is_none")]
pub sandbox: Option<SandboxMode>,
/// Individual config settings that will override what is in
/// CODEX_HOME/config.toml.
#[serde(skip_serializing_if = "Option::is_none")]
pub config: Option<HashMap<String, serde_json::Value>>,
/// The set of instructions to use instead of the default ones.
#[serde(skip_serializing_if = "Option::is_none")]
pub base_instructions: Option<String>,
/// Whether to include the apply patch tool in the conversation.
#[serde(skip_serializing_if = "Option::is_none")]
pub include_apply_patch_tool: Option<bool>,
}
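// On the wire, `NewConversationParams` is serialized in camelCase inside a
// plain JSON-RPC request (not an MCP `tools/call` envelope). A minimal,
// dependency-free sketch of that wire form, assembled by hand for
// illustration — the real crate derives this serialization via serde:

```rust
/// Hypothetical helper: builds the JSON-RPC request body for the
/// `newConversation` method with a model and approval policy.
fn new_conversation_request(id: u32, model: &str, approval_policy: &str) -> String {
    format!(
        "{{\"jsonrpc\":\"2.0\",\"id\":{id},\"method\":\"newConversation\",\
         \"params\":{{\"model\":\"{model}\",\"approvalPolicy\":\"{approval_policy}\"}}}}"
    )
}

fn main() {
    let req = new_conversation_request(42, "gpt-5", "on-request");
    // Note the camelCase field name `approvalPolicy`, per the serde
    // rename_all attribute on the struct above.
    assert!(req.contains("\"method\":\"newConversation\""));
    assert!(req.contains("\"approvalPolicy\":\"on-request\""));
    println!("{req}");
}
```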
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct NewConversationResponse {
pub conversation_id: ConversationId,
pub model: String,
/// Note this could be ignored by the model.
#[serde(skip_serializing_if = "Option::is_none")]
pub reasoning_effort: Option<ReasoningEffort>,
pub rollout_path: PathBuf,
}
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ResumeConversationResponse {
pub conversation_id: ConversationId,
pub model: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub initial_messages: Option<Vec<EventMsg>>,
pub rollout_path: PathBuf,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(untagged)]
pub enum GetConversationSummaryParams {
/// Provide the absolute or CODEX_HOME-relative rollout path directly.
RolloutPath {
#[serde(rename = "rolloutPath")]
rollout_path: PathBuf,
},
/// Provide a conversation id; the server locates the rollout using the
/// same logic as `resumeConversation`. This adds latency relative to
/// passing the rollout path directly, since the path must be found first.
ConversationId {
#[serde(rename = "conversationId")]
conversation_id: ConversationId,
},
}
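Because `GetConversationSummaryParams` is `#[serde(untagged)]`, the two variants are distinguished purely by which key appears in the params object; there is no discriminator field. A minimal sketch of the two accepted wire shapes (the path and id values are hypothetical):

```python
import json

# Variant 1: address the rollout file directly (lower latency).
by_path = {"rolloutPath": "/home/user/.codex/sessions/rollout.jsonl"}

# Variant 2: let the server resolve the path from the conversation id.
by_id = {"conversationId": "67e55044-10b1-426f-9247-bb680e5fe0c8"}

# Untagged deserialization picks the variant from the key alone.
for params in (by_path, by_id):
    assert set(json.loads(json.dumps(params))) in ({"rolloutPath"}, {"conversationId"})
```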
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetConversationSummaryResponse {
pub summary: ConversationSummary,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ListConversationsParams {
/// Optional page size; defaults to a reasonable server-side value.
#[serde(skip_serializing_if = "Option::is_none")]
pub page_size: Option<usize>,
/// Opaque pagination cursor returned by a previous call.
#[serde(skip_serializing_if = "Option::is_none")]
pub cursor: Option<String>,
/// Optional model provider filter (matches against session metadata).
/// - None => filter by the server's default model provider
/// - Some([]) => no filtering, include all providers
/// - Some([...]) => only include sessions with one of the specified providers
#[serde(skip_serializing_if = "Option::is_none")]
pub model_providers: Option<Vec<String>>,
}
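The `model_providers` field is deliberately tri-state: absent, empty list, and non-empty list all mean different things. A sketch of the documented semantics from the client's point of view (the provider name is hypothetical):

```python
def provider_filter(params: dict):
    """Mirror the documented semantics of `modelProviders` (sketch)."""
    if "modelProviders" not in params:   # None => server's default provider
        return "default-provider-only"
    providers = params["modelProviders"]
    if providers == []:                  # Some([]) => include all providers
        return "all"
    return providers                     # Some([...]) => only these providers

assert provider_filter({}) == "default-provider-only"
assert provider_filter({"modelProviders": []}) == "all"
assert provider_filter({"modelProviders": ["openai"]}) == ["openai"]
```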
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ConversationSummary {
pub conversation_id: ConversationId,
pub path: PathBuf,
pub preview: String,
/// RFC3339 timestamp string for the session start, if available.
#[serde(skip_serializing_if = "Option::is_none")]
pub timestamp: Option<String>,
/// Model provider recorded for the session (resolved when absent in metadata).
pub model_provider: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ListConversationsResponse {
pub items: Vec<ConversationSummary>,
/// Opaque cursor to pass to the next call to continue after the last item.
    /// If `None`, there are no more items to return.
#[serde(skip_serializing_if = "Option::is_none")]
pub next_cursor: Option<String>,
}
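An omitted `nextCursor` terminates pagination. A hypothetical client-side loop over `listConversations` (the `call` argument stands in for sending the JSON-RPC request; page contents are illustrative):

```python
def list_all(call):
    """Drain all pages; `call` is a stand-in for a listConversations request."""
    items, cursor = [], None
    while True:
        params = {"pageSize": 25}
        if cursor is not None:
            params["cursor"] = cursor
        page = call(params)
        items.extend(page["items"])
        cursor = page.get("nextCursor")  # field omitted => no more items
        if cursor is None:
            return items

# Fake two-page server for illustration.
pages = {None: {"items": [1, 2], "nextCursor": "abc"},
         "abc": {"items": [3]}}
assert list_all(lambda p: pages[p.get("cursor")]) == [1, 2, 3]
```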
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ListModelsParams {
/// Optional page size; defaults to a reasonable server-side value.
#[serde(skip_serializing_if = "Option::is_none")]
pub page_size: Option<usize>,
/// Opaque pagination cursor returned by a previous call.
#[serde(skip_serializing_if = "Option::is_none")]
pub cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct Model {
pub id: String,
pub model: String,
pub display_name: String,
pub description: String,
pub supported_reasoning_efforts: Vec<ReasoningEffortOption>,
pub default_reasoning_effort: ReasoningEffort,
    /// Only one model should be marked as default.
pub is_default: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ReasoningEffortOption {
pub reasoning_effort: ReasoningEffort,
pub description: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ListModelsResponse {
pub items: Vec<Model>,
/// Opaque cursor to pass to the next call to continue after the last item.
    /// If `None`, there are no more items to return.
#[serde(skip_serializing_if = "Option::is_none")]
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct UploadFeedbackParams {
pub classification: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub reason: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub conversation_id: Option<ConversationId>,
pub include_logs: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct UploadFeedbackResponse {
pub thread_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type")]
#[ts(tag = "type")]
pub enum LoginAccountParams {
#[serde(rename = "apiKey")]
#[ts(rename = "apiKey")]
ApiKey {
#[serde(rename = "apiKey")]
#[ts(rename = "apiKey")]
api_key: String,
},
#[serde(rename = "chatgpt")]
#[ts(rename = "chatgpt")]
ChatGpt,
}
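`LoginAccountParams` is internally tagged via `#[serde(tag = "type")]`, so the variant name travels in a `"type"` field alongside the payload; the unit variant serializes to the tag alone. A sketch of both shapes (the key value is hypothetical):

```python
import json

# Struct variant: tag plus its fields in one flat object.
api_key = {"type": "apiKey", "apiKey": "sk-example-not-a-real-key"}
# Unit variant: the tag is the whole payload.
chatgpt = {"type": "chatgpt"}

for params in (api_key, chatgpt):
    assert json.loads(json.dumps(params))["type"] in ("apiKey", "chatgpt")
```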
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginAccountResponse {
/// Only set if the login method is ChatGPT.
#[schemars(with = "String")]
pub login_id: Option<Uuid>,
/// URL the client should open in a browser to initiate the OAuth flow.
/// Only set if the login method is ChatGPT.
pub auth_url: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct LogoutAccountResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ResumeConversationParams {
/// Absolute path to the rollout JSONL file. If omitted, `conversationId` must be provided.
#[serde(skip_serializing_if = "Option::is_none")]
pub path: Option<PathBuf>,
    /// If the rollout path is not known, it can be discovered via the conversation id at the cost of extra latency.
#[serde(skip_serializing_if = "Option::is_none")]
pub conversation_id: Option<ConversationId>,
/// Optional overrides to apply when spawning the resumed session.
#[serde(skip_serializing_if = "Option::is_none")]
pub overrides: Option<NewConversationParams>,
}
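Exactly one of `path` or `conversationId` should be supplied: the path is the fast route, the id costs a lookup. Two hypothetical `resumeConversation` payloads (values illustrative, not real sessions):

```python
# Fast: client already knows the rollout file.
by_path = {"path": "/home/user/.codex/sessions/rollout.jsonl"}

# Slower: server must locate the rollout from the id; overrides are optional.
by_id = {"conversationId": "67e55044-10b1-426f-9247-bb680e5fe0c8",
         "overrides": {"model": "gpt-5"}}

assert "path" in by_path and "conversationId" in by_id
```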
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct AddConversationSubscriptionResponse {
#[schemars(with = "String")]
pub subscription_id: Uuid,
}
/// The [`ConversationId`] must match the `rollout_path`.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ArchiveConversationParams {
pub conversation_id: ConversationId,
pub rollout_path: PathBuf,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ArchiveConversationResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct RemoveConversationSubscriptionResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginApiKeyParams {
pub api_key: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginApiKeyResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginChatGptResponse {
#[schemars(with = "String")]
pub login_id: Uuid,
/// URL the client should open in a browser to initiate the OAuth flow.
pub auth_url: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct GitDiffToRemoteResponse {
pub sha: GitSha,
pub diff: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct CancelLoginChatGptParams {
#[schemars(with = "String")]
pub login_id: Uuid,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct GitDiffToRemoteParams {
pub cwd: PathBuf,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct CancelLoginChatGptResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct LogoutChatGptParams {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct LogoutChatGptResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetAuthStatusParams {
/// If true, include the current auth token (if available) in the response.
#[serde(skip_serializing_if = "Option::is_none")]
pub include_token: Option<bool>,
/// If true, attempt to refresh the token before returning status.
#[serde(skip_serializing_if = "Option::is_none")]
pub refresh_token: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecOneOffCommandParams {
/// Command argv to execute.
pub command: Vec<String>,
/// Timeout of the command in milliseconds.
/// If not specified, a sensible default is used server-side.
pub timeout_ms: Option<u64>,
/// Optional working directory for the process. Defaults to server config cwd.
#[serde(skip_serializing_if = "Option::is_none")]
pub cwd: Option<PathBuf>,
/// Optional explicit sandbox policy overriding the server default.
#[serde(skip_serializing_if = "Option::is_none")]
pub sandbox_policy: Option<SandboxPolicy>,
}
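Only `command` is required in `ExecOneOffCommandParams`; every other field falls back to a server-side default when omitted. A hypothetical request body in the camelCase wire format:

```python
import json

# Illustrative execOneOffCommand params; cwd and timeout are optional.
params = {
    "command": ["ls", "-la"],
    "timeoutMs": 10_000,
    "cwd": "/tmp",
    # "sandboxPolicy" omitted => server default sandbox policy applies.
}
wire = json.loads(json.dumps(params))
assert wire["command"][0] == "ls" and wire["timeoutMs"] == 10000
```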
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecOneOffCommandResponse {
pub exit_code: i32,
pub stdout: String,
pub stderr: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetAccountRateLimitsResponse {
pub rate_limits: RateLimitSnapshot,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetAuthStatusResponse {
#[serde(skip_serializing_if = "Option::is_none")]
pub auth_method: Option<AuthMode>,
#[serde(skip_serializing_if = "Option::is_none")]
pub auth_token: Option<String>,
    /// Indicates that the auth method must be valid to use the server.
    /// This can be false when using a custom provider configured with
    /// `requires_openai_auth == false`.
#[serde(skip_serializing_if = "Option::is_none")]
pub requires_openai_auth: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetUserAgentResponse {
pub user_agent: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct UserInfoResponse {
/// Note: `alleged_user_email` is not currently verified. We read it from
/// the local auth.json, which the user could theoretically modify. In the
/// future, we may add logic to verify the email against the server before
/// returning it.
pub alleged_user_email: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetUserSavedConfigResponse {
pub config: UserSavedConfig,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SetDefaultModelParams {
/// If set to None, this means `model` should be cleared in config.toml.
#[serde(skip_serializing_if = "Option::is_none")]
pub model: Option<String>,
/// If set to None, this means `model_reasoning_effort` should be cleared
/// in config.toml.
#[serde(skip_serializing_if = "Option::is_none")]
pub reasoning_effort: Option<ReasoningEffort>,
}
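Because both fields use `skip_serializing_if = "Option::is_none"`, "clear this setting in config.toml" is expressed by simply omitting the field from the params object. A sketch of the client-side encoding (function name is illustrative):

```python
# Absent field => clear the setting; present field => set a new value.
def set_default_model_params(model=None, reasoning_effort=None):
    params = {}
    if model is not None:
        params["model"] = model
    if reasoning_effort is not None:
        params["reasoningEffort"] = reasoning_effort
    return params

assert set_default_model_params("gpt-5", "high") == {
    "model": "gpt-5", "reasoningEffort": "high"}
assert set_default_model_params() == {}  # clears both settings
```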
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SetDefaultModelResponse {}
/// UserSavedConfig contains a subset of the config. It is meant to expose mcp
/// client-configurable settings that can be specified in the NewConversation
/// and SendUserTurn requests.
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct UserSavedConfig {
/// Approvals
#[serde(skip_serializing_if = "Option::is_none")]
pub approval_policy: Option<AskForApproval>,
#[serde(skip_serializing_if = "Option::is_none")]
pub sandbox_mode: Option<SandboxMode>,
#[serde(skip_serializing_if = "Option::is_none")]
pub sandbox_settings: Option<SandboxSettings>,
#[serde(skip_serializing_if = "Option::is_none")]
pub forced_chatgpt_workspace_id: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub forced_login_method: Option<ForcedLoginMethod>,
/// Model-specific configuration
#[serde(skip_serializing_if = "Option::is_none")]
pub model: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub model_reasoning_effort: Option<ReasoningEffort>,
#[serde(skip_serializing_if = "Option::is_none")]
pub model_reasoning_summary: Option<ReasoningSummary>,
#[serde(skip_serializing_if = "Option::is_none")]
pub model_verbosity: Option<Verbosity>,
/// Tools
#[serde(skip_serializing_if = "Option::is_none")]
pub tools: Option<Tools>,
/// Profiles
#[serde(skip_serializing_if = "Option::is_none")]
pub profile: Option<String>,
#[serde(default)]
pub profiles: HashMap<String, Profile>,
}
/// MCP representation of a [`codex_core::config_profile::ConfigProfile`].
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct Profile {
pub model: Option<String>,
/// The key in the `model_providers` map identifying the
/// [`ModelProviderInfo`] to use.
pub model_provider: Option<String>,
pub approval_policy: Option<AskForApproval>,
pub model_reasoning_effort: Option<ReasoningEffort>,
pub model_reasoning_summary: Option<ReasoningSummary>,
pub model_verbosity: Option<Verbosity>,
pub chatgpt_base_url: Option<String>,
}
/// MCP representation of a [`codex_core::config::ToolsToml`].
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct Tools {
#[serde(skip_serializing_if = "Option::is_none")]
pub web_search: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub view_image: Option<bool>,
}
/// MCP representation of a [`codex_core::config_types::SandboxWorkspaceWrite`].
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SandboxSettings {
#[serde(default)]
pub writable_roots: Vec<PathBuf>,
#[serde(skip_serializing_if = "Option::is_none")]
pub network_access: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub exclude_tmpdir_env_var: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub exclude_slash_tmp: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserMessageParams {
pub conversation_id: ConversationId,
pub items: Vec<InputItem>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserTurnParams {
pub conversation_id: ConversationId,
pub items: Vec<InputItem>,
pub cwd: PathBuf,
pub approval_policy: AskForApproval,
pub sandbox_policy: SandboxPolicy,
pub model: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub effort: Option<ReasoningEffort>,
pub summary: ReasoningSummary,
}
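Unlike `sendUserMessage`, a `sendUserTurn` request carries every per-turn setting explicitly. A hypothetical payload in the camelCase wire format (the `items` entry and `sandboxPolicy` shape are assumptions for illustration, not confirmed by this file):

```python
import json

# Illustrative sendUserTurn params; item and sandbox shapes are assumed.
params = {
    "conversationId": "67e55044-10b1-426f-9247-bb680e5fe0c8",
    "items": [{"type": "text", "text": "Run the tests"}],
    "cwd": "/home/user/project",
    "approvalPolicy": "on-request",
    "sandboxPolicy": {"mode": "workspace-write"},
    "model": "gpt-5",
    # "effort" omitted => Option::is_none, field left off the wire.
    "summary": "auto",
}
wire = json.loads(json.dumps(params))
assert wire["approvalPolicy"] == "on-request" and "effort" not in wire
```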
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserTurnResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct InterruptConversationParams {
pub conversation_id: ConversationId,
}
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct InterruptConversationResponse {
pub abort_reason: TurnAbortReason,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserMessageResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct AddConversationListenerParams {
pub conversation_id: ConversationId,
#[serde(default)]
pub experimental_raw_events: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct RemoveConversationListenerParams {
#[schemars(with = "String")]
pub subscription_id: Uuid,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[serde(tag = "type", content = "data")]
pub enum InputItem {
Text {
text: String,
},
    /// Pre-encoded `data:` URI image.
Image {
image_url: String,
},
/// Local image path provided by the user. This will be converted to an
/// `Image` variant (base64 data URL) during request serialization.
LocalImage {
path: PathBuf,
},
}
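// Illustrative wire shapes for `InputItem` (a sketch; the values are
// examples only). With `tag = "type"` and `content = "data"`, the
// `rename_all = "camelCase"` attribute renames the variant names, while
// fields inside each variant keep their snake_case names:
//
//   {"type": "text", "data": {"text": "hello"}}
//   {"type": "image", "data": {"image_url": "data:image/png;base64,..."}}
//   {"type": "localImage", "data": {"path": "/tmp/screenshot.png"}}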
/// Generates an `enum ServerRequest` where each variant is a request that the
/// server can send to the client along with the corresponding params and
/// response types. It also generates helper types used by the app/server
/// infrastructure (payload enum, request constructor, and export helpers).
macro_rules! server_request_definitions {
(
$(
$(#[$variant_meta:meta])*
$variant:ident
),* $(,)?
) => {
paste! {
/// Request initiated from the server and sent to the client.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ServerRequest {
$(
$(#[$variant_meta])*
$variant {
#[serde(rename = "id")]
request_id: RequestId,
params: [<$variant Params>],
},
)*
}
#[derive(Debug, Clone, PartialEq, JsonSchema)]
pub enum ServerRequestPayload {
$( $variant([<$variant Params>]), )*
}
impl ServerRequestPayload {
pub fn request_with_id(self, request_id: RequestId) -> ServerRequest {
match self {
$(Self::$variant(params) => ServerRequest::$variant { request_id, params },)*
}
}
}
}
pub fn export_server_responses(
out_dir: &::std::path::Path,
) -> ::std::result::Result<(), ::ts_rs::ExportError> {
paste! {
$(<[<$variant Response>] as ::ts_rs::TS>::export_all_to(out_dir)?;)*
}
Ok(())
}
pub fn export_server_response_schemas(
out_dir: &::std::path::Path,
) -> ::anyhow::Result<()> {
paste! {
$(crate::export::write_json_schema::<[<$variant Response>]>(out_dir, stringify!([<$variant Response>]))?;)*
}
Ok(())
}
};
}
impl TryFrom<JSONRPCRequest> for ServerRequest {
type Error = serde_json::Error;
fn try_from(value: JSONRPCRequest) -> Result<Self, Self::Error> {
serde_json::from_value(serde_json::to_value(value)?)
}
}
server_request_definitions! {
/// Request to approve a patch.
ApplyPatchApproval,
/// Request to exec a command.
ExecCommandApproval,
}
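// Illustrative wire shape for a server-initiated request (a sketch; the
// values are examples only). With `tag = "method"` and
// `rename_all = "camelCase"`, an `ExecCommandApproval` request is expected
// to serialize along these lines:
//
//   {
//     "method": "execCommandApproval",
//     "id": 42,
//     "params": {
//       "conversationId": "...",
//       "callId": "call-1",
//       "command": ["git", "status"],
//       "cwd": "/repo",
//       "parsedCmd": []
//     }
//   }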
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ApplyPatchApprovalParams {
pub conversation_id: ConversationId,
/// Use to correlate this with [codex_core::protocol::PatchApplyBeginEvent]
/// and [codex_core::protocol::PatchApplyEndEvent].
pub call_id: String,
pub file_changes: HashMap<PathBuf, FileChange>,
/// Optional explanatory reason (e.g. request for extra write access).
#[serde(skip_serializing_if = "Option::is_none")]
pub reason: Option<String>,
/// When set, the agent is asking the user to allow writes under this root
/// for the remainder of the session (unclear if this is honored today).
#[serde(skip_serializing_if = "Option::is_none")]
pub grant_root: Option<PathBuf>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecCommandApprovalParams {
pub conversation_id: ConversationId,
/// Use to correlate this with [codex_core::protocol::ExecCommandBeginEvent]
/// and [codex_core::protocol::ExecCommandEndEvent].
pub call_id: String,
pub command: Vec<String>,
pub cwd: PathBuf,
#[serde(skip_serializing_if = "Option::is_none")]
pub reason: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub risk: Option<SandboxCommandAssessment>,
pub parsed_cmd: Vec<ParsedCommand>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
pub struct ExecCommandApprovalResponse {
pub decision: ReviewDecision,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
pub struct ApplyPatchApprovalResponse {
pub decision: ReviewDecision,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
pub struct FuzzyFileSearchParams {
pub query: String,
pub roots: Vec<String>,
    /// If provided, any previous request that used the same value is cancelled.
#[serde(skip_serializing_if = "Option::is_none")]
pub cancellation_token: Option<String>,
}
/// Superset of [`codex_file_search::FileMatch`]
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
pub struct FuzzyFileSearchResult {
pub root: String,
pub path: String,
pub file_name: String,
pub score: u32,
#[serde(skip_serializing_if = "Option::is_none")]
pub indices: Option<Vec<u32>>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
pub struct FuzzyFileSearchResponse {
pub files: Vec<FuzzyFileSearchResult>,
}
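// Illustrative fuzzy file search exchange (a sketch; the values are examples
// only). Note that `FuzzyFileSearchResult` has no `rename_all` attribute, so
// its fields stay snake_case on the wire:
//
//   params:   {"query": "proto", "roots": ["/repo"]}
//   response: {"files": [{"root": "/repo", "path": "src/protocol.rs",
//                         "file_name": "protocol.rs", "score": 87}]}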
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginChatGptCompleteNotification {
#[schemars(with = "String")]
pub login_id: Uuid,
pub success: bool,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SessionConfiguredNotification {
/// Name left as session_id instead of conversation_id for backwards compatibility.
pub session_id: ConversationId,
/// Tell the client what model is being queried.
pub model: String,
/// The effort the model is putting into reasoning about the user's request.
#[serde(skip_serializing_if = "Option::is_none")]
pub reasoning_effort: Option<ReasoningEffort>,
/// Identifier of the history log file (inode on Unix, 0 otherwise).
pub history_log_id: u64,
/// Current number of entries in the history log.
#[ts(type = "number")]
pub history_entry_count: usize,
/// Optional initial messages (as events) for resumed sessions.
/// When present, UIs can use these to seed the history.
#[serde(skip_serializing_if = "Option::is_none")]
pub initial_messages: Option<Vec<EventMsg>>,
pub rollout_path: PathBuf,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct AuthStatusChangeNotification {
/// Current authentication method; omitted if signed out.
#[serde(skip_serializing_if = "Option::is_none")]
pub auth_method: Option<AuthMode>,
}
/// Notification sent from the server to the client.
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS, Display)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
#[strum(serialize_all = "camelCase")]
pub enum ServerNotification {
/// NEW NOTIFICATIONS
#[serde(rename = "account/rateLimits/updated")]
#[ts(rename = "account/rateLimits/updated")]
#[strum(serialize = "account/rateLimits/updated")]
AccountRateLimitsUpdated(RateLimitSnapshot),
/// DEPRECATED NOTIFICATIONS below
/// Authentication status changed
AuthStatusChange(AuthStatusChangeNotification),
/// ChatGPT login flow completed
LoginChatGptComplete(LoginChatGptCompleteNotification),
/// The special session configured event for a new or resumed conversation.
SessionConfigured(SessionConfiguredNotification),
}
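// Illustrative wire shapes (examples only). Legacy variants use the
// camelCase variant name as the `method`; newer notifications carry an
// explicit path-style rename:
//
//   {"method": "authStatusChange", "params": {"authMethod": "..."}}
//   {"method": "account/rateLimits/updated", "params": { ... }}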
impl ServerNotification {
pub fn to_params(self) -> Result<serde_json::Value, serde_json::Error> {
match self {
ServerNotification::AccountRateLimitsUpdated(params) => serde_json::to_value(params),
ServerNotification::AuthStatusChange(params) => serde_json::to_value(params),
ServerNotification::LoginChatGptComplete(params) => serde_json::to_value(params),
ServerNotification::SessionConfigured(params) => serde_json::to_value(params),
}
}
}
impl TryFrom<JSONRPCNotification> for ServerNotification {
type Error = serde_json::Error;
fn try_from(value: JSONRPCNotification) -> Result<Self, Self::Error> {
serde_json::from_value(serde_json::to_value(value)?)
}
}
/// Notification sent from the client to the server.
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS, Display)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
#[strum(serialize_all = "camelCase")]
pub enum ClientNotification {
Initialized,
}
#[cfg(test)]
mod tests {
use super::*;
use anyhow::Result;
use pretty_assertions::assert_eq;
use serde_json::json;
#[test]
fn serialize_new_conversation() -> Result<()> {
let request = ClientRequest::NewConversation {
request_id: RequestId::Integer(42),
params: NewConversationParams {
model: Some("gpt-5-codex".to_string()),
model_provider: None,
profile: None,
cwd: None,
approval_policy: Some(AskForApproval::OnRequest),
sandbox: None,
config: None,
base_instructions: None,
include_apply_patch_tool: None,
},
};
assert_eq!(
json!({
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5-codex",
"approvalPolicy": "on-request"
}
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn conversation_id_serializes_as_plain_string() -> Result<()> {
let id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
assert_eq!(
json!("67e55044-10b1-426f-9247-bb680e5fe0c8"),
serde_json::to_value(id)?
);
Ok(())
}
#[test]
fn conversation_id_deserializes_from_plain_string() -> Result<()> {
let id: ConversationId =
serde_json::from_value(json!("67e55044-10b1-426f-9247-bb680e5fe0c8"))?;
assert_eq!(
ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?,
id,
);
Ok(())
}
fix: separate `codex mcp` into `codex mcp-server` and `codex app-server` (#4471) This is a very large PR with some non-backwards-compatible changes. Historically, `codex mcp` (or `codex mcp serve`) started a JSON-RPC-ish server that had two overlapping responsibilities: - Running an MCP server, providing some basic tool calls. - Running the app server used to power experiences such as the VS Code extension. This PR aims to separate these into distinct concepts: - `codex mcp-server` for the MCP server - `codex app-server` for the "application server" Note `codex mcp` still exists because it already has its own subcommands for MCP management (`list`, `add`, etc.) The MCP logic continues to live in `codex-rs/mcp-server` whereas the refactored app server logic is in the new `codex-rs/app-server` folder. Note that most of the existing integration tests in `codex-rs/mcp-server/tests/suite` were actually for the app server, so all the tests have been moved with the exception of `codex-rs/mcp-server/tests/suite/mod.rs`. Because this is already a large diff, I tried not to change more than I had to, so `codex-rs/app-server/tests/common/mcp_process.rs` still uses the name `McpProcess` for now, but I will do some mechanical renamings to things like `AppServer` in subsequent PRs. While `mcp-server` and `app-server` share some overlapping functionality (like reading streams of JSONL and dispatching based on message types) and some differences (completely different message types), I ended up doing a bit of copypasta between the two crates, as both have somewhat similar `message_processor.rs` and `outgoing_message.rs` files for now, though I expect them to diverge more in the near future. One material change is that of the initialize handshake for `codex app-server`, as we no longer use the MCP types for that handshake. 
Instead, we update `codex-rs/protocol/src/mcp_protocol.rs` to add an `Initialize` variant to `ClientRequest`, which takes the `ClientInfo` object we need to update the `USER_AGENT_SUFFIX` in `codex-rs/app-server/src/message_processor.rs`. One other material change is in `codex-rs/app-server/src/codex_message_processor.rs` where I eliminated a use of the `send_event_as_notification()` method I am generally trying to deprecate (because it blindly maps an `EventMsg` into a `JSONNotification`) in favor of `send_server_notification()`, which takes a `ServerNotification`, as that is intended to be a custom enum of all notification types supported by the app server. So to make this update, I had to introduce a new variant of `ServerNotification`, `SessionConfigured`, which is a non-backwards compatible change with the old `codex mcp`, and clients will have to be updated after the next release that contains this PR. Note that `codex-rs/app-server/tests/suite/list_resume.rs` also had to be update to reflect this change. I introduced `codex-rs/utils/json-to-toml/src/lib.rs` as a small utility crate to avoid some of the copying between `mcp-server` and `app-server`.
2025-09-30 00:06:18 -07:00
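The new `initialize` handshake described in the commit message above is easiest to see on the wire. Purely as a sketch (the `initialize` method name follows the camelCase serialization convention used elsewhere in this protocol, and the `clientInfo` field contents are invented for illustration, not taken from this file), the first request from a client might look like:

```json
{
  "jsonrpc": "2.0",
  "method": "initialize",
  "id": 0,
  "params": {
    "clientInfo": {
      "name": "codex-vscode",
      "version": "0.1.0"
    }
  }
}
```

The server would answer with an ordinary JSON-RPC response carrying the same `id`, after which the client sends the parameter-less `initialized` notification exercised by `serialize_client_notification` below.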
    #[test]
    fn serialize_client_notification() -> Result<()> {
        let notification = ClientNotification::Initialized;
        // Note there is no "params" field for this notification.
        assert_eq!(
            json!({
                "method": "initialized",
            }),
            serde_json::to_value(&notification)?,
        );
        Ok(())
    }

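`ClientNotification` covers the client-to-server direction; the #4471 commit message above adds `ServerNotification` (e.g. the new `SessionConfigured` variant) for notifications flowing the other way. As a hypothetical sketch only (the `params` payload here is invented for illustration and is not defined in this file), such a notification would carry a `method` and `params` but, like all JSON-RPC notifications, no `id`:

```json
{
  "jsonrpc": "2.0",
  "method": "sessionConfigured",
  "params": {
    "sessionId": "67e55044-10b1-426f-9247-bb680e5fe0c8"
  }
}
```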
    #[test]
    fn serialize_server_request() -> Result<()> {
        let conversation_id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
        let params = ExecCommandApprovalParams {
            conversation_id,
            call_id: "call-42".to_string(),
            command: vec!["echo".to_string(), "hello".to_string()],
            cwd: PathBuf::from("/tmp"),
            reason: Some("because tests".to_string()),
            risk: None,
            parsed_cmd: vec![ParsedCommand::Unknown {
                cmd: "echo hello".to_string(),
            }],
        };
        let request = ServerRequest::ExecCommandApproval {
            request_id: RequestId::Integer(7),
            params: params.clone(),
        };
        assert_eq!(
            json!({
                "method": "execCommandApproval",
                "id": 7,
                "params": {
                    "conversationId": "67e55044-10b1-426f-9247-bb680e5fe0c8",
                    "callId": "call-42",
                    "command": ["echo", "hello"],
                    "cwd": "/tmp",
                    "reason": "because tests",
                    "parsedCmd": [
                        {
                            "type": "unknown",
                            "cmd": "echo hello"
                        }
                    ]
                }
            }),
            serde_json::to_value(&request)?,
        );

        let payload = ServerRequestPayload::ExecCommandApproval(params);
        assert_eq!(payload.request_with_id(RequestId::Integer(7)), request);
        Ok(())
    }

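The test above only covers the server-to-client request; the client completes the exchange with a JSON-RPC response whose `id` echoes the request's. As a hedged sketch (the exact shape of the result object is not shown in this file; the `decision` field is assumed here for illustration), an approval reply could look like:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "decision": "approved"
  }
}
```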
    #[test]
    fn serialize_get_account_rate_limits() -> Result<()> {
        let request = ClientRequest::GetAccountRateLimits {
            request_id: RequestId::Integer(1),
            params: None,
        };
        assert_eq!(
            json!({
                "method": "account/rateLimits/read",
                "id": 1,
            }),
            serde_json::to_value(&request)?,
        );
        Ok(())
    }

    #[test]
    fn serialize_account_login_api_key() -> Result<()> {
        let request = ClientRequest::LoginAccount {
            request_id: RequestId::Integer(2),
            params: LoginAccountParams::ApiKey {
                api_key: "secret".to_string(),
            },
        };
        assert_eq!(
            json!({
                "method": "account/login",
                "id": 2,
                "params": {
                    "type": "apiKey",
                    "apiKey": "secret"
                }
            }),
            serde_json::to_value(&request)?,
        );
        Ok(())
    }

    #[test]
    fn serialize_account_login_chatgpt() -> Result<()> {
        let request = ClientRequest::LoginAccount {
            request_id: RequestId::Integer(3),
            params: LoginAccountParams::ChatGpt,
        };
        assert_eq!(
            json!({
                "method": "account/login",
                "id": 3,
                "params": {
                    "type": "chatgpt"
                }
            }),
            serde_json::to_value(&request)?,
        );
        Ok(())
    }

    #[test]
    fn serialize_account_logout() -> Result<()> {
        let request = ClientRequest::LogoutAccount {
            request_id: RequestId::Integer(4),
            params: None,
        };
        assert_eq!(
            json!({
                "method": "account/logout",
                "id": 4,
            }),
            serde_json::to_value(&request)?,
        );
        Ok(())
    }

    #[test]
    fn serialize_get_account() -> Result<()> {
        let request = ClientRequest::GetAccount {
            request_id: RequestId::Integer(5),
            params: None,
        };
        assert_eq!(
            json!({
                "method": "account/read",
                "id": 5,
            }),
            serde_json::to_value(&request)?,
        );
        Ok(())
    }

    #[test]
    fn account_serializes_fields_in_camel_case() -> Result<()> {
        let api_key = Account::ApiKey {
            api_key: "secret".to_string(),
        };
        assert_eq!(
            json!({
                "type": "apiKey",
                "apiKey": "secret",
            }),
            serde_json::to_value(&api_key)?,
        );

        let chatgpt = Account::ChatGpt {
            email: Some("user@example.com".to_string()),
            plan_type: PlanType::Plus,
        };
        assert_eq!(
            json!({
                "type": "chatgpt",
                "email": "user@example.com",
                "planType": "plus",
            }),
            serde_json::to_value(&chatgpt)?,
        );
        Ok(())
    }

    #[test]
    fn serialize_list_models() -> Result<()> {
        let request = ClientRequest::ListModels {
            request_id: RequestId::Integer(6),
            params: ListModelsParams::default(),
        };
        assert_eq!(
            json!({
                "method": "model/list",
                "id": 6,
                "params": {}
            }),
            serde_json::to_value(&request)?,
        );
        Ok(())
    }
}