feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp` server
supports. Note that these are not MCP tool calls, so instead of having
to send something like this:
```json
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "id": 42,
  "params": {
    "name": "newConversation",
    "arguments": {
      "model": "gpt-5",
      "approvalPolicy": "on-request"
    }
  }
}
```
we can send something like this:
```json
{
  "jsonrpc": "2.0",
  "method": "newConversation",
  "id": 42,
  "params": {
    "model": "gpt-5",
    "approvalPolicy": "on-request"
  }
}
```
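The server then answers with an ordinary JSON-RPC response; for `newConversation` the result carries the new conversation id and the model that was actually selected. For example (values here are illustrative):
```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "result": {
    "conversationId": "67e55044-10b1-426f-9247-bb680e5fe0c8",
    "model": "gpt-5"
  }
}
```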
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The request types are defined as variants of the `CodexRequest` enum, so we
introduce a new `CodexMessageProcessor` that is responsible for handling
requests from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the request
conforms to `CodexRequest` and, if so, dispatches it to
`CodexMessageProcessor`; a rough sketch of that check follows.
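For orientation, here is a minimal sketch of how that check can work, assuming `serde_json` and the wire types shown later in this change (the same enum appears as `ClientRequest` in the source listing below). The helper name is illustrative, not the exact implementation:
```rust
// Sketch only: decide whether an incoming JSON-RPC request is one of the new
// typed Codex requests. `ClientRequest` is internally tagged on "method", so a
// plain JSON-RPC request object such as
//   {"jsonrpc":"2.0","method":"newConversation","id":42,"params":{...}}
// deserializes directly; unknown methods return None.
fn try_parse_codex_request(request: &serde_json::Value) -> Option<ClientRequest> {
    serde_json::from_value::<ClientRequest>(request.clone()).ok()
}
```
The top-level `MessageProcessor` can call a helper like this first and only fall back to the regular MCP handling (`initialize`, `tools/list`, `tools/call`, ...) when it returns `None`.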
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
```rust
use std::collections::HashMap;
use std::fmt::Display;
use std::path::PathBuf;

use crate::config_types::ReasoningEffort;
use crate::config_types::ReasoningSummary;
use crate::config_types::SandboxMode;
use crate::protocol::AskForApproval;
use crate::protocol::FileChange;
use crate::protocol::ReviewDecision;
use crate::protocol::SandboxPolicy;
use crate::protocol::TurnAbortReason;
use mcp_types::RequestId;
use serde::Deserialize;
use serde::Serialize;
use strum_macros::Display;
use ts_rs::TS;
use uuid::Uuid;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, TS)]
#[ts(type = "string")]
pub struct ConversationId(pub Uuid);

impl Display for ConversationId {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.0)
    }
}

#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, TS)]
#[ts(type = "string")]
pub struct GitSha(pub String);

impl GitSha {
    pub fn new(sha: &str) -> Self {
        Self(sha.to_string())
    }
}

#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, TS)]
#[serde(rename_all = "lowercase")]
pub enum AuthMode {
    ApiKey,
    ChatGPT,
}
```
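As a quick illustration (not part of the change itself), the id newtype serializes as a bare string, matching the `#[ts(type = "string")]` hint used for the TypeScript bindings; this assumes the `uuid` crate's serde feature:
```rust
#[test]
fn conversation_id_serializes_as_a_string() {
    let conversation_id = ConversationId(Uuid::new_v4());
    // A newtype over Uuid serializes as a plain JSON string, and the Display
    // impl prints the same inner UUID.
    assert_eq!(
        serde_json::to_string(&conversation_id).unwrap(),
        format!("\"{conversation_id}\"")
    );
}
```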
```rust
/// Request from the client to the server.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ClientRequest {
    NewConversation {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: NewConversationParams,
    },
    SendUserMessage {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: SendUserMessageParams,
    },
    SendUserTurn {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: SendUserTurnParams,
    },
    InterruptConversation {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: InterruptConversationParams,
    },
    AddConversationListener {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: AddConversationListenerParams,
    },
    RemoveConversationListener {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: RemoveConversationListenerParams,
    },
    LoginChatGpt {
        #[serde(rename = "id")]
        request_id: RequestId,
    },
    CancelLoginChatGpt {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: CancelLoginChatGptParams,
    },
    LogoutChatGpt {
        #[serde(rename = "id")]
        request_id: RequestId,
    },
    GetAuthStatus {
        #[serde(rename = "id")]
        request_id: RequestId,
    },
    GitDiffToRemote {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: GitDiffToRemoteParams,
    },
}
```
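Because the enum is internally tagged on `method`, variant names are camelCased, and `request_id` is renamed to `id`, serializing a variant reproduces the wire format shown in the summary above. A small sketch, assuming the integer variant of `mcp_types::RequestId` and an illustrative conversation id:
```rust
let request = ClientRequest::AddConversationListener {
    request_id: RequestId::Integer(43),
    params: AddConversationListenerParams {
        conversation_id: ConversationId(Uuid::new_v4()),
    },
};
// Prints roughly:
// {"method":"addConversationListener","id":43,"params":{"conversationId":"<uuid>"}}
println!("{}", serde_json::to_string(&request).unwrap());
```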
```rust
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, TS)]
#[serde(rename_all = "camelCase")]
pub struct NewConversationParams {
    /// Optional override for the model name (e.g. "o3", "o4-mini").
    #[serde(skip_serializing_if = "Option::is_none")]
    pub model: Option<String>,

    /// Configuration profile from config.toml to specify default options.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub profile: Option<String>,

    /// Working directory for the session. If relative, it is resolved against
    /// the server process's current working directory.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub cwd: Option<String>,

    /// Approval policy for shell commands generated by the model:
    /// `untrusted`, `on-failure`, `on-request`, `never`.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub approval_policy: Option<AskForApproval>,

    /// Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub sandbox: Option<SandboxMode>,

    /// Individual config settings that will override what is in
    /// CODEX_HOME/config.toml.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub config: Option<HashMap<String, serde_json::Value>>,

    /// The set of instructions to use instead of the default ones.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub base_instructions: Option<String>,

    /// Whether to include the plan tool in the conversation.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub include_plan_tool: Option<bool>,

    /// Whether to include the apply patch tool in the conversation.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub include_apply_patch_tool: Option<bool>,
}
```
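On the wire, `rename_all = "camelCase"` plus `skip_serializing_if` means only the overrides a client actually sets appear in `params`; for example (illustrative values, policy and sandbox names taken from the doc comments above):
```json
{
  "model": "gpt-5",
  "cwd": "/home/user/project",
  "approvalPolicy": "on-request",
  "sandbox": "workspace-write",
  "includePlanTool": true
}
```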
```rust
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct NewConversationResponse {
    pub conversation_id: ConversationId,
    pub model: String,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct AddConversationSubscriptionResponse {
    pub subscription_id: Uuid,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct RemoveConversationSubscriptionResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginChatGptResponse {
    pub login_id: Uuid,
    /// URL the client should open in a browser to initiate the OAuth flow.
    pub auth_url: String,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GitDiffToRemoteResponse {
    pub sha: GitSha,
    pub diff: String,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct CancelLoginChatGptParams {
    pub login_id: Uuid,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GitDiffToRemoteParams {
    pub cwd: PathBuf,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct CancelLoginChatGptResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LogoutChatGptParams {
    pub login_id: Uuid,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LogoutChatGptResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetAuthStatusParams {
    pub login_id: Uuid,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetAuthStatusResponse {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub auth_method: Option<AuthMode>,
    pub preferred_auth_method: AuthMode,
}
```
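With `AuthMode` renamed to lowercase and the optional field skipped when absent, a `getAuthStatus` result looks roughly like this (illustrative):
```json
{
  "authMethod": "chatgpt",
  "preferredAuthMethod": "chatgpt"
}
```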
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
#[serde(rename_all = "camelCase")]
|
|
|
|
|
|
pub struct SendUserMessageParams {
|
|
|
|
|
|
pub conversation_id: ConversationId,
|
|
|
|
|
|
pub items: Vec<InputItem>,
|
|
|
|
|
|
}
|
|
|
|
|
|
|
2025-08-18 13:08:53 -07:00
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
|
2025-08-15 10:05:58 -07:00
|
|
|
|
#[serde(rename_all = "camelCase")]
|
|
|
|
|
|
pub struct SendUserTurnParams {
|
|
|
|
|
|
pub conversation_id: ConversationId,
|
|
|
|
|
|
pub items: Vec<InputItem>,
|
|
|
|
|
|
pub cwd: PathBuf,
|
|
|
|
|
|
pub approval_policy: AskForApproval,
|
|
|
|
|
|
pub sandbox_policy: SandboxPolicy,
|
|
|
|
|
|
pub model: String,
|
|
|
|
|
|
pub effort: ReasoningEffort,
|
|
|
|
|
|
pub summary: ReasoningSummary,
|
|
|
|
|
|
}
|
|
|
|
|
|
|
2025-08-18 13:08:53 -07:00
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
|
2025-08-15 10:05:58 -07:00
|
|
|
|
#[serde(rename_all = "camelCase")]
|
|
|
|
|
|
pub struct SendUserTurnResponse {}
|
|
|
|
|
|
|
2025-08-18 13:08:53 -07:00
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
|
2025-08-13 23:12:03 -07:00
|
|
|
|
#[serde(rename_all = "camelCase")]
|
|
|
|
|
|
pub struct InterruptConversationParams {
|
|
|
|
|
|
pub conversation_id: ConversationId,
|
|
|
|
|
|
}
|
|
|
|
|
|
|
2025-08-18 13:08:53 -07:00
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, TS)]
|
2025-08-13 23:12:03 -07:00
|
|
|
|
#[serde(rename_all = "camelCase")]
|
2025-08-17 21:40:31 -07:00
|
|
|
|
pub struct InterruptConversationResponse {
|
|
|
|
|
|
pub abort_reason: TurnAbortReason,
|
|
|
|
|
|
}
|
2025-08-13 23:12:03 -07:00
|
|
|
|
|
2025-08-18 13:08:53 -07:00
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
#[serde(rename_all = "camelCase")]
|
|
|
|
|
|
pub struct SendUserMessageResponse {}
|
|
|
|
|
|
|
2025-08-18 13:08:53 -07:00
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
#[serde(rename_all = "camelCase")]
|
|
|
|
|
|
pub struct AddConversationListenerParams {
|
|
|
|
|
|
pub conversation_id: ConversationId,
|
|
|
|
|
|
}
|
|
|
|
|
|
|
2025-08-18 13:08:53 -07:00
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
#[serde(rename_all = "camelCase")]
|
|
|
|
|
|
pub struct RemoveConversationListenerParams {
|
|
|
|
|
|
pub subscription_id: Uuid,
|
|
|
|
|
|
}
|
|
|
|
|
|
|
2025-08-18 13:08:53 -07:00
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
#[serde(rename_all = "camelCase")]
|
2025-08-14 10:58:04 -07:00
|
|
|
|
#[serde(tag = "type", content = "data")]
|
pub enum InputItem {
    Text {
        text: String,
    },
    /// Pre-encoded data: URI image.
    Image {
        image_url: String,
    },

    /// Local image path provided by the user. This will be converted to an
    /// `Image` variant (base64 data URL) during request serialization.
    LocalImage {
        path: PathBuf,
    },
}
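
// Illustrative wire shapes (editor's sketch, not part of the original change):
// the adjacently tagged representation above (`tag = "type"`, `content = "data"`)
// plus camelCase variant names is expected to yield, for example:
//   {"type": "text", "data": {"text": "hello"}}
//   {"type": "localImage", "data": {"path": "/tmp/screenshot.png"}}
// See also the sketch test at the bottom of the file.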

// TODO(mbolin): Need test to ensure these constants match the enum variants.
pub const APPLY_PATCH_APPROVAL_METHOD: &str = "applyPatchApproval";
pub const EXEC_COMMAND_APPROVAL_METHOD: &str = "execCommandApproval";

/// Request initiated from the server and sent to the client.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ServerRequest {
    /// Request to approve a patch.
    ApplyPatchApproval {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: ApplyPatchApprovalParams,
    },
    /// Request to exec a command.
    ExecCommandApproval {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: ExecCommandApprovalParams,
    },
}
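
// Illustrative wire shape (editor's sketch, not part of the original change):
// `ServerRequest` is internally tagged on `method` and its variants are renamed
// to camelCase, so an approval request is expected to look like
//   {"method": "execCommandApproval", "id": 7, "params": { ... }}
// which is also why the method-name constants above use camelCase strings.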

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
pub struct ApplyPatchApprovalParams {
    pub conversation_id: ConversationId,
    /// Used to correlate this with [codex_core::protocol::PatchApplyBeginEvent]
    /// and [codex_core::protocol::PatchApplyEndEvent].
    pub call_id: String,
    pub file_changes: HashMap<PathBuf, FileChange>,
    /// Optional explanatory reason (e.g. request for extra write access).
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reason: Option<String>,
    /// When set, the agent is asking the user to allow writes under this root
    /// for the remainder of the session (unclear if this is honored today).
    #[serde(skip_serializing_if = "Option::is_none")]
    pub grant_root: Option<PathBuf>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
pub struct ExecCommandApprovalParams {
    pub conversation_id: ConversationId,
    /// Used to correlate this with [codex_core::protocol::ExecCommandBeginEvent]
    /// and [codex_core::protocol::ExecCommandEndEvent].
    pub call_id: String,
    pub command: Vec<String>,
    pub cwd: PathBuf,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reason: Option<String>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
pub struct ExecCommandApprovalResponse {
    pub decision: ReviewDecision,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
pub struct ApplyPatchApprovalResponse {
    pub decision: ReviewDecision,
}
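
// Illustrative flow (editor's sketch, not part of the original change): the
// client answers an `execCommandApproval` or `applyPatchApproval` request with
// the matching response struct, e.g. `{"decision": ...}`; the exact spelling of
// the `ReviewDecision` value is determined by its serde attributes in
// `codex_core`.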

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginChatGptCompleteNotification {
    pub login_id: Uuid,
    pub success: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub error: Option<String>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct AuthStatusChangeNotification {
    /// Current authentication method; omitted if signed out.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub auth_method: Option<AuthMode>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS, Display)]
#[serde(tag = "type", content = "data", rename_all = "snake_case")]
#[strum(serialize_all = "snake_case")]
pub enum ServerNotification {
    /// Authentication status changed
    AuthStatusChange(AuthStatusChangeNotification),

    /// ChatGPT login flow completed
    LoginChatGptComplete(LoginChatGptCompleteNotification),
}
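
// Illustrative wire shape (editor's sketch, not part of the original change):
// with `tag = "type"`, `content = "data"`, and snake_case variant names, a
// signed-out status change is expected to serialize as
//   {"type": "auth_status_change", "data": {}}
// (`authMethod` is omitted because of `skip_serializing_if`).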

#[cfg(test)]
mod tests {
    use super::*;
    use pretty_assertions::assert_eq;
    use serde_json::json;

    #[test]
    fn serialize_new_conversation() {
        let request = ClientRequest::NewConversation {
            request_id: RequestId::Integer(42),
            params: NewConversationParams {
                model: Some("gpt-5".to_string()),
                profile: None,
                cwd: None,
                approval_policy: Some(AskForApproval::OnRequest),
                sandbox: None,
                config: None,
                base_instructions: None,
                include_plan_tool: None,
                include_apply_patch_tool: None,
            },
        };
        assert_eq!(
            json!({
                "method": "newConversation",
                "id": 42,
                "params": {
                    "model": "gpt-5",
                    "approvalPolicy": "on-request"
                }
            }),
            serde_json::to_value(&request).unwrap(),
        );
    }
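
    // Illustrative sketch (an editor's addition, not part of the original
    // change): with the `rename_all = "camelCase"`, `tag = "type"`, and
    // `content = "data"` attributes on `InputItem`, a `Text` item is expected
    // to serialize as an adjacently tagged object.
    #[test]
    fn serialize_input_item_text_sketch() {
        let item = InputItem::Text {
            text: "hello".to_string(),
        };
        assert_eq!(
            json!({
                "type": "text",
                "data": {
                    "text": "hello"
                }
            }),
            serde_json::to_value(&item).unwrap(),
        );
    }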
}