feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls, so
instead of having to send something like this:
```json
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "id": 42,
  "params": {
    "name": "newConversation",
    "arguments": {
      "model": "gpt-5",
      "approvalPolicy": "on-request"
    }
  }
}
```
we can send something like this:
```json
{
  "jsonrpc": "2.0",
  "method": "newConversation",
  "id": 42,
  "params": {
    "model": "gpt-5",
    "approvalPolicy": "on-request"
  }
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
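As a rough sketch of how serde can produce this shape (not the actual
implementation; the real `CodexRequest` enum and its params live in
`wire_format.rs`), an internally tagged enum whose tag is `method`, combined
with `camelCase` renaming, yields the method/id/params layout shown above. The
field types here are trimmed down for illustration, and the `jsonrpc` envelope
field is omitted:
```rust
use serde::Deserialize;
use serde::Serialize;

// Illustrative subset only: the real `CodexRequest` enum carries more
// variants and richer param types than this sketch.
#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "method", rename_all = "camelCase")]
enum CodexRequest {
    NewConversation { id: u64, params: NewConversationParams },
}

#[derive(Serialize, Deserialize, Debug)]
#[serde(rename_all = "camelCase")]
struct NewConversationParams {
    model: String,
    approval_policy: String,
}

fn main() {
    let request = CodexRequest::NewConversation {
        id: 42,
        params: NewConversationParams {
            model: "gpt-5".to_string(),
            approval_policy: "on-request".to_string(),
        },
    };
    // Prints:
    // {"method":"newConversation","id":42,"params":{"model":"gpt-5","approvalPolicy":"on-request"}}
    println!("{}", serde_json::to_string(&request).unwrap());
}
```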
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
fix: remove mcp-types from app server protocol (#4537)
We continue the separation between `codex app-server` and `codex
mcp-server`.
In particular, we introduce a new crate, `codex-app-server-protocol`,
and migrate `codex-rs/protocol/src/mcp_protocol.rs` into it, renaming it
`codex-rs/app-server-protocol/src/protocol.rs`.
Because `ConversationId` was defined in `mcp_protocol.rs`, we move it
into its own file, `codex-rs/protocol/src/conversation_id.rs`, and
because it is referenced in a ton of places, we have to touch a lot of
files as part of this PR.
We also decide to move away from strict JSON-RPC 2.0 semantics, so we
introduce `codex-rs/app-server-protocol/src/jsonrpc_lite.rs`, which
is basically the same `JSONRPCMessage` type defined in `mcp-types`,
except with all of the `"jsonrpc": "2.0"` fields removed.
Getting rid of `"jsonrpc": "2.0"` makes our serialization logic
considerably simpler, as we can lean more heavily on serde to
serialize directly into the wire format that we use now.
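As a rough sketch of what this can look like (hypothetical names and field
types; the real definitions live in `jsonrpc_lite.rs`), an untagged serde enum
covers requests, notifications, and responses without any `jsonrpc` field:
```rust
use serde::Deserialize;
use serde::Serialize;
use serde_json::Value;

// Hypothetical stand-in for the JSON-RPC-lite message type: serde picks the
// variant from which fields are present, and no `"jsonrpc": "2.0"` is written.
#[derive(Serialize, Deserialize, Debug)]
#[serde(untagged)]
enum JsonRpcLiteMessage {
    Request {
        id: i64,
        method: String,
        #[serde(skip_serializing_if = "Option::is_none")]
        params: Option<Value>,
    },
    Notification {
        method: String,
        #[serde(skip_serializing_if = "Option::is_none")]
        params: Option<Value>,
    },
    Response {
        id: i64,
        result: Value,
    },
}

fn main() {
    let msg = JsonRpcLiteMessage::Request {
        id: 1,
        method: "newConversation".to_string(),
        params: Some(serde_json::json!({ "model": "gpt-5" })),
    };
    // Prints: {"id":1,"method":"newConversation","params":{"model":"gpt-5"}}
    println!("{}", serde_json::to_string(&msg).unwrap());
}
```
The listings that follow are the corresponding protocol definitions, starting
with their imports and shared types.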
```rust
use std::collections::HashMap;
use std::path::PathBuf;

use crate::JSONRPCNotification;
use crate::JSONRPCRequest;
use crate::RequestId;
use codex_protocol::ConversationId;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::TurnAbortReason;
use paste::paste;
use serde::Deserialize;
use serde::Serialize;
use strum_macros::Display;
use ts_rs::TS;
use uuid::Uuid;

#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, TS)]
#[ts(type = "string")]
pub struct GitSha(pub String);

impl GitSha {
    pub fn new(sha: &str) -> Self {
        Self(sha.to_string())
    }
}
```
OpenTelemetry events (#2103)
## otel
Codex can emit [OpenTelemetry](https://opentelemetry.io/) **log events** that
describe each run: outbound API requests, streamed responses, user input,
tool-approval decisions, and the result of every tool invocation. Export is
**disabled by default** so local runs remain self-contained. Opt in by adding
an `[otel]` table and choosing an exporter.
```toml
[otel]
environment = "staging" # defaults to "dev"
exporter = "none" # defaults to "none"; set to otlp-http or otlp-grpc to send events
log_user_prompt = false # defaults to false; redact prompt text unless explicitly enabled
```
Codex tags every exported event with `service.name = "codex-cli"`, the CLI
version, and an `env` attribute so downstream collectors can distinguish
dev/staging/prod traffic. Only telemetry produced inside the `codex_otel`
crate (the events listed below) is forwarded to the exporter.
### Event catalog
Every event shares a common set of metadata fields: `event.timestamp`,
`conversation.id`, `app.version`, `auth_mode` (when available),
`user.account_id` (when available), `terminal.type`, `model`, and `slug`.

With OTEL enabled, Codex emits the following event types (in addition to the
metadata above):
- `codex.api_request`
  - `cf_ray` (optional)
  - `attempt`
  - `duration_ms`
  - `http.response.status_code` (optional)
  - `error.message` (failures)
- `codex.sse_event`
  - `event.kind`
  - `duration_ms`
  - `error.message` (failures)
  - `input_token_count` (completion only)
  - `output_token_count` (completion only)
  - `cached_token_count` (completion only, optional)
  - `reasoning_token_count` (completion only, optional)
  - `tool_token_count` (completion only)
- `codex.user_prompt`
  - `prompt_length`
  - `prompt` (redacted unless `log_user_prompt = true`)
- `codex.tool_decision`
  - `tool_name`
  - `call_id`
  - `decision` (`approved`, `approved_for_session`, `denied`, or `abort`)
  - `source` (`config` or `user`)
- `codex.tool_result`
  - `tool_name`
  - `call_id`
  - `arguments`
  - `duration_ms` (execution time for the tool)
  - `success` (`"true"` or `"false"`)
  - `output`
### Choosing an exporter
Set `otel.exporter` to control where events go:
- `none` – leaves instrumentation active but skips exporting. This is the
  default.
- `otlp-http` – posts OTLP log records to an OTLP/HTTP collector. Specify the
  endpoint, protocol, and headers your collector expects:
```toml
[otel]
exporter = { otlp-http = {
endpoint = "https://otel.example.com/v1/logs",
protocol = "binary",
headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
}}
```
- `otlp-grpc` – streams OTLP log records over gRPC. Provide the endpoint and
  any metadata headers:
```toml
[otel]
exporter = { otlp-grpc = {
endpoint = "https://otel.example.com:4317",
headers = { "x-otlp-meta" = "abc123" }
}}
```
If the exporter is `none`, nothing is written anywhere; otherwise you must run
or point to your own collector. All exporters run on a background batch worker
that is flushed on shutdown.

If you build Codex from source, the OTEL crate is still behind an `otel`
feature flag; the official prebuilt binaries ship with the feature enabled.
When the feature is disabled, the telemetry hooks become no-ops so the CLI
continues to function without the extra dependencies.
---------
Co-authored-by: Anton Panasenko <apanasenko@openai.com>
```rust
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, Display, TS)]
#[serde(rename_all = "lowercase")]
pub enum AuthMode {
    ApiKey,
    ChatGPT,
}

/// Generates an `enum ClientRequest` where each variant is a request that the
/// client can send to the server. Each variant has associated `params` and
/// `response` types. Also generates a `export_client_responses()` function to
/// export all response types to TypeScript.
macro_rules! client_request_definitions {
    (
        $(
            $(#[$variant_meta:meta])*
            $variant:ident {
                params: $(#[$params_meta:meta])* $params:ty,
                response: $response:ty,
            }
        ),* $(,)?
    ) => {
        /// Request from the client to the server.
        #[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
        #[serde(tag = "method", rename_all = "camelCase")]
        pub enum ClientRequest {
            $(
                $(#[$variant_meta])*
                $variant {
                    #[serde(rename = "id")]
                    request_id: RequestId,
                    $(#[$params_meta])*
                    params: $params,
                },
            )*
        }

        pub fn export_client_responses(
            out_dir: &::std::path::Path,
        ) -> ::std::result::Result<(), ::ts_rs::ExportError> {
            $(
                <$response as ::ts_rs::TS>::export_all_to(out_dir)?;
            )*
            Ok(())
        }
    };
}
```
fix: separate `codex mcp` into `codex mcp-server` and `codex app-server` (#4471)
This is a very large PR with some non-backwards-compatible changes.
Historically, `codex mcp` (or `codex mcp serve`) started a JSON-RPC-ish
server that had two overlapping responsibilities:
- Running an MCP server, providing some basic tool calls.
- Running the app server used to power experiences such as the VS Code
extension.
This PR aims to separate these into distinct concepts:
- `codex mcp-server` for the MCP server
- `codex app-server` for the "application server"
Note `codex mcp` still exists because it already has its own subcommands
for MCP management (`list`, `add`, etc.)
The MCP logic continues to live in `codex-rs/mcp-server` whereas the
refactored app server logic is in the new `codex-rs/app-server` folder.
Note that most of the existing integration tests in
`codex-rs/mcp-server/tests/suite` were actually for the app server, so
all the tests have been moved with the exception of
`codex-rs/mcp-server/tests/suite/mod.rs`.
Because this is already a large diff, I tried not to change more than I
had to, so `codex-rs/app-server/tests/common/mcp_process.rs` still uses
the name `McpProcess` for now, but I will do some mechanical renamings
to things like `AppServer` in subsequent PRs.
While `mcp-server` and `app-server` have some overlapping functionality
(like reading streams of JSONL and dispatching based on message types)
as well as some differences (completely different message types), I ended up
doing a bit of copypasta between the two crates, as both have somewhat
similar `message_processor.rs` and `outgoing_message.rs` files for now,
though I expect them to diverge more in the near future.
One material change is that of the initialize handshake for `codex
app-server`, as we no longer use the MCP types for that handshake.
Instead, we update `codex-rs/protocol/src/mcp_protocol.rs` to add an
`Initialize` variant to `ClientRequest`, which takes the `ClientInfo`
object we need in order to update the `USER_AGENT_SUFFIX` in
`codex-rs/app-server/src/message_processor.rs`.
One other material change is in
`codex-rs/app-server/src/codex_message_processor.rs` where I eliminated
a use of the `send_event_as_notification()` method I am generally trying
to deprecate (because it blindly maps an `EventMsg` into a
`JSONNotification`) in favor of `send_server_notification()`, which
takes a `ServerNotification`, as that is intended to be a custom enum of
all notification types supported by the app server. So to make this
update, I had to introduce a new variant of `ServerNotification`,
`SessionConfigured`, which is a non-backwards compatible change with the
old `codex mcp`, and clients will have to be updated after the next
release that contains this PR. Note that
`codex-rs/app-server/tests/suite/list_resume.rs` also had to be updated
to reflect this change.
I introduced `codex-rs/utils/json-to-toml/src/lib.rs` as a small utility
crate to avoid some of the copying between `mcp-server` and
`app-server`.
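For reference, here is a sketch of what the new handshake looks like on the
wire, using the `InitializeParams`/`InitializeResponse` shapes defined below.
The client name, version, and user-agent string are made-up values, and the
response envelope is shown in the simplified id/result form:
```rust
use serde_json::json;

fn main() {
    // Client -> server: the first request sent on the stream.
    let initialize_request = json!({
        "method": "initialize",
        "id": 0,
        "params": {
            "clientInfo": {
                "name": "example-ide-extension", // hypothetical client name
                "version": "0.1.0"
            }
        }
    });

    // Server -> client: the reply carries the user agent derived from `clientInfo`.
    let initialize_response = json!({
        "id": 0,
        "result": {
            "userAgent": "codex; example-ide-extension/0.1.0" // illustrative value only
        }
    });

    println!("{initialize_request}");
    println!("{initialize_response}");
}
```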
2025-09-30 00:06:18 -07:00
|
|
|
|
Initialize {
|
|
|
|
|
|
params: InitializeParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: InitializeResponse,
|
fix: separate `codex mcp` into `codex mcp-server` and `codex app-server` (#4471)
This is a very large PR with some non-backwards-compatible changes.
Historically, `codex mcp` (or `codex mcp serve`) started a JSON-RPC-ish
server that had two overlapping responsibilities:
- Running an MCP server, providing some basic tool calls.
- Running the app server used to power experiences such as the VS Code
extension.
This PR aims to separate these into distinct concepts:
- `codex mcp-server` for the MCP server
- `codex app-server` for the "application server"
Note `codex mcp` still exists because it already has its own subcommands
for MCP management (`list`, `add`, etc.)
The MCP logic continues to live in `codex-rs/mcp-server` whereas the
refactored app server logic is in the new `codex-rs/app-server` folder.
Note that most of the existing integration tests in
`codex-rs/mcp-server/tests/suite` were actually for the app server, so
all the tests have been moved with the exception of
`codex-rs/mcp-server/tests/suite/mod.rs`.
Because this is already a large diff, I tried not to change more than I
had to, so `codex-rs/app-server/tests/common/mcp_process.rs` still uses
the name `McpProcess` for now, but I will do some mechanical renamings
to things like `AppServer` in subsequent PRs.
While `mcp-server` and `app-server` share some overlapping functionality
(like reading streams of JSONL and dispatching based on message types)
and some differences (completely different message types), I ended up
doing a bit of copypasta between the two crates, as both have somewhat
similar `message_processor.rs` and `outgoing_message.rs` files for now,
though I expect them to diverge more in the near future.
One material change is that of the initialize handshake for `codex
app-server`, as we no longer use the MCP types for that handshake.
Instead, we update `codex-rs/protocol/src/mcp_protocol.rs` to add an
`Initialize` variant to `ClientRequest`, which takes the `ClientInfo`
object we need to update the `USER_AGENT_SUFFIX` in
`codex-rs/app-server/src/message_processor.rs`.
One other material change is in
`codex-rs/app-server/src/codex_message_processor.rs` where I eliminated
a use of the `send_event_as_notification()` method I am generally trying
to deprecate (because it blindly maps an `EventMsg` into a
`JSONNotification`) in favor of `send_server_notification()`, which
takes a `ServerNotification`, as that is intended to be a custom enum of
all notification types supported by the app server. So to make this
update, I had to introduce a new variant of `ServerNotification`,
`SessionConfigured`, which is a non-backwards compatible change with the
old `codex mcp`, and clients will have to be updated after the next
release that contains this PR. Note that
`codex-rs/app-server/tests/suite/list_resume.rs` also had to be update
to reflect this change.
I introduced `codex-rs/utils/json-to-toml/src/lib.rs` as a small utility
crate to avoid some of the copying between `mcp-server` and
`app-server`.
2025-09-30 00:06:18 -07:00
|
|
|
|
},
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
NewConversation {
|
|
|
|
|
|
params: NewConversationParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: NewConversationResponse,
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
},
|
2025-09-04 16:44:18 -07:00
|
|
|
|
/// List recorded Codex conversations (rollouts) with optional pagination and search.
|
|
|
|
|
|
ListConversations {
|
|
|
|
|
|
params: ListConversationsParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: ListConversationsResponse,
|
2025-09-04 16:44:18 -07:00
|
|
|
|
},
|
|
|
|
|
|
/// Resume a recorded Codex conversation from a rollout file.
|
|
|
|
|
|
ResumeConversation {
|
|
|
|
|
|
params: ResumeConversationParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: ResumeConversationResponse,
|
2025-09-04 16:44:18 -07:00
|
|
|
|
},
|
2025-09-09 08:39:00 -07:00
|
|
|
|
ArchiveConversation {
|
|
|
|
|
|
params: ArchiveConversationParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: ArchiveConversationResponse,
|
2025-09-09 08:39:00 -07:00
|
|
|
|
},
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
SendUserMessage {
|
|
|
|
|
|
params: SendUserMessageParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: SendUserMessageResponse,
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
},
|
2025-08-15 10:05:58 -07:00
|
|
|
|
SendUserTurn {
|
|
|
|
|
|
params: SendUserTurnParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: SendUserTurnResponse,
|
2025-08-15 10:05:58 -07:00
|
|
|
|
},
|
2025-08-13 23:12:03 -07:00
|
|
|
|
InterruptConversation {
|
|
|
|
|
|
params: InterruptConversationParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: InterruptConversationResponse,
|
2025-08-13 23:12:03 -07:00
|
|
|
|
},
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
AddConversationListener {
|
|
|
|
|
|
params: AddConversationListenerParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: AddConversationSubscriptionResponse,
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
},
|
|
|
|
|
|
RemoveConversationListener {
|
|
|
|
|
|
params: RemoveConversationListenerParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: RemoveConversationSubscriptionResponse,
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
},
|
2025-08-22 13:10:11 -07:00
|
|
|
|
GitDiffToRemote {
|
|
|
|
|
|
params: GitDiffToRemoteParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: GitDiffToRemoteResponse,
|
2025-08-22 13:10:11 -07:00
|
|
|
|
},
|
2025-09-11 09:16:34 -07:00
|
|
|
|
LoginApiKey {
|
|
|
|
|
|
params: LoginApiKeyParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: LoginApiKeyResponse,
|
2025-09-11 09:16:34 -07:00
|
|
|
|
},
|
2025-08-17 10:03:52 -07:00
|
|
|
|
LoginChatGpt {
|
2025-09-30 18:06:05 -07:00
|
|
|
|
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
|
|
|
|
|
|
response: LoginChatGptResponse,
|
2025-08-17 10:03:52 -07:00
|
|
|
|
},
|
|
|
|
|
|
CancelLoginChatGpt {
|
|
|
|
|
|
params: CancelLoginChatGptParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: CancelLoginChatGptResponse,
|
2025-08-17 10:03:52 -07:00
|
|
|
|
},
|
2025-08-20 20:36:34 -07:00
|
|
|
|
LogoutChatGpt {
|
2025-09-30 18:06:05 -07:00
|
|
|
|
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
|
|
|
|
|
|
response: LogoutChatGptResponse,
|
2025-08-20 20:36:34 -07:00
|
|
|
|
},
|
|
|
|
|
|
GetAuthStatus {
|
2025-08-22 13:10:11 -07:00
|
|
|
|
params: GetAuthStatusParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: GetAuthStatusResponse,
|
2025-08-19 19:50:28 -07:00
|
|
|
|
},
|
2025-09-04 16:26:41 -07:00
|
|
|
|
GetUserSavedConfig {
|
2025-09-30 18:06:05 -07:00
|
|
|
|
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
|
|
|
|
|
|
response: GetUserSavedConfigResponse,
|
2025-08-27 09:59:03 -07:00
|
|
|
|
},
|
2025-09-11 23:44:17 -07:00
|
|
|
|
SetDefaultModel {
|
|
|
|
|
|
params: SetDefaultModelParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: SetDefaultModelResponse,
|
2025-09-11 23:44:17 -07:00
|
|
|
|
},
|
2025-09-08 10:30:13 -07:00
|
|
|
|
GetUserAgent {
|
2025-09-30 18:06:05 -07:00
|
|
|
|
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
|
|
|
|
|
|
response: GetUserAgentResponse,
|
2025-09-08 10:30:13 -07:00
|
|
|
|
},
|
2025-09-10 17:03:35 -07:00
|
|
|
|
UserInfo {
|
2025-09-30 18:06:05 -07:00
|
|
|
|
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
|
|
|
|
|
|
response: UserInfoResponse,
|
2025-09-10 17:03:35 -07:00
|
|
|
|
},
|
2025-09-29 12:19:09 -07:00
|
|
|
|
FuzzyFileSearch {
|
|
|
|
|
|
params: FuzzyFileSearchParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: FuzzyFileSearchResponse,
|
2025-09-29 12:19:09 -07:00
|
|
|
|
},
|
2025-09-03 17:05:03 -07:00
|
|
|
|
/// Execute a command (argv vector) under the server's sandbox.
|
|
|
|
|
|
ExecOneOffCommand {
|
|
|
|
|
|
params: ExecOneOffCommandParams,
|
2025-09-30 18:06:05 -07:00
|
|
|
|
response: ExecOneOffCommandResponse,
|
2025-09-03 17:05:03 -07:00
|
|
|
|
},
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
}
|
|
|
|
|
|
|
fix: separate `codex mcp` into `codex mcp-server` and `codex app-server` (#4471)
This is a very large PR with some non-backwards-compatible changes.
Historically, `codex mcp` (or `codex mcp serve`) started a JSON-RPC-ish
server that had two overlapping responsibilities:
- Running an MCP server, providing some basic tool calls.
- Running the app server used to power experiences such as the VS Code
extension.
This PR aims to separate these into distinct concepts:
- `codex mcp-server` for the MCP server
- `codex app-server` for the "application server"
Note `codex mcp` still exists because it already has its own subcommands
for MCP management (`list`, `add`, etc.)
The MCP logic continues to live in `codex-rs/mcp-server` whereas the
refactored app server logic is in the new `codex-rs/app-server` folder.
Note that most of the existing integration tests in
`codex-rs/mcp-server/tests/suite` were actually for the app server, so
all the tests have been moved with the exception of
`codex-rs/mcp-server/tests/suite/mod.rs`.
Because this is already a large diff, I tried not to change more than I
had to, so `codex-rs/app-server/tests/common/mcp_process.rs` still uses
the name `McpProcess` for now, but I will do some mechanical renamings
to things like `AppServer` in subsequent PRs.
While `mcp-server` and `app-server` share some overlapping functionality
(like reading streams of JSONL and dispatching based on message types)
and some differences (completely different message types), I ended up
doing a bit of copypasta between the two crates, as both have somewhat
similar `message_processor.rs` and `outgoing_message.rs` files for now,
though I expect them to diverge more in the near future.
One material change is that of the initialize handshake for `codex
app-server`, as we no longer use the MCP types for that handshake.
Instead, we update `codex-rs/protocol/src/mcp_protocol.rs` to add an
`Initialize` variant to `ClientRequest`, which takes the `ClientInfo`
object we need to update the `USER_AGENT_SUFFIX` in
`codex-rs/app-server/src/message_processor.rs`.
One other material change is in
`codex-rs/app-server/src/codex_message_processor.rs` where I eliminated
a use of the `send_event_as_notification()` method I am generally trying
to deprecate (because it blindly maps an `EventMsg` into a
`JSONNotification`) in favor of `send_server_notification()`, which
takes a `ServerNotification`, as that is intended to be a custom enum of
all notification types supported by the app server. So to make this
update, I had to introduce a new variant of `ServerNotification`,
`SessionConfigured`, which is a non-backwards compatible change with the
old `codex mcp`, and clients will have to be updated after the next
release that contains this PR. Note that
`codex-rs/app-server/tests/suite/list_resume.rs` also had to be update
to reflect this change.
I introduced `codex-rs/utils/json-to-toml/src/lib.rs` as a small utility
crate to avoid some of the copying between `mcp-server` and
`app-server`.
2025-09-30 00:06:18 -07:00
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, TS)]
|
|
|
|
|
|
#[serde(rename_all = "camelCase")]
|
|
|
|
|
|
pub struct InitializeParams {
|
|
|
|
|
|
pub client_info: ClientInfo,
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, TS)]
|
|
|
|
|
|
#[serde(rename_all = "camelCase")]
|
|
|
|
|
|
pub struct ClientInfo {
|
|
|
|
|
|
pub name: String,
|
|
|
|
|
|
#[serde(skip_serializing_if = "Option::is_none")]
|
|
|
|
|
|
pub title: Option<String>,
|
|
|
|
|
|
pub version: String,
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
|
|
|
|
|
|
#[serde(rename_all = "camelCase")]
|
|
|
|
|
|
pub struct InitializeResponse {
|
|
|
|
|
|
pub user_agent: String,
|
|
|
|
|
|
}
|
|
|
|
|
|
|
2025-08-18 13:08:53 -07:00
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, TS)]
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
#[serde(rename_all = "camelCase")]
|
|
|
|
|
|
pub struct NewConversationParams {
|
|
|
|
|
|
/// Optional override for the model name (e.g. "o3", "o4-mini").
|
|
|
|
|
|
#[serde(skip_serializing_if = "Option::is_none")]
|
|
|
|
|
|
pub model: Option<String>,
|
|
|
|
|
|
|
|
|
|
|
|
/// Configuration profile from config.toml to specify default options.
|
|
|
|
|
|
#[serde(skip_serializing_if = "Option::is_none")]
|
|
|
|
|
|
pub profile: Option<String>,
|
|
|
|
|
|
|
|
|
|
|
|
/// Working directory for the session. If relative, it is resolved against
|
|
|
|
|
|
/// the server process's current working directory.
|
|
|
|
|
|
#[serde(skip_serializing_if = "Option::is_none")]
|
|
|
|
|
|
pub cwd: Option<String>,
|
|
|
|
|
|
|
|
|
|
|
|
/// Approval policy for shell commands generated by the model:
|
|
|
|
|
|
/// `untrusted`, `on-failure`, `on-request`, `never`.
|
|
|
|
|
|
#[serde(skip_serializing_if = "Option::is_none")]
|
2025-08-18 09:36:57 -07:00
|
|
|
|
pub approval_policy: Option<AskForApproval>,
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
|
|
|
|
|
|
/// Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`.
|
|
|
|
|
|
#[serde(skip_serializing_if = "Option::is_none")]
|
2025-08-18 09:36:57 -07:00
|
|
|
|
pub sandbox: Option<SandboxMode>,
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
|
|
|
|
|
|
    /// Individual config settings that will override what is in
    /// CODEX_HOME/config.toml.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub config: Option<HashMap<String, serde_json::Value>>,

    /// The set of instructions to use instead of the default ones.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub base_instructions: Option<String>,

    /// Whether to include the plan tool in the conversation.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub include_plan_tool: Option<bool>,

    /// Whether to include the apply patch tool in the conversation.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub include_apply_patch_tool: Option<bool>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct NewConversationResponse {
    pub conversation_id: ConversationId,
    pub model: String,
    /// Note this could be ignored by the model.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reasoning_effort: Option<ReasoningEffort>,
    pub rollout_path: PathBuf,
}

#[derive(Serialize, Deserialize, Debug, Clone, TS)]
#[serde(rename_all = "camelCase")]
pub struct ResumeConversationResponse {
    pub conversation_id: ConversationId,
    pub model: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub initial_messages: Option<Vec<EventMsg>>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, TS)]
#[serde(rename_all = "camelCase")]
pub struct ListConversationsParams {
    /// Optional page size; defaults to a reasonable server-side value.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub page_size: Option<usize>,

    /// Opaque pagination cursor returned by a previous call.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub cursor: Option<String>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ConversationSummary {
    pub conversation_id: ConversationId,
    pub path: PathBuf,
    pub preview: String,

    /// RFC3339 timestamp string for the session start, if available.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub timestamp: Option<String>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ListConversationsResponse {
    pub items: Vec<ConversationSummary>,

    /// Opaque cursor to pass to the next call to continue after the last item.
    /// If `None`, there are no more items to return.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub next_cursor: Option<String>,
}
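
// Illustrative sketch (module name is arbitrary): listConversations is paginated,
// so a client passes the previous response's `next_cursor` back as `cursor`. The
// test below shows the camelCase wire names and that unset optional fields are
// omitted entirely.
#[cfg(test)]
mod list_conversations_wire_format_sketch {
    use super::*;

    #[test]
    fn page_size_only_params() {
        let params = ListConversationsParams {
            page_size: Some(10),
            cursor: None,
        };
        let value = serde_json::to_value(&params).expect("params should serialize");
        assert_eq!(value, serde_json::json!({ "pageSize": 10 }));
    }
}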

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ResumeConversationParams {
    /// Absolute path to the rollout JSONL file.
    pub path: PathBuf,

    /// Optional overrides to apply when spawning the resumed session.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub overrides: Option<NewConversationParams>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct AddConversationSubscriptionResponse {
    pub subscription_id: Uuid,
}

/// The [`ConversationId`] must match the `rollout_path`.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ArchiveConversationParams {
    pub conversation_id: ConversationId,
    pub rollout_path: PathBuf,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ArchiveConversationResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct RemoveConversationSubscriptionResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginApiKeyParams {
    pub api_key: String,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginApiKeyResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginChatGptResponse {
    pub login_id: Uuid,

    /// URL the client should open in a browser to initiate the OAuth flow.
    pub auth_url: String,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GitDiffToRemoteResponse {
    pub sha: GitSha,
    pub diff: String,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct CancelLoginChatGptParams {
    pub login_id: Uuid,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GitDiffToRemoteParams {
    pub cwd: PathBuf,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct CancelLoginChatGptResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LogoutChatGptParams {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LogoutChatGptResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetAuthStatusParams {
    /// If true, include the current auth token (if available) in the response.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub include_token: Option<bool>,

    /// If true, attempt to refresh the token before returning status.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub refresh_token: Option<bool>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecOneOffCommandParams {
    /// Command argv to execute.
    pub command: Vec<String>,

    /// Timeout of the command in milliseconds.
    /// If not specified, a sensible default is used server-side.
    pub timeout_ms: Option<u64>,

    /// Optional working directory for the process. Defaults to server config cwd.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub cwd: Option<PathBuf>,

    /// Optional explicit sandbox policy overriding the server default.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub sandbox_policy: Option<SandboxPolicy>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecOneOffCommandResponse {
    pub exit_code: i32,
    pub stdout: String,
    pub stderr: String,
}
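
// Illustrative sketch (module name is arbitrary): unlike the fields marked
// `skip_serializing_if`, `timeout_ms` is always present on the wire, so an
// unset timeout is sent as an explicit `timeoutMs: null`.
#[cfg(test)]
mod exec_one_off_command_wire_format_sketch {
    use super::*;

    #[test]
    fn minimal_params() {
        let params = ExecOneOffCommandParams {
            command: vec!["echo".to_string(), "hello".to_string()],
            timeout_ms: None,
            cwd: None,
            sandbox_policy: None,
        };
        let value = serde_json::to_value(&params).expect("params should serialize");
        assert_eq!(
            value,
            serde_json::json!({ "command": ["echo", "hello"], "timeoutMs": null })
        );
    }
}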

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetAuthStatusResponse {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub auth_method: Option<AuthMode>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub auth_token: Option<String>,

    // Indicates whether the auth method must be valid to use the server.
    // This can be false if using a custom provider that is configured
    // with requires_openai_auth == false.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub requires_openai_auth: Option<bool>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetUserAgentResponse {
    pub user_agent: String,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct UserInfoResponse {
    /// Note: `alleged_user_email` is not currently verified. We read it from
    /// the local auth.json, which the user could theoretically modify. In the
    /// future, we may add logic to verify the email against the server before
    /// returning it.
    pub alleged_user_email: Option<String>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetUserSavedConfigResponse {
    pub config: UserSavedConfig,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SetDefaultModelParams {
    /// If set to None, this means `model` should be cleared in config.toml.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub model: Option<String>,

    /// If set to None, this means `model_reasoning_effort` should be cleared
    /// in config.toml.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reasoning_effort: Option<ReasoningEffort>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SetDefaultModelResponse {}

/// UserSavedConfig contains a subset of the config. It is meant to expose MCP
/// client-configurable settings that can be specified in the NewConversation
/// and SendUserTurn requests.
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, TS)]
#[serde(rename_all = "camelCase")]
pub struct UserSavedConfig {
    /// Approvals
    #[serde(skip_serializing_if = "Option::is_none")]
    pub approval_policy: Option<AskForApproval>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub sandbox_mode: Option<SandboxMode>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub sandbox_settings: Option<SandboxSettings>,

    /// Model-specific configuration
    #[serde(skip_serializing_if = "Option::is_none")]
    pub model: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub model_reasoning_effort: Option<ReasoningEffort>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub model_reasoning_summary: Option<ReasoningSummary>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub model_verbosity: Option<Verbosity>,

    /// Tools
    #[serde(skip_serializing_if = "Option::is_none")]
    pub tools: Option<Tools>,

    /// Profiles
    #[serde(skip_serializing_if = "Option::is_none")]
    pub profile: Option<String>,
    #[serde(default)]
    pub profiles: HashMap<String, Profile>,
}

/// MCP representation of a [`codex_core::config_profile::ConfigProfile`].
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, TS)]
#[serde(rename_all = "camelCase")]
pub struct Profile {
    pub model: Option<String>,
    /// The key in the `model_providers` map identifying the
    /// [`ModelProviderInfo`] to use.
    pub model_provider: Option<String>,
    pub approval_policy: Option<AskForApproval>,
    pub model_reasoning_effort: Option<ReasoningEffort>,
    pub model_reasoning_summary: Option<ReasoningSummary>,
    pub model_verbosity: Option<Verbosity>,
    pub chatgpt_base_url: Option<String>,
}

/// MCP representation of a [`codex_core::config::ToolsToml`].
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, TS)]
#[serde(rename_all = "camelCase")]
pub struct Tools {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub web_search: Option<bool>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub view_image: Option<bool>,
}

/// MCP representation of a [`codex_core::config_types::SandboxWorkspaceWrite`].
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, TS)]
#[serde(rename_all = "camelCase")]
pub struct SandboxSettings {
    #[serde(default)]
    pub writable_roots: Vec<PathBuf>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub network_access: Option<bool>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub exclude_tmpdir_env_var: Option<bool>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub exclude_slash_tmp: Option<bool>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserMessageParams {
    pub conversation_id: ConversationId,
    pub items: Vec<InputItem>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserTurnParams {
    pub conversation_id: ConversationId,
    pub items: Vec<InputItem>,
    pub cwd: PathBuf,
    pub approval_policy: AskForApproval,
    pub sandbox_policy: SandboxPolicy,
    pub model: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub effort: Option<ReasoningEffort>,
    pub summary: ReasoningSummary,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserTurnResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct InterruptConversationParams {
    pub conversation_id: ConversationId,
}

#[derive(Serialize, Deserialize, Debug, Clone, TS)]
#[serde(rename_all = "camelCase")]
pub struct InterruptConversationResponse {
    pub abort_reason: TurnAbortReason,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserMessageResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct AddConversationListenerParams {
    pub conversation_id: ConversationId,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct RemoveConversationListenerParams {
    pub subscription_id: Uuid,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
#[serde(tag = "type", content = "data")]
pub enum InputItem {
    Text {
        text: String,
    },

    /// Pre-encoded data: URI image.
    Image {
        image_url: String,
    },

    /// Local image path provided by the user. This will be converted to an
    /// `Image` variant (base64 data URL) during request serialization.
    LocalImage {
        path: PathBuf,
    },
}
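
// Illustrative sketch (module name is arbitrary): `InputItem` is adjacently
// tagged, so the camelCased variant name is carried under `type` and the
// variant's fields under `data`.
#[cfg(test)]
mod input_item_wire_format_sketch {
    use super::*;

    #[test]
    fn text_item() {
        let item = InputItem::Text {
            text: "hello".to_string(),
        };
        let value = serde_json::to_value(&item).expect("item should serialize");
        assert_eq!(
            value,
            serde_json::json!({ "type": "text", "data": { "text": "hello" } })
        );
    }
}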

fix: remove mcp-types from app server protocol (#4537)
We continue the separation between `codex app-server` and `codex
mcp-server`.
In particular, we introduce a new crate, `codex-app-server-protocol`,
and migrate `codex-rs/protocol/src/mcp_protocol.rs` into it, renaming it
`codex-rs/app-server-protocol/src/protocol.rs`.
Because `ConversationId` was defined in `mcp_protocol.rs`, we move it
into its own file, `codex-rs/protocol/src/conversation_id.rs`, and
because it is referenced in a ton of places, we have to touch a lot of
files as part of this PR.
We also decide to get away from proper JSON-RPC 2.0 semantics, so we
also introduce `codex-rs/app-server-protocol/src/jsonrpc_lite.rs`, which
is basically the same `JSONRPCMessage` type defined in `mcp-types`
except with all of the `"jsonrpc": "2.0"` removed.
Getting rid of `"jsonrpc": "2.0"` makes our serialization logic
considerably simpler, as we can lean heavier on serde to serialize
directly into the wire format that we use now.

/// Generates an `enum ServerRequest` where each variant is a request that the
/// server can send to the client along with the corresponding params and
/// response types. It also generates helper types used by the app/server
/// infrastructure (payload enum, request constructor, and export helpers).
macro_rules! server_request_definitions {
    (
        $(
            $(#[$variant_meta:meta])*
            $variant:ident
        ),* $(,)?
    ) => {
        paste! {
            /// Request initiated from the server and sent to the client.
            #[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
            #[serde(tag = "method", rename_all = "camelCase")]
            pub enum ServerRequest {
                $(
                    $(#[$variant_meta])*
                    $variant {
                        #[serde(rename = "id")]
                        request_id: RequestId,
                        params: [<$variant Params>],
                    },
                )*
            }

            #[derive(Debug, Clone, PartialEq)]
            pub enum ServerRequestPayload {
                $( $variant([<$variant Params>]), )*
            }

            impl ServerRequestPayload {
                pub fn request_with_id(self, request_id: RequestId) -> ServerRequest {
                    match self {
                        $(Self::$variant(params) => ServerRequest::$variant { request_id, params },)*
                    }
                }
            }
        }

        pub fn export_server_responses(
            out_dir: &::std::path::Path,
        ) -> ::std::result::Result<(), ::ts_rs::ExportError> {
            paste! {
                $(<[<$variant Response>] as ::ts_rs::TS>::export_all_to(out_dir)?;)*
            }
            Ok(())
        }
    };
}

impl TryFrom<JSONRPCRequest> for ServerRequest {
    type Error = serde_json::Error;

    fn try_from(value: JSONRPCRequest) -> Result<Self, Self::Error> {
        serde_json::from_value(serde_json::to_value(value)?)
    }
}

server_request_definitions! {
    /// Request to approve a patch.
    ApplyPatchApproval,
    /// Request to exec a command.
    ExecCommandApproval,
}
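
// A rough sketch of what the invocation above expands to (for orientation only;
// `paste!` stitches the `Params`/`Response` type names together):
//
//     pub enum ServerRequest {
//         /// Request to approve a patch.
//         ApplyPatchApproval {
//             #[serde(rename = "id")]
//             request_id: RequestId,
//             params: ApplyPatchApprovalParams,
//         },
//         /// Request to exec a command.
//         ExecCommandApproval {
//             #[serde(rename = "id")]
//             request_id: RequestId,
//             params: ExecCommandApprovalParams,
//         },
//     }
//
// With `#[serde(tag = "method", rename_all = "camelCase")]`, an exec approval
// request therefore serializes as an object carrying `method: "execCommandApproval"`,
// an `id`, and a `params` payload.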

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ApplyPatchApprovalParams {
    pub conversation_id: ConversationId,
    /// Use to correlate this with [codex_core::protocol::PatchApplyBeginEvent]
    /// and [codex_core::protocol::PatchApplyEndEvent].
    pub call_id: String,
    pub file_changes: HashMap<PathBuf, FileChange>,
    /// Optional explanatory reason (e.g. request for extra write access).
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reason: Option<String>,
    /// When set, the agent is asking the user to allow writes under this root
    /// for the remainder of the session (unclear if this is honored today).
    #[serde(skip_serializing_if = "Option::is_none")]
    pub grant_root: Option<PathBuf>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecCommandApprovalParams {
    pub conversation_id: ConversationId,
    /// Use to correlate this with [codex_core::protocol::ExecCommandBeginEvent]
    /// and [codex_core::protocol::ExecCommandEndEvent].
    pub call_id: String,
    pub command: Vec<String>,
    pub cwd: PathBuf,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reason: Option<String>,
    pub parsed_cmd: Vec<ParsedCommand>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
pub struct ExecCommandApprovalResponse {
    pub decision: ReviewDecision,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
pub struct ApplyPatchApprovalResponse {
    pub decision: ReviewDecision,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
pub struct FuzzyFileSearchParams {
    pub query: String,
    pub roots: Vec<String>,
    // If provided, will cancel any previous request that used the same value.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub cancellation_token: Option<String>,
}

/// Superset of [`codex_file_search::FileMatch`]
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
pub struct FuzzyFileSearchResult {
    pub root: String,
    pub path: String,
    pub file_name: String,
    pub score: u32,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub indices: Option<Vec<u32>>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
pub struct FuzzyFileSearchResponse {
    pub files: Vec<FuzzyFileSearchResult>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginChatGptCompleteNotification {
    pub login_id: Uuid,
    pub success: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub error: Option<String>,
}
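
// Illustrative sketch (module name is arbitrary): on a successful login the
// optional `error` field is omitted from the notification payload.
#[cfg(test)]
mod login_chatgpt_complete_wire_format_sketch {
    use super::*;

    #[test]
    fn success_payload_omits_error() {
        let notification = LoginChatGptCompleteNotification {
            login_id: Uuid::nil(),
            success: true,
            error: None,
        };
        let value = serde_json::to_value(&notification).expect("notification should serialize");
        assert!(value["loginId"].is_string());
        assert_eq!(value["success"], serde_json::json!(true));
        assert!(value.get("error").is_none());
    }
}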
|
|
|
|
|
|
|
fix: separate `codex mcp` into `codex mcp-server` and `codex app-server` (#4471)
This is a very large PR with some non-backwards-compatible changes.
Historically, `codex mcp` (or `codex mcp serve`) started a JSON-RPC-ish
server that had two overlapping responsibilities:
- Running an MCP server, providing some basic tool calls.
- Running the app server used to power experiences such as the VS Code
extension.
This PR aims to separate these into distinct concepts:
- `codex mcp-server` for the MCP server
- `codex app-server` for the "application server"
Note `codex mcp` still exists because it already has its own subcommands
for MCP management (`list`, `add`, etc.)
The MCP logic continues to live in `codex-rs/mcp-server` whereas the
refactored app server logic is in the new `codex-rs/app-server` folder.
Note that most of the existing integration tests in
`codex-rs/mcp-server/tests/suite` were actually for the app server, so
all the tests have been moved with the exception of
`codex-rs/mcp-server/tests/suite/mod.rs`.
Because this is already a large diff, I tried not to change more than I
had to, so `codex-rs/app-server/tests/common/mcp_process.rs` still uses
the name `McpProcess` for now, but I will do some mechanical renamings
to things like `AppServer` in subsequent PRs.
While `mcp-server` and `app-server` share some overlapping functionality
(like reading streams of JSONL and dispatching based on message types)
and some differences (completely different message types), I ended up
doing a bit of copypasta between the two crates, as both have somewhat
similar `message_processor.rs` and `outgoing_message.rs` files for now,
though I expect them to diverge more in the near future.
One material change is that of the initialize handshake for `codex
app-server`, as we no longer use the MCP types for that handshake.
Instead, we update `codex-rs/protocol/src/mcp_protocol.rs` to add an
`Initialize` variant to `ClientRequest`, which takes the `ClientInfo`
object we need to update the `USER_AGENT_SUFFIX` in
`codex-rs/app-server/src/message_processor.rs`.
One other material change is in
`codex-rs/app-server/src/codex_message_processor.rs` where I eliminated
a use of the `send_event_as_notification()` method I am generally trying
to deprecate (because it blindly maps an `EventMsg` into a
`JSONNotification`) in favor of `send_server_notification()`, which
takes a `ServerNotification`, as that is intended to be a custom enum of
all notification types supported by the app server. So to make this
update, I had to introduce a new variant of `ServerNotification`,
`SessionConfigured`, which is a non-backwards compatible change with the
old `codex mcp`, and clients will have to be updated after the next
release that contains this PR. Note that
`codex-rs/app-server/tests/suite/list_resume.rs` also had to be update
to reflect this change.
I introduced `codex-rs/utils/json-to-toml/src/lib.rs` as a small utility
crate to avoid some of the copying between `mcp-server` and
`app-server`.
2025-09-30 00:06:18 -07:00
#[derive(Serialize, Deserialize, Debug, Clone, TS)]
#[serde(rename_all = "camelCase")]
pub struct SessionConfiguredNotification {
    /// Name left as session_id instead of conversation_id for backwards compatibility.
    pub session_id: ConversationId,

    /// Tell the client what model is being queried.
    pub model: String,

    /// The effort the model is putting into reasoning about the user's request.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reasoning_effort: Option<ReasoningEffort>,

    /// Identifier of the history log file (inode on Unix, 0 otherwise).
    pub history_log_id: u64,

    /// Current number of entries in the history log.
    #[ts(type = "number")]
    pub history_entry_count: usize,

    /// Optional initial messages (as events) for resumed sessions.
    /// When present, UIs can use these to seed the history.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub initial_messages: Option<Vec<EventMsg>>,

    pub rollout_path: PathBuf,
}
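For context, here is a test-style sketch of how this struct serializes, assuming the same imports as the tests at the bottom of this file; the concrete values (including `/tmp/rollout.jsonl`) are illustrative. Fields are renamed to camelCase on the wire, and `None` optionals are omitted entirely:

```rust
// Illustrative values only; the shape follows the serde attributes above.
let notification = SessionConfiguredNotification {
    session_id: ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?,
    model: "gpt-5-codex".to_string(),
    reasoning_effort: None,
    history_log_id: 0,
    history_entry_count: 0,
    initial_messages: None,
    rollout_path: PathBuf::from("/tmp/rollout.jsonl"),
};
assert_eq!(
    serde_json::json!({
        "sessionId": "67e55044-10b1-426f-9247-bb680e5fe0c8",
        "model": "gpt-5-codex",
        "historyLogId": 0,
        "historyEntryCount": 0,
        "rolloutPath": "/tmp/rollout.jsonl"
    }),
    serde_json::to_value(&notification)?,
);
```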
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct AuthStatusChangeNotification {
    /// Current authentication method; omitted if signed out.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub auth_method: Option<AuthMode>,
}
/// Notification sent from the server to the client.
#[derive(Serialize, Deserialize, Debug, Clone, TS, Display)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
#[strum(serialize_all = "camelCase")]
pub enum ServerNotification {
    /// Authentication status changed
    AuthStatusChange(AuthStatusChangeNotification),

    /// ChatGPT login flow completed
    LoginChatGptComplete(LoginChatGptCompleteNotification),
    /// The special session configured event for a new or resumed conversation.
    SessionConfigured(SessionConfiguredNotification),
}

impl ServerNotification {
    pub fn to_params(self) -> Result<serde_json::Value, serde_json::Error> {
        match self {
            ServerNotification::AuthStatusChange(params) => serde_json::to_value(params),
            ServerNotification::LoginChatGptComplete(params) => serde_json::to_value(params),
            ServerNotification::SessionConfigured(params) => serde_json::to_value(params),
        }
    }
}
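A quick sketch (in the style of the tests below) of the two shapes involved: serializing the enum itself yields the adjacently tagged `{"method", "params"}` envelope, whereas `to_params()` returns only the inner params object. A signed-out `AuthStatusChangeNotification` keeps the example small:

```rust
let notification =
    ServerNotification::AuthStatusChange(AuthStatusChangeNotification { auth_method: None });

// Serializing the enum produces the tagged envelope used on the wire.
assert_eq!(
    serde_json::json!({ "method": "authStatusChange", "params": {} }),
    serde_json::to_value(&notification)?,
);

// `to_params()` drops the envelope and returns just the params object.
assert_eq!(serde_json::json!({}), notification.to_params()?);
```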
impl TryFrom<JSONRPCNotification> for ServerNotification {
    type Error = serde_json::Error;

    fn try_from(value: JSONRPCNotification) -> Result<Self, Self::Error> {
        serde_json::from_value(serde_json::to_value(value)?)
    }
}

/// Notification sent from the client to the server.
#[derive(Serialize, Deserialize, Debug, Clone, TS, Display)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
#[strum(serialize_all = "camelCase")]
pub enum ClientNotification {
    Initialized,
}
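Both notification enums also derive strum's `Display` with `serialize_all = "camelCase"`, so (assuming strum's usual behavior) formatting a variant yields the same string as the serde `method` tag, which is convenient for logging and dispatch:

```rust
// Sketch: the Display output matches the wire-level method name.
assert_eq!("initialized", ClientNotification::Initialized.to_string());
assert_eq!(
    "authStatusChange",
    ServerNotification::AuthStatusChange(AuthStatusChangeNotification { auth_method: None })
        .to_string()
);
```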
#[cfg(test)]
mod tests {
    use super::*;
    use anyhow::Result;
    use pretty_assertions::assert_eq;
    use serde_json::json;

    #[test]
    fn serialize_new_conversation() -> Result<()> {
        let request = ClientRequest::NewConversation {
            request_id: RequestId::Integer(42),
            params: NewConversationParams {
                model: Some("gpt-5-codex".to_string()),
                profile: None,
                cwd: None,
                approval_policy: Some(AskForApproval::OnRequest),
                sandbox: None,
                config: None,
                base_instructions: None,
                include_plan_tool: None,
                include_apply_patch_tool: None,
            },
        };
        assert_eq!(
            json!({
                "method": "newConversation",
                "id": 42,
                "params": {
                    "model": "gpt-5-codex",
                    "approvalPolicy": "on-request"
                }
            }),
            serde_json::to_value(&request)?,
        );
        Ok(())
    }

    #[test]
    fn conversation_id_serializes_as_plain_string() -> Result<()> {
        let id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;

        assert_eq!(
            json!("67e55044-10b1-426f-9247-bb680e5fe0c8"),
            serde_json::to_value(id)?
        );
        Ok(())
    }

    #[test]
    fn conversation_id_deserializes_from_plain_string() -> Result<()> {
        let id: ConversationId =
            serde_json::from_value(json!("67e55044-10b1-426f-9247-bb680e5fe0c8"))?;

        assert_eq!(
            ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?,
            id,
        );
        Ok(())
    }
    #[test]
    fn serialize_client_notification() -> Result<()> {
        let notification = ClientNotification::Initialized;
        // Note there is no "params" field for this notification.
        assert_eq!(
            json!({
                "method": "initialized",
            }),
            serde_json::to_value(&notification)?,
        );
        Ok(())
    }

    #[test]
    fn serialize_server_request() -> Result<()> {
        let conversation_id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
        let params = ExecCommandApprovalParams {
            conversation_id,
            call_id: "call-42".to_string(),
            command: vec!["echo".to_string(), "hello".to_string()],
            cwd: PathBuf::from("/tmp"),
            reason: Some("because tests".to_string()),
            parsed_cmd: vec![ParsedCommand::Unknown {
                cmd: "echo hello".to_string(),
            }],
        };
        let request = ServerRequest::ExecCommandApproval {
            request_id: RequestId::Integer(7),
            params: params.clone(),
        };

        assert_eq!(
            json!({
                "method": "execCommandApproval",
                "id": 7,
                "params": {
                    "conversationId": "67e55044-10b1-426f-9247-bb680e5fe0c8",
                    "callId": "call-42",
                    "command": ["echo", "hello"],
                    "cwd": "/tmp",
                    "reason": "because tests",
                    "parsedCmd": [
                        {
                            "type": "unknown",
                            "cmd": "echo hello"
                        }
                    ]
                }
            }),
            serde_json::to_value(&request)?,
        );

        let payload = ServerRequestPayload::ExecCommandApproval(params);
fix: remove mcp-types from app server protocol (#4537)
We continue the separation between `codex app-server` and `codex
mcp-server`.
In particular, we introduce a new crate, `codex-app-server-protocol`,
and migrate `codex-rs/protocol/src/mcp_protocol.rs` into it, renaming it
`codex-rs/app-server-protocol/src/protocol.rs`.
Because `ConversationId` was defined in `mcp_protocol.rs`, we move it
into its own file, `codex-rs/protocol/src/conversation_id.rs`, and
because it is referenced in a ton of places, we have to touch a lot of
files as part of this PR.
We also decide to move away from proper JSON-RPC 2.0 semantics, so we
introduce `codex-rs/app-server-protocol/src/jsonrpc_lite.rs`, which
is basically the same `JSONRPCMessage` type defined in `mcp-types`
except with all of the `"jsonrpc": "2.0"` removed.
Getting rid of `"jsonrpc": "2.0"` makes our serialization logic
considerably simpler, as we can lean more heavily on serde to serialize
directly into the wire format that we use now; a brief sketch of the
difference follows the test below.
2025-09-30 19:16:26 -07:00
        assert_eq!(payload.request_with_id(RequestId::Integer(7)), request);
        Ok(())
    }
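To make the wire-format point from #4537 above concrete, here is a minimal sketch reusing the `newConversation` shape from the first test; the newline-per-message (JSONL) framing is an assumption based on the streaming described in #4471:

```rust
// A strict JSON-RPC 2.0 request would also carry "jsonrpc": "2.0"; the app
// server's lite framing is just the serialized request plus a newline.
let request_line = format!(
    "{}\n",
    serde_json::json!({
        "method": "newConversation",
        "id": 42,
        "params": { "model": "gpt-5-codex", "approvalPolicy": "on-request" }
    })
);
assert!(!request_line.contains("\"jsonrpc\""));
```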
}