feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
server supports. Note that these do not conform to MCP tool calls, so
instead of having to send something like this:
```json
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "id": 42,
  "params": {
    "name": "newConversation",
    "arguments": {
      "model": "gpt-5",
      "approvalPolicy": "on-request"
    }
  }
}
```
we can send something like this:
```json
{
  "jsonrpc": "2.0",
  "method": "newConversation",
  "id": 42,
  "params": {
    "model": "gpt-5",
    "approvalPolicy": "on-request"
  }
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
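The two-tier dispatch described above can be sketched as follows. This is an illustrative, std-only sketch, not the actual codex implementation: the enum variants mirror the four request types, but `parse_codex_method` and `process_request` are hypothetical stand-ins for the serde-based routing in the real code.

```rust
// Hypothetical sketch of the dispatch: names are illustrative only.
#[derive(Debug, PartialEq)]
enum CodexRequest {
    NewConversation,
    SendUserMessage,
    AddConversationListener,
    RemoveConversationListener,
}

// Try to interpret a JSON-RPC `method` as a CodexRequest.
fn parse_codex_method(method: &str) -> Option<CodexRequest> {
    match method {
        "newConversation" => Some(CodexRequest::NewConversation),
        "sendUserMessage" => Some(CodexRequest::SendUserMessage),
        "addConversationListener" => Some(CodexRequest::AddConversationListener),
        "removeConversationListener" => Some(CodexRequest::RemoveConversationListener),
        _ => None,
    }
}

// The top-level processor first checks for a CodexRequest and only
// falls back to standard MCP handling when the method is unknown.
fn process_request(method: &str) -> &'static str {
    match parse_codex_method(method) {
        Some(_) => "dispatched to CodexMessageProcessor",
        None => "handled as a standard MCP request",
    }
}

fn main() {
    assert_eq!(process_request("newConversation"), "dispatched to CodexMessageProcessor");
    assert_eq!(process_request("tools/call"), "handled as a standard MCP request");
}
```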
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
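To make the convention concrete, here is a small std-only illustration of how a Rust-style snake_case identifier maps onto the camelCase used on the wire. In the real types this mapping is done automatically by serde's `rename_all = "camelCase"`; the `to_camel_case` helper below is just a hypothetical demonstration.

```rust
// Illustrative only: serde's `rename_all = "camelCase"` performs this
// mapping for the actual wire-format types.
fn to_camel_case(snake: &str) -> String {
    let mut out = String::new();
    let mut upper_next = false;
    for c in snake.chars() {
        if c == '_' {
            upper_next = true;
        } else if upper_next {
            out.extend(c.to_uppercase());
            upper_next = false;
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    assert_eq!(to_camel_case("new_conversation"), "newConversation");
    assert_eq!(to_camel_case("approval_policy"), "approvalPolicy");
}
```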
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
```rust
use std::collections::HashMap;
use std::fmt::Display;
use std::path::PathBuf;

use crate::config_types::ReasoningEffort;
use crate::config_types::ReasoningSummary;
use crate::config_types::SandboxMode;
use crate::config_types::Verbosity;
use crate::protocol::AskForApproval;
use crate::protocol::EventMsg;
use crate::protocol::FileChange;
use crate::protocol::ReviewDecision;
use crate::protocol::SandboxPolicy;
use crate::protocol::TurnAbortReason;
use mcp_types::RequestId;
use serde::Deserialize;
use serde::Serialize;
use strum_macros::Display;
use ts_rs::TS;
use uuid::Uuid;

#[derive(Debug, Clone, Copy, PartialEq, Eq, TS, Hash)]
#[ts(type = "string")]
pub struct ConversationId {
    uuid: Uuid,
}

impl ConversationId {
    pub fn new() -> Self {
        Self {
            uuid: Uuid::now_v7(),
        }
    }

    pub fn from_string(s: &str) -> Result<Self, uuid::Error> {
        Ok(Self {
            uuid: Uuid::parse_str(s)?,
        })
    }
}

impl Default for ConversationId {
    fn default() -> Self {
        Self::new()
    }
}

impl Display for ConversationId {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.uuid)
    }
}

impl Serialize for ConversationId {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: serde::Serializer,
    {
        serializer.collect_str(&self.uuid)
    }
}

impl<'de> Deserialize<'de> for ConversationId {
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: serde::Deserializer<'de>,
    {
        let value = String::deserialize(deserializer)?;
        let uuid = Uuid::parse_str(&value).map_err(serde::de::Error::custom)?;
        Ok(Self { uuid })
    }
}

#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, TS)]
#[ts(type = "string")]
pub struct GitSha(pub String);

impl GitSha {
    pub fn new(sha: &str) -> Self {
        Self(sha.to_string())
    }
}
```
OpenTelemetry events (#2103)
## otel
Codex can emit [OpenTelemetry](https://opentelemetry.io/) **log events**
that describe each run: outbound API requests, streamed responses, user
input, tool-approval decisions, and the result of every tool invocation.
Export is **disabled by default** so local runs remain self-contained.
Opt in by adding an `[otel]` table and choosing an exporter.
```toml
[otel]
environment = "staging" # defaults to "dev"
exporter = "none" # defaults to "none"; set to otlp-http or otlp-grpc to send events
log_user_prompt = false # defaults to false; redact prompt text unless explicitly enabled
```
Codex tags every exported event with `service.name = "codex-cli"`, the
CLI version, and an `env` attribute so downstream collectors can
distinguish dev/staging/prod traffic. Only telemetry produced inside the
`codex_otel` crate (the events listed below) is forwarded to the exporter.
### Event catalog
Every event shares a common set of metadata fields: `event.timestamp`,
`conversation.id`, `app.version`, `auth_mode` (when available),
`user.account_id` (when available), `terminal.type`, `model`, and `slug`.
With OTEL enabled, Codex emits the following event types (in addition to
the metadata above):
- `codex.api_request`
- `cf_ray` (optional)
- `attempt`
- `duration_ms`
- `http.response.status_code` (optional)
- `error.message` (failures)
- `codex.sse_event`
- `event.kind`
- `duration_ms`
- `error.message` (failures)
- `input_token_count` (completion only)
- `output_token_count` (completion only)
- `cached_token_count` (completion only, optional)
- `reasoning_token_count` (completion only, optional)
- `tool_token_count` (completion only)
- `codex.user_prompt`
- `prompt_length`
- `prompt` (redacted unless `log_user_prompt = true`)
- `codex.tool_decision`
- `tool_name`
- `call_id`
- `decision` (`approved`, `approved_for_session`, `denied`, or `abort`)
- `source` (`config` or `user`)
- `codex.tool_result`
- `tool_name`
- `call_id`
- `arguments`
- `duration_ms` (execution time for the tool)
- `success` (`"true"` or `"false"`)
- `output`
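As a purely hypothetical illustration of the catalog above, a `codex.tool_decision` event's attributes could be modeled as a simple key-value map; the field names follow the list, but `tool_decision_attrs` is an invented helper, not part of the `codex_otel` API.

```rust
use std::collections::HashMap;

// Hypothetical: build the attribute set for a `codex.tool_decision`
// event using the field names from the catalog above.
fn tool_decision_attrs(
    tool_name: &str,
    call_id: &str,
    decision: &str,
    source: &str,
) -> HashMap<&'static str, String> {
    HashMap::from([
        ("tool_name", tool_name.to_string()),
        ("call_id", call_id.to_string()),
        ("decision", decision.to_string()), // approved | approved_for_session | denied | abort
        ("source", source.to_string()),     // config | user
    ])
}

fn main() {
    let attrs = tool_decision_attrs("shell", "call-1", "approved", "user");
    assert_eq!(attrs["decision"], "approved");
    assert_eq!(attrs["source"], "user");
}
```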
### Choosing an exporter
Set `otel.exporter` to control where events go:
- `none` – leaves instrumentation active but skips exporting. This is
  the default.
- `otlp-http` – posts OTLP log records to an OTLP/HTTP collector.
  Specify the endpoint, protocol, and headers your collector expects:
```toml
[otel]
exporter = { otlp-http = {
endpoint = "https://otel.example.com/v1/logs",
protocol = "binary",
headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
}}
```
- `otlp-grpc` – streams OTLP log records over gRPC. Provide the endpoint
  and any metadata headers:
```toml
[otel]
exporter = { otlp-grpc = {
endpoint = "https://otel.example.com:4317",
headers = { "x-otlp-meta" = "abc123" }
}}
```
If the exporter is `none`, nothing is written anywhere; otherwise you
must run, or point to, your own collector. All exporters run on a
background batch worker that is flushed on shutdown.
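The batch-and-flush pattern can be sketched with std-only Rust. This is a minimal sketch, not the actual codex exporter: a channel stands in for the event queue, a counter stands in for the OTLP export call, and closing the channel models shutdown.

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of a background batch worker: events queue on a channel, full
// batches are "exported", and closing the channel triggers a final flush.
// Returns (events exported in full batches, events flushed at shutdown).
fn run_batch_worker(events: usize, batch_size: usize) -> (usize, usize) {
    let (tx, rx) = mpsc::channel::<String>();
    let worker = thread::spawn(move || {
        let mut batch = Vec::new();
        let mut exported = 0;
        for event in rx {
            batch.push(event);
            if batch.len() >= batch_size {
                exported += batch.len(); // stand-in for the real export call
                batch.clear();
            }
        }
        // Channel closed: flush whatever is left (the shutdown flush).
        (exported, batch.len())
    });
    for i in 0..events {
        tx.send(format!("event-{i}")).unwrap();
    }
    drop(tx); // shutdown: closing the sender lets the worker drain and exit
    worker.join().unwrap()
}

fn main() {
    // 4 events with batch size 3: one full batch exported, 1 event
    // left over and flushed at shutdown.
    assert_eq!(run_batch_worker(4, 3), (3, 1));
}
```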
If you build Codex from source, the OTEL crate is still behind an `otel`
feature flag; the official prebuilt binaries ship with the feature
enabled. When the feature is disabled, the telemetry hooks become no-ops
so the CLI continues to function without the extra dependencies.
---------
Co-authored-by: Anton Panasenko <apanasenko@openai.com>
2025-09-29 19:30:55 +01:00
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, Display, TS)]
|
2025-08-20 20:36:34 -07:00
|
|
|
|
#[serde(rename_all = "lowercase")]
|
|
|
|
|
|
pub enum AuthMode {
|
|
|
|
|
|
ApiKey,
|
|
|
|
|
|
ChatGPT,
|
|
|
|
|
|
}
|
|
|
|
|
|
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
/// Request from the client to the server.
|
2025-08-18 13:08:53 -07:00
|
|
|
|
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
#[serde(tag = "method", rename_all = "camelCase")]
|
2025-08-13 23:00:50 -07:00
|
|
|
|
pub enum ClientRequest {
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
NewConversation {
|
|
|
|
|
|
#[serde(rename = "id")]
|
|
|
|
|
|
request_id: RequestId,
|
|
|
|
|
|
params: NewConversationParams,
|
|
|
|
|
|
},
|
2025-09-04 16:44:18 -07:00
|
|
|
|
/// List recorded Codex conversations (rollouts) with optional pagination and search.
|
|
|
|
|
|
ListConversations {
|
|
|
|
|
|
#[serde(rename = "id")]
|
|
|
|
|
|
request_id: RequestId,
|
|
|
|
|
|
params: ListConversationsParams,
|
|
|
|
|
|
},
|
|
|
|
|
|
/// Resume a recorded Codex conversation from a rollout file.
|
|
|
|
|
|
ResumeConversation {
|
|
|
|
|
|
#[serde(rename = "id")]
|
|
|
|
|
|
request_id: RequestId,
|
|
|
|
|
|
params: ResumeConversationParams,
|
|
|
|
|
|
},
|
2025-09-09 08:39:00 -07:00
|
|
|
|
ArchiveConversation {
|
|
|
|
|
|
#[serde(rename = "id")]
|
|
|
|
|
|
request_id: RequestId,
|
|
|
|
|
|
params: ArchiveConversationParams,
|
|
|
|
|
|
},
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
|
|
|
|
SendUserMessage {
|
|
|
|
|
|
#[serde(rename = "id")]
|
|
|
|
|
|
request_id: RequestId,
|
|
|
|
|
|
params: SendUserMessageParams,
|
|
|
|
|
|
},
|
2025-08-15 10:05:58 -07:00
|
|
|
|
SendUserTurn {
|
|
|
|
|
|
#[serde(rename = "id")]
|
|
|
|
|
|
request_id: RequestId,
|
|
|
|
|
|
params: SendUserTurnParams,
|
|
|
|
|
|
},
|
2025-08-13 23:12:03 -07:00
|
|
|
|
InterruptConversation {
|
|
|
|
|
|
#[serde(rename = "id")]
|
|
|
|
|
|
request_id: RequestId,
|
|
|
|
|
|
params: InterruptConversationParams,
|
|
|
|
|
|
},
|
    AddConversationListener {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: AddConversationListenerParams,
    },

    RemoveConversationListener {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: RemoveConversationListenerParams,
    },

    GitDiffToRemote {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: GitDiffToRemoteParams,
    },

    LoginApiKey {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: LoginApiKeyParams,
    },

    LoginChatGpt {
        #[serde(rename = "id")]
        request_id: RequestId,
    },

    CancelLoginChatGpt {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: CancelLoginChatGptParams,
    },

    LogoutChatGpt {
        #[serde(rename = "id")]
        request_id: RequestId,
    },

    GetAuthStatus {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: GetAuthStatusParams,
    },

    GetUserSavedConfig {
        #[serde(rename = "id")]
        request_id: RequestId,
    },

    SetDefaultModel {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: SetDefaultModelParams,
    },

    GetUserAgent {
        #[serde(rename = "id")]
        request_id: RequestId,
    },

    UserInfo {
        #[serde(rename = "id")]
        request_id: RequestId,
    },

    FuzzyFileSearch {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: FuzzyFileSearchParams,
    },

    /// Execute a command (argv vector) under the server's sandbox.
    ExecOneOffCommand {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: ExecOneOffCommandParams,
    },
}
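
// Requests on this enum are dispatched as plain JSON-RPC methods rather than
// MCP `tools/call` invocations, with camelCase method and parameter names.
// For example, a `NewConversation` request looks like this on the wire
// (values taken from the PR description):
//
// {
//   "jsonrpc": "2.0",
//   "method": "newConversation",
//   "id": 42,
//   "params": {
//     "model": "gpt-5",
//     "approvalPolicy": "on-request"
//   }
// }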
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, TS)]
#[serde(rename_all = "camelCase")]
pub struct NewConversationParams {
    /// Optional override for the model name (e.g. "o3", "o4-mini").
    #[serde(skip_serializing_if = "Option::is_none")]
    pub model: Option<String>,

    /// Configuration profile from config.toml to specify default options.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub profile: Option<String>,

    /// Working directory for the session. If relative, it is resolved against
    /// the server process's current working directory.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub cwd: Option<String>,

    /// Approval policy for shell commands generated by the model:
    /// `untrusted`, `on-failure`, `on-request`, or `never`.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub approval_policy: Option<AskForApproval>,
    /// Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub sandbox: Option<SandboxMode>,
    /// Individual config settings that will override what is in
    /// CODEX_HOME/config.toml.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub config: Option<HashMap<String, serde_json::Value>>,

    /// The set of instructions to use instead of the default ones.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub base_instructions: Option<String>,

    /// Whether to include the plan tool in the conversation.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub include_plan_tool: Option<bool>,

    /// Whether to include the apply patch tool in the conversation.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub include_apply_patch_tool: Option<bool>,
}
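
// With `rename_all = "camelCase"`, these fields serialize as e.g.
// `approvalPolicy`, `baseInstructions`, and `includePlanTool`, and fields
// left as `None` are omitted entirely via `skip_serializing_if`. A minimal
// params payload therefore looks like:
//
// { "model": "gpt-5", "approvalPolicy": "on-request" }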

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct NewConversationResponse {
    pub conversation_id: ConversationId,
    pub model: String,

    /// Note this could be ignored by the model.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reasoning_effort: Option<ReasoningEffort>,

    pub rollout_path: PathBuf,
}

#[derive(Serialize, Deserialize, Debug, Clone, TS)]
#[serde(rename_all = "camelCase")]
pub struct ResumeConversationResponse {
    pub conversation_id: ConversationId,
    pub model: String,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub initial_messages: Option<Vec<EventMsg>>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, TS)]
#[serde(rename_all = "camelCase")]
pub struct ListConversationsParams {
    /// Optional page size; defaults to a reasonable server-side value.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub page_size: Option<usize>,

    /// Opaque pagination cursor returned by a previous call.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub cursor: Option<String>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ConversationSummary {
    pub conversation_id: ConversationId,
    pub path: PathBuf,
    pub preview: String,

    /// RFC3339 timestamp string for the session start, if available.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub timestamp: Option<String>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ListConversationsResponse {
    pub items: Vec<ConversationSummary>,

    /// Opaque cursor to pass to the next call to continue after the last item.
    /// If `None`, there are no more items to return.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub next_cursor: Option<String>,
}
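
// Pagination sketch: a client passes the `nextCursor` value from one
// `ListConversationsResponse` back as `cursor` in the next
// `ListConversationsParams`, and stops once `nextCursor` is absent.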

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ResumeConversationParams {
    /// Absolute path to the rollout JSONL file.
    pub path: PathBuf,

    /// Optional overrides to apply when spawning the resumed session.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub overrides: Option<NewConversationParams>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct AddConversationSubscriptionResponse {
    pub subscription_id: Uuid,
}

/// The [`ConversationId`] must match the `rollout_path`.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ArchiveConversationParams {
    pub conversation_id: ConversationId,
    pub rollout_path: PathBuf,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ArchiveConversationResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct RemoveConversationSubscriptionResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginApiKeyParams {
    pub api_key: String,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginApiKeyResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginChatGptResponse {
    pub login_id: Uuid,

    /// URL the client should open in a browser to initiate the OAuth flow.
    pub auth_url: String,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GitDiffToRemoteResponse {
    pub sha: GitSha,
    pub diff: String,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct CancelLoginChatGptParams {
    pub login_id: Uuid,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GitDiffToRemoteParams {
    pub cwd: PathBuf,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct CancelLoginChatGptResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LogoutChatGptParams {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LogoutChatGptResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetAuthStatusParams {
    /// If true, include the current auth token (if available) in the response.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub include_token: Option<bool>,

    /// If true, attempt to refresh the token before returning status.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub refresh_token: Option<bool>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecOneOffCommandParams {
    /// Command argv to execute.
    pub command: Vec<String>,

    /// Timeout of the command in milliseconds.
    /// If not specified, a sensible default is used server-side.
    pub timeout_ms: Option<u64>,

    /// Optional working directory for the process. Defaults to server config cwd.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub cwd: Option<PathBuf>,

    /// Optional explicit sandbox policy overriding the server default.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub sandbox_policy: Option<SandboxPolicy>,
}
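
// Illustrative `ExecOneOffCommandParams` payload for running a one-off command
// under the server's default sandbox; the argv and timeout values here are
// made up:
//
// {
//   "command": ["echo", "hello"],
//   "timeoutMs": 5000
// }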

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecArbitraryCommandResponse {
    pub exit_code: i32,
    pub stdout: String,
    pub stderr: String,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetAuthStatusResponse {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub auth_method: Option<AuthMode>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub auth_token: Option<String>,

    // Indicates that the auth method must be valid to use the server.
    // This can be false if using a custom provider that is configured
    // with `requires_openai_auth == false`.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub requires_openai_auth: Option<bool>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetUserAgentResponse {
    pub user_agent: String,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct UserInfoResponse {
    /// Note: `alleged_user_email` is not currently verified. We read it from
    /// the local auth.json, which the user could theoretically modify. In the
    /// future, we may add logic to verify the email against the server before
    /// returning it.
    pub alleged_user_email: Option<String>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetUserSavedConfigResponse {
    pub config: UserSavedConfig,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SetDefaultModelParams {
    /// If set to None, this means `model` should be cleared in config.toml.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub model: Option<String>,

    /// If set to None, this means `model_reasoning_effort` should be cleared
    /// in config.toml.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reasoning_effort: Option<ReasoningEffort>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SetDefaultModelResponse {}

/// UserSavedConfig contains a subset of the config. It is meant to expose MCP
/// client-configurable settings that can be specified in the NewConversation
/// and SendUserTurn requests.
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, TS)]
#[serde(rename_all = "camelCase")]
pub struct UserSavedConfig {
    /// Approvals
    #[serde(skip_serializing_if = "Option::is_none")]
    pub approval_policy: Option<AskForApproval>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub sandbox_mode: Option<SandboxMode>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub sandbox_settings: Option<SandboxSettings>,

    /// Model-specific configuration
    #[serde(skip_serializing_if = "Option::is_none")]
    pub model: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub model_reasoning_effort: Option<ReasoningEffort>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub model_reasoning_summary: Option<ReasoningSummary>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub model_verbosity: Option<Verbosity>,

    /// Tools
    #[serde(skip_serializing_if = "Option::is_none")]
    pub tools: Option<Tools>,

    /// Profiles
    #[serde(skip_serializing_if = "Option::is_none")]
    pub profile: Option<String>,
    #[serde(default)]
    pub profiles: HashMap<String, Profile>,
}

/// MCP representation of a [`codex_core::config_profile::ConfigProfile`].
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, TS)]
#[serde(rename_all = "camelCase")]
pub struct Profile {
    pub model: Option<String>,
    /// The key in the `model_providers` map identifying the
    /// [`ModelProviderInfo`] to use.
    pub model_provider: Option<String>,
    pub approval_policy: Option<AskForApproval>,
    pub model_reasoning_effort: Option<ReasoningEffort>,
    pub model_reasoning_summary: Option<ReasoningSummary>,
    pub model_verbosity: Option<Verbosity>,
    pub chatgpt_base_url: Option<String>,
}

/// MCP representation of a [`codex_core::config::ToolsToml`].
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, TS)]
#[serde(rename_all = "camelCase")]
pub struct Tools {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub web_search: Option<bool>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub view_image: Option<bool>,
}

/// MCP representation of a [`codex_core::config_types::SandboxWorkspaceWrite`].
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, TS)]
#[serde(rename_all = "camelCase")]
pub struct SandboxSettings {
    #[serde(default)]
    pub writable_roots: Vec<PathBuf>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub network_access: Option<bool>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub exclude_tmpdir_env_var: Option<bool>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub exclude_slash_tmp: Option<bool>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserMessageParams {
    pub conversation_id: ConversationId,
    pub items: Vec<InputItem>,
}
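
// Illustrative: a `sendUserMessage` request on the wire (the conversation id is
// a placeholder; `items` uses the tagged `InputItem` encoding defined elsewhere
// in this file):
//
// {
//     "jsonrpc": "2.0",
//     "method": "sendUserMessage",
//     "id": 7,
//     "params": {
//         "conversationId": "<conversation id>",
//         "items": [{ "type": "text", "data": { "text": "hello" } }]
//     }
// }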

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserTurnParams {
    pub conversation_id: ConversationId,
    pub items: Vec<InputItem>,
    pub cwd: PathBuf,
    pub approval_policy: AskForApproval,
    pub sandbox_policy: SandboxPolicy,
    pub model: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub effort: Option<ReasoningEffort>,
    pub summary: ReasoningSummary,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserTurnResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct InterruptConversationParams {
    pub conversation_id: ConversationId,
}

#[derive(Serialize, Deserialize, Debug, Clone, TS)]
#[serde(rename_all = "camelCase")]
pub struct InterruptConversationResponse {
    pub abort_reason: TurnAbortReason,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserMessageResponse {}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct AddConversationListenerParams {
    pub conversation_id: ConversationId,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct RemoveConversationListenerParams {
    pub subscription_id: Uuid,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
#[serde(tag = "type", content = "data")]
pub enum InputItem {
    Text {
        text: String,
    },

    /// Pre-encoded `data:` URI image.
    Image {
        image_url: String,
    },

    /// Local image path provided by the user. This will be converted to an
    /// `Image` variant (base64 data URL) during request serialization.
    LocalImage {
        path: PathBuf,
    },
}
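
// Illustrative serializations of the variants above. Note that serde's
// `rename_all = "camelCase"` on an enum applies to the variant names (field
// names inside a variant keep their Rust spelling), while `tag`/`content`
// nest the fields under `data`:
//
//   { "type": "text", "data": { "text": "hello" } }
//   { "type": "image", "data": { "image_url": "data:image/png;base64,..." } }
//   { "type": "localImage", "data": { "path": "/tmp/screenshot.png" } }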

// TODO(mbolin): Need test to ensure these constants match the enum variants.
pub const APPLY_PATCH_APPROVAL_METHOD: &str = "applyPatchApproval";
pub const EXEC_COMMAND_APPROVAL_METHOD: &str = "execCommandApproval";

/// Request initiated from the server and sent to the client.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ServerRequest {
    /// Request to approve a patch.
    ApplyPatchApproval {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: ApplyPatchApprovalParams,
    },

    /// Request to exec a command.
    ExecCommandApproval {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: ExecCommandApprovalParams,
    },
}
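
// Illustrative: with `tag = "method"`, a server-initiated request serializes
// with the variant name as the method and the renamed `request_id` field as
// the JSON-RPC `id` (params shape elided here):
//
// {
//     "method": "execCommandApproval",
//     "id": 3,
//     "params": { /* ExecCommandApprovalParams */ }
// }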

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ApplyPatchApprovalParams {
    pub conversation_id: ConversationId,
    /// Use to correlate this with [codex_core::protocol::PatchApplyBeginEvent]
    /// and [codex_core::protocol::PatchApplyEndEvent].
    pub call_id: String,
    pub file_changes: HashMap<PathBuf, FileChange>,
    /// Optional explanatory reason (e.g. request for extra write access).
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reason: Option<String>,
    /// When set, the agent is asking the user to allow writes under this root
    /// for the remainder of the session (unclear if this is honored today).
    #[serde(skip_serializing_if = "Option::is_none")]
    pub grant_root: Option<PathBuf>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecCommandApprovalParams {
    pub conversation_id: ConversationId,
    /// Use to correlate this with [codex_core::protocol::ExecCommandBeginEvent]
    /// and [codex_core::protocol::ExecCommandEndEvent].
    pub call_id: String,
    pub command: Vec<String>,
    pub cwd: PathBuf,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reason: Option<String>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
pub struct ExecCommandApprovalResponse {
    pub decision: ReviewDecision,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
pub struct ApplyPatchApprovalResponse {
    pub decision: ReviewDecision,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
pub struct FuzzyFileSearchParams {
    pub query: String,
    pub roots: Vec<String>,
    // If provided, cancels any previous request that used the same value.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub cancellation_token: Option<String>,
}
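
// Illustrative `params` payload for a fuzzy file search request (the method
// name and values are assumptions inferred from the struct name):
//
// { "query": "main", "roots": ["/repo"], "cancellationToken": "req-1" }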

/// Superset of [`codex_file_search::FileMatch`].
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
pub struct FuzzyFileSearchResult {
    pub root: String,
    pub path: String,
    pub score: u32,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub indices: Option<Vec<u32>>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
pub struct FuzzyFileSearchResponse {
    pub files: Vec<FuzzyFileSearchResult>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginChatGptCompleteNotification {
    pub login_id: Uuid,
    pub success: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub error: Option<String>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct AuthStatusChangeNotification {
    /// Current authentication method; omitted if signed out.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub auth_method: Option<AuthMode>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS, Display)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
#[strum(serialize_all = "camelCase")]
pub enum ServerNotification {
    /// Authentication status changed.
    AuthStatusChange(AuthStatusChangeNotification),

    /// ChatGPT login flow completed.
    LoginChatGptComplete(LoginChatGptCompleteNotification),
}
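
// Illustrative: notifications use adjacent tagging (`tag = "method"`,
// `content = "params"`), so a login-complete notification serializes roughly
// as (uuid is a placeholder):
//
// { "method": "loginChatGptComplete",
//   "params": { "loginId": "<uuid>", "success": true } }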

impl ServerNotification {
    pub fn to_params(self) -> Result<serde_json::Value, serde_json::Error> {
        match self {
            ServerNotification::AuthStatusChange(params) => serde_json::to_value(params),
            ServerNotification::LoginChatGptComplete(params) => serde_json::to_value(params),
        }
    }
}
|
|
|
|
|
|
|
feat: support traditional JSON-RPC request/response in MCP server (#2264)
This introduces a new set of request types that our `codex mcp`
supports. Note that these do not conform to MCP tool calls so that
instead of having to send something like this:
```json
{
"jsonrpc": "2.0",
"method": "tools/call",
"id": 42,
"params": {
"name": "newConversation",
"arguments": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
}
```
we can send something like this:
```json
{
"jsonrpc": "2.0",
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5",
"approvalPolicy": "on-request"
}
}
```
Admittedly, this new format is not a valid MCP tool call, but we are OK
with that right now. (That is, not everything we might want to request
of `codex mcp` is something that is appropriate for an autonomous agent
to do.)
To start, this introduces four request types:
- `newConversation`
- `sendUserMessage`
- `addConversationListener`
- `removeConversationListener`
The new `mcp-server/tests/codex_message_processor_flow.rs` shows how
these can be used.
The types are defined on the `CodexRequest` enum, so we introduce a new
`CodexMessageProcessor` that is responsible for dealing with requests
from this enum. The top-level `MessageProcessor` has been updated so
that when `process_request()` is called, it first checks whether the
request conforms to `CodexRequest` and dispatches it to
`CodexMessageProcessor` if so.
Note that I also decided to use `camelCase` for the on-the-wire format,
as that seems to be the convention for MCP.
For the moment, the new protocol is defined in `wire_format.rs` within
the `mcp-server` crate, but in a subsequent PR, I will probably move it
to its own crate to ensure the protocol has minimal dependencies and
that we can codegen a schema from it.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2264).
* #2278
* __->__ #2264
2025-08-13 17:36:29 -07:00
```rust
#[cfg(test)]
mod tests {
    use super::*;
    use anyhow::Result;
    use pretty_assertions::assert_eq;
    use serde_json::json;

    #[test]
    fn serialize_new_conversation() -> Result<()> {
        let request = ClientRequest::NewConversation {
            request_id: RequestId::Integer(42),
            params: NewConversationParams {
                model: Some("gpt-5-codex".to_string()),
                profile: None,
                cwd: None,
                approval_policy: Some(AskForApproval::OnRequest),
                sandbox: None,
                config: None,
                base_instructions: None,
                include_plan_tool: None,
                include_apply_patch_tool: None,
            },
        };
        assert_eq!(
            json!({
                "method": "newConversation",
                "id": 42,
                "params": {
                    "model": "gpt-5-codex",
                    "approvalPolicy": "on-request"
                }
            }),
            serde_json::to_value(&request)?,
        );
        Ok(())
    }

    #[test]
    fn test_conversation_id_default_is_not_zeroes() {
        let id = ConversationId::default();
        assert_ne!(id.uuid, Uuid::nil());
    }

    #[test]
    fn conversation_id_serializes_as_plain_string() -> Result<()> {
        let id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
        assert_eq!(
            json!("67e55044-10b1-426f-9247-bb680e5fe0c8"),
            serde_json::to_value(id)?
        );
        Ok(())
    }

    #[test]
    fn conversation_id_deserializes_from_plain_string() -> Result<()> {
        let id: ConversationId =
            serde_json::from_value(json!("67e55044-10b1-426f-9247-bb680e5fe0c8"))?;
        assert_eq!(
            ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?,
            id,
        );
        Ok(())
    }
}
```