This is a very large PR with some non-backwards-compatible changes.

Historically, `codex mcp` (or `codex mcp serve`) started a JSON-RPC-ish server with two overlapping responsibilities:

- Running an MCP server, providing some basic tool calls.
- Running the app server used to power experiences such as the VS Code extension.

This PR separates these into distinct concepts:

- `codex mcp-server` for the MCP server
- `codex app-server` for the "application server"

Note that `codex mcp` still exists because it already has its own subcommands for MCP management (`list`, `add`, etc.).

The MCP logic continues to live in `codex-rs/mcp-server`, whereas the refactored app server logic is in the new `codex-rs/app-server` folder. Most of the existing integration tests in `codex-rs/mcp-server/tests/suite` were actually for the app server, so all of the tests have been moved, with the exception of `codex-rs/mcp-server/tests/suite/mod.rs`. Because this is already a large diff, I tried not to change more than I had to, so `codex-rs/app-server/tests/common/mcp_process.rs` still uses the name `McpProcess` for now; I will do some mechanical renamings to things like `AppServer` in subsequent PRs.

While `mcp-server` and `app-server` share some overlapping functionality (such as reading streams of JSONL and dispatching on message types) and have some differences (completely different message types), I ended up doing a bit of copypasta between the two crates: both have somewhat similar `message_processor.rs` and `outgoing_message.rs` files for now, though I expect them to diverge more in the near future.

One material change is the initialize handshake for `codex app-server`, as we no longer use the MCP types for that handshake. Instead, `codex-rs/protocol/src/mcp_protocol.rs` gains an `Initialize` variant of `ClientRequest`, which carries the `ClientInfo` object we need to update the `USER_AGENT_SUFFIX` in `codex-rs/app-server/src/message_processor.rs` (a rough sketch of the request shape follows below).

The other material change is in `codex-rs/app-server/src/codex_message_processor.rs`, where I eliminated a use of the `send_event_as_notification()` method I am generally trying to deprecate (because it blindly maps an `EventMsg` into a `JSONNotification`) in favor of `send_server_notification()`, which takes a `ServerNotification`, as that is intended to be a custom enum of all notification types supported by the app server. To make this update, I had to introduce a new `ServerNotification` variant, `SessionConfigured` (also sketched below), which is a non-backwards-compatible change relative to the old `codex mcp`; clients will have to be updated after the next release that contains this PR. Note that `codex-rs/app-server/tests/suite/list_resume.rs` also had to be updated to reflect this change.

Finally, I introduced `codex-rs/utils/json-to-toml/src/lib.rs` as a small utility crate to avoid some of the copying between `mcp-server` and `app-server` (see the conversion sketch below).
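As a rough sketch only, the new handshake request could be modeled along these lines; the serde attributes and the exact fields of `ClientInfo` (here `name`/`version`) are assumptions for illustration, not the actual definitions in `codex-rs/protocol/src/mcp_protocol.rs`:

```rust
// Hypothetical sketch of the Initialize request; field names and serde
// attributes are assumptions, not the real mcp_protocol.rs definitions.
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ClientInfo {
    /// Identifies the client (e.g. the VS Code extension); used to build USER_AGENT_SUFFIX.
    pub name: String,
    pub version: String,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct InitializeParams {
    pub client_info: ClientInfo,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
pub enum ClientRequest {
    /// The first request a client sends before any other ClientRequest.
    Initialize(InitializeParams),
    // ... other app-server requests elided
}
```

The point is that the handshake is now expressed in the app server's own request enum rather than borrowing the MCP `initialize` types.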
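Similarly, here is a minimal sketch of the typed-notification approach: `ServerNotification` is meant to enumerate exactly the notifications the app server supports, and sending one amounts to serializing the chosen variant into a JSON-RPC notification frame. The payload fields and the helper below are illustrative assumptions rather than the real `send_server_notification()` implementation:

```rust
// Illustrative sketch only: the real ServerNotification enum and its payloads
// live in the app-server/protocol crates and may be shaped differently.
use serde::Serialize;
use serde_json::json;

#[derive(Debug, Clone, Serialize)]
pub struct SessionConfiguredParams {
    pub session_id: String,
    pub model: String,
}

#[derive(Debug, Clone, Serialize)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
pub enum ServerNotification {
    /// Replaces the old blanket EventMsg -> JSONNotification mapping for this event.
    SessionConfigured(SessionConfiguredParams),
    // ... other app-server notifications elided
}

/// What send_server_notification() conceptually does: turn a typed variant
/// into a JSON-RPC 2.0 notification frame (no id, so no response expected).
fn to_jsonrpc_notification(notification: &ServerNotification) -> serde_json::Value {
    let value = serde_json::to_value(notification).expect("serialize notification");
    json!({
        "jsonrpc": "2.0",
        "method": value["method"],
        "params": value["params"],
    })
}
```

Compared with `send_event_as_notification()`, the client-visible surface is now an explicit enum, which is why adding `SessionConfigured` is a breaking change clients have to pick up.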
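Lastly, for the shared utility crate, this is the kind of JSON-to-TOML conversion such a helper typically exposes; the function name and the choice to drop `null` values are assumptions about `codex-rs/utils/json-to-toml`, not its actual API:

```rust
// Hedged sketch of a json-to-toml helper; the real crate's API may differ.
use serde_json::Value as JsonValue;
use toml::Value as TomlValue;

pub fn json_to_toml(json: JsonValue) -> Option<TomlValue> {
    Some(match json {
        // TOML has no null, so callers must drop these keys entirely.
        JsonValue::Null => return None,
        JsonValue::Bool(b) => TomlValue::Boolean(b),
        JsonValue::Number(n) => {
            if let Some(i) = n.as_i64() {
                TomlValue::Integer(i)
            } else {
                TomlValue::Float(n.as_f64()?)
            }
        }
        JsonValue::String(s) => TomlValue::String(s),
        JsonValue::Array(items) => {
            TomlValue::Array(items.into_iter().filter_map(json_to_toml).collect())
        }
        JsonValue::Object(map) => TomlValue::Table(
            map.into_iter()
                .filter_map(|(k, v)| Some((k, json_to_toml(v)?)))
                .collect(),
        ),
    })
}
```

Both `mcp-server` and `app-server` can then depend on this crate instead of each carrying its own copy of the mapping.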
```rust
// App-server integration test: `setDefaultModel` should persist model
// overrides into config.toml under CODEX_HOME.
use std::path::Path;

use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_core::config::ConfigToml;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_protocol::mcp_protocol::SetDefaultModelResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;

const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn set_default_model_persists_overrides() {
    let codex_home = TempDir::new().expect("create tempdir");
    create_config_toml(codex_home.path()).expect("write config.toml");

    // Spawn the app server (still named `McpProcess` for now) and complete
    // the initialize handshake before issuing any requests.
    let mut mcp = McpProcess::new(codex_home.path())
        .await
        .expect("spawn mcp process");
    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
        .await
        .expect("init timeout")
        .expect("init failed");

    let params = SetDefaultModelParams {
        model: Some("gpt-4.1".to_string()),
        reasoning_effort: None,
    };

    let request_id = mcp
        .send_set_default_model_request(params)
        .await
        .expect("send setDefaultModel");

    let resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
    )
    .await
    .expect("setDefaultModel timeout")
    .expect("setDefaultModel response");

    let _: SetDefaultModelResponse =
        to_response(resp).expect("deserialize setDefaultModel response");

    // The request should have overridden the model and cleared the
    // reasoning effort that the seed config.toml contained.
    let config_path = codex_home.path().join("config.toml");
    let config_contents = tokio::fs::read_to_string(&config_path)
        .await
        .expect("read config.toml");
    let config_toml: ConfigToml = toml::from_str(&config_contents).expect("parse config.toml");

    assert_eq!(
        ConfigToml {
            model: Some("gpt-4.1".to_string()),
            model_reasoning_effort: None,
            ..Default::default()
        },
        config_toml,
    );
}

// Helper to create a config.toml; mirrors create_conversation.rs
fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
    let config_toml = codex_home.join("config.toml");
    std::fs::write(
        config_toml,
        r#"
model = "gpt-5-codex"
model_reasoning_effort = "medium"
"#,
    )
}
```