Commit Graph

14 Commits

Author SHA1 Message Date
Owen Lin
89c00611c2 [app-server] remove serde(skip_serializing_if = "Option::is_none") annotations (#5939)
We had this annotation everywhere in app-server APIs, which made fields get
serialized as `field?: T`, meaning that if the field was `None` we would omit
it from the payload. Removing this annotation changes it so that we return
`field: T | null` instead, which makes codex app-server's API more aligned
with the convention of public OpenAI APIs like Responses.

Separately, remove the `#[ts(optional_fields = nullable)]` annotations that
were recently added, which made all the TS types become `field?: T | null`.
That is not great, since clients then need to handle both undefined and null.

I think generally it'll be best to have optional types be either:
- `field: T | null` (preferred, aligned with public OpenAI APIs)
- `field?: T` where we have to, such as types generated from the MCP
schema:
https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/schema/2025-06-18/schema.ts
(see changes to `mcp-types/`)

I updated @etraut-openai's unit test to check that all generated TS types are
one or the other, not both (so it will error if we have a type that has
`field?: T | null`). I don't think there's currently a good use case for that,
but we can always revisit.
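
For illustration, here is roughly what removing that annotation changes on the Rust side; the `summary` field name is made up and this is not copied from the app-server types:

```rust
use serde::Serialize;

#[derive(Serialize)]
struct WithSkip {
    // Old style: the field disappears from the payload when it is `None`,
    // so clients see it as `summary?: string`.
    #[serde(skip_serializing_if = "Option::is_none")]
    summary: Option<String>,
}

#[derive(Serialize)]
struct WithoutSkip {
    // New style: the field is always present and `None` becomes `null`,
    // i.e. `summary: string | null`.
    summary: Option<String>,
}

fn main() {
    assert_eq!(
        serde_json::to_string(&WithSkip { summary: None }).unwrap(),
        "{}"
    );
    assert_eq!(
        serde_json::to_string(&WithoutSkip { summary: None }).unwrap(),
        r#"{"summary":null}"#
    );
}
```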
2025-10-30 18:18:53 +00:00
Rasmus Rygaard
846960ae3d Generate JSON schema for app-server protocol (#5063)
Add annotations and an export script that let us generate app-server protocol
types as TypeScript and JSON Schema.

The script itself is a bit hacky because we need to manually label some of the
types. Unfortunately, enum variants don't get good names by default and end up
with something like `EventMsg1`, `EventMsg2`, etc. I'm not an expert in this by
any means, but since the script is only run manually and we already need to
enumerate the types required to describe the protocol, it didn't seem that much
worse. An ideal solution would be some kind of root type that we could generate
schemas for in one go, but I'm not sure that's compatible with how we generate
the protocol today.
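
A minimal sketch of what the annotations might look like, assuming `ts-rs` and `schemars` are the crates in play; the type and field names here are illustrative, not the real protocol types:

```rust
use schemars::{schema_for, JsonSchema};
use serde::Serialize;
use ts_rs::TS;

// Hypothetical protocol type: deriving both `TS` and `JsonSchema` lets one
// Rust definition drive the TypeScript export and the JSON Schema export.
#[derive(Serialize, JsonSchema, TS)]
#[ts(export)]
struct NewConversationParams {
    model: Option<String>,
    cwd: Option<String>,
}

fn main() {
    // The TypeScript side is written out by the export test ts-rs generates;
    // the JSON Schema can be produced directly and dumped by a script.
    let schema = schema_for!(NewConversationParams);
    println!("{}", serde_json::to_string_pretty(&schema).unwrap());
}
```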
2025-10-20 11:45:11 -07:00
Michael Bolin
44262d8fd8 fix: ensure output of codex-rs/mcp-types/generate_mcp_types.py matches codex-rs/mcp-types/src/lib.rs (#3439)
https://github.com/openai/codex/pull/3395 updated `mcp-types/src/lib.rs`
by hand, but that file is generated code that is produced by
`mcp-types/generate_mcp_types.py`. Unfortunately, we do not have
anything in CI to verify this right now, but I will address that in a
subsequent PR.

#3395 ended up introducing a change that added a required field when
deserializing `InitializeResult`, breaking Codex when used as an MCP client.
The quick fix in #3436 was to make the new field an `Option` with
`skip_serializing_if = "Option::is_none"`, but that did not address the
problem that `mcp-types/generate_mcp_types.py` and `mcp-types/src/lib.rs` were
out of sync.

This PR gets things back to where they are in sync. It removes the
custom `mcp_types::McpClientInfo` type that was added to
`mcp-types/src/lib.rs` and forces us to use the generated
`mcp_types::Implementation` type. This PR also updates
`generate_mcp_types.py` to generate the additional `user_agent:
Option<String>` field on `Implementation` so that we can continue to specify
it when Codex operates as an MCP server.

However, this also requires us to specify `user_agent: None` when Codex
operates as an MCP client.

We may want to introduce our own `InitializeResult` type that is
specific to when we run as a server to avoid this in the future, but my
immediate goal is just to get things back in sync.
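
Roughly what that means in practice (a sketch: only the `user_agent` field is taken from the PR text, the rest is simplified):

```rust
// Simplified stand-in for the generated `mcp_types::Implementation`.
pub struct Implementation {
    pub name: String,
    pub version: String,
    // Extra field emitted by generate_mcp_types.py so Codex-as-server can
    // advertise its user agent.
    pub user_agent: Option<String>,
}

fn codex_as_mcp_client_info() -> Implementation {
    Implementation {
        name: "codex".to_string(),
        version: env!("CARGO_PKG_VERSION").to_string(),
        // When Codex is the MCP client there is nothing to report here, so
        // the field has to be spelled out as `None`.
        user_agent: None,
    }
}
```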
2025-09-10 16:14:41 -07:00
Gabriel Peal
8d766088e6 Make user_agent optional (#3436)
Currently, MCP servers fail to start with:
```
🖐  MCP client for `<CLIENT>` failed to start: missing field `user_agent`
```

It isn't clear to me yet why this is happening. My understanding is that this
struct was simply added as a new field to the response, but this should fix it
until I figure out the full story here.

<img width="714" height="262" alt="CleanShot 2025-09-10 at 13 58 59"
src="https://github.com/user-attachments/assets/946b1313-5c1c-43d3-8ae8-ecc3de3406fc"
/>
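
A minimal reproduction of the failure mode, assuming the field was originally declared as a required `String` (struct and field names here are illustrative):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct StrictClientInfo {
    name: String,
    user_agent: String, // required: an absent field is a hard error
}

#[derive(Deserialize)]
struct LenientClientInfo {
    name: String,
    user_agent: Option<String>, // an absent field simply becomes None
}

fn main() {
    // The peer's payload does not include `user_agent` at all.
    let payload = r#"{ "name": "some-mcp-peer" }"#;
    assert!(serde_json::from_str::<StrictClientInfo>(payload).is_err());
    let info: LenientClientInfo = serde_json::from_str(payload).unwrap();
    assert!(info.user_agent.is_none());
}
```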
2025-09-10 14:15:02 -07:00
Gabriel Peal
8636bff46d Set a user agent suffix when used as a mcp server (#3395)
This automatically adds a user agent suffix whenever the CLI is used as an
MCP server.
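
Not the actual implementation, just the shape of the idea (function name and suffix text are invented):

```rust
fn user_agent(base: &str, running_as_mcp_server: bool) -> String {
    if running_as_mcp_server {
        // Append a marker so upstream logs can tell this traffic apart.
        format!("{base} (mcp-server)")
    } else {
        base.to_string()
    }
}

fn main() {
    assert_eq!(user_agent("codex/0.1.0", true), "codex/0.1.0 (mcp-server)");
}
```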
2025-09-10 02:32:57 +00:00
Michael Bolin
a4f76bd75a chore: add TS annotation to generated mcp-types (#2424)
Adds the `TS` annotation from https://crates.io/crates/ts-rs to all
types to facilitate codegen.

2025-08-18 09:38:47 -07:00
Michael Bolin
018003e52f feat: leverage elicitations in the MCP server (#1623)
This updates the MCP server so that if it receives an
`ExecApprovalRequest` from the `Codex` session, it in turn sends an [MCP
elicitation](https://modelcontextprotocol.io/specification/draft/client/elicitation)
to the client to ask for the approval decision. Upon getting a response,
it forwards the client's decision via `Op::ExecApproval`.

Admittedly, we should be doing the same thing for
`ApplyPatchApprovalRequest`, but this is our first time experimenting
with elicitations, so I'm inclined to defer wiring that code path up
until we feel good about how this one works.
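
A hedged, self-contained sketch of that flow; every type and helper below (`ElicitationAnswer`, `ReviewDecision`, `send_elicitation`, ...) is a stand-in rather than the real mcp-server/core definition:

```rust
struct ExecApprovalRequest {
    id: String,
    command: Vec<String>,
}

struct ElicitationAnswer {
    accepted: bool,
}

enum ReviewDecision {
    Approved,
    Denied,
}

enum Op {
    ExecApproval { id: String, decision: ReviewDecision },
}

// Stand-in for sending an MCP `elicitation/create` request and waiting for
// the client's reply.
fn send_elicitation(_prompt: &str, _command: &[String]) -> ElicitationAnswer {
    ElicitationAnswer { accepted: true }
}

fn on_exec_approval_request(req: ExecApprovalRequest) -> Op {
    // Ask the MCP client whether the command may run...
    let answer = send_elicitation("Approve running this command?", &req.command);
    // ...and forward its decision back into the Codex session.
    Op::ExecApproval {
        id: req.id,
        decision: if answer.accepted {
            ReviewDecision::Approved
        } else {
            ReviewDecision::Denied
        },
    }
}
```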

2025-07-19 01:32:03 -04:00
Michael Bolin
e78ec00e73 chore: support MCP schema 2025-06-18 (#1621)
This updates the schema in `generate_mcp_types.py` from `2025-03-26` to
`2025-06-18`, regenerates `mcp-types/src/lib.rs`, and then updates all
the code that uses `mcp-types` to honor the changes.

Ran

```
npx @modelcontextprotocol/inspector just codex mcp
```

and verified that I was able to invoke the `codex` tool, as expected.


2025-07-19 00:09:34 -04:00
Reilly Wood
a67a67f325 codex-rs: make tool calls prettier (#1211)
This PR overhauls how active tool calls and completed tool calls are
displayed:

1. More use of colour to indicate success/failure and distinguish
   between components like tool name+arguments
2. Previously, the entire `CallToolResult` was serialized to JSON and
   pretty-printed. Now, we extract each individual `CallToolResultContent`
   and print those (see the sketch after this list)
   1. The previous solution wasted space by unnecessarily showing details
      of the `CallToolResult` struct to users, without formatting the
      actual tool call results nicely
   2. We're now able to show users more information from tool results in
      less space, with nicer formatting when tools return JSON results
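
A rough sketch of the second point; the enum here is a simplified stand-in for the mcp-types definitions, not the actual TUI code:

```rust
enum CallToolResultContent {
    Text { text: String },
    Image { mime_type: String },
}

// Render each content item on its own instead of dumping the whole
// `CallToolResult` struct as pretty-printed JSON.
fn render(contents: &[CallToolResultContent]) -> Vec<String> {
    contents
        .iter()
        .map(|c| match c {
            CallToolResultContent::Text { text } => {
                // If the tool returned JSON, re-serialize it compactly so
                // Ratatui's line wrapping can handle it.
                serde_json::from_str::<serde_json::Value>(text)
                    .and_then(|v| serde_json::to_string(&v))
                    .unwrap_or_else(|_| text.clone())
            }
            CallToolResultContent::Image { mime_type } => format!("<image {mime_type}>"),
        })
        .collect()
}
```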

### Before:

<img width="1251" alt="Screenshot 2025-06-03 at 11 24 26"
src="https://github.com/user-attachments/assets/5a58f222-219c-4c53-ace7-d887194e30cf"
/>

### After:

<img width="1265" alt="image"
src="https://github.com/user-attachments/assets/99fe54d0-9ebe-406a-855b-7aa529b91274"
/>

## Future Work

1. Integrate image tool result handling better. We should be able to
display images even if they're not the first `CallToolResultContent`
2. Users should have some way to view the full version of truncated tool
results
3. It would be nice to add some left padding for tool results to make it
clearer that they are results. This is doable, just a little fiddly due to the
way `first_visible_line` scrolling works
4. There's almost certainly a better way to format JSON than "all on 1
line with spaces to make Ratatui wrapping work". But I think that works
OK for now.
2025-06-03 14:29:26 -07:00
jcoens-openai
87cf120873 Workspace lints and disallow unwrap (#855)
Sets submodules to use workspace lints. Added denying `unwrap` as a
workspace-level lint, which found a couple of cases where we could have
propagated errors instead. Also manually annotated the ones that were fine by
my eye.
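
Illustrative only; this is the kind of change a deny-level `unwrap_used` lint tends to force (paths and names invented):

```rust
use std::fs;
use std::io;

// Before: panics if the file is missing; now rejected by the workspace lint
// unless explicitly allowed.
#[allow(clippy::unwrap_used)]
fn read_config_or_panic(path: &str) -> String {
    fs::read_to_string(path).unwrap()
}

// After: the error is propagated to the caller instead.
fn read_config(path: &str) -> io::Result<String> {
    let contents = fs::read_to_string(path)?;
    Ok(contents)
}
```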
2025-05-08 09:46:18 -07:00
jcoens-openai
8a89d3aeda Update cargo to 2024 edition (#842)
Some effects of this change:
- New formatting changes across many files. No functionality changes
should occur from that.
- Calls to `set_env` are considered unsafe; since these only happen in tests,
we wrap them in `unsafe` blocks (see the sketch below)
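
A minimal sketch, assuming the calls in question map to `std::env::set_var`, which the 2024 edition marks unsafe because it can race with concurrent environment reads (the variable name is made up):

```rust
#[cfg(test)]
mod tests {
    #[test]
    fn overrides_codex_home_for_this_test() {
        // SAFETY: this test spawns no threads, so mutating the process
        // environment cannot race with a concurrent reader.
        unsafe {
            std::env::set_var("CODEX_HOME", "/tmp/codex-test");
        }
        assert_eq!(
            std::env::var("CODEX_HOME").as_deref(),
            Ok("/tmp/codex-test")
        );
    }
}
```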
2025-05-07 08:37:48 -07:00
Michael Bolin
21cd953dbd feat: introduce mcp-server crate (#792)
This introduces the `mcp-server` crate, which contains a barebones MCP
server that provides an `echo` tool that echoes the user's request back
to them.

To test it out, I launched
[modelcontextprotocol/inspector](https://github.com/modelcontextprotocol/inspector)
like so:

```
mcp-server$ npx @modelcontextprotocol/inspector cargo run --
```

and opened up `http://127.0.0.1:6274` in my browser:


![image](https://github.com/user-attachments/assets/83fc55d4-25c2-4497-80cd-e9702283ff93)

I also had to make a small fix to `mcp-types`, adding
`#[serde(untagged)]` to a number of `enum`s.
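
For context, this is what `#[serde(untagged)]` changes (a toy example, not the actual mcp-types enums):

```rust
use serde::Serialize;

#[derive(Serialize)]
#[serde(untagged)]
enum JsonRpcMessage {
    Request { jsonrpc: String, id: u64, method: String },
    Notification { jsonrpc: String, method: String },
}

fn main() {
    let msg = JsonRpcMessage::Notification {
        jsonrpc: "2.0".to_string(),
        method: "notifications/initialized".to_string(),
    };
    // With `untagged`, this serializes as
    //   {"jsonrpc":"2.0","method":"notifications/initialized"}
    // rather than being wrapped in an outer {"Notification": {...}} tag,
    // which is the shape JSON-RPC peers expect.
    println!("{}", serde_json::to_string(&msg).unwrap());
}
```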
2025-05-02 17:25:58 -07:00
Michael Bolin
865e518771 fix: mcp-types serialization wasn't quite working (#791)
While creating a basic MCP server in
https://github.com/openai/codex/pull/792, I discovered a number of bugs
with the initial `mcp-types` crate that I needed to fix in order to
implement the server.

For example, I discovered that when serializing a message, `"jsonrpc":
"2.0"` was not being included.

I changed the codegen so that the field is added as:

```rust
    #[serde(rename = "jsonrpc", default = "default_jsonrpc")]
    pub jsonrpc: String,
```

This ensures that the field is serialized as `"2.0"`, though the field
still has to be assigned, which is tedious. I may experiment with
`Default` or something else in the future. (I also considered creating a
custom serializer, but I'm not sure it's worth the trouble.)

While here, I also added `MCP_SCHEMA_VERSION` and `JSONRPC_VERSION` as
`pub const`s for the crate.

I also discovered that MCP rejects sending `null` for optional fields,
so I had to add `#[serde(skip_serializing_if = "Option::is_none")]` on
`Option` fields.

2025-05-02 16:38:05 -07:00
Michael Bolin
83961e0299 feat: introduce mcp-types crate (#787)
This adds our own `mcp-types` crate to our Cargo workspace. We vendor in
the
[`2025-03-26/schema.json`](05f2045136/schema/2025-03-26/schema.json)
from the MCP repo and introduce a `generate_mcp_types.py` script to
codegen the `lib.rs` from the JSON schema.

Test coverage is currently light, but I plan to refine things as we
start making use of this crate.

And yes, I am aware that
https://github.com/modelcontextprotocol/rust-sdk exists, though the
published https://crates.io/crates/rmcp appears to be a competing
effort. While things are up in the air, it seems better for us to
control our own version of this code.

Incidentally, Codex did a lot of the work for this PR. I told it to
never edit `lib.rs` directly and instead to update
`generate_mcp_types.py` and then re-run it to update `lib.rs`. It
followed these instructions and once things were working end-to-end, I
iteratively asked for changes to the tests until the API looked
reasonable (and the code worked). Codex was responsible for figuring out
what to do to `generate_mcp_types.py` to achieve the requested test/API
changes.
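
For a sense of the output, the generated code is roughly of this shape; the snippet below is illustrative, not copied from `lib.rs`:

```rust
use serde::{Deserialize, Serialize};

// What generate_mcp_types.py might emit for the schema's `TextContent`
// definition; the real generated file is mcp-types/src/lib.rs and is never
// edited by hand.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TextContent {
    pub text: String,
    #[serde(rename = "type")]
    pub kind: String, // always "text" for this content kind
}
```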
2025-05-02 13:33:14 -07:00