Because conversations that use the Responses API can contain encrypted
reasoning messages, trying to resume a conversation with a different
provider can lead to confusing "failed to decrypt" errors. (This is
reproducible by starting a conversation using ChatGPT login and then
resuming it with a provider that serves OpenAI models via Azure.)
This changes `ListConversationsParams` to take a `model_providers:
Option<Vec<String>>` field and adds a `model_provider` field to each
`ConversationSummary` it returns, so these cases can be disambiguated.
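
As a rough sketch of the changed types, assuming serde-style definitions; every field other than the two named above, and the filter semantics of `model_providers`, is an illustrative assumption rather than the actual schema:

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Default, Serialize, Deserialize)]
pub struct ListConversationsParams {
    // Assumed pagination fields, shown only for context.
    pub page_size: Option<usize>,
    pub cursor: Option<String>,
    /// New: when present, only conversations created with one of these
    /// providers are returned (semantics assumed), e.g. ["openai", "azure"].
    pub model_providers: Option<Vec<String>>,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct ConversationSummary {
    // Assumed identifying field, shown only for context.
    pub conversation_id: String,
    /// New: the provider the conversation was created with, so a client
    /// can avoid resuming it against a provider that cannot decrypt its
    /// reasoning messages.
    pub model_provider: String,
}
```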
Note this ended up changing `codex-rs/core/src/rollout/tests.rs`, which
had a number of cases that expected `Some` for the value of
`next_cursor` even though the list of rollouts was complete. According
to this docstring:

bcd64c7e72/codex-rs/app-server-protocol/src/protocol.rs (L334-L337)

`next_cursor` should be `None` when there are no more items to return.
This PR updates that logic.
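
In other words, the corrected pagination rule looks roughly like this (a minimal sketch; the function and its inputs are assumed, not taken from the actual implementation):

```rust
// Only hand back a cursor when the listing stopped early and more
// rollouts remain; a complete listing must yield `None`.
fn next_cursor(last_id_in_page: Option<&str>, more_remaining: bool) -> Option<String> {
    if more_remaining {
        last_id_in_page.map(str::to_owned)
    } else {
        None
    }
}
```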
---
# codex-app-server
`codex app-server` is the harness Codex uses to power rich interfaces such as the Codex VS Code extension. The message schema is currently unstable, but those who wish to build experimental UIs on top of Codex may find it valuable.
## Protocol
Similar to MCP, `codex app-server` supports bidirectional communication, streaming JSONL over stdio. The protocol is JSON-RPC 2.0, though the `"jsonrpc": "2.0"` field is omitted.
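
For illustration, a request and its response on the wire might look like the two lines below, one JSON object per line with no `"jsonrpc": "2.0"` member. The method name and payload shape here are assumptions for the sake of the example, not the actual schema:

```jsonl
{"id":1,"method":"listConversations","params":{"modelProviders":["openai"]}}
{"id":1,"result":{"items":[],"nextCursor":null}}
```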
## Message Schema
Currently, you can dump a TypeScript version of the schema using `codex generate-ts`. It is specific to the version of Codex you used to run `generate-ts`, so the two are guaranteed to be compatible.

```shell
codex generate-ts --out DIR
```