Commit Graph

11 Commits

jif-oai
f7b4e29609 feat: feature flag (#4948)
Add a proper feature-flag mechanism instead of having custom flags for everything.
This is just for the experimental/WIP parts of the code.
It can be used through the CLI:
```bash
codex --enable unified_exec --disable view_image_tool
```

Or in the `config.toml`
```toml
# Global toggles applied to every profile unless overridden.
[features]
apply_patch_freeform = true
view_image_tool = false
```

Follow-up:
In a follow-up PR, the goal is to have default `bundles` of
features that we can associate with a model.
2025-10-14 17:50:00 +00:00
Michael Bolin
5881c0d6d4 fix: remove mcp-types from app server protocol (#4537)
We continue the separation between `codex app-server` and `codex
mcp-server`.

In particular, we introduce a new crate, `codex-app-server-protocol`,
and migrate `codex-rs/protocol/src/mcp_protocol.rs` into it, renaming it
`codex-rs/app-server-protocol/src/protocol.rs`.

Because `ConversationId` was defined in `mcp_protocol.rs`, we move it
into its own file, `codex-rs/protocol/src/conversation_id.rs`, and
because it is referenced in a ton of places, we have to touch a lot of
files as part of this PR.

We are also moving away from proper JSON-RPC 2.0 semantics, so we
introduce `codex-rs/app-server-protocol/src/jsonrpc_lite.rs`, which
is essentially the same `JSONRPCMessage` type defined in `mcp-types`,
except with all of the `"jsonrpc": "2.0"` fields removed.

Getting rid of `"jsonrpc": "2.0"` makes our serialization logic
considerably simpler, as we can lean more heavily on serde to serialize
directly into the wire format that we use now.
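
To make the shape concrete, here is a minimal sketch of what such a `jsonrpc_lite` message type might look like (the names and exact variants are assumed, not copied from the crate):

```rust
use serde::{Deserialize, Serialize};
use serde_json::Value;

// A JSON-RPC-like message with the `"jsonrpc": "2.0"` field dropped;
// serde's untagged representation serializes each variant directly
// into the wire shape.
#[derive(Serialize, Deserialize)]
#[serde(untagged)]
enum JsonRpcLiteMessage {
    Request {
        id: i64,
        method: String,
        #[serde(skip_serializing_if = "Option::is_none")]
        params: Option<Value>,
    },
    Response {
        id: i64,
        result: Value,
    },
    Notification {
        method: String,
        #[serde(skip_serializing_if = "Option::is_none")]
        params: Option<Value>,
    },
}
```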
2025-10-01 02:16:26 +00:00
pakrym-oai
5775174ec2 Never store requests (#3212)
When item ids are sent to the Responses API, it loads them from the
database, ignoring the provided values. This adds extra latency.

Not having a mode to store requests also allows us to simplify the
code.
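
As a rough sketch of the resulting request shape (field names assumed), there is no longer a `disable_response_storage` branch; storage is simply always off:

```rust
use serde::Serialize;

#[derive(Serialize)]
struct ResponsesApiRequest {
    model: String,
    // Always false now: the server persists nothing, and the client
    // resends the full conversation context with each request instead
    // of referencing stored item ids.
    store: bool,
}
```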

## Breaking change

The `disable_response_storage` configuration option is removed.
2025-09-05 10:41:47 -07:00
Dylan
82ed7bd285 [mcp-server] Update read config interface (#3093)
## Summary
Follow-up to #3056

This PR updates the mcp-server interface for reading the config settings
saved by the user. At the risk of introducing _another_ Config struct, I
think it makes sense to avoid tying our protocol to ConfigToml, as it has
become a bit unwieldy. GetConfigTomlResponse was a de facto struct for
this already - better to make it explicit, in my opinion.

This is technically a breaking change of the mcp-server protocol, but
given the previous interface was introduced so recently in #2725, and we
have not yet even started to call it, I propose proceeding with the
breaking change - but am open to preserving the old endpoint.
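
As a purely hypothetical sketch of the shape this points toward (the struct and field names here are illustrative, not the ones in the PR):

```rust
use serde::{Deserialize, Serialize};

// A dedicated wire struct for the read-config endpoint: it exposes a
// deliberate subset of user settings instead of serializing ConfigToml,
// so the protocol no longer tracks every field that struct grows.
#[derive(Serialize, Deserialize)]
struct UserSavedConfigResponse {
    model: Option<String>,
    model_verbosity: Option<String>,
    profile: Option<String>,
}
```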

## Testing
- [x] Added additional integration test coverage
2025-09-04 16:26:41 -07:00
Dylan
db5276f8e6 chore: Clean up verbosity config (#3056)
## Summary
It appears that #2108 hit a merge conflict with #2355 - I failed to
notice the path difference when re-reviewing the former. This PR
rectifies that, and consolidates it into the protocol package, in line
with our philosophy of specifying types in one place.

## Testing
- [x] Adds config test for model_verbosity
2025-09-03 12:20:31 -07:00
vjain419
80b00a193e feat(gpt5): add model_verbosity for GPT‑5 via Responses API (#2108)
**Summary**
- Adds `model_verbosity` config (values: low, medium, high).
- Sends `text.verbosity` only for GPT‑5 family models via the Responses
API.
- Updates docs and adds serialization tests.

**Motivation**
- GPT‑5 introduces a verbosity control to steer output length/detail
without prompt surgery.
- Exposing it as a config knob keeps prompts stable and makes behavior
explicit and repeatable.

**Changes**
- Config:
  - Added `Verbosity` enum (low|medium|high).
  - Added optional `model_verbosity` to `ConfigToml`, `Config`, and
`ConfigProfile`.
- Request wiring:
  - Extended `ResponsesApiRequest` with optional `text` object.
  - Populates `text.verbosity` only when the model family is `gpt-5`;
omitted otherwise (see the sketch after this list).
- Tests:
  - Verifies `text.verbosity` serializes when set and is omitted when
not set.
- Docs:
  - Added “GPT‑5 Verbosity” section in `codex-rs/README.md`.
  - Added `model_verbosity` section to `codex-rs/config.md`.
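
As a rough sketch of the wiring above (struct and field names inferred from this description rather than copied from the code):

```rust
use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "lowercase")]
enum Verbosity {
    Low,
    Medium,
    High,
}

#[derive(Serialize)]
struct TextControls {
    verbosity: Verbosity,
}

#[derive(Serialize)]
struct ResponsesApiRequest {
    model: String,
    // Skipped entirely when None, so legacy models and Chat Completions
    // providers never see a `text` field in the serialized request.
    #[serde(skip_serializing_if = "Option::is_none")]
    text: Option<TextControls>,
}

// Gate: only the gpt-5 model family gets `text.verbosity`.
fn text_for(model_family: &str, v: Option<Verbosity>) -> Option<TextControls> {
    if model_family == "gpt-5" {
        v.map(|verbosity| TextControls { verbosity })
    } else {
        None
    }
}
```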

**Usage**
- In `~/.codex/config.toml`:
  - `model = "gpt-5"`
  - `model_verbosity = "low"` (or `"medium"` default, `"high"`)
- CLI override example:
  - `codex -c model="gpt-5" -c model_verbosity="high"`

**API Impact**
- Requests to GPT‑5 via the Responses API include
`text: { verbosity: "low|medium|high" }` when configured.
- For legacy models or Chat Completions providers, `text` is omitted.

**Backward Compatibility**
- Default behavior unchanged when `model_verbosity` is not set (server
default “medium”).

**Testing**
- Added unit tests for serialization/omission of `text.verbosity`.
- Ran `cargo fmt` and `cargo test --all-features` (all green).

**Docs**
- `README.md`: new “GPT‑5 Verbosity” note under Config with example.
- `config.md`: new `model_verbosity` section.

**Out of Scope**
- No changes to temperature/top_p or other GPT‑5 parameters.
- No changes to Chat Completions wiring.

**Risks / Notes**
- If OpenAI changes the wire shape for verbosity, we may need to update
`ResponsesApiRequest`.
- Behavior gated to the `gpt-5` model family to avoid unexpected effects
elsewhere.

**Checklist**
- [x] Code gated to GPT‑5 family only
- [x] Docs updated (`README.md`, `config.md`)
- [x] Tests added and passing
- [x] Formatting applied

Release note: Add `model_verbosity` config to control GPT‑5 output verbosity via the Responses API (low|medium|high).
2025-08-22 09:12:10 -07:00
Ahmed Ibrahim
c9963b52e9 consolidate reasoning enums into one (#2428)
We have three enums each for reasoning summaries and reasoning effort,
all with the same values. They can be consolidated into one.
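
As a sketch, the consolidated shape might look like this (naming and exact values assumed):

```rust
use serde::{Deserialize, Serialize};

// One shared definition replacing the duplicate enums.
#[derive(Serialize, Deserialize, Clone, Copy, PartialEq, Eq)]
#[serde(rename_all = "lowercase")]
enum ReasoningEffort {
    Low,
    Medium,
    High,
}
```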
2025-08-18 11:50:17 -07:00
ae
dc15a5cf0b feat: accept custom instructions in profiles (#1803)
Allows users to set their `experimental_instructions_file` in config profiles.

For example, the below enables experimental instructions when running
`codex -p foo`:
```toml
[profiles.foo]
experimental_instructions_file = "/Users/foo/.codex/prompt.md"
```

# Testing
- Running against a profile with experimental_instructions_file works.
- Running against a profile without experimental_instructions_file works.
- Running against no profile with experimental_instructions_file works.
- Running against no profile without experimental_instructions_file works.
2025-08-04 09:34:46 -07:00
Gabriel Peal
bfeb8c92a5 Add codex apply to apply a patch created from the Codex remote agent (#1528)
In order to do this, I created a new `chatgpt` crate where we can put
any code that interacts directly with ChatGPT, as opposed to the OpenAI
API. I added a disclaimer to its README that it should primarily be
modified by OpenAI employees.


https://github.com/user-attachments/assets/bb978e33-d2c9-4d8e-af28-c8c25b1988e8
2025-07-11 13:30:11 -04:00
Yusuf Eren
e444a50cf0 feat: add reasoning fields to profile settings (#1484) 2025-07-08 12:05:22 -07:00
Michael Bolin
3c03c25e56 feat: introduce --profile for Rust CLI (#921)
This introduces a much-needed "profile" concept where users can specify
a collection of options under one name and then pass that via
`--profile` to the CLI.

This PR introduces the `ConfigProfile` struct and makes it a field of
`ConfigToml`. It further updates
`Config::load_from_base_config_with_overrides()` to respect
`ConfigProfile`, overriding default values where appropriate. A detailed
unit test is added at the end of `config.rs` to verify this behavior.

Details on how to use this feature have also been added to
`codex-rs/README.md`.
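
As a simplified sketch of the precedence this sets up (field names assumed for illustration, not taken from `config.rs`):

```rust
// Values set in the selected profile override the base config defaults.
#[derive(Default)]
struct ConfigProfile {
    model: Option<String>,
    approval_policy: Option<String>,
}

struct Config {
    model: String,
    approval_policy: String,
}

fn apply_profile(base: Config, profile: &ConfigProfile) -> Config {
    Config {
        // A value present in the profile wins over the base value.
        model: profile.model.clone().unwrap_or(base.model),
        approval_policy: profile
            .approval_policy
            .clone()
            .unwrap_or(base.approval_policy),
    }
}
```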
2025-05-13 16:52:52 -07:00