Commit Graph

335 Commits

Author SHA1 Message Date
Jeremy Rose
95a9938d3a fix trampling projects table when accepting trusted dirs (#3434)
Co-authored-by: Codex <199175422+chatgpt-codex-connector[bot]@users.noreply.github.com>
2025-09-10 23:01:31 +00:00
Jeremy Rose
f69f07b028 put workspace roots in the environment context (#3375)
to keep the tool description constant when the writable roots change.
2025-09-10 15:10:52 -07:00
dedrisian-oai
87654ec0b7 Persist model & reasoning changes (#2799)
Persists `/model` changes across both general and profile-specific
sessions.
2025-09-10 20:53:46 +00:00
Michael Bolin
51d9e05de7 Back out "feat: POSIX unification and snapshot sessions (#3179)" (#3430)
This reverts https://github.com/openai/codex/pull/3179.

#3179 appears to introduce a regression where sourcing dotfiles causes a
bunch of activity in the title bar (and potentially slows things down?).


https://github.com/user-attachments/assets/a68f7fb3-0749-4e0e-a321-2aa6993e01da

Verified this no longer happens after backing out #3179.

Original commit changeset: 62bd0e3d9d
2025-09-10 12:40:24 -07:00
Eric Traut
39db113cc9 Added images to UserMessageEvent (#3400)
This PR adds an `images` field to the existing `UserMessageEvent` so we
can encode zero or more images associated with a user message. This
allows images to be restored when conversations are restored.
2025-09-10 10:18:43 -07:00
Ahmed Ibrahim
45bd5ca4b9 Move initial history to protocol (#3422)
To fix an edge case of forking then resuming

#3419
2025-09-10 10:17:24 -07:00
Michael Bolin
c13c3dadbf fix: remove unnecessary #[allow(dead_code)] annotation (#3357) 2025-09-10 08:19:05 -07:00
Gabriel Peal
8636bff46d Set a user agent suffix when used as a mcp server (#3395)
This automatically adds a user agent suffix whenever the CLI is used as
an MCP server.
2025-09-10 02:32:57 +00:00
Ahmed Ibrahim
43809a454e Introduce rollout items (#3380)
This PR introduces rollout items. This enables us to roll out EventMsgs
and session metadata.

This is mostly #3214 rebased on main.
2025-09-09 23:52:33 +00:00
Andrew Tan
de6559f2ab Include apply_patch tool for oss models from gpt-oss providers with different naming convention (e.g. openai/gpt-oss-*) (#2811)
Model providers like Groq, Openrouter, AWS Bedrock, VertexAI and others
typically prefix the name of gpt-oss models with `openai`, e.g.
`openai/gpt-oss-120b`.

This PR matches the model name slug using `contains` instead of
`starts_with` to ensure that the `apply_patch` tool is included in the
tools for model names like `openai/gpt-oss-120b`.

Without this, the gpt-oss models will often try to call the
`apply_patch` tool directly instead of via the `shell` command, leading
to validation errors.

I have run all the local checks.

Note: The gpt-oss models from non-Ollama providers are typically run via
a profile with a different base_url (instead of with the `--oss` flag)

---------

Co-authored-by: Andrew Tan <andrewtan@Andrews-Mac.local>
2025-09-09 15:02:02 -07:00
Gabriel Peal
5eab4c7ab4 Replace config.responses_originator_header_internal_override with CODEX_INTERNAL_ORIGINATOR_OVERRIDE_ENV_VAR (#3388)
The previous config approach had a few issues:
1. It is part of the config but not designed to be used externally.
2. It had to be wired through many places (look at the +/- on this PR).
3. It wasn't guaranteed to be set consistently everywhere because we
don't have a super well-defined way that configs stack. For example, the
extension would configure it during newConversation, but anything that
happened outside of that (like login) wouldn't get it.

This env var approach is cleaner and also creates one less thing we have
to deal with when coming up with a better holistic story around configs.
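
A minimal sketch of how such an override is typically resolved (the constant
and fallback value here are assumptions, not the actual codex values):

```rust
use std::env;

// Assumed env var name; the real constant lives in the codex codebase.
const CODEX_INTERNAL_ORIGINATOR_OVERRIDE_ENV_VAR: &str = "CODEX_INTERNAL_ORIGINATOR_OVERRIDE";

/// Resolve the originator header, preferring the env override when set.
fn originator_header() -> String {
    env::var(CODEX_INTERNAL_ORIGINATOR_OVERRIDE_ENV_VAR)
        // Fallback value is a placeholder, not the real default.
        .unwrap_or_else(|_| "codex_cli_rs".to_string())
}
```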

One downside is that I removed the unit test for the override
because I don't want to deal with setting the global env or spawning
child processes and figuring out how to introspect their originator
header. The new code is sufficiently simple, and I tested it e2e, so I
feel this is still worth it.
2025-09-09 17:23:23 -04:00
Wang
ac8a3155d6 feat(core): re-export InitialHistory from conversation_manager (#3270)
This commit adds a re-export for InitialHistory from the internal
conversation_manager module in codex-core's lib.rs.

The `RolloutRecorder::get_rollout_history` method (exposed via `pub use
rollout::RolloutRecorder;`, already present in lib.rs) returns an
`InitialHistory` type, which is defined in the private
conversation_manager module. Without this re-export, consumers of the
public RolloutRecorder API would not be able to directly use the return
type, as they cannot access the private module. This would result in an
inconvenient experience where the method's return value cannot be
handled without additional, non-obvious imports.

By adding `pub use conversation_manager::InitialHistory;`, we make
InitialHistory available as `codex_core::InitialHistory`, improving API
ergonomics for users of the rollout functionality while keeping the
conversation_manager module internal.

No functional changes are made; this is a pure re-export for better
usability.

Signed-off-by: M4n5ter <m4n5terrr@gmail.com>
2025-09-09 10:37:08 -07:00
Michael Bolin
ace14e8d36 feat: add ArchiveConversation to ClientRequest (#3353)
Adds support for `ArchiveConversation` in the JSON-RPC server that takes
a `(ConversationId, PathBuf)` pair and:

- verifies the `ConversationId` corresponds to the rollout id at the
`PathBuf`
- if so, invokes
`ConversationManager.remove_conversation(ConversationId)`
- if the `CodexConversation` was in memory, sends `Shutdown` and waits for
`ShutdownComplete` with a timeout
- moves the `.jsonl` file to `$CODEX_HOME/archived_sessions`
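
A rough sketch of the request parameters (illustrative only; the real type
lives in the protocol crate and may differ):

```rust
use std::path::PathBuf;
use serde::Deserialize;

// Illustrative shape; field names and types are assumptions.
#[derive(Deserialize)]
struct ArchiveConversationParams {
    /// Must correspond to the rollout id recorded at `rollout_path`.
    conversation_id: String,
    /// The `.jsonl` rollout file to move to `$CODEX_HOME/archived_sessions`.
    rollout_path: PathBuf,
}
```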

---------

Co-authored-by: Gabriel Peal <gabriel@openai.com>
2025-09-09 11:39:00 -04:00
Michael Bolin
2a76a08a9e fix: include rollout_path in NewConversationResponse (#3352)
Adding the `rollout_path` to the `NewConversationResponse` makes it so a
client can perform subsequent operations on a `(ConversationId,
PathBuf)` pair. #3353 will introduce support for `ArchiveConversation`.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/3352).
* #3353
* __->__ #3352
2025-09-09 00:11:48 -07:00
jif-oai
62bd0e3d9d feat: POSIX unification and snapshot sessions (#3179)
## Session snapshot
For POSIX shells, the goal is to take a snapshot of the interactive shell
environment, store it in a session file located in `.codex/`, and source
only this file for every command that is run.
As a result, if a snapshot file exists, `bash -lc <CALL>` gets replaced
by `bash -c <CALL>`.

This also fixes the issue that `bash -lc` does not source `.bashrc`,
resulting in missing env variables and aliases in the codex session.
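
A rough, assumed sketch of the substitution (the real change wires this
through the shell-invocation path):

```rust
use std::path::Path;

// If a snapshot file exists, source it and run the call under `bash -c`;
// otherwise fall back to a login shell. Simplified illustration only.
fn shell_command(snapshot: Option<&Path>, call: &str) -> Vec<String> {
    match snapshot {
        Some(path) => vec![
            "bash".to_string(),
            "-c".to_string(),
            format!("source '{}' && {call}", path.display()),
        ],
        None => vec!["bash".to_string(), "-lc".to_string(), call.to_string()],
    }
}
```
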
## POSIX unification
Unify the `bash` and `zsh` shells into a single POSIX shell. The rationale is
that the tool will not use any `zsh`-specific capabilities.

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-09-08 18:09:45 -07:00
Jeremy Rose
ac58749bd3 allow mach-lookup for com.apple.system.opendirectoryd.libinfo (#3334)
in the base sandbox policy. This is [allowed in Chrome
renderers](https://source.chromium.org/chromium/chromium/src/+/main:sandbox/policy/mac/common.sb;l=266;drc=7afa0043cfcddb3ef9dafe5acbfc01c2f7e7df01),
so I feel it's fairly safe.
2025-09-08 16:28:52 -07:00
Gabriel Peal
5eaaf307e1 Generate more typescript types and return conversation id with ConversationSummary (#3219)
This PR does multiple things that are necessary for conversation resume
to work from the extension. I wanted to make sure everything worked so
these changes wound up in one PR:
1. Generate more ts types
2. Resume rollout history files rather than creating a new one every time
a conversation is resumed, so you don't see a duplicate conversation in
history for every resume (verified with @aibrahim-oai)
3. Return conversation_id in conversation summaries
4. [Cleanup] Use serde and strong types for a lot of the rollout file
parsing
2025-09-08 17:54:47 -04:00
Biturd
cad37009e1 fix: improve MCP server initialization error handling #3196 #2346 #2555 (#3243)
• I have signed the CLA by commenting the required sentence and
triggered recheck.
• Local checks are all green (fmt / clippy / test).
• Could you please approve the pending GitHub Actions workflows
(first-time contributor), and when convenient, help with one approving
review so I can proceed? Thanks!

  ## Summary
- Catch and log task panics during server initialization instead of
propagating JoinError
- Handle tool listing failures gracefully, allowing partial server
initialization
- Improve error resilience on macOS where init timeouts are more common

  ## Test plan
  - [x] Test MCP server initialization with timeout scenarios
  - [x] Verify graceful handling of tool listing failures
  - [x] Confirm improved error messages and logging
  - [x] Test on macOS 

## Fixes issues #3196, #2346, #2555
### Before the fix:
<img width="851" height="363" alt="image"
src="https://github.com/user-attachments/assets/e1f9c749-71fd-4873-a04f-d3fc4cbe0ae6"
/>

<img width="775" height="108" alt="image"
src="https://github.com/user-attachments/assets/4e4748bd-9dd6-42b5-b38b-8bfe9341a441"
/>

### After the fix:
<img width="966" height="528" alt="image"
src="https://github.com/user-attachments/assets/418324f3-e37a-4a3c-8bdd-934f9ff21dfb"
/>

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-09-08 09:28:12 -07:00
dolan
6efb52e545 feat(mcp): per-server startup timeout (#3182)
We were seeing timeouts on certain slow MCP servers starting up when codex is
invoked. Before this change, the timeout was a hard-coded 10s. We need the
ability to define arbitrary timeouts on a per-server basis.

## Summary of changes

- Add startup_timeout_ms to McpServerConfig with 10s default when unset
- Use per-server timeout for initialize and tools/list
- Introduce ManagedClient to store client and timeout; rename
LIST_TOOLS_TIMEOUT to DEFAULT_STARTUP_TIMEOUT
- Update docs to document startup_timeout_ms with example and options
table
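
A sketch of how the per-server timeout might be modeled (simplified; the
real McpServerConfig has more fields and serde derives):

```rust
use std::time::Duration;

struct McpServerConfig {
    command: String,
    /// Per-server startup timeout in milliseconds; `None` means use the default.
    startup_timeout_ms: Option<u64>,
}

const DEFAULT_STARTUP_TIMEOUT: Duration = Duration::from_secs(10);

/// Resolve the timeout used for `initialize` and `tools/list`.
fn startup_timeout(cfg: &McpServerConfig) -> Duration {
    cfg.startup_timeout_ms
        .map(Duration::from_millis)
        .unwrap_or(DEFAULT_STARTUP_TIMEOUT)
}
```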

---------

Co-authored-by: Matthew Dolan <dolan-openai@users.noreply.github.com>
2025-09-08 08:12:08 -07:00
Gabriel Peal
c8fab51372 Use ConversationId instead of raw Uuids (#3282)
We're trying to migrate from `session_id: Uuid` to `conversation_id:
ConversationId`. Not only does this give us more type safety, but it also
unifies our terminology across Codex; with the implementation of
session resuming, a conversation (which can span multiple sessions) is
the more appropriate concept.
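
A minimal sketch of the kind of newtype this migration is about (illustrative;
the real ConversationId carries more derives and helpers, and the `uuid`
crate is assumed here):

```rust
use uuid::Uuid;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct ConversationId(Uuid);

impl ConversationId {
    /// Mint a fresh id for a new conversation.
    pub fn new() -> Self {
        Self(Uuid::new_v4())
    }
}
```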

I started this impl on https://github.com/openai/codex/pull/3219 as part
of getting resume working in the extension but it's big enough that it
should be broken out.
2025-09-07 23:22:25 -04:00
pakrym-oai
0269096229 Move token usage/context information to session level (#3221)
Move context information into the main loop so it can be used to
interrupt the loop or start auto-compaction.
2025-09-06 15:19:23 +00:00
Anton Panasenko
ba9620aea7 [codex] respect overrides for model family configuration from toml file (#3176) 2025-09-05 16:56:58 -07:00
pakrym-oai
5775174ec2 Never store requests (#3212)
When item IDs are sent to the Responses API, it loads them from the
database, ignoring the provided values. This adds extra latency.

Not having a mode that stores requests also allows us to simplify the
code.

## Breaking change

The `disable_response_storage` configuration option is removed.
2025-09-05 10:41:47 -07:00
jif-oai
ba631e7928 ZSH on UNIX system and better detection (#3187) 2025-09-05 09:51:01 -07:00
Jeremy Rose
d6182becbe syntax-highlight bash lines (#3142)
I'm not yet convinced I have the best heuristics for what to highlight,
but this feels like a useful step towards something a bit easier to
read, especially when the model is producing large commands.

<img width="669" height="589" alt="Screenshot 2025-09-03 at 8 21 56 PM"
src="https://github.com/user-attachments/assets/b9cbcc43-80e8-4d41-93c8-daa74b84b331"
/>

Also a fairly significant refactor of our line-wrapping logic.
2025-09-05 14:10:32 +00:00
pakrym-oai
7df9e9c664 Correctly calculate remaining context size (#3190)
We had multiple issues with context size calculation:
1. The `initial_prompt_tokens` calculation based on cache size is not
reliable; cache misses might set it to a much higher value. For now it is
hardcoded to a safer constant.
2. The input context size for GPT-5 is 272k (that's where the 33% came from).

Fixes.
2025-09-04 23:34:14 +00:00
Dylan
82ed7bd285 [mcp-server] Update read config interface (#3093)
## Summary
Follow-up to #3056

This PR updates the mcp-server interface for reading the config settings
saved by the user. At the risk of introducing _another_ Config struct, I
think it makes sense to avoid tying our protocol to ConfigToml, as it has
become a bit unwieldy. GetConfigTomlResponse was a de-facto struct for
this already; better to make it explicit, in my opinion.

This is technically a breaking change of the mcp-server protocol, but
given the previous interface was introduced so recently in #2725, and we
have not yet even started to call it, I propose proceeding with the
breaking change - but am open to preserving the old endpoint.

## Testing
- [x] Added additional integration test coverage
2025-09-04 16:26:41 -07:00
Anton Panasenko
e60a44cbab [codex] move configuration for reasoning summary format to model family config type (#3171) 2025-09-04 11:00:01 -07:00
Michael Bolin
0a83db5512 fix: use a more efficient wire format for ExecCommandOutputDeltaEvent.chunk (#3163)
When serializing to JSON, the existing solution created an enormous
array of ints, which is far more bytes on the wire than a base64-encoded
string would be.
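
A sketch of the idea (simplified; the real event type has more fields, and
the exact serde helper used is an assumption):

```rust
use base64::Engine as _;
use serde::{Serialize, Serializer};

#[derive(Serialize)]
struct ExecCommandOutputDeltaEvent {
    // Serialize the raw bytes as a base64 string rather than a JSON array of ints.
    #[serde(serialize_with = "as_base64")]
    chunk: Vec<u8>,
}

fn as_base64<S: Serializer>(bytes: &[u8], serializer: S) -> Result<S::Ok, S::Error> {
    serializer.serialize_str(&base64::engine::general_purpose::STANDARD.encode(bytes))
}
```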
2025-09-04 08:21:58 -07:00
Michael Bolin
bd4fa85507 fix: add callback to map before sending request to fix race condition (#3146)
Last week, I thought I found the smoking gun in our flaky integration
tests where holding these locks could have led to potential deadlock:

- https://github.com/openai/codex/pull/2876
- https://github.com/openai/codex/pull/2878

Yet even after those PRs went in, we continued to see flakiness in our
integration tests! Though with the additional logging added as part of
debugging those tests, I now saw things like:

```
read message from stdout: Notification(JSONRPCNotification { jsonrpc: "2.0", method: "codex/event/exec_approval_request", params: Some(Object {"id": String("0"), "msg": Object {"type": String("exec_approval_request"), "call_id": String("call1"), "command": Array [String("python3"), String("-c"), String("print(42)")], "cwd": String("/tmp/.tmpFj2zwi/workdir")}, "conversationId": String("c67b32c5-9475-41bf-8680-f4b4834ebcc6")}) })
notification: Notification(JSONRPCNotification { jsonrpc: "2.0", method: "codex/event/exec_approval_request", params: Some(Object {"id": String("0"), "msg": Object {"type": String("exec_approval_request"), "call_id": String("call1"), "command": Array [String("python3"), String("-c"), String("print(42)")], "cwd": String("/tmp/.tmpFj2zwi/workdir")}, "conversationId": String("c67b32c5-9475-41bf-8680-f4b4834ebcc6")}) })
read message from stdout: Request(JSONRPCRequest { id: Integer(0), jsonrpc: "2.0", method: "execCommandApproval", params: Some(Object {"conversation_id": String("c67b32c5-9475-41bf-8680-f4b4834ebcc6"), "call_id": String("call1"), "command": Array [String("python3"), String("-c"), String("print(42)")], "cwd": String("/tmp/.tmpFj2zwi/workdir")}) })
writing message to stdin: Response(JSONRPCResponse { id: Integer(0), jsonrpc: "2.0", result: Object {"decision": String("approved")} })
in read_stream_until_notification_message(codex/event/task_complete)
[mcp stderr] 2025-09-04T00:00:59.738585Z  INFO codex_mcp_server::message_processor: <- response: JSONRPCResponse { id: Integer(0), jsonrpc: "2.0", result: Object {"decision": String("approved")} }
[mcp stderr] 2025-09-04T00:00:59.738740Z DEBUG codex_core::codex: Submission sub=Submission { id: "1", op: ExecApproval { id: "0", decision: Approved } }
[mcp stderr] 2025-09-04T00:00:59.738832Z  WARN codex_core::codex: No pending approval found for sub_id: 0
```

That is, a response was sent for a request, but no callback was in place
to handle the response!

This time, I think I may have found the underlying issue (though the
fixes for holding locks for too long may have also been part of it):
I found cases where we were sending the request:


234c0a0469/codex-rs/core/src/codex.rs (L597)

before inserting the `Sender` into the `pending_approvals` map (which
has to wait on acquiring a mutex):


234c0a0469/codex-rs/core/src/codex.rs (L598-L601)

so it is possible the request could go out and the client could respond
before `pending_approvals` was updated!

Note this was happening in both `request_command_approval()` and
`request_patch_approval()`, which maps to the sorts of errors we have
been seeing when these integration tests have been flaking on us.

While here, I am also adding some extra logging that prints if inserting
into `pending_approvals` overwrites an entry as opposed to purely
inserting one. Today, a conversation can have only one pending request
at a time, but as we are planning to support parallel tool calls, this
invariant may not continue to hold, in which case we need to revisit
this abstraction.
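
A greatly simplified sketch of the reordering (types and names here are
illustrative, not the actual codex ones):

```rust
use std::collections::HashMap;
use tokio::sync::{mpsc, oneshot, Mutex};

async fn request_approval<R, D>(
    pending_approvals: &Mutex<HashMap<String, oneshot::Sender<D>>>,
    outgoing: &mpsc::Sender<R>,
    sub_id: String,
    request: R,
) -> oneshot::Receiver<D> {
    let (tx, rx) = oneshot::channel();
    // 1) Register the callback *before* the request leaves the process,
    //    logging if an existing entry gets overwritten.
    if pending_approvals.lock().await.insert(sub_id, tx).is_some() {
        tracing::warn!("pending_approvals already contained an entry for this sub_id");
    }
    // 2) Only then send the request; even an instant reply now finds the
    //    sender waiting in the map.
    let _ = outgoing.send(request).await;
    rx
}
```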
2025-09-04 07:38:28 -07:00
Ahmed Ibrahim
234c0a0469 TUI: Add session resume picker (--resume) and quick resume (--continue) (#3135)
Adds a TUI resume flow with an interactive picker and quick resume.

- CLI: 
  - --resume / -r: open picker to resume a prior session
  - --continue   / -l: resume the most recent session (no picker)
- Behavior on resume: initial history is replayed, welcome banner
hidden, and the first redraw is suppressed to avoid flicker.
- Implementation:
- New tui/src/resume_picker.rs (paginated listing via
RolloutRecorder::list_conversations)
  - App::run accepts ResumeSelection; resumes from disk when requested
- ChatWidget refactor with ChatWidgetInit and new_from_existing; replays
initial messages
- Tests: cover picker sorting/preview extraction and resumed-history
rendering.
- Docs: getting-started updated with flags and picker usage.



https://github.com/user-attachments/assets/1bb6469b-e5d1-42f6-bec6-b1ae6debda3b
2025-09-04 06:20:40 +00:00
Ahmed Ibrahim
2b96f9f569 Dividing UserMsgs into categories to send it back to the tui (#3127)
This PR does the following:

- Divides user msgs into 3 categories: plain, user instructions, and
environment context
- Centralizes adding user instructions and environment context to a
degree
- Improves the integration testing

Building on top of #3123

Specifically, this
[comment](https://github.com/openai/codex/pull/3123#discussion_r2319885089).
We need to send the user message while ignoring the User Instructions
and Environment Context we attach.
2025-09-04 05:34:50 +00:00
Ahmed Ibrahim
f2036572b6 Replay EventMsgs from Response Items when resuming a session with history. (#3123)
### Overview

This PR introduces the following changes:
1. Adds a unified mechanism to convert ResponseItem into EventMsg.
2. Ensures that when a session is initialized with initial history, a
vector of EventMsg is sent along with the session configuration. This
allows clients to re-render the UI accordingly.
3. Adds integration testing.

### Caveats

This implementation does not send every EventMsg that was previously
dispatched to clients. The excluded events fall into two categories:
- "Arguably" rolled-out events: examples include tool calls and
apply-patch calls. While these events are conceptually rolled out, we
currently only roll out ResponseItems. These events are already being
handled elsewhere and transformed into EventMsg before being sent.
- Non-rolled-out events: certain events such as TurnDiff, Error, and
TokenCount are not rolled out at all.

### Future Directions

At present, resuming a session involves maintaining two states:
- UI state: clients can replay most of the important UI from the provided
EventMsg history.
- Model state: the model receives the complete session history to
reconstruct its internal state.

This design provides a solid foundation. If, in the future, more precise
UI reconstruction is needed, we have two potential paths:
1. Introduce a third data structure that allows us to derive both
ResponseItems and EventMsgs.
2. Clearly divide responsibilities: the core system ensures the
integrity of the model state, while clients are responsible for
reconstructing the UI.
2025-09-04 04:47:00 +00:00
Dylan
db5276f8e6 chore: Clean up verbosity config (#3056)
## Summary
It appears that #2108 hit a merge conflict with #2355 - I failed to
notice the path difference when re-reviewing the former. This PR
rectifies that, and consolidates it into the protocol package, in line
with our philosophy of specifying types in one place.

## Testing
- [x] Adds config test for model_verbosity
2025-09-03 12:20:31 -07:00
Sing303
0e827b6598 Auto-approve DangerFullAccess patches on non-sandboxed platforms (#2988)
**What?**
Auto-approve patches when `SandboxPolicy::DangerFullAccess` is enabled
on platforms without sandbox support.
Changes in `codex-rs/core/src/safety.rs`: return
`SafetyCheck::AutoApprove { sandbox_type: SandboxType::None }` when no
sandbox is available and DangerFullAccess is set.

**Why?**
On platforms lacking sandbox support, requiring explicit user approval
despite `DangerFullAccess` being explicitly enabled adds friction
without additional safety. This aligns behavior with the stated policy
intent.

**How?**
Extend `assess_patch_safety` match:

* If `get_platform_sandbox()` returns `Some`, keep `AutoApprove {
sandbox_type }`.
* If `None` **and** `SandboxPolicy::DangerFullAccess`, return
`AutoApprove { SandboxType::None }`.
* Otherwise, fall back to `AskUser`.
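
A simplified sketch of the resulting logic (stand-in enums; the real types in
`codex-rs/core/src/safety.rs` carry more variants and data):

```rust
enum SandboxType { None, MacosSeatbelt }
enum SandboxPolicy { ReadOnly, WorkspaceWrite, DangerFullAccess }
enum SafetyCheck { AutoApprove { sandbox_type: SandboxType }, AskUser }

fn get_platform_sandbox() -> Option<SandboxType> {
    // Platform-dependent in the real code; stubbed out here.
    None
}

fn assess_patch_safety(policy: &SandboxPolicy) -> SafetyCheck {
    match (get_platform_sandbox(), policy) {
        // A sandbox is available: keep auto-approving inside it.
        (Some(sandbox_type), _) => SafetyCheck::AutoApprove { sandbox_type },
        // No sandbox, but the user explicitly chose DangerFullAccess.
        (None, SandboxPolicy::DangerFullAccess) => {
            SafetyCheck::AutoApprove { sandbox_type: SandboxType::None }
        }
        // Otherwise, fall back to asking the user.
        _ => SafetyCheck::AskUser,
    }
}
```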

**Tests**

* Local checks:
  ```bash
cargo test && cargo clippy --tests && cargo fmt -- --config imports_granularity=Item
  ```
(Additionally: `just fmt`, `just fix -p codex-core`, `cargo check -p
codex-core`.)

**Docs**
No user-facing CLI changes. No README/help updates needed.

**Risk/Impact**
Reduces prompts on non-sandboxed platforms when DangerFullAccess is
explicitly chosen; consistent with policy semantics.

---------

Co-authored-by: Michael Bolin <bolinfest@gmail.com>
2025-09-03 10:57:47 -07:00
Ahmed Ibrahim
daaadfb260 Introduce Rollout Policy (#3116)
Adds a helper function for deciding whether or not we are rolling out a
function.
2025-09-03 17:37:07 +00:00
pakrym-oai
c636f821ae Add a common way to create HTTP client (#3110)
Ensure User-Agent and originator are always sent.
2025-09-03 10:11:02 -07:00
Jeremy Rose
97000c6e6d core: correct sandboxed shell tool description (reads allowed anywhere) (#3069)
Correct the `shell` tool description for sandboxed runs and add targeted
tests.

- Fix the WorkspaceWrite description to clearly state that writes
outside the writable roots require escalated permissions; reads are not
restricted. The previous wording/formatting could be read as restricting
reads outside the workspace.
- Render the writable roots list on its own lines under a newline after
"writable roots:" for clarity.
- Show the "Commands that require network access" note only in
WorkspaceWrite when network is disabled.
- Add focused tests that call `create_shell_tool_for_sandbox` directly
and assert the exact description text for WorkspaceWrite, ReadOnly, and
DangerFullAccess.
- Update AGENTS.md to note that `just fmt` can be run automatically
without asking.
2025-09-03 10:02:34 -07:00
Ahmed Ibrahim
a56eb48195 Use the new search tool (#3086)
We were using the preview search tool in the past. We should use the new
one.
2025-09-03 01:16:47 -07:00
Ahmed Ibrahim
d77b33ded7 core(rollout): extract rollout module, add listing API, and return file heads (#1634)
- Move rollout persistence and listing into a dedicated module:
rollout/{recorder,list}.
- Expose lightweight conversation listing that returns file paths plus
the first 5 JSONL records for preview.
2025-09-03 07:39:19 +00:00
pchuri
6aa306c584 feat: add stable file locking using std::fs APIs (#2894)
## Summary

This PR implements advisory file locking for the message history using
Rust 1.89+ stabilized std::fs::File locking APIs, eliminating the need
for external dependencies.

## Key Changes

- **Stable API Usage**: Uses std::fs::File::try_lock() and
try_lock_shared() APIs stabilized in Rust 1.89
- **Cross-Platform Compatibility**: 
  - Unix systems use try_lock_shared() for advisory read locks
  - Windows systems use try_lock() due to different lock semantics
- **Retry Logic**: Maintains existing retry behavior for concurrent
access scenarios
- **No External Dependencies**: Removes need for external file locking
crates

## Technical Details

The implementation provides advisory file locking to prevent corruption
when multiple Codex processes attempt to write to the message history
file simultaneously. The locking is platform-aware to handle differences
in Windows vs Unix file locking behavior.
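
A minimal sketch of the platform-aware lock attempt (assuming the Rust 1.89
`std::fs::File` locking methods named above; the real code also retries):

```rust
use std::fs::File;

/// Try to take an advisory lock on the history file without blocking.
fn try_lock_history(file: &File) -> bool {
    // Unix readers take a shared lock; Windows takes an exclusive lock
    // because its lock semantics differ.
    #[cfg(unix)]
    let result = file.try_lock_shared();
    #[cfg(not(unix))]
    let result = file.try_lock();
    result.is_ok()
}
```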

## Testing

- Builds successfully on all platforms
- Existing message history tests pass
- File locking retry logic verified

Related to discussion in #2773 about using stabilized Rust APIs instead
of external dependencies.

---------

Co-authored-by: Michael Bolin <bolinfest@gmail.com>
2025-09-02 23:46:27 -07:00
Jeremy Rose
53413c728e parse cd foo && ... for exec and apply_patch (#3083)
Sometimes the model likes to run `cd foo && ...` instead of using the
workdir parameter of exec. Handle them roughly the same.
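
A rough sketch of the idea (the real implementation works on the parsed
command rather than the raw string):

```rust
// Split a leading `cd <dir> &&` off the command so `<dir>` can be treated
// like the exec workdir parameter.
fn split_leading_cd(command: &str) -> (Option<&str>, &str) {
    if let Some(rest) = command.strip_prefix("cd ") {
        if let Some((dir, tail)) = rest.split_once("&&") {
            return (Some(dir.trim()), tail.trim());
        }
    }
    (None, command)
}

fn main() {
    assert_eq!(split_leading_cd("cd foo && make test"), (Some("foo"), "make test"));
    assert_eq!(split_leading_cd("make test"), (None, "make test"));
}
```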
2025-09-03 05:26:06 +00:00
Dominik Kundel
b127a3643f Improve gpt-oss compatibility (#2461)
The gpt-oss models require reasoning to be included with subsequent Chat
Completions requests because otherwise the model forgets why the tools were
called. This change fixes that and also adds some missing documentation
around how to handle context windows in Ollama and how to show the CoT if
you want to.
2025-09-02 19:49:03 -07:00
Anton Panasenko
a93a907c7e [feat] use experimental reasoning summary (#3071)
<img width="1512" height="442" alt="Screenshot 2025-09-02 at 3 49 46 PM"
src="https://github.com/user-attachments/assets/26c3c1cf-b7ed-4520-a12a-8d38a8e0c318"
/>
2025-09-02 18:47:14 -07:00
pakrym-oai
03e2796ca4 Move CodexAuth and AuthManager to the core crate (#3074)
Fix a long standing layering issue.
2025-09-02 18:36:19 -07:00
Eric Traut
051f185ce3 Added back the logic to handle rate-limit errors when using API key (#3070)
A previous PR removed this when adding rate-limit errors for the ChatGPT
auth path.
2025-09-02 17:50:15 -07:00
Dylan
6f75114695 [apply-patch] Fix lark grammar (#2651)
## Summary
Fixes an issue with the lark grammar definition for the apply_patch
freeform tool. This does NOT change the defaults, merely patches the
root cause of the issue we were seeing with empty lines, and an issue
with config not flowing through correctly.

Specifically, the following requires that a line is non-empty:
```
add_line: "+" /(.+)/ LF -> line
```
but many changes _should_ involve creating/updating empty lines. The new
definition is:
```
add_line: "+" /(.*)/ LF -> line
```

## Testing
- [x] Tested locally, reproduced the issue without the update and
confirmed that the model will produce empty lines with the new lark
grammar
2025-09-02 17:38:19 -07:00
Ahmed Ibrahim
431a10fc50 chore: unify history loading (#2736)
We have two ways of loading a conversation with previous history: fork
conversation and the experimental resume that we had before. In this PR,
I am unifying their code path. The path is getting the history items and
recording them in a brand new conversation. This PR also constrains the
rollout recorder's responsibilities to only recording to the disk and
loading from the disk.

The PR also fixes a current bug when we have two forks in a row:
History 1:
<Environment Context>
UserMessage_1
UserMessage_2
UserMessage_3

**Fork with n = 1 (only remove one element)**
History 2:
<Environment Context>
UserMessage_1
UserMessage_2
<Environment Context>

**Fork with n = 1 (only remove one element)**
History 3:
<Environment Context>
UserMessage_1
UserMessage_2
**<Environment Context>**

This shouldn't happen, but it did because we were appending the
`<Environment Context>` after each spawning, and it's considered a
_user message_. Now we don't add this message when restoring an old
conversation.
2025-09-02 22:44:29 +00:00
Jeremy Rose
e442ecedab rework message styling (#2877)
https://github.com/user-attachments/assets/cf07f62b-1895-44bb-b9c3-7a12032eb371
2025-09-02 17:29:58 +00:00
Michael Bolin
c988ce28fe fix: drop Mutex before calling tx_approve.send() (#2876) 2025-08-28 22:49:29 -07:00