This allows `gh api` to work in the workspace-write sandbox with network
enabled. Without this, we see errors such as:
```
$ codex debug seatbelt --full-auto gh api repos/openai/codex/pulls --paginate -X GET -F state=all
Get "https://api.github.com/repos/openai/codex/pulls?per_page=100&state=all": tls: failed to verify certificate: x509: OSStatus -26276
```
Some PRs are being submitted without reference to existing bug reports
or feature requests. This updates the PR template and contributing
guidelines to request that all PRs from the community contain such a
link. This provides additional context and helps prioritize, track, and
assess PRs.
Show a warning when Auto Sandbox mode becomes enabled and we detect
Everyone-writable directories, since they cannot be protected by the
current implementation of the Sandbox.
This PR also includes changes that make Everyone-writable detection
*much* faster.
Implements:
```
turn/start
turn/interrupt
```
along with their integration tests. These are relatively light wrappers
around the existing core logic, and changes to core logic are minimal.
However, one improvement was made for developer ergonomics:
- `turn/start` replaces both `SendUserMessage` (no turn overrides) and
`SendUserTurn` (can override model, approval policy, etc.)
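For illustration, here is a hypothetical `turn/start` request sketched with `serde_json` (the field names are assumptions, not the actual schema):
```rust
use serde_json::json;

fn main() {
    // Hypothetical request shape: the user message and any per-turn
    // overrides (model, approval policy, ...) travel in a single call.
    let request = json!({
        "method": "turn/start",
        "params": {
            "threadId": "thr_123",          // illustrative id
            "input": "Refactor the parser", // the user message
            "model": "gpt-5",               // optional override
            "approvalPolicy": "never"       // optional override
        }
    });
    println!("{request}");
}
```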
Turns out the ToC was including itself when it was generated, which
messed up comparisons and sometimes made the file rewrite endlessly.
Also fixed the slice so `<!-- End ToC -->` doesn't get duplicated when
we insert the new ToC.
Should behave nicely now: no extra rewrites, no doubled markers.
Co-authored-by: Eric Traut <etraut@openai.com>
We currently allow the user to dismiss the login menu via Ctrl+C. This
leaves them in a bad state where they're not auth'ed but have an input
prompt. In the extension, this isn't a problem because we don't allow
the user to dismiss the login screen.
Testing: I confirmed that Ctrl+C no longer dismisses the login menu.
This is an alternative (simpler) fix for a [community
PR](https://github.com/openai/codex/pull/3234).
This PR implements `account/login/start` and `account/login/completed`.
Instead of having separate endpoints for logging in with ChatGPT and
with an API key, we have a single enum handling the different login
methods. For synchronous auth methods like signing in with an API key,
we still send a `completed` notification back to stay compatible with
the async login flow.
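A minimal sketch of the idea (names and shapes are assumptions, not the actual API):
```rust
use serde::{Deserialize, Serialize};

// One enum covers every login method instead of one endpoint per
// method; new methods become new variants.
#[derive(Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "camelCase")]
enum LoginMethod {
    // Async flow: completion arrives later via the notification.
    ChatGpt,
    // Sync flow: succeeds immediately, but a `completed` notification
    // is still emitted so clients handle both flows uniformly.
    ApiKey { key: String },
}
```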
I'm seeing two tests fail intermittently in CI. This PR attempts to
address (or at least mitigate) the flakiness.
* `summarize_context_three_requests_and_instructions` - The test
snapshots `server.received_requests()` immediately after observing
`TaskComplete`. Because the OpenAI `/v1/responses` call is streamed, the
HTTP request can still be draining when that event fires, so wiremock
occasionally reports only two captured requests. The fix is to wait for
async activity to complete (see the sketch after this list).
* `archive_conversation_moves_rollout_into_archived_directory` - times
out on a slow CI run. The mitigation is to increase the timeout from 10s
to 20s.
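A sketch of the first mitigation, assuming a polling helper along these lines (the helper name is hypothetical):
```rust
use std::time::{Duration, Instant};
use wiremock::MockServer;

// Poll wiremock until the expected number of requests has been
// captured, instead of snapshotting immediately after TaskComplete.
async fn wait_for_requests(server: &MockServer, expected: usize, timeout: Duration) {
    let deadline = Instant::now() + timeout;
    loop {
        let received = server
            .received_requests()
            .await
            .map_or(0, |reqs| reqs.len());
        if received >= expected {
            return;
        }
        assert!(
            Instant::now() < deadline,
            "timed out waiting for {expected} captured requests"
        );
        tokio::time::sleep(Duration::from_millis(25)).await;
    }
}
```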
Implements:
```
thread/list
thread/start
thread/resume
thread/archive
```
along with their integration tests. These are relatively light wrappers
around the existing core logic, and changes to core logic are minimal.
However, one improvement was made for developer ergonomics:
- `thread/start` and `thread/resume` automatically attach a
conversation listener internally, so clients don't have to make a
separate `AddConversationListener` call like they do today.
For consistency, also updated `model/list` and `feedback/upload` (naming
conventions, list API params).
Currently, when the access token expires, we attempt to use the refresh
token to acquire a new access token. This works most of the time.
However, there are situations where the refresh token is expired,
exhausted (already used to perform a refresh), or revoked. In those
cases, the current logic treats the error as transient and attempts to
retry it repeatedly.
This PR changes the token refresh logic to differentiate between
permanent and transient errors. It also changes callers to treat the
permanent errors as fatal rather than retrying them. And it provides
better error messages to users so they understand how to address the
problem. These error messages should also help us further understand why
we're seeing examples of refresh token exhaustion.
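As a rough sketch of the classification (type and function names are assumptions, not the actual code):
```rust
// Split refresh failures so callers retry only the transient ones.
enum RefreshError {
    // Refresh token expired, exhausted, or revoked: fatal; surface an
    // error telling the user to log in again.
    Permanent(String),
    // Network or server-side hiccup: safe to retry with backoff.
    Transient(String),
}

fn should_retry(err: &RefreshError) -> bool {
    matches!(err, RefreshError::Transient(_))
}
```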
Here is the error message in the CLI. The same text appears within the
extension.
<img width="863" height="38" alt="image"
src="https://github.com/user-attachments/assets/7ffc0d08-ebf0-4900-b9a9-265064202f4f"
/>
I also corrected the spelling of "Re-connecting", which shouldn't have a
hyphen in it.
Testing: I manually tested these code paths by adding temporary code to
programmatically cause my refresh token to be exhausted (by calling the
token refresh endpoint in a tight loop more than 50 times). I then
simulated an access token expiration, which caused the token refresh
logic to be invoked. I confirmed that the updated logic properly handled
the error condition.
Note: We earlier discussed the idea of forcefully logging out the user
at the point where token refresh failed. I made several attempts to do
this, and all of them resulted in a bad UX. It's important to surface
this error to users in a way that explains the problem and tells them
that they need to log in again. We also previously discussed deleting
the auth.json file when this condition is detected. That also creates
problems because it effectively changes the auth status from logged in
to logged out, and this causes odd failures and inconsistent UX. I think
it's therefore better not to delete auth.json in this case. If the user
closes the CLI or VSCE and starts it again, we properly detect that the
access token is expired and the refresh token is "dead", and we force
the user to go through the login flow at that time.
This should address aspects of #6191, #5679, and #5505.
This is just a refactor of the `conversation_history` file, breaking it
up into multiple smaller ones with helpers. This refactor will help us
move more context-management functionality here in a clean way.
- introduce `RenderableItem` to support both owned and borrowed children
in composite `Renderable`s
- refactor some of our gnarlier manual layouts, `BottomPane` and
`ChatWidget`, to use `ColumnRenderable`
- `Renderable` and friends now handle `cursor_pos()`
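A minimal sketch of the `RenderableItem` idea (the trait shape here is an assumption):
```rust
trait Renderable {
    fn render(&self);
}

// A composite can hold owned and borrowed children behind one type,
// so callers aren't forced to box (or to borrow) everything.
enum RenderableItem<'a> {
    Owned(Box<dyn Renderable + 'a>),
    Borrowed(&'a dyn Renderable),
}

impl<'a> RenderableItem<'a> {
    fn as_renderable(&self) -> &dyn Renderable {
        match self {
            RenderableItem::Owned(r) => r.as_ref(),
            RenderableItem::Borrowed(r) => *r,
        }
    }
}
```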
## Summary
- Adds `ModelReasoningEffort` type to TypeScript SDK with values:
`minimal`, `low`, `medium`, `high`
- Adds `modelReasoningEffort` option to `ThreadOptions`
- Forwards the option to the codex CLI via `--config
model_reasoning_effort="<value>"`
- Includes test coverage for the new option
## Changes
- `sdk/typescript/src/threadOptions.ts`: Define `ModelReasoningEffort`
type and add to `ThreadOptions`
- `sdk/typescript/src/index.ts`: Export `ModelReasoningEffort` type
- `sdk/typescript/src/exec.ts`: Forward `modelReasoningEffort` to CLI as
config flag
- `sdk/typescript/src/thread.ts`: Pass option through to exec (+ debug
logging)
- `sdk/typescript/tests/run.test.ts`: Add test for
`modelReasoningEffort` flag forwarding
---------
Co-authored-by: Eric Traut <etraut@openai.com>
Previously it was not possible for Codex to run commands as the init
process (pid 1) on Linux. Commands run in containers tend to see their
own pid as 1. See https://github.com/openai/codex/issues/4198
This PR implements the solution mentioned in that issue.
Co-authored-by: Eric Traut <etraut@openai.com>
Previously, the `nix build .#default` command failed due to a missing
output hash in `./codex-rs/default.nix` for `crossterm-0.28.1`:
```
error: No hash was found while vendoring the git dependency crossterm-0.28.1. You can add
a hash through the `outputHashes` argument of `importCargoLock`:
outputHashes = {
  "crossterm-0.28.1" = "<hash>";
};
If you use `buildRustPackage`, you can add this attribute to the `cargoLock`
attribute set.
```
This PR adds the missing hash:
```diff
cargoLock.outputHashes = {
"ratatui-0.29.0" = "sha256-HBvT5c8GsiCxMffNjJGLmHnvG77A6cqEL+1ARurBXho=";
+ "crossterm-0.28.1" = "sha256-6qCtfSMuXACKFb9ATID39XyFDIEMFDmbx6SSmNe+728=";
};
```
With this change, `nix build .#default` succeeds:
```
> nix build .#default --max-jobs 1 --cores 2
warning: Git tree '/home/lukas/r/github.com/lukasl-dev/codex' is dirty
[1/0/1 built] building codex-rs-0.1.0 (buildPhase): Compiling ...
> ./result/bin/codex
You are running Codex in /home/lukas/r/github.com/lukasl-dev/codex
Since this folder is version controlled, you may wish to allow Codex to work in this folder without asking for approval.
...
```
**Typescript and JSON schema exports**
While working on Thread/Turn/Items type definitions, I realized we will
run into name conflicts between the v1 and v2 APIs (e.g.
`RateLimitWindow`, which won't be reusable since v1 uses
`RateLimitWindow` from `protocol/`, which uses snake_case, but we want
to expose camelCase everywhere, so we'll define a v2 version of that
struct that serializes as camelCase).
To set us up for a clean and isolated v2 API, generate types into a
`v2/` namespace for both typescript and JSON schema.
- TypeScript: v2 types emit under `out_dir/v2/*.ts`, and the root
index.ts now re-exports them via `export * as v2 from "./v2";`.
- JSON Schemas: v2 definitions bundle under `#/definitions/v2/*` rather
than the root.
The location for the original types (v1 and types pulled from
`protocol/` and other core crates) hasn't changed; they are still at the
root. This is for backwards compatibility: no breaking changes to
existing usages of v1 APIs and types.
**Notifications**
While working on export.rs, I:
- refactored server/client notifications with macros (like we already do
for methods) so they also get exported (I noticed they weren't being
exported at all).
- removed the hardcoded list of types to export as JSON schema by
leveraging the existing macros instead
- and took a stab at API V2 notifications. These aren't wired up yet,
and I expect to iterate on these this week.
The deprecation message is currently a bit confusing. Users may not
understand what `[features].x` is. I updated the docs and the
deprecation message to provide more guidance.
---------
Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
## Summary
musl 1.2.5 includes [several fixes to DNS over
TCP](https://www.openwall.com/lists/musl/2024/03/01/2), which appears to
be the root cause of #6116.
This approach is a bit janky, but according to codex:
> On the Ubuntu 24.04 runners we use, `apt-cache policy musl-tools`
> shows only the distro build (1.2.4-2ubuntu2).
We should build with musl 1.2.5 and confirm.
## Testing
- [ ] TODO: test and see if this fixes Azure issues
V2 endpoints `account/updated` and `account/logout` for the app server.
These correspond to the old `authStatusChange` and `LogoutChatGpt`
respectively. Follow-up PRs will make other v2 endpoints call
`account/updated` instead of `authStatusChange` too.
## Problem
The `is_api_message` function in `conversation_history.rs` had a
misalignment between its documentation and implementation:
- **Comment stated**: "Anything that is not a system message or
'reasoning' message is considered an API message"
- **Code behavior**: Was returning `true` for `ResponseItem::Reasoning`,
meaning reasoning messages were incorrectly treated as API messages
This inconsistency could lead to reasoning messages being persisted in
conversation history when they should be filtered out.
## Root Cause
Investigation revealed that reasoning messages are explicitly excluded
throughout the codebase:
1. **Chat completions API** (lines 267-272 in `chat_completions.rs`)
omits reasoning from conversation history:
```rust
ResponseItem::Reasoning { .. } | ResponseItem::Other => {
// Omit these items from the conversation history.
continue;
}
```
2. **Existing tests** like `drops_reasoning_when_last_role_is_user` and
`ignores_reasoning_before_last_user` validate that reasoning should be
excluded from API payloads
## Solution
Fixed the `is_api_message` function to align with its documentation and
the rest of the codebase:
```rust
// Before: Reasoning was incorrectly returning true
ResponseItem::Reasoning { .. } | ResponseItem::WebSearchCall { .. } => true,
// After: Reasoning correctly returns false
ResponseItem::WebSearchCall { .. } => true,
ResponseItem::Reasoning { .. } | ResponseItem::Other => false,
```
## Testing
- Enhanced existing test to verify reasoning messages are properly
filtered out
- All 264 core tests pass, including 8 chat completions tests that
validate reasoning behavior
- No regressions introduced
This ensures reasoning messages are consistently excluded from API
message processing across the entire codebase.
Closes #4452
This fixes a usability issue where users with symlinked folders in their
working directory couldn't search those files using the `@` file search
feature.
## Rationale
The "bug" was in the file search implementation in
`codex-rs/file-search/src/lib.rs`. The `WalkBuilder` was using default
settings which don't follow symlinks, causing two related issues:
1. Partial search results: The `@` search would find symlinked
directories but couldn't find files inside them
2. Inconsistent behavior: Users expect symlinked folders to behave like
regular folders in search results.
## Root cause
The `ignore` crate's `WalkBuilder` defaults to `.follow_links(false)`
(source: `crates/ignore/src/walk.rs` line 532 at commit `9802945e63`),
so when traversing the file system, it would:
- Detect symlinked directories as directory entries
- But not traverse into them to index their contents
- The `get_file_path` function would then filter out actual directories,
leaving only the symlinked folder itself as a result
Fix: Added `.follow_links(true)` to the `WalkBuilder` configuration,
making the file search follow symlinks and index their contents just
like regular directories.
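A minimal sketch of the configured walker (the function name is illustrative):
```rust
use std::path::Path;
use ignore::{Walk, WalkBuilder};

// Follow symlinks while walking so files inside symlinked directories
// are indexed just like files in regular directories.
fn build_walker(root: &Path) -> Walk {
    WalkBuilder::new(root)
        .follow_links(true) // the `ignore` crate defaults to false
        .build()
}
```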
This change maintains backward compatibility since symlink following is
generally expected behavior for file search tools, and it aligns with
how users expect the `@` search feature to work.
Co-authored-by: Eric Traut <etraut@openai.com>
I was missing an example config.toml, and working from docs/config.md
alone was slow. I had GPT-5 scan the codebase for every accepted config
key, check the defaults, and generate a single example config.toml with
annotations. It lists all keys Codex reads from TOML, sets each to its
effective default where one exists, leaves optional ones commented out,
and adds short comments on purpose and valid values. This should make
onboarding faster and reduce configuration errors. I can rename it to
config.example.toml or move it under docs/ if you prefer.
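For a flavor of the annotation style, a short excerpt (keys taken from elsewhere in this repo; the actual file covers every accepted key):
```toml
# Reasoning effort for the model: minimal | low | medium | high.
# model_reasoning_effort = "medium"

[features]
# Opt into the Rust MCP client used for OAuth with remote MCP servers.
# rmcp_client = true
```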
This fixes bug #6121.
The `input_messages` field passed to the notify handler is currently
empty because the logic is incorrectly including `OutputText` rather
than `InputText`. I've fixed that and added proper filtering to remove
messages associated with AGENTS.md and other context injected by the
harness (see the sketch below).
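A rough sketch of the corrected extraction (the protocol types are stubbed here, and the harness filter is hypothetical):
```rust
// Minimal stand-ins for the protocol item types referenced above.
#[allow(dead_code)]
enum ContentItem {
    InputText { text: String },
    OutputText { text: String },
}

enum ResponseItem {
    Message { role: String, content: Vec<ContentItem> },
}

// Hypothetical helper: detect harness-injected context (AGENTS.md
// contents, environment preamble, ...) so it can be filtered out.
fn is_harness_context(text: &str) -> bool {
    text.contains("AGENTS.md")
}

// Collect the user's InputText items, not the model's OutputText.
fn input_messages(items: &[ResponseItem]) -> Vec<String> {
    items
        .iter()
        .filter_map(|item| match item {
            ResponseItem::Message { role, content } if role == "user" => Some(content),
            _ => None,
        })
        .flatten()
        .filter_map(|c| match c {
            ContentItem::InputText { text } if !is_harness_context(text) => Some(text.clone()),
            _ => None,
        })
        .collect()
}
```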
Testing: I wrote a notify handler and verified that the user prompt is
correctly passed through to the handler.
## Summary
- replace the word-part enum with a simple `is_word_separator` helper
(sketched after this list)
- keep word-boundary logic aligned with the helper and punctuation-aware
behavior
- extend forward/backward deletion tests to cover whitespace around
separators
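A minimal sketch of the helper (the exact character classes are an assumption):
```rust
// Treat whitespace and ASCII punctuation as word separators for
// word-wise cursor movement and deletion.
fn is_word_separator(c: char) -> bool {
    c.is_whitespace() || c.is_ascii_punctuation()
}
```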
## Testing
- `just fix -p codex-tui`
- `cargo test -p codex-tui`
------
https://chatgpt.com/codex/tasks/task_i_68f91c71d838832ca2a3c4f0ec1b55d4
This value is used to determine whether mid-turn compaction is required.
Reasoning items are only excluded between turns (and soon will start to
be preserved even across turns), so it's incorrect to subtract
`reasoning_output_tokens` mid-turn.
This will result in higher values reported between turns but we are also
looking into preserving reasoning items for the entire conversation to
improve performance and caching.
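Schematically (names are assumptions, not the actual code):
```rust
// Only subtract reasoning output tokens at turn boundaries, where
// reasoning items are actually dropped; mid-turn they still occupy
// context and must count toward the compaction threshold.
fn tokens_in_context(total: u64, reasoning_output_tokens: u64, at_turn_boundary: bool) -> u64 {
    if at_turn_boundary {
        total.saturating_sub(reasoning_output_tokens)
    } else {
        total
    }
}
```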
The error message shown when attempting OAuth with a remote MCP server
is incorrect and misleading. The correct config is:
```
[features]
rmcp_client = true
```
Co-authored-by: Eric Traut <etraut@openai.com>
This pull request adds a new documentation section to explain the
available slash commands in Codex. The update introduces a clear
overview and a reference table for built-in commands, making it easier
for users to understand and utilize these features.
Documentation updates:
* Added a new section to `docs/slash_commands.md` describing what slash
commands are and listing all built-in commands with their purposes in a
formatted table.
Hi OpenAI Codex team, currently the "Visit chatgpt.com/codex/settings/usage
for up-to-date information on rate limits and credits" message appears
in the status card and in error messages. Without the "https://" prefix,
the link cannot be clicked directly from most terminals or chat
interfaces.
<img width="636" height="127" alt="Screenshot 2025-11-02 at 22 47 06"
src="https://github.com/user-attachments/assets/5ea11e8b-fb74-451c-85dc-f4d492b2678b"
/>
---
This fix is intended to address the issue:
- It makes the link clickable in terminals that support it, hence better
accessibility
- It follows standard URL formatting practices
- It maintains consistency with other links in the application (like the
existing "https://openai.com/chatgpt/pricing" links)
Thank you!