Commit Graph

691 Commits

Author SHA1 Message Date
aibrahim-oai
5589c6089b approval ui (#1933)
Asking for approval:

<img width="269" height="41" alt="image"
src="https://github.com/user-attachments/assets/b9ced569-3297-4dae-9ce7-0b015c9e14ea"
/>

Allow:

<img width="400" height="31" alt="image"
src="https://github.com/user-attachments/assets/92056b22-efda-4d49-854d-e2943d5fcf17"
/>

Reject:

<img width="372" height="30" alt="image"
src="https://github.com/user-attachments/assets/be9530a9-7d41-4800-bb42-abb9a24fc3ea"
/>

Always Approve:

<img width="410" height="36" alt="image"
src="https://github.com/user-attachments/assets/acf871ba-4c26-4501-b303-7956d0151754"
/>
2025-08-07 02:02:56 -07:00
Michael Bolin
c2c327c723 feat: change shell_environment_policy to default to inherit="all" (#1904)
Trying to use `core` as the default has been "too clever." Users can
always take responsibility for controlling the env without this setting
at all by specifying the `env` they use when calling `codex` in the
first place.

See https://github.com/openai/codex/issues/1249.
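
For illustration, a minimal sketch of how that default could be expressed with serde/toml; the type and field names are assumptions, not the actual codex-rs definitions:

```rust
use serde::Deserialize;

// Hypothetical names; the real codex-rs types may differ.
#[derive(Debug, Default, Deserialize, PartialEq)]
#[serde(rename_all = "lowercase")]
enum EnvInherit {
    #[default]
    All,  // new default: pass the caller's environment through untouched
    Core, // old default: only a curated subset (HOME, PATH, ...)
    None, // start from an empty environment
}

#[derive(Debug, Default, Deserialize)]
struct ShellEnvironmentPolicy {
    #[serde(default)]
    inherit: EnvInherit,
}

fn main() {
    // With no [shell_environment_policy] section configured, behave as if
    // inherit = "all" had been written out explicitly.
    let policy: ShellEnvironmentPolicy = toml::from_str("").unwrap();
    assert_eq!(policy.inherit, EnvInherit::All);
}
```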
2025-08-07 01:55:41 -07:00
Ed Bayes
20084facfe Add spinner animation to TUI status indicator (#1917)
## Summary
- add a pulsing dot loader before the shimmering `Working` label in the
status indicator widget and include a small test asserting the spinner
character is rendered
- also fix a small bug in the ran-command header by adding a missing space
before the `Ran command` label


https://github.com/user-attachments/assets/6768c9d2-e094-49cb-ad51-44bcac10aa6f

## Testing
- `just fmt`
- `just fix` *(failed: E0658 `let` expressions in core/src/client.rs)*
- `cargo test --all-features` *(failed: E0658 `let` expressions in
core/src/client.rs)*

------
https://chatgpt.com/codex/tasks/task_i_68941bffdb948322b0f4190bc9dbe7f6

---------

Co-authored-by: aibrahim-oai <aibrahim@openai.com>
2025-08-07 08:45:04 +00:00
Michael Bolin
13982d6b4e chore: fix outstanding review comments from the bot on #1919 (#1928)
I should have read the comments before submitting!
2025-08-07 01:30:13 -07:00
ae
0334476894 feat: parse info from auth.json and show in /status (#1923)
- `/status` renders
    ```
    signed in with chatgpt
      login: example@example.com
      plan: plus
    ```
- Setup for using this info in a few more places.

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-08-07 01:27:45 -07:00
Gabriel Peal
6d19b73edf Add logout command to CLI and TUI (#1932)
## Summary
- support `codex logout` via new subcommand and helper that removes the
stored `auth.json`
- expose a `logout` function in `codex-login` and test it (a sketch follows
this list)
- add `/logout` slash command in the TUI; command list is filtered when
not logged in and the handler deletes `auth.json` then exits
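
A rough sketch of what such a helper might look like, assuming credentials live in `auth.json` under the Codex home directory:

```rust
use std::io;
use std::path::Path;

/// Removes the stored credentials. Returns Ok(true) if an auth.json existed
/// and was deleted, Ok(false) if there was nothing to remove.
fn logout(codex_home: &Path) -> io::Result<bool> {
    let auth_file = codex_home.join("auth.json");
    match std::fs::remove_file(&auth_file) {
        Ok(()) => Ok(true),
        Err(err) if err.kind() == io::ErrorKind::NotFound => Ok(false),
        Err(err) => Err(err),
    }
}
```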

## Testing
- `just fix` *(fails: failed to get `diffy` from crates.io)*
- `cargo test --all-features` *(fails: failed to get `diffy` from
crates.io)*

------
https://chatgpt.com/codex/tasks/task_i_68945c3facac832ca83d48499716fb51
2025-08-07 04:17:33 -04:00
ae
28395df957 [fix] fix absolute and % token counts (#1931)
- For absolute, use non-cached input + output.
- For estimating what % of the model's context window is used, we need
to account for reasoning output tokens from prior turns being dropped
from the context window. We approximate this here by subtracting
reasoning output tokens from the total. This will be off for the current
turn and pending function calls. We can improve it later.
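
Roughly, the two calculations look like this (field names are made up for illustration, not the actual codex-rs token-usage type):

```rust
struct TokenUsage {
    input_tokens: u64,
    cached_input_tokens: u64,
    output_tokens: u64,
    reasoning_output_tokens: u64,
    total_tokens: u64,
}

/// Absolute count shown to the user: non-cached input plus output.
fn absolute_tokens(u: &TokenUsage) -> u64 {
    u.input_tokens.saturating_sub(u.cached_input_tokens) + u.output_tokens
}

/// Approximate share of the model's context window in use. Reasoning output
/// from prior turns is dropped from the window, so subtract it; this is off
/// for the current turn and for pending function calls.
fn percent_of_context_window(u: &TokenUsage, context_window: u64) -> f64 {
    let in_window = u.total_tokens.saturating_sub(u.reasoning_output_tokens);
    100.0 * in_window as f64 / context_window as f64
}
```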
2025-08-07 08:13:36 +00:00
Ed Bayes
eb80614a7c Tint chat composer background (#1921)
## Summary
- give the chat composer a subtle custom background and apply it across
the full drawn area

<img width="1008" height="718" alt="composer-bg"
src="https://github.com/user-attachments/assets/4b0f7f69-722a-438a-b4e9-0165ae8865a6"
/>

- update the turn-interrupted message to be more human readable
<img width="648" height="170" alt="CleanShot 2025-08-06 at 22 44 47@2x"
src="https://github.com/user-attachments/assets/8d35e53a-bbfa-48e7-8612-c280a54e01dd"
/>

## Testing
- `cargo test --all-features` *(fails: `let` expressions in
`core/src/client.rs` require newer rustc)*
- `just fix` *(fails: `let` expressions in `core/src/client.rs` require
newer rustc)*

------
https://chatgpt.com/codex/tasks/task_i_68941f32c1008322bbcc39ee1d29a526
2025-08-07 00:46:45 -07:00
aibrahim-oai
04b40ac179 Move used tokens next to the hints (#1930)
Before:

<img width="341" height="58" alt="image"
src="https://github.com/user-attachments/assets/3b209e42-1157-4f7b-8385-825c865969e8"
/>

After:

<img width="490" height="53" alt="image"
src="https://github.com/user-attachments/assets/5d99b9bc-6ac2-4748-b62c-c0c3217622c2"
/>
2025-08-07 00:45:47 -07:00
easong-openai
4e29c4afe4 Add a UI hint when you press @ (#1903)
This will make @ more discoverable (even though it is currently not
super useful; IMO it should be used to bring files into context from
outside the CWD).

---------

Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
2025-08-07 07:41:48 +00:00
Michael Bolin
cd5f9074af feat: add /tmp by default (#1919)
Replaces the `include_default_writable_roots` option on
`sandbox_workspace_write` (that defaulted to `true`, which was slightly
weird/annoying) with `exclude_tmpdir_env_var`, which defaults to
`false`.

Perhaps more importantly, `/tmp` is now enabled by default as part of
`sandbox_mode = "workspace-write"`, though `exclude_slash_tmp = true` can
be used to disable this.
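
A hedged sketch of how the new options could deserialize (struct and field names follow the wording above, not necessarily the exact codex-rs config types):

```rust
use serde::Deserialize;

// Illustrative only; mirrors the option names described above.
#[derive(Debug, Deserialize)]
#[serde(default)]
struct SandboxWorkspaceWrite {
    /// When true, do not add the directory named by $TMPDIR as a writable root.
    exclude_tmpdir_env_var: bool,
    /// When true, do not add /tmp as a writable root.
    exclude_slash_tmp: bool,
}

impl Default for SandboxWorkspaceWrite {
    fn default() -> Self {
        // Both default to false, i.e. $TMPDIR and /tmp are writable by default.
        Self { exclude_tmpdir_env_var: false, exclude_slash_tmp: false }
    }
}

fn main() {
    let cfg: SandboxWorkspaceWrite =
        toml::from_str("exclude_slash_tmp = true").unwrap();
    assert!(cfg.exclude_slash_tmp && !cfg.exclude_tmpdir_env_var);
}
```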
2025-08-07 00:17:00 -07:00
aibrahim-oai
fff2bb39f9 change todo (#1925)
<img width="746" height="135" alt="image"
src="https://github.com/user-attachments/assets/1605b2fb-aa3a-4337-b9e9-93f6ff1361c5"
/>


<img width="747" height="126" alt="image"
src="https://github.com/user-attachments/assets/6b4366bd-8548-4d29-8cfa-cd484d9a2359"
/>
2025-08-07 00:01:38 -07:00
aibrahim-oai
f15e0fe1df Ensure exec command end always emitted (#1908)
## Summary
- defer ExecCommandEnd emission until after sandbox resolution
- make sandbox error handler return final exec output and response
- align sandbox error stderr with response content and rename to
`final_output`
- replace unstable `let` chains in client command header logic

## Testing
- `just fmt`
- `just fix`
- `cargo test --all-features` *(fails: NotPresent in
core/tests/client.rs)*

------
https://chatgpt.com/codex/tasks/task_i_6893e63b0c408321a8e1ff2a052c4c51
2025-08-07 06:25:56 +00:00
ae
f0fe61c667 feat: use ctrl c in interrupt hint (#1926)
https://chatgpt.com/codex/tasks/task_i_689441c33e1c832c85ceda166dab5d33
2025-08-06 23:22:58 -07:00
ae
935ad5c6f2 feat: >_ (#1924) 2025-08-06 22:54:54 -07:00
aibrahim-oai
ec20e84d80 Change the UI of apply patch (#1907)
<img width="487" height="108" alt="image"
src="https://github.com/user-attachments/assets/3f6ffd56-36f6-40bc-b999-64279705416a"
/>

---------

Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
2025-08-07 05:25:41 +00:00
easong-openai
2098b40369 Scrollable slash commands (#1830)
Scrollable slash commands. Part 1 of a multi-PR series.
2025-08-06 21:23:09 -07:00
aibrahim-oai
4971d54ca7 Show timing and token counts in status indicator (#1909)
## Summary
- track start time and cumulative tokens in status indicator
- display dim "(Ns • N tokens • Ctrl z to interrupt)" text after the
animated Working header (see the sketch below)
- propagate token usage updates to status indicator views
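
The suffix could be produced by something like this (a sketch, not the widget's actual code):

```rust
use std::time::Instant;

// Sketch of the dim suffix rendered after the animated "Working" header.
fn status_suffix(started_at: Instant, total_tokens: u64) -> String {
    let secs = started_at.elapsed().as_secs();
    format!("({secs}s • {total_tokens} tokens • Ctrl z to interrupt)")
}
```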



https://github.com/user-attachments/assets/b73210c1-1533-40b5-b6c2-3c640029fd54


## Testing
- `just fmt`
- `just fix` *(fails: let expressions in this position are unstable)*
- `cargo test --all-features` *(fails: let expressions in this position
are unstable)*

------
https://chatgpt.com/codex/tasks/task_i_6893ec0d74a883218b94005172d7bc4c
2025-08-06 21:20:09 -07:00
Gabriel Peal
8a990b5401 Migrate GitWarning to OnboardingScreen (#1915)
This paves the way to do per-directory approval settings
(https://github.com/openai/codex/pull/1912).

This also lets us pass a Config/ChatWidgetArgs into onboarding, which can
then mutate it and emit the ChatWidgetArgs it wants at the end, possibly
modified by those approval settings.

<img width="1180" height="428" alt="CleanShot 2025-08-06 at 19 30 55"
src="https://github.com/user-attachments/assets/4dcfda42-0f5e-4b6d-a16d-2597109cc31c"
/>
2025-08-06 22:39:07 -04:00
aibrahim-oai
a5e17cda6b Run command UI (#1897)
Change how commands are shown:

<img width="243" height="119" alt="image"
src="https://github.com/user-attachments/assets/13d5608e-3b66-4b8d-8fe7-ce464310d85d"
/>
2025-08-07 00:10:59 +00:00
pap-openai
8a980399c5 fix cursor file name insert (#1896)
The cursor wasn't moving when inserting a file, so it ended up somewhere
other than the end of the filename after the insert. This fixes it by
moving the cursor to the end of the filename plus one trailing space.
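
A minimal sketch of the fix, with the buffer simplified to a `String` and a byte-offset cursor:

```rust
// Insert "<filename> " at the cursor and advance the cursor past the
// trailing space, so typing continues after the inserted file name.
fn insert_file_mention(buffer: &mut String, cursor: &mut usize, filename: &str) {
    let inserted = format!("{filename} ");
    buffer.insert_str(*cursor, &inserted);
    *cursor += inserted.len();
}
```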


Example screenshot after selecting a file when typing `@`
<img width="823" height="268" alt="image"
src="https://github.com/user-attachments/assets/ec6e3741-e1ba-4752-89d2-11f14a2bd69f"
/>
2025-08-06 16:58:06 -07:00
pap-openai
af8c1cdf12 fix meta+b meta+f (option+left/right) (#1895)
Option+Left or Option+Right should move cursor to beginning/end of the
word.

We weren't handling what terminals send (on macOS) and were therefore
inserting b or f instead of moving the cursor: we fell into the first match
arm and returned a character insertion
(https://github.com/openai/codex/pull/1895/files#diff-6bf130cd00438cc27a38c5a4d9937a27cf9a324c191de4b74fc96019d362be6dL209)

Tested on Apple Terminal, iTerm, Ghostty
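
A hedged sketch of the key handling using crossterm; the actual composer code differs, but the ordering of the arms is the point:

```rust
use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};

enum CursorAction {
    WordLeft,
    WordRight,
    Insert(char),
    Ignore,
}

// macOS terminals send Esc-b / Esc-f for Option+Left / Option+Right, which
// crossterm surfaces as Alt-modified 'b' / 'f' key events.
fn handle_key(key: KeyEvent) -> CursorAction {
    match key.code {
        // These arms must come before the plain character-insert arm,
        // otherwise 'b' / 'f' would be inserted literally.
        KeyCode::Char('b') if key.modifiers == KeyModifiers::ALT => CursorAction::WordLeft,
        KeyCode::Char('f') if key.modifiers == KeyModifiers::ALT => CursorAction::WordRight,
        KeyCode::Char(c) if key.modifiers.is_empty() => CursorAction::Insert(c),
        _ => CursorAction::Ignore,
    }
}
```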
2025-08-06 16:16:47 -07:00
pakrym-oai
57c973b571 Add 2025-08-06 model family (#1899) 2025-08-06 23:14:02 +00:00
Gabriel Peal
2d5de795aa First pass at a TUI onboarding (#1876)
This sets up the scaffolding and basic flow for a TUI onboarding
experience. It covers sign-in with ChatGPT, env-based auth, and some
safety guidance.

Next up:
1. Replace the git warning screen
2. Use this to configure default approval/sandbox modes


Note the shimmer flashes are from me slicing the video, not jank.

https://github.com/user-attachments/assets/0fbe3479-fdde-41f3-87fb-a7a83ab895b8
2025-08-06 18:22:14 -04:00
Dylan
f25b2e8e2c Propagate apply_patch filesystem errors (#1892)
## Summary
We have been returning `exit code 0` from the apply patch command when
writes fail, which causes our `exec` harness to pass back confusing
messages to the model. Instead, we should loudly fail so that the
harness and the model can handle these errors appropriately.

Also adds a test to confirm this behavior.
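
Conceptually the change looks something like this (a sketch, not the crate's actual structure):

```rust
use std::path::Path;
use std::process::ExitCode;

// Propagate filesystem failures as a non-zero exit instead of silently
// returning success with no change on disk.
fn write_patched_file(path: &Path, new_contents: &str) -> ExitCode {
    match std::fs::write(path, new_contents) {
        Ok(()) => ExitCode::SUCCESS,
        Err(err) => {
            eprintln!("apply_patch: failed to write {}: {err}", path.display());
            ExitCode::FAILURE
        }
    }
}
```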

## Testing
- `cargo test -p codex-apply-patch`
2025-08-06 14:58:53 -07:00
ae
a575effbb0 feat: interrupt running task on ctrl-z (#1880)
- Arguably a bugfix as previously CTRL-Z didn't do anything.
- Only in TUI mode for now. This may make sense in other modes... to be
researched.
- The TUI runs the terminal in raw mode, so the signals arrive as key
events and we handle CTRL-Z as a key event just like CTRL-C (see the
sketch after this list).
- Not adding UI for it as a composer redesign is coming, and we can just
add it then.
- We should follow up so that pressing CTRL-Z a second time performs the
native terminal action.
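
A sketch of that key handling with crossterm (the surrounding TUI plumbing is omitted):

```rust
use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};

/// Returns true when the key chord should interrupt the running task.
/// In raw mode the terminal never turns Ctrl-Z into SIGTSTP, so it arrives
/// here as an ordinary key event, just like Ctrl-C does.
fn is_interrupt_chord(key: &KeyEvent) -> bool {
    key.modifiers == KeyModifiers::CONTROL
        && matches!(key.code, KeyCode::Char('z') | KeyCode::Char('c'))
}
```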
2025-08-06 21:56:34 +00:00
ae
6cef86f05b feat: update launch screen (#1881)
- Updates the launch screen to:
  ```
  >_ You are using OpenAI Codex in ~/code/codex/codex-rs
  
   Try one of the following commands to get started:
  
   1. /init - Create an AGENTS.md file with instructions for Codex
   2. /status - Show current session configuration and token usage
   3. /compact - Compact the chat history
   4. /new - Start a new chat
   ```
- These aren't the perfect commands, but as more land soon we can update
the list.
- We should also add logic later to make /init show only when there's no
existing AGENTS.md.
- We still need to iterate heavily on the copy.

<img width="905" height="769" alt="image"
src="https://github.com/user-attachments/assets/5912939e-fb0e-4e76-94ff-785261e2d6ee"
/>
2025-08-06 14:36:48 -07:00
pakrym-oai
8262ba58b2 Prefer env var auth over default codex auth (#1861)
## Summary
- Prioritize provider-specific API keys over default Codex auth when
building requests
- Add test to ensure provider env var auth overrides default auth

## Testing
- `just fmt`
- `just fix` *(fails: `let` expressions in this position are unstable)*
- `cargo test --all-features` *(fails: `let` expressions in this
position are unstable)*

------
https://chatgpt.com/codex/tasks/task_i_68926a104f7483208f2c8fd36763e0e3
2025-08-06 13:02:00 -07:00
Jeremy Rose
081caa5a6b show a transient history cell for commands (#1824)
Adds a new "active history cell" for history bits that need to render
more than once before they're inserted into the history. Only used for
commands right now.


https://github.com/user-attachments/assets/925f01a0-e56d-4613-bc25-fdaa85d8aea5

---------

Co-authored-by: easong-openai <easong@openai.com>
2025-08-06 12:03:45 -07:00
Michael Bolin
4344537742 chore: rename INIT.md to prompt_for_init_command.md and move closer to usage (#1886)
Addressing my post-commit review feedback on
https://github.com/openai/codex/pull/1822.
2025-08-06 11:58:57 -07:00
Michael Bolin
64f2f2eca2 fix: support $CODEX_HOME/AGENTS.md instead of $CODEX_HOME/instructions.md (#1891)
The docs and code do not match. It turns out the docs are "right" in that
they are what we have been meaning to support, so this PR updates the
code:


ae88b69b09/README.md (L298-L302)

Support for `instructions.md` is a holdover from the TypeScript CLI, so
we are just going to drop support for it altogether rather than maintain
it in perpetuity.
2025-08-06 11:48:03 -07:00
Michael Bolin
ae88b69b09 fix: add more instructions to ensure GitHub Action reviews only the necessary code (#1887)
Empirically, we have seen the GitHub Action comment on code outside of
the PR, so try to provide additional instructions in the prompt to avoid
this.
2025-08-06 10:39:58 -07:00
Charlie Weems
ffe24991b7 Initial implementation of /init (#1822)
Basic /init command that appends an instruction to create AGENTS.md to
the conversation history.
2025-08-06 09:10:23 -07:00
Dylan
dc468d563f [env] Remove git config for now (#1884)
## Summary
Forgot to remove this in #1869 last night! Too much of a performance hit
on the main thread. We can bring it back via an async thread on startup.
2025-08-06 08:05:17 -07:00
Dylan
3e8bcf0247 [prompts] Add <environment_context> (#1869)
## Summary
Includes a new user message in the API payload that provides useful
environment context for the model, so it knows about things like the
current working directory and the sandbox.
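
As a rough illustration only; the exact tag contents and formatting are assumptions, not the shipped payload:

```rust
use std::path::Path;

// Build the extra user message described above from the session settings.
fn environment_context(cwd: &Path, sandbox_mode: &str) -> String {
    format!(
        "<environment_context>\n  cwd: {}\n  sandbox: {}\n</environment_context>",
        cwd.display(),
        sandbox_mode,
    )
}
```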

## Testing
Updated unit tests
2025-08-06 01:13:31 -07:00
Dylan
cda39e417f [tests] Investigate flakey mcp-server test (#1877)
## Summary
We've seen these tests flaking over the course of today on different
boxes. `wiremock` seems to be written with tokio/threads in mind, but
given the weird panics from the tests, let's see if this change helps.
2025-08-06 00:07:58 -07:00
ae
d642b07fcc [feat] add /status slash command (#1873)
- Added a `/status` command, which will be useful when we update the
home screen to print less status.
- Moved `create_config_summary_entries` to common since it's used in a
few places.
- Noticed we inconsistently had periods in slash command descriptions
and just removed them everywhere.
- Noticed the diff description was overflowing so made it shorter.
2025-08-05 23:57:52 -07:00
Michael Bolin
7b3ab968a0 docs: add more detail to the codex-rust-review (#1875)
This PR attempts to break `codex-rust-review.md` into sections so that
it is easier to consume.

It also adds a healthy new section on "Assertions in Tests" that has
been on my mind for a while.
2025-08-06 06:36:10 +00:00
Michael Bolin
02e7965228 fix: add stricter checks and better error messages to create_github_release.sh (#1874)
This script attempts to verify that:

- You have no local, uncommitted changes.
- You are on `main`.
- The commit you are on also exists on `main` at the origin
`https://github.com/openai/codex`, i.e., it is not just a commit you
have pushed to your local copy of `main`.

As part of this, try to print a better error message if/when these
conditions are violated.
2025-08-05 23:33:21 -07:00
Michael Bolin
493e4c9463 fix: only tag as prerelease when the version has an -alpha or -beta suffix (#1872)
Hardcoding to `prerelease: true` is a holdover from before we had
migrated to the Rust CLI for releases and decided on how we were doing
version numbers.

To date, I have had to change the release status from "prerelease" to
"actual release" manually through the GitHub Releases web page. This is
a semi-serious problem because I've discovered that it messes up
Homebrew's automation if the version number _looks_ like a real release
but turns out to be a prerelease. The release can end up not being
published to Homebrew at all, so it's important to set the value
correctly from the start.

I verified that `steps.release_name.outputs.name` does not include the
`rust-v` prefix from the tag name.
2025-08-05 23:11:29 -07:00
ae
1f7003b476 tweak comment (#1871)
Belatedly address CR feedback about a comment.

------
https://chatgpt.com/codex/tasks/task_i_6892e8070be4832cba379f2955f5b8bc
2025-08-05 23:02:00 -07:00
Michael Bolin
eaf2fb5b4f fix: fully enumerate EventMsg in chatwidget.rs (#1866)
https://github.com/openai/codex/pull/1868 is a related fix that was in
flight simultaneously, but after talking to @easong-openai, this:

- logs instead of renders for `BackgroundEvent`
- logs for `TurnDiff`
- renders for `PatchApplyEnd`
2025-08-05 22:44:27 -07:00
easong-openai
f8d70d67b6 Add OSS model info (#1860)
Add somewhat arbitrarily chosen context window and output limits.
2025-08-05 22:35:00 -07:00
easong-openai
966d957faf fixes no git repo warning (#1863)
Fix broken git warning

<img width="797" height="482" alt="broken-screen"
src="https://github.com/user-attachments/assets/9c52ed9b-13d8-4f1d-bb37-7c51acac615d"
/>
2025-08-05 22:34:14 -07:00
ae
b90c15abc4 clear terminal on launch (#1870) 2025-08-05 22:01:34 -07:00
aibrahim-oai
31dcae67db Remove Turndiff and Apply patch from the render (#1868)
Make the TUI more specific about what to render. Apply-patch end and turn
diff events need special handling.

Avoiding this issue:

<img width="503" height="138" alt="image"
src="https://github.com/user-attachments/assets/4c010ea8-701e-46d2-aa49-88b37fe0e5d9"
/>
2025-08-05 21:32:03 -07:00
Dylan
725dd6be6a [approval_policy] Add OnRequest approval_policy (#1865)
## Summary
A split-up PR of #1763, stacked on top of a tools refactor (#1858) to
make the change clearer. From the previous summary:

> Let's try something new: tell the model about the sandbox, and let it
decide when it will need to break the sandbox. Some local testing
suggests that it works pretty well with zero iteration on the prompt!
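
A sketch of the idea as an approval-policy enum; the variant names here are illustrative, following the PR title rather than the real codex-rs definitions:

```rust
// Illustrative variants; the real enum in codex-rs may differ.
enum AskForApproval {
    /// Never prompt; sandbox failures are reported back to the model.
    Never,
    /// Prompt before running anything that is not on a trusted list.
    UnlessTrusted,
    /// New in this PR: run inside the sandbox by default, tell the model
    /// about the sandbox, and only prompt the user when the model explicitly
    /// requests an escalated (unsandboxed) run.
    OnRequest,
}
```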

## Testing
- [x] Added unit tests
- [x] Tested locally and it appears to work smoothly!
2025-08-05 20:44:20 -07:00
Dylan
aff97ed7dd [core] Separate tools config from openai client (#1858)
## Summary
In an effort to make tools easier to work with and more configurable,
I'm introducing `ToolConfig` and updating `Prompt` to take in a general
list of Tools. I think this is simpler and better for a few reasons:
- We can easily assemble tools from various sources (our own harness,
MCP servers, etc.) and consolidate the logic for constructing them in one
place that is separate from serialization.
- client.rs no longer needs arbitrary config values; it just takes in a
list of tools to serialize.

A hefty portion of the PR is now updating our conversion of
`mcp_types::Tool` to `OpenAITool`, but considering that @bolinfest
accurately called this out as a TODO long ago, I think it's time we
tackled it.

## Testing
- [x] Experimented locally, no changes, as expected
- [x] Added additional unit tests
- [x] Responded to rust-review
2025-08-05 19:27:52 -07:00
Michael Bolin
afa8f0d617 fix: exit cleanly when ShutdownComplete is received (#1864)
Prior to this PR, `ShutdownComplete` was not being handled correctly
in `codex exec`, so it always ended up printing the following to stderr:

```
ERROR codex_exec: Error receiving event: InternalAgentDied
```

Because we were not breaking out of the loop for `ShutdownComplete`,
inevitably `codex.next_event()` would get called again and
`rx_event.recv()` would fail and the error would get mapped to
`InternalAgentDied`:


ea7d3f27bd/codex-rs/core/src/codex.rs (L190-L197)

For reference, https://github.com/openai/codex/pull/1647 introduced the
`ShutdownComplete` variant.
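
The shape of the fix is roughly this (event and error names simplified):

```rust
enum EventMsg {
    ShutdownComplete,
    Other, // all other event variants, elided here
}

fn event_loop(mut next_event: impl FnMut() -> Result<EventMsg, &'static str>) {
    loop {
        match next_event() {
            // Break here: after ShutdownComplete the agent is gone, and one
            // more recv() is exactly what produced InternalAgentDied before.
            Ok(EventMsg::ShutdownComplete) => break,
            Ok(_) => { /* handle the event */ }
            Err(err) => {
                eprintln!("Error receiving event: {err}");
                break;
            }
        }
    }
}
```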
2025-08-05 19:19:36 -07:00
Dylan
ea7d3f27bd [core] Stop escalating timeouts (#1853)
## Summary
Escalating out of the sandbox is (almost always) not going to fix
long-running commands timing out, so we should just pass the failure back
to the model instead of asking the user to re-run a command that took a
long time anyway.
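
Schematically (names invented for illustration):

```rust
enum SandboxFailure {
    Timeout,
    Denied,
}

enum NextStep {
    /// Return the failure output to the model as the command's result.
    ReportToModel,
    /// Ask the user whether to retry the command outside the sandbox.
    AskToEscalate,
}

fn on_sandbox_failure(failure: SandboxFailure) -> NextStep {
    match failure {
        // Re-running a command that already timed out, just without the
        // sandbox, will almost never help, so skip the approval prompt.
        SandboxFailure::Timeout => NextStep::ReportToModel,
        SandboxFailure::Denied => NextStep::AskToEscalate,
    }
}
```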

## Testing
- [x] Ran locally with a timeout and confirmed this worked as expected
2025-08-05 17:52:25 -07:00