Commit Graph

200 Commits

Author SHA1 Message Date
pakrym-oai
cb78f2333e Set user-agent (#2230)
Use the same well-defined value in all cases when sending the user-agent
header.
2025-08-12 16:40:04 +00:00
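A minimal sketch of the idea, assuming a reqwest-based client; the constant name and the `codex/<version>` format are illustrative, not the actual codex-rs values:

```rust
// Sketch only: one well-defined user-agent value, reused by every HTTP client.
// The prefix and constant name are assumptions for illustration.
const USER_AGENT: &str = concat!("codex/", env!("CARGO_PKG_VERSION"));

fn build_client() -> reqwest::Client {
    reqwest::Client::builder()
        .user_agent(USER_AGENT)
        .build()
        .expect("client should build")
}
```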
ae
320f150c68 fix: update ctrl-z to suspend tui (#2113)
- Lean on ctrl-c and esc to interrupt.
- (Only on unix.)

https://github.com/user-attachments/assets/7ce6c57f-6ee2-40c2-8cd2-b31265f16c1c
2025-08-12 05:03:58 +00:00
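A hedged sketch of how a Unix-only Ctrl-Z suspend could work in a crossterm-based TUI; the restore/re-init steps are assumptions about what the real handler does, not the actual codex code:

```rust
// Minimal sketch of suspending a TUI on Ctrl-Z (Unix only).
#[cfg(unix)]
fn suspend_to_shell() -> std::io::Result<()> {
    // Restore the cooked terminal before handing control back to the shell.
    crossterm::terminal::disable_raw_mode()?;
    // SIGTSTP stops the process, just like Ctrl-Z in a plain shell.
    unsafe { libc::raise(libc::SIGTSTP) };
    // Execution resumes here after `fg`; re-enter raw mode for the TUI.
    crossterm::terminal::enable_raw_mode()?;
    Ok(())
}
```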
aibrahim-oai
336952ae2e TUI: Show apply patch diff. Stack: [2/2] (#2050)
Show the diff for apply patch

<img width="801" height="345" alt="image"
src="https://github.com/user-attachments/assets/a15d6112-e83e-4612-a2bd-43285689a358"
/>


Stack:
-> #2050
#2049
2025-08-11 18:32:59 -07:00
Michael Bolin
ae81fbf83f fix: remove unused import in release mode (#2201)
Moves `use codex_core::protocol::EventMsg` inside the block annotated
with `#[cfg(debug_assertions)]` since that was the only place in the
file that was using it.

This eliminates the `warning: unused import:` when building with `cargo
build --release` in `codex-rs/tui`.

Note this was not breaking CI because we do not build release builds on
CI since we're impatient :P
2025-08-11 17:11:36 -07:00
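The pattern, sketched with a stand-in module rather than the real `codex_core` crate: keep the `use` inside the only `#[cfg(debug_assertions)]` block that needs it, so release builds don't warn.

```rust
// Stand-in for codex_core::protocol, for illustration only.
mod protocol {
    #[derive(Debug)]
    pub struct EventMsg(pub String);
}

fn log_event(raw: &str) {
    #[cfg(debug_assertions)]
    {
        use crate::protocol::EventMsg; // only needed by this debug-only block
        eprintln!("{:?}", EventMsg(raw.to_string()));
    }
    let _ = raw; // keep the parameter used in release builds too
}

fn main() {
    log_event("task_started");
}
```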
Gabriel Peal
6220e8ac2e [TUI] Split multiline commands (#2202)
Fixes:
<img width="5084" height="1160" alt="CleanShot 2025-08-11 at 16 02 55"
src="https://github.com/user-attachments/assets/ccdbf39d-dc8b-4214-ab65-39ac89841d1c"
/>
2025-08-11 19:11:46 -04:00
ae
a48372ce5d feat: add a /mention slash command (#2114)
- To help people discover @mentions.
- The command just places an @ in the composer.
- #2115 then improves the behavior of @mentions with empty queries.
2025-08-11 14:15:41 -07:00
Gabriel Peal
4368f075d0 [3/3] Merge sequential exec commands (#2110)
This PR merges and dedupes sequential exec cells so they stack neatly on
sequential lines rather than as separate blocks.

This is particularly useful because the model will often `sed` 200 lines
of a file at a time, multiple times in a row, and this nicely collapses them.


https://github.com/user-attachments/assets/04cccda5-e2ba-4a97-a613-4547587aa15c

Part 1: https://github.com/openai/codex/pull/2095
Part 2: https://github.com/openai/codex/pull/2097
2025-08-11 12:40:12 -07:00
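A rough sketch of the merging idea; the enum and field names are illustrative, not the actual codex TUI structures. A new exec is folded into the previous cell when that cell is also an exec cell, and immediately repeated commands are deduped:

```rust
#[derive(Debug)]
enum HistoryCell {
    Exec { commands: Vec<String> },
    Text(String),
}

fn push_exec(history: &mut Vec<HistoryCell>, command: String) {
    if let Some(HistoryCell::Exec { commands }) = history.last_mut() {
        // Sequential exec: stack onto the previous cell, skipping exact repeats.
        if commands.last() != Some(&command) {
            commands.push(command);
        }
    } else {
        history.push(HistoryCell::Exec { commands: vec![command] });
    }
}
```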
aibrahim-oai
85e4f564a3 Chores: Refactor approval Patch UI. Stack: [1/2] (#2049)
- Moved the apply-patch logic into its own file.

Stack:
#2050
-> #2049
2025-08-11 19:31:34 +00:00
Gabriel Peal
b76a562c49 [2/3] Retain the TUI last exec history cell so that it can be updated by the next tool call (#2097)
Right now, every time an exec ends, we emit it to history which makes it
immutable. In order to be able to update or merge successive tool calls
(which will be useful after https://github.com/openai/codex/pull/2095),
we need to retain it as the active cell.

This also changes the cell to contain the metadata necessary to render
it so it can be updated rather than baking in the final text lines when
the cell is created.


Part 1: https://github.com/openai/codex/pull/2095
Part 3: https://github.com/openai/codex/pull/2110
2025-08-11 14:43:58 -04:00
Gabriel Peal
7f6408720b [1/3] Parse exec commands and format them more nicely in the UI (#2095)
# Note for reviewers
The bulk of this PR is in the new file, `parse_command.rs`. This file
is designed to be written test-first (TDD) and implemented with Codex. Don't worry
about reviewing the code; just review the unit tests (if you want). If
any cases are missing, we'll add more tests and have Codex fix them.

I think the best approach will be to land and iterate. I have some
follow-ups I want to do after this lands. The next PR after this will
let us merge (and dedupe) multiple sequential cells of the same type, such as
multiple read commands. The deduping will also be important because the
model often reads the same file multiple times in a row in chunks.

===

This PR formats common commands like reading, formatting, testing, etc.
more nicely:

It tries to extract things like file names and test names, and falls back to the
raw command if it can't. It also only shows stdout/stderr if the command failed.

<img width="770" height="238" alt="CleanShot 2025-08-09 at 16 05 15"
src="https://github.com/user-attachments/assets/0ead179a-8910-486b-aa3d-7d26264d751e"
/>
<img width="348" height="158" alt="CleanShot 2025-08-09 at 16 05 32"
src="https://github.com/user-attachments/assets/4302681b-5e87-4ff3-85b4-0252c6c485a9"
/>
<img width="834" height="324" alt="CleanShot 2025-08-09 at 16 05 56 2"
src="https://github.com/user-attachments/assets/09fb3517-7bd6-40f6-a126-4172106b700f"
/>

Part 2: https://github.com/openai/codex/pull/2097
Part 3: https://github.com/openai/codex/pull/2110
2025-08-11 14:26:15 -04:00
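A very reduced sketch of the parse-and-summarize idea; the real `parse_command.rs` is test-driven and covers far more command shapes than this, and the summary strings here are made up:

```rust
fn summarize(cmd: &[&str]) -> String {
    match cmd {
        ["cat", path] => format!("Read {path}"),
        ["sed", "-n", _range, path] => format!("Read {path}"),
        ["rg", pattern, ..] | ["grep", pattern, ..] => format!("Search {pattern}"),
        // Fall back to the raw command when nothing is recognized.
        _ => cmd.join(" "),
    }
}

fn main() {
    assert_eq!(summarize(&["sed", "-n", "1,200p", "src/main.rs"]), "Read src/main.rs");
    assert_eq!(summarize(&["rm", "-rf", "target"]), "rm -rf target");
}
```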
aibrahim-oai
fa0a879444 show feedback message after /Compact command (#2162)
This PR updates ChatWidget to ensure that when AgentMessage,
AgentReasoning, or AgentReasoningRawContent events arrive without any
streamed deltas, the final text from the event is rendered before the
stream is finalized. Previously, these handlers ignored the event text
in such cases, relying solely on prior deltas.

<img width="603" height="189" alt="image"
src="https://github.com/user-attachments/assets/868516f2-7963-4603-9af4-adb1b1eda61e"
/>
2025-08-11 10:41:23 -07:00
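The fallback described above, as a hedged sketch (function and parameter names are illustrative): if no deltas were streamed, render the event's final text before finalizing the stream.

```rust
fn finalize_stream(streamed_delta_count: usize, final_text: &str, out: &mut Vec<String>) {
    // No deltas arrived for this stream: fall back to the event's final text.
    if streamed_delta_count == 0 && !final_text.is_empty() {
        out.push(final_text.to_string());
    }
    // ...then finalize the stream as before...
}
```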
ae
a191945ed6 fix: token usage display and context calculation (#2117)
- I had a recent conversation where the one-liner showed 11M tokens used, but
looking into it, 10M were cached, so I think we had a regression here.
- Use blended total tokens for the chat composer usage display
- Compute remaining context using the tokens_in_context_window helper

------
https://chatgpt.com/codex/tasks/task_i_68981a16c0a4832cbf416017390930e5
2025-08-11 07:19:15 -07:00
Gabriel Peal
9d8d7d8704 Middle-truncate tool output and show more lines (#2096)
Command output can contain important bits of information at the
beginning or end. This shows a bit more output and truncates in the
middle.

This will work better paired with
https://github.com/openai/codex/pull/2095 which will omit output for
simple successful reads/searches/etc.

<img width="1262" height="496" alt="CleanShot 2025-08-09 at 13 01 05"
src="https://github.com/user-attachments/assets/9d989eb6-f81e-4118-9745-d20728eeef71"
/>


------
https://chatgpt.com/codex/tasks/task_i_68978cd19f9c832cac4975e44dcd99a0
2025-08-11 00:32:56 -04:00
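A sketch of the middle-truncation idea: keep the head and tail of the output, where errors and summaries usually live, and elide the middle. The function name and elision marker are illustrative.

```rust
fn truncate_middle(lines: &[String], max_lines: usize) -> Vec<String> {
    if lines.len() <= max_lines || max_lines < 3 {
        return lines.to_vec();
    }
    let head = max_lines / 2;
    let tail = max_lines - head - 1; // reserve one line for the elision marker
    let mut out = lines[..head].to_vec();
    out.push(format!("… {} lines elided …", lines.len() - head - tail));
    out.extend_from_slice(&lines[lines.len() - tail..]);
    out
}
```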
pakrym-oai
329f01b728 feat: allow esc to interrupt session (#2054)
## Summary
- allow Esc to interrupt the current session when a task is running
- document Esc as an interrupt key in status indicator

## Testing
- `just fmt`
- `just fix` *(fails: E0658 `let` expressions in this position are
unstable)*
- `cargo test --all-features` *(fails: E0658 `let` expressions in this
position are unstable)*

------
https://chatgpt.com/codex/tasks/task_i_689698cf605883208f57b0317ff6a303
2025-08-08 18:59:54 -07:00
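A hedged sketch of the key handling, using crossterm types; the function name and the exact conditions are assumptions, not the actual codex-rs handler:

```rust
use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};

// While a task is running, Esc requests an interrupt; Ctrl-C always does.
fn wants_interrupt(key: KeyEvent, task_running: bool) -> bool {
    match key.code {
        KeyCode::Esc => task_running,
        KeyCode::Char('c') => key.modifiers.contains(KeyModifiers::CONTROL),
        _ => false,
    }
}
```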
aibrahim-oai
4a916ba914 Show ChatGPT login URL during onboarding (#2028)
## Summary
- display authentication URL in the ChatGPT sign-in screen while
onboarding

<img width="684" height="151" alt="image"
src="https://github.com/user-attachments/assets/a8c32cb0-77f6-4a3f-ae3b-6695247c994d"
/>
2025-08-09 01:30:34 +00:00
Josh LeBlanc
216e9e2ed0 Fix rust build on windows (#2019)
This pull request implements a fix from #2000 and also fixes an
additional problem with path lengths on Windows that prevented the login
from displaying.

---------

Co-authored-by: Michael Bolin <bolinfest@gmail.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-08-08 10:57:16 -07:00
Gabriel Peal
c3a8ab8511 Fix multiline exec command rendering (#2023)
With Ratatui, if a single line contains newlines, it increments y but
does not reset x, so each subsequent line continues from the x position where
the previous line ended.

Before
<img width="2010" height="376" alt="CleanShot 2025-08-08 at 09 13 13"
src="https://github.com/user-attachments/assets/09feefbd-c5ee-4631-8967-93ab108c352a"
/>
After
<img width="1002" height="364" alt="CleanShot 2025-08-08 at 09 11 54"
src="https://github.com/user-attachments/assets/a58b47cf-777f-436a-93d9-ab277046a577"
/>
2025-08-08 13:52:24 -04:00
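A minimal sketch of the fix, assuming ratatui's `Line` type: never hand the renderer a line whose content embeds `\n`; split into one `Line` per physical line so each row starts back at x = 0. The helper name is illustrative.

```rust
use ratatui::text::Line;

fn to_lines(output: &str) -> Vec<Line<'_>> {
    output.lines().map(Line::from).collect()
}
```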
easong-openai
52e12f2b6c Revert "Streaming markdown (#1920)" (#1981)
This reverts commit 2b7139859e.
2025-08-08 01:38:39 +00:00
easong-openai
2b7139859e Streaming markdown (#1920)
We wait until we have an entire line, then format it with markdown and stream it into the UI. This reduces time to first token but is the right thing to do with our current rendering model IMO. Also lets us add word wrapping!
2025-08-07 18:26:47 -07:00
Michael Bolin
cd06b28d84 fix: default to credits from ChatGPT auth, when possible (#1971)
Uses this rough strategy for authentication:

```
if auth.json
    if auth.json.API_KEY is NULL  # new auth
        CHAT
    else  # old auth
        if plus or pro or team
            CHAT
        else
            API_KEY
else
    OPENAI_API_KEY
```

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/1970).
* __->__ #1971
* #1970
* #1966
* #1965
* #1962
2025-08-07 18:00:31 -07:00
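An illustrative Rust rendering of that pseudocode; the enum, struct, and field names are assumptions, not the actual codex-rs auth types.

```rust
enum AuthMode {
    ChatGpt,
    ApiKey(String),
}

struct AuthJson {
    api_key: Option<String>,
    plan: Option<String>, // e.g. "plus", "pro", "team"
}

fn pick_auth(auth_json: Option<AuthJson>, env_api_key: Option<String>) -> Option<AuthMode> {
    match auth_json {
        Some(auth) => match auth.api_key {
            // New-style auth.json has no API key: use ChatGPT credits.
            None => Some(AuthMode::ChatGpt),
            // Old-style auth.json: prefer ChatGPT for plus/pro/team plans.
            Some(key) => match auth.plan.as_deref() {
                Some("plus" | "pro" | "team") => Some(AuthMode::ChatGpt),
                _ => Some(AuthMode::ApiKey(key)),
            },
        },
        // No auth.json at all: fall back to OPENAI_API_KEY from the environment.
        None => env_api_key.map(AuthMode::ApiKey),
    }
}
```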
Michael Bolin
b991c04f86 chore: move top-level load_auth() to CodexAuth::from_codex_home() (#1966)
There are two valid ways to create an instance of `CodexAuth`:
`from_api_key()` and `from_codex_home()`. Now both are static methods of
`CodexAuth` and are listed first in the implementation.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/1966).
* #1971
* #1970
* __->__ #1966
* #1965
* #1962
2025-08-07 16:49:37 -07:00
Michael Bolin
7d67159587 fix: public load_auth() fn always called with include_env_var=true (#1961)
Apparently `include_env_var=false` was only used for testing, so clean
up the API a little to make that clear.


---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/1961).
* #1962
* __->__ #1961
2025-08-07 14:19:30 -07:00
Michael Bolin
f74fe7af7b fix: fix mistaken bitwise OR in #1949 (#1957)
This is hard for me to test conclusively because I keep the macOS default
where `ctrl left/right` switches between Spaces.
2025-08-07 20:11:06 +00:00
Jeremy Rose
c787603812 ctrl+arrows also move words (#1949)
this was removed at some point, but this is a common keybind for word
left/right.
2025-08-07 18:27:44 +00:00
Ed Bayes
09adbf9132 remove composer bg (#1944)
passes local tests
2025-08-07 10:04:49 -07:00
Dylan
bc28b87c7b [config] Onboarding flow with persistence (#1929)
## Summary
In collaboration with @gpeal: upgrade the onboarding flow, and persist
user settings.

---------

Co-authored-by: Gabriel Peal <gabriel@openai.com>
2025-08-07 09:27:38 -07:00
pakrym-oai
c87fb83d81 Calculate remaining context based on last token usage (#1940)
We should only take the last request's size (in tokens) into account.
2025-08-07 05:17:18 -07:00
ae
12d29c2779 feat: add tip to upgrade to ChatGPT plan (#1938) 2025-08-07 11:10:13 +00:00
ae
c4dc6a80bf feat: improve output of /status (#1936)
Now it looks like this:
```
/status
📂 Workspace
  • Path: ~/code/codex/codex-rs
  • Approval Mode: on-request
  • Sandbox: workspace-write

👤 Account
  • Signed in with ChatGPT
  • Login: example@example.com
  • Plan: Pro

🧠 Model
  • Name: ?!?!?!?!?!
  • Provider: OpenAI

📊 Token Usage
  • Input: 11940 (+ 7999 cached)
  • Output: 2639
  • Total: 14579
```
2025-08-07 12:02:58 +01:00
ae
7c20160676 feat: /prompts slash command (#1937)
- Shows several example prompts which include @-mentions 

------
https://chatgpt.com/codex/tasks/task_i_6894779ba8cc832ca0c871d17ee06aae
2025-08-07 11:55:59 +01:00
Ed Bayes
1e4bf81653 Update copy (#1935)
Updated copy

---------

Co-authored-by: pap-openai <pap@openai.com>
2025-08-07 03:29:33 -07:00
aibrahim-oai
5589c6089b approval ui (#1933)
Asking for approval:

<img width="269" height="41" alt="image"
src="https://github.com/user-attachments/assets/b9ced569-3297-4dae-9ce7-0b015c9e14ea"
/>

Allow:

<img width="400" height="31" alt="image"
src="https://github.com/user-attachments/assets/92056b22-efda-4d49-854d-e2943d5fcf17"
/>

Reject:

<img width="372" height="30" alt="image"
src="https://github.com/user-attachments/assets/be9530a9-7d41-4800-bb42-abb9a24fc3ea"
/>

Always Approve:

<img width="410" height="36" alt="image"
src="https://github.com/user-attachments/assets/acf871ba-4c26-4501-b303-7956d0151754"
/>
2025-08-07 02:02:56 -07:00
Ed Bayes
20084facfe Add spinner animation to TUI status indicator (#1917)
## Summary
- add a pulsing dot loader before the shimmering `Working` label in the
status indicator widget and include a small test asserting the spinner
character is rendered
- also fix a small bug in the ran command header by adding a space
between the  and `Ran command`


https://github.com/user-attachments/assets/6768c9d2-e094-49cb-ad51-44bcac10aa6f

## Testing
- `just fmt`
- `just fix` *(failed: E0658 `let` expressions in core/src/client.rs)*
- `cargo test --all-features` *(failed: E0658 `let` expressions in
core/src/client.rs)*

------
https://chatgpt.com/codex/tasks/task_i_68941bffdb948322b0f4190bc9dbe7f6

---------

Co-authored-by: aibrahim-oai <aibrahim@openai.com>
2025-08-07 08:45:04 +00:00
ae
0334476894 feat: parse info from auth.json and show in /status (#1923)
- `/status` renders
    ```
    signed in with chatgpt
      login: example@example.com
      plan: plus
    ```
- Setup for using this info in a few more places.

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-08-07 01:27:45 -07:00
Gabriel Peal
6d19b73edf Add logout command to CLI and TUI (#1932)
## Summary
- support `codex logout` via new subcommand and helper that removes the
stored `auth.json`
- expose a `logout` function in `codex-login` and test it
- add `/logout` slash command in the TUI; command list is filtered when
not logged in and the handler deletes `auth.json` then exits

## Testing
- `just fix` *(fails: failed to get `diffy` from crates.io)*
- `cargo test --all-features` *(fails: failed to get `diffy` from
crates.io)*

------
https://chatgpt.com/codex/tasks/task_i_68945c3facac832ca83d48499716fb51
2025-08-07 04:17:33 -04:00
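A hedged sketch of such a logout helper: remove the stored `auth.json` under the codex home directory and treat "already missing" as a no-op. The signature is illustrative.

```rust
use std::path::Path;

fn logout(codex_home: &Path) -> std::io::Result<bool> {
    let auth_file = codex_home.join("auth.json");
    match std::fs::remove_file(&auth_file) {
        Ok(()) => Ok(true),
        Err(e) if e.kind() == std::io::ErrorKind::NotFound => Ok(false),
        Err(e) => Err(e),
    }
}
```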
ae
28395df957 [fix] fix absolute and % token counts (#1931)
- For the absolute count, use non-cached input + output.
- For estimating what % of the model's context window is used, we need
to account for reasoning output tokens from prior turns being dropped
from the context window. We approximate this here by subtracting
reasoning output tokens from the total. This will be off for the current
turn and pending function calls. We can improve it later.
2025-08-07 08:13:36 +00:00
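The arithmetic described above, as a sketch; the struct and field names are illustrative, not the actual codex-rs types.

```rust
struct TokenUsage {
    input_tokens: u64,
    cached_input_tokens: u64,
    output_tokens: u64,
    reasoning_output_tokens: u64,
    total_tokens: u64,
}

impl TokenUsage {
    // Absolute count shown to the user: non-cached input plus output.
    fn absolute(&self) -> u64 {
        self.input_tokens.saturating_sub(self.cached_input_tokens) + self.output_tokens
    }

    // Rough share of the context window still free: prior-turn reasoning output
    // is dropped from the window, so subtract it before comparing.
    fn percent_of_context_left(&self, context_window: u64) -> f64 {
        let in_window = self.total_tokens.saturating_sub(self.reasoning_output_tokens);
        (1.0 - in_window as f64 / context_window as f64).max(0.0) * 100.0
    }
}
```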
Ed Bayes
eb80614a7c Tint chat composer background (#1921)
## Summary
- give the chat composer a subtle custom background and apply it across
the full area drawn

<img width="1008" height="718" alt="composer-bg"
src="https://github.com/user-attachments/assets/4b0f7f69-722a-438a-b4e9-0165ae8865a6"
/>

- update turn interrupted to be more human readable
<img width="648" height="170" alt="CleanShot 2025-08-06 at 22 44 47@2x"
src="https://github.com/user-attachments/assets/8d35e53a-bbfa-48e7-8612-c280a54e01dd"
/>

## Testing
- `cargo test --all-features` *(fails: `let` expressions in
`core/src/client.rs` require newer rustc)*
- `just fix` *(fails: `let` expressions in `core/src/client.rs` require
newer rustc)*

------
https://chatgpt.com/codex/tasks/task_i_68941f32c1008322bbcc39ee1d29a526
2025-08-07 00:46:45 -07:00
aibrahim-oai
04b40ac179 Move used tokens next to the hints (#1930)
Before:

<img width="341" height="58" alt="image"
src="https://github.com/user-attachments/assets/3b209e42-1157-4f7b-8385-825c865969e8"
/>

After:

<img width="490" height="53" alt="image"
src="https://github.com/user-attachments/assets/5d99b9bc-6ac2-4748-b62c-c0c3217622c2"
/>
2025-08-07 00:45:47 -07:00
easong-openai
4e29c4afe4 Add a UI hint when you press @ (#1903)
This will make @ more discoverable (even though it is currently not
super useful; IMO it should be used to bring files into context from
outside the CWD).

---------

Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
2025-08-07 07:41:48 +00:00
aibrahim-oai
fff2bb39f9 change todo (#1925)
<img width="746" height="135" alt="image"
src="https://github.com/user-attachments/assets/1605b2fb-aa3a-4337-b9e9-93f6ff1361c5"
/>


<img width="747" height="126" alt="image"
src="https://github.com/user-attachments/assets/6b4366bd-8548-4d29-8cfa-cd484d9a2359"
/>
2025-08-07 00:01:38 -07:00
ae
f0fe61c667 feat: use ctrl c in interrupt hint (#1926)
https://chatgpt.com/codex/tasks/task_i_689441c33e1c832c85ceda166dab5d33
2025-08-06 23:22:58 -07:00
ae
935ad5c6f2 feat: >_ (#1924) 2025-08-06 22:54:54 -07:00
aibrahim-oai
ec20e84d80 Change the UI of apply patch (#1907)
<img width="487" height="108" alt="image"
src="https://github.com/user-attachments/assets/3f6ffd56-36f6-40bc-b999-64279705416a"
/>

---------

Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
2025-08-07 05:25:41 +00:00
easong-openai
2098b40369 Scrollable slash commands (#1830)
Scrollable slash commands. Part 1 of a multi-PR change.
2025-08-06 21:23:09 -07:00
aibrahim-oai
4971d54ca7 Show timing and token counts in status indicator (#1909)
## Summary
- track start time and cumulative tokens in status indicator
- display dim "(Ns • N tokens • Ctrl z to interrupt)" text after
animated Working header
- propagate token usage updates to status indicator views



https://github.com/user-attachments/assets/b73210c1-1533-40b5-b6c2-3c640029fd54


## Testing
- `just fmt`
- `just fix` *(fails: let expressions in this position are unstable)*
- `cargo test --all-features` *(fails: let expressions in this position
are unstable)*

------
https://chatgpt.com/codex/tasks/task_i_6893ec0d74a883218b94005172d7bc4c
2025-08-06 21:20:09 -07:00
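A small sketch of the dim suffix rendered after the animated "Working" header: elapsed seconds plus cumulative tokens. The struct and method names are illustrative.

```rust
use std::time::Instant;

struct StatusIndicator {
    started_at: Instant,
    total_tokens: u64,
}

impl StatusIndicator {
    fn hint(&self) -> String {
        let secs = self.started_at.elapsed().as_secs();
        format!("({secs}s • {} tokens • Ctrl z to interrupt)", self.total_tokens)
    }
}
```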
Gabriel Peal
8a990b5401 Migrate GitWarning to OnboardingScreen (#1915)
This paves the way to do per-directory approval settings
(https://github.com/openai/codex/pull/1912).

This also lets us pass a Config/ChatWidgetArgs into onboarding, which
can then mutate it and emit the ChatWidgetArgs it wants at the end, possibly
modified by said approval settings.

<img width="1180" height="428" alt="CleanShot 2025-08-06 at 19 30 55"
src="https://github.com/user-attachments/assets/4dcfda42-0f5e-4b6d-a16d-2597109cc31c"
/>
2025-08-06 22:39:07 -04:00
aibrahim-oai
a5e17cda6b Run command UI (#1897)
Edit how commands show:

<img width="243" height="119" alt="image"
src="https://github.com/user-attachments/assets/13d5608e-3b66-4b8d-8fe7-ce464310d85d"
/>
2025-08-07 00:10:59 +00:00
pap-openai
8a980399c5 fix cursor file name insert (#1896)
The cursor wasn't moving when inserting a file, so it ended up not at
the end of the filename after insertion.
This fixes it by moving the cursor to the end of the filename plus one trailing
space.


Example screenshot after selecting a file when typing `@`
<img width="823" height="268" alt="image"
src="https://github.com/user-attachments/assets/ec6e3741-e1ba-4752-89d2-11f14a2bd69f"
/>
2025-08-06 16:58:06 -07:00
pap-openai
af8c1cdf12 fix meta+b meta+f (option+left/right) (#1895)
Option+Left or Option+Right should move cursor to beginning/end of the
word.

We weren't handling what terminals send (on macOS) and were
therefore printing b or f instead of moving the cursor. We were actually falling into
the first match clause and returning a char insertion
(https://github.com/openai/codex/pull/1895/files#diff-6bf130cd00438cc27a38c5a4d9937a27cf9a324c191de4b74fc96019d362be6dL209).

Tested on Apple Terminal, iTerm, Ghostty
2025-08-06 16:16:47 -07:00
Gabriel Peal
2d5de795aa First pass at a TUI onboarding (#1876)
This sets up the scaffolding and basic flow for a TUI onboarding
experience. It covers sign in with ChatGPT, env auth, as well as some
safety guidance.

Next up:
1. Replace the git warning screen
2. Use this to configure default approval/sandbox modes


Note the shimmer flashes are from me slicing the video, not jank.

https://github.com/user-attachments/assets/0fbe3479-fdde-41f3-87fb-a7a83ab895b8
2025-08-06 18:22:14 -04:00