Commit Graph

82 Commits

Author SHA1 Message Date
Jeremy Rose
76dc3f6054 show diff output in the pager (#2568)
this shows `/diff` output in an overlay like the transcript, instead of
dumping it into history.



https://github.com/user-attachments/assets/48e79b65-7f66-45dd-97b3-d5c627ac7349
2025-08-22 08:24:13 -07:00
Jeremy Rose
db934e438e read all AGENTS.md up to git root (#2532)
This updates our logic for AGENTS.md to match documented behavior, which
is to read all AGENTS.md files from cwd up to git root.
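
As a rough sketch of the documented behavior (not the actual codex-rs implementation), the lookup walks from the working directory up to the repository root and collects every AGENTS.md it finds along the way:

```
use std::path::{Path, PathBuf};

/// Collect AGENTS.md files from `cwd` up to (and including) the git root.
/// Returned outermost-first, so files closer to `cwd` can override broader ones.
fn collect_agents_files(cwd: &Path) -> Vec<PathBuf> {
    let mut found = Vec::new();
    let mut dir = Some(cwd);
    while let Some(d) = dir {
        let candidate = d.join("AGENTS.md");
        if candidate.is_file() {
            found.push(candidate);
        }
        // Stop once we reach the repository root.
        if d.join(".git").exists() {
            break;
        }
        dir = d.parent();
    }
    found.reverse();
    found
}
```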
2025-08-21 08:52:17 -07:00
easong-openai
8ad56be06e Parse and expose stream errors (#2540) 2025-08-21 01:15:24 -07:00
Jeremy Rose
9193eb6b53 show thinking in transcript (#2538)
record the full reasoning trace and show it in transcript mode
2025-08-20 17:09:46 -07:00
ae
250ae37c84 tui: link docs when no MCP servers configured (#2516) 2025-08-20 14:58:04 -07:00
ae
ee8c4ad23a feat: copy tweaks (#2502)
- For selectable options, use sentences starting in lowercase and not
ending with periods. To be honest I don't love this style, but better to
be consistent for now.
- Tweak some other strings.
- Put in more compelling suggestions on launch. Excited to put `/mcp` in
there next.
2025-08-20 07:26:14 +00:00
Michael Bolin
50c48e88f5 chore: upgrade to Rust 1.89 (#2465)
Codex created this PR from the following prompt:

> upgrade this entire repo to Rust 1.89. Note that this requires
updating codex-rs/rust-toolchain.toml as well as the workflows in
.github/. Make sure that things are "clippy clean" as this change will
likely uncover new Clippy errors. `just fmt` and `cargo clippy --tests`
are sufficient to check for correctness

Note this modifies a lot of lines because it folds nested `if`
statements using `&&`.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2465).
* #2467
* __->__ #2465
2025-08-19 13:22:02 -07:00
ae
783654e218 feat: move session ID bullet in /status (#2462)
## Summary
- just want to declutter the top level workspace section

## Testing
- `just fmt`
- `just fix` *(fails: error[E0658] let expressions in this position are
unstable in codex-protocol)*
- `cargo test -p codex-tui` *(fails: error[E0658] let expressions in
this position are unstable)*

------
https://chatgpt.com/codex/tasks/task_i_68a4a7311dbc832caf14f52e0fbaf9c2
2025-08-19 11:27:05 -07:00
Dylan
e7e5fe91c8 [tui] Support /mcp command (#2430)
## Summary
Adds a `/mcp` command to list active tools. We can extend this command
to allow configuration of MCP tools, but for now a simple list command
will help debug if your config.toml and your tools are working as
expected.
2025-08-19 09:00:31 -07:00
ae
da69d50c60 fix: stop using ANSI blue (#2421)
- One less color.
- Replaced with cyan which looks better next to other cyan components.
2025-08-18 16:02:25 +00:00
ae
5bce369c4d fix: clean up styles & colors and define in styles.md (#2401)
New style guide:

# Headers, primary, and secondary text

- **Headers:** Use `bold`. For markdown with various header levels, leave in the `#` signs.
- **Primary text:** Default.
- **Secondary text:** Use `dim`.

# Foreground colors

- **Default:** Most of the time, just use the default foreground color. `reset` can help get it back.
- **Selection:** Use ANSI `blue`. (Ed & AE want to make this cyan too, but we'll do that in a followup since it's riskier in different themes.)
- **User input tips and status indicators:** Use ANSI `cyan`.
- **Success and additions:** Use ANSI `green`.
- **Errors, failures and deletions:** Use ANSI `red`.
- **Codex:** Use ANSI `magenta`.

# Avoid

- Avoid custom colors because there's no guarantee that they'll contrast well or look good on various terminal color themes.
- Avoid ANSI `black`, `white`, `yellow` as foreground colors because the terminal theme will do a better job. (Use `reset` if you need to in order to get those.) The exception is if you need contrast rendering over a manually colored background.

(There are some rules to try to catch this in `clippy.toml`.)

# Testing

Tested in a variety of light and dark color themes in Terminal, iTerm2, and Ghostty.
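
Applied with ratatui, the guide above translates into roughly this kind of helper code (a minimal sketch; the function names are illustrative, not the codex-tui API):

```
use ratatui::style::{Color, Modifier, Style};
use ratatui::text::{Line, Span};

// Headers are bold; secondary text is dim; everything else keeps the default fg.
fn header(text: &str) -> Span<'_> {
    Span::styled(text, Style::default().add_modifier(Modifier::BOLD))
}

fn secondary(text: &str) -> Span<'_> {
    Span::styled(text, Style::default().add_modifier(Modifier::DIM))
}

// Green for success, red for failure, magenta for Codex itself.
fn status_line(ok: bool) -> Line<'static> {
    let (msg, color) = if ok { ("done", Color::Green) } else { ("failed", Color::Red) };
    Line::from(vec![
        Span::styled("codex ", Style::default().fg(Color::Magenta)),
        Span::styled(msg, Style::default().fg(color)),
    ])
}
```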
2025-08-18 08:26:29 -07:00
Jeremy Rose
7a80d3c96c replace /prompts with a rotating placeholder (#2314) 2025-08-15 19:37:10 -07:00
Jeremy Rose
1ad8ae2579 color the status letter in apply patch summary (#2337)
<img width="440" height="77" alt="Screenshot 2025-08-14 at 8 30 30 PM"
src="https://github.com/user-attachments/assets/c6169a3a-2e98-4ace-b7ee-918cf4368b7a"
/>
2025-08-15 20:25:48 +00:00
Michael Bolin
d262244725 fix: introduce codex-protocol crate (#2355) 2025-08-15 12:44:40 -07:00
Jeremy Rose
917e29803b tui: include optional full command line in history display (#2334)
Add env var to show the raw, unparsed command line under parsed
commands. When we have transcript mode we should show the full command
there, but this is useful for debugging.
2025-08-14 22:06:42 -07:00
pakrym-oai
5552688621 Format multiline commands (#2333)
<img width="966" height="729" alt="image"
src="https://github.com/user-attachments/assets/fa45b7e1-cd46-427f-b2bc-8501e9e4760b"
/>
<img width="797" height="530" alt="image"
src="https://github.com/user-attachments/assets/6993eec5-e157-4df7-b558-15643ad10d64"
/>
2025-08-14 19:49:42 -07:00
easong-openai
d0b907d399 re-implement session id in status (#2332)
Basically the same thing as https://github.com/openai/codex/pull/2297
2025-08-15 02:14:46 +00:00
Jeremy Rose
235987843c add a timer to running exec commands (#2321)
Sometimes I switch back to Codex and don't know how long a command has
been running.
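
A minimal sketch of the idea (names are illustrative): stamp the command with an `Instant` when it starts and render the elapsed time on every frame.

```
use std::time::Instant;

struct RunningCommand {
    started_at: Instant,
}

impl RunningCommand {
    /// e.g. "7s" or "2m 03s", redrawn on each tick while the command runs.
    fn elapsed_label(&self) -> String {
        let secs = self.started_at.elapsed().as_secs();
        if secs < 60 {
            format!("{secs}s")
        } else {
            format!("{}m {:02}s", secs / 60, secs % 60)
        }
    }
}
```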

<img width="744" height="462" alt="Screenshot 2025-08-14 at 3 30 07 PM"
src="https://github.com/user-attachments/assets/bd80947f-5a47-43e6-ad19-69c2995a2a29"
/>
2025-08-14 19:32:45 -04:00
Jeremy Rose
7038827bf4 fix bash commands being incorrectly quoted in display (#2313)
The "display format" of commands was sometimes producing incorrect
quoting like `echo foo '>' bar`, which is importantly different from the
actual command that was being run. This refactors ParsedCommand to have
a string in `cmd` instead of a vec, as a `vec` can't accurately capture
a full command.
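
Roughly, the change looks like this (field names are assumptions, not the exact codex-rs types); the point is that a token list can't tell a literal `>` argument apart from a redirection, while the original string can:

```
// Before (illustrative): display code re-joined the tokens, which forced
// quoting like `echo foo '>' bar` and misrepresented the redirection.
struct ParsedCommandBefore {
    cmd: Vec<String>, // ["echo", "foo", ">", "bar"]
}

// After (illustrative): keep the command exactly as it will be run.
struct ParsedCommandAfter {
    cmd: String, // "echo foo > bar"
}
```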
2025-08-14 17:08:29 -04:00
Jeremy Rose
585f7b0679 HistoryCell is a trait (#2283)
Refactors HistoryCell to be a trait instead of an enum. Also collapses
the many "degenerate" HistoryCell variants, which were just a store of
lines, into a single PlainHistoryCell type.

The goal here is to allow more ways of rendering history cells (e.g.
expanded/collapsed/"live"), and I expect we will return to more varied
types of HistoryCell as we develop this area.
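
A rough sketch of the post-refactor shape (trait and method names are assumptions, not the actual codex-tui API):

```
use ratatui::text::Line;

trait HistoryCell {
    /// Each cell knows how to render itself, which leaves room for
    /// expanded/collapsed/"live" variants later.
    fn display_lines(&self) -> Vec<Line<'static>>;
}

/// The formerly "degenerate" enum variants collapse into one plain type
/// that just stores pre-rendered lines.
struct PlainHistoryCell {
    lines: Vec<Line<'static>>,
}

impl HistoryCell for PlainHistoryCell {
    fn display_lines(&self) -> Vec<Line<'static>> {
        self.lines.clone()
    }
}
```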
2025-08-14 14:10:05 -04:00
Jeremy Rose
bb9ce3cb78 tui: standardize tree prefix glyphs to └ (#2274)
Replace mixed `⎿` and `L` prefixes with `└` in TUI rendering.

<img width="454" height="659" alt="Screenshot 2025-08-13 at 4 02 03 PM"
src="https://github.com/user-attachments/assets/61c9c7da-830b-4040-bb79-a91be90870ca"
/>
2025-08-13 19:14:03 -04:00
aibrahim-oai
cbf972007a use modifier dim instead of gray and .dim (#2273)
The gray color doesn't work very well with white terminals, and `.dim`
doesn't have an effect for some reason.
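
For illustration, the swap looks something like this in ratatui (helper names are made up):

```
use ratatui::style::{Color, Modifier, Style};
use ratatui::text::Span;

// Before: a hard-coded gray that can wash out on light terminal themes.
fn secondary_before(text: &str) -> Span<'_> {
    Span::styled(text, Style::default().fg(Color::Gray))
}

// After: keep the terminal's default foreground and apply the DIM modifier.
fn secondary_after(text: &str) -> Span<'_> {
    Span::styled(text, Style::default().add_modifier(Modifier::DIM))
}
```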

after:
<img width="1080" height="149" alt="image"
src="https://github.com/user-attachments/assets/26c0f8bb-550d-4d71-bd06-11b3189bc1d7"
/>

Before
<img width="1077" height="186" alt="image"
src="https://github.com/user-attachments/assets/b1fba0c7-bc4d-4da1-9754-6c0a105e8cd1"
/>
2025-08-13 22:50:50 +00:00
easong-openai
cb312dfdb4 Update header from Working once batched commands are done (#2249)
Update commands from Working to Complete or Failed after they're done

before:
<img width="725" height="332" alt="image"
src="https://github.com/user-attachments/assets/fb93d21f-5c4a-42bc-a154-14f4fe99d5f9"
/>

after:
<img width="464" height="65" alt="image"
src="https://github.com/user-attachments/assets/15ec7c3b-355f-473e-9a8e-eab359ec5f0d"
/>
2025-08-13 11:10:48 -07:00
easong-openai
6340acd885 Re-add markdown streaming (#2029)
Wait for newlines, then render markdown on a line by line basis. Word wrap it for the current terminal size and then spit it out line by line into the UI. Also adds tests and fixes some UI regressions.
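
A minimal sketch of the line-buffering idea (names are illustrative, not the codex-tui implementation): accumulate streamed deltas and only hand complete lines to the markdown renderer.

```
#[derive(Default)]
struct LineBuffer {
    pending: String,
}

impl LineBuffer {
    /// Push a streamed text delta; returns any newly completed lines,
    /// which can then be markdown-rendered and word-wrapped for the
    /// current terminal width before being inserted into the UI.
    fn push_delta(&mut self, delta: &str) -> Vec<String> {
        self.pending.push_str(delta);
        let mut complete = Vec::new();
        while let Some(idx) = self.pending.find('\n') {
            let line: String = self.pending.drain(..=idx).collect();
            complete.push(line.trim_end_matches('\n').to_string());
        }
        complete
    }
}
```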
2025-08-12 17:37:28 -07:00
Ed Bayes
eaa3969e68 Show "Update plan" in TUI plan updates (#2192)
## Summary
- Display "Update plan" instead of "Update to do" when the plan is
updated in the TUI

## Testing
- `just fmt`
- `just fix` *(fails: E0658 `let` expressions in this position are
unstable)*
- `cargo test --all-features` *(fails: E0658 `let` expressions in this
position are unstable)*

------
https://chatgpt.com/codex/tasks/task_i_6897f78fc5908322be488f02db42a5b9
2025-08-12 13:26:57 -07:00
aibrahim-oai
336952ae2e TUI: Show apply patch diff. Stack: [2/2] (#2050)
Show the diff for apply patch

<img width="801" height="345" alt="image"
src="https://github.com/user-attachments/assets/a15d6112-e83e-4612-a2bd-43285689a358"
/>


Stack:
-> #2050
#2049
2025-08-11 18:32:59 -07:00
Gabriel Peal
6220e8ac2e [TUI] Split multiline commands (#2202)
Fixes:
<img width="5084" height="1160" alt="CleanShot 2025-08-11 at 16 02 55"
src="https://github.com/user-attachments/assets/ccdbf39d-dc8b-4214-ab65-39ac89841d1c"
/>
2025-08-11 19:11:46 -04:00
Gabriel Peal
4368f075d0 [3/3] Merge sequential exec commands (#2110)
This PR merges and dedupes sequential exec cells so they stack neatly on
sequential lines rather than separate blocks.

This is particularly useful because the model will often `sed` 200 lines
of a file multiple times in a row, and this nicely collapses them.


https://github.com/user-attachments/assets/04cccda5-e2ba-4a97-a613-4547587aa15c

Part 1: https://github.com/openai/codex/pull/2095
Part 2: https://github.com/openai/codex/pull/2097
2025-08-11 12:40:12 -07:00
aibrahim-oai
85e4f564a3 Chores: Refactor approval Patch UI. Stack: [1/2] (#2049)
- Moved the logic for the apply patch in its own file

Stack:
#2050
-> #2049
2025-08-11 19:31:34 +00:00
Gabriel Peal
b76a562c49 [2/3] Retain the TUI last exec history cell so that it can be updated by the next tool call (#2097)
Right now, every time an exec ends, we emit it to history which makes it
immutable. In order to be able to update or merge successive tool calls
(which will be useful after https://github.com/openai/codex/pull/2095),
we need to retain it as the active cell.

This also changes the cell to contain the metadata necessary to render
it so it can be updated rather than baking in the final text lines when
the cell is created.


Part 1: https://github.com/openai/codex/pull/2095
Part 3: https://github.com/openai/codex/pull/2110
2025-08-11 14:43:58 -04:00
Gabriel Peal
7f6408720b [1/3] Parse exec commands and format them more nicely in the UI (#2095)
# Note for reviewers
The bulk of this PR is in the new file, `parse_command.rs`. This file
is designed to be written test-first (TDD) and implemented with Codex.
Don't worry about reviewing the code; just review the unit tests (if you
want). If any cases are missing, we'll add more tests and have Codex fix them.

I think the best approach will be to land and iterate. I have some
follow-ups I want to do after this lands. The next PR after this will
let us merge (and dedupe) multiple sequential cells of the same type, such as
multiple read commands. The deduping will also be important because the
model often reads the same file multiple times in a row in chunks.

===

This PR formats common commands like reading, formatting, testing, etc.
more nicely.

It tries to extract things like file names and tests, and falls back to
the raw command if it can't. It also only shows stdout/stderr if the
command failed.
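
A toy version of the idea (the real `parse_command.rs` covers far more cases, with different type names): classify a handful of well-known commands and fall back to the raw string otherwise.

```
enum ParsedCommand {
    Read { file: String },
    Search { query: String },
    Unknown { cmd: String },
}

fn parse_command(cmd: &str) -> ParsedCommand {
    let tokens: Vec<&str> = cmd.split_whitespace().collect();
    match tokens.as_slice() {
        ["cat", file] | ["head", file] | ["tail", file] => ParsedCommand::Read {
            file: (*file).to_string(),
        },
        ["rg", query, ..] | ["grep", query, ..] => ParsedCommand::Search {
            query: (*query).to_string(),
        },
        // Anything unrecognized is shown as the raw command line.
        _ => ParsedCommand::Unknown { cmd: cmd.to_string() },
    }
}
```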

<img width="770" height="238" alt="CleanShot 2025-08-09 at 16 05 15"
src="https://github.com/user-attachments/assets/0ead179a-8910-486b-aa3d-7d26264d751e"
/>
<img width="348" height="158" alt="CleanShot 2025-08-09 at 16 05 32"
src="https://github.com/user-attachments/assets/4302681b-5e87-4ff3-85b4-0252c6c485a9"
/>
<img width="834" height="324" alt="CleanShot 2025-08-09 at 16 05 56 2"
src="https://github.com/user-attachments/assets/09fb3517-7bd6-40f6-a126-4172106b700f"
/>

Part 2: https://github.com/openai/codex/pull/2097
Part 3: https://github.com/openai/codex/pull/2110
2025-08-11 14:26:15 -04:00
Gabriel Peal
9d8d7d8704 Middle-truncate tool output and show more lines (#2096)
Command output can contain important bits of information at the
beginning or end. This shows a bit more output and truncates in the
middle.

This will work better paired with
https://github.com/openai/codex/pull/2095 which will omit output for
simple successful reads/searches/etc.
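
A sketch of the approach (the exact limits are made up): keep a fixed number of lines from the head and tail and replace the middle with an elision marker.

```
/// Keep the first and last `keep` lines; elide the middle.
fn middle_truncate(lines: &[String], keep: usize) -> Vec<String> {
    if lines.len() <= keep * 2 + 1 {
        return lines.to_vec();
    }
    let mut out = Vec::with_capacity(keep * 2 + 1);
    out.extend_from_slice(&lines[..keep]);
    out.push(format!("… {} lines omitted …", lines.len() - keep * 2));
    out.extend_from_slice(&lines[lines.len() - keep..]);
    out
}
```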

<img width="1262" height="496" alt="CleanShot 2025-08-09 at 13 01 05"
src="https://github.com/user-attachments/assets/9d989eb6-f81e-4118-9745-d20728eeef71"
/>


------
https://chatgpt.com/codex/tasks/task_i_68978cd19f9c832cac4975e44dcd99a0
2025-08-11 00:32:56 -04:00
Gabriel Peal
c3a8ab8511 Fix multiline exec command rendering (#2023)
With Ratatui, if a single line contains newlines, it increments y but
not x, so each subsequent line continues from the x position where the
previous line ended.
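
The fix amounts to never handing Ratatui a `Line` whose content still contains `\n`; a minimal sketch:

```
use ratatui::text::Line;

/// Split multiline command text so each row becomes its own `Line`.
fn command_lines(text: &str) -> Vec<Line<'static>> {
    text.lines().map(|l| Line::from(l.to_string())).collect()
}
```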

Before
<img width="2010" height="376" alt="CleanShot 2025-08-08 at 09 13 13"
src="https://github.com/user-attachments/assets/09feefbd-c5ee-4631-8967-93ab108c352a"
/>
After
<img width="1002" height="364" alt="CleanShot 2025-08-08 at 09 11 54"
src="https://github.com/user-attachments/assets/a58b47cf-777f-436a-93d9-ab277046a577"
/>
2025-08-08 13:52:24 -04:00
easong-openai
52e12f2b6c Revert "Streaming markdown (#1920)" (#1981)
This reverts commit 2b7139859e.
2025-08-08 01:38:39 +00:00
easong-openai
2b7139859e Streaming markdown (#1920)
We wait until we have an entire line (ending in a newline), then format it with markdown and stream it into the UI. This increases time to first token but is the right thing to do with our current rendering model IMO. It also lets us add word wrapping!
2025-08-07 18:26:47 -07:00
Michael Bolin
cd06b28d84 fix: default to credits from ChatGPT auth, when possible (#1971)
Uses this rough strategy for authentication:

```
if auth.json
	if auth.json.API_KEY is NULL # new auth
		CHAT
	else # old auth
		if plus or pro or team
			CHAT
		else 
			API_KEY
		
else OPENAI_API_KEY
```
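
Translated into Rust, the strategy above looks roughly like this (type and field names are assumptions, not the actual codex-rs code):

```
enum AuthMode {
    ChatGpt,
    ApiKey(String),
}

struct AuthJson {
    api_key: Option<String>,
    plan: Option<String>, // e.g. "plus", "pro", "team"
}

fn choose_auth(auth_json: Option<AuthJson>, env_api_key: Option<String>) -> Option<AuthMode> {
    match auth_json {
        Some(auth) => match auth.api_key {
            // New-style auth.json: no API key stored, so use ChatGPT credits.
            None => Some(AuthMode::ChatGpt),
            Some(key) => match auth.plan.as_deref() {
                // Old-style auth.json on a Plus/Pro/Team plan still prefers ChatGPT.
                Some("plus" | "pro" | "team") => Some(AuthMode::ChatGpt),
                _ => Some(AuthMode::ApiKey(key)),
            },
        },
        // No auth.json at all: fall back to OPENAI_API_KEY from the environment.
        None => env_api_key.map(AuthMode::ApiKey),
    }
}
```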

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/1970).
* __->__ #1971
* #1970
* #1966
* #1965
* #1962
2025-08-07 18:00:31 -07:00
ae
12d29c2779 feat: add tip to upgrade to ChatGPT plan (#1938) 2025-08-07 11:10:13 +00:00
ae
c4dc6a80bf feat: improve output of /status (#1936)
Now it looks like this:
```
/status
📂 Workspace
  • Path: ~/code/codex/codex-rs
  • Approval Mode: on-request
  • Sandbox: workspace-write

👤 Account
  • Signed in with ChatGPT
  • Login: example@example.com
  • Plan: Pro

🧠 Model
  • Name: ?!?!?!?!?!
  • Provider: OpenAI

📊 Token Usage
  • Input: 11940 (+ 7999 cached)
  • Output: 2639
  • Total: 14579
```
2025-08-07 12:02:58 +01:00
ae
7c20160676 feat: /prompts slash command (#1937)
- Shows several example prompts which include @-mentions 

------
https://chatgpt.com/codex/tasks/task_i_6894779ba8cc832ca0c871d17ee06aae
2025-08-07 11:55:59 +01:00
Ed Bayes
1e4bf81653 Update copy (#1935)
Updated copy

---------

Co-authored-by: pap-openai <pap@openai.com>
2025-08-07 03:29:33 -07:00
aibrahim-oai
5589c6089b approval ui (#1933)
Asking for approval:

<img width="269" height="41" alt="image"
src="https://github.com/user-attachments/assets/b9ced569-3297-4dae-9ce7-0b015c9e14ea"
/>

Allow:

<img width="400" height="31" alt="image"
src="https://github.com/user-attachments/assets/92056b22-efda-4d49-854d-e2943d5fcf17"
/>

Reject:

<img width="372" height="30" alt="image"
src="https://github.com/user-attachments/assets/be9530a9-7d41-4800-bb42-abb9a24fc3ea"
/>

Always Approve:

<img width="410" height="36" alt="image"
src="https://github.com/user-attachments/assets/acf871ba-4c26-4501-b303-7956d0151754"
/>
2025-08-07 02:02:56 -07:00
Ed Bayes
20084facfe Add spinner animation to TUI status indicator (#1917)
## Summary
- add a pulsing dot loader before the shimmering `Working` label in the
status indicator widget and include a small test asserting the spinner
character is rendered
- also fix a small bug in the ran command header by adding a space
between the  and `Ran command`


https://github.com/user-attachments/assets/6768c9d2-e094-49cb-ad51-44bcac10aa6f

## Testing
- `just fmt`
- `just fix` *(failed: E0658 `let` expressions in core/src/client.rs)*
- `cargo test --all-features` *(failed: E0658 `let` expressions in
core/src/client.rs)*

------
https://chatgpt.com/codex/tasks/task_i_68941bffdb948322b0f4190bc9dbe7f6

---------

Co-authored-by: aibrahim-oai <aibrahim@openai.com>
2025-08-07 08:45:04 +00:00
ae
0334476894 feat: parse info from auth.json and show in /status (#1923)
- `/status` renders
    ```
    signed in with chatgpt
      login: example@example.com
      plan: plus
    ```
- Setup for using this info in a few more places.

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-08-07 01:27:45 -07:00
ae
28395df957 [fix] fix absolute and % token counts (#1931)
- For absolute, use non-cached input + output.
- For estimating what % of the model's context window is used, we need
to account for reasoning output tokens from prior turns being dropped
from the context window. We approximate this here by subtracting
reasoning output tokens from the total. This will be off for the current
turn and pending function calls. We can improve it later.
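
In other words, the estimate is roughly (names are illustrative):

```
/// Approximate how much of the model's context window is in use.
fn context_percent_used(total: u64, reasoning_output: u64, context_window: u64) -> f64 {
    // Reasoning output from prior turns is dropped from the context window,
    // so subtract it from the running total before comparing.
    let effective = total.saturating_sub(reasoning_output);
    100.0 * effective as f64 / context_window as f64
}
```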
2025-08-07 08:13:36 +00:00
Ed Bayes
eb80614a7c Tint chat composer background (#1921)
## Summary
- give the chat composer a subtle custom background and apply it across
the full area drawn

<img width="1008" height="718" alt="composer-bg"
src="https://github.com/user-attachments/assets/4b0f7f69-722a-438a-b4e9-0165ae8865a6"
/>

- update turn interrupted to be more human readable
<img width="648" height="170" alt="CleanShot 2025-08-06 at 22 44 47@2x"
src="https://github.com/user-attachments/assets/8d35e53a-bbfa-48e7-8612-c280a54e01dd"
/>

## Testing
- `cargo test --all-features` *(fails: `let` expressions in
`core/src/client.rs` require newer rustc)*
- `just fix` *(fails: `let` expressions in `core/src/client.rs` require
newer rustc)*

------
https://chatgpt.com/codex/tasks/task_i_68941f32c1008322bbcc39ee1d29a526
2025-08-07 00:46:45 -07:00
aibrahim-oai
fff2bb39f9 change todo (#1925)
<img width="746" height="135" alt="image"
src="https://github.com/user-attachments/assets/1605b2fb-aa3a-4337-b9e9-93f6ff1361c5"
/>


<img width="747" height="126" alt="image"
src="https://github.com/user-attachments/assets/6b4366bd-8548-4d29-8cfa-cd484d9a2359"
/>
2025-08-07 00:01:38 -07:00
aibrahim-oai
ec20e84d80 Change the UI of apply patch (#1907)
<img width="487" height="108" alt="image"
src="https://github.com/user-attachments/assets/3f6ffd56-36f6-40bc-b999-64279705416a"
/>

---------

Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
2025-08-07 05:25:41 +00:00
aibrahim-oai
a5e17cda6b Run command UI (#1897)
Change how commands are shown:

<img width="243" height="119" alt="image"
src="https://github.com/user-attachments/assets/13d5608e-3b66-4b8d-8fe7-ce464310d85d"
/>
2025-08-07 00:10:59 +00:00
ae
6cef86f05b feat: update launch screen (#1881)
- Updates the launch screen to:
  ```
  >_ You are using OpenAI Codex in ~/code/codex/codex-rs
  
   Try one of the following commands to get started:
  
   1. /init - Create an AGENTS.md file with instructions for Codex
   2. /status - Show current session configuration and token usage
   3. /compact - Compact the chat history
   4. /new - Start a new chat
   ```
- These aren't the perfect commands, but as more land soon we can
update.
- We should also add logic later to make /init only show when there's no
existing AGENTS.md.
- Majorly need to iterate on copy.

<img width="905" height="769" alt="image"
src="https://github.com/user-attachments/assets/5912939e-fb0e-4e76-94ff-785261e2d6ee"
/>
2025-08-06 14:36:48 -07:00
Jeremy Rose
081caa5a6b show a transient history cell for commands (#1824)
Adds a new "active history cell" for history bits that need to render
more than once before they're inserted into the history. Only used for
commands right now.


https://github.com/user-attachments/assets/925f01a0-e56d-4613-bc25-fdaa85d8aea5

---------

Co-authored-by: easong-openai <easong@openai.com>
2025-08-06 12:03:45 -07:00