//! Bottom pane: shows the ChatComposer or a BottomPaneView, if one is active.

use crate::app_event_sender::AppEventSender;
use crate::tui::FrameRequester;
use crate::user_approval_widget::ApprovalRequest;
use bottom_pane_view::BottomPaneView;
use codex_core::protocol::TokenUsage;
use codex_file_search::FileMatch;
use crossterm::event::KeyEvent;
use ratatui::buffer::Buffer;
use ratatui::layout::Rect;
use ratatui::widgets::WidgetRef;

mod approval_modal_view;
mod bottom_pane_view;
mod chat_composer;
mod chat_composer_history;
mod command_popup;
mod file_search_popup;
mod list_selection_view;
mod popup_consts;
mod scroll_state;
mod selection_popup_common;
mod status_indicator_view;
mod textarea;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub(crate) enum CancellationEvent {
    Ignored,
    Handled,
}

pub(crate) use chat_composer::ChatComposer;
pub(crate) use chat_composer::InputResult;

use approval_modal_view::ApprovalModalView;
pub(crate) use list_selection_view::SelectionAction;
pub(crate) use list_selection_view::SelectionItem;
use status_indicator_view::StatusIndicatorView;

/// Pane displayed in the lower half of the chat UI.
pub(crate) struct BottomPane {
    /// Composer is retained even when a BottomPaneView is displayed so its
    /// input state is preserved when the view is closed.
    composer: ChatComposer,

    /// If present, this is displayed instead of the `composer`.
    active_view: Option<Box<dyn BottomPaneView>>,

    app_event_tx: AppEventSender,
    frame_requester: FrameRequester,

    has_input_focus: bool,
    is_task_running: bool,
    ctrl_c_quit_hint: bool,

    /// True if the active view is the StatusIndicatorView that replaces the
    /// composer during a running task.
    status_view_active: bool,
}

pub(crate) struct BottomPaneParams {
    pub(crate) app_event_tx: AppEventSender,
    pub(crate) frame_requester: FrameRequester,
    pub(crate) has_input_focus: bool,
    pub(crate) enhanced_keys_supported: bool,
    pub(crate) placeholder_text: String,
}

impl BottomPane {
    const BOTTOM_PAD_LINES: u16 = 2;

    pub fn new(params: BottomPaneParams) -> Self {
        let enhanced_keys_supported = params.enhanced_keys_supported;
        Self {
            composer: ChatComposer::new(
                params.has_input_focus,
                params.app_event_tx.clone(),
                enhanced_keys_supported,
                params.placeholder_text,
            ),
            active_view: None,
            app_event_tx: params.app_event_tx,
            frame_requester: params.frame_requester,
            has_input_focus: params.has_input_focus,
            is_task_running: false,
            ctrl_c_quit_hint: false,
            status_view_active: false,
        }
    }

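    /// Height (rows) the pane asks for: the active view's desired height, or
    /// the composer's when no view is shown, plus `BOTTOM_PAD_LINES` of padding.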
    pub fn desired_height(&self, width: u16) -> u16 {
        let view_height = if let Some(view) = self.active_view.as_ref() {
            view.desired_height(width)
        } else {
            self.composer.desired_height(width)
        };

        view_height.saturating_add(Self::BOTTOM_PAD_LINES)
    }

    pub fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
        // Hide the cursor whenever an overlay view is active (e.g. the
        // status indicator shown while a task is running, or approval modal).
        // In these states the textarea is not interactable, so we should not
        // show its caret.
        if self.active_view.is_some() {
            None
        } else {
            self.composer.cursor_pos(area)
        }
    }

    /// Forward a key event to the active view or the composer.
    pub fn handle_key_event(&mut self, key_event: KeyEvent) -> InputResult {
        if let Some(mut view) = self.active_view.take() {
            view.handle_key_event(self, key_event);
            if !view.is_complete() {
                self.active_view = Some(view);
            } else if self.is_task_running {
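                // The view is done with this key event but a task is still
                // running: restore the status indicator so the "Working"
                // spinner stays visible instead of returning to the composer.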
                let mut v = StatusIndicatorView::new(
                    self.app_event_tx.clone(),
                    self.frame_requester.clone(),
                );
                v.update_text("waiting for model".to_string());
                self.active_view = Some(Box::new(v));
                self.status_view_active = true;
            }
            self.request_redraw();
            InputResult::None
        } else {
            let (input_result, needs_redraw) = self.composer.handle_key_event(key_event);
            if needs_redraw {
                self.request_redraw();
            }
            input_result
        }
    }

    /// Handle Ctrl-C in the bottom pane. If a modal view is active it gets a
    /// chance to consume the event (e.g. to dismiss itself).
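    /// Returns `CancellationEvent::Ignored` when no view is active.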
    pub(crate) fn on_ctrl_c(&mut self) -> CancellationEvent {
        let mut view = match self.active_view.take() {
            Some(view) => view,
            None => return CancellationEvent::Ignored,
        };

        let event = view.on_ctrl_c(self);
        match event {
            CancellationEvent::Handled => {
                if !view.is_complete() {
                    self.active_view = Some(view);
                } else if self.is_task_running {
                    // Modal aborted but task still running – restore status indicator.
                    let mut v = StatusIndicatorView::new(
                        self.app_event_tx.clone(),
                        self.frame_requester.clone(),
                    );
                    v.update_text("waiting for model".to_string());
                    self.active_view = Some(Box::new(v));
                    self.status_view_active = true;
                }
                self.show_ctrl_c_quit_hint();
            }
            CancellationEvent::Ignored => {
                self.active_view = Some(view);
            }
        }
        event
    }

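    /// Forward pasted text to the composer. Ignored while an overlay view is active.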
    pub fn handle_paste(&mut self, pasted: String) {
        if self.active_view.is_none() {
            let needs_redraw = self.composer.handle_paste(pasted);
            if needs_redraw {
                self.request_redraw();
            }
        }
    }

    pub(crate) fn insert_str(&mut self, text: &str) {
        self.composer.insert_str(text);
        self.request_redraw();
    }

    pub(crate) fn show_ctrl_c_quit_hint(&mut self) {
        self.ctrl_c_quit_hint = true;
        self.composer
            .set_ctrl_c_quit_hint(true, self.has_input_focus);
        self.request_redraw();
    }

    pub(crate) fn clear_ctrl_c_quit_hint(&mut self) {
        if self.ctrl_c_quit_hint {
            self.ctrl_c_quit_hint = false;
            self.composer
                .set_ctrl_c_quit_hint(false, self.has_input_focus);
            self.request_redraw();
        }
    }

    pub(crate) fn ctrl_c_quit_hint_visible(&self) -> bool {
        self.ctrl_c_quit_hint
    }

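    /// Mark a task as started or finished. Starting a task installs the status
    /// indicator view (unless another view is already active); finishing one
    /// drops the status view but keeps modal views such as approval dialogs.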
    pub fn set_task_running(&mut self, running: bool) {
        self.is_task_running = running;

        if running {
            if self.active_view.is_none() {
                self.active_view = Some(Box::new(StatusIndicatorView::new(
                    self.app_event_tx.clone(),
                    self.frame_requester.clone(),
                )));
                self.status_view_active = true;
            }
            self.request_redraw();
        } else {
            // Drop the status view when a task completes, but keep other
            // modal views (e.g. approval dialogs).
            if let Some(mut view) = self.active_view.take() {
                if !view.should_hide_when_task_is_done() {
                    self.active_view = Some(view);
                }
                self.status_view_active = false;
            }
        }
    }

    /// Show a generic list selection view with the provided items.
    pub(crate) fn show_selection_view(
        &mut self,
        title: String,
        subtitle: Option<String>,
        footer_hint: Option<String>,
        items: Vec<SelectionItem>,
    ) {
        let view = list_selection_view::ListSelectionView::new(
            title,
            subtitle,
            footer_hint,
            items,
            self.app_event_tx.clone(),
        );
        self.active_view = Some(Box::new(view));
        self.status_view_active = false;
        self.request_redraw();
    }

    /// Update the live status text shown while a task is running.
    /// If a modal view is active (i.e., not the status indicator), this is a no-op.
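    /// Illustrative call (sketch, not a doctest):
    /// ```ignore
    /// pane.update_status_text("waiting for model".to_string());
    /// ```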
    pub(crate) fn update_status_text(&mut self, text: String) {
        if !self.is_task_running || !self.status_view_active {
            return;
        }
        if let Some(mut view) = self.active_view.take() {
            view.update_status_text(text);
            self.active_view = Some(view);
            self.request_redraw();
        }
    }

    pub(crate) fn composer_is_empty(&self) -> bool {
        self.composer.is_empty()
    }

    pub(crate) fn is_task_running(&self) -> bool {
        self.is_task_running
    }

    /// Update the *context-window remaining* indicator in the composer. This
    /// is forwarded directly to the underlying `ChatComposer`.
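    /// Illustrative call (sketch, not a doctest), where `total` and `last` are
    /// `TokenUsage` values from the latest token-count update and `200_000` is
    /// an arbitrary example context window:
    /// ```ignore
    /// pane.set_token_usage(total, last, Some(200_000));
    /// ```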
    pub(crate) fn set_token_usage(
        &mut self,
        total_token_usage: TokenUsage,
        last_token_usage: TokenUsage,
        model_context_window: Option<u64>,
    ) {
        self.composer
            .set_token_usage(total_token_usage, last_token_usage, model_context_window);
        self.request_redraw();
    }

    /// Called when the agent requests user approval.
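    /// Illustrative call (sketch, not a doctest), mirroring the `exec_request()`
    /// helper used in the tests below:
    /// ```ignore
    /// pane.push_approval_request(ApprovalRequest::Exec {
    ///     id: "1".to_string(),
    ///     command: vec!["echo".into(), "ok".into()],
    ///     reason: None,
    /// });
    /// ```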
    pub fn push_approval_request(&mut self, request: ApprovalRequest) {
        let request = if let Some(view) = self.active_view.as_mut() {
            match view.try_consume_approval_request(request) {
                Some(request) => request,
                None => {
                    self.request_redraw();
                    return;
                }
            }
        } else {
            request
        };

        // Otherwise create a new approval modal overlay.
        let modal = ApprovalModalView::new(request, self.app_event_tx.clone());
        self.active_view = Some(Box::new(modal));
        self.status_view_active = false;
        self.request_redraw()
    }

    /// Schedule a redraw of the bottom pane on the next frame.
    pub(crate) fn request_redraw(&self) {
        self.frame_requester.schedule_frame();
    }

    // --- History helpers ---
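    //
    // The composer keeps newly submitted messages in memory as a local history;
    // when the user scrolls past the end of it, older entries are requested by
    // (log_id, offset) and delivered back through `on_history_entry_response`.
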
    pub(crate) fn set_history_metadata(&mut self, log_id: u64, entry_count: usize) {
        self.composer.set_history_metadata(log_id, entry_count);
    }

    pub(crate) fn on_history_entry_response(
        &mut self,
        log_id: u64,
        offset: usize,
        entry: Option<String>,
    ) {
        let updated = self
            .composer
            .on_history_entry_response(log_id, offset, entry);

        if updated {
            self.request_redraw();
        }
    }

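    /// Forward the results of an asynchronous `@` file search to the composer.
    /// `query` is the search string these matches were computed for.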
    pub(crate) fn on_file_search_result(&mut self, query: String, matches: Vec<FileMatch>) {
        self.composer.on_file_search_result(query, matches);
        self.request_redraw();
    }
}

impl WidgetRef for &BottomPane {
    fn render_ref(&self, area: Rect, buf: &mut Buffer) {
        if let Some(view) = &self.active_view {
            // Reserve bottom padding lines; keep at least 1 line for the view.
            let avail = area.height;
            if avail > 0 {
                let pad = BottomPane::BOTTOM_PAD_LINES.min(avail.saturating_sub(1));
                let view_rect = Rect {
                    x: area.x,
                    y: area.y,
                    width: area.width,
                    height: avail - pad,
                };
                view.render(view_rect, buf);
            }
        } else {
            let avail = area.height;
            if avail > 0 {
                let composer_rect = Rect {
                    x: area.x,
                    y: area.y,
                    width: area.width,
                    // Reserve bottom padding
                    height: avail - BottomPane::BOTTOM_PAD_LINES.min(avail.saturating_sub(1)),
                };
                (&self.composer).render_ref(composer_rect, buf);
            }
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::app_event::AppEvent;
    use ratatui::buffer::Buffer;
    use ratatui::layout::Rect;
    use tokio::sync::mpsc::unbounded_channel;

    fn exec_request() -> ApprovalRequest {
        ApprovalRequest::Exec {
            id: "1".to_string(),
            command: vec!["echo".into(), "ok".into()],
            reason: None,
        }
    }

    #[test]
    fn ctrl_c_on_modal_consumes_and_shows_quit_hint() {
        let (tx_raw, _rx) = unbounded_channel::<AppEvent>();
        let tx = AppEventSender::new(tx_raw);
        let mut pane = BottomPane::new(BottomPaneParams {
            app_event_tx: tx,
            frame_requester: crate::tui::FrameRequester::test_dummy(),
            has_input_focus: true,
            enhanced_keys_supported: false,
            placeholder_text: "Ask Codex to do anything".to_string(),
        });
        pane.push_approval_request(exec_request());
        assert_eq!(CancellationEvent::Handled, pane.on_ctrl_c());
        assert!(pane.ctrl_c_quit_hint_visible());
        assert_eq!(CancellationEvent::Ignored, pane.on_ctrl_c());
    }

    // live ring removed; related tests deleted.

    #[test]
    fn overlay_not_shown_above_approval_modal() {
        let (tx_raw, _rx) = unbounded_channel::<AppEvent>();
        let tx = AppEventSender::new(tx_raw);
        let mut pane = BottomPane::new(BottomPaneParams {
            app_event_tx: tx,
            frame_requester: crate::tui::FrameRequester::test_dummy(),
            has_input_focus: true,
            enhanced_keys_supported: false,
            placeholder_text: "Ask Codex to do anything".to_string(),
        });

        // Create an approval modal (active view).
        pane.push_approval_request(exec_request());

        // Render and verify the top row does not include an overlay.
        let area = Rect::new(0, 0, 60, 6);
        let mut buf = Buffer::empty(area);
        (&pane).render_ref(area, &mut buf);

        let mut r0 = String::new();
        for x in 0..area.width {
            r0.push(buf[(x, 0)].symbol().chars().next().unwrap_or(' '));
        }
        assert!(
            !r0.contains("Working"),
            "overlay should not render above modal"
        );
    }

    #[test]
    fn composer_not_shown_after_denied_if_task_running() {
        let (tx_raw, rx) = unbounded_channel::<AppEvent>();
        let tx = AppEventSender::new(tx_raw);
        let mut pane = BottomPane::new(BottomPaneParams {
            app_event_tx: tx.clone(),
            frame_requester: crate::tui::FrameRequester::test_dummy(),
            has_input_focus: true,
            enhanced_keys_supported: false,
            placeholder_text: "Ask Codex to do anything".to_string(),
        });

        // Start a running task so the status indicator replaces the composer.
        pane.set_task_running(true);

        // Push an approval modal (e.g., command approval) which should hide the status view.
        pane.push_approval_request(exec_request());

        // Simulate pressing 'n' (deny) on the modal.
        use crossterm::event::KeyCode;
        use crossterm::event::KeyEvent;
        use crossterm::event::KeyModifiers;
        pane.handle_key_event(KeyEvent::new(KeyCode::Char('n'), KeyModifiers::NONE));

        // After denial, since the task is still running, the status indicator
        // should be restored as the active view; the composer should NOT be visible.
        assert!(
            pane.status_view_active,
            "status view should be active after denial"
        );
        assert!(pane.active_view.is_some(), "active view should be present");

        // Render and ensure the top row includes the Working header instead of the composer.
        // Give the animation thread a moment to tick.
        std::thread::sleep(std::time::Duration::from_millis(120));
        let area = Rect::new(0, 0, 40, 3);
        let mut buf = Buffer::empty(area);
        (&pane).render_ref(area, &mut buf);
        let mut row0 = String::new();
        for x in 0..area.width {
            row0.push(buf[(x, 0)].symbol().chars().next().unwrap_or(' '));
        }
        assert!(
            row0.contains("Working"),
            "expected Working header after denial: {row0:?}"
        );

        // Drain the channel to avoid unused warnings.
        drop(rx);
    }

    #[test]
    fn status_indicator_visible_during_command_execution() {
        let (tx_raw, _rx) = unbounded_channel::<AppEvent>();
        let tx = AppEventSender::new(tx_raw);
        let mut pane = BottomPane::new(BottomPaneParams {
            app_event_tx: tx,
            frame_requester: crate::tui::FrameRequester::test_dummy(),
            has_input_focus: true,
            enhanced_keys_supported: false,
            placeholder_text: "Ask Codex to do anything".to_string(),
        });

        // Begin a task: show initial status.
        pane.set_task_running(true);

        // Allow some frames so the animation thread ticks.
        std::thread::sleep(std::time::Duration::from_millis(120));

        // Render and confirm the line contains the "Working" header.
        let area = Rect::new(0, 0, 40, 3);
        let mut buf = Buffer::empty(area);
        (&pane).render_ref(area, &mut buf);

        let mut row0 = String::new();
        for x in 0..area.width {
            row0.push(buf[(x, 0)].symbol().chars().next().unwrap_or(' '));
        }
        assert!(
            row0.contains("Working"),
            "expected Working header: {row0:?}"
        );
    }

    #[test]
    fn bottom_padding_present_for_status_view() {
        let (tx_raw, _rx) = unbounded_channel::<AppEvent>();
        let tx = AppEventSender::new(tx_raw);
        let mut pane = BottomPane::new(BottomPaneParams {
            app_event_tx: tx,
            frame_requester: crate::tui::FrameRequester::test_dummy(),
            has_input_focus: true,
            enhanced_keys_supported: false,
            placeholder_text: "Ask Codex to do anything".to_string(),
        });

        // Activate spinner (status view replaces composer) with no live ring.
        pane.set_task_running(true);

        // Use height == desired_height; expect 1 status row at top and 2 bottom padding rows.
        let height = pane.desired_height(30);
        assert!(
            height >= 3,
            "expected at least 3 rows with bottom padding; got {height}"
        );
        let area = Rect::new(0, 0, 30, height);
        let mut buf = Buffer::empty(area);
        (&pane).render_ref(area, &mut buf);

        // Top row contains the status header
        let mut top = String::new();
        for x in 0..area.width {
            top.push(buf[(x, 0)].symbol().chars().next().unwrap_or(' '));
        }
        assert_eq!(buf[(0, 0)].symbol().chars().next().unwrap_or(' '), '▌');
        assert!(
            top.contains("Working"),
            "expected Working header on top row: {top:?}"
        );

        // Bottom two rows are blank padding
        let mut r_last = String::new();
        let mut r_last2 = String::new();
        for x in 0..area.width {
            r_last.push(buf[(x, height - 1)].symbol().chars().next().unwrap_or(' '));
            r_last2.push(buf[(x, height - 2)].symbol().chars().next().unwrap_or(' '));
        }
        assert!(
            r_last.trim().is_empty(),
            "expected last row blank: {r_last:?}"
        );
        assert!(
            r_last2.trim().is_empty(),
            "expected second-to-last row blank: {r_last2:?}"
        );
    }

    #[test]
    fn bottom_padding_shrinks_when_tiny() {
        let (tx_raw, _rx) = unbounded_channel::<AppEvent>();
        let tx = AppEventSender::new(tx_raw);
        let mut pane = BottomPane::new(BottomPaneParams {
            app_event_tx: tx,
            frame_requester: crate::tui::FrameRequester::test_dummy(),
            has_input_focus: true,
            enhanced_keys_supported: false,
            placeholder_text: "Ask Codex to do anything".to_string(),
        });

        pane.set_task_running(true);

        // Height=2 → pad shrinks to 1; bottom row is blank, top row has spinner.
        let area2 = Rect::new(0, 0, 20, 2);
        let mut buf2 = Buffer::empty(area2);
        (&pane).render_ref(area2, &mut buf2);
        let mut row0 = String::new();
        let mut row1 = String::new();
        for x in 0..area2.width {
            row0.push(buf2[(x, 0)].symbol().chars().next().unwrap_or(' '));
            row1.push(buf2[(x, 1)].symbol().chars().next().unwrap_or(' '));
        }
        assert!(
            row0.contains("Working"),
            "expected Working header on row 0: {row0:?}"
        );
        assert!(
            row1.trim().is_empty(),
            "expected bottom padding on row 1: {row1:?}"
        );

        // Height=1 → no padding; single row is the spinner.
        let area1 = Rect::new(0, 0, 20, 1);
        let mut buf1 = Buffer::empty(area1);
        (&pane).render_ref(area1, &mut buf1);
        let mut only = String::new();
        for x in 0..area1.width {
            only.push(buf1[(x, 0)].symbol().chars().next().unwrap_or(' '));
        }
        assert!(
            only.contains("Working"),
            "expected Working header with no padding: {only:?}"
        );
    }
}