// llmx/codex-rs/tui/src/chatwidget.rs

use std::collections::HashMap;
use std::collections::VecDeque;
use std::path::PathBuf;
use std::sync::Arc;
use codex_core::config::Config;
use codex_core::protocol::AgentMessageDeltaEvent;
use codex_core::protocol::AgentMessageEvent;
use codex_core::protocol::AgentReasoningDeltaEvent;
use codex_core::protocol::AgentReasoningEvent;
use codex_core::protocol::AgentReasoningRawContentDeltaEvent;
use codex_core::protocol::AgentReasoningRawContentEvent;
use codex_core::protocol::ApplyPatchApprovalRequestEvent;
use codex_core::protocol::BackgroundEventEvent;
use codex_core::protocol::ErrorEvent;
use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
use codex_core::protocol::ExecApprovalRequestEvent;
use codex_core::protocol::ExecCommandBeginEvent;
use codex_core::protocol::ExecCommandEndEvent;
use codex_core::protocol::InputItem;
use codex_core::protocol::InputMessageKind;
use codex_core::protocol::ListCustomPromptsResponseEvent;
use codex_core::protocol::McpListToolsResponseEvent;
use codex_core::protocol::McpToolCallBeginEvent;
use codex_core::protocol::McpToolCallEndEvent;
use codex_core::protocol::Op;
use codex_core::protocol::PatchApplyBeginEvent;
use codex_core::protocol::StreamErrorEvent;
use codex_core::protocol::TaskCompleteEvent;
use codex_core::protocol::TokenUsage;
use codex_core::protocol::TokenUsageInfo;
use codex_core::protocol::TurnAbortReason;
use codex_core::protocol::TurnDiffEvent;
use codex_core::protocol::UserMessageEvent;
use codex_core::protocol::WebSearchBeginEvent;
use codex_core::protocol::WebSearchEndEvent;
use codex_protocol::parse_command::ParsedCommand;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyEventKind;
use crossterm::event::KeyModifiers;
use rand::Rng;
use ratatui::buffer::Buffer;
use ratatui::layout::Constraint;
use ratatui::layout::Layout;
use ratatui::layout::Rect;
use ratatui::widgets::Widget;
use ratatui::widgets::WidgetRef;
use tokio::sync::mpsc::UnboundedSender;
use tracing::debug;
use crate::app_event::AppEvent;
use crate::app_event_sender::AppEventSender;
use crate::bottom_pane::BottomPane;
use crate::bottom_pane::BottomPaneParams;
use crate::bottom_pane::CancellationEvent;
use crate::bottom_pane::InputResult;
use crate::bottom_pane::SelectionAction;
use crate::bottom_pane::SelectionItem;
use crate::clipboard_paste::paste_image_to_temp_png;
use crate::get_git_diff::get_git_diff;
use crate::history_cell;
use crate::history_cell::CommandOutput;
use crate::history_cell::ExecCell;
use crate::history_cell::HistoryCell;
use crate::history_cell::PatchEventType;
use crate::slash_command::SlashCommand;
use crate::tui::FrameRequester;
// streaming internals are provided by crate::streaming and crate::markdown_stream
use crate::user_approval_widget::ApprovalRequest;
mod interrupts;
use self::interrupts::InterruptManager;
mod agent;
use self::agent::spawn_agent;
use self::agent::spawn_agent_from_existing;
use crate::streaming::controller::AppEventHistorySink;
use crate::streaming::controller::StreamController;
use codex_common::approval_presets::ApprovalPreset;
use codex_common::approval_presets::builtin_approval_presets;
use codex_common::model_presets::ModelPreset;
use codex_common::model_presets::builtin_model_presets;
use codex_core::ConversationManager;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::SandboxPolicy;
use codex_core::protocol_config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_file_search::FileMatch;
use codex_protocol::mcp_protocol::ConversationId;
// Track information about an in-flight exec command.
struct RunningCommand {
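    /// Raw command argv as reported by `ExecCommandBeginEvent`.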
command: Vec<String>,
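    /// Structured form of the command, used to render it more nicely in the UI.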
parsed_cmd: Vec<ParsedCommand>,
}
/// Common initialization parameters shared by all `ChatWidget` constructors.
pub(crate) struct ChatWidgetInit {
pub(crate) config: Config,
pub(crate) frame_requester: FrameRequester,
pub(crate) app_event_tx: AppEventSender,
pub(crate) initial_prompt: Option<String>,
pub(crate) initial_images: Vec<PathBuf>,
pub(crate) enhanced_keys_supported: bool,
}
pub(crate) struct ChatWidget {
app_event_tx: AppEventSender,
codex_op_tx: UnboundedSender<Op>,
bottom_pane: BottomPane,
active_exec_cell: Option<ExecCell>,
config: Config,
initial_user_message: Option<UserMessage>,
token_info: Option<TokenUsageInfo>,
// Stream lifecycle controller
stream: StreamController,
running_commands: HashMap<String, RunningCommand>,
task_complete_pending: bool,
// Queue of interruptive UI events deferred during an active write cycle
interrupts: InterruptManager,
// Accumulates the current reasoning block text to extract a header
reasoning_buffer: String,
// Accumulates full reasoning content for transcript-only recording
full_reasoning_buffer: String,
conversation_id: Option<ConversationId>,
frame_requester: FrameRequester,
// Whether to include the initial welcome banner on session configured
show_welcome_banner: bool,
// When resuming an existing session (selected via resume picker), avoid an
// immediate redraw on SessionConfigured to prevent a gratuitous UI flicker.
suppress_session_configured_redraw: bool,
// User messages queued while a turn is in progress
queued_user_messages: VecDeque<UserMessage>,
}
struct UserMessage {
text: String,
image_paths: Vec<PathBuf>,
}
impl From<String> for UserMessage {
fn from(text: String) -> Self {
Self {
text,
image_paths: Vec::new(),
}
}
}
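/// Build the initial [`UserMessage`], returning `None` when there is nothing to
/// send (no text and no images).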
fn create_initial_user_message(text: String, image_paths: Vec<PathBuf>) -> Option<UserMessage> {
if text.is_empty() && image_paths.is_empty() {
None
} else {
Some(UserMessage { text, image_paths })
}
}
impl ChatWidget {
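    /// Finalize any active answer stream, flushing its remaining content into
    /// history before other cells are inserted.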
fn flush_answer_stream_with_separator(&mut self) {
let sink = AppEventHistorySink(self.app_event_tx.clone());
let _ = self.stream.finalize(true, &sink);
}
// --- Small event handlers ---
fn on_session_configured(&mut self, event: codex_core::protocol::SessionConfiguredEvent) {
self.bottom_pane
.set_history_metadata(event.history_log_id, event.history_entry_count);
self.conversation_id = Some(event.session_id);
let initial_messages = event.initial_messages.clone();
if let Some(messages) = initial_messages {
self.replay_initial_messages(messages);
}
self.add_to_history(history_cell::new_session_info(
&self.config,
event,
self.show_welcome_banner,
));
// Ask codex-core to enumerate custom prompts for this session.
self.submit_op(Op::ListCustomPrompts);
if let Some(user_message) = self.initial_user_message.take() {
self.submit_user_message(user_message);
}
if !self.suppress_session_configured_redraw {
self.request_redraw();
}
}
fn on_agent_message(&mut self, message: String) {
let sink = AppEventHistorySink(self.app_event_tx.clone());
let finished = self.stream.apply_final_answer(&message, &sink);
self.handle_if_stream_finished(finished);
self.request_redraw();
}
fn on_agent_message_delta(&mut self, delta: String) {
self.handle_streaming_delta(delta);
}
fn on_agent_reasoning_delta(&mut self, delta: String) {
// For reasoning deltas, do not stream to history. Accumulate the
// current reasoning block and extract the first bold element
// (between **/**) as the chunk header. Show this header as status.
self.reasoning_buffer.push_str(&delta);
if let Some(header) = extract_first_bold(&self.reasoning_buffer) {
// Update the shimmer header to the extracted reasoning chunk header.
self.bottom_pane.update_status_header(header);
} else {
// Fallback while we don't yet have a bold header: leave existing header as-is.
}
self.request_redraw();
}
fn on_agent_reasoning_final(&mut self) {
// At the end of a reasoning block, record transcript-only content.
self.full_reasoning_buffer.push_str(&self.reasoning_buffer);
if !self.full_reasoning_buffer.is_empty() {
for cell in history_cell::new_reasoning_summary_block(
self.full_reasoning_buffer.clone(),
&self.config,
) {
self.add_boxed_history(cell);
}
}
self.reasoning_buffer.clear();
self.full_reasoning_buffer.clear();
self.request_redraw();
}
fn on_reasoning_section_break(&mut self) {
// Start a new reasoning block for header extraction and accumulate transcript.
self.full_reasoning_buffer.push_str(&self.reasoning_buffer);
self.full_reasoning_buffer.push_str("\n\n");
self.reasoning_buffer.clear();
}
// Raw reasoning uses the same flow as summarized reasoning
fn on_task_started(&mut self) {
self.bottom_pane.clear_ctrl_c_quit_hint();
self.bottom_pane.set_task_running(true);
self.stream.reset_headers_for_new_turn();
self.full_reasoning_buffer.clear();
self.reasoning_buffer.clear();
self.request_redraw();
}
fn on_task_complete(&mut self) {
// If a stream is currently active, finalize only that stream to flush any tail
// without emitting stray headers for other streams.
if self.stream.is_write_cycle_active() {
let sink = AppEventHistorySink(self.app_event_tx.clone());
let _ = self.stream.finalize(true, &sink);
}
// Mark task stopped and request redraw now that all content is in history.
self.bottom_pane.set_task_running(false);
self.running_commands.clear();
self.request_redraw();
// If there is a queued user message, send exactly one now to begin the next turn.
self.maybe_send_next_queued_input();
}
pub(crate) fn set_token_info(&mut self, info: Option<TokenUsageInfo>) {
self.bottom_pane.set_token_usage(info.clone());
self.token_info = info;
}
/// Finalize any active exec as failed, push an error message into history,
/// and stop/clear running UI state.
fn finalize_turn_with_error_message(&mut self, message: String) {
// Ensure any spinner is replaced by a red ✗ and flushed into history.
self.finalize_active_exec_cell_as_failed();
// Emit the provided error message/history cell.
self.add_to_history(history_cell::new_error_event(message));
// Reset running state and clear streaming buffers.
self.bottom_pane.set_task_running(false);
self.running_commands.clear();
self.stream.clear_all();
}
fn on_error(&mut self, message: String) {
self.finalize_turn_with_error_message(message);
self.request_redraw();
// After an error ends the turn, try sending the next queued input.
self.maybe_send_next_queued_input();
}
/// Handle a turn aborted due to user interrupt (Esc).
/// When there are queued user messages, restore them into the composer
    /// separated by newlines rather than auto-submitting the next one.
fn on_interrupted_turn(&mut self) {
// Finalize, log a gentle prompt, and clear running state.
self.finalize_turn_with_error_message("Tell the model what to do differently".to_owned());
// If any messages were queued during the task, restore them into the composer.
if !self.queued_user_messages.is_empty() {
let combined = self
.queued_user_messages
.iter()
.map(|m| m.text.clone())
.collect::<Vec<_>>()
.join("\n");
self.bottom_pane.set_composer_text(combined);
// Clear the queue and update the status indicator list.
self.queued_user_messages.clear();
self.refresh_queued_user_messages();
}
self.request_redraw();
}
fn on_plan_update(&mut self, update: codex_core::plan_tool::UpdatePlanArgs) {
self.add_to_history(history_cell::new_plan_update(update));
}
fn on_exec_approval_request(&mut self, id: String, ev: ExecApprovalRequestEvent) {
let id2 = id.clone();
let ev2 = ev.clone();
self.defer_or_handle(
|q| q.push_exec_approval(id, ev),
|s| s.handle_exec_approval_now(id2, ev2),
);
}
fn on_apply_patch_approval_request(&mut self, id: String, ev: ApplyPatchApprovalRequestEvent) {
let id2 = id.clone();
let ev2 = ev.clone();
self.defer_or_handle(
|q| q.push_apply_patch_approval(id, ev),
|s| s.handle_apply_patch_approval_now(id2, ev2),
);
}
fn on_exec_command_begin(&mut self, ev: ExecCommandBeginEvent) {
self.flush_answer_stream_with_separator();
let ev2 = ev.clone();
self.defer_or_handle(|q| q.push_exec_begin(ev), |s| s.handle_exec_begin_now(ev2));
}
fn on_exec_command_output_delta(
&mut self,
_ev: codex_core::protocol::ExecCommandOutputDeltaEvent,
) {
// TODO: Handle streaming exec output if/when implemented
}
fn on_patch_apply_begin(&mut self, event: PatchApplyBeginEvent) {
self.add_to_history(history_cell::new_patch_event(
PatchEventType::ApplyBegin {
auto_approved: event.auto_approved,
},
event.changes,
&self.config.cwd,
));
}
fn on_patch_apply_end(&mut self, event: codex_core::protocol::PatchApplyEndEvent) {
let ev2 = event.clone();
self.defer_or_handle(
|q| q.push_patch_end(event),
|s| s.handle_patch_apply_end_now(ev2),
);
}
fn on_exec_command_end(&mut self, ev: ExecCommandEndEvent) {
let ev2 = ev.clone();
self.defer_or_handle(|q| q.push_exec_end(ev), |s| s.handle_exec_end_now(ev2));
}
fn on_mcp_tool_call_begin(&mut self, ev: McpToolCallBeginEvent) {
let ev2 = ev.clone();
self.defer_or_handle(|q| q.push_mcp_begin(ev), |s| s.handle_mcp_begin_now(ev2));
}
fn on_mcp_tool_call_end(&mut self, ev: McpToolCallEndEvent) {
let ev2 = ev.clone();
self.defer_or_handle(|q| q.push_mcp_end(ev), |s| s.handle_mcp_end_now(ev2));
}
fn on_web_search_begin(&mut self, _ev: WebSearchBeginEvent) {
self.flush_answer_stream_with_separator();
}
fn on_web_search_end(&mut self, ev: WebSearchEndEvent) {
self.flush_answer_stream_with_separator();
self.add_to_history(history_cell::new_web_search_call(format!(
"Searched: {}",
ev.query
)));
}
fn on_get_history_entry_response(
&mut self,
event: codex_core::protocol::GetHistoryEntryResponseEvent,
) {
let codex_core::protocol::GetHistoryEntryResponseEvent {
offset,
log_id,
entry,
} = event;
self.bottom_pane
.on_history_entry_response(log_id, offset, entry.map(|e| e.text));
}
fn on_shutdown_complete(&mut self) {
self.app_event_tx.send(AppEvent::ExitRequest);
}
fn on_turn_diff(&mut self, unified_diff: String) {
debug!("TurnDiffEvent: {unified_diff}");
}
fn on_background_event(&mut self, message: String) {
debug!("BackgroundEvent: {message}");
}
fn on_stream_error(&mut self, message: String) {
// Show stream errors in the transcript so users see retry/backoff info.
self.add_to_history(history_cell::new_stream_error_event(message));
self.request_redraw();
}
/// Periodic tick to commit at most one queued line to history with a small delay,
/// animating the output.
pub(crate) fn on_commit_tick(&mut self) {
let sink = AppEventHistorySink(self.app_event_tx.clone());
let finished = self.stream.on_commit_tick(&sink);
self.handle_if_stream_finished(finished);
}
fn is_write_cycle_active(&self) -> bool {
self.stream.is_write_cycle_active()
}
fn flush_interrupt_queue(&mut self) {
let mut mgr = std::mem::take(&mut self.interrupts);
mgr.flush_all(self);
self.interrupts = mgr;
}
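    /// Either queue an interrupt for later or handle it immediately, depending
    /// on whether a write cycle is active (see the FIFO note below).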
#[inline]
fn defer_or_handle(
&mut self,
push: impl FnOnce(&mut InterruptManager),
handle: impl FnOnce(&mut Self),
) {
// Preserve deterministic FIFO across queued interrupts: once anything
// is queued due to an active write cycle, continue queueing until the
// queue is flushed to avoid reordering (e.g., ExecEnd before ExecBegin).
if self.is_write_cycle_active() || !self.interrupts.is_empty() {
push(&mut self.interrupts);
} else {
handle(self);
}
}
#[inline]
fn handle_if_stream_finished(&mut self, finished: bool) {
if finished {
if self.task_complete_pending {
self.bottom_pane.set_task_running(false);
self.task_complete_pending = false;
}
// A completed stream indicates non-exec content was just inserted.
self.flush_interrupt_queue();
}
}
#[inline]
fn handle_streaming_delta(&mut self, delta: String) {
// Before streaming agent content, flush any active exec cell group.
self.flush_active_exec_cell();
let sink = AppEventHistorySink(self.app_event_tx.clone());
self.stream.begin(&sink);
self.stream.push_and_maybe_commit(&delta, &sink);
self.request_redraw();
}
pub(crate) fn handle_exec_end_now(&mut self, ev: ExecCommandEndEvent) {
let running = self.running_commands.remove(&ev.call_id);
let (command, parsed) = match running {
Some(rc) => (rc.command, rc.parsed_cmd),
None => (vec![ev.call_id.clone()], Vec::new()),
};
if self.active_exec_cell.is_none() {
// This should have been created by handle_exec_begin_now, but in case it wasn't,
// create it now.
self.active_exec_cell = Some(history_cell::new_active_exec_command(
ev.call_id.clone(),
command,
parsed,
));
}
if let Some(cell) = self.active_exec_cell.as_mut() {
cell.complete_call(
&ev.call_id,
CommandOutput {
exit_code: ev.exit_code,
stdout: ev.stdout.clone(),
stderr: ev.stderr.clone(),
formatted_output: ev.formatted_output.clone(),
},
ev.duration,
);
if cell.should_flush() {
self.flush_active_exec_cell();
}
}
}
pub(crate) fn handle_patch_apply_end_now(
&mut self,
event: codex_core::protocol::PatchApplyEndEvent,
) {
// If the patch was successful, just let the "Edited" block stand.
// Otherwise, add a failure block.
if !event.success {
self.add_to_history(history_cell::new_patch_apply_failure(event.stderr));
}
}
pub(crate) fn handle_exec_approval_now(&mut self, id: String, ev: ExecApprovalRequestEvent) {
self.flush_answer_stream_with_separator();
// Emit the proposed command into history (like proposed patches)
self.add_to_history(history_cell::new_proposed_command(&ev.command));
let request = ApprovalRequest::Exec {
id,
command: ev.command,
reason: ev.reason,
};
self.bottom_pane.push_approval_request(request);
self.request_redraw();
}
pub(crate) fn handle_apply_patch_approval_now(
&mut self,
id: String,
ev: ApplyPatchApprovalRequestEvent,
) {
self.flush_answer_stream_with_separator();
self.add_to_history(history_cell::new_patch_event(
PatchEventType::ApprovalRequest,
ev.changes.clone(),
&self.config.cwd,
));
let request = ApprovalRequest::ApplyPatch {
id,
reason: ev.reason,
grant_root: ev.grant_root,
};
self.bottom_pane.push_approval_request(request);
self.request_redraw();
}
pub(crate) fn handle_exec_begin_now(&mut self, ev: ExecCommandBeginEvent) {
        // Track the in-flight command so its end event can be correlated later.
self.running_commands.insert(
ev.call_id.clone(),
RunningCommand {
command: ev.command.clone(),
parsed_cmd: ev.parsed_cmd.clone(),
},
);
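        // Try to merge this call into the active exec cell; if the cell cannot
        // accept it, flush the old cell and start a new one.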
if let Some(exec) = &self.active_exec_cell {
if let Some(new_exec) = exec.with_added_call(
ev.call_id.clone(),
ev.command.clone(),
ev.parsed_cmd.clone(),
) {
self.active_exec_cell = Some(new_exec);
} else {
// Make a new cell.
self.flush_active_exec_cell();
self.active_exec_cell = Some(history_cell::new_active_exec_command(
ev.call_id.clone(),
ev.command.clone(),
ev.parsed_cmd,
));
}
} else {
self.active_exec_cell = Some(history_cell::new_active_exec_command(
ev.call_id.clone(),
ev.command.clone(),
ev.parsed_cmd,
));
}
// Request a redraw so the working header and command list are visible immediately.
self.request_redraw();
}
pub(crate) fn handle_mcp_begin_now(&mut self, ev: McpToolCallBeginEvent) {
self.flush_answer_stream_with_separator();
self.add_to_history(history_cell::new_active_mcp_tool_call(ev.invocation));
}
pub(crate) fn handle_mcp_end_now(&mut self, ev: McpToolCallEndEvent) {
self.flush_answer_stream_with_separator();
self.add_boxed_history(history_cell::new_completed_mcp_tool_call(
80,
ev.invocation,
ev.duration,
ev.result
.as_ref()
.map(|r| !r.is_error.unwrap_or(false))
.unwrap_or(false),
ev.result,
));
}
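    /// Split the widget area into the active exec cell (top, sized to content)
    /// and the bottom pane (remainder).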
fn layout_areas(&self, area: Rect) -> [Rect; 2] {
Layout::vertical([
Constraint::Max(
self.active_exec_cell
.as_ref()
.map_or(0, |c| c.desired_height(area.width) + 1),
),
Constraint::Min(self.bottom_pane.desired_height(area.width)),
])
.areas(area)
}
pub(crate) fn new(
common: ChatWidgetInit,
conversation_manager: Arc<ConversationManager>,
) -> Self {
let ChatWidgetInit {
config,
frame_requester,
app_event_tx,
initial_prompt,
initial_images,
enhanced_keys_supported,
} = common;
let mut rng = rand::rng();
let placeholder = EXAMPLE_PROMPTS[rng.random_range(0..EXAMPLE_PROMPTS.len())].to_string();
let codex_op_tx = spawn_agent(config.clone(), app_event_tx.clone(), conversation_manager);
Self {
app_event_tx: app_event_tx.clone(),
frame_requester: frame_requester.clone(),
codex_op_tx,
bottom_pane: BottomPane::new(BottomPaneParams {
frame_requester,
app_event_tx,
has_input_focus: true,
enhanced_keys_supported,
placeholder_text: placeholder,
disable_paste_burst: config.disable_paste_burst,
}),
active_exec_cell: None,
config: config.clone(),
initial_user_message: create_initial_user_message(
initial_prompt.unwrap_or_default(),
initial_images,
),
token_info: None,
stream: StreamController::new(config),
running_commands: HashMap::new(),
task_complete_pending: false,
interrupts: InterruptManager::new(),
reasoning_buffer: String::new(),
full_reasoning_buffer: String::new(),
conversation_id: None,
queued_user_messages: VecDeque::new(),
show_welcome_banner: true,
suppress_session_configured_redraw: false,
}
}
/// Create a ChatWidget attached to an existing conversation (e.g., a fork).
pub(crate) fn new_from_existing(
common: ChatWidgetInit,
conversation: std::sync::Arc<codex_core::CodexConversation>,
session_configured: codex_core::protocol::SessionConfiguredEvent,
) -> Self {
let ChatWidgetInit {
config,
frame_requester,
app_event_tx,
initial_prompt,
initial_images,
enhanced_keys_supported,
} = common;
let mut rng = rand::rng();
let placeholder = EXAMPLE_PROMPTS[rng.random_range(0..EXAMPLE_PROMPTS.len())].to_string();
let codex_op_tx =
spawn_agent_from_existing(conversation, session_configured, app_event_tx.clone());
Self {
app_event_tx: app_event_tx.clone(),
frame_requester: frame_requester.clone(),
codex_op_tx,
bottom_pane: BottomPane::new(BottomPaneParams {
frame_requester,
app_event_tx,
has_input_focus: true,
enhanced_keys_supported,
placeholder_text: placeholder,
disable_paste_burst: config.disable_paste_burst,
}),
active_exec_cell: None,
config: config.clone(),
initial_user_message: create_initial_user_message(
initial_prompt.unwrap_or_default(),
initial_images,
),
token_info: None,
stream: StreamController::new(config),
running_commands: HashMap::new(),
task_complete_pending: false,
interrupts: InterruptManager::new(),
reasoning_buffer: String::new(),
full_reasoning_buffer: String::new(),
conversation_id: None,
queued_user_messages: VecDeque::new(),
show_welcome_banner: false,
suppress_session_configured_redraw: true,
}
}
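    /// Total height needed: the bottom pane plus the active exec cell (if any)
    /// and one extra row.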
pub fn desired_height(&self, width: u16) -> u16 {
self.bottom_pane.desired_height(width)
+ self
.active_exec_cell
.as_ref()
.map_or(0, |c| c.desired_height(width) + 1)
}
pub(crate) fn handle_key_event(&mut self, key_event: KeyEvent) {
match key_event {
KeyEvent {
code: KeyCode::Char('c'),
modifiers: crossterm::event::KeyModifiers::CONTROL,
kind: KeyEventKind::Press,
..
} => {
self.on_ctrl_c();
return;
}
KeyEvent {
code: KeyCode::Char('v'),
modifiers: KeyModifiers::CONTROL,
kind: KeyEventKind::Press,
..
} => {
if let Ok((path, info)) = paste_image_to_temp_png() {
self.attach_image(path, info.width, info.height, info.encoded_format.label());
}
return;
}
other if other.kind == KeyEventKind::Press => {
self.bottom_pane.clear_ctrl_c_quit_hint();
}
_ => {}
}
match key_event {
KeyEvent {
code: KeyCode::Up,
modifiers: KeyModifiers::ALT,
kind: KeyEventKind::Press,
..
} if !self.queued_user_messages.is_empty() => {
// Prefer the most recently queued item.
if let Some(user_message) = self.queued_user_messages.pop_back() {
self.bottom_pane.set_composer_text(user_message.text);
self.refresh_queued_user_messages();
self.request_redraw();
}
}
_ => {
match self.bottom_pane.handle_key_event(key_event) {
InputResult::Submitted(text) => {
// If a task is running, queue the user input to be sent after the turn completes.
let user_message = UserMessage {
text,
image_paths: self.bottom_pane.take_recent_submission_images(),
};
if self.bottom_pane.is_task_running() {
self.queued_user_messages.push_back(user_message);
self.refresh_queued_user_messages();
} else {
self.submit_user_message(user_message);
}
}
InputResult::Command(cmd) => {
self.dispatch_command(cmd);
}
InputResult::None => {}
}
}
}
}
pub(crate) fn attach_image(
&mut self,
path: PathBuf,
width: u32,
height: u32,
format_label: &str,
) {
tracing::info!(
"attach_image path={path:?} width={width} height={height} format={format_label}",
);
self.bottom_pane
.attach_image(path, width, height, format_label);
self.request_redraw();
}
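    /// Execute a slash command from the composer, rejecting commands that are
    /// not available while a task is running.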
fn dispatch_command(&mut self, cmd: SlashCommand) {
if !cmd.available_during_task() && self.bottom_pane.is_task_running() {
let message = format!(
"'/{}' is disabled while a task is in progress.",
cmd.command()
);
self.add_to_history(history_cell::new_error_event(message));
self.request_redraw();
return;
}
match cmd {
SlashCommand::New => {
self.app_event_tx.send(AppEvent::NewSession);
}
SlashCommand::Init => {
const INIT_PROMPT: &str = include_str!("../prompt_for_init_command.md");
self.submit_text_message(INIT_PROMPT.to_string());
}
SlashCommand::Compact => {
self.clear_token_usage();
self.app_event_tx.send(AppEvent::CodexOp(Op::Compact));
}
SlashCommand::Model => {
self.open_model_popup();
}
SlashCommand::Approvals => {
self.open_approvals_popup();
}
SlashCommand::Quit => {
self.app_event_tx.send(AppEvent::ExitRequest);
}
SlashCommand::Logout => {
if let Err(e) = codex_core::auth::logout(&self.config.codex_home) {
tracing::error!("failed to logout: {e}");
}
self.app_event_tx.send(AppEvent::ExitRequest);
}
SlashCommand::Diff => {
self.add_diff_in_progress();
let tx = self.app_event_tx.clone();
tokio::spawn(async move {
let text = match get_git_diff().await {
Ok((is_git_repo, diff_text)) => {
if is_git_repo {
diff_text
} else {
"`/diff` — _not inside a git repository_".to_string()
}
}
Err(e) => format!("Failed to compute diff: {e}"),
};
tx.send(AppEvent::DiffResult(text));
});
}
SlashCommand::Mention => {
self.insert_str("@");
}
SlashCommand::Status => {
self.add_status_output();
}
SlashCommand::Mcp => {
self.add_mcp_output();
}
#[cfg(debug_assertions)]
SlashCommand::TestApproval => {
                use std::collections::HashMap;

                use codex_core::protocol::ApplyPatchApprovalRequestEvent;
                use codex_core::protocol::EventMsg;
                use codex_core::protocol::FileChange;
self.app_event_tx.send(AppEvent::CodexEvent(Event {
id: "1".to_string(),
// msg: EventMsg::ExecApprovalRequest(ExecApprovalRequestEvent {
// call_id: "1".to_string(),
// command: vec!["git".into(), "apply".into()],
// cwd: self.config.cwd.clone(),
// reason: Some("test".to_string()),
// }),
msg: EventMsg::ApplyPatchApprovalRequest(ApplyPatchApprovalRequestEvent {
call_id: "1".to_string(),
changes: HashMap::from([
(
PathBuf::from("/tmp/test.txt"),
FileChange::Add {
content: "test".to_string(),
},
),
(
PathBuf::from("/tmp/test2.txt"),
FileChange::Update {
unified_diff: "+test\n-test2".to_string(),
move_path: None,
},
),
]),
reason: None,
grant_root: Some(PathBuf::from("/tmp")),
}),
}));
}
}
}
pub(crate) fn handle_paste(&mut self, text: String) {
self.bottom_pane.handle_paste(text);
}
// Returns true if caller should skip rendering this frame (a future frame is scheduled).
pub(crate) fn handle_paste_burst_tick(&mut self, frame_requester: FrameRequester) -> bool {
if self.bottom_pane.flush_paste_burst_if_due() {
// A paste just flushed; request an immediate redraw and skip this frame.
self.request_redraw();
true
} else if self.bottom_pane.is_in_paste_burst() {
// While capturing a burst, schedule a follow-up tick and skip this frame
// to avoid redundant renders between ticks.
frame_requester.schedule_frame_in(
crate::bottom_pane::ChatComposer::recommended_paste_flush_delay(),
);
true
} else {
false
}
}
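    /// Move the in-progress exec cell, if any, out of the live area and into
    /// the transcript history.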
fn flush_active_exec_cell(&mut self) {
if let Some(active) = self.active_exec_cell.take() {
self.app_event_tx
.send(AppEvent::InsertHistoryCell(Box::new(active)));
}
}
fn add_to_history(&mut self, cell: impl HistoryCell + 'static) {
self.add_boxed_history(Box::new(cell));
}
fn add_boxed_history(&mut self, cell: Box<dyn HistoryCell>) {
if !cell.display_lines(u16::MAX).is_empty() {
// Only break exec grouping if the cell renders visible lines.
self.flush_active_exec_cell();
}
self.app_event_tx.send(AppEvent::InsertHistoryCell(cell));
}
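    /// Send a user message to the agent, persist it to cross-session history,
    /// and echo the text portion into the transcript.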
fn submit_user_message(&mut self, user_message: UserMessage) {
let UserMessage { text, image_paths } = user_message;
let mut items: Vec<InputItem> = Vec::new();
if !text.is_empty() {
items.push(InputItem::Text { text: text.clone() });
}
for path in image_paths {
items.push(InputItem::LocalImage { path });
}
if items.is_empty() {
return;
}
self.codex_op_tx
.send(Op::UserInput { items })
.unwrap_or_else(|e| {
tracing::error!("failed to send message: {e}");
});
// Persist the text to cross-session message history.
if !text.is_empty() {
self.codex_op_tx
.send(Op::AddToHistory { text: text.clone() })
.unwrap_or_else(|e| {
tracing::error!("failed to send AddHistory op: {e}");
});
}
// Only show the text portion in conversation history.
if !text.is_empty() {
self.add_to_history(history_cell::new_user_prompt(text));
}
}
/// Replay a subset of initial events into the UI to seed the transcript when
/// resuming an existing session. This approximates the live event flow and
/// is intentionally conservative: only safe-to-replay items are rendered to
/// avoid triggering side effects. Event ids are passed as `None` to
/// distinguish replayed events from live ones.
fn replay_initial_messages(&mut self, events: Vec<EventMsg>) {
for msg in events {
if matches!(msg, EventMsg::SessionConfigured(_)) {
continue;
}
// `id: None` indicates a synthetic/fake id coming from replay.
self.dispatch_event_msg(None, msg, true);
}
}
pub(crate) fn handle_codex_event(&mut self, event: Event) {
let Event { id, msg } = event;
self.dispatch_event_msg(Some(id), msg, false);
}
/// Dispatch a protocol `EventMsg` to the appropriate handler.
///
/// `id` is `Some` for live events and `None` for replayed events from
/// `replay_initial_messages()`. Callers should treat `None` as a "fake" id
/// that must not be used to correlate follow-up actions.
fn dispatch_event_msg(&mut self, id: Option<String>, msg: EventMsg, from_replay: bool) {
match msg {
EventMsg::AgentMessageDelta(_)
| EventMsg::AgentReasoningDelta(_)
| EventMsg::ExecCommandOutputDelta(_) => {}
_ => {
tracing::trace!("handle_codex_event: {:?}", msg);
}
}
match msg {
EventMsg::SessionConfigured(e) => self.on_session_configured(e),
EventMsg::AgentMessage(AgentMessageEvent { message }) => self.on_agent_message(message),
EventMsg::AgentMessageDelta(AgentMessageDeltaEvent { delta }) => {
self.on_agent_message_delta(delta)
}
EventMsg::AgentReasoningDelta(AgentReasoningDeltaEvent { delta })
| EventMsg::AgentReasoningRawContentDelta(AgentReasoningRawContentDeltaEvent {
delta,
}) => self.on_agent_reasoning_delta(delta),
EventMsg::AgentReasoning(AgentReasoningEvent { .. })
| EventMsg::AgentReasoningRawContent(AgentReasoningRawContentEvent { .. }) => {
self.on_agent_reasoning_final()
}
EventMsg::AgentReasoningSectionBreak(_) => self.on_reasoning_section_break(),
EventMsg::TaskStarted(_) => self.on_task_started(),
EventMsg::TaskComplete(TaskCompleteEvent { .. }) => self.on_task_complete(),
EventMsg::TokenCount(ev) => self.set_token_info(ev.info),
EventMsg::Error(ErrorEvent { message }) => self.on_error(message),
EventMsg::TurnAborted(ev) => match ev.reason {
TurnAbortReason::Interrupted => {
self.on_interrupted_turn();
}
TurnAbortReason::Replaced => {
self.on_error("Turn aborted: replaced by a new task".to_owned())
}
},
EventMsg::PlanUpdate(update) => self.on_plan_update(update),
EventMsg::ExecApprovalRequest(ev) => {
                // Replayed events carry no id; synthesize an empty one
                // (approval requests should not occur during replay).
self.on_exec_approval_request(id.unwrap_or_default(), ev)
}
EventMsg::ApplyPatchApprovalRequest(ev) => {
self.on_apply_patch_approval_request(id.unwrap_or_default(), ev)
}
EventMsg::ExecCommandBegin(ev) => self.on_exec_command_begin(ev),
EventMsg::ExecCommandOutputDelta(delta) => self.on_exec_command_output_delta(delta),
EventMsg::PatchApplyBegin(ev) => self.on_patch_apply_begin(ev),
EventMsg::PatchApplyEnd(ev) => self.on_patch_apply_end(ev),
EventMsg::ExecCommandEnd(ev) => self.on_exec_command_end(ev),
EventMsg::McpToolCallBegin(ev) => self.on_mcp_tool_call_begin(ev),
EventMsg::McpToolCallEnd(ev) => self.on_mcp_tool_call_end(ev),
EventMsg::WebSearchBegin(ev) => self.on_web_search_begin(ev),
EventMsg::WebSearchEnd(ev) => self.on_web_search_end(ev),
EventMsg::GetHistoryEntryResponse(ev) => self.on_get_history_entry_response(ev),
EventMsg::McpListToolsResponse(ev) => self.on_list_mcp_tools(ev),
EventMsg::ListCustomPromptsResponse(ev) => self.on_list_custom_prompts(ev),
EventMsg::ShutdownComplete => self.on_shutdown_complete(),
EventMsg::TurnDiff(TurnDiffEvent { unified_diff }) => self.on_turn_diff(unified_diff),
EventMsg::BackgroundEvent(BackgroundEventEvent { message }) => {
self.on_background_event(message)
}
EventMsg::StreamError(StreamErrorEvent { message }) => self.on_stream_error(message),
EventMsg::UserMessage(ev) => {
if from_replay {
self.on_user_message_event(ev);
}
}
EventMsg::ConversationPath(ev) => {
self.app_event_tx
.send(crate::app_event::AppEvent::ConversationHistory(ev));
}
}
}
fn on_user_message_event(&mut self, event: UserMessageEvent) {
match event.kind {
Some(InputMessageKind::EnvironmentContext)
| Some(InputMessageKind::UserInstructions) => {
                // Skip XML-wrapped context blocks in the transcript.
}
Some(InputMessageKind::Plain) | None => {
let message = event.message.trim();
if !message.is_empty() {
self.add_to_history(history_cell::new_user_prompt(message.to_string()));
}
}
}
}
fn request_redraw(&mut self) {
self.frame_requester.schedule_frame();
}
/// Mark the active exec cell as failed (✗) and flush it into history.
fn finalize_active_exec_cell_as_failed(&mut self) {
if let Some(cell) = self.active_exec_cell.take() {
let cell = cell.into_failed();
// Insert finalized exec into history and keep grouping consistent.
self.add_to_history(cell);
}
}
// If idle and there are queued inputs, submit exactly one to start the next turn.
fn maybe_send_next_queued_input(&mut self) {
if self.bottom_pane.is_task_running() {
return;
}
if let Some(user_message) = self.queued_user_messages.pop_front() {
self.submit_user_message(user_message);
}
// Update the list to reflect the remaining queued messages (if any).
self.refresh_queued_user_messages();
}
/// Rebuild and update the queued user messages from the current queue.
fn refresh_queued_user_messages(&mut self) {
let messages: Vec<String> = self
.queued_user_messages
.iter()
.map(|m| m.text.clone())
.collect();
self.bottom_pane.set_queued_user_messages(messages);
}
pub(crate) fn add_diff_in_progress(&mut self) {
self.request_redraw();
}
pub(crate) fn on_diff_complete(&mut self) {
self.request_redraw();
}
pub(crate) fn add_status_output(&mut self) {
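        // Borrow the running total when token info is available; otherwise
        // borrow a zeroed default so the status cell can always render.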
let default_usage;
let usage_ref = if let Some(ti) = &self.token_info {
&ti.total_token_usage
} else {
default_usage = TokenUsage::default();
&default_usage
};
self.add_to_history(history_cell::new_status_output(
&self.config,
usage_ref,
&self.conversation_id,
));
}
/// Open a popup to choose the model preset (model + reasoning effort).
pub(crate) fn open_model_popup(&mut self) {
let current_model = self.config.model.clone();
let current_effort = self.config.model_reasoning_effort;
let presets: &[ModelPreset] = builtin_model_presets();
let mut items: Vec<SelectionItem> = Vec::new();
for preset in presets.iter() {
let name = preset.label.to_string();
let description = Some(preset.description.to_string());
let is_current = preset.model == current_model && preset.effort == current_effort;
let model_slug = preset.model.to_string();
let effort = preset.effort;
let current_model = current_model.clone();
let actions: Vec<SelectionAction> = vec![Box::new(move |tx| {
tx.send(AppEvent::CodexOp(Op::OverrideTurnContext {
cwd: None,
approval_policy: None,
sandbox_policy: None,
model: Some(model_slug.clone()),
effort: Some(effort),
summary: None,
}));
tx.send(AppEvent::UpdateModel(model_slug.clone()));
tx.send(AppEvent::UpdateReasoningEffort(effort));
tracing::info!(
"New model: {}, New effort: {}, Current model: {}, Current effort: {}",
model_slug.clone(),
effort,
current_model,
current_effort
);
})];
items.push(SelectionItem {
name,
description,
is_current,
actions,
});
}
self.bottom_pane.show_selection_view(
"Select model and reasoning level".to_string(),
Some("Switch between OpenAI models for this and future Codex CLI session".to_string()),
Some("Press Enter to confirm or Esc to go back".to_string()),
items,
);
}
/// Open a popup to choose the approvals mode (ask for approval policy + sandbox policy).
pub(crate) fn open_approvals_popup(&mut self) {
let current_approval = self.config.approval_policy;
let current_sandbox = self.config.sandbox_policy.clone();
let mut items: Vec<SelectionItem> = Vec::new();
let presets: Vec<ApprovalPreset> = builtin_approval_presets();
for preset in presets.into_iter() {
let is_current =
current_approval == preset.approval && current_sandbox == preset.sandbox;
let approval = preset.approval;
let sandbox = preset.sandbox.clone();
let name = preset.label.to_string();
let description = Some(preset.description.to_string());
let actions: Vec<SelectionAction> = vec![Box::new(move |tx| {
tx.send(AppEvent::CodexOp(Op::OverrideTurnContext {
cwd: None,
approval_policy: Some(approval),
sandbox_policy: Some(sandbox.clone()),
model: None,
effort: None,
summary: None,
}));
tx.send(AppEvent::UpdateAskForApprovalPolicy(approval));
tx.send(AppEvent::UpdateSandboxPolicy(sandbox.clone()));
})];
items.push(SelectionItem {
name,
description,
is_current,
actions,
});
}
self.bottom_pane.show_selection_view(
"Select Approval Mode".to_string(),
None,
Some("Press Enter to confirm or Esc to go back".to_string()),
items,
);
}
/// Set the approval policy in the widget's config copy.
pub(crate) fn set_approval_policy(&mut self, policy: AskForApproval) {
self.config.approval_policy = policy;
}
/// Set the sandbox policy in the widget's config copy.
pub(crate) fn set_sandbox_policy(&mut self, policy: SandboxPolicy) {
self.config.sandbox_policy = policy;
}
/// Set the reasoning effort in the widget's config copy.
pub(crate) fn set_reasoning_effort(&mut self, effort: ReasoningEffortConfig) {
self.config.model_reasoning_effort = effort;
}
/// Set the model in the widget's config copy.
pub(crate) fn set_model(&mut self, model: String) {
self.config.model = model;
}
pub(crate) fn add_mcp_output(&mut self) {
if self.config.mcp_servers.is_empty() {
self.add_to_history(history_cell::empty_mcp_output());
} else {
self.submit_op(Op::ListMcpTools);
}
}
/// Forward file-search results to the bottom pane.
pub(crate) fn apply_file_search_result(&mut self, query: String, matches: Vec<FileMatch>) {
self.bottom_pane.on_file_search_result(query, matches);
}
/// Handle Ctrl-C key press.
fn on_ctrl_c(&mut self) {
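// Give the bottom pane (e.g. an open modal) the first chance to handle the key.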
if self.bottom_pane.on_ctrl_c() == CancellationEvent::Handled {
return;
}
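// A task is in flight: interrupt it and hint that a second Ctrl-C quits.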
if self.bottom_pane.is_task_running() {
self.bottom_pane.show_ctrl_c_quit_hint();
self.submit_op(Op::Interrupt);
return;
}
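// Idle with nothing to cancel: shut the session down.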
self.submit_op(Op::Shutdown);
}
pub(crate) fn composer_is_empty(&self) -> bool {
self.bottom_pane.composer_is_empty()
}
/// True when the UI is in the regular composer state with no running task,
/// no modal overlay (e.g. approvals or status indicator), and no composer popups.
/// In this state Esc-Esc backtracking is enabled.
pub(crate) fn is_normal_backtrack_mode(&self) -> bool {
self.bottom_pane.is_normal_backtrack_mode()
}
pub(crate) fn insert_str(&mut self, text: &str) {
self.bottom_pane.insert_str(text);
}
/// Replace the composer content with the provided text and reset cursor.
pub(crate) fn set_composer_text(&mut self, text: String) {
self.bottom_pane.set_composer_text(text);
}
pub(crate) fn show_esc_backtrack_hint(&mut self) {
self.bottom_pane.show_esc_backtrack_hint();
}
pub(crate) fn clear_esc_backtrack_hint(&mut self) {
self.bottom_pane.clear_esc_backtrack_hint();
}
/// Forward an `Op` directly to codex.
pub(crate) fn submit_op(&self, op: Op) {
// Record outbound operation for session replay fidelity.
crate::session_log::log_outbound_op(&op);
if let Err(e) = self.codex_op_tx.send(op) {
tracing::error!("failed to submit op: {e}");
}
}
fn on_list_mcp_tools(&mut self, ev: McpListToolsResponseEvent) {
self.add_to_history(history_cell::new_mcp_tools_output(&self.config, ev.tools));
}
fn on_list_custom_prompts(&mut self, ev: ListCustomPromptsResponseEvent) {
let len = ev.custom_prompts.len();
debug!("received {len} custom prompts");
// Forward to bottom pane so the slash popup can show them now.
self.bottom_pane.set_custom_prompts(ev.custom_prompts);
}
/// Programmatically submit a user text message as if typed in the
/// composer. The text will be added to conversation history and sent to
/// the agent.
pub(crate) fn submit_text_message(&mut self, text: String) {
if text.is_empty() {
return;
}
self.submit_user_message(text.into());
}
pub(crate) fn token_usage(&self) -> TokenUsage {
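// Fall back to default (zero) usage when none has been recorded yet.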
self.token_info
.as_ref()
.map(|ti| ti.total_token_usage.clone())
.unwrap_or_default()
}
pub(crate) fn conversation_id(&self) -> Option<ConversationId> {
self.conversation_id
}
/// Return a reference to the widget's current config (includes any
/// runtime overrides applied via TUI, e.g., model or approval policy).
pub(crate) fn config_ref(&self) -> &Config {
&self.config
}
pub(crate) fn clear_token_usage(&mut self) {
self.token_info = None;
self.bottom_pane.set_token_usage(None);
}
pub fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
let [_, bottom_pane_area] = self.layout_areas(area);
self.bottom_pane.cursor_pos(bottom_pane_area)
}
}
impl WidgetRef for &ChatWidget {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
let [active_cell_area, bottom_pane_area] = self.layout_areas(area);
(&self.bottom_pane).render(bottom_pane_area, buf);
if !active_cell_area.is_empty()
&& let Some(cell) = &self.active_exec_cell
{
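// Shift down one row to keep a blank separator line above the active cell.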
let mut active_cell_area = active_cell_area;
active_cell_area.y = active_cell_area.y.saturating_add(1);
active_cell_area.height -= 1;
cell.render_ref(active_cell_area, buf);
}
}
}
const EXAMPLE_PROMPTS: [&str; 6] = [
"Explain this codebase",
"Summarize recent commits",
"Implement {feature}",
"Find and fix a bug in @filename",
"Write tests for @filename",
"Improve documentation in @filename",
];
// Extract the first bold (Markdown) element of the form **...** from `s`.
// Returns the trimmed inner text if it is non-empty; otherwise `None`.
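// e.g. extract_first_bold("run **cargo test** now") == Some("cargo test".to_string())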
fn extract_first_bold(s: &str) -> Option<String> {
let bytes = s.as_bytes();
let mut i = 0usize;
while i + 1 < bytes.len() {
if bytes[i] == b'*' && bytes[i + 1] == b'*' {
let start = i + 2;
let mut j = start;
while j + 1 < bytes.len() {
if bytes[j] == b'*' && bytes[j + 1] == b'*' {
// Found closing **
let inner = &s[start..j];
let trimmed = inner.trim();
if !trimmed.is_empty() {
return Some(trimmed.to_string());
} else {
return None;
}
}
j += 1;
}
// No closing; stop searching (wait for more deltas)
return None;
}
i += 1;
}
None
}
#[cfg(test)]
mod tests;