// llmx/codex-rs/tui/src/app_event.rs

use codex_common::model_presets::ModelPreset;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::ConversationPathResponseEvent;
use codex_core::protocol::Event;
use codex_core::protocol::SandboxPolicy;
use codex_core::protocol_config_types::ReasoningEffort;
use codex_file_search::FileMatch;
use crate::bottom_pane::ApprovalRequest;
use crate::history_cell::HistoryCell;
use std::path::PathBuf;

#[allow(clippy::large_enum_variant)]
#[derive(Debug)]
pub(crate) enum AppEvent {
    CodexEvent(Event),
    /// Start a new session.
    NewSession,
    /// Request to exit the application gracefully.
    ExitRequest,
    /// Forward an `Op` to the Agent. Using an `AppEvent` for this avoids
    /// bubbling channels through layers of widgets.
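    ///
    /// A minimal sketch of how a widget might forward an op (the
    /// `app_event_tx` sender and the `Op::Interrupt` variant are
    /// illustrative assumptions, not definitions in this file):
    ///
    /// ```ignore
    /// // Send an Op to the agent without threading a channel through widgets.
    /// app_event_tx.send(AppEvent::CodexOp(Op::Interrupt));
    /// ```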
    CodexOp(codex_core::protocol::Op),
    /// Kick off an asynchronous file search for the given query (text after
    /// the `@`). Previous searches may be cancelled by the app layer so there
    /// is at most one in-flight search.
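    ///
    /// A minimal sketch of dispatching a search from the composer (the
    /// `app_event_tx` sender is an illustrative assumption):
    ///
    /// ```ignore
    /// // The user has typed "@app_ev" in the composer.
    /// app_event_tx.send(AppEvent::StartFileSearch("app_ev".to_string()));
    /// ```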
    StartFileSearch(String),
    /// Result of a completed asynchronous file search. The `query` echoes the
    /// original search term so the UI can decide whether the results are
    /// still relevant.
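    ///
    /// A minimal sketch of the staleness check on the receiving side (the
    /// `current_query` and `popup` names are illustrative assumptions):
    ///
    /// ```ignore
    /// AppEvent::FileSearchResult { query, matches } => {
    ///     // Ignore results that raced with a newer keystroke.
    ///     if query == current_query {
    ///         popup.set_matches(matches);
    ///     }
    /// }
    /// ```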
    FileSearchResult {
        query: String,
        matches: Vec<FileMatch>,
    },
    /// Result of computing a `/diff` command.
    DiffResult(String),
    /// Insert a finished cell into the transcript history.
    InsertHistoryCell(Box<dyn HistoryCell>),
    /// Start the timer that drives `CommitTick` events while streamed
    /// output is being committed to history.
    StartCommitAnimation,
    /// Stop the commit animation timer.
    StopCommitAnimation,
    /// One tick of the commit animation.
    CommitTick,
    /// Update the current reasoning effort in the running app and widget.
    UpdateReasoningEffort(Option<ReasoningEffort>),
    /// Update the current model slug in the running app and widget.
    UpdateModel(String),
    /// Persist the selected model and reasoning effort to the appropriate config.
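    ///
    /// A minimal sketch of emitting this event (the `app_event_tx` sender and
    /// the concrete model/effort values are illustrative assumptions):
    ///
    /// ```ignore
    /// app_event_tx.send(AppEvent::PersistModelSelection {
    ///     model: "o3".to_string(),
    ///     effort: Some(ReasoningEffort::Medium),
    /// });
    /// ```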
    PersistModelSelection {
        model: String,
        effort: Option<ReasoningEffort>,
    },
    /// Open the reasoning selection popup after picking a model.
    OpenReasoningPopup {
        model: String,
        presets: Vec<ModelPreset>,
    },
    /// Update the current approval policy in the running app and widget.
    UpdateAskForApprovalPolicy(AskForApproval),
    /// Update the current sandbox policy in the running app and widget.
    UpdateSandboxPolicy(SandboxPolicy),
    /// Forwarded conversation history snapshot from the current conversation.
    ConversationHistory(ConversationPathResponseEvent),
    /// Open the branch picker option from the review popup.
    OpenReviewBranchPicker(PathBuf),
    /// Open the commit picker option from the review popup.
    OpenReviewCommitPicker(PathBuf),
    /// Open the custom prompt option from the review popup.
    OpenReviewCustomPrompt,
    /// Open the approval popup.
    FullScreenApprovalRequest(ApprovalRequest),
}