// llmx/codex-rs/tui/src/history_cell.rs

use codex_ansi_escape::ansi_escape_line;
use codex_common::elapsed::format_duration;
use codex_core::config::Config;
use codex_core::protocol::FileChange;
use codex_core::protocol::SessionConfiguredEvent;
use ratatui::prelude::*;
use ratatui::style::Color;
use ratatui::style::Modifier;
use ratatui::style::Style;
use ratatui::text::Line as RtLine;
use ratatui::text::Span as RtSpan;
use std::collections::HashMap;
use std::path::PathBuf;
use std::time::Duration;
use std::time::Instant;
use crate::exec_command::escape_command;
use crate::markdown::append_markdown;
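/// Output captured from a completed exec tool call, along with its exit code
/// and how long the command ran.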
pub(crate) struct CommandOutput {
pub(crate) exit_code: i32,
pub(crate) stdout: String,
pub(crate) stderr: String,
pub(crate) duration: Duration,
}
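/// Whether a patch cell is shown as an approval request or because the apply
/// step has begun (and, if so, whether it was auto-approved).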
pub(crate) enum PatchEventType {
ApprovalRequest,
ApplyBegin { auto_approved: bool },
}
/// Represents an event to display in the conversation history. Returns its
/// `Vec<Line<'static>>` representation to make it easier to display in a
/// scrollable list.
pub(crate) enum HistoryCell {
/// Welcome message.
WelcomeMessage { lines: Vec<Line<'static>> },
/// Message from the user.
UserPrompt { lines: Vec<Line<'static>> },
/// Message from the agent.
AgentMessage { lines: Vec<Line<'static>> },
/// Reasoning event from the agent.
AgentReasoning { lines: Vec<Line<'static>> },
/// An exec tool call that has not finished yet.
ActiveExecCommand {
call_id: String,
/// The shell command, escaped and formatted.
command: String,
start: Instant,
lines: Vec<Line<'static>>,
},
/// Completed exec tool call.
CompletedExecCommand { lines: Vec<Line<'static>> },
/// An MCP tool call that has not finished yet.
ActiveMcpToolCall {
call_id: String,
/// `server.tool` fully-qualified name so we can show a concise label
fq_tool_name: String,
/// Formatted invocation that mirrors the `$ cmd ...` style of exec
/// commands. We keep this around so the completed state can reuse the
/// exact same text without re-formatting.
invocation: String,
start: Instant,
lines: Vec<Line<'static>>,
},
/// Completed MCP tool call.
CompletedMcpToolCall { lines: Vec<Line<'static>> },
/// Background event
BackgroundEvent { lines: Vec<Line<'static>> },
/// Error event from the backend.
ErrorEvent { lines: Vec<Line<'static>> },
/// Info describing the newly-initialized session.
SessionInfo { lines: Vec<Line<'static>> },
/// A pending code patch that is awaiting user approval. Mirrors the
/// behaviour of `ActiveExecCommand` so the user sees *what* patch the
/// model wants to apply before being prompted to approve or deny it.
PendingPatch {
/// Identifier so that a future `PatchApplyEnd` can update the entry
/// with the final status (not yet implemented).
lines: Vec<Line<'static>>,
},
}
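/// Maximum number of output lines shown inline for a tool call; anything
/// beyond this is summarized as "... N additional lines".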
const TOOL_CALL_MAX_LINES: usize = 5;
impl HistoryCell {
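    /// Build the cell shown when a session is configured. The first event
    /// renders the welcome banner plus a summary of the effective config;
    /// later events only emit a notice when the model actually changed.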
pub(crate) fn new_session_info(
config: &Config,
event: SessionConfiguredEvent,
is_first_event: bool,
) -> Self {
let SessionConfiguredEvent {
model,
session_id,
history_log_id: _,
history_entry_count: _,
} = event;
if is_first_event {
let mut lines: Vec<Line<'static>> = vec![
Line::from(vec![
"OpenAI ".into(),
"Codex".bold(),
" (research preview)".dim(),
]),
Line::from(""),
Line::from(vec![
"codex session".magenta().bold(),
" ".into(),
session_id.to_string().dim(),
]),
];
let entries = vec![
("workdir", config.cwd.display().to_string()),
("model", config.model.clone()),
("provider", config.model_provider_id.clone()),
("approval", format!("{:?}", config.approval_policy)),
("sandbox", format!("{:?}", config.sandbox_policy)),
];
for (key, value) in entries {
lines.push(Line::from(vec![format!("{key}: ").bold(), value.into()]));
}
lines.push(Line::from(""));
HistoryCell::WelcomeMessage { lines }
} else if config.model == model {
HistoryCell::SessionInfo { lines: vec![] }
} else {
let lines = vec![
Line::from("model changed:".magenta().bold()),
Line::from(format!("requested: {}", config.model)),
Line::from(format!("used: {}", model)),
Line::from(""),
];
HistoryCell::SessionInfo { lines }
}
}
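/// Render a prompt submitted by the user under a "user" header, one line per
/// line of input.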
pub(crate) fn new_user_prompt(message: String) -> Self {
let mut lines: Vec<Line<'static>> = Vec::new();
lines.push(Line::from("user".cyan().bold()));
lines.extend(message.lines().map(|l| Line::from(l.to_string())));
lines.push(Line::from(""));
HistoryCell::UserPrompt { lines }
}
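/// Render an assistant message under a "codex" header, parsed as Markdown via
/// `append_markdown`.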
pub(crate) fn new_agent_message(message: String) -> Self {
let mut lines: Vec<Line<'static>> = Vec::new();
lines.push(Line::from("codex".magenta().bold()));
append_markdown(&message, &mut lines);
lines.push(Line::from(""));
HistoryCell::AgentMessage { lines }
}
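/// Render an agent reasoning summary under an italic "thinking" header, also
/// parsed as Markdown.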
pub(crate) fn new_agent_reasoning(text: String) -> Self {
let mut lines: Vec<Line<'static>> = Vec::new();
lines.push(Line::from("thinking".magenta().italic()));
append_markdown(&text, &mut lines);
lines.push(Line::from(""));
HistoryCell::AgentReasoning { lines }
}
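/// Start tracking an exec tool call that is still running: records the start
/// time and shows the escaped command under a "command running..." banner.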
pub(crate) fn new_active_exec_command(call_id: String, command: Vec<String>) -> Self {
let command_escaped = escape_command(&command);
let start = Instant::now();
let lines: Vec<Line<'static>> = vec![
Line::from(vec!["command".magenta(), " running...".dim()]),
Line::from(format!("$ {command_escaped}")),
Line::from(""),
];
HistoryCell::ActiveExecCommand {
call_id,
command: command_escaped,
start,
lines,
}
}
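/// Render a finished exec tool call: exit code, duration, and up to
/// `TOOL_CALL_MAX_LINES` lines of stdout (stderr when the exit code is
/// non-zero).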
pub(crate) fn new_completed_exec_command(command: String, output: CommandOutput) -> Self {
let CommandOutput {
exit_code,
stdout,
stderr,
duration,
} = output;
let mut lines: Vec<Line<'static>> = Vec::new();
// Title depends on whether we have output yet.
let title_line = Line::from(vec![
"command".magenta(),
format!(
" (code: {}, duration: {})",
exit_code,
format_duration(duration)
)
.dim(),
]);
lines.push(title_line);
let src = if exit_code == 0 { stdout } else { stderr };
lines.push(Line::from(format!("$ {command}")));
let mut lines_iter = src.lines();
for raw in lines_iter.by_ref().take(TOOL_CALL_MAX_LINES) {
lines.push(ansi_escape_line(raw).dim());
}
let remaining = lines_iter.count();
if remaining > 0 {
lines.push(Line::from(format!("... {} additional lines", remaining)).dim());
}
lines.push(Line::from(""));
HistoryCell::CompletedExecCommand { lines }
}
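/// Start tracking an MCP tool call that is still running, formatting the
/// invocation as `server.tool(args)` with compact JSON arguments to mirror
/// the `$ cmd` style used for exec commands.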
pub(crate) fn new_active_mcp_tool_call(
call_id: String,
server: String,
tool: String,
arguments: Option<serde_json::Value>,
) -> Self {
let fq_tool_name = format!("{server}.{tool}");
// Format the arguments as compact JSON so they roughly fit on one
// line. If there are no arguments we keep it empty so the invocation
// mirrors a function-style call.
let args_str = arguments
.as_ref()
.map(|v| {
// Use compact form to keep things short but readable.
serde_json::to_string(v).unwrap_or_else(|_| v.to_string())
})
.unwrap_or_default();
let invocation = if args_str.is_empty() {
format!("{fq_tool_name}()")
} else {
format!("{fq_tool_name}({args_str})")
};
let start = Instant::now();
let title_line = Line::from(vec!["tool".magenta(), " running...".dim()]);
let lines: Vec<Line<'static>> = vec![
title_line,
Line::from(format!("$ {invocation}")),
Line::from(""),
];
HistoryCell::ActiveMcpToolCall {
call_id,
fq_tool_name,
invocation,
start,
lines,
}
}
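/// Render a finished MCP tool call with its status, duration, and up to
/// `TOOL_CALL_MAX_LINES` lines of the pretty-printed JSON result.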
pub(crate) fn new_completed_mcp_tool_call(
fq_tool_name: String,
invocation: String,
start: Instant,
success: bool,
result: Option<serde_json::Value>,
) -> Self {
let duration = format_duration(start.elapsed());
let status_str = if success { "success" } else { "failed" };
let title_line = Line::from(vec![
"tool".magenta(),
format!(" {fq_tool_name} ({status_str}, duration: {})", duration).dim(),
]);
let mut lines: Vec<Line<'static>> = Vec::new();
lines.push(title_line);
lines.push(Line::from(format!("$ {invocation}")));
if let Some(res_val) = result {
let json_pretty =
serde_json::to_string_pretty(&res_val).unwrap_or_else(|_| res_val.to_string());
let mut iter = json_pretty.lines();
for raw in iter.by_ref().take(TOOL_CALL_MAX_LINES) {
lines.push(Line::from(raw.to_string()).dim());
}
let remaining = iter.count();
if remaining > 0 {
lines.push(Line::from(format!("... {} additional lines", remaining)).dim());
}
}
lines.push(Line::from(""));
HistoryCell::CompletedMcpToolCall { lines }
}
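/// Render a background event under a dimmed "event" header; the message body
/// is dimmed as well.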
pub(crate) fn new_background_event(message: String) -> Self {
let mut lines: Vec<Line<'static>> = Vec::new();
lines.push(Line::from("event".dim()));
lines.extend(message.lines().map(|l| Line::from(l.to_string()).dim()));
lines.push(Line::from(""));
HistoryCell::BackgroundEvent { lines }
}
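/// Render an error reported by the backend with a bold red "ERROR: " prefix.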
pub(crate) fn new_error_event(message: String) -> Self {
let lines: Vec<Line<'static>> = vec![
vec!["ERROR: ".red().bold(), message.into()].into(),
"".into(),
];
HistoryCell::ErrorEvent { lines }
}
/// Create a new `PendingPatch` cell that lists the file-level summary of
/// a proposed patch. The summary lines should already be formatted (e.g.
/// "A path/to/file.rs").
pub(crate) fn new_patch_event(
event_type: PatchEventType,
changes: HashMap<PathBuf, FileChange>,
) -> Self {
let title = match event_type {
PatchEventType::ApprovalRequest => "proposed patch",
PatchEventType::ApplyBegin {
auto_approved: true,
} => "applying patch",
PatchEventType::ApplyBegin {
auto_approved: false,
} => {
let lines = vec![Line::from("patch applied".magenta().bold())];
return Self::PendingPatch { lines };
}
};
let summary_lines = create_diff_summary(changes);
let mut lines: Vec<Line<'static>> = Vec::new();
// Header similar to the command formatter so patches are visually
// distinct while still fitting the overall colour scheme.
lines.push(Line::from(title.magenta().bold()));
for line in summary_lines {
if line.starts_with('+') {
lines.push(line.green().into());
} else if line.starts_with('-') {
lines.push(line.red().into());
} else if let Some(space_idx) = line.find(' ') {
let kind_owned = line[..space_idx].to_string();
let rest_owned = line[space_idx + 1..].to_string();
let style_for = |fg: Color| Style::default().fg(fg).add_modifier(Modifier::BOLD);
let styled_kind = match kind_owned.as_str() {
"A" => RtSpan::styled(kind_owned.clone(), style_for(Color::Green)),
"D" => RtSpan::styled(kind_owned.clone(), style_for(Color::Red)),
"M" => RtSpan::styled(kind_owned.clone(), style_for(Color::Yellow)),
"R" | "C" => RtSpan::styled(kind_owned.clone(), style_for(Color::Cyan)),
_ => RtSpan::raw(kind_owned.clone()),
};
let styled_line =
RtLine::from(vec![styled_kind, RtSpan::raw(" "), RtSpan::raw(rest_owned)]);
lines.push(styled_line);
} else {
lines.push(Line::from(line));
}
}
lines.push(Line::from(""));
HistoryCell::PendingPatch { lines }
}
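    /// Borrow the pre-rendered lines for this cell so callers can append them
    /// to the scrollable conversation history.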
pub(crate) fn lines(&self) -> &Vec<Line<'static>> {
match self {
HistoryCell::WelcomeMessage { lines, .. }
| HistoryCell::UserPrompt { lines, .. }
| HistoryCell::AgentMessage { lines, .. }
| HistoryCell::AgentReasoning { lines, .. }
| HistoryCell::BackgroundEvent { lines, .. }
| HistoryCell::ErrorEvent { lines, .. }
| HistoryCell::SessionInfo { lines, .. }
| HistoryCell::ActiveExecCommand { lines, .. }
| HistoryCell::CompletedExecCommand { lines, .. }
| HistoryCell::ActiveMcpToolCall { lines, .. }
| HistoryCell::CompletedMcpToolCall { lines, .. }
| HistoryCell::PendingPatch { lines, .. } => lines,
}
}
}
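/// Flatten the change set into display strings: one `git status`-style
/// summary line per file, plus the raw unified diff lines for updates.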
fn create_diff_summary(changes: HashMap<PathBuf, FileChange>) -> Vec<String> {
// Build a concise, human-readable summary list similar to the
// `git status` short format so the user can reason about the
// patch without scrolling.
let mut summaries: Vec<String> = Vec::new();
for (path, change) in &changes {
use codex_core::protocol::FileChange::*;
match change {
Add { content } => {
let added = content.lines().count();
summaries.push(format!("A {} (+{added})", path.display()));
}
Delete => {
summaries.push(format!("D {}", path.display()));
}
Update {
unified_diff,
move_path,
} => {
if let Some(new_path) = move_path {
summaries.push(format!("R {}{}", path.display(), new_path.display(),));
} else {
summaries.push(format!("M {}", path.display(),));
}
summaries.extend(unified_diff.lines().map(|s| s.to_string()));
}
}
}
summaries
}