use codex_ansi_escape::ansi_escape_line;
use codex_common::elapsed::format_duration;
use codex_core::config::Config;
use codex_core::protocol::FileChange;
use ratatui::prelude::*;
use ratatui::style::Color;
use ratatui::style::Modifier;
use ratatui::style::Style;
use ratatui::text::Line as RtLine;
use ratatui::text::Span as RtSpan;
use std::collections::HashMap;
use std::path::PathBuf;
use std::time::Duration;
use std::time::Instant;
use crate::exec_command::escape_command;
use crate::markdown::append_markdown;
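/// Captured output of a completed exec command, retained so it can be
/// rendered in the conversation history.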
pub(crate) struct CommandOutput {
pub(crate) exit_code: i32,
pub(crate) stdout: String,
pub(crate) stderr: String,
pub(crate) duration: Duration,
}
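/// Distinguishes whether a patch cell is being shown as an approval request
/// or because application of the patch has already begun.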
pub(crate) enum PatchEventType {
ApprovalRequest,
ApplyBegin { auto_approved: bool },
}
/// Represents an event to display in the conversation history. Each variant
/// caches its `Vec<Line<'static>>` representation so it is cheap to render
/// in a scrollable list.
pub(crate) enum HistoryCell {
/// Welcome message.
WelcomeMessage { lines: Vec<Line<'static>> },
/// Message from the user.
UserPrompt { lines: Vec<Line<'static>> },
/// Message from the agent.
AgentMessage { lines: Vec<Line<'static>> },
/// Reasoning event from the agent.
AgentReasoning { lines: Vec<Line<'static>> },
/// An exec tool call that has not finished yet.
ActiveExecCommand {
call_id: String,
/// The shell command, escaped and formatted.
command: String,
start: Instant,
lines: Vec<Line<'static>>,
},
/// Completed exec tool call.
CompletedExecCommand { lines: Vec<Line<'static>> },
/// An MCP tool call that has not finished yet.
ActiveMcpToolCall {
call_id: String,
        /// Fully-qualified `server.tool` name so we can show a concise label.
fq_tool_name: String,
/// Formatted invocation that mirrors the `$ cmd ...` style of exec
/// commands. We keep this around so the completed state can reuse the
/// exact same text without re-formatting.
invocation: String,
start: Instant,
lines: Vec<Line<'static>>,
},
/// Completed MCP tool call.
CompletedMcpToolCall { lines: Vec<Line<'static>> },
/// Background event
BackgroundEvent { lines: Vec<Line<'static>> },
/// Error event from the backend.
ErrorEvent { lines: Vec<Line<'static>> },
    /// Info describing the newly-initialized session.
SessionInfo { lines: Vec<Line<'static>> },
/// A pending code patch that is awaiting user approval. Mirrors the
/// behaviour of `ActiveExecCommand` so the user sees *what* patch the
/// model wants to apply before being prompted to approve or deny it.
PendingPatch {
        /// Pre-rendered summary lines for the proposed patch. (An identifier
        /// so that a future `PatchApplyEnd` can update the entry with the
        /// final status is not yet implemented.)
lines: Vec<Line<'static>>,
},
}
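/// Maximum number of output lines to display for a tool call; anything
/// beyond this is collapsed into a "... N additional lines" marker.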
const TOOL_CALL_MAX_LINES: usize = 5;
impl HistoryCell {
pub(crate) fn new_welcome_message(config: &Config) -> Self {
let mut lines: Vec<Line<'static>> = vec![
Line::from(vec![
"OpenAI ".into(),
"Codex".bold(),
" (research preview)".dim(),
]),
Line::from(""),
Line::from("codex session:".magenta().bold()),
];
let entries = vec![
("workdir", config.cwd.display().to_string()),
("model", config.model.clone()),
("provider", config.model_provider_id.clone()),
("approval", format!("{:?}", config.approval_policy)),
("sandbox", format!("{:?}", config.sandbox_policy)),
];
for (key, value) in entries {
lines.push(Line::from(vec![format!("{key}: ").bold(), value.into()]));
}
lines.push(Line::from(""));
HistoryCell::WelcomeMessage { lines }
}
pub(crate) fn new_user_prompt(message: String) -> Self {
let mut lines: Vec<Line<'static>> = Vec::new();
lines.push(Line::from("user".cyan().bold()));
lines.extend(message.lines().map(|l| Line::from(l.to_string())));
lines.push(Line::from(""));
HistoryCell::UserPrompt { lines }
}
pub(crate) fn new_agent_message(message: String) -> Self {
let mut lines: Vec<Line<'static>> = Vec::new();
lines.push(Line::from("codex".magenta().bold()));
append_markdown(&message, &mut lines);
lines.push(Line::from(""));
HistoryCell::AgentMessage { lines }
}
pub(crate) fn new_agent_reasoning(text: String) -> Self {
let mut lines: Vec<Line<'static>> = Vec::new();
lines.push(Line::from("codex reasoning".magenta().italic()));
append_markdown(&text, &mut lines);
lines.push(Line::from(""));
HistoryCell::AgentReasoning { lines }
}
pub(crate) fn new_active_exec_command(call_id: String, command: Vec<String>) -> Self {
let command_escaped = escape_command(&command);
let start = Instant::now();
let lines: Vec<Line<'static>> = vec![
Line::from(vec!["command".magenta(), " running...".dim()]),
Line::from(format!("$ {command_escaped}")),
Line::from(""),
];
HistoryCell::ActiveExecCommand {
call_id,
command: command_escaped,
start,
lines,
}
}
pub(crate) fn new_completed_exec_command(command: String, output: CommandOutput) -> Self {
let CommandOutput {
exit_code,
stdout,
stderr,
duration,
} = output;
let mut lines: Vec<Line<'static>> = Vec::new();
        // Title summarizes the exit status and how long the command ran.
let title_line = Line::from(vec![
"command".magenta(),
format!(
" (code: {}, duration: {})",
exit_code,
format_duration(duration)
)
.dim(),
]);
lines.push(title_line);
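        // Surface stdout when the command succeeded and stderr when it
        // failed, since that is usually the more informative stream.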
let src = if exit_code == 0 { stdout } else { stderr };
lines.push(Line::from(format!("$ {command}")));
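        // Emit at most TOOL_CALL_MAX_LINES lines of output, then summarize
        // how many were elided.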
let mut lines_iter = src.lines();
for raw in lines_iter.by_ref().take(TOOL_CALL_MAX_LINES) {
lines.push(ansi_escape_line(raw).dim());
}
let remaining = lines_iter.count();
if remaining > 0 {
lines.push(Line::from(format!("... {} additional lines", remaining)).dim());
}
lines.push(Line::from(""));
HistoryCell::CompletedExecCommand { lines }
}
pub(crate) fn new_active_mcp_tool_call(
call_id: String,
server: String,
tool: String,
arguments: Option<serde_json::Value>,
) -> Self {
let fq_tool_name = format!("{server}.{tool}");
// Format the arguments as compact JSON so they roughly fit on one
// line. If there are no arguments we keep it empty so the invocation
// mirrors a function-style call.
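        // For example (hypothetical values), server "docs", tool "search",
        // and arguments {"query": "foo"} produce the invocation
        // `docs.search({"query":"foo"})`.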
let args_str = arguments
.as_ref()
.map(|v| {
// Use compact form to keep things short but readable.
serde_json::to_string(v).unwrap_or_else(|_| v.to_string())
})
.unwrap_or_default();
let invocation = if args_str.is_empty() {
format!("{fq_tool_name}()")
} else {
format!("{fq_tool_name}({args_str})")
};
let start = Instant::now();
let title_line = Line::from(vec!["tool".magenta(), " running...".dim()]);
let lines: Vec<Line<'static>> = vec![
title_line,
Line::from(format!("$ {invocation}")),
Line::from(""),
];
HistoryCell::ActiveMcpToolCall {
call_id,
fq_tool_name,
invocation,
start,
lines,
}
}
pub(crate) fn new_completed_mcp_tool_call(
fq_tool_name: String,
invocation: String,
start: Instant,
success: bool,
result: Option<serde_json::Value>,
) -> Self {
let duration = format_duration(start.elapsed());
let status_str = if success { "success" } else { "failed" };
let title_line = Line::from(vec![
"tool".magenta(),
            format!(" {fq_tool_name} ({status_str}, duration: {duration})").dim(),
]);
let mut lines: Vec<Line<'static>> = Vec::new();
lines.push(title_line);
lines.push(Line::from(format!("$ {invocation}")));
if let Some(res_val) = result {
let json_pretty =
serde_json::to_string_pretty(&res_val).unwrap_or_else(|_| res_val.to_string());
let mut iter = json_pretty.lines();
for raw in iter.by_ref().take(TOOL_CALL_MAX_LINES) {
lines.push(Line::from(raw.to_string()).dim());
}
let remaining = iter.count();
if remaining > 0 {
lines.push(Line::from(format!("... {} additional lines", remaining)).dim());
}
}
lines.push(Line::from(""));
HistoryCell::CompletedMcpToolCall { lines }
}
pub(crate) fn new_background_event(message: String) -> Self {
let mut lines: Vec<Line<'static>> = Vec::new();
lines.push(Line::from("event".dim()));
lines.extend(message.lines().map(|l| Line::from(l.to_string()).dim()));
lines.push(Line::from(""));
HistoryCell::BackgroundEvent { lines }
}
pub(crate) fn new_error_event(message: String) -> Self {
let lines: Vec<Line<'static>> = vec![
vec!["ERROR: ".red().bold(), message.into()].into(),
"".into(),
];
HistoryCell::ErrorEvent { lines }
}
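    /// Render a notice when the model actually used for the session differs
    /// from the configured one; otherwise the cell is empty and renders
    /// nothing.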
pub(crate) fn new_session_info(config: &Config, model: String) -> Self {
if config.model == model {
HistoryCell::SessionInfo { lines: vec![] }
} else {
let lines = vec![
Line::from("model changed:".magenta().bold()),
Line::from(format!("requested: {}", config.model)),
Line::from(format!("used: {}", model)),
Line::from(""),
];
HistoryCell::SessionInfo { lines }
}
}
    /// Create a new `PendingPatch` cell that lists the file-level summary of
/// a proposed patch. The summary lines should already be formatted (e.g.
/// "A path/to/file.rs").
pub(crate) fn new_patch_event(
event_type: PatchEventType,
changes: HashMap<PathBuf, FileChange>,
) -> Self {
let title = match event_type {
PatchEventType::ApprovalRequest => "proposed patch",
PatchEventType::ApplyBegin {
auto_approved: true,
} => "applying patch",
PatchEventType::ApplyBegin {
auto_approved: false,
} => {
let lines = vec![Line::from("patch applied".magenta().bold())];
return Self::PendingPatch { lines };
}
};
let summary_lines = create_diff_summary(changes);
let mut lines: Vec<Line<'static>> = Vec::new();
// Header similar to the command formatter so patches are visually
// distinct while still fitting the overall colour scheme.
lines.push(Line::from(title.magenta().bold()));
for line in summary_lines {
if line.starts_with('+') {
lines.push(line.green().into());
} else if line.starts_with('-') {
lines.push(line.red().into());
} else if let Some(space_idx) = line.find(' ') {
let kind_owned = line[..space_idx].to_string();
let rest_owned = line[space_idx + 1..].to_string();
let style_for = |fg: Color| Style::default().fg(fg).add_modifier(Modifier::BOLD);
let styled_kind = match kind_owned.as_str() {
"A" => RtSpan::styled(kind_owned.clone(), style_for(Color::Green)),
"D" => RtSpan::styled(kind_owned.clone(), style_for(Color::Red)),
"M" => RtSpan::styled(kind_owned.clone(), style_for(Color::Yellow)),
"R" | "C" => RtSpan::styled(kind_owned.clone(), style_for(Color::Cyan)),
_ => RtSpan::raw(kind_owned.clone()),
};
let styled_line =
RtLine::from(vec![styled_kind, RtSpan::raw(" "), RtSpan::raw(rest_owned)]);
lines.push(styled_line);
} else {
lines.push(Line::from(line));
}
}
lines.push(Line::from(""));
HistoryCell::PendingPatch { lines }
}
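    /// Returns the cached display lines for this cell, whichever variant it
    /// is.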
pub(crate) fn lines(&self) -> &Vec<Line<'static>> {
match self {
HistoryCell::WelcomeMessage { lines, .. }
| HistoryCell::UserPrompt { lines, .. }
| HistoryCell::AgentMessage { lines, .. }
| HistoryCell::AgentReasoning { lines, .. }
| HistoryCell::BackgroundEvent { lines, .. }
| HistoryCell::ErrorEvent { lines, .. }
| HistoryCell::SessionInfo { lines, .. }
| HistoryCell::ActiveExecCommand { lines, .. }
| HistoryCell::CompletedExecCommand { lines, .. }
| HistoryCell::ActiveMcpToolCall { lines, .. }
| HistoryCell::CompletedMcpToolCall { lines, .. }
| HistoryCell::PendingPatch { lines, .. } => lines,
}
}
}
fn create_diff_summary(changes: HashMap<PathBuf, FileChange>) -> Vec<String> {
    // Build a concise, human-readable summary list similar to the
// `git status` short format so the user can reason about the
// patch without scrolling.
let mut summaries: Vec<String> = Vec::new();
for (path, change) in &changes {
use codex_core::protocol::FileChange::*;
match change {
Add { content } => {
let added = content.lines().count();
summaries.push(format!("A {} (+{added})", path.display()));
}
Delete => {
summaries.push(format!("D {}", path.display()));
}
Update {
unified_diff,
move_path,
} => {
if let Some(new_path) = move_path {
summaries.push(format!("R {}{}", path.display(), new_path.display(),));
} else {
summaries.push(format!("M {}", path.display(),));
}
summaries.extend(unified_diff.lines().map(|s| s.to_string()));
}
}
}
summaries
}
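
// A minimal sketch of unit tests for `create_diff_summary`, illustrating the
// `git status`-style short format it produces. The paths and contents below
// are hypothetical, and the tests assume the `FileChange` variants exactly as
// they are matched above.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn add_is_summarized_with_line_count() {
        // A single entry keeps the HashMap iteration order deterministic.
        let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
        changes.insert(
            PathBuf::from("src/new_file.rs"),
            FileChange::Add {
                content: "fn main() {}\n".to_string(),
            },
        );
        assert_eq!(
            create_diff_summary(changes),
            vec!["A src/new_file.rs (+1)".to_string()]
        );
    }

    #[test]
    fn rename_shows_both_paths() {
        let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
        changes.insert(
            PathBuf::from("old.rs"),
            FileChange::Update {
                unified_diff: String::new(),
                move_path: Some(PathBuf::from("new.rs")),
            },
        );
        assert_eq!(
            create_diff_summary(changes),
            vec!["R old.rs → new.rs".to_string()]
        );
    }
}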