feat: Complete LLMX v0.1.0 - Rebrand from Codex with LiteLLM Integration

This release represents a comprehensive transformation of the codebase from Codex to LLMX,
enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup
- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation
- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after mass rename

### Phase 3: LiteLLM Integration
- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between Responses API and Chat Completions API
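
The fallback order described above can be sketched in std-only Rust. Both `call_responses_api` and `call_chat_completions` here are illustrative stand-ins for the real client functions, with the primary endpoint simulated as failing:

```rust
// Hypothetical stand-ins for the two wire APIs; the real client calls differ.
fn call_responses_api(_prompt: &str) -> Result<String, String> {
    // Simulate a primary endpoint that rejects the request.
    Err("responses API unavailable".to_string())
}

fn call_chat_completions(prompt: &str) -> Result<String, String> {
    Ok(format!("chat-completions reply to: {prompt}"))
}

/// Try the Responses API first; on failure, fall back to Chat Completions,
/// preserving both error messages if the fallback also fails.
fn complete(prompt: &str) -> Result<String, String> {
    call_responses_api(prompt).or_else(|primary_err| {
        call_chat_completions(prompt)
            .map_err(|fallback_err| format!("{primary_err}; fallback failed: {fallback_err}"))
    })
}
```

`or_else` keeps the happy path allocation-free when the primary call succeeds, which is why this shape is common for two-tier fallbacks.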

### Phase 4: TypeScript/Node.js Components
- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated TypeScript SDK to use new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation
- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation
- Fixed all Rust tests for new naming scheme
- Updated snapshot tests in TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for new API endpoints
- Ensured compatibility with Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline
- Updated GitHub Actions workflows for LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for new package names
- Made Apple code signing optional in release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to workspace
- Updated dotslash configuration for new binary names

### Phase 8: Final Polish
- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**
- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*)
- npm package renamed to `@valknar/llmx`
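
A minimal sketch of resolving the renamed variables. The lookup is injected so the logic can be exercised without touching the process environment; in real use it would be `|k| std::env::var(k).ok()`. The `localhost:4000` default mirrors a common LiteLLM proxy setup and is an assumption for illustration, not a documented default:

```rust
/// Resolve the unified endpoint and key from the renamed LLMX_* variables.
/// `lookup` abstracts over `std::env::var` for testability.
fn resolve_endpoint(lookup: impl Fn(&str) -> Option<String>) -> (String, Option<String>) {
    let base_url = lookup("LLMX_BASE_URL")
        .unwrap_or_else(|| "http://localhost:4000".to_string()); // assumed proxy default
    let api_key = lookup("LLMX_API_KEY");
    (base_url, api_key)
}
```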

**New Features:**
- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**
- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**
- Updated Cargo.lock with new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM
integration. Existing functionality is preserved, apart from the renames listed
under Breaking Changes, while opening support for a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
Commit 3c7efc58c8 (parent 052b052832) by Sebastian Krüger, 2025-11-12 20:40:44 +01:00
1248 changed files with 10085 additions and 9580 deletions


@@ -0,0 +1,38 @@
//! Standard type to use with the `--approval-mode` CLI option.
//! Available when the `cli` feature is enabled for the crate.
use clap::ValueEnum;
use llmx_core::protocol::AskForApproval;
#[derive(Clone, Copy, Debug, ValueEnum)]
#[value(rename_all = "kebab-case")]
pub enum ApprovalModeCliArg {
/// Only run "trusted" commands (e.g. ls, cat, sed) without asking for user
/// approval. Will escalate to the user if the model proposes a command that
/// is not in the "trusted" set.
Untrusted,
/// Run all commands without asking for user approval.
/// Only asks for approval if a command fails to execute, in which case it
/// will escalate to the user to ask for un-sandboxed execution.
OnFailure,
/// The model decides when to ask the user for approval.
OnRequest,
/// Never ask for user approval.
/// Execution failures are immediately returned to the model.
Never,
}
impl From<ApprovalModeCliArg> for AskForApproval {
fn from(value: ApprovalModeCliArg) -> Self {
match value {
ApprovalModeCliArg::Untrusted => AskForApproval::UnlessTrusted,
ApprovalModeCliArg::OnFailure => AskForApproval::OnFailure,
ApprovalModeCliArg::OnRequest => AskForApproval::OnRequest,
ApprovalModeCliArg::Never => AskForApproval::Never,
}
}
}


@@ -0,0 +1,46 @@
use llmx_core::protocol::AskForApproval;
use llmx_core::protocol::SandboxPolicy;
/// A simple preset pairing an approval policy with a sandbox policy.
#[derive(Debug, Clone)]
pub struct ApprovalPreset {
/// Stable identifier for the preset.
pub id: &'static str,
/// Display label shown in UIs.
pub label: &'static str,
/// Short human description shown next to the label in UIs.
pub description: &'static str,
/// Approval policy to apply.
pub approval: AskForApproval,
/// Sandbox policy to apply.
pub sandbox: SandboxPolicy,
}
/// Built-in list of approval presets that pair approval and sandbox policy.
///
/// Keep this UI-agnostic so it can be reused by both TUI and MCP server.
pub fn builtin_approval_presets() -> Vec<ApprovalPreset> {
vec![
ApprovalPreset {
id: "read-only",
label: "Read Only",
description: "LLMX can read files and answer questions. LLMX requires approval to make edits, run commands, or access network.",
approval: AskForApproval::OnRequest,
sandbox: SandboxPolicy::ReadOnly,
},
ApprovalPreset {
id: "auto",
label: "Auto",
description: "LLMX can read files, make edits, and run commands in the workspace. LLMX requires approval to work outside the workspace or access network.",
approval: AskForApproval::OnRequest,
sandbox: SandboxPolicy::new_workspace_write_policy(),
},
ApprovalPreset {
id: "full-access",
label: "Full Access",
description: "LLMX can read files, make edits, and run commands with network access, without approval. Exercise caution.",
approval: AskForApproval::Never,
sandbox: SandboxPolicy::DangerFullAccess,
},
]
}


@@ -0,0 +1,173 @@
//! Support for `-c key=value` overrides shared across LLMX CLI tools.
//!
//! This module provides a [`CliConfigOverrides`] struct that can be embedded
//! into a `clap`-derived CLI struct using `#[clap(flatten)]`. Each occurrence
//! of `-c key=value` (or `--config key=value`) will be collected as a raw
//! string. Helper methods are provided to convert the raw strings into
//! key/value pairs as well as to apply them onto a mutable
//! `toml::Value` representing the configuration tree.
use clap::ArgAction;
use clap::Parser;
use serde::de::Error as SerdeError;
use toml::Value;
/// CLI option that captures arbitrary configuration overrides specified as
/// `-c key=value`. It intentionally keeps both halves **unparsed** so that the
/// calling code can decide how to interpret the right-hand side.
#[derive(Parser, Debug, Default, Clone)]
pub struct CliConfigOverrides {
/// Override a configuration value that would otherwise be loaded from
/// `~/.llmx/config.toml`. Use a dotted path (`foo.bar.baz`) to override
/// nested values. The `value` portion is parsed as TOML. If it fails to
/// parse as TOML, the raw string is used as a literal.
///
/// Examples:
/// - `-c model="o3"`
/// - `-c 'sandbox_permissions=["disk-full-read-access"]'`
/// - `-c shell_environment_policy.inherit=all`
#[arg(
short = 'c',
long = "config",
value_name = "key=value",
action = ArgAction::Append,
global = true,
)]
pub raw_overrides: Vec<String>,
}
impl CliConfigOverrides {
/// Parse the raw strings captured from the CLI into a list of `(path,
/// value)` tuples where `value` is a `toml::Value`.
pub fn parse_overrides(&self) -> Result<Vec<(String, Value)>, String> {
self.raw_overrides
.iter()
.map(|s| {
// Only split on the *first* '=' so values are free to contain
// the character.
let mut parts = s.splitn(2, '=');
let key = match parts.next() {
Some(k) => k.trim(),
None => return Err("Override missing key".to_string()),
};
let value_str = parts
.next()
.ok_or_else(|| format!("Invalid override (missing '='): {s}"))?
.trim();
if key.is_empty() {
return Err(format!("Empty key in override: {s}"));
}
// Attempt to parse as TOML. If that fails, treat it as a raw
// string. This allows convenient usage such as
// `-c model=o3` without the quotes.
let value: Value = match parse_toml_value(value_str) {
Ok(v) => v,
Err(_) => {
// Strip leading/trailing quotes if present
let trimmed = value_str.trim().trim_matches(|c| c == '"' || c == '\'');
Value::String(trimmed.to_string())
}
};
Ok((key.to_string(), value))
})
.collect()
}
/// Apply all parsed overrides onto `target`. Intermediate objects will be
/// created as necessary. Values located at the destination path will be
/// replaced.
pub fn apply_on_value(&self, target: &mut Value) -> Result<(), String> {
let overrides = self.parse_overrides()?;
for (path, value) in overrides {
apply_single_override(target, &path, value);
}
Ok(())
}
}
/// Apply a single override onto `root`, creating intermediate objects as
/// necessary.
fn apply_single_override(root: &mut Value, path: &str, value: Value) {
use toml::value::Table;
let parts: Vec<&str> = path.split('.').collect();
let mut current = root;
for (i, part) in parts.iter().enumerate() {
let is_last = i == parts.len() - 1;
if is_last {
match current {
Value::Table(tbl) => {
tbl.insert((*part).to_string(), value);
}
_ => {
let mut tbl = Table::new();
tbl.insert((*part).to_string(), value);
*current = Value::Table(tbl);
}
}
return;
}
// Traverse or create intermediate table.
match current {
Value::Table(tbl) => {
current = tbl
.entry((*part).to_string())
.or_insert_with(|| Value::Table(Table::new()));
}
_ => {
*current = Value::Table(Table::new());
if let Value::Table(tbl) = current {
current = tbl
.entry((*part).to_string())
.or_insert_with(|| Value::Table(Table::new()));
}
}
}
}
}
fn parse_toml_value(raw: &str) -> Result<Value, toml::de::Error> {
let wrapped = format!("_x_ = {raw}");
let table: toml::Table = toml::from_str(&wrapped)?;
table
.get("_x_")
.cloned()
.ok_or_else(|| SerdeError::custom("missing sentinel key"))
}
#[cfg(all(test, feature = "cli"))]
mod tests {
use super::*;
#[test]
fn parses_basic_scalar() {
let v = parse_toml_value("42").expect("parse");
assert_eq!(v.as_integer(), Some(42));
}
#[test]
fn fails_on_unquoted_string() {
assert!(parse_toml_value("hello").is_err());
}
#[test]
fn parses_array() {
let v = parse_toml_value("[1, 2, 3]").expect("parse");
let arr = v.as_array().expect("array");
assert_eq!(arr.len(), 3);
}
#[test]
fn parses_inline_table() {
let v = parse_toml_value("{a = 1, b = 2}").expect("parse");
let tbl = v.as_table().expect("table");
assert_eq!(tbl.get("a").unwrap().as_integer(), Some(1));
assert_eq!(tbl.get("b").unwrap().as_integer(), Some(2));
}
}
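
The first-`=` splitting rule above can be exercised in isolation. This std-only sketch mirrors the parsing in `parse_overrides` without the TOML coercion step (`split_override` is an illustrative helper, not part of the crate):

```rust
/// Split a raw `-c` override on the first '=' only, so values may
/// themselves contain '=' (mirrors the rule in `parse_overrides`).
fn split_override(raw: &str) -> Result<(String, String), String> {
    let mut parts = raw.splitn(2, '=');
    let key = parts.next().unwrap_or("").trim();
    let value = parts
        .next()
        .ok_or_else(|| format!("Invalid override (missing '='): {raw}"))?
        .trim();
    if key.is_empty() {
        return Err(format!("Empty key in override: {raw}"));
    }
    Ok((key.to_string(), value.to_string()))
}
```

So `-c model=o3` yields `("model", "o3")`, while `-c a=b=c` keeps the value `b=c` intact.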


@@ -0,0 +1,32 @@
use llmx_core::WireApi;
use llmx_core::config::Config;
use crate::sandbox_summary::summarize_sandbox_policy;
/// Build a list of key/value pairs summarizing the effective configuration.
pub fn create_config_summary_entries(config: &Config) -> Vec<(&'static str, String)> {
let mut entries = vec![
("workdir", config.cwd.display().to_string()),
("model", config.model.clone()),
("provider", config.model_provider_id.clone()),
("approval", config.approval_policy.to_string()),
("sandbox", summarize_sandbox_policy(&config.sandbox_policy)),
];
if config.model_provider.wire_api == WireApi::Responses
&& config.model_family.supports_reasoning_summaries
{
entries.push((
"reasoning effort",
config
.model_reasoning_effort
.map(|effort| effort.to_string())
.unwrap_or_else(|| "none".to_string()),
));
entries.push((
"reasoning summaries",
config.model_reasoning_summary.to_string(),
));
}
entries
}


@@ -0,0 +1,78 @@
use std::time::Duration;
use std::time::Instant;
/// Returns a string representing the elapsed time since `start_time` like
/// "1m 15s" or "1.50s".
pub fn format_elapsed(start_time: Instant) -> String {
format_duration(start_time.elapsed())
}
/// Convert a [`std::time::Duration`] into a human-readable, compact string.
///
/// Formatting rules:
/// * < 1 s -> "{milli}ms"
/// * < 60 s -> "{sec:.2}s" (two decimal places)
/// * >= 60 s -> "{min}m {sec:02}s"
pub fn format_duration(duration: Duration) -> String {
let millis = duration.as_millis() as i64;
format_elapsed_millis(millis)
}
fn format_elapsed_millis(millis: i64) -> String {
if millis < 1000 {
format!("{millis}ms")
} else if millis < 60_000 {
format!("{:.2}s", millis as f64 / 1000.0)
} else {
let minutes = millis / 60_000;
let seconds = (millis % 60_000) / 1000;
format!("{minutes}m {seconds:02}s")
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_format_duration_subsecond() {
// Durations < 1s should be rendered in milliseconds with no decimals.
let dur = Duration::from_millis(250);
assert_eq!(format_duration(dur), "250ms");
// Exactly zero should still work.
let dur_zero = Duration::from_millis(0);
assert_eq!(format_duration(dur_zero), "0ms");
}
#[test]
fn test_format_duration_seconds() {
// Durations between 1s (inclusive) and 60s (exclusive) should be
// printed with 2-decimal-place seconds.
let dur = Duration::from_millis(1_500); // 1.5s
assert_eq!(format_duration(dur), "1.50s");
// 59.999s rounds to 60.00s
let dur2 = Duration::from_millis(59_999);
assert_eq!(format_duration(dur2), "60.00s");
}
#[test]
fn test_format_duration_minutes() {
// Durations ≥ 1 minute should be printed as "{min}m {sec:02}s".
let dur = Duration::from_millis(75_000); // 1m15s
assert_eq!(format_duration(dur), "1m 15s");
let dur_exact = Duration::from_millis(60_000); // 1m0s
assert_eq!(format_duration(dur_exact), "1m 00s");
let dur_long = Duration::from_millis(3_601_000);
assert_eq!(format_duration(dur_long), "60m 01s");
}
#[test]
fn test_format_duration_one_hour_has_space() {
let dur_hour = Duration::from_millis(3_600_000);
assert_eq!(format_duration(dur_hour), "60m 00s");
}
}


@@ -0,0 +1,62 @@
use std::collections::HashMap;
pub fn format_env_display(env: Option<&HashMap<String, String>>, env_vars: &[String]) -> String {
let mut parts: Vec<String> = Vec::new();
if let Some(map) = env {
let mut pairs: Vec<_> = map.iter().collect();
pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
parts.extend(pairs.into_iter().map(|(key, _)| format!("{key}=*****")));
}
if !env_vars.is_empty() {
parts.extend(env_vars.iter().map(|var| format!("{var}=*****")));
}
if parts.is_empty() {
"-".to_string()
} else {
parts.join(", ")
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn returns_dash_when_empty() {
assert_eq!(format_env_display(None, &[]), "-");
let empty_map = HashMap::new();
assert_eq!(format_env_display(Some(&empty_map), &[]), "-");
}
#[test]
fn formats_sorted_env_pairs() {
let mut env = HashMap::new();
env.insert("B".to_string(), "two".to_string());
env.insert("A".to_string(), "one".to_string());
assert_eq!(format_env_display(Some(&env), &[]), "A=*****, B=*****");
}
#[test]
fn formats_env_vars_without_values() {
let vars = vec!["TOKEN".to_string(), "PATH".to_string()];
assert_eq!(format_env_display(None, &vars), "TOKEN=*****, PATH=*****");
}
#[test]
fn combines_env_pairs_and_vars() {
let mut env = HashMap::new();
env.insert("HOME".to_string(), "/tmp".to_string());
let vars = vec!["TOKEN".to_string()];
assert_eq!(
format_env_display(Some(&env), &vars),
"HOME=*****, TOKEN=*****"
);
}
}


@@ -0,0 +1,177 @@
/// Simple case-insensitive subsequence matcher used for fuzzy filtering.
///
/// Returns the indices (character positions) of the matched characters in the
/// ORIGINAL `haystack` string and a score where smaller is better.
///
/// Unicode correctness: we perform the match on a lowercased copy of the
/// haystack and needle but maintain a mapping from each character in the
/// lowercased haystack back to the original character index in `haystack`.
/// This ensures the returned indices can be safely used with
/// `str::chars().enumerate()` consumers for highlighting, even when
/// lowercasing expands certain characters (e.g., ß → ss, İ → i̇).
pub fn fuzzy_match(haystack: &str, needle: &str) -> Option<(Vec<usize>, i32)> {
if needle.is_empty() {
return Some((Vec::new(), i32::MAX));
}
let mut lowered_chars: Vec<char> = Vec::new();
let mut lowered_to_orig_char_idx: Vec<usize> = Vec::new();
for (orig_idx, ch) in haystack.chars().enumerate() {
for lc in ch.to_lowercase() {
lowered_chars.push(lc);
lowered_to_orig_char_idx.push(orig_idx);
}
}
let lowered_needle: Vec<char> = needle.to_lowercase().chars().collect();
let mut result_orig_indices: Vec<usize> = Vec::with_capacity(lowered_needle.len());
let mut last_lower_pos: Option<usize> = None;
let mut cur = 0usize;
for &nc in lowered_needle.iter() {
let mut found_at: Option<usize> = None;
while cur < lowered_chars.len() {
if lowered_chars[cur] == nc {
found_at = Some(cur);
cur += 1;
break;
}
cur += 1;
}
let pos = found_at?;
result_orig_indices.push(lowered_to_orig_char_idx[pos]);
last_lower_pos = Some(pos);
}
let first_lower_pos = if result_orig_indices.is_empty() {
0usize
} else {
let target_orig = result_orig_indices[0];
lowered_to_orig_char_idx
.iter()
.position(|&oi| oi == target_orig)
.unwrap_or(0)
};
// last defaults to first for single-hit; score = extra span between first/last hit
// minus needle len (≥0).
// Strongly reward prefix matches by subtracting 100 when the first hit is at index 0.
let last_lower_pos = last_lower_pos.unwrap_or(first_lower_pos);
let window =
(last_lower_pos as i32 - first_lower_pos as i32 + 1) - (lowered_needle.len() as i32);
let mut score = window.max(0);
if first_lower_pos == 0 {
score -= 100;
}
result_orig_indices.sort_unstable();
result_orig_indices.dedup();
Some((result_orig_indices, score))
}
/// Convenience wrapper to get only the indices for a fuzzy match.
pub fn fuzzy_indices(haystack: &str, needle: &str) -> Option<Vec<usize>> {
fuzzy_match(haystack, needle).map(|(mut idx, _)| {
idx.sort_unstable();
idx.dedup();
idx
})
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn ascii_basic_indices() {
let (idx, score) = match fuzzy_match("hello", "hl") {
Some(v) => v,
None => panic!("expected a match"),
};
assert_eq!(idx, vec![0, 2]);
// 'h' at 0, 'l' at 2 -> window 1; start-of-string bonus applies (-100)
assert_eq!(score, -99);
}
#[test]
fn unicode_dotted_i_istanbul_highlighting() {
let (idx, score) = match fuzzy_match("İstanbul", "is") {
Some(v) => v,
None => panic!("expected a match"),
};
assert_eq!(idx, vec![0, 1]);
// Matches at lowered positions 0 and 2 -> window 1; start-of-string bonus applies
assert_eq!(score, -99);
}
#[test]
fn unicode_german_sharp_s_casefold() {
assert!(fuzzy_match("straße", "strasse").is_none());
}
#[test]
fn prefer_contiguous_match_over_spread() {
let (_idx_a, score_a) = match fuzzy_match("abc", "abc") {
Some(v) => v,
None => panic!("expected a match"),
};
let (_idx_b, score_b) = match fuzzy_match("a-b-c", "abc") {
Some(v) => v,
None => panic!("expected a match"),
};
// Contiguous window -> 0; start-of-string bonus -> -100
assert_eq!(score_a, -100);
// Spread over 5 chars for 3-letter needle -> window 2; with bonus -> -98
assert_eq!(score_b, -98);
assert!(score_a < score_b);
}
#[test]
fn start_of_string_bonus_applies() {
let (_idx_a, score_a) = match fuzzy_match("file_name", "file") {
Some(v) => v,
None => panic!("expected a match"),
};
let (_idx_b, score_b) = match fuzzy_match("my_file_name", "file") {
Some(v) => v,
None => panic!("expected a match"),
};
// Start-of-string contiguous -> window 0; bonus -> -100
assert_eq!(score_a, -100);
// Non-prefix contiguous -> window 0; no bonus -> 0
assert_eq!(score_b, 0);
assert!(score_a < score_b);
}
#[test]
fn empty_needle_matches_with_max_score_and_no_indices() {
let (idx, score) = match fuzzy_match("anything", "") {
Some(v) => v,
None => panic!("empty needle should match"),
};
assert!(idx.is_empty());
assert_eq!(score, i32::MAX);
}
#[test]
fn case_insensitive_matching_basic() {
let (idx, score) = match fuzzy_match("FooBar", "foO") {
Some(v) => v,
None => panic!("expected a match"),
};
assert_eq!(idx, vec![0, 1, 2]);
// Contiguous prefix match (case-insensitive) -> window 0 with bonus
assert_eq!(score, -100);
}
#[test]
fn indices_are_deduped_for_multichar_lowercase_expansion() {
let needle = "\u{0069}\u{0307}"; // "i" + combining dot above
let (idx, score) = match fuzzy_match("İ", needle) {
Some(v) => v,
None => panic!("expected a match"),
};
assert_eq!(idx, vec![0]);
// Lowercasing 'İ' expands to two chars; contiguous prefix -> window 0 with bonus
assert_eq!(score, -100);
}
}

llmx-rs/common/src/lib.rs (new file, 39 lines)

@@ -0,0 +1,39 @@
#[cfg(feature = "cli")]
mod approval_mode_cli_arg;
#[cfg(feature = "elapsed")]
pub mod elapsed;
#[cfg(feature = "cli")]
pub use approval_mode_cli_arg::ApprovalModeCliArg;
#[cfg(feature = "cli")]
mod sandbox_mode_cli_arg;
#[cfg(feature = "cli")]
pub use sandbox_mode_cli_arg::SandboxModeCliArg;
#[cfg(feature = "cli")]
pub mod format_env_display;
#[cfg(any(feature = "cli", test))]
mod config_override;
#[cfg(feature = "cli")]
pub use config_override::CliConfigOverrides;
mod sandbox_summary;
#[cfg(feature = "sandbox_summary")]
pub use sandbox_summary::summarize_sandbox_policy;
mod config_summary;
pub use config_summary::create_config_summary_entries;
// Shared fuzzy matcher (used by TUI selection popups and other UI filtering)
pub mod fuzzy_match;
// Shared model presets used by TUI and MCP server
pub mod model_presets;
// Shared approval presets (AskForApproval + Sandbox) used by TUI and MCP server
// Not to be confused with AskForApproval, which we should probably rename to EscalationPolicy.
pub mod approval_presets;


@@ -0,0 +1,119 @@
use llmx_app_server_protocol::AuthMode;
use llmx_core::protocol_config_types::ReasoningEffort;
/// A reasoning effort option that can be surfaced for a model.
#[derive(Debug, Clone, Copy)]
pub struct ReasoningEffortPreset {
/// Effort level that the model supports.
pub effort: ReasoningEffort,
/// Short human description shown next to the effort in UIs.
pub description: &'static str,
}
/// Metadata describing a Llmx-supported model.
#[derive(Debug, Clone, Copy)]
pub struct ModelPreset {
/// Stable identifier for the preset.
pub id: &'static str,
/// Model slug (e.g., "gpt-5").
pub model: &'static str,
/// Display name shown in UIs.
pub display_name: &'static str,
/// Short human description shown in UIs.
pub description: &'static str,
/// Reasoning effort applied when none is explicitly chosen.
pub default_reasoning_effort: ReasoningEffort,
/// Supported reasoning effort options.
pub supported_reasoning_efforts: &'static [ReasoningEffortPreset],
/// Whether this is the default model for new users.
pub is_default: bool,
}
const PRESETS: &[ModelPreset] = &[
ModelPreset {
id: "gpt-5-llmx",
model: "gpt-5-llmx",
display_name: "gpt-5-llmx",
description: "Optimized for llmx.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Low,
description: "Fastest responses with limited reasoning",
},
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Dynamically adjusts reasoning based on the task",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: true,
},
ModelPreset {
id: "gpt-5-llmx-mini",
model: "gpt-5-llmx-mini",
display_name: "gpt-5-llmx-mini",
description: "Optimized for llmx. Cheaper, faster, but less capable.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Dynamically adjusts reasoning based on the task",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: false,
},
ModelPreset {
id: "gpt-5",
model: "gpt-5",
display_name: "gpt-5",
description: "Broad world knowledge with strong general reasoning.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Minimal,
description: "Fastest responses with little reasoning",
},
ReasoningEffortPreset {
effort: ReasoningEffort::Low,
description: "Balances speed with some reasoning; useful for straightforward queries and short explanations",
},
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Provides a solid balance of reasoning depth and latency for general-purpose tasks",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: false,
},
];
pub fn builtin_model_presets(auth_mode: Option<AuthMode>) -> Vec<ModelPreset> {
let allow_llmx_mini = matches!(auth_mode, Some(AuthMode::ChatGPT));
PRESETS
.iter()
.filter(|preset| allow_llmx_mini || preset.id != "gpt-5-llmx-mini")
.copied()
.collect()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn only_one_default_model_is_configured() {
let default_models = PRESETS.iter().filter(|preset| preset.is_default).count();
assert_eq!(default_models, 1);
}
}


@@ -0,0 +1,28 @@
//! Standard type to use with the `--sandbox` (`-s`) CLI option.
//!
//! This mirrors the variants of [`llmx_core::protocol::SandboxPolicy`], but
//! without any of the associated data so it can be expressed as a simple flag
//! on the command-line. Users that need to tweak the advanced options for
//! `workspace-write` can continue to do so via `-c` overrides or their
//! `config.toml`.
use clap::ValueEnum;
use llmx_protocol::config_types::SandboxMode;
#[derive(Clone, Copy, Debug, ValueEnum)]
#[value(rename_all = "kebab-case")]
pub enum SandboxModeCliArg {
ReadOnly,
WorkspaceWrite,
DangerFullAccess,
}
impl From<SandboxModeCliArg> for SandboxMode {
fn from(value: SandboxModeCliArg) -> Self {
match value {
SandboxModeCliArg::ReadOnly => SandboxMode::ReadOnly,
SandboxModeCliArg::WorkspaceWrite => SandboxMode::WorkspaceWrite,
SandboxModeCliArg::DangerFullAccess => SandboxMode::DangerFullAccess,
}
}
}


@@ -0,0 +1,36 @@
use llmx_core::protocol::SandboxPolicy;
pub fn summarize_sandbox_policy(sandbox_policy: &SandboxPolicy) -> String {
match sandbox_policy {
SandboxPolicy::DangerFullAccess => "danger-full-access".to_string(),
SandboxPolicy::ReadOnly => "read-only".to_string(),
SandboxPolicy::WorkspaceWrite {
writable_roots,
network_access,
exclude_tmpdir_env_var,
exclude_slash_tmp,
} => {
let mut summary = "workspace-write".to_string();
let mut writable_entries = Vec::<String>::new();
writable_entries.push("workdir".to_string());
if !*exclude_slash_tmp {
writable_entries.push("/tmp".to_string());
}
if !*exclude_tmpdir_env_var {
writable_entries.push("$TMPDIR".to_string());
}
writable_entries.extend(
writable_roots
.iter()
.map(|p| p.to_string_lossy().to_string()),
);
summary.push_str(&format!(" [{}]", writable_entries.join(", ")));
if *network_access {
summary.push_str(" (network access enabled)");
}
summary
}
}
}