This release represents a comprehensive transformation of the codebase from Codex to LLMX, enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup

- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation

- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after the mass rename

### Phase 3: LiteLLM Integration

- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between the Responses API and the Chat Completions API (see the sketch after this list)
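The Responses API → Chat Completions fallback mentioned above can be pictured with the minimal Rust sketch below. It is illustrative only: `call_responses_api`, `call_chat_completions`, the `ApiError` variants, and the default URL are hypothetical stand-ins, not the actual llmx implementation; only the LLMX_BASE_URL and LLMX_API_KEY variable names come from this release.

```rust
use std::env;

/// Hypothetical error type; real llmx error handling is richer.
#[derive(Debug)]
enum ApiError {
    /// The configured provider/route does not implement the Responses API.
    ResponsesUnsupported,
    Other(String),
}

/// Resolve the endpoint from the renamed environment variable.
/// LLMX_BASE_URL replaces OPENAI_BASE_URL; the default here is made up.
fn base_url() -> String {
    env::var("LLMX_BASE_URL").unwrap_or_else(|_| "http://localhost:4000".to_string())
}

/// LLMX_API_KEY is the unified key for authentication.
fn api_key() -> Option<String> {
    env::var("LLMX_API_KEY").ok()
}

// Placeholder clients; real ones would POST to `{base_url}/responses`
// and `{base_url}/chat/completions` respectively.
fn call_responses_api(_prompt: &str) -> Result<String, ApiError> {
    Err(ApiError::ResponsesUnsupported)
}

fn call_chat_completions(prompt: &str) -> Result<String, ApiError> {
    Ok(format!("(stub completion for: {prompt})"))
}

/// Prefer the Responses API, but fall back to the OpenAI-compatible
/// Chat Completions API when the provider does not support it.
fn complete(prompt: &str) -> Result<String, ApiError> {
    match call_responses_api(prompt) {
        Err(ApiError::ResponsesUnsupported) => call_chat_completions(prompt),
        other => other,
    }
}

fn main() {
    println!("base URL: {}", base_url());
    println!("key set:  {}", api_key().is_some());
    match complete("hello") {
        Ok(text) => println!("{text}"),
        Err(err) => eprintln!("request failed: {err:?}"),
    }
}
```

Treating "unsupported" as a distinct error variant keeps the fallback explicit, rather than silently retrying every failure against the second API.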
### Phase 4: TypeScript/Node.js Components

- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated TypeScript SDK to use new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation

- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation

- Fixed all Rust tests for new naming scheme
- Updated snapshot tests in TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for new API endpoints
- Ensured compatibility with Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline

- Updated GitHub Actions workflows for LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for new package names
- Made Apple code signing optional in release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to workspace
- Updated dotslash configuration for new binary names

### Phase 8: Final Polish

- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**

- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*)
- npm package renamed to `@valknar/llmx`

**New Features:**

- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**

- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**

- Updated Cargo.lock with new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM integration, preserving existing functionality (aside from the renames listed under Breaking Changes) while opening support for a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
```rust
use std::collections::HashMap;
use std::io;
use std::io::Write;

/// Events emitted while pulling a model from Ollama.
#[derive(Debug, Clone)]
pub enum PullEvent {
    /// A human-readable status message (e.g., "verifying", "writing").
    Status(String),
    /// Byte-level progress update for a specific layer digest.
    ChunkProgress {
        digest: String,
        total: Option<u64>,
        completed: Option<u64>,
    },
    /// The pull finished successfully.
    Success,

    /// Error event with a message.
    Error(String),
}

/// A simple observer for pull progress events. Implementations decide how to
/// render progress (CLI, TUI, logs, ...).
pub trait PullProgressReporter {
    fn on_event(&mut self, event: &PullEvent) -> io::Result<()>;
}

/// A minimal CLI reporter that writes inline progress to stderr.
pub struct CliProgressReporter {
    printed_header: bool,
    last_line_len: usize,
    last_completed_sum: u64,
    last_instant: std::time::Instant,
    totals_by_digest: HashMap<String, (u64, u64)>,
}

impl Default for CliProgressReporter {
    fn default() -> Self {
        Self::new()
    }
}

impl CliProgressReporter {
    pub fn new() -> Self {
        Self {
            printed_header: false,
            last_line_len: 0,
            last_completed_sum: 0,
            last_instant: std::time::Instant::now(),
            totals_by_digest: HashMap::new(),
        }
    }
}

impl PullProgressReporter for CliProgressReporter {
    fn on_event(&mut self, event: &PullEvent) -> io::Result<()> {
        let mut out = std::io::stderr();
        match event {
            PullEvent::Status(status) => {
                // Avoid noisy manifest messages; otherwise show status inline.
                if status.eq_ignore_ascii_case("pulling manifest") {
                    return Ok(());
                }
                let pad = self.last_line_len.saturating_sub(status.len());
                let line = format!("\r{status}{}", " ".repeat(pad));
                self.last_line_len = status.len();
                out.write_all(line.as_bytes())?;
                out.flush()
            }
            PullEvent::ChunkProgress {
                digest,
                total,
                completed,
            } => {
                if let Some(t) = *total {
                    self.totals_by_digest
                        .entry(digest.clone())
                        .or_insert((0, 0))
                        .0 = t;
                }
                if let Some(c) = *completed {
                    self.totals_by_digest
                        .entry(digest.clone())
                        .or_insert((0, 0))
                        .1 = c;
                }

                let (sum_total, sum_completed) = self
                    .totals_by_digest
                    .values()
                    .fold((0u64, 0u64), |acc, (t, c)| (acc.0 + *t, acc.1 + *c));
                if sum_total > 0 {
                    if !self.printed_header {
                        let gb = (sum_total as f64) / (1024.0 * 1024.0 * 1024.0);
                        let header = format!("Downloading model: total {gb:.2} GB\n");
                        out.write_all(b"\r\x1b[2K")?;
                        out.write_all(header.as_bytes())?;
                        self.printed_header = true;
                    }
                    let now = std::time::Instant::now();
                    let dt = now
                        .duration_since(self.last_instant)
                        .as_secs_f64()
                        .max(0.001);
                    let dbytes = sum_completed.saturating_sub(self.last_completed_sum) as f64;
                    let speed_mb_s = dbytes / (1024.0 * 1024.0) / dt;
                    self.last_completed_sum = sum_completed;
                    self.last_instant = now;

                    let done_gb = (sum_completed as f64) / (1024.0 * 1024.0 * 1024.0);
                    let total_gb = (sum_total as f64) / (1024.0 * 1024.0 * 1024.0);
                    let pct = (sum_completed as f64) * 100.0 / (sum_total as f64);
                    let text =
                        format!("{done_gb:.2}/{total_gb:.2} GB ({pct:.1}%) {speed_mb_s:.1} MB/s");
                    let pad = self.last_line_len.saturating_sub(text.len());
                    let line = format!("\r{text}{}", " ".repeat(pad));
                    self.last_line_len = text.len();
                    out.write_all(line.as_bytes())?;
                    out.flush()
                } else {
                    Ok(())
                }
            }
            PullEvent::Error(_) => {
                // This will be handled by the caller, so we don't do anything
                // here or the error will be printed twice.
                Ok(())
            }
            PullEvent::Success => {
                out.write_all(b"\n")?;
                out.flush()
            }
        }
    }
}

/// For now the TUI reporter delegates to the CLI reporter. This keeps UI and
/// CLI behavior aligned until a dedicated TUI integration is implemented.
#[derive(Default)]
pub struct TuiProgressReporter(CliProgressReporter);

impl PullProgressReporter for TuiProgressReporter {
    fn on_event(&mut self, event: &PullEvent) -> io::Result<()> {
        self.0.on_event(event)
    }
}
```
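As a usage sketch for the reporter above: a caller owning a stream of `PullEvent`s drives a `CliProgressReporter` like this. The digest and byte counts are made-up values standing in for a live Ollama pull, not real output.

```rust
fn main() -> std::io::Result<()> {
    let mut reporter = CliProgressReporter::new();

    // A made-up event sequence standing in for a live Ollama pull stream.
    let events = [
        PullEvent::Status("pulling manifest".to_string()), // filtered out by the reporter
        PullEvent::ChunkProgress {
            digest: "sha256:0000".to_string(),
            total: Some(2_147_483_648),   // 2 GiB layer
            completed: Some(536_870_912), // 0.5 GiB done so far
        },
        PullEvent::ChunkProgress {
            digest: "sha256:0000".to_string(),
            total: None,
            completed: Some(2_147_483_648), // layer finished
        },
        PullEvent::Success, // prints the final newline
    ];

    for event in &events {
        reporter.on_event(event)?;
    }
    Ok(())
}
```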