This release represents a comprehensive transformation of the codebase from Codex to LLMX, enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup

- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation

- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after the mass rename

### Phase 3: LiteLLM Integration

- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variable: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback between the Responses API and the Chat Completions API

### Phase 4: TypeScript/Node.js Components

- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated the TypeScript SDK to use the new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced the mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation

- Updated all configuration files to use LLMX naming
- Rewrote the README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added a comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation

- Fixed all Rust tests for the new naming scheme
- Updated snapshot tests in the TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for the new API endpoints
- Ensured compatibility with the Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline

- Updated GitHub Actions workflows for the LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for the new package names
- Made Apple code signing optional in the release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to the workspace
- Updated dotslash configuration for the new binary names

### Phase 8: Final Polish

- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across the codebase
- Updated issue and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**

- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*)
- npm package renamed to `@valknar/llmx`

**New Features:**

- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**

- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**

- Updated Cargo.lock with the new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for the LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM integration, preserving existing functionality while opening support for a wide ecosystem of LLM providers; users migrating from Codex should review the breaking changes above.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
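For users migrating from Codex, the breaking changes amount to a handful of renames. A minimal migration sketch, assuming a default setup (the paths, variable names, and binary name come from the notes above; the endpoint URL and key are placeholders, with a local LiteLLM proxy as one possible backend):

```shell
# Migration sketch: Codex -> LLMX (endpoint and key values are placeholders).

# 1. Move the config directory to its new location, if not already done.
if [ -d "$HOME/.codex" ] && [ ! -d "$HOME/.llmx" ]; then
  mv "$HOME/.codex" "$HOME/.llmx"
fi

# 2. LLMX_BASE_URL replaces OPENAI_BASE_URL; any OpenAI-compatible
#    endpoint works, e.g. a LiteLLM proxy running locally.
export LLMX_BASE_URL="http://localhost:4000"

# 3. LLMX_API_KEY is the unified key (here a placeholder value).
export LLMX_API_KEY="sk-placeholder"

# 4. Invoke the renamed binary:
#    llmx --version   (was: codex --version)
```

Scripts that still reference the `codex` binary or `CODEX_*` variables will need the same renames applied.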
Rust, 154 lines, 4.8 KiB
#![cfg(feature = "vt100-tests")]
#![expect(clippy::expect_used)]

use crate::test_backend::VT100Backend;
use ratatui::layout::Rect;
use ratatui::style::Stylize;
use ratatui::text::Line;

// Small helper macro to assert a collection contains an item with a clearer
// failure message.
macro_rules! assert_contains {
    ($collection:expr, $item:expr $(,)?) => {
        assert!(
            $collection.contains(&$item),
            "Expected {:?} to contain {:?}",
            $collection,
            $item
        );
    };
    ($collection:expr, $item:expr, $($arg:tt)+) => {
        assert!($collection.contains(&$item), $($arg)+);
    };
}

struct TestScenario {
    term: llmx_tui::custom_terminal::Terminal<VT100Backend>,
}

impl TestScenario {
    fn new(width: u16, height: u16, viewport: Rect) -> Self {
        let backend = VT100Backend::new(width, height);
        let mut term = llmx_tui::custom_terminal::Terminal::with_options(backend)
            .expect("failed to construct terminal");
        term.set_viewport_area(viewport);
        Self { term }
    }

    fn run_insert(&mut self, lines: Vec<Line<'static>>) {
        llmx_tui::insert_history::insert_history_lines(&mut self.term, lines)
            .expect("Failed to insert history lines in test");
    }
}

#[test]
fn basic_insertion_no_wrap() {
    // Screen of 20x6; viewport is the last row (height=1 at y=5)
    let area = Rect::new(0, 5, 20, 1);
    let mut scenario = TestScenario::new(20, 6, area);

    let lines = vec!["first".into(), "second".into()];
    scenario.run_insert(lines);
    let rows = scenario.term.backend().vt100().screen().contents();
    assert_contains!(rows, String::from("first"));
    assert_contains!(rows, String::from("second"));
}

#[test]
fn long_token_wraps() {
    let area = Rect::new(0, 5, 20, 1);
    let mut scenario = TestScenario::new(20, 6, area);

    let long = "A".repeat(45); // > 2 lines at width 20
    let lines = vec![long.clone().into()];
    scenario.run_insert(lines);
    let screen = scenario.term.backend().vt100().screen();

    // Count total A's on the screen
    let mut count_a = 0usize;
    for row in 0..6 {
        for col in 0..20 {
            if let Some(cell) = screen.cell(row, col)
                && let Some(ch) = cell.contents().chars().next()
                && ch == 'A'
            {
                count_a += 1;
            }
        }
    }

    assert_eq!(
        count_a,
        long.len(),
        "wrapped content did not preserve all characters"
    );
}

#[test]
fn emoji_and_cjk() {
    let area = Rect::new(0, 5, 20, 1);
    let mut scenario = TestScenario::new(20, 6, area);

    let text = String::from("😀😀😀😀😀 你好世界");
    let lines = vec![text.clone().into()];
    scenario.run_insert(lines);
    let rows = scenario.term.backend().vt100().screen().contents();
    for ch in text.chars().filter(|c| !c.is_whitespace()) {
        assert!(
            rows.contains(ch),
            "missing character {ch:?} in reconstructed screen"
        );
    }
}

#[test]
fn mixed_ansi_spans() {
    let area = Rect::new(0, 5, 20, 1);
    let mut scenario = TestScenario::new(20, 6, area);

    let line = vec!["red".red(), "+plain".into()].into();
    scenario.run_insert(vec![line]);
    let rows = scenario.term.backend().vt100().screen().contents();
    assert_contains!(rows, String::from("red+plain"));
}

#[test]
fn cursor_restoration() {
    let area = Rect::new(0, 5, 20, 1);
    let mut scenario = TestScenario::new(20, 6, area);

    let lines = vec!["x".into()];
    scenario.run_insert(lines);
    assert_eq!(scenario.term.last_known_cursor_pos, (0, 0).into());
}

#[test]
fn word_wrap_no_mid_word_split() {
    // Screen of 40x10; viewport is the last row
    let area = Rect::new(0, 9, 40, 1);
    let mut scenario = TestScenario::new(40, 10, area);

    let sample = "Years passed, and Willowmere thrived in peace and friendship. Mira’s herb garden flourished with both ordinary and enchanted plants, and travelers spoke of the kindness of the woman who tended them.";
    scenario.run_insert(vec![sample.into()]);
    let joined = scenario.term.backend().vt100().screen().contents();
    assert!(
        !joined.contains("bo\nth"),
        "word 'both' should not be split across lines:\n{joined}"
    );
}

#[test]
fn em_dash_and_space_word_wrap() {
    // Repro from report: ensure we break before "inside", not mid-word.
    let area = Rect::new(0, 9, 40, 1);
    let mut scenario = TestScenario::new(40, 10, area);

    let sample = "Mara found an old key on the shore. Curious, she opened a tarnished box half-buried in sand—and inside lay a single, glowing seed.";
    scenario.run_insert(vec![sample.into()]);
    let joined = scenario.term.backend().vt100().screen().contents();
    assert!(
        !joined.contains("insi\nde"),
        "word 'inside' should not be split across lines:\n{joined}"
    );
}