llmx/llmx-rs/core/tests/suite/live_cli.rs
Sebastian Krüger 3c7efc58c8 feat: Complete LLMX v0.1.0 - Rebrand from Codex with LiteLLM Integration
This release represents a comprehensive transformation of the codebase from Codex to LLMX,
enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup
- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation
- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after mass rename

### Phase 3: LiteLLM Integration
- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between the Responses API and the Chat Completions API (see the sketch below)
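
A minimal sketch of the fallback idea, written as illustrative Rust rather than the project's actual implementation; `call_responses_api`, `call_chat_completions_api`, and `complete_with_fallback` are hypothetical names:

```rust
/// Hypothetical stub: a real client would POST the prompt to /v1/responses.
fn call_responses_api(_prompt: &str) -> Result<String, String> {
    Err("provider does not support the Responses API".to_string())
}

/// Hypothetical stub: a real client would POST to the OpenAI-compatible
/// /v1/chat/completions endpoint exposed by LiteLLM.
fn call_chat_completions_api(prompt: &str) -> Result<String, String> {
    Ok(format!("completion for: {prompt}"))
}

/// Try the Responses API first and fall back to Chat Completions on failure.
fn complete_with_fallback(prompt: &str) -> Result<String, String> {
    call_responses_api(prompt).or_else(|_| call_chat_completions_api(prompt))
}
```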

### Phase 4: TypeScript/Node.js Components
- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated TypeScript SDK to use new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation
- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/ (see the sketch after this list)
- Added comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation
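
A minimal sketch of the config-path change, assuming the usual home-directory layout on Unix-like systems; this is illustrative, not the project's actual config-loading code:

```rust
use std::path::PathBuf;

/// Resolve the new config directory: `~/.codex/` becomes `~/.llmx/`.
/// Uses the `HOME` environment variable, so this sketch is Unix-only.
fn llmx_config_dir() -> Option<PathBuf> {
    std::env::var_os("HOME").map(|home| PathBuf::from(home).join(".llmx"))
}
```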

### Phase 6: Testing & Validation
- Fixed all Rust tests for new naming scheme
- Updated snapshot tests in TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for new API endpoints
- Ensured compatibility with Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline
- Updated GitHub Actions workflows for LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for new package names
- Made Apple code signing optional in release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to workspace
- Updated dotslash configuration for new binary names

### Phase 8: Final Polish
- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**
- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*; see the sketch below)
- npm package renamed to `@valknar/llmx`
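
A minimal sketch of how a caller might honor the renamed variables; the fallback to the legacy OPENAI_* names and the default base URL are assumptions for migration convenience, not something the release notes guarantee:

```rust
use std::env;

/// Prefer LLMX_BASE_URL, fall back to the legacy OPENAI_BASE_URL, then to the
/// OpenAI default (assumed here, not specified by the release notes).
fn resolve_base_url() -> String {
    env::var("LLMX_BASE_URL")
        .or_else(|_| env::var("OPENAI_BASE_URL"))
        .unwrap_or_else(|_| "https://api.openai.com/v1".to_string())
}

/// Prefer the unified LLMX_API_KEY, falling back to OPENAI_API_KEY.
fn resolve_api_key() -> Option<String> {
    env::var("LLMX_API_KEY")
        .or_else(|_| env::var("OPENAI_API_KEY"))
        .ok()
}
```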

**New Features:**
- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**
- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**
- Updated Cargo.lock with new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM
integration. Existing functionality is preserved, aside from the breaking renames
listed above, while support opens up to a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
2025-11-12 20:40:44 +01:00

#![expect(clippy::expect_used)]
//! Optional smoke tests that hit the real OpenAI /v1/responses endpoint. They are `#[ignore]` by
//! default so CI stays deterministic and free. Developers can run them locally with
//! `cargo test --test live_cli -- --ignored` provided they set a valid `OPENAI_API_KEY`.
use assert_cmd::prelude::*;
use predicates::prelude::*;
use std::process::Command;
use std::process::Stdio;
use tempfile::TempDir;
fn require_api_key() -> String {
std::env::var("OPENAI_API_KEY")
.expect("OPENAI_API_KEY env var not set — skip running live tests")
}
/// Helper that spawns the binary inside a TempDir with minimal flags. Returns (Assert, TempDir).
fn run_live(prompt: &str) -> (assert_cmd::assert::Assert, TempDir) {
#![expect(clippy::unwrap_used)]
use std::io::Read;
use std::io::Write;
use std::thread;
let dir = TempDir::new().unwrap();
// Build a plain `std::process::Command` so we have full control over the underlying stdio
// handles. `assert_cmd`s own `Command` wrapper always forces stdout/stderr to be piped
// internally which prevents us from streaming them live to the terminal (see its `spawn`
// implementation). Instead we configure the std `Command` ourselves, then later hand the
// resulting `Output` to `assert_cmd` for the familiar assertions.
let mut cmd = Command::cargo_bin("llmx-rs").unwrap();
cmd.current_dir(dir.path());
cmd.env("OPENAI_API_KEY", require_api_key());
// We want three things at once:
// 1. live streaming of the child's stdout/stderr while the test is running
// 2. captured output so we can keep using `assert_cmd`'s `Assert` helpers
// 3. cross-platform behavior (best effort)
//
// To get that we:
// • set both stdout and stderr to `piped()` so we can read them programmatically
// • spawn a thread for each stream that copies bytes into two sinks:
//   - the parent process' stdout/stderr for live visibility
//   - an in-memory buffer so we can pass it to `assert_cmd` later
// Pass the prompt through the `--` separator so the CLI knows when user input ends.
cmd.arg("--allow-no-git-exec")
.arg("-v")
.arg("--")
.arg(prompt);
cmd.stdin(Stdio::piped());
cmd.stdout(Stdio::piped());
cmd.stderr(Stdio::piped());
let mut child = cmd.spawn().expect("failed to spawn llmx-rs");
// Send the terminating newline so Session::run exits after the first turn.
child
.stdin
.as_mut()
.expect("child stdin unavailable")
.write_all(b"\n")
.expect("failed to write to child stdin");
// Helper that tees a ChildStdout/ChildStderr into both the parent's stdio and a Vec<u8>.
fn tee<R: Read + Send + 'static>(
mut reader: R,
mut writer: impl Write + Send + 'static,
) -> thread::JoinHandle<Vec<u8>> {
thread::spawn(move || {
let mut buf = Vec::new();
let mut chunk = [0u8; 4096];
loop {
match reader.read(&mut chunk) {
Ok(0) => break,
Ok(n) => {
writer.write_all(&chunk[..n]).ok();
writer.flush().ok();
buf.extend_from_slice(&chunk[..n]);
}
Err(_) => break,
}
}
buf
})
}
let stdout_handle = tee(
child.stdout.take().expect("child stdout"),
std::io::stdout(),
);
let stderr_handle = tee(
child.stderr.take().expect("child stderr"),
std::io::stderr(),
);
let status = child.wait().expect("failed to wait on child");
let stdout = stdout_handle.join().expect("stdout thread panicked");
let stderr = stderr_handle.join().expect("stderr thread panicked");
let output = std::process::Output {
status,
stdout,
stderr,
};
(output.assert(), dir)
}
#[ignore]
#[test]
fn live_create_file_hello_txt() {
if std::env::var("OPENAI_API_KEY").is_err() {
eprintln!("skipping live_create_file_hello_txt OPENAI_API_KEY not set");
return;
}
let (assert, dir) = run_live(
"Use the shell tool with the apply_patch command to create a file named hello.txt containing the text 'hello'.",
);
assert.success();
let path = dir.path().join("hello.txt");
assert!(path.exists(), "hello.txt was not created by the model");
let contents = std::fs::read_to_string(path).unwrap();
assert_eq!(contents.trim(), "hello");
}
#[ignore]
#[test]
fn live_print_working_directory() {
if std::env::var("OPENAI_API_KEY").is_err() {
eprintln!("skipping live_print_working_directory OPENAI_API_KEY not set");
return;
}
let (assert, dir) = run_live("Print the current working directory using the shell function.");
assert
.success()
.stdout(predicate::str::contains(dir.path().to_string_lossy()));
}