This release represents a comprehensive transformation of the codebase from Codex to LLMX, enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup
- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation
- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after the mass rename

### Phase 3: LiteLLM Integration
- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between the Responses API and the Chat Completions API

### Phase 4: TypeScript/Node.js Components
- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated the TypeScript SDK to use the new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced the mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation
- Updated all configuration files to use LLMX naming
- Rewrote the README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added a comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation
- Fixed all Rust tests for the new naming scheme
- Updated snapshot tests in the TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for the new API endpoints
- Ensured compatibility with the Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline
- Updated GitHub Actions workflows for the LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for the new package names
- Made Apple code signing optional in the release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to the workspace
- Updated dotslash configuration for the new binary names

### Phase 8: Final Polish
- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across the codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**
- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*)
- npm package renamed to `@valknar/llmx`

**New Features:**
- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**
- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of the UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**
- Updated Cargo.lock with the new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for the LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM integration, preserving the existing feature set (note the breaking changes to names and paths above) while opening support for a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
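A minimal sketch of how a client might read the renamed configuration surface, assuming only the variable names documented above (LLMX_BASE_URL, LLMX_API_KEY); the localhost fallback URL is illustrative, not a documented default:

```rust
// Hedged sketch: reads the renamed environment variables from the
// breaking-changes list. The fallback URL is an assumption for the
// example, not a documented LLMX default.
use std::env;

fn base_url() -> String {
    // LLMX_BASE_URL replaces OPENAI_BASE_URL per the release notes.
    env::var("LLMX_BASE_URL").unwrap_or_else(|_| "http://localhost:4000".to_string())
}

fn api_key() -> Option<String> {
    // LLMX_API_KEY is the unified authentication variable.
    env::var("LLMX_API_KEY").ok()
}

fn main() {
    println!("base url: {}", base_url());
    println!("api key set: {}", api_key().is_some());
}
```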
#![expect(clippy::expect_used)]

use tempfile::TempDir;

use llmx_core::LlmxConversation;
use llmx_core::config::Config;
use llmx_core::config::ConfigOverrides;
use llmx_core::config::ConfigToml;
use regex_lite::Regex;

#[cfg(target_os = "linux")]
use assert_cmd::cargo::cargo_bin;

pub mod responses;
pub mod test_llmx;
pub mod test_llmx_exec;

#[track_caller]
pub fn assert_regex_match<'s>(pattern: &str, actual: &'s str) -> regex_lite::Captures<'s> {
    let regex = Regex::new(pattern).unwrap_or_else(|err| {
        panic!("failed to compile regex {pattern:?}: {err}");
    });
    regex
        .captures(actual)
        .unwrap_or_else(|| panic!("regex {pattern:?} did not match {actual:?}"))
}

/// Returns a default `Config` whose on-disk state is confined to the provided
/// temporary directory. Using a per-test directory keeps tests hermetic and
/// avoids clobbering a developer’s real `~/.llmx`.
pub fn load_default_config_for_test(llmx_home: &TempDir) -> Config {
    Config::load_from_base_config_with_overrides(
        ConfigToml::default(),
        default_test_overrides(),
        llmx_home.path().to_path_buf(),
    )
    .expect("defaults for test should always succeed")
}

#[cfg(target_os = "linux")]
fn default_test_overrides() -> ConfigOverrides {
    ConfigOverrides {
        llmx_linux_sandbox_exe: Some(cargo_bin("llmx-linux-sandbox")),
        ..ConfigOverrides::default()
    }
}

#[cfg(not(target_os = "linux"))]
fn default_test_overrides() -> ConfigOverrides {
    ConfigOverrides::default()
}

/// Builds an SSE stream body from a JSON fixture.
///
/// The fixture must contain an array of objects where each object represents a
/// single SSE event with at least a `type` field matching the `event:` value.
/// Additional fields become the JSON payload for the `data:` line. An object
/// with only a `type` field results in an event with no `data:` section. This
/// makes it trivial to extend the fixtures as OpenAI adds new event kinds or
/// fields.
pub fn load_sse_fixture(path: impl AsRef<std::path::Path>) -> String {
    let events: Vec<serde_json::Value> =
        serde_json::from_reader(std::fs::File::open(path).expect("read fixture"))
            .expect("parse JSON fixture");
    events
        .into_iter()
        .map(|e| {
            let kind = e
                .get("type")
                .and_then(|v| v.as_str())
                .expect("fixture event missing type");
            if e.as_object().map(|o| o.len() == 1).unwrap_or(false) {
                format!("event: {kind}\n\n")
            } else {
                format!("event: {kind}\ndata: {e}\n\n")
            }
        })
        .collect()
}
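// The framing rule implemented by the fixture helpers can be sketched with
// the standard library alone; `frame_sse_event` is an illustrative name,
// not part of this crate's API:

```rust
// Sketch of the SSE framing used by `load_sse_fixture`: a type-only event
// yields just an `event:` line, while any additional payload becomes a
// `data:` line. Illustrative helper, not crate API.
fn frame_sse_event(kind: &str, payload: Option<&str>) -> String {
    match payload {
        // Payload present: emit both the `event:` and `data:` lines.
        Some(data) => format!("event: {kind}\ndata: {data}\n\n"),
        // Type-only event: no `data:` section at all.
        None => format!("event: {kind}\n\n"),
    }
}

fn main() {
    let body: String = [
        ("response.created", Some(r#"{"type":"response.created"}"#)),
        ("response.completed", None),
    ]
    .into_iter()
    .map(|(kind, data)| frame_sse_event(kind, data))
    .collect();
    assert!(body.ends_with("event: response.completed\n\n"));
    println!("{body}");
}
```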

pub fn load_sse_fixture_with_id_from_str(raw: &str, id: &str) -> String {
    let replaced = raw.replace("__ID__", id);
    let events: Vec<serde_json::Value> =
        serde_json::from_str(&replaced).expect("parse JSON fixture");
    events
        .into_iter()
        .map(|e| {
            let kind = e
                .get("type")
                .and_then(|v| v.as_str())
                .expect("fixture event missing type");
            if e.as_object().map(|o| o.len() == 1).unwrap_or(false) {
                format!("event: {kind}\n\n")
            } else {
                format!("event: {kind}\ndata: {e}\n\n")
            }
        })
        .collect()
}

/// Same as [`load_sse_fixture`], but replaces the placeholder `__ID__` in the
/// fixture template with the supplied identifier before parsing. This lets a
/// single JSON template be reused by multiple tests that each need a unique
/// `response_id`.
pub fn load_sse_fixture_with_id(path: impl AsRef<std::path::Path>, id: &str) -> String {
    let raw = std::fs::read_to_string(path).expect("read fixture template");
    load_sse_fixture_with_id_from_str(&raw, id)
}

pub async fn wait_for_event<F>(
    llmx: &LlmxConversation,
    predicate: F,
) -> llmx_core::protocol::EventMsg
where
    F: FnMut(&llmx_core::protocol::EventMsg) -> bool,
{
    use tokio::time::Duration;
    wait_for_event_with_timeout(llmx, predicate, Duration::from_secs(1)).await
}

pub async fn wait_for_event_match<T, F>(llmx: &LlmxConversation, matcher: F) -> T
where
    F: Fn(&llmx_core::protocol::EventMsg) -> Option<T>,
{
    let ev = wait_for_event(llmx, |ev| matcher(ev).is_some()).await;
    matcher(&ev).unwrap()
}

pub async fn wait_for_event_with_timeout<F>(
    llmx: &LlmxConversation,
    mut predicate: F,
    wait_time: tokio::time::Duration,
) -> llmx_core::protocol::EventMsg
where
    F: FnMut(&llmx_core::protocol::EventMsg) -> bool,
{
    use tokio::time::Duration;
    use tokio::time::timeout;
    loop {
        // Allow a bit more time to accommodate async startup work (e.g. config IO, tool discovery).
        let ev = timeout(wait_time.max(Duration::from_secs(5)), llmx.next_event())
            .await
            .expect("timeout waiting for event")
            .expect("stream ended unexpectedly");
        if predicate(&ev.msg) {
            return ev.msg;
        }
    }
}
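// The waiters above share one pattern: recompute the remaining budget on
// each iteration and keep pulling events until the predicate matches or
// the overall deadline passes. A std-only sketch of that loop, with a
// channel standing in for the conversation stream (illustrative names):

```rust
// Deadline-loop sketch: drain events until one matches or time runs out.
use std::sync::mpsc;
use std::time::{Duration, Instant};

fn wait_for<T>(
    rx: &mpsc::Receiver<T>,
    timeout: Duration,
    mut pred: impl FnMut(&T) -> bool,
) -> Option<T> {
    let deadline = Instant::now() + timeout;
    loop {
        // Recompute the remaining budget so slow predicates or bursts of
        // non-matching events cannot extend the overall wait.
        let remaining = deadline.saturating_duration_since(Instant::now());
        if remaining.is_zero() {
            return None; // overall deadline passed
        }
        match rx.recv_timeout(remaining) {
            Ok(ev) if pred(&ev) => return Some(ev), // matching event found
            Ok(_) => continue,                      // non-matching: keep waiting
            Err(_) => return None,                  // timed out or sender dropped
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(1).unwrap();
    tx.send(42).unwrap();
    let hit = wait_for(&rx, Duration::from_millis(200), |&v| v == 42);
    assert_eq!(hit, Some(42));
    println!("matched: {hit:?}");
}
```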

pub fn sandbox_env_var() -> &'static str {
    llmx_core::spawn::LLMX_SANDBOX_ENV_VAR
}

pub fn sandbox_network_env_var() -> &'static str {
    llmx_core::spawn::LLMX_SANDBOX_NETWORK_DISABLED_ENV_VAR
}

pub mod fs_wait {
    use anyhow::Result;
    use anyhow::anyhow;
    use notify::RecursiveMode;
    use notify::Watcher;
    use std::path::Path;
    use std::path::PathBuf;
    use std::sync::mpsc;
    use std::sync::mpsc::RecvTimeoutError;
    use std::time::Duration;
    use std::time::Instant;
    use tokio::task;
    use walkdir::WalkDir;

    pub async fn wait_for_path_exists(
        path: impl Into<PathBuf>,
        timeout: Duration,
    ) -> Result<PathBuf> {
        let path = path.into();
        task::spawn_blocking(move || wait_for_path_exists_blocking(path, timeout)).await?
    }

    pub async fn wait_for_matching_file(
        root: impl Into<PathBuf>,
        timeout: Duration,
        predicate: impl FnMut(&Path) -> bool + Send + 'static,
    ) -> Result<PathBuf> {
        let root = root.into();
        task::spawn_blocking(move || {
            let mut predicate = predicate;
            blocking_find_matching_file(root, timeout, &mut predicate)
        })
        .await?
    }

    fn wait_for_path_exists_blocking(path: PathBuf, timeout: Duration) -> Result<PathBuf> {
        if path.exists() {
            return Ok(path);
        }

        let watch_root = nearest_existing_ancestor(&path);
        let (tx, rx) = mpsc::channel();
        let mut watcher = notify::recommended_watcher(move |res| {
            let _ = tx.send(res);
        })?;
        watcher.watch(&watch_root, RecursiveMode::Recursive)?;

        let deadline = Instant::now() + timeout;
        loop {
            if path.exists() {
                return Ok(path.clone());
            }
            let now = Instant::now();
            if now >= deadline {
                break;
            }
            let remaining = deadline.saturating_duration_since(now);
            match rx.recv_timeout(remaining) {
                Ok(Ok(_event)) => {
                    if path.exists() {
                        return Ok(path.clone());
                    }
                }
                Ok(Err(err)) => return Err(err.into()),
                Err(RecvTimeoutError::Timeout) => break,
                Err(RecvTimeoutError::Disconnected) => break,
            }
        }

        if path.exists() {
            Ok(path)
        } else {
            Err(anyhow!("timed out waiting for {path:?}"))
        }
    }

    fn blocking_find_matching_file(
        root: PathBuf,
        timeout: Duration,
        predicate: &mut impl FnMut(&Path) -> bool,
    ) -> Result<PathBuf> {
        let root = wait_for_path_exists_blocking(root, timeout)?;

        if let Some(found) = scan_for_match(&root, predicate) {
            return Ok(found);
        }

        let (tx, rx) = mpsc::channel();
        let mut watcher = notify::recommended_watcher(move |res| {
            let _ = tx.send(res);
        })?;
        watcher.watch(&root, RecursiveMode::Recursive)?;

        let deadline = Instant::now() + timeout;

        while Instant::now() < deadline {
            let remaining = deadline.saturating_duration_since(Instant::now());
            match rx.recv_timeout(remaining) {
                Ok(Ok(_event)) => {
                    if let Some(found) = scan_for_match(&root, predicate) {
                        return Ok(found);
                    }
                }
                Ok(Err(err)) => return Err(err.into()),
                Err(RecvTimeoutError::Timeout) => break,
                Err(RecvTimeoutError::Disconnected) => break,
            }
        }

        if let Some(found) = scan_for_match(&root, predicate) {
            Ok(found)
        } else {
            Err(anyhow!("timed out waiting for matching file in {root:?}"))
        }
    }

    fn scan_for_match(root: &Path, predicate: &mut impl FnMut(&Path) -> bool) -> Option<PathBuf> {
        for entry in WalkDir::new(root).into_iter().filter_map(Result::ok) {
            let path = entry.path();
            if !entry.file_type().is_file() {
                continue;
            }
            if predicate(path) {
                return Some(path.to_path_buf());
            }
        }
        None
    }

    fn nearest_existing_ancestor(path: &Path) -> PathBuf {
        let mut current = path;
        loop {
            if current.exists() {
                return current.to_path_buf();
            }
            match current.parent() {
                Some(parent) => current = parent,
                None => return PathBuf::from("."),
            }
        }
    }
}
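// The watcher in `fs_wait` must be attached to a directory that already
// exists, which is what `nearest_existing_ancestor` provides. The strategy
// in isolation, std-only (an illustrative copy, not the crate function):

```rust
// Walk up the path until an existing component is found, falling back to
// the current directory if no ancestor exists. Illustrative copy of the
// `nearest_existing_ancestor` strategy.
use std::path::{Path, PathBuf};

fn nearest_existing(path: &Path) -> PathBuf {
    let mut current = path;
    loop {
        if current.exists() {
            return current.to_path_buf();
        }
        match current.parent() {
            Some(parent) => current = parent,
            None => return PathBuf::from("."),
        }
    }
}

fn main() {
    // A path that does not exist resolves to its closest existing ancestor.
    let probe = Path::new("/definitely-missing-llmx/a/b");
    let found = nearest_existing(probe);
    assert!(found.exists());
    println!("{}", found.display());
}
```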

#[macro_export]
macro_rules! skip_if_sandbox {
    () => {{
        if ::std::env::var($crate::sandbox_env_var())
            == ::core::result::Result::Ok("seatbelt".to_string())
        {
            eprintln!(
                "{} is set to 'seatbelt', skipping test.",
                $crate::sandbox_env_var()
            );
            return;
        }
    }};
    ($return_value:expr $(,)?) => {{
        if ::std::env::var($crate::sandbox_env_var())
            == ::core::result::Result::Ok("seatbelt".to_string())
        {
            eprintln!(
                "{} is set to 'seatbelt', skipping test.",
                $crate::sandbox_env_var()
            );
            return $return_value;
        }
    }};
}

#[macro_export]
macro_rules! skip_if_no_network {
    () => {{
        if ::std::env::var($crate::sandbox_network_env_var()).is_ok() {
            println!(
                "Skipping test because it cannot execute when network is disabled in an LLMX sandbox."
            );
            return;
        }
    }};
    ($return_value:expr $(,)?) => {{
        if ::std::env::var($crate::sandbox_network_env_var()).is_ok() {
            println!(
                "Skipping test because it cannot execute when network is disabled in an LLMX sandbox."
            );
            return $return_value;
        }
    }};
}