This release represents a comprehensive transformation of the codebase from Codex to LLMX, enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup

- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation

- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after the mass rename

### Phase 3: LiteLLM Integration

- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between the Responses API and the Chat Completions API (see the sketch after this list)
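The fallback mechanism can be pictured as follows. This is a minimal sketch, not the actual implementation: the `send_responses_api` / `send_chat_completions` helpers and the simplified `ApiError` type are assumed names for illustration only.

```rust
/// Simplified error type, for illustration only.
#[derive(Debug)]
enum ApiError {
    /// The endpoint does not implement the Responses API.
    Unsupported,
    /// Any other failure (network, auth, ...).
    Other(String),
}

/// Hypothetical helper standing in for the real Responses API client call.
fn send_responses_api(prompt: &str) -> Result<String, ApiError> {
    // A provider behind LiteLLM may not expose the Responses API at all.
    let _ = prompt;
    Err(ApiError::Unsupported)
}

/// Hypothetical helper standing in for the real Chat Completions client call.
fn send_chat_completions(prompt: &str) -> Result<String, ApiError> {
    // OpenAI-compatible Chat Completions is the common denominator.
    Ok(format!("chat-completions answer to: {prompt}"))
}

/// Try the Responses API first, then fall back to Chat Completions
/// when the provider behind LiteLLM does not support it.
fn complete(prompt: &str) -> Result<String, ApiError> {
    match send_responses_api(prompt) {
        Err(ApiError::Unsupported) => send_chat_completions(prompt),
        other => other,
    }
}

fn main() {
    println!("{}", complete("hello").unwrap());
}
```

The rationale for falling back in this direction is that an OpenAI-compatible Chat Completions endpoint is the common denominator across LiteLLM's providers, so it serves as the safety net whenever the richer Responses API is unavailable.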
### Phase 4: TypeScript/Node.js Components

- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated TypeScript SDK to use new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation

- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation

- Fixed all Rust tests for the new naming scheme
- Updated snapshot tests in the TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for the new API endpoints
- Ensured compatibility with the Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline

- Updated GitHub Actions workflows for LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for new package names
- Made Apple code signing optional in release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to workspace
- Updated dotslash configuration for new binary names

### Phase 8: Final Polish

- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**

- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*; see the migration note below)
- npm package renamed to `@valknar/llmx`

**New Features:**

- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**

- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**

- Updated Cargo.lock with new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for the LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM integration, maintaining full backward compatibility with existing functionality while opening support for a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
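**Migration note:** the environment-variable rename is the most visible breaking change for downstream scripts. The sketch below is illustrative only — the legacy-name fallback is an assumption, not something this release documents — but it shows how a client could resolve the new `LLMX_*` variables while still honoring the old names during migration.

```rust
use std::env;

/// Resolve the API base URL from the renamed variable, falling back to
/// the legacy name. NOTE: the fallback is an illustrative assumption;
/// the release only documents the rename OPENAI_BASE_URL → LLMX_BASE_URL.
fn resolve_base_url() -> Option<String> {
    env::var("LLMX_BASE_URL")
        .or_else(|_| env::var("OPENAI_BASE_URL"))
        .ok()
}

/// The unified API key introduced by this release has no documented
/// legacy equivalent, so it is read directly.
fn resolve_api_key() -> Option<String> {
    env::var("LLMX_API_KEY").ok()
}

fn main() {
    println!("base URL: {:?}", resolve_base_url());
    println!("API key set: {}", resolve_api_key().is_some());
}
```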
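For reference, one of the updated app-server integration tests from Phase 6 is reproduced below; it exercises the renamed `llmx_*` protocol crates and parses a `config.toml` from an LLMX home directory: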
```rust
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::to_response;
use llmx_app_server_protocol::GetUserSavedConfigResponse;
use llmx_app_server_protocol::JSONRPCResponse;
use llmx_app_server_protocol::Profile;
use llmx_app_server_protocol::RequestId;
use llmx_app_server_protocol::SandboxSettings;
use llmx_app_server_protocol::Tools;
use llmx_app_server_protocol::UserSavedConfig;
use llmx_core::protocol::AskForApproval;
use llmx_protocol::config_types::ForcedLoginMethod;
use llmx_protocol::config_types::ReasoningEffort;
use llmx_protocol::config_types::ReasoningSummary;
use llmx_protocol::config_types::SandboxMode;
use llmx_protocol::config_types::Verbosity;
use pretty_assertions::assert_eq;
use std::collections::HashMap;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;

const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);

fn create_config_toml(llmx_home: &Path) -> std::io::Result<()> {
    let config_toml = llmx_home.join("config.toml");
    std::fs::write(
        config_toml,
        r#"
model = "gpt-5-llmx"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
model_reasoning_summary = "detailed"
model_reasoning_effort = "high"
model_verbosity = "medium"
profile = "test"
forced_chatgpt_workspace_id = "12345678-0000-0000-0000-000000000000"
forced_login_method = "chatgpt"

[sandbox_workspace_write]
writable_roots = ["/tmp"]
network_access = true
exclude_tmpdir_env_var = true
exclude_slash_tmp = true

[tools]
web_search = false
view_image = true

[profiles.test]
model = "gpt-4o"
approval_policy = "on-request"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
model_verbosity = "medium"
model_provider = "openai"
chatgpt_base_url = "https://api.chatgpt.com"
"#,
    )
}

#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn get_config_toml_parses_all_fields() -> Result<()> {
    let llmx_home = TempDir::new()?;
    create_config_toml(llmx_home.path())?;

    let mut mcp = McpProcess::new(llmx_home.path()).await?;
    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;

    let request_id = mcp.send_get_user_saved_config_request().await?;
    let resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
    )
    .await??;

    let config: GetUserSavedConfigResponse = to_response(resp)?;
    let expected = GetUserSavedConfigResponse {
        config: UserSavedConfig {
            approval_policy: Some(AskForApproval::OnRequest),
            sandbox_mode: Some(SandboxMode::WorkspaceWrite),
            sandbox_settings: Some(SandboxSettings {
                writable_roots: vec!["/tmp".into()],
                network_access: Some(true),
                exclude_tmpdir_env_var: Some(true),
                exclude_slash_tmp: Some(true),
            }),
            forced_chatgpt_workspace_id: Some("12345678-0000-0000-0000-000000000000".into()),
            forced_login_method: Some(ForcedLoginMethod::Chatgpt),
            model: Some("gpt-5-llmx".into()),
            model_reasoning_effort: Some(ReasoningEffort::High),
            model_reasoning_summary: Some(ReasoningSummary::Detailed),
            model_verbosity: Some(Verbosity::Medium),
            tools: Some(Tools {
                web_search: Some(false),
                view_image: Some(true),
            }),
            profile: Some("test".to_string()),
            profiles: HashMap::from([(
                "test".into(),
                Profile {
                    model: Some("gpt-4o".into()),
                    approval_policy: Some(AskForApproval::OnRequest),
                    model_reasoning_effort: Some(ReasoningEffort::High),
                    model_reasoning_summary: Some(ReasoningSummary::Detailed),
                    model_verbosity: Some(Verbosity::Medium),
                    model_provider: Some("openai".into()),
                    chatgpt_base_url: Some("https://api.chatgpt.com".into()),
                },
            )]),
        },
    };

    assert_eq!(config, expected);
    Ok(())
}

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_config_toml_empty() -> Result<()> {
    let llmx_home = TempDir::new()?;

    let mut mcp = McpProcess::new(llmx_home.path()).await?;
    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;

    let request_id = mcp.send_get_user_saved_config_request().await?;
    let resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
    )
    .await??;

    let config: GetUserSavedConfigResponse = to_response(resp)?;
    let expected = GetUserSavedConfigResponse {
        config: UserSavedConfig {
            approval_policy: None,
            sandbox_mode: None,
            sandbox_settings: None,
            forced_chatgpt_workspace_id: None,
            forced_login_method: None,
            model: None,
            model_reasoning_effort: None,
            model_reasoning_summary: None,
            model_verbosity: None,
            tools: None,
            profile: None,
            profiles: HashMap::new(),
        },
    };

    assert_eq!(config, expected);
    Ok(())
}
```