This release represents a comprehensive transformation of the codebase from Codex to LLMX, enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup

- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation

- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after the mass rename

### Phase 3: LiteLLM Integration

- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback between the Responses API and the Chat Completions API

### Phase 4: TypeScript/Node.js Components

- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated the TypeScript SDK to use the new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to cover both API backends
- Enhanced the mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation

- Updated all configuration files to use LLMX naming
- Rewrote the README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added a comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation

- Fixed all Rust tests for the new naming scheme
- Updated snapshot tests in the TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for the new API endpoints
- Ensured compatibility with the Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline

- Updated GitHub Actions workflows for the LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for the new package names
- Made Apple code signing optional in the release workflow
- Made npm packaging resilient to partial platform builds
- Added Windows sandbox support to the workspace
- Updated the dotslash configuration for the new binary names

### Phase 8: Final Polish

- Renamed all assets (.github images, labels, templates)
- Updated VS Code and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across the codebase
- Updated issue and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**

- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*); see the sketch below
- npm package renamed to `@valknar/llmx`
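For illustration, a minimal Rust sketch of how a client might consume the renamed variables. The helper name and the fallback URL (a commonly used LiteLLM proxy address) are assumptions of this example, not taken from the codebase:

```rust
use std::env;

// Hypothetical helper, not part of the LLMX codebase: resolve the
// endpoint and credential under the new LLMX_* names described above.
fn resolve_llmx_endpoint() -> (String, Option<String>) {
    // LLMX_BASE_URL replaces OPENAI_BASE_URL; defaulting to a local
    // LiteLLM proxy is an assumption of this sketch.
    let base_url = env::var("LLMX_BASE_URL")
        .unwrap_or_else(|_| "http://localhost:4000".to_string());
    // LLMX_API_KEY is the unified credential across providers.
    let api_key = env::var("LLMX_API_KEY").ok();
    (base_url, api_key)
}
```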
**New Features:**

- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**

- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of the UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**

- Updated Cargo.lock with the new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for the LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM integration. Existing functionality is preserved, but the renames above are breaking: existing installs must switch to the `llmx` binary, the `~/.llmx/` config directory, and the `LLMX_*` environment variables, while gaining support for a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
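For a sense of the Phase 2 rename in practice, here is the app server's `MessageProcessor` after the change; every former `codex-*` crate and `Codex*` type now carries the `llmx`/`Llmx` prefix.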
```rust
use std::path::PathBuf;

use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use crate::llmx_message_processor::LlmxMessageProcessor;
use crate::outgoing_message::OutgoingMessageSender;
use llmx_app_server_protocol::ClientInfo;
use llmx_app_server_protocol::ClientRequest;
use llmx_app_server_protocol::InitializeResponse;

use llmx_app_server_protocol::JSONRPCError;
use llmx_app_server_protocol::JSONRPCErrorError;
use llmx_app_server_protocol::JSONRPCNotification;
use llmx_app_server_protocol::JSONRPCRequest;
use llmx_app_server_protocol::JSONRPCResponse;
use llmx_core::AuthManager;
use llmx_core::ConversationManager;
use llmx_core::config::Config;
use llmx_core::default_client::USER_AGENT_SUFFIX;
use llmx_core::default_client::get_llmx_user_agent;
use llmx_feedback::LlmxFeedback;
use llmx_protocol::protocol::SessionSource;
use std::sync::Arc;

pub(crate) struct MessageProcessor {
    outgoing: Arc<OutgoingMessageSender>,
    llmx_message_processor: LlmxMessageProcessor,
    initialized: bool,
}

impl MessageProcessor {
    /// Create a new `MessageProcessor`, retaining a handle to the outgoing
    /// `Sender` so handlers can enqueue messages to be written to stdout.
    pub(crate) fn new(
        outgoing: OutgoingMessageSender,
        llmx_linux_sandbox_exe: Option<PathBuf>,
        config: Arc<Config>,
        feedback: LlmxFeedback,
    ) -> Self {
        let outgoing = Arc::new(outgoing);
        let auth_manager = AuthManager::shared(
            config.llmx_home.clone(),
            false,
            config.cli_auth_credentials_store_mode,
        );
        let conversation_manager = Arc::new(ConversationManager::new(
            auth_manager.clone(),
            SessionSource::VSCode,
        ));
        let llmx_message_processor = LlmxMessageProcessor::new(
            auth_manager,
            conversation_manager,
            outgoing.clone(),
            llmx_linux_sandbox_exe,
            config,
            feedback,
        );

        Self {
            outgoing,
            llmx_message_processor,
            initialized: false,
        }
    }

    pub(crate) async fn process_request(&mut self, request: JSONRPCRequest) {
        let request_id = request.id.clone();
        let request_json = match serde_json::to_value(&request) {
            Ok(request_json) => request_json,
            Err(err) => {
                let error = JSONRPCErrorError {
                    code: INVALID_REQUEST_ERROR_CODE,
                    message: format!("Invalid request: {err}"),
                    data: None,
                };
                self.outgoing.send_error(request_id, error).await;
                return;
            }
        };

        let llmx_request = match serde_json::from_value::<ClientRequest>(request_json) {
            Ok(llmx_request) => llmx_request,
            Err(err) => {
                let error = JSONRPCErrorError {
                    code: INVALID_REQUEST_ERROR_CODE,
                    message: format!("Invalid request: {err}"),
                    data: None,
                };
                self.outgoing.send_error(request_id, error).await;
                return;
            }
        };

        match llmx_request {
            // Handle Initialize internally so LlmxMessageProcessor does not have to concern
            // itself with the `initialized` bool.
            ClientRequest::Initialize { request_id, params } => {
                if self.initialized {
                    let error = JSONRPCErrorError {
                        code: INVALID_REQUEST_ERROR_CODE,
                        message: "Already initialized".to_string(),
                        data: None,
                    };
                    self.outgoing.send_error(request_id, error).await;
                    return;
                } else {
                    let ClientInfo {
                        name,
                        title: _title,
                        version,
                    } = params.client_info;
                    let user_agent_suffix = format!("{name}; {version}");
                    if let Ok(mut suffix) = USER_AGENT_SUFFIX.lock() {
                        *suffix = Some(user_agent_suffix);
                    }

                    let user_agent = get_llmx_user_agent();
                    let response = InitializeResponse { user_agent };
                    self.outgoing.send_response(request_id, response).await;

                    self.initialized = true;
                    return;
                }
            }
            _ => {
                if !self.initialized {
                    let error = JSONRPCErrorError {
                        code: INVALID_REQUEST_ERROR_CODE,
                        message: "Not initialized".to_string(),
                        data: None,
                    };
                    self.outgoing.send_error(request_id, error).await;
                    return;
                }
            }
        }

        self.llmx_message_processor
            .process_request(llmx_request)
            .await;
    }

    pub(crate) async fn process_notification(&self, notification: JSONRPCNotification) {
        // Currently, we do not expect to receive any notifications from the
        // client, so we just log them.
        tracing::info!("<- notification: {:?}", notification);
    }

    /// Handle a standalone JSON-RPC response originating from the peer.
    pub(crate) async fn process_response(&mut self, response: JSONRPCResponse) {
        tracing::info!("<- response: {:?}", response);
        let JSONRPCResponse { id, result, .. } = response;
        self.outgoing.notify_client_response(id, result).await
    }

    /// Handle an error object received from the peer.
    pub(crate) fn process_error(&mut self, err: JSONRPCError) {
        tracing::error!("<- error: {:?}", err);
    }
}
```
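For context, a minimal sketch of the handshake this processor enforces: the client must send a single `initialize` request carrying `clientInfo` before any other request, and a repeat `initialize` is rejected. The wire shapes below are assumptions inferred from the `ClientRequest`/`ClientInfo` types above, not taken from the protocol schema:

```rust
use serde_json::json;

// Hypothetical wire-level view of the initialize handshake. Method and
// field names are inferred from the types above and are assumptions of
// this sketch.
fn example_handshake() {
    let request = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "clientInfo": { "name": "llmx-vscode", "title": "LLMX", "version": "0.1.0" }
        }
    });
    // On success the server responds with its user agent (now including
    // the client's "name; version" suffix) and marks the session
    // initialized; a second `initialize` gets an "Already initialized" error.
    let response = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "result": { "userAgent": "<server user agent>" }
    });
    println!("{request}\n{response}");
}
```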