This release represents a comprehensive transformation of the codebase from Codex to LLMX, enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup

- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation

- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after the mass rename

### Phase 3: LiteLLM Integration

- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between the Responses API and the Chat Completions API

### Phase 4: TypeScript/Node.js Components

- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated TypeScript SDK to use new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation

- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation

- Fixed all Rust tests for the new naming scheme
- Updated snapshot tests in TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for new API endpoints
- Ensured compatibility with the Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline

- Updated GitHub Actions workflows for LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for new package names
- Made Apple code signing optional in the release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to the workspace
- Updated dotslash configuration for new binary names

### Phase 8: Final Polish

- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across the codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**
- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*)
- npm package renamed to `@valknar/llmx`

**New Features:**
- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**
- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**
- Updated Cargo.lock with new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for the LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM integration, preserving the existing feature set (aside from the renames listed above) while opening support for a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
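The breaking changes above can be applied in one pass. A minimal migration sketch, assuming a POSIX shell; the fallback values are placeholders for illustration, not defaults documented in this release:

```shell
# Hypothetical migration sketch for the renames listed above.

# Environment variables: OPENAI_BASE_URL -> LLMX_BASE_URL, plus LLMX_API_KEY.
export LLMX_BASE_URL="${OPENAI_BASE_URL:-http://localhost:4000}"
export LLMX_API_KEY="${LLMX_API_KEY:-sk-placeholder}"

# Config directory: ~/.codex/ -> ~/.llmx/ (only if not already migrated).
if [ -d "$HOME/.codex" ] && [ ! -d "$HOME/.llmx" ]; then
    mv "$HOME/.codex" "$HOME/.llmx"
fi

echo "LLMX config dir: $HOME/.llmx"
```

Invocations of the old `codex` binary would likewise need to be changed to `llmx` in scripts and CI configuration.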
```rust
#![deny(clippy::print_stdout, clippy::print_stderr)]

use llmx_common::CliConfigOverrides;
use llmx_core::config::Config;
use llmx_core::config::ConfigOverrides;
use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use std::io::ErrorKind;
use std::io::Result as IoResult;
use std::path::PathBuf;

use crate::message_processor::MessageProcessor;
use crate::outgoing_message::OutgoingMessage;
use crate::outgoing_message::OutgoingMessageSender;
use llmx_app_server_protocol::JSONRPCMessage;
use llmx_feedback::LlmxFeedback;
use tokio::io::AsyncBufReadExt;
use tokio::io::AsyncWriteExt;
use tokio::io::BufReader;
use tokio::io::{self};
use tokio::sync::mpsc;
use tracing::Level;
use tracing::debug;
use tracing::error;
use tracing::info;
use tracing_subscriber::EnvFilter;
use tracing_subscriber::Layer;
use tracing_subscriber::filter::Targets;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;

mod error_code;
mod fuzzy_file_search;
mod llmx_message_processor;
mod message_processor;
mod models;
mod outgoing_message;

/// Size of the bounded channels used to communicate between tasks. The value
/// is a balance between throughput and memory usage – 128 messages should be
/// plenty for an interactive CLI.
const CHANNEL_CAPACITY: usize = 128;

pub async fn run_main(
    llmx_linux_sandbox_exe: Option<PathBuf>,
    cli_config_overrides: CliConfigOverrides,
) -> IoResult<()> {
    // Set up channels.
    let (incoming_tx, mut incoming_rx) = mpsc::channel::<JSONRPCMessage>(CHANNEL_CAPACITY);
    let (outgoing_tx, mut outgoing_rx) = mpsc::unbounded_channel::<OutgoingMessage>();

    // Task: read from stdin, push to `incoming_tx`.
    let stdin_reader_handle = tokio::spawn({
        async move {
            let stdin = io::stdin();
            let reader = BufReader::new(stdin);
            let mut lines = reader.lines();

            while let Some(line) = lines.next_line().await.unwrap_or_default() {
                match serde_json::from_str::<JSONRPCMessage>(&line) {
                    Ok(msg) => {
                        if incoming_tx.send(msg).await.is_err() {
                            // Receiver gone – nothing left to do.
                            break;
                        }
                    }
                    Err(e) => error!("Failed to deserialize JSONRPCMessage: {e}"),
                }
            }

            debug!("stdin reader finished (EOF)");
        }
    });

    // Parse CLI overrides once and derive the base Config eagerly so later
    // components do not need to work with raw TOML values.
    let cli_kv_overrides = cli_config_overrides.parse_overrides().map_err(|e| {
        std::io::Error::new(
            ErrorKind::InvalidInput,
            format!("error parsing -c overrides: {e}"),
        )
    })?;
    let config = Config::load_with_cli_overrides(cli_kv_overrides, ConfigOverrides::default())
        .await
        .map_err(|e| {
            std::io::Error::new(ErrorKind::InvalidData, format!("error loading config: {e}"))
        })?;

    let feedback = LlmxFeedback::new();

    let otel =
        llmx_core::otel_init::build_provider(&config, env!("CARGO_PKG_VERSION")).map_err(|e| {
            std::io::Error::new(
                ErrorKind::InvalidData,
                format!("error loading otel config: {e}"),
            )
        })?;

    // Install a simple subscriber so `tracing` output is visible. Users can
    // control the log level with `RUST_LOG`.
    let stderr_fmt = tracing_subscriber::fmt::layer()
        .with_writer(std::io::stderr)
        .with_filter(EnvFilter::from_default_env());

    let feedback_layer = tracing_subscriber::fmt::layer()
        .with_writer(feedback.make_writer())
        .with_ansi(false)
        .with_target(false)
        .with_filter(Targets::new().with_default(Level::TRACE));

    let _ = tracing_subscriber::registry()
        .with(stderr_fmt)
        .with(feedback_layer)
        .with(otel.as_ref().map(|provider| {
            OpenTelemetryTracingBridge::new(&provider.logger).with_filter(
                tracing_subscriber::filter::filter_fn(llmx_core::otel_init::llmx_export_filter),
            )
        }))
        .try_init();

    // Task: process incoming messages.
    let processor_handle = tokio::spawn({
        let outgoing_message_sender = OutgoingMessageSender::new(outgoing_tx);
        let mut processor = MessageProcessor::new(
            outgoing_message_sender,
            llmx_linux_sandbox_exe,
            std::sync::Arc::new(config),
            feedback.clone(),
        );
        async move {
            while let Some(msg) = incoming_rx.recv().await {
                match msg {
                    JSONRPCMessage::Request(r) => processor.process_request(r).await,
                    JSONRPCMessage::Response(r) => processor.process_response(r).await,
                    JSONRPCMessage::Notification(n) => processor.process_notification(n).await,
                    JSONRPCMessage::Error(e) => processor.process_error(e),
                }
            }

            info!("processor task exited (channel closed)");
        }
    });

    // Task: write outgoing messages to stdout.
    let stdout_writer_handle = tokio::spawn(async move {
        let mut stdout = io::stdout();
        while let Some(outgoing_message) = outgoing_rx.recv().await {
            let Ok(value) = serde_json::to_value(outgoing_message) else {
                error!("Failed to convert OutgoingMessage to JSON value");
                continue;
            };
            match serde_json::to_string(&value) {
                Ok(mut json) => {
                    json.push('\n');
                    if let Err(e) = stdout.write_all(json.as_bytes()).await {
                        error!("Failed to write to stdout: {e}");
                        break;
                    }
                }
                Err(e) => error!("Failed to serialize JSONRPCMessage: {e}"),
            }
        }

        info!("stdout writer exited (channel closed)");
    });

    // Wait for all tasks to finish. The typical exit path is the stdin reader
    // hitting EOF which, once it drops `incoming_tx`, propagates shutdown to
    // the processor and then to the stdout task.
    let _ = tokio::join!(stdin_reader_handle, processor_handle, stdout_writer_handle);

    Ok(())
}
```
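The shutdown path described in the final comment of `run_main` — the stdin reader dropping its sender cascades through the processor to the stdout writer — can be sketched with std-library channels. This is a simplified, synchronous stand-in for the tokio tasks above; all names here are illustrative, not from the crate:

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative stand-in for the message-processing stage.
fn process(msg: &str) -> String {
    format!("echo: {msg}")
}

fn main() {
    // Bounded channel, mirroring CHANNEL_CAPACITY in run_main.
    let (incoming_tx, incoming_rx) = mpsc::sync_channel::<String>(128);
    let (outgoing_tx, outgoing_rx) = mpsc::channel::<String>();

    // "stdin reader": fed from a fixed list instead of real stdin.
    let reader = thread::spawn(move || {
        for line in ["{\"id\":1}", "{\"id\":2}"] {
            if incoming_tx.send(line.to_string()).is_err() {
                break; // receiver gone - nothing left to do
            }
        }
        // Dropping incoming_tx here closes the channel, which ends the
        // processor loop, which in turn closes the outgoing channel.
    });

    // "processor": consumes until the incoming channel is closed.
    let processor = thread::spawn(move || {
        for msg in incoming_rx {
            if outgoing_tx.send(process(&msg)).is_err() {
                break;
            }
        }
    });

    // "stdout writer": drains until the processor drops its sender.
    for out in outgoing_rx {
        println!("{out}");
    }

    reader.join().unwrap();
    processor.join().unwrap();
}
```

No explicit shutdown signal is needed at any stage: each loop terminates when the channel feeding it is closed, which is the same design the three tokio tasks rely on.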