feat: Complete LLMX v0.1.0 - Rebrand from Codex with LiteLLM Integration

This release represents a comprehensive transformation of the codebase from Codex to LLMX,
enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup
- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation
- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after mass rename

### Phase 3: LiteLLM Integration
- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between the Responses API and the Chat Completions API (sketched below)
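
To make the new surface concrete, here is a minimal, hypothetical sketch of a client that honors `LLMX_BASE_URL` and `LLMX_API_KEY` and falls back from the Responses API to the Chat Completions API. The endpoint paths, default port, model id, and use of `reqwest` are illustrative assumptions, not the actual llmx-core implementation:

```rust
use serde_json::{Value, json};

// Sketch only: llmx-core's real client handles streaming, retries, and
// provider-specific payload shapes that are omitted here.
async fn complete(prompt: &str) -> Result<Value, reqwest::Error> {
    // LLMX_BASE_URL points at a LiteLLM proxy (or any OpenAI-compatible
    // endpoint); LLMX_API_KEY is the unified credential from this release.
    let base = std::env::var("LLMX_BASE_URL")
        .unwrap_or_else(|_| "http://localhost:4000".to_string());
    let key = std::env::var("LLMX_API_KEY").unwrap_or_default();
    let client = reqwest::Client::new();

    // First attempt: the Responses API.
    let resp = client
        .post(format!("{base}/v1/responses"))
        .bearer_auth(&key)
        .json(&json!({ "model": "claude-sonnet-4-5", "input": prompt }))
        .send()
        .await;

    match resp {
        Ok(r) if r.status().is_success() => r.json().await,
        // Fallback: the OpenAI-compatible Chat Completions API.
        _ => {
            client
                .post(format!("{base}/v1/chat/completions"))
                .bearer_auth(&key)
                .json(&json!({
                    "model": "claude-sonnet-4-5",
                    "messages": [{ "role": "user", "content": prompt }]
                }))
                .send()
                .await?
                .json()
                .await
        }
    }
}
```

Keeping the fallback in one place means callers never need to know which of the two APIs a given provider actually speaks.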

### Phase 4: TypeScript/Node.js Components
- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated TypeScript SDK to use new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation
- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation
- Fixed all Rust tests for new naming scheme
- Updated snapshot tests in TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for new API endpoints
- Ensured compatibility with Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline
- Updated GitHub Actions workflows for LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for new package names
- Made Apple code signing optional in release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to workspace
- Updated dotslash configuration for new binary names

### Phase 8: Final Polish
- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**
- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/` (resolution sketched after this list)
- Environment variables renamed (CODEX_* → LLMX_*)
- npm package renamed to `@valknar/llmx`
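
As a rough illustration of the new layout, a hypothetical helper for resolving the config directory might look like the following; the actual lookup in llmx-core may differ, but `LLMX_HOME` is the override the MCP tool parameters below refer to:

```rust
use std::path::PathBuf;

/// Hypothetical sketch: resolve the LLMX config directory. Honors an
/// `LLMX_HOME` override, otherwise falls back to `~/.llmx/` (previously
/// `~/.codex/`).
fn llmx_home() -> PathBuf {
    if let Ok(home) = std::env::var("LLMX_HOME") {
        return PathBuf::from(home);
    }
    let user_home = std::env::var("HOME").unwrap_or_else(|_| ".".to_string());
    PathBuf::from(user_home).join(".llmx")
}
```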

**New Features:**
- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling (see the sketch after this list)
- Improved error handling and fallback mechanisms
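
For illustration, provider detection can be as simple as mapping model-name prefixes onto families. This sketch is a guess at the shape of the logic; the real table in llmx-core covers many more providers and model ids:

```rust
/// Illustrative model families; the actual enum in llmx-core is richer.
#[derive(Debug, PartialEq)]
enum ModelFamily {
    Anthropic,
    OpenAi,
    Google,
    Other,
}

/// Hypothetical prefix-based detection, shown only to convey the idea.
fn detect_family(model: &str) -> ModelFamily {
    if model.starts_with("claude-") {
        ModelFamily::Anthropic
    } else if model.starts_with("gpt-") || model.starts_with("o3") || model.starts_with("o4") {
        ModelFamily::OpenAi
    } else if model.starts_with("gemini-") {
        ModelFamily::Google
    } else {
        ModelFamily::Other
    }
}
```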

**Files Changed:**
- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**
- Updated Cargo.lock with new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM
integration. Aside from the renames listed under Breaking Changes, it preserves
existing functionality while opening support for a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>

@@ -0,0 +1,347 @@
//! Configuration object accepted by the `llmx` MCP tool-call.

use llmx_core::protocol::AskForApproval;
use llmx_protocol::config_types::SandboxMode;
use llmx_utils_json_to_toml::json_to_toml;
use mcp_types::Tool;
use mcp_types::ToolInputSchema;
use schemars::JsonSchema;
use schemars::r#gen::SchemaSettings;
use serde::Deserialize;
use serde::Serialize;
use std::collections::HashMap;
use std::path::PathBuf;

/// Client-supplied configuration for a `llmx` tool-call.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, Default)]
#[serde(rename_all = "kebab-case")]
pub struct LlmxToolCallParam {
/// The *initial user prompt* to start the LLMX conversation.
pub prompt: String,
/// Optional override for the model name (e.g. "o3", "o4-mini").
#[serde(default, skip_serializing_if = "Option::is_none")]
pub model: Option<String>,
/// Configuration profile from config.toml to specify default options.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub profile: Option<String>,
/// Working directory for the session. If relative, it is resolved against
/// the server process's current working directory.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub cwd: Option<String>,
/// Approval policy for shell commands generated by the model:
/// `untrusted`, `on-failure`, `on-request`, `never`.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub approval_policy: Option<LlmxToolCallApprovalPolicy>,
/// Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub sandbox: Option<LlmxToolCallSandboxMode>,
/// Individual config settings that will override what is in
/// LLMX_HOME/config.toml.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub config: Option<HashMap<String, serde_json::Value>>,
/// The set of instructions to use instead of the default ones.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub base_instructions: Option<String>,
/// Developer instructions that should be injected as a developer role message.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub developer_instructions: Option<String>,
/// Prompt used when compacting the conversation.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub compact_prompt: Option<String>,
}

/// Custom enum mirroring [`AskForApproval`], but with an extra dependency on
/// [`JsonSchema`].
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, PartialEq)]
#[serde(rename_all = "kebab-case")]
pub enum LlmxToolCallApprovalPolicy {
Untrusted,
OnFailure,
OnRequest,
Never,
}

impl From<LlmxToolCallApprovalPolicy> for AskForApproval {
fn from(value: LlmxToolCallApprovalPolicy) -> Self {
match value {
LlmxToolCallApprovalPolicy::Untrusted => AskForApproval::UnlessTrusted,
LlmxToolCallApprovalPolicy::OnFailure => AskForApproval::OnFailure,
LlmxToolCallApprovalPolicy::OnRequest => AskForApproval::OnRequest,
LlmxToolCallApprovalPolicy::Never => AskForApproval::Never,
}
}
}

/// Custom enum mirroring [`SandboxMode`] from config_types.rs, but with
/// `JsonSchema` support.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, PartialEq)]
#[serde(rename_all = "kebab-case")]
pub enum LlmxToolCallSandboxMode {
ReadOnly,
WorkspaceWrite,
DangerFullAccess,
}

impl From<LlmxToolCallSandboxMode> for SandboxMode {
fn from(value: LlmxToolCallSandboxMode) -> Self {
match value {
LlmxToolCallSandboxMode::ReadOnly => SandboxMode::ReadOnly,
LlmxToolCallSandboxMode::WorkspaceWrite => SandboxMode::WorkspaceWrite,
LlmxToolCallSandboxMode::DangerFullAccess => SandboxMode::DangerFullAccess,
}
}
}

/// Builds a `Tool` definition (JSON schema etc.) for the `llmx` tool-call.
pub(crate) fn create_tool_for_llmx_tool_call_param() -> Tool {
let schema = SchemaSettings::draft2019_09()
.with(|s| {
s.inline_subschemas = true;
s.option_add_null_type = false;
})
.into_generator()
.into_root_schema_for::<LlmxToolCallParam>();
#[expect(clippy::expect_used)]
let schema_value =
serde_json::to_value(&schema).expect("LLMX tool schema should serialise to JSON");
let tool_input_schema =
serde_json::from_value::<ToolInputSchema>(schema_value).unwrap_or_else(|e| {
panic!("failed to create Tool from schema: {e}");
});
Tool {
name: "llmx".to_string(),
title: Some("LLMX".to_string()),
input_schema: tool_input_schema,
// TODO(mbolin): This should be defined.
output_schema: None,
description: Some(
"Run an LLMX session. Accepts configuration parameters matching the LLMX Config struct.".to_string(),
),
annotations: None,
}
}

impl LlmxToolCallParam {
/// Returns the initial user prompt to start the LLMX conversation and the
/// effective Config object generated from the supplied parameters.
pub async fn into_config(
self,
llmx_linux_sandbox_exe: Option<PathBuf>,
) -> std::io::Result<(String, llmx_core::config::Config)> {
let Self {
prompt,
model,
profile,
cwd,
approval_policy,
sandbox,
config: cli_overrides,
base_instructions,
developer_instructions,
compact_prompt,
} = self;
// Build the `ConfigOverrides` recognized by llmx-core.
let overrides = llmx_core::config::ConfigOverrides {
model,
review_model: None,
config_profile: profile,
cwd: cwd.map(PathBuf::from),
approval_policy: approval_policy.map(Into::into),
sandbox_mode: sandbox.map(Into::into),
model_provider: None,
llmx_linux_sandbox_exe,
base_instructions,
developer_instructions,
compact_prompt,
include_apply_patch_tool: None,
show_raw_agent_reasoning: None,
tools_web_search_request: None,
experimental_sandbox_command_assessment: None,
additional_writable_roots: Vec::new(),
};
let cli_overrides = cli_overrides
.unwrap_or_default()
.into_iter()
.map(|(k, v)| (k, json_to_toml(v)))
.collect();
let cfg =
llmx_core::config::Config::load_with_cli_overrides(cli_overrides, overrides).await?;
Ok((prompt, cfg))
}
}

#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
#[serde(rename_all = "camelCase")]
pub struct LlmxToolCallReplyParam {
/// The conversation id for this LLMX session.
pub conversation_id: String,
/// The *next user prompt* to continue the LLMX conversation.
pub prompt: String,
}

/// Builds a `Tool` definition for the `llmx-reply` tool-call.
pub(crate) fn create_tool_for_llmx_tool_call_reply_param() -> Tool {
let schema = SchemaSettings::draft2019_09()
.with(|s| {
s.inline_subschemas = true;
s.option_add_null_type = false;
})
.into_generator()
.into_root_schema_for::<LlmxToolCallReplyParam>();
#[expect(clippy::expect_used)]
let schema_value =
serde_json::to_value(&schema).expect("LLMX reply tool schema should serialise to JSON");
let tool_input_schema =
serde_json::from_value::<ToolInputSchema>(schema_value).unwrap_or_else(|e| {
panic!("failed to create Tool from schema: {e}");
});
Tool {
name: "llmx-reply".to_string(),
title: Some("LLMX Reply".to_string()),
input_schema: tool_input_schema,
output_schema: None,
description: Some(
"Continue an LLMX conversation by providing the conversation id and prompt."
.to_string(),
),
annotations: None,
}
}

#[cfg(test)]
mod tests {
use super::*;
    use pretty_assertions::assert_eq;

    /// We include a test to verify the exact JSON schema as "executable
    /// documentation" for the schema. We can track changes to this test as a
    /// way to audit changes to the generated schema.
///
/// Seeing the fully expanded schema makes it easier to casually verify that
/// the generated JSON for enum types such as "approval-policy" is compact.
/// Ideally, modelcontextprotocol/inspector would provide a simpler UI for
/// enum fields versus open string fields to take advantage of this.
///
/// As of 2025-05-04, there is an open PR for this:
/// https://github.com/modelcontextprotocol/inspector/pull/196
#[test]
fn verify_llmx_tool_json_schema() {
let tool = create_tool_for_llmx_tool_call_param();
let tool_json = serde_json::to_value(&tool).expect("tool serializes");
let expected_tool_json = serde_json::json!({
"name": "llmx",
"title": "LLMX",
"description": "Run an LLMX session. Accepts configuration parameters matching the LLMX Config struct.",
"inputSchema": {
"type": "object",
"properties": {
"approval-policy": {
"description": "Approval policy for shell commands generated by the model: `untrusted`, `on-failure`, `on-request`, `never`.",
"enum": [
"untrusted",
"on-failure",
"on-request",
"never"
],
"type": "string"
},
"sandbox": {
"description": "Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`.",
"enum": [
"read-only",
"workspace-write",
"danger-full-access"
],
"type": "string"
},
"config": {
"description": "Individual config settings that will override what is in LLMX_HOME/config.toml.",
"additionalProperties": true,
"type": "object"
},
"cwd": {
"description": "Working directory for the session. If relative, it is resolved against the server process's current working directory.",
"type": "string"
},
"model": {
"description": "Optional override for the model name (e.g. \"o3\", \"o4-mini\").",
"type": "string"
},
"profile": {
"description": "Configuration profile from config.toml to specify default options.",
"type": "string"
},
"prompt": {
"description": "The *initial user prompt* to start the LLMX conversation.",
"type": "string"
},
"base-instructions": {
"description": "The set of instructions to use instead of the default ones.",
"type": "string"
},
"developer-instructions": {
"description": "Developer instructions that should be injected as a developer role message.",
"type": "string"
},
"compact-prompt": {
"description": "Prompt used when compacting the conversation.",
"type": "string"
},
},
"required": [
"prompt"
]
}
});
assert_eq!(expected_tool_json, tool_json);
}
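
    // Added illustrative test (not part of the original commit): verifies the
    // assumption that kebab-case JSON matching the schema above deserializes
    // into `LlmxToolCallParam` with the expected enum values.
    #[test]
    fn deserialize_kebab_case_param() {
        let json = serde_json::json!({
            "prompt": "hello",
            "approval-policy": "on-request",
            "sandbox": "workspace-write",
        });
        let param: LlmxToolCallParam =
            serde_json::from_value(json).expect("param deserializes");
        assert_eq!("hello", param.prompt);
        assert_eq!(
            Some(LlmxToolCallApprovalPolicy::OnRequest),
            param.approval_policy
        );
        assert_eq!(
            Some(LlmxToolCallSandboxMode::WorkspaceWrite),
            param.sandbox
        );
    }
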
#[test]
fn verify_llmx_tool_reply_json_schema() {
let tool = create_tool_for_llmx_tool_call_reply_param();
let tool_json = serde_json::to_value(&tool).expect("tool serializes");
let expected_tool_json = serde_json::json!({
"description": "Continue an LLMX conversation by providing the conversation id and prompt.",
"inputSchema": {
"properties": {
"conversationId": {
"description": "The conversation id for this LLMX session.",
"type": "string"
},
"prompt": {
"description": "The *next user prompt* to continue the LLMX conversation.",
"type": "string"
},
},
"required": [
"conversationId",
"prompt",
],
"type": "object",
},
"name": "llmx-reply",
"title": "LLMX Reply",
});
assert_eq!(expected_tool_json, tool_json);
}
}