feat: Complete LLMX v0.1.0 - Rebrand from Codex with LiteLLM Integration
This release represents a comprehensive transformation of the codebase from Codex to LLMX, enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup
- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation
- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after mass rename

### Phase 3: LiteLLM Integration
- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between Responses API and Chat Completions API

### Phase 4: TypeScript/Node.js Components
- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated TypeScript SDK to use new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation
- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation
- Fixed all Rust tests for new naming scheme
- Updated snapshot tests in TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for new API endpoints
- Ensured compatibility with Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline
- Updated GitHub Actions workflows for LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for new package names
- Made Apple code signing optional in release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to workspace
- Updated dotslash configuration for new binary names

### Phase 8: Final Polish
- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**
- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*)
- npm package renamed to `@valknar/llmx`

**New Features:**
- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**
- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**
- Updated Cargo.lock with new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM integration, maintaining full backward compatibility with existing functionality while opening support for a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
llmx-rs/docs/llmx_mcp_interface.md (new file, 141 lines)

# LLMX MCP Server Interface [experimental]

This document describes LLMX’s experimental MCP server interface: a JSON‑RPC API that runs over the Model Context Protocol (MCP) transport to control a local LLMX engine.

- Status: experimental and subject to change without notice
- Server binary: `llmx mcp-server` (or `llmx-mcp-server`)
- Transport: standard MCP over stdio (JSON‑RPC 2.0, line‑delimited)

## Overview

LLMX exposes a small set of MCP‑compatible methods to create and manage conversations, send user input, receive live events, and handle approval prompts. The types are defined in `protocol/src/mcp_protocol.rs` and re‑used by the MCP server implementation in `mcp-server/`.

At a glance:

- Conversations
  - `newConversation` → start an LLMX session
  - `sendUserMessage` / `sendUserTurn` → send user input into a conversation
  - `interruptConversation` → stop the current turn
  - `listConversations`, `resumeConversation`, `archiveConversation`
- Configuration and info
  - `getUserSavedConfig`, `setDefaultModel`, `getUserAgent`, `userInfo`
  - `model/list` → enumerate available models and reasoning options
- Auth
  - `account/read`, `account/login/start`, `account/login/cancel`, `account/logout`, `account/rateLimits/read`
  - notifications: `account/login/completed`, `account/updated`, `account/rateLimits/updated`
- Utilities
  - `gitDiffToRemote`, `execOneOffCommand`
- Approvals (server → client requests)
  - `applyPatchApproval`, `execCommandApproval`
- Notifications (server → client)
  - `loginChatGptComplete`, `authStatusChange`
  - `llmx/event` stream with agent events

See code for full type definitions and exact shapes: `protocol/src/mcp_protocol.rs`.

## Starting the server

Run LLMX as an MCP server and connect an MCP client:

```bash
llmx mcp-server | your_mcp_client
```

For a simple inspection UI, you can also try:

```bash
npx @modelcontextprotocol/inspector llmx mcp-server
```

Use the separate `llmx mcp` subcommand to manage configured MCP server launchers in `config.toml`.

## Conversations

Start a new session with optional overrides:

Request `newConversation` params (subset):

- `model`: string model id (e.g. "o3", "gpt-5", "gpt-5-llmx")
- `profile`: optional named profile
- `cwd`: optional working directory
- `approvalPolicy`: `untrusted` | `on-request` | `on-failure` | `never`
- `sandbox`: `read-only` | `workspace-write` | `danger-full-access`
- `config`: map of additional config overrides
- `baseInstructions`: optional instruction override
- `compactPrompt`: optional replacement for the default compaction prompt
- `includePlanTool` / `includeApplyPatchTool`: booleans

Response: `{ conversationId, model, reasoningEffort?, rolloutPath }`

Send input to the active turn:

- `sendUserMessage` → enqueue items to the conversation
- `sendUserTurn` → structured turn with explicit `cwd`, `approvalPolicy`, `sandboxPolicy`, `model`, optional `effort`, and `summary` (see the sketch below)

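For illustration, a `sendUserTurn` request might look like the following. This is only a sketch: the `items` array and the exact value shapes for `sandboxPolicy` and `summary` are assumptions here; consult `protocol/src/mcp_protocol.rs` for the authoritative field types.

```json
{ "jsonrpc": "2.0", "id": 3, "method": "sendUserTurn", "params": { "conversationId": "c7b0…", "items": [{ "type": "text", "text": "Run the test suite" }], "cwd": "/path/to/project", "approvalPolicy": "on-request", "sandboxPolicy": "workspace-write", "model": "gpt-5", "effort": "medium", "summary": "auto" } }
```
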
Interrupt a running turn: `interruptConversation`.

List/resume/archive: `listConversations`, `resumeConversation`, `archiveConversation`.

## Models

Fetch the catalog of models available in the current LLMX build with `model/list`. The request accepts optional pagination inputs:

- `pageSize` – number of models to return (defaults to a server-selected value)
- `cursor` – opaque string from the previous response’s `nextCursor`

Each response yields:

- `items` – ordered list of models. A model includes:
  - `id`, `model`, `displayName`, `description`
  - `supportedReasoningEfforts` – array of objects with:
    - `reasoningEffort` – one of `minimal|low|medium|high`
    - `description` – human-friendly label for the effort
  - `defaultReasoningEffort` – suggested effort for the UI
  - `isDefault` – whether the model is recommended for most users
- `nextCursor` – pass into the next request to continue paging (optional)

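For illustration, a paged `model/list` exchange could look like this; the values (model id, descriptions, cursor) are placeholders, and the exact result envelope is defined in `protocol/src/mcp_protocol.rs`:

```json
{ "jsonrpc": "2.0", "id": 4, "method": "model/list", "params": { "pageSize": 1 } }
```

A matching response might be:

```json
{ "jsonrpc": "2.0", "id": 4, "result": { "items": [ { "id": "gpt-5", "model": "gpt-5", "displayName": "GPT-5", "description": "General-purpose model", "supportedReasoningEfforts": [ { "reasoningEffort": "medium", "description": "Balanced speed and depth" } ], "defaultReasoningEffort": "medium", "isDefault": true } ], "nextCursor": "opaque-cursor" } }
```

Pass `nextCursor` back as `cursor` in the next request to continue paging.
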
## Event stream

While a conversation runs, the server sends notifications:

- `llmx/event` with the serialized LLMX event payload. The shape matches `core/src/protocol.rs`’s `Event` and `EventMsg` types. Some notifications include a `_meta.requestId` to correlate with the originating request. (An example notification appears below.)
- Auth notifications via method names `loginChatGptComplete` and `authStatusChange`.

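Purely as a sketch of what such a notification might look like — the field names inside the event payload and its enum tagging are assumptions here; the authoritative shapes are the `Event` / `EventMsg` types in `core/src/protocol.rs`:

```json
{ "jsonrpc": "2.0", "method": "llmx/event", "params": { "id": "sub-1", "msg": { "type": "agent_message", "message": "Hello!" }, "_meta": { "requestId": 2 } } }
```
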
Clients should render events and, when present, surface approval requests (see next section).

## Approvals (server → client)

When LLMX needs approval to apply changes or run commands, the server issues JSON‑RPC requests to the client:

- `applyPatchApproval { conversationId, callId, fileChanges, reason?, grantRoot? }`
- `execCommandApproval { conversationId, callId, command, cwd, reason? }`

The client must reply with `{ decision: "allow" | "deny" }` for each request.

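As a sketch of the round trip (the representation of `command` as an argv array is an assumption here; see `protocol/src/mcp_protocol.rs` for the exact type), the server might send:

```json
{ "jsonrpc": "2.0", "id": 7, "method": "execCommandApproval", "params": { "conversationId": "c7b0…", "callId": "call_1", "command": ["cargo", "test"], "cwd": "/path/to/project" } }
```

and the client replies:

```json
{ "jsonrpc": "2.0", "id": 7, "result": { "decision": "allow" } }
```
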
## Auth helpers

For the complete request/response shapes and flow examples, see the [“Auth endpoints (v2)” section in the app‑server README](../app-server/README.md#auth-endpoints-v2).

## Example: start and send a message

```json
{ "jsonrpc": "2.0", "id": 1, "method": "newConversation", "params": { "model": "gpt-5", "approvalPolicy": "on-request" } }
```

Server responds:

```json
{ "jsonrpc": "2.0", "id": 1, "result": { "conversationId": "c7b0…", "model": "gpt-5", "rolloutPath": "/path/to/rollout.jsonl" } }
```

Then send input:

```json
{ "jsonrpc": "2.0", "id": 2, "method": "sendUserMessage", "params": { "conversationId": "c7b0…", "items": [{ "type": "text", "text": "Hello LLMX" }] } }
```

While processing, the server emits `llmx/event` notifications containing agent output, approvals, and status updates.

## Compatibility and stability

This interface is experimental. Method names, fields, and event shapes may evolve. For the authoritative schema, consult `protocol/src/mcp_protocol.rs` and the corresponding server wiring in `mcp-server/`.

llmx-rs/docs/protocol_v1.md (new file, 173 lines)

Overview of the protocol defined in [protocol.rs](../core/src/protocol.rs) and [agent.rs](../core/src/agent.rs).

The goal of this document is to define the terminology used in the system and explain its expected behavior.

NOTE: The code might not completely match this spec. There are a few minor changes that need to be made after this spec has been reviewed, which will not alter the existing TUI's functionality.

## Entities

These are the entities that exist on the llmx backend. The intent of this section is to establish vocabulary and construct a shared mental model for the `LLMX` core system.

0. `Model`
   - In our case, this is the Responses REST API
1. `LLMX`
   - The core engine of llmx
   - Runs locally, either in a background thread or a separate process
   - Communicated with via a queue pair – SQ (Submission Queue) / EQ (Event Queue)
   - Takes user input, makes requests to the `Model`, executes commands, and applies patches
2. `Session`
   - The `LLMX`'s current configuration and state
   - `LLMX` starts with no `Session`; it is initialized by `Op::ConfigureSession`, which should be the first message sent by the UI.
   - The current `Session` can be reconfigured with additional `Op::ConfigureSession` calls.
   - Any running execution is aborted when the session is reconfigured.
3. `Task`
   - A `Task` is `LLMX` executing work in response to user input.
   - A `Session` has at most one `Task` running at a time.
   - Receiving `Op::UserInput` starts a `Task`.
   - Consists of a series of `Turn`s.
   - The `Task` executes until:
     - The `Model` completes the task and there is no output to feed into an additional `Turn`
     - Additional `Op::UserInput` aborts the current task and starts a new one
     - The UI interrupts with `Op::Interrupt`
     - Fatal errors are encountered, e.g. the `Model` connection exceeding retry limits
     - It is blocked by user approval (executing a command or patch)
4. `Turn`
   - One cycle of iteration in a `Task`, consisting of:
     - A request to the `Model` - (initially) prompt + (optional) `last_response_id`, or (in loop) previous turn output
     - The `Model` streams responses back in an SSE, which are collected until the "completed" message arrives and the SSE terminates
     - `LLMX` then executes command(s), applies patch(es), and outputs message(s) returned by the `Model`
     - Pauses to request approval when necessary
   - The output of one `Turn` is the input to the next `Turn`.
   - A `Turn` yielding no output terminates the `Task`.

The term "UI" is used to refer to the application driving `LLMX`. This may be the CLI / TUI chat-like interface that users operate, or it may be a GUI interface such as a VSCode extension. The UI is external to `LLMX`, as `LLMX` is intended to be operated by arbitrary UI implementations.

When a `Turn` completes, the `response_id` from the `Model`'s final `response.completed` message is stored in the `Session` state to resume the thread given the next `Op::UserInput`. The `response_id` is also returned in the `EventMsg::TurnComplete` to the UI, which can be used to fork the thread from an earlier point by providing it in the `Op::UserInput`.

Since only one `Task` can run at a time, it is recommended to run a separate `LLMX` for each thread of work when tasks need to run in parallel.

## Interface

- `LLMX`
  - Communicates with the UI via an `SQ` (Submission Queue) and an `EQ` (Event Queue).
- `Submission`
  - These are messages sent on the `SQ` (UI -> `LLMX`)
  - Has a string ID provided by the UI, referred to as `sub_id`
  - `Op` refers to the enum of all possible `Submission` payloads
    - This enum is `non_exhaustive`; variants may be added in the future
- `Event`
  - These are messages sent on the `EQ` (`LLMX` -> UI)
  - Each `Event` has a non-unique ID, matching the `sub_id` from the `Op::UserInput` that started the current task.
  - `EventMsg` refers to the enum of all possible `Event` payloads
    - This enum is `non_exhaustive`; variants may be added in the future
    - It should be expected that new `EventMsg` variants will be added over time to expose more detailed information about the model's actions.

For complete documentation of the `Op` and `EventMsg` variants, refer to [protocol.rs](../core/src/protocol.rs). Some example payload types:

- `Op`
  - `Op::UserInput` – Any input from the user to kick off a `Task`
  - `Op::Interrupt` – Interrupts a running task
  - `Op::ExecApproval` – Approve or deny code execution
- `EventMsg`
  - `EventMsg::AgentMessage` – Messages from the `Model`
  - `EventMsg::ExecApprovalRequest` – Request approval from the user to execute a command
  - `EventMsg::TaskComplete` – A task completed successfully
  - `EventMsg::Error` – A task stopped with an error
  - `EventMsg::Warning` – A non-fatal warning that the client should surface to the user
  - `EventMsg::TurnComplete` – Contains a `response_id` bookmark for the last `response_id` executed by the task. This can be used to continue the task at a later point in time, perhaps with additional user input.

The `response_id` returned from each task matches the OpenAI `response_id` stored in the API's `/responses` endpoint. It can be stored and used in future `Sessions` to resume threads of work.

## Transport

The protocol can operate over any transport that supports bi-directional streaming:

- cross-thread channels
- IPC channels
- stdin/stdout
- TCP
- HTTP/2
- gRPC

Non-framed transports, such as stdin/stdout and TCP, should use newline-delimited JSON when sending messages.

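Purely as an illustrative sketch of that framing — the concrete field names and enum tagging of `Op` and `EventMsg` are assumptions here, and the authoritative serialization lives in [protocol.rs](../core/src/protocol.rs) — the first line below would be a `Submission` sent on the SQ, and the second a corresponding `Event` arriving on the EQ with the matching `sub_id`:

```json
{ "id": "sub-1", "op": { "type": "user_input", "items": [{ "type": "text", "text": "Fix the failing test" }] } }
{ "id": "sub-1", "msg": { "type": "agent_message", "message": "Looking into the failure now." } }
```
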
## Example Flows

Sequence diagram examples of common interactions. In each diagram, some unimportant events may be omitted for simplicity.

### Basic UI Flow

A single user input, followed by a 2-turn task.

```mermaid
sequenceDiagram
    box UI
    participant user as User
    end
    box Daemon
    participant llmx as LLMX
    participant session as Session
    participant task as Task
    end
    box REST API
    participant agent as Model
    end
    user->>llmx: Op::ConfigureSession
    llmx-->>session: create session
    llmx->>user: Event::SessionConfigured
    user->>session: Op::UserInput
    session-->>+task: start task
    task->>user: Event::TaskStarted
    task->>agent: prompt
    agent->>task: response (exec)
    task->>-user: Event::ExecApprovalRequest
    user->>+task: Op::ExecApproval::Allow
    task->>user: Event::ExecStart
    task->>task: exec
    task->>user: Event::ExecStop
    task->>user: Event::TurnComplete
    task->>agent: stdout
    agent->>task: response (patch)
    task->>task: apply patch (auto-approved)
    task->>agent: success
    agent->>task: response<br/>(msg + completed)
    task->>user: Event::AgentMessage
    task->>user: Event::TurnComplete
    task->>-user: Event::TaskComplete
```

### Task Interrupt

Interrupting a task and continuing with additional user input.

```mermaid
sequenceDiagram
    box UI
    participant user as User
    end
    box Daemon
    participant session as Session
    participant task1 as Task1
    participant task2 as Task2
    end
    box REST API
    participant agent as Model
    end
    user->>session: Op::UserInput
    session-->>+task1: start task
    task1->>user: Event::TaskStarted
    task1->>agent: prompt
    agent->>task1: response (exec)
    task1->>task1: exec (auto-approved)
    task1->>user: Event::TurnComplete
    task1->>agent: stdout
    agent->>task1: response (exec)
    task1->>task1: exec (auto-approved)
    user->>task1: Op::Interrupt
    task1->>-user: Event::Error("interrupted")
    user->>session: Op::UserInput w/ last_response_id
    session-->>+task2: start task
    task2->>user: Event::TaskStarted
    task2->>agent: prompt + Task1 last_response_id
    agent->>task2: response (exec)
    task2->>task2: exec (auto-approved)
    task2->>user: Event::TurnComplete
    task2->>agent: stdout
    agent->>task2: msg + completed
    task2->>user: Event::AgentMessage
    task2->>user: Event::TurnComplete
    task2->>-user: Event::TaskComplete
```
