llmx/llmx-rs/docs/protocol_v1.md

feat: Complete LLMX v0.1.0 - Rebrand from Codex with LiteLLM Integration

This release represents a comprehensive transformation of the codebase from Codex to LLMX,
enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup
- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation
- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after mass rename

### Phase 3: LiteLLM Integration
- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between Responses API and Chat Completions API

### Phase 4: TypeScript/Node.js Components
- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated TypeScript SDK to use new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation
- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation
- Fixed all Rust tests for new naming scheme
- Updated snapshot tests in TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for new API endpoints
- Ensured compatibility with Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline
- Updated GitHub Actions workflows for LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for new package names
- Made Apple code signing optional in release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to workspace
- Updated dotslash configuration for new binary names

### Phase 8: Final Polish
- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**
- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*)
- npm package renamed to `@valknar/llmx`

**New Features:**
- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**
- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**
- Updated Cargo.lock with new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM
integration, maintaining full backward compatibility with existing functionality
while opening support for a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>

Overview of the protocol defined in protocol.rs and agent.rs.

The goal of this document is to define the terminology used in the system and to explain its expected behavior.

NOTE: The code might not completely match this spec. There are a few minor changes that need to be made after this spec has been reviewed, which will not alter the existing TUI's functionality.

## Entities

These are the entities that exist on the llmx backend. The intent of this section is to establish vocabulary and build a shared mental model of the LLMX core system.

1. **Model**
   - In our case, this is the Responses REST API
2. **LLMX**
   - The core engine of llmx
   - Runs locally, either in a background thread or a separate process
   - Communicates with the UI via a queue pair: SQ (Submission Queue) / EQ (Event Queue)
   - Takes user input, makes requests to the Model, executes commands, and applies patches
3. **Session**
   - LLMX's current configuration and state
   - LLMX starts with no Session; one is initialized by `Op::ConfigureSession`, which should be the first message sent by the UI
   - The current Session can be reconfigured with additional `Op::ConfigureSession` calls
   - Any running execution is aborted when the Session is reconfigured
4. **Task**
   - A Task is LLMX executing work in response to user input
   - A Session has at most one Task running at a time
   - Receiving `Op::UserInput` starts a Task
   - Consists of a series of Turns
   - The Task executes until one of the following occurs:
     - The Model completes the task and there is no output to feed into an additional Turn
     - An additional `Op::UserInput` aborts the current Task and starts a new one
     - The UI interrupts with `Op::Interrupt`
     - A fatal error is encountered, e.g. the Model connection exceeding retry limits
     - It is blocked waiting for user approval (to execute a command or apply a patch)
5. **Turn**
   - One cycle of iteration within a Task (see the sketch after this list), consisting of:
     - A request to the Model - (initially) the prompt + (optional) `last_response_id`, or (in the loop) the previous Turn's output
     - The Model streams responses back over SSE; they are collected until the "completed" message arrives and the SSE terminates
     - LLMX then executes command(s), applies patch(es), and outputs message(s) returned by the Model
     - LLMX pauses to request approval when necessary
   - The output of one Turn is the input to the next Turn
   - A Turn yielding no output terminates the Task

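To make the Task/Turn loop concrete, here is a minimal, hypothetical sketch in Rust; the names (`run_task`, `run_turn`, `TurnOutput`) are illustrative only and do not come from the llmx codebase, and interruption, approval, and error handling are omitted.

```rust
// Hypothetical sketch of the Task loop (names are illustrative, not from llmx).
enum TurnOutput {
    /// Output (command results, patch results, ...) to feed into the next Turn.
    Continue(Vec<String>),
    /// The Model completed with nothing further to feed forward.
    Done,
}

fn run_task(initial_prompt: String) -> Result<(), String> {
    // The first Turn's input is the user's prompt (plus, optionally, a
    // last_response_id to resume an earlier thread - not shown here).
    let mut input = vec![initial_prompt];
    loop {
        match run_turn(&input)? {
            // The output of one Turn is the input to the next Turn.
            TurnOutput::Continue(next) => input = next,
            // A Turn yielding no output terminates the Task.
            TurnOutput::Done => return Ok(()),
        }
    }
}

fn run_turn(_input: &[String]) -> Result<TurnOutput, String> {
    // Placeholder: send the input to the Model, collect the SSE stream until
    // "completed", execute commands / apply patches (pausing for approval as
    // needed), and gather any output produced along the way.
    Ok(TurnOutput::Done)
}
```
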
The term "UI" is used to refer to the application driving LLMX. This may be the CLI / TUI chat-like interface that users operate, or it may be a GUI interface like a VSCode extension. The UI is external to LLMX, as LLMX is intended to be operated by arbitrary UI implementations.

When a Turn completes, the `response_id` from the Model's final `response.completed` message is stored in the Session state so the thread can be resumed on the next `Op::UserInput`. The `response_id` is also returned to the UI in `EventMsg::TurnComplete`; by providing it in a later `Op::UserInput`, the UI can fork the thread from an earlier point.

Since only one Task can run at a time, it is recommended to run a separate LLMX instance for each parallel thread of work.

## Interface

- **LLMX**
  - Communicates with the UI via an SQ (Submission Queue) and an EQ (Event Queue).
- **Submission**
  - These are messages sent on the SQ (UI -> LLMX)
  - Has a string ID provided by the UI, referred to as `sub_id`
  - `Op` refers to the enum of all possible Submission payloads (a simplified sketch of `Op` and `EventMsg` follows this list)
    - This enum is `non_exhaustive`; variants can be added at future dates
- **Event**
  - These are messages sent on the EQ (LLMX -> UI)
  - Each Event has a non-unique ID, matching the `sub_id` from the `Op::UserInput` that started the current Task
  - `EventMsg` refers to the enum of all possible Event payloads
    - This enum is `non_exhaustive`; variants can be added at future dates
    - Expect new `EventMsg` variants to be added over time to expose more detailed information about the model's actions
For complete documentation of the Op and EventMsg variants, refer to protocol.rs. Some example payload types:

- `Op`
  - `Op::UserInput` - any input from the user to kick off a Task
  - `Op::Interrupt` - interrupts a running Task
  - `Op::ExecApproval` - approve or deny code execution
- `EventMsg`
  - `EventMsg::AgentMessage` - messages from the Model
  - `EventMsg::ExecApprovalRequest` - request approval from the user to execute a command
  - `EventMsg::TaskComplete` - a Task completed successfully
  - `EventMsg::Error` - a Task stopped with an error
  - `EventMsg::Warning` - a non-fatal warning that the client should surface to the user
  - `EventMsg::TurnComplete` - contains a `response_id` bookmark for the last `response_id` executed by the Task. This can be used to continue the Task at a later point in time, perhaps with additional user input.

The `response_id` returned from each Task matches the OpenAI `response_id` stored in the API's `/responses` endpoint. It can be stored and used in future Sessions to resume threads of work.
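
Building on the simplified shapes above, a hypothetical UI-side loop might look like the following; the channel wiring and every field name are assumptions for illustration, not the actual llmx API:

```rust
use std::sync::mpsc::{Receiver, Sender};

// Uses the illustrative Submission / Event / Op / EventMsg shapes sketched above;
// the mpsc channels stand in for the SQ and EQ and are not the real transport.
fn drive_task(sq: &Sender<Submission>, eq: &Receiver<Event>, prompt: String) {
    let sub_id = "sub-1".to_string();
    // Kick off a Task with user input.
    sq.send(Submission {
        id: sub_id.clone(),
        op: Op::UserInput { text: prompt, last_response_id: None },
    })
    .unwrap();

    for event in eq.iter() {
        // Events carry the sub_id of the Op::UserInput that started the Task.
        if event.id != sub_id {
            continue;
        }
        match event.msg {
            EventMsg::AgentMessage { message } => println!("model: {message}"),
            EventMsg::ExecApprovalRequest { command } => {
                // Ask the user, then answer on the SQ with Op::ExecApproval.
                println!("approve running {command:?}? (auto-approving here)");
                sq.send(Submission { id: sub_id.clone(), op: Op::ExecApproval { approved: true } })
                    .unwrap();
            }
            EventMsg::TurnComplete { response_id } => {
                // Bookmark this so a later Op::UserInput can resume or fork the thread.
                println!("turn complete, response_id = {response_id}");
            }
            EventMsg::TaskComplete => break,
            EventMsg::Error { message } => {
                eprintln!("task failed: {message}");
                break;
            }
            // EventMsg is non_exhaustive: ignore variants added in future versions.
            _ => {}
        }
    }
}
```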

## Transport

The protocol can operate over any transport that supports bi-directional streaming:

- cross-thread channels
- IPC channels
- stdin/stdout
- TCP
- HTTP/2
- gRPC

Non-framed transports, such as stdin/stdout and TCP, should use newline-delimited JSON when sending messages.
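
As a sketch of that framing, assuming the illustrative Submission and Event types above additionally derive serde's Serialize/Deserialize traits (an assumption for this example, not a statement about the real wire schema):

```rust
use std::io::{BufRead, Write};

// Newline-delimited JSON framing over stdin/stdout: one message per line.
// Assumes the illustrative Submission / Event types above derive
// serde::Serialize / serde::Deserialize; the real payloads live in protocol.rs.
fn send_submission(out: &mut impl Write, sub: &Submission) -> std::io::Result<()> {
    let line = serde_json::to_string(sub).expect("serializable submission");
    out.write_all(line.as_bytes())?;
    out.write_all(b"\n")?; // one JSON object per line
    out.flush()
}

fn read_events(input: impl BufRead) -> impl Iterator<Item = Event> {
    input
        .lines()
        .filter_map(Result::ok)
        // Each line on the EQ side is one JSON-encoded Event.
        .filter_map(|line| serde_json::from_str::<Event>(&line).ok())
}
```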

## Example Flows

Sequence diagram examples of common interactions. In each diagram, some unimportant events are omitted for simplicity.

### Basic UI Flow

A single user input, followed by a 2-turn task

```mermaid
sequenceDiagram
    box UI
    participant user as User
    end
    box Daemon
    participant llmx as LLMX
    participant session as Session
    participant task as Task
    end
    box Rest API
    participant agent as Model
    end
    user->>llmx: Op::ConfigureSession
    llmx-->>session: create session
    llmx->>user: Event::SessionConfigured
    user->>session: Op::UserInput
    session-->>+task: start task
    task->>user: Event::TaskStarted
    task->>agent: prompt
    agent->>task: response (exec)
    task->>-user: Event::ExecApprovalRequest
    user->>+task: Op::ExecApproval::Allow
    task->>user: Event::ExecStart
    task->>task: exec
    task->>user: Event::ExecStop
    task->>user: Event::TurnComplete
    task->>agent: stdout
    agent->>task: response (patch)
    task->>task: apply patch (auto-approved)
    task->>agent: success
    agent->>task: response<br/>(msg + completed)
    task->>user: Event::AgentMessage
    task->>user: Event::TurnComplete
    task->>-user: Event::TaskComplete
```

### Task Interrupt

Interrupting a task and continuing with additional user input.

```mermaid
sequenceDiagram
    box UI
    participant user as User
    end
    box Daemon
    participant session as Session
    participant task1 as Task1
    participant task2 as Task2
    end
    box Rest API
    participant agent as Model
    end
    user->>session: Op::UserInput
    session-->>+task1: start task
    task1->>user: Event::TaskStarted
    task1->>agent: prompt
    agent->>task1: response (exec)
    task1->>task1: exec (auto-approved)
    task1->>user: Event::TurnComplete
    task1->>agent: stdout
    agent->>task1: response (exec)
    task1->>task1: exec (auto-approved)
    user->>task1: Op::Interrupt
    task1->>-user: Event::Error("interrupted")
    user->>session: Op::UserInput w/ last_response_id
    session-->>+task2: start task
    task2->>user: Event::TaskStarted
    task2->>agent: prompt + Task1 last_response_id
    agent->>task2: response (exec)
    task2->>task2: exec (auto-approved)
    task2->>user: Event::TurnComplete
    task2->>agent: stdout
    agent->>task2: msg + completed
    task2->>user: Event::AgentMessage
    task2->>user: Event::TurnComplete
    task2->>-user: Event::TaskComplete
```