This release represents a comprehensive transformation of the codebase from Codex to LLMX, enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup

- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation

- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: `codex.rs` → `llmx.rs`, `codex_delegate.rs` → `llmx_delegate.rs`
- Updated all internal references, imports, and type names
- Renamed directories: `codex-rs/` → `llmx-rs/`, `codex-backend-openapi-models/` → `llmx-backend-openapi-models/`
- Fixed all Rust compilation errors after the mass rename

### Phase 3: LiteLLM Integration

- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: `OPENAI_BASE_URL` → `LLMX_BASE_URL`
- Added `LLMX_API_KEY` for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between the Responses API and the Chat Completions API

### Phase 4: TypeScript/Node.js Components

- Renamed npm package: `@codex/codex-cli` → `@valknar/llmx`
- Updated TypeScript SDK to use new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation

- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: `~/.codex/` → `~/.llmx/`
- Added comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation

- Fixed all Rust tests for the new naming scheme
- Updated snapshot tests in the TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for new API endpoints
- Ensured compatibility with the Claude Sonnet 4.5 model
- Fixed test environment variables (`LLMX_API_KEY`, `LLMX_BASE_URL`)

### Phase 7: Build & Release Pipeline

- Updated GitHub Actions workflows for LLMX binary names
- Fixed `rust-release.yml` to reference `llmx-rs/` instead of `codex-rs/`
- Updated CI/CD pipelines for new package names
- Made Apple code signing optional in the release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to the workspace
- Updated dotslash configuration for the new binary names

### Phase 8: Final Polish

- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied `cargo fmt` and prettier formatting across the codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:** (a migration sketch follows this list)

- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (`CODEX_*` → `LLMX_*`)
- npm package renamed to `@valknar/llmx`
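For users coming from Codex, these breaking changes amount to moving the config directory and switching to the renamed environment variables. A minimal sketch, assuming a default install on macOS/Linux (the key and proxy URL below are placeholders for your own LiteLLM setup):

```bash
# Move the old config directory to its new location, if present:
[ -d ~/.codex ] && mv ~/.codex ~/.llmx

# Switch to the renamed environment variables, e.g. in ~/.bashrc:
export LLMX_API_KEY="sk-your-litellm-key"     # unified key (placeholder value)
export LLMX_BASE_URL="http://localhost:4000"  # was OPENAI_BASE_URL; here, a local LiteLLM proxy
```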
**New Features:**

- Support for 100+ LLM providers via LiteLLM
- Unified authentication with `LLMX_API_KEY`
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**

- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**

- Updated `Cargo.lock` with new package names
- Updated npm dependencies in `llmx-cli`
- Enhanced OpenAPI models for the LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM integration, maintaining functional parity with the original codebase while opening support for a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
## Authentication

### Usage-based billing alternative: Use an OpenAI API key
If you prefer pay-as-you-go billing, you can still authenticate with your OpenAI API key:
```bash
printenv OPENAI_API_KEY | llmx login --with-api-key
```
Alternatively, read from a file:
```bash
llmx login --with-api-key < my_key.txt
```
The legacy `--api-key` flag now exits with an error instructing you to use `--with-api-key`, so that the key never appears in shell history or process listings.
This key must, at minimum, have write access to the Responses API.
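A quick way to verify a key before wiring it up is a minimal Responses API call. This is a sketch against the default OpenAI endpoint; the model name is only an example:

```bash
# A JSON response (rather than an authentication error) indicates the
# key can reach the Responses API:
curl -s https://api.openai.com/v1/responses \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "input": "ping"}'
```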
### Migrating to ChatGPT login from API key

If you've used the LLMX CLI before with usage-based billing via an API key and want to switch to using your ChatGPT plan, follow these steps (a combined sketch follows the list):

- Update the CLI and ensure `llmx --version` is `0.20.0` or later
- Delete `~/.llmx/auth.json` (on Windows: `C:\Users\USERNAME\.llmx\auth.json`)
- Run `llmx login` again
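Taken together, the steps look roughly like this on macOS/Linux (adjust the `auth.json` path on Windows):

```bash
llmx --version        # confirm 0.20.0 or later
rm ~/.llmx/auth.json  # remove the stored API-key credentials
llmx login            # re-authenticate with your ChatGPT plan
```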
Connecting on a "Headless" Machine
Today, the login process entails running a server on `localhost:1455`. If you are on a "headless" server, such as a Docker container, or are ssh'd into a remote machine, loading `localhost:1455` in the browser on your local machine will not automatically connect to the webserver running on the headless machine, so you must use one of the following workarounds:
#### Authenticate locally and copy your credentials to the "headless" machine
The easiest solution is likely to run through the `llmx login` process on your local machine, such that `localhost:1455` is accessible in your web browser. When you complete the authentication process, an `auth.json` file should be available at `$LLMX_HOME/auth.json` (on Mac/Linux, `$LLMX_HOME` defaults to `~/.llmx`, whereas on Windows it defaults to `%USERPROFILE%\.llmx`).
Because the `auth.json` file is not tied to a specific host, once you complete the authentication flow locally, you can copy `$LLMX_HOME/auth.json` to the headless machine, and `llmx` should then "just work" on that machine. Note that to copy a file to a Docker container, you can do:
```bash
# substitute MY_CONTAINER with the name or id of your Docker container:
CONTAINER_HOME=$(docker exec MY_CONTAINER printenv HOME)
docker exec MY_CONTAINER mkdir -p "$CONTAINER_HOME/.llmx"
docker cp auth.json MY_CONTAINER:"$CONTAINER_HOME/.llmx/auth.json"
```
whereas if you are ssh'd into a remote machine, you likely want to use `scp`:
```bash
ssh user@remote 'mkdir -p ~/.llmx'
scp ~/.llmx/auth.json user@remote:~/.llmx/auth.json
```
or try this one-liner:
```bash
ssh user@remote 'mkdir -p ~/.llmx && cat > ~/.llmx/auth.json' < ~/.llmx/auth.json
```
#### Connecting through VPS or remote
If you run LLMX on a remote machine (VPS/server) without a local browser, the login helper starts a server on `localhost:1455` on the remote host. To complete login in your local browser, forward that port to your machine before starting the login flow:
```bash
# From your local machine
ssh -L 1455:localhost:1455 <user>@<remote-host>
```
Then, in that SSH session, run `llmx` and select "Sign in with ChatGPT". When prompted, open the printed URL (it will be `http://localhost:1455/...`) in your local browser. The traffic will be tunneled to the remote server.
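If the page doesn't load, you can check from your local machine whether the tunnel is up. This assumes the login server is already listening on the remote host:

```bash
# Should print an HTTP status line once the tunnel and login server are both up:
curl -sI http://localhost:1455 | head -n 1
```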