# Authentication

## Usage-based billing alternative: Use an OpenAI API key
If you prefer to pay as you go, you can still authenticate with your OpenAI API key:

```shell
printenv OPENAI_API_KEY | llmx login --with-api-key
```
Alternatively, read the key from a file:

```shell
llmx login --with-api-key < my_key.txt
```
The legacy `--api-key` flag now exits with an error instructing you to use `--with-api-key`, so that the key never appears in shell history or process listings.
This key must, at minimum, have write access to the Responses API.
## Migrating to ChatGPT login from API key
If you've used the LLMX CLI before with usage-based billing via an API key and want to switch to using your ChatGPT plan, follow these steps:
- Update the CLI and ensure `llmx --version` is `0.20.0` or later
- Delete `~/.llmx/auth.json` (on Windows: `C:\Users\USERNAME\.llmx\auth.json`)
- Run `llmx login` again
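The steps above can be sketched as a small shell script. `version_ge` is a hypothetical helper (not part of the LLMX CLI), and the assumption that `llmx --version` prints the version as its last whitespace-separated field is unverified:

```shell
# Succeeds (exit 0) when version $1 >= version $2, compared with sort -V.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Sketch of the migration itself (uncomment to run; assumes llmx is on PATH):
# current="$(llmx --version | awk '{print $NF}')"
# if version_ge "$current" "0.20.0"; then
#   rm -f ~/.llmx/auth.json   # drop the stored API-key credentials
#   llmx login                # re-authenticate with your ChatGPT plan
# else
#   echo "update llmx to 0.20.0 or later first" >&2
# fi
```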
## Connecting on a "Headless" Machine
Today, the login process entails running a server on localhost:1455. On a "headless" machine, such as a Docker container or a remote host you are ssh'd into, loading localhost:1455 in the browser on your local machine will not reach the webserver running on the headless machine, so you must use one of the following workarounds:
### Authenticate locally and copy your credentials to the "headless" machine
The easiest solution is likely to run through the `llmx login` process on your local machine, where localhost:1455 is accessible in your web browser. When you complete the authentication process, an `auth.json` file should be available at `$CODEX_HOME/auth.json` (on Mac/Linux, `$CODEX_HOME` defaults to `~/.llmx`, whereas on Windows it defaults to `%USERPROFILE%\.llmx`).
Because the `auth.json` file is not tied to a specific host, once you complete the authentication flow locally, you can copy `$CODEX_HOME/auth.json` to the headless machine, and `llmx` should "just work" there. Note that to copy a file into a Docker container, you can do:
```shell
# substitute MY_CONTAINER with the name or id of your Docker container:
CONTAINER_HOME=$(docker exec MY_CONTAINER printenv HOME)
docker exec MY_CONTAINER mkdir -p "$CONTAINER_HOME/.llmx"
docker cp auth.json MY_CONTAINER:"$CONTAINER_HOME/.llmx/auth.json"
```
whereas if you are ssh'd into a remote machine, you likely want to use scp:
```shell
ssh user@remote 'mkdir -p ~/.llmx'
scp ~/.llmx/auth.json user@remote:~/.llmx/auth.json
```
or try this one-liner:
```shell
ssh user@remote 'mkdir -p ~/.llmx && cat > ~/.llmx/auth.json' < ~/.llmx/auth.json
```
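However you copy it, `auth.json` contains credentials, so it is worth restricting its permissions on the destination machine (e.g. `chmod 600 ~/.llmx/auth.json`). A local demonstration with a scratch file standing in for `auth.json`:

```shell
# Create a scratch stand-in for auth.json, lock it down, and inspect it.
f="$(mktemp)"
echo '{"tokens": "placeholder"}' > "$f"  # fake credentials, not a real token
chmod 600 "$f"                           # owner read/write only
perms="$(stat -c '%a' "$f")"             # GNU stat; use `stat -f '%Lp'` on macOS
echo "$perms"
rm -f "$f"
```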
### Connecting through VPS or remote
If you run LLMX on a remote machine (VPS/server) without a local browser, the login helper starts a server on localhost:1455 on the remote host. To complete login in your local browser, forward that port to your machine before starting the login flow:
```shell
# From your local machine
ssh -L 1455:localhost:1455 <user>@<remote-host>
```
Then, in that SSH session, run `llmx` and select "Sign in with ChatGPT". When prompted, open the printed URL (it will be `http://localhost:1455/...`) in your local browser. The traffic will be tunneled to the remote server.
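If you connect to the same host often, the forward can be declared once in `~/.ssh/config` rather than typed per session (a sketch; the host alias, hostname, and user are placeholders):

```
Host llmx-remote
    HostName remote-host.example.com
    User user
    LocalForward 1455 localhost:1455
```

With that entry, a plain `ssh llmx-remote` establishes the tunnel automatically.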