Phase 5: Configuration & Documentation
Updated all documentation and configuration files:

Documentation changes:
- Updated README.md to describe LLMX as LiteLLM-powered fork
- Updated CLAUDE.md with LiteLLM integration details
- Updated 50+ markdown files across docs/, llmx-rs/, llmx-cli/, sdk/
- Changed all references: codex → llmx, Codex → LLMX
- Updated package references: @openai/codex → @llmx/llmx
- Updated repository URLs: github.com/openai/codex → github.com/valknar/llmx

Configuration changes:
- Updated .github/dependabot.yaml
- Updated .github workflow files
- Updated cliff.toml (changelog configuration)
- Updated Cargo.toml comments

Key branding updates:
- Project description: "coding agent from OpenAI" → "coding agent powered by LiteLLM"
- Added attribution to original OpenAI Codex project
- Documented LiteLLM integration benefits

Files changed: 51 files (559 insertions, 559 deletions)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
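The rename itself is mechanical. For context, here is a hypothetical reproduction of the substitution pass (assuming GNU `sed` and a clean git checkout; the actual commit may have been produced differently):

```shell
# Hypothetical reproduction of the bulk rename (GNU sed; run from the repo root).
# Order matters: rewrite repository URLs and package scopes before the
# generic codex -> llmx identifier substitutions would clobber them.
git grep -ilz 'codex' -- '*.md' '*.toml' '*.yaml' '*.yml' |
  xargs -0 sed -i \
    -e 's|github.com/openai/codex|github.com/valknar/llmx|g' \
    -e 's|@openai/codex|@llmx/llmx|g' \
    -e 's/codex/llmx/g' \
    -e 's/Codex/LLMX/g'
```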
@@ -1,23 +1,23 @@
-# codex-responses-api-proxy
+# llmx-responses-api-proxy
 
 A strict HTTP proxy that only forwards `POST` requests to `/v1/responses` to the OpenAI API (`https://api.openai.com`), injecting the `Authorization: Bearer $OPENAI_API_KEY` header. Everything else is rejected with `403 Forbidden`.
 
 ## Expected Usage
 
-**IMPORTANT:** `codex-responses-api-proxy` is designed to be run by a privileged user with access to `OPENAI_API_KEY` so that an unprivileged user cannot inspect or tamper with the process. Though if `--http-shutdown` is specified, an unprivileged user _can_ make a `GET` request to `/shutdown` to shut down the server, since an unprivileged user cannot send `SIGTERM` to kill the process.
+**IMPORTANT:** `llmx-responses-api-proxy` is designed to be run by a privileged user with access to `OPENAI_API_KEY` so that an unprivileged user cannot inspect or tamper with the process. Though if `--http-shutdown` is specified, an unprivileged user _can_ make a `GET` request to `/shutdown` to shut down the server, since an unprivileged user cannot send `SIGTERM` to kill the process.
 
-A privileged user (i.e., `root` or a user with `sudo`) who has access to `OPENAI_API_KEY` would run the following to start the server, as `codex-responses-api-proxy` reads the auth token from `stdin`:
+A privileged user (i.e., `root` or a user with `sudo`) who has access to `OPENAI_API_KEY` would run the following to start the server, as `llmx-responses-api-proxy` reads the auth token from `stdin`:
 
 ```shell
-printenv OPENAI_API_KEY | env -u OPENAI_API_KEY codex-responses-api-proxy --http-shutdown --server-info /tmp/server-info.json
+printenv OPENAI_API_KEY | env -u OPENAI_API_KEY llmx-responses-api-proxy --http-shutdown --server-info /tmp/server-info.json
 ```
 
-A non-privileged user would then run Codex as follows, specifying the `model_provider` dynamically:
+A non-privileged user would then run LLMX as follows, specifying the `model_provider` dynamically:
 
 ```shell
 PROXY_PORT=$(jq .port /tmp/server-info.json)
 PROXY_BASE_URL="http://127.0.0.1:${PROXY_PORT}"
-codex exec -c "model_providers.openai-proxy={ name = 'OpenAI Proxy', base_url = '${PROXY_BASE_URL}/v1', wire_api='responses' }" \
+llmx exec -c "model_providers.openai-proxy={ name = 'OpenAI Proxy', base_url = '${PROXY_BASE_URL}/v1', wire_api='responses' }" \
   -c model_provider="openai-proxy" \
   'Your prompt here'
 ```
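To stop a proxy started as above, any user can hit the shutdown endpoint. A minimal teardown sketch, assuming the `--http-shutdown` and `--server-info /tmp/server-info.json` flags from the example:

```shell
# Look up the port the proxy bound to, then ask the process to exit.
# Requires that the proxy was started with --http-shutdown.
PROXY_PORT=$(jq .port /tmp/server-info.json)
curl --fail --silent --show-error "http://127.0.0.1:${PROXY_PORT}/shutdown"
```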
@@ -30,7 +30,7 @@ curl --fail --silent --show-error "${PROXY_BASE_URL}/shutdown"
 
 ## Behavior
 
-- Reads the API key from `stdin`. All callers should pipe the key in (for example, `printenv OPENAI_API_KEY | codex-responses-api-proxy`).
+- Reads the API key from `stdin`. All callers should pipe the key in (for example, `printenv OPENAI_API_KEY | llmx-responses-api-proxy`).
 - Formats the header value as `Bearer <key>` and attempts to `mlock(2)` the memory holding that header so it is not swapped to disk.
 - Listens on the provided port or an ephemeral port if `--port` is not specified.
 - Accepts exactly `POST /v1/responses` (no query string). The request body is forwarded to `https://api.openai.com/v1/responses` with `Authorization: Bearer <key>` set. All original request headers (except any incoming `Authorization`) are forwarded upstream, with `Host` overridden to `api.openai.com`. For other requests, it responds with `403`.
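The strict routing is easy to verify. A sketch against a running proxy (illustrative model name and payload; only `POST /v1/responses` is forwarded, everything else gets `403`):

```shell
# Anything other than POST /v1/responses is rejected with 403.
curl -s -o /dev/null -w '%{http_code}\n' "http://127.0.0.1:${PROXY_PORT}/v1/models"

# A Responses API request is forwarded upstream with the injected
# Authorization header (the key never appears in the caller's environment).
curl -s "http://127.0.0.1:${PROXY_PORT}/v1/responses" \
  -H 'Content-Type: application/json' \
  -d '{"model": "gpt-4o", "input": "Say hello"}'
```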
@@ -40,19 +40,19 @@ curl --fail --silent --show-error "${PROXY_BASE_URL}/shutdown"
 ## CLI
 
 ```
-codex-responses-api-proxy [--port <PORT>] [--server-info <FILE>] [--http-shutdown] [--upstream-url <URL>]
+llmx-responses-api-proxy [--port <PORT>] [--server-info <FILE>] [--http-shutdown] [--upstream-url <URL>]
 ```
 
 - `--port <PORT>`: Port to bind on `127.0.0.1`. If omitted, an ephemeral port is chosen.
 - `--server-info <FILE>`: If set, the proxy writes a single line of JSON with `{ "port": <PORT>, "pid": <PID> }` once listening.
 - `--http-shutdown`: If set, enables `GET /shutdown` to exit the process with code `0`.
 - `--upstream-url <URL>`: Absolute URL to forward requests to. Defaults to `https://api.openai.com/v1/responses`.
-- Authentication is fixed to `Authorization: Bearer <key>` to match the Codex CLI expectations.
+- Authentication is fixed to `Authorization: Bearer <key>` to match the LLMX CLI expectations.
 
 For Azure, for example (ensure your deployment accepts `Authorization: Bearer <key>`):
 
 ```shell
-printenv AZURE_OPENAI_API_KEY | env -u AZURE_OPENAI_API_KEY codex-responses-api-proxy \
+printenv AZURE_OPENAI_API_KEY | env -u AZURE_OPENAI_API_KEY llmx-responses-api-proxy \
   --http-shutdown \
   --server-info /tmp/server-info.json \
   --upstream-url "https://YOUR_PROJECT_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT/responses?api-version=2025-04-01-preview"
@@ -67,7 +67,7 @@ printenv AZURE_OPENAI_API_KEY | env -u AZURE_OPENAI_API_KEY codex-responses-api-
 
 Care is taken to restrict access/copying to the value of `OPENAI_API_KEY` retained in memory:
 
-- We leverage [`codex_process_hardening`](https://github.com/openai/codex/blob/main/codex-rs/process-hardening/README.md) so `codex-responses-api-proxy` is run with standard process-hardening techniques.
+- We leverage [`codex_process_hardening`](https://github.com/valknar/llmx/blob/main/llmx-rs/process-hardening/README.md) so `llmx-responses-api-proxy` is run with standard process-hardening techniques.
 - At startup, we allocate a `1024` byte buffer on the stack and copy `"Bearer "` into the start of the buffer.
 - We then read from `stdin`, copying the contents into the buffer after `"Bearer "`.
 - After verifying the key matches `/^[a-zA-Z0-9_-]+$/` (and does not exceed the buffer), we create a `String` from that buffer (so the data is now on the heap).
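Given the `/^[a-zA-Z0-9_-]+$/` validation described above, a caller could fail fast before piping the key in. A hypothetical guard, not part of the proxy itself:

```shell
# Pre-check mirroring the proxy's key-format validation (assumption:
# keys outside [a-zA-Z0-9_-] are rejected, per the README above).
if ! printenv OPENAI_API_KEY | grep -Eq '^[A-Za-z0-9_-]+$'; then
  echo "OPENAI_API_KEY contains characters the proxy will reject" >&2
  exit 1
fi
```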
@@ -1,13 +1,13 @@
-# @openai/codex-responses-api-proxy
+# @llmx/llmx-responses-api-proxy
 
-<p align="center"><code>npm i -g @openai/codex-responses-api-proxy</code> to install <code>codex-responses-api-proxy</code></p>
+<p align="center"><code>npm i -g @llmx/llmx-responses-api-proxy</code> to install <code>llmx-responses-api-proxy</code></p>
 
-This package distributes the prebuilt [Codex Responses API proxy binary](https://github.com/openai/codex/tree/main/codex-rs/responses-api-proxy) for macOS, Linux, and Windows.
+This package distributes the prebuilt [LLMX Responses API proxy binary](https://github.com/valknar/llmx/tree/main/llmx-rs/responses-api-proxy) for macOS, Linux, and Windows.
 
 To see available options, run:
 
 ```
-node ./bin/codex-responses-api-proxy.js --help
+node ./bin/llmx-responses-api-proxy.js --help
 ```
 
-Refer to [`codex-rs/responses-api-proxy/README.md`](https://github.com/openai/codex/blob/main/codex-rs/responses-api-proxy/README.md) for detailed documentation.
+Refer to [`llmx-rs/responses-api-proxy/README.md`](https://github.com/valknar/llmx/blob/main/llmx-rs/responses-api-proxy/README.md) for detailed documentation.
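Assuming the renamed package is published to npm under the `@llmx` scope as the README above describes, the end-to-end install-and-run flow would look like:

```shell
# Install the wrapper globally, then confirm the binary resolves.
npm i -g @llmx/llmx-responses-api-proxy
llmx-responses-api-proxy --help
```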