Phase 5: Configuration & Documentation

Updated all documentation and configuration files:

Documentation changes:
- Updated README.md to describe LLMX as LiteLLM-powered fork
- Updated CLAUDE.md with LiteLLM integration details
- Updated 50+ markdown files across docs/, llmx-rs/, llmx-cli/, sdk/
- Changed all references: codex → llmx, Codex → LLMX (see the rename sketch below)
- Updated package references: @openai/codex → @llmx/llmx
- Updated repository URLs: github.com/openai/codex → github.com/valknar/llmx
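A mechanical rename at this scale is normally scripted rather than hand-edited. A minimal sketch of one way to do it, assuming ripgrep and GNU sed are available (the paths and patterns are illustrative, not the exact commands used):

```shell
# Illustrative only: run the specific substitutions before the generic ones.
rg -l -e codex -e Codex --glob '*.md' docs llmx-rs llmx-cli sdk \
  | xargs sed -i \
      -e 's|@openai/codex|@llmx/llmx|g' \
      -e 's|github.com/openai/codex|github.com/valknar/llmx|g' \
      -e 's/Codex/LLMX/g' \
      -e 's/codex/llmx/g'
```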

Configuration changes:
- Updated .github/dependabot.yaml
- Updated .github workflow files
- Updated cliff.toml (changelog configuration)
- Updated Cargo.toml comments

Key branding updates:
- Project description: "coding agent from OpenAI" → "coding agent powered by LiteLLM"
- Added attribution to original OpenAI Codex project
- Documented LiteLLM integration benefits

Files changed: 51 files (559 insertions, 559 deletions)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Sebastian Krüger
Date: 2025-11-11 14:45:40 +01:00
Parent: 0c2c36e14e
Commit: c493ea1347
51 changed files with 559 additions and 559 deletions

View File

@@ -1,18 +1,18 @@
# Containerized Development
We provide the following options to facilitate Codex development in a container. This is particularly useful for verifying the Linux build when working on a macOS host.
We provide the following options to facilitate LLMX development in a container. This is particularly useful for verifying the Linux build when working on a macOS host.
## Docker
To build the Docker image locally for x64 and then run it with the repo mounted under `/workspace`:
```shell
CODEX_DOCKER_IMAGE_NAME=codex-linux-dev
CODEX_DOCKER_IMAGE_NAME=llmx-linux-dev
docker build --platform=linux/amd64 -t "$CODEX_DOCKER_IMAGE_NAME" ./.devcontainer
docker run --platform=linux/amd64 --rm -it -e CARGO_TARGET_DIR=/workspace/codex-rs/target-amd64 -v "$PWD":/workspace -w /workspace/codex-rs "$CODEX_DOCKER_IMAGE_NAME"
docker run --platform=linux/amd64 --rm -it -e CARGO_TARGET_DIR=/workspace/llmx-rs/target-amd64 -v "$PWD":/workspace -w /workspace/llmx-rs "$CODEX_DOCKER_IMAGE_NAME"
```
Note that `/workspace/target` will contain the binaries built for your host platform, so we include `-e CARGO_TARGET_DIR=/workspace/codex-rs/target-amd64` in the `docker run` command so that the binaries built inside your container are written to a separate directory.
Note that `/workspace/target` will contain the binaries built for your host platform, so we include `-e CARGO_TARGET_DIR=/workspace/llmx-rs/target-amd64` in the `docker run` command so that the binaries built inside your container are written to a separate directory.
For arm64, specify `--platform=linux/arm64` instead for both `docker build` and `docker run`.
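For reference, a hypothetical arm64 run mirroring the x64 commands above (same image-name variable, separate target directory so the two platforms' artifacts don't collide):

```shell
CODEX_DOCKER_IMAGE_NAME=llmx-linux-dev
docker build --platform=linux/arm64 -t "$CODEX_DOCKER_IMAGE_NAME" ./.devcontainer
docker run --platform=linux/arm64 --rm -it -e CARGO_TARGET_DIR=/workspace/llmx-rs/target-arm64 -v "$PWD":/workspace -w /workspace/llmx-rs "$CODEX_DOCKER_IMAGE_NAME"
```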
@@ -20,7 +20,7 @@ Currently, the `Dockerfile` works for both x64 and arm64 Linux, though you need
## VS Code
VS Code recognizes the `devcontainer.json` file and gives you the option to develop Codex in a container. Currently, `devcontainer.json` builds and runs the `arm64` flavor of the container.
VS Code recognizes the `devcontainer.json` file and gives you the option to develop LLMX in a container. Currently, `devcontainer.json` builds and runs the `arm64` flavor of the container.
From the integrated terminal in VS Code, you can build either flavor of the `arm64` build (GNU or musl):

View File

@@ -15,8 +15,8 @@ Things to look out for when doing the review:
## Code Organization
- Each crate in the Cargo workspace in `codex-rs` has a specific purpose: make a note if you believe new code is not introduced in the correct crate.
- When possible, try to keep the `core` crate as small as possible. Non-core but shared logic is often a good candidate for `codex-rs/common`.
- Each crate in the Cargo workspace in `llmx-rs` has a specific purpose: make a note if you believe new code is not introduced in the correct crate.
- When possible, try to keep the `core` crate as small as possible. Non-core but shared logic is often a good candidate for `llmx-rs/common`.
- Be wary of large files and offer suggestions for how to break things into more reasonably-sized files.
- Rust files should generally be organized such that the public parts of the API appear near the top of the file and helper functions go below. This is analogous to the "inverted pyramid" structure that is favored in journalism.
@@ -131,7 +131,7 @@ fn test_get_latest_messages() {
## Pull Request Body
- If the nature of the change seems to have a visual component (which is often the case for changes to `codex-rs/tui`), recommend including a screenshot or video to demonstrate the change, if appropriate.
- If the nature of the change seems to have a visual component (which is often the case for changes to `llmx-rs/tui`), recommend including a screenshot or video to demonstrate the change, if appropriate.
- References to existing GitHub issues and PRs are encouraged, where appropriate, though you likely do not have network access, so may not be able to help here.
# PR Information

View File

@@ -3,13 +3,13 @@
version: 2
updates:
- package-ecosystem: bun
directory: .github/actions/codex
directory: .github/actions/llmx
schedule:
interval: weekly
- package-ecosystem: cargo
directories:
- codex-rs
- codex-rs/*
- llmx-rs
- llmx-rs/*
schedule:
interval: weekly
- package-ecosystem: devcontainers
@@ -17,7 +17,7 @@ updates:
schedule:
interval: weekly
- package-ecosystem: docker
directory: codex-cli
directory: llmx-cli
schedule:
interval: weekly
- package-ecosystem: github-actions
@@ -25,6 +25,6 @@ updates:
schedule:
interval: weekly
- package-ecosystem: rust-toolchain
directory: codex-rs
directory: llmx-rs
schedule:
interval: weekly

View File

@@ -1,7 +1,7 @@
# External (non-OpenAI) Pull Request Requirements
Before opening this Pull Request, please read the dedicated "Contributing" markdown file or your PR may be closed:
https://github.com/openai/codex/blob/main/docs/contributing.md
https://github.com/valknar/llmx/blob/main/docs/contributing.md
If your PR conforms to our contribution guidelines, replace this text with a detailed and high quality description of your changes.

View File

@@ -1,8 +1,8 @@
# Rust/codex-rs
# Rust/llmx-rs
In the codex-rs folder where the Rust code lives:
In the llmx-rs folder where the Rust code lives:
- Crate names are prefixed with `codex-`. For example, the `core` folder's crate is named `codex-core`
- Crate names are prefixed with `llmx-`. For example, the `core` folder's crate is named `llmx-core`
- When using format! and you can inline variables into {}, always do that.
- Install any commands the repo relies on (for example `just`, `rg`, or `cargo-insta`) if they aren't already available before running instructions here.
- Never add or modify any code related to `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` or `CODEX_SANDBOX_ENV_VAR`.
@@ -15,15 +15,15 @@ In the codex-rs folder where the rust code lives:
- When writing tests, prefer comparing the equality of entire objects over fields one by one.
- When making a change that adds or changes an API, ensure that the documentation in the `docs/` folder is up to date if applicable.
Run `just fmt` (in `codex-rs` directory) automatically after making Rust code changes; do not ask for approval to run it. Before finalizing a change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspace-wide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Additionally, run the tests:
Run `just fmt` (in `llmx-rs` directory) automatically after making Rust code changes; do not ask for approval to run it. Before finalizing a change to `llmx-rs`, run `just fix -p <project>` (in `llmx-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspace-wide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Additionally, run the tests:
1. Run the test for the specific project that was changed. For example, if changes were made in `codex-rs/tui`, run `cargo test -p codex-tui`.
1. Run the test for the specific project that was changed. For example, if changes were made in `llmx-rs/tui`, run `cargo test -p llmx-tui`.
2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`.
When running interactively, ask the user before running `just fix` to finalize. `just fmt` does not require approval. Project-specific or individual tests can be run without asking the user, but do ask the user before running the complete test suite.
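Putting that together, a typical finalize pass for a change scoped to the TUI crate might look like this (crate name is illustrative):

```shell
cd llmx-rs
just fmt                 # formatting; no approval needed
just fix -p llmx-tui     # scoped lint fixes for the changed crate
cargo test -p llmx-tui   # project-specific tests first
# Only if common, core, or protocol changed:
cargo test --all-features
```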
## TUI style conventions
See `codex-rs/tui/styles.md`.
See `llmx-rs/tui/styles.md`.
## TUI code conventions
@@ -57,16 +57,16 @@ See `codex-rs/tui/styles.md`.
### Snapshot tests
This repo uses snapshot tests (via `insta`), especially in `codex-rs/tui`, to validate rendered output. When UI or text output changes intentionally, update the snapshots as follows:
This repo uses snapshot tests (via `insta`), especially in `llmx-rs/tui`, to validate rendered output. When UI or text output changes intentionally, update the snapshots as follows:
- Run tests to generate any updated snapshots:
- `cargo test -p codex-tui`
- `cargo test -p llmx-tui`
- Check what's pending:
- `cargo insta pending-snapshots -p codex-tui`
- `cargo insta pending-snapshots -p llmx-tui`
- Review changes by reading the generated `*.snap.new` files directly in the repo, or preview a specific file:
- `cargo insta show -p codex-tui path/to/file.snap.new`
- `cargo insta show -p llmx-tui path/to/file.snap.new`
- Only if you intend to accept all new snapshots in this crate, run:
- `cargo insta accept -p codex-tui`
- `cargo insta accept -p llmx-tui`
If you don't have the tool:
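(The installation step itself is elided by this hunk; `insta`'s companion CLI is conventionally installed via cargo:)

```shell
cargo install cargo-insta
```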
@@ -78,7 +78,7 @@ If you dont have the tool:
### Integration tests (core)
- Prefer the utilities in `core_test_support::responses` when writing end-to-end Codex tests.
- Prefer the utilities in `core_test_support::responses` when writing end-to-end LLMX tests.
- All `mount_sse*` helpers return a `ResponseMock`; hold onto it so you can assert against outbound `/responses` POST bodies.
- Use `ResponseMock::single_request()` when a test should only issue one POST, or `ResponseMock::requests()` to inspect every captured `ResponsesRequest`.
@@ -95,7 +95,7 @@ If you dont have the tool:
responses::ev_completed("resp-1"),
])).await;
codex.submit(Op::UserTurn { ... }).await?;
llmx.submit(Op::UserTurn { ... }).await?;
// Assert request body if needed.
let request = mock.single_request();

View File

@@ -1 +1 @@
The changelog can be found on the [releases page](https://github.com/openai/codex/releases).
The changelog can be found on the [releases page](https://github.com/valknar/llmx/releases).

PNPM.md
View File

@@ -35,19 +35,19 @@ corepack prepare pnpm@10.8.1 --activate
| Action | Command |
| ------------------------------------------ | ---------------------------------------- |
| Run a command in a specific package | `pnpm --filter @openai/codex run build` |
| Install a dependency in a specific package | `pnpm --filter @openai/codex add lodash` |
| Run a command in a specific package | `pnpm --filter @llmx/llmx run build` |
| Install a dependency in a specific package | `pnpm --filter @llmx/llmx add lodash` |
| Run a command in all packages | `pnpm -r run test` |
## Monorepo structure
```
codex/
llmx/
├── pnpm-workspace.yaml # Workspace configuration
├── .npmrc # pnpm configuration
├── package.json # Root dependencies and scripts
├── codex-cli/ # Main package
│ └── package.json # codex-cli specific dependencies
├── llmx-cli/ # Main package
│ └── package.json # llmx-cli specific dependencies
└── docs/ # Documentation (future package)
```

View File

@@ -1,73 +1,73 @@
<p align="center"><code>npm i -g @openai/codex</code><br />or <code>brew install --cask codex</code></p>
<p align="center"><code>npm i -g @llmx/llmx</code><br />or <code>brew install --cask llmx</code></p>
<p align="center"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.
<p align="center"><strong>LLMX CLI</strong> is a coding agent powered by LiteLLM that runs locally on your computer.
</br>
</br>If you want Codex in your code editor (VS Code, Cursor, Windsurf), <a href="https://developers.openai.com/codex/ide">install in your IDE</a>
</br>If you are looking for the <em>cloud-based agent</em> from OpenAI, <strong>Codex Web</strong>, go to <a href="https://chatgpt.com/codex">chatgpt.com/codex</a></p>
</br>This project is a community fork with enhanced support for multiple LLM providers via LiteLLM.
</br>Original project: <a href="https://github.com/openai/codex">github.com/openai/codex</a></p>
<p align="center">
<img src="./.github/codex-cli-splash.png" alt="Codex CLI splash" width="80%" />
<img src="./.github/llmx-cli-splash.png" alt="LLMX CLI splash" width="80%" />
</p>
---
## Quickstart
### Installing and running Codex CLI
### Installing and running LLMX CLI
Install globally with your preferred package manager. If you use npm:
```shell
npm install -g @openai/codex
npm install -g @llmx/llmx
```
Alternatively, if you use Homebrew:
```shell
brew install --cask codex
brew install --cask llmx
```
Then simply run `codex` to get started:
Then simply run `llmx` to get started:
```shell
codex
llmx
```
If you're running into upgrade issues with Homebrew, see the [FAQ entry on brew upgrade codex](./docs/faq.md#brew-upgrade-codex-isnt-upgrading-me).
If you're running into upgrade issues with Homebrew, see the [FAQ entry on brew upgrade llmx](./docs/faq.md#brew-upgrade-llmx-isnt-upgrading-me).
<details>
<summary>You can also go to the <a href="https://github.com/openai/codex/releases/latest">latest GitHub Release</a> and download the appropriate binary for your platform.</summary>
<summary>You can also go to the <a href="https://github.com/valknar/llmx/releases/latest">latest GitHub Release</a> and download the appropriate binary for your platform.</summary>
Each GitHub Release contains many executables, but in practice, you likely want one of these:
- macOS
- Apple Silicon/arm64: `codex-aarch64-apple-darwin.tar.gz`
- x86_64 (older Mac hardware): `codex-x86_64-apple-darwin.tar.gz`
- Apple Silicon/arm64: `llmx-aarch64-apple-darwin.tar.gz`
- x86_64 (older Mac hardware): `llmx-x86_64-apple-darwin.tar.gz`
- Linux
- x86_64: `codex-x86_64-unknown-linux-musl.tar.gz`
- arm64: `codex-aarch64-unknown-linux-musl.tar.gz`
- x86_64: `llmx-x86_64-unknown-linux-musl.tar.gz`
- arm64: `llmx-aarch64-unknown-linux-musl.tar.gz`
Each archive contains a single entry with the platform baked into the name (e.g., `codex-x86_64-unknown-linux-musl`), so you likely want to rename it to `codex` after extracting it.
Each archive contains a single entry with the platform baked into the name (e.g., `llmx-x86_64-unknown-linux-musl`), so you likely want to rename it to `llmx` after extracting it.
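For example, a hypothetical install on Apple Silicon, using the archive name from the list above (the destination directory is your choice):

```shell
tar -xzf llmx-aarch64-apple-darwin.tar.gz
mv llmx-aarch64-apple-darwin llmx
chmod +x llmx
sudo mv llmx /usr/local/bin/
```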
</details>
### Using Codex with your ChatGPT plan
### Using LLMX with your ChatGPT plan
<p align="center">
<img src="./.github/codex-cli-login.png" alt="Codex CLI login" width="80%" />
<img src="./.github/llmx-cli-login.png" alt="LLMX CLI login" width="80%" />
</p>
Run `codex` and select **Sign in with ChatGPT**. We recommend signing into your ChatGPT account to use Codex as part of your Plus, Pro, Team, Edu, or Enterprise plan. [Learn more about what's included in your ChatGPT plan](https://help.openai.com/en/articles/11369540-codex-in-chatgpt).
Run `llmx` and select **Sign in with ChatGPT**. We recommend signing into your ChatGPT account to use LLMX as part of your Plus, Pro, Team, Edu, or Enterprise plan. [Learn more about what's included in your ChatGPT plan](https://help.openai.com/en/articles/11369540-llmx-in-chatgpt).
You can also use Codex with an API key, but this requires [additional setup](./docs/authentication.md#usage-based-billing-alternative-use-an-openai-api-key). If you previously used an API key for usage-based billing, see the [migration steps](./docs/authentication.md#migrating-from-usage-based-billing-api-key). If you're having trouble with login, please comment on [this issue](https://github.com/openai/codex/issues/1243).
You can also use LLMX with an API key, but this requires [additional setup](./docs/authentication.md#usage-based-billing-alternative-use-an-openai-api-key). If you previously used an API key for usage-based billing, see the [migration steps](./docs/authentication.md#migrating-from-usage-based-billing-api-key). If you're having trouble with login, please comment on [this issue](https://github.com/valknar/llmx/issues/1243).
### Model Context Protocol (MCP)
Codex can access MCP servers. To configure them, refer to the [config docs](./docs/config.md#mcp_servers).
LLMX can access MCP servers. To configure them, refer to the [config docs](./docs/config.md#mcp_servers).
### Configuration
Codex CLI supports a rich set of configuration options, with preferences stored in `~/.codex/config.toml`. For full configuration options, see [Configuration](./docs/config.md).
LLMX CLI supports a rich set of configuration options, with preferences stored in `~/.llmx/config.toml`. For full configuration options, see [Configuration](./docs/config.md).
---
@@ -86,10 +86,10 @@ Codex CLI supports a rich set of configuration options, with preferences stored
- [**Authentication**](./docs/authentication.md)
- [Auth methods](./docs/authentication.md#forcing-a-specific-auth-method-advanced)
- [Login on a "Headless" machine](./docs/authentication.md#connecting-on-a-headless-machine)
- **Automating Codex**
- [GitHub Action](https://github.com/openai/codex-action)
- **Automating LLMX**
- [GitHub Action](https://github.com/valknar/llmx-action)
- [TypeScript SDK](./sdk/typescript/README.md)
- [Non-interactive mode (`codex exec`)](./docs/exec.md)
- [Non-interactive mode (`llmx exec`)](./docs/exec.md)
- [**Advanced**](./docs/advanced.md)
- [Tracing / verbose logging](./docs/advanced.md#tracing--verbose-logging)
- [Model Context Protocol (MCP)](./docs/advanced.md#model-context-protocol-mcp)

View File

@@ -4,7 +4,7 @@
header = """
# Changelog
You can install any of these versions: `npm install -g @openai/codex@<version>`
You can install any of these versions: `npm install -g @llmx/llmx@<version>`
"""
body = """

View File

@@ -4,7 +4,7 @@ _Based on the Apache Software Foundation Individual CLA v 2.2._
By commenting **“I have read the CLA Document and I hereby sign the CLA”**
on a Pull Request, **you (“Contributor”) agree to the following terms** for any
past and future “Contributions” submitted to the **OpenAI Codex CLI project
past and future “Contributions” submitted to the **OpenAI LLMX CLI project
(the “Project”)**.
---

View File

@@ -1,6 +1,6 @@
## Advanced
If you already lean on Codex every day and just need a little more control, this page collects the knobs you are most likely to reach for: tweak defaults in [Config](./config.md), add extra tools through [Model Context Protocol support](#model-context-protocol), and script full runs with [`codex exec`](./exec.md). Jump to the section you need and keep building.
If you already lean on LLMX every day and just need a little more control, this page collects the knobs you are most likely to reach for: tweak defaults in [Config](./config.md), add extra tools through [Model Context Protocol support](#model-context-protocol), and script full runs with [`llmx exec`](./exec.md). Jump to the section you need and keep building.
## Config quickstart {#config-quickstart}
@@ -8,62 +8,62 @@ Most day-to-day tuning lives in `config.toml`: set approval + sandbox presets, p
## Tracing / verbose logging {#tracing-verbose-logging}
Because Codex is written in Rust, it honors the `RUST_LOG` environment variable to configure its logging behavior.
Because LLMX is written in Rust, it honors the `RUST_LOG` environment variable to configure its logging behavior.
The TUI defaults to `RUST_LOG=codex_core=info,codex_tui=info,codex_rmcp_client=info` and log messages are written to `~/.codex/log/codex-tui.log`, so you can leave the following running in a separate terminal to monitor log messages as they are written:
The TUI defaults to `RUST_LOG=codex_core=info,codex_tui=info,codex_rmcp_client=info` and log messages are written to `~/.llmx/log/llmx-tui.log`, so you can leave the following running in a separate terminal to monitor log messages as they are written:
```bash
tail -F ~/.codex/log/codex-tui.log
tail -F ~/.llmx/log/llmx-tui.log
```
By comparison, the non-interactive mode (`codex exec`) defaults to `RUST_LOG=error`, but messages are printed inline, so there is no need to monitor a separate file.
By comparison, the non-interactive mode (`llmx exec`) defaults to `RUST_LOG=error`, but messages are printed inline, so there is no need to monitor a separate file.
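For a one-off run with more detail, you can override that default verbosity inline (hypothetical prompt):

```shell
RUST_LOG=debug llmx exec "summarize the TODOs in this repo"
```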
See the Rust documentation on [`RUST_LOG`](https://docs.rs/env_logger/latest/env_logger/#enabling-logging) for more information on the configuration options.
## Model Context Protocol (MCP) {#model-context-protocol}
The Codex CLI and IDE extension is an MCP client, which means that it can be configured to connect to MCP servers. For more information, refer to the [`config docs`](./config.md#mcp-integration).
The LLMX CLI and IDE extension is an MCP client, which means that it can be configured to connect to MCP servers. For more information, refer to the [`config docs`](./config.md#mcp-integration).
## Using Codex as an MCP Server {#mcp-server}
## Using LLMX as an MCP Server {#mcp-server}
The Codex CLI can also be run as an MCP _server_ via `codex mcp-server`. For example, you can use `codex mcp-server` to make Codex available as a tool inside of a multi-agent framework like the OpenAI [Agents SDK](https://platform.openai.com/docs/guides/agents). Use `codex mcp` separately to add/list/get/remove MCP server launchers in your configuration.
The LLMX CLI can also be run as an MCP _server_ via `llmx mcp-server`. For example, you can use `llmx mcp-server` to make LLMX available as a tool inside of a multi-agent framework like the OpenAI [Agents SDK](https://platform.openai.com/docs/guides/agents). Use `llmx mcp` separately to add/list/get/remove MCP server launchers in your configuration.
### Codex MCP Server Quickstart {#mcp-server-quickstart}
### LLMX MCP Server Quickstart {#mcp-server-quickstart}
You can launch a Codex MCP server with the [Model Context Protocol Inspector](https://modelcontextprotocol.io/legacy/tools/inspector):
You can launch an LLMX MCP server with the [Model Context Protocol Inspector](https://modelcontextprotocol.io/legacy/tools/inspector):
```bash
npx @modelcontextprotocol/inspector codex mcp-server
npx @modelcontextprotocol/inspector llmx mcp-server
```
Send a `tools/list` request and you will see that there are two tools available:
**`codex`** - Run a Codex session. Accepts configuration parameters matching the Codex Config struct. The `codex` tool takes the following properties:
**`llmx`** - Run an LLMX session. Accepts configuration parameters matching the LLMX Config struct. The `llmx` tool takes the following properties:
| Property | Type | Description |
| ----------------------- | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **`prompt`** (required) | string | The initial user prompt to start the Codex conversation. |
| **`prompt`** (required) | string | The initial user prompt to start the LLMX conversation. |
| `approval-policy` | string | Approval policy for shell commands generated by the model: `untrusted`, `on-failure`, `on-request`, `never`. |
| `base-instructions` | string | The set of instructions to use instead of the default ones. |
| `config` | object | Individual [config settings](https://github.com/openai/codex/blob/main/docs/config.md#config) that will override what is in `$CODEX_HOME/config.toml`. |
| `config` | object | Individual [config settings](https://github.com/valknar/llmx/blob/main/docs/config.md#config) that will override what is in `$CODEX_HOME/config.toml`. |
| `cwd` | string | Working directory for the session. If relative, resolved against the server process's current directory. |
| `model` | string | Optional override for the model name (e.g. `o3`, `o4-mini`). |
| `profile` | string | Configuration profile from `config.toml` to specify default options. |
| `sandbox` | string | Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`. |
**`codex-reply`** - Continue a Codex session by providing the conversation id and prompt. The `codex-reply` tool takes the following properties:
**`llmx-reply`** - Continue an LLMX session by providing the conversation id and prompt. The `llmx-reply` tool takes the following properties:
| Property | Type | Description |
| ------------------------------- | ------ | -------------------------------------------------------- |
| **`prompt`** (required) | string | The next user prompt to continue the Codex conversation. |
| **`prompt`** (required) | string | The next user prompt to continue the LLMX conversation. |
| **`conversationId`** (required) | string | The id of the conversation to continue. |
### Trying it Out {#mcp-server-trying-it-out}
> [!TIP]
> Codex often takes a few minutes to run. To accommodate this, adjust the MCP inspector's Request and Total timeouts to 600000ms (10 minutes) under ⛭ Configuration.
> LLMX often takes a few minutes to run. To accommodate this, adjust the MCP inspector's Request and Total timeouts to 600000ms (10 minutes) under ⛭ Configuration.
Use the MCP inspector and `codex mcp-server` to build a simple tic-tac-toe game with the following settings:
Use the MCP inspector and `llmx mcp-server` to build a simple tic-tac-toe game with the following settings:
**approval-policy:** never
@@ -71,4 +71,4 @@ Use the MCP inspector and `codex mcp-server` to build a simple tic-tac-toe game
**sandbox:** workspace-write
Click "Run Tool" and you should see a list of events emitted from the Codex MCP server as it builds the game.
Click "Run Tool" and you should see a list of events emitted from the LLMX MCP server as it builds the game.

View File

@@ -1,38 +1,38 @@
# AGENTS.md Discovery
Codex uses [`AGENTS.md`](https://agents.md/) files to gather helpful guidance before it starts assisting you. This page explains how those files are discovered and combined, so you can decide where to place your instructions.
LLMX uses [`AGENTS.md`](https://agents.md/) files to gather helpful guidance before it starts assisting you. This page explains how those files are discovered and combined, so you can decide where to place your instructions.
## Global Instructions (`~/.codex`)
## Global Instructions (`~/.llmx`)
- Codex looks for global guidance in your Codex home directory (usually `~/.codex`; set `CODEX_HOME` to change it). For a quick overview, see the [Memory with AGENTS.md section](../docs/getting-started.md#memory-with-agentsmd) in the getting started guide.
- If an `AGENTS.override.md` file exists there, it takes priority. If not, Codex falls back to `AGENTS.md`.
- Only the first non-empty file is used. Other filenames, such as `instructions.md`, have no effect unless Codex is specifically instructed to use them.
- Whatever Codex finds here stays active for the whole session, and Codex combines it with any project-specific instructions it discovers.
- LLMX looks for global guidance in your LLMX home directory (usually `~/.llmx`; set `CODEX_HOME` to change it). For a quick overview, see the [Memory with AGENTS.md section](../docs/getting-started.md#memory-with-agentsmd) in the getting started guide.
- If an `AGENTS.override.md` file exists there, it takes priority. If not, LLMX falls back to `AGENTS.md`.
- Only the first non-empty file is used. Other filenames, such as `instructions.md`, have no effect unless LLMX is specifically instructed to use them.
- Whatever LLMX finds here stays active for the whole session, and LLMX combines it with any project-specific instructions it discovers.
## Project Instructions (per-repository)
When you work inside a project, Codex builds on those global instructions by collecting project docs:
When you work inside a project, LLMX builds on those global instructions by collecting project docs:
- The search starts at the repository root and continues down to your current directory. If a Git root is not found, only the current directory is checked.
- In each directory along that path, Codex looks for `AGENTS.override.md` first, then `AGENTS.md`, and then any fallback names listed in your Codex configuration (see [`project_doc_fallback_filenames`](../docs/config.md#project_doc_fallback_filenames)). At most one file per directory is included.
- In each directory along that path, LLMX looks for `AGENTS.override.md` first, then `AGENTS.md`, and then any fallback names listed in your LLMX configuration (see [`project_doc_fallback_filenames`](../docs/config.md#project_doc_fallback_filenames)). At most one file per directory is included.
- Files are read in order from root to leaf and joined together with blank lines. Empty files are skipped, and very large files are truncated once the combined size reaches 32KiB (the default [`project_doc_max_bytes`](../docs/config.md#project_doc_max_bytes) limit). If you need more space, split guidance across nested directories or raise the limit in your configuration.
## How They Come Together
Before Codex gets to work, the instructions are ingested in precedence order: global guidance from `~/.codex` comes first, then each project doc from the repository root down to your current directory. Guidance in deeper directories overrides earlier layers, so the most specific file controls the final behavior.
Before LLMX gets to work, the instructions are ingested in precedence order: global guidance from `~/.llmx` comes first, then each project doc from the repository root down to your current directory. Guidance in deeper directories overrides earlier layers, so the most specific file controls the final behavior.
### Priority Summary
1. Global `AGENTS.override.md` (if present), otherwise global `AGENTS.md`.
2. For each directory from the repository root to your working directory: `AGENTS.override.md`, then `AGENTS.md`, then configured fallback names.
Only these filenames are considered. To use a different name, add it to the fallback list in your Codex configuration or rename the file accordingly.
Only these filenames are considered. To use a different name, add it to the fallback list in your LLMX configuration or rename the file accordingly.
## Fallback Filenames
Codex can look for additional instruction filenames beyond the two defaults if you add them to `project_doc_fallback_filenames` in your Codex configuration. Each fallback is checked after `AGENTS.override.md` and `AGENTS.md` in every directory along the search path.
LLMX can look for additional instruction filenames beyond the two defaults if you add them to `project_doc_fallback_filenames` in your LLMX configuration. Each fallback is checked after `AGENTS.override.md` and `AGENTS.md` in every directory along the search path.
Example: suppose your configuration lists `["TEAM_GUIDE.md", ".agents.md"]`. Inside each directory Codex will look in this order:
Example: suppose your configuration lists `["TEAM_GUIDE.md", ".agents.md"]`. Inside each directory LLMX will look in this order:
1. `AGENTS.override.md`
2. `AGENTS.md`
@@ -41,7 +41,7 @@ Example: suppose your configuration lists `["TEAM_GUIDE.md", ".agents.md"]`. Ins
If the repository root contains `TEAM_GUIDE.md` and the `backend/` directory contains `AGENTS.override.md`, the overall instructions will combine the root `TEAM_GUIDE.md` (because no override or default file was present there) with the `backend/AGENTS.override.md` file (which takes precedence over the fallback names).
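A small shell sketch of that per-directory lookup, using the hypothetical layout from the example above:

```shell
# For each directory from root to leaf, the first matching name wins.
for dir in . backend; do
  for name in AGENTS.override.md AGENTS.md TEAM_GUIDE.md .agents.md; do
    if [ -f "$dir/$name" ]; then
      echo "$dir uses $name"
      break
    fi
  done
done
```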
You can configure those fallbacks in `~/.codex/config.toml` (or another profile) like this:
You can configure those fallbacks in `~/.llmx/config.toml` (or another profile) like this:
```toml
project_doc_fallback_filenames = ["TEAM_GUIDE.md", ".agents.md"]
```

View File

@@ -5,13 +5,13 @@
If you prefer to pay-as-you-go, you can still authenticate with your OpenAI API key:
```shell
printenv OPENAI_API_KEY | codex login --with-api-key
printenv OPENAI_API_KEY | llmx login --with-api-key
```
Alternatively, read from a file:
```shell
codex login --with-api-key < my_key.txt
llmx login --with-api-key < my_key.txt
```
The legacy `--api-key` flag now exits with an error instructing you to use `--with-api-key` so that the key never appears in shell history or process listings.
@@ -20,11 +20,11 @@ This key must, at minimum, have write access to the Responses API.
## Migrating to ChatGPT login from API key
If you've used the Codex CLI before with usage-based billing via an API key and want to switch to using your ChatGPT plan, follow these steps:
If you've used the LLMX CLI before with usage-based billing via an API key and want to switch to using your ChatGPT plan, follow these steps:
1. Update the CLI and ensure `codex --version` is `0.20.0` or later
2. Delete `~/.codex/auth.json` (on Windows: `C:\\Users\\USERNAME\\.codex\\auth.json`)
3. Run `codex login` again
1. Update the CLI and ensure `llmx --version` is `0.20.0` or later
2. Delete `~/.llmx/auth.json` (on Windows: `C:\\Users\\USERNAME\\.llmx\\auth.json`)
3. Run `llmx login` again
## Connecting on a "Headless" Machine
@@ -32,37 +32,37 @@ Today, the login process entails running a server on `localhost:1455`. If you ar
### Authenticate locally and copy your credentials to the "headless" machine
The easiest solution is likely to run through the `codex login` process on your local machine such that `localhost:1455` _is_ accessible in your web browser. When you complete the authentication process, an `auth.json` file should be available at `$CODEX_HOME/auth.json` (on Mac/Linux, `$CODEX_HOME` defaults to `~/.codex` whereas on Windows, it defaults to `%USERPROFILE%\\.codex`).
The easiest solution is likely to run through the `llmx login` process on your local machine such that `localhost:1455` _is_ accessible in your web browser. When you complete the authentication process, an `auth.json` file should be available at `$CODEX_HOME/auth.json` (on Mac/Linux, `$CODEX_HOME` defaults to `~/.llmx` whereas on Windows, it defaults to `%USERPROFILE%\\.llmx`).
Because the `auth.json` file is not tied to a specific host, once you complete the authentication flow locally, you can copy the `$CODEX_HOME/auth.json` file to the headless machine and then `codex` should "just work" on that machine. Note that to copy a file to a Docker container, you can do:
Because the `auth.json` file is not tied to a specific host, once you complete the authentication flow locally, you can copy the `$CODEX_HOME/auth.json` file to the headless machine and then `llmx` should "just work" on that machine. Note that to copy a file to a Docker container, you can do:
```shell
# substitute MY_CONTAINER with the name or id of your Docker container:
CONTAINER_HOME=$(docker exec MY_CONTAINER printenv HOME)
docker exec MY_CONTAINER mkdir -p "$CONTAINER_HOME/.codex"
docker cp auth.json MY_CONTAINER:"$CONTAINER_HOME/.codex/auth.json"
docker exec MY_CONTAINER mkdir -p "$CONTAINER_HOME/.llmx"
docker cp auth.json MY_CONTAINER:"$CONTAINER_HOME/.llmx/auth.json"
```
whereas if you are `ssh`'d into a remote machine, you likely want to use [`scp`](https://en.wikipedia.org/wiki/Secure_copy_protocol):
```shell
ssh user@remote 'mkdir -p ~/.codex'
scp ~/.codex/auth.json user@remote:~/.codex/auth.json
ssh user@remote 'mkdir -p ~/.llmx'
scp ~/.llmx/auth.json user@remote:~/.llmx/auth.json
```
or try this one-liner:
```shell
ssh user@remote 'mkdir -p ~/.codex && cat > ~/.codex/auth.json' < ~/.codex/auth.json
ssh user@remote 'mkdir -p ~/.llmx && cat > ~/.llmx/auth.json' < ~/.llmx/auth.json
```
### Connecting through VPS or remote
If you run Codex on a remote machine (VPS/server) without a local browser, the login helper starts a server on `localhost:1455` on the remote host. To complete login in your local browser, forward that port to your machine before starting the login flow:
If you run LLMX on a remote machine (VPS/server) without a local browser, the login helper starts a server on `localhost:1455` on the remote host. To complete login in your local browser, forward that port to your machine before starting the login flow:
```bash
# From your local machine
ssh -L 1455:localhost:1455 <user>@<remote-host>
```
Then, in that SSH session, run `codex` and select "Sign in with ChatGPT". When prompted, open the printed URL (it will be `http://localhost:1455/...`) in your local browser. The traffic will be tunneled to the remote server.
Then, in that SSH session, run `llmx` and select "Sign in with ChatGPT". When prompted, open the printed URL (it will be `http://localhost:1455/...`) in your local browser. The traffic will be tunneled to the remote server.

View File

@@ -1,6 +1,6 @@
# Config
Codex configuration gives you fine-grained control over the model, execution environment, and integrations available to the CLI. Use this guide alongside the workflows in [`codex exec`](./exec.md), the guardrails in [Sandbox & approvals](./sandbox.md), and project guidance from [AGENTS.md discovery](./agents_md.md).
LLMX configuration gives you fine-grained control over the model, execution environment, and integrations available to the CLI. Use this guide alongside the workflows in [`llmx exec`](./exec.md), the guardrails in [Sandbox & approvals](./sandbox.md), and project guidance from [AGENTS.md discovery](./agents_md.md).
## Quick navigation
@@ -12,18 +12,18 @@ Codex configuration gives you fine-grained control over the model, execution env
- [Profiles and overrides](#profiles-and-overrides)
- [Reference table](#config-reference)
Codex supports several mechanisms for setting config values:
LLMX supports several mechanisms for setting config values:
- Config-specific command-line flags, such as `--model o3` (highest precedence).
- A generic `-c`/`--config` flag that takes a `key=value` pair, such as `--config model="o3"`.
- The key can contain dots to set a value deeper than the root, e.g. `--config model_providers.openai.wire_api="chat"`.
- For consistency with `config.toml`, values are a string in TOML format rather than JSON format, so use `key='{a = 1, b = 2}'` rather than `key='{"a": 1, "b": 2}'`.
- The quotes around the value are necessary, as without them your shell would split the config argument on spaces, resulting in `codex` receiving `-c key={a` with (invalid) additional arguments `=`, `1,`, `b`, `=`, `2}`.
- The quotes around the value are necessary, as without them your shell would split the config argument on spaces, resulting in `llmx` receiving `-c key={a` with (invalid) additional arguments `=`, `1,`, `b`, `=`, `2}`.
- Values can contain any TOML object, such as `--config shell_environment_policy.include_only='["PATH", "HOME", "USER"]'`.
- If `value` cannot be parsed as a valid TOML value, it is treated as a string value. This means that `-c model='"o3"'` and `-c model=o3` are equivalent.
- In the first case, the value is the TOML string `"o3"`, while in the second the value is `o3`, which is not valid TOML and therefore treated as the TOML string `"o3"`.
- Because quotes are interpreted by one's shell, `-c key="true"` will be correctly interpreted in TOML as `key = true` (a boolean) and not `key = "true"` (a string). If for some reason you needed the string `"true"`, you would need to use `-c key='"true"'` (note the two sets of quotes).
- The `$CODEX_HOME/config.toml` configuration file where the `CODEX_HOME` environment value defaults to `~/.codex`. (Note `CODEX_HOME` will also be where logs and other Codex-related information are stored.)
- The `$CODEX_HOME/config.toml` configuration file where the `CODEX_HOME` environment value defaults to `~/.llmx`. (Note `CODEX_HOME` will also be where logs and other LLMX-related information are stored.)
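A few hypothetical invocations illustrating those parsing rules:

```shell
llmx -c model=o3                    # bare value; treated as the TOML string "o3"
llmx -c model='"o3"'                # explicit TOML string; equivalent to the above
llmx -c key='{a = 1, b = 2}'        # quotes keep the inline table as a single argument
llmx -c 'model_providers.openai.wire_api="chat"'   # dotted key sets a nested value
```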
Both the `--config` flag and the `config.toml` file support the following options:
@@ -61,15 +61,15 @@ Notes:
### model
The model that Codex should use.
The model that LLMX should use.
```toml
model = "gpt-5" # overrides the default ("gpt-5-codex" on macOS/Linux, "gpt-5" on Windows)
model = "gpt-5" # overrides the default ("gpt-5-llmx" on macOS/Linux, "gpt-5" on Windows)
```
### model_providers
This option lets you add to the default set of model providers bundled with Codex. The map key becomes the value you use with `model_provider` to select the provider.
This option lets you add to the default set of model providers bundled with LLMX. The map key becomes the value you use with `model_provider` to select the provider.
> [!NOTE]
> Built-in providers are not overwritten when you reuse their key. Entries you add only take effect when the key is **new**; for example `[model_providers.openai]` leaves the original OpenAI definition untouched. To customize the bundled OpenAI provider, prefer the dedicated knobs (for example the `OPENAI_BASE_URL` environment variable) or register a new provider key and point `model_provider` at it.
@@ -82,13 +82,13 @@ model = "gpt-4o"
model_provider = "openai-chat-completions"
[model_providers.openai-chat-completions]
# Name of the provider that will be displayed in the Codex UI.
# Name of the provider that will be displayed in the LLMX UI.
name = "OpenAI using Chat Completions"
# The path `/chat/completions` will be amended to this URL to make the POST
# request for the chat completions.
base_url = "https://api.openai.com/v1"
# If `env_key` is set, identifies an environment variable that must be set when
# using Codex with this provider. The value of the environment variable must be
# using LLMX with this provider. The value of the environment variable must be
# non-empty and will be used in the `Bearer TOKEN` HTTP header for the POST request.
env_key = "OPENAI_API_KEY"
# Valid values for wire_api are "chat" and "responses". Defaults to "chat" if omitted.
@@ -98,7 +98,7 @@ wire_api = "chat"
query_params = {}
```
Note this makes it possible to use Codex CLI with non-OpenAI models, so long as they use a wire API that is compatible with the OpenAI chat completions API. For example, you could define the following provider to use Codex CLI with Ollama running locally:
Note this makes it possible to use LLMX CLI with non-OpenAI models, so long as they use a wire API that is compatible with the OpenAI chat completions API. For example, you could define the following provider to use LLMX CLI with Ollama running locally:
```toml
[model_providers.ollama]
```
@@ -145,7 +145,7 @@ query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
```
Export your key before launching Codex: `export AZURE_OPENAI_API_KEY=…`
Export your key before launching LLMX: `export AZURE_OPENAI_API_KEY=…`
#### Per-provider network tuning
@@ -166,15 +166,15 @@ stream_idle_timeout_ms = 300000 # 5m idle timeout
##### request_max_retries
How many times Codex will retry a failed HTTP request to the model provider. Defaults to `4`.
How many times LLMX will retry a failed HTTP request to the model provider. Defaults to `4`.
##### stream_max_retries
Number of times Codex will attempt to reconnect when a streaming response is interrupted. Defaults to `5`.
Number of times LLMX will attempt to reconnect when a streaming response is interrupted. Defaults to `5`.
##### stream_idle_timeout_ms
How long Codex will wait for activity on a streaming response before treating the connection as lost. Defaults to `300_000` (5 minutes).
How long LLMX will wait for activity on a streaming response before treating the connection as lost. Defaults to `300_000` (5 minutes).
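Since these knobs live under each provider's entry, they can also be overridden ad hoc; a hypothetical example for a provider keyed `ollama`:

```shell
llmx -c 'model_providers.ollama.request_max_retries=6' \
     -c 'model_providers.ollama.stream_idle_timeout_ms=600000'
```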
### model_provider
@@ -191,7 +191,7 @@ model = "mistral"
### model_reasoning_effort
If the selected model is known to support reasoning (for example: `o3`, `o4-mini`, `codex-*`, `gpt-5`, `gpt-5-codex`), reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#get-started-with-reasoning), this can be set to:
If the selected model is known to support reasoning (for example: `o3`, `o4-mini`, `llmx-*`, `gpt-5`, `gpt-5-llmx`), reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#get-started-with-reasoning), this can be set to:
- `"minimal"`
- `"low"`
@@ -202,7 +202,7 @@ Note: to minimize reasoning, choose `"minimal"`.
### model_reasoning_summary
If the model name starts with `"o"` (as in `"o3"` or `"o4-mini"`) or `"codex"`, reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#reasoning-summaries), this can be set to:
If the model name starts with `"o"` (as in `"o3"` or `"o4-mini"`) or `"llmx"`, reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#reasoning-summaries), this can be set to:
- `"auto"` (default)
- `"concise"`
@@ -222,7 +222,7 @@ Controls output length/detail on GPT5 family models when using the Responses
- `"medium"` (default when omitted)
- `"high"`
When set, Codex includes a `text` object in the request payload with the configured verbosity, for example: `"text": { "verbosity": "low" }`.
When set, LLMX includes a `text` object in the request payload with the configured verbosity, for example: `"text": { "verbosity": "low" }`.
Example:
@@ -245,26 +245,26 @@ model_supports_reasoning_summaries = true
The size of the context window for the model, in tokens.
In general, Codex knows the context window for the most common OpenAI models, but if you are using a new model with an old version of the Codex CLI, then you can use `model_context_window` to tell Codex what value to use to determine how much context is left during a conversation.
In general, LLMX knows the context window for the most common OpenAI models, but if you are using a new model with an old version of the LLMX CLI, then you can use `model_context_window` to tell LLMX what value to use to determine how much context is left during a conversation.
### model_max_output_tokens
This is analogous to `model_context_window`, but for the maximum number of output tokens for the model.
> See also [`codex exec`](./exec.md) to see how these model settings influence non-interactive runs.
> See also [`llmx exec`](./exec.md) to see how these model settings influence non-interactive runs.
## Execution environment
### approval_policy
Determines when the user should be prompted to approve whether Codex can execute a command:
Determines when the user should be prompted to approve whether LLMX can execute a command:
```toml
# Codex has hardcoded logic that defines a set of "trusted" commands.
# Setting the approval_policy to `untrusted` means that Codex will prompt the
# LLMX has hardcoded logic that defines a set of "trusted" commands.
# Setting the approval_policy to `untrusted` means that LLMX will prompt the
# user before running a command not in the "trusted" set.
#
# See https://github.com/openai/codex/issues/1260 for the plan to enable
# See https://github.com/valknar/llmx/issues/1260 for the plan to enable
# end-users to define their own trusted commands.
approval_policy = "untrusted"
```
@@ -272,7 +272,7 @@ approval_policy = "untrusted"
If you want to be notified whenever a command fails, use "on-failure":
```toml
# If the command fails when run in the sandbox, Codex asks for permission to
# If the command fails when run in the sandbox, LLMX asks for permission to
# retry the command outside the sandbox.
approval_policy = "on-failure"
```
@@ -287,14 +287,14 @@ approval_policy = "on-request"
Alternatively, you can have the model run until it is done, and never ask to run a command with escalated permissions:
```toml
# User is never prompted: if the command fails, Codex will automatically try
# User is never prompted: if the command fails, LLMX will automatically try
# something out. Note the `exec` subcommand always uses this mode.
approval_policy = "never"
```
### sandbox_mode
Codex executes model-generated shell commands inside an OS-level sandbox.
LLMX executes model-generated shell commands inside an OS-level sandbox.
In most cases you can pick the desired behaviour with a single option:
@@ -306,9 +306,9 @@ sandbox_mode = "read-only"
The default policy is `read-only`, which means commands can read any file on
disk, but attempts to write a file or access the network will be blocked.
A more relaxed policy is `workspace-write`. When specified, the current working directory for the Codex task will be writable (as well as `$TMPDIR` on macOS). Note that the CLI defaults to using the directory where it was spawned as `cwd`, though this can be overridden using `--cwd/-C`.
A more relaxed policy is `workspace-write`. When specified, the current working directory for the LLMX task will be writable (as well as `$TMPDIR` on macOS). Note that the CLI defaults to using the directory where it was spawned as `cwd`, though this can be overridden using `--cwd/-C`.
On macOS (and soon Linux), all writable roots (including `cwd`) that contain a `.git/` folder _as an immediate child_ will configure the `.git/` folder to be read-only while the rest of the Git repository will be writable. This means that commands like `git commit` will fail, by default (as it entails writing to `.git/`), and will require Codex to ask for permission.
On macOS (and soon Linux), all writable roots (including `cwd`) that contain a `.git/` folder _as an immediate child_ will configure the `.git/` folder to be read-only while the rest of the Git repository will be writable. This means that commands like `git commit` will fail, by default (as it entails writing to `.git/`), and will require LLMX to ask for permission.
```toml
# same as `--sandbox workspace-write`
@@ -316,7 +316,7 @@ sandbox_mode = "workspace-write"
# Extra settings that only apply when `sandbox = "workspace-write"`.
[sandbox_workspace_write]
# By default, the cwd for the Codex session will be writable as well as $TMPDIR
# By default, the cwd for the LLMX session will be writable as well as $TMPDIR
# (if set) and /tmp (if it exists). Setting the respective options to `true`
# will override those defaults.
exclude_tmpdir_env_var = false
@@ -337,9 +337,9 @@ To disable sandboxing altogether, specify `danger-full-access` like so:
sandbox_mode = "danger-full-access"
```
This is reasonable to use if Codex is running in an environment that provides its own sandboxing (such as a Docker container) such that further sandboxing is unnecessary.
This is reasonable to use if LLMX is running in an environment that provides its own sandboxing (such as a Docker container) such that further sandboxing is unnecessary.
Though using this option may also be necessary if you try to use Codex in environments where its native sandboxing mechanisms are unsupported, such as older Linux kernels or on Windows.
Though using this option may also be necessary if you try to use LLMX in environments where its native sandboxing mechanisms are unsupported, such as older Linux kernels or on Windows.
### tools.\*
@@ -347,29 +347,29 @@ Use the optional `[tools]` table to toggle built-in tools that the agent may cal
```toml
[tools]
web_search = true # allow Codex to issue first-party web searches without prompting you (deprecated)
web_search = true # allow LLMX to issue first-party web searches without prompting you (deprecated)
view_image = false # disable image uploads (they're enabled by default)
```
`web_search` is deprecated; use the `web_search_request` feature flag instead.
The `view_image` toggle is useful when you want to include screenshots or diagrams from your repo without pasting them manually. Codex still respects sandboxing: it can only attach files inside the workspace roots you allow.
The `view_image` toggle is useful when you want to include screenshots or diagrams from your repo without pasting them manually. LLMX still respects sandboxing: it can only attach files inside the workspace roots you allow.
### approval_presets
Codex provides three main Approval Presets:
LLMX provides three main Approval Presets:
- Read Only: Codex can read files and answer questions; edits, running commands, and network access require approval.
- Auto: Codex can read files, make edits, and run commands in the workspace without approval; asks for approval outside the workspace or for network access.
- Read Only: LLMX can read files and answer questions; edits, running commands, and network access require approval.
- Auto: LLMX can read files, make edits, and run commands in the workspace without approval; asks for approval outside the workspace or for network access.
- Full Access: Full disk and network access without prompts; extremely risky.
You can further customize how Codex runs at the command line using the `--ask-for-approval` and `--sandbox` options.
You can further customize how LLMX runs at the command line using the `--ask-for-approval` and `--sandbox` options.
> See also [Sandbox & approvals](./sandbox.md) for in-depth examples and platform-specific behaviour.
### shell_environment_policy
Codex spawns subprocesses (e.g. when executing a `local_shell` tool-call suggested by the assistant). By default it now passes **your full environment** to those subprocesses. You can tune this behavior via the **`shell_environment_policy`** block in `config.toml`:
LLMX spawns subprocesses (e.g. when executing a `local_shell` tool-call suggested by the assistant). By default it now passes **your full environment** to those subprocesses. You can tune this behavior via the **`shell_environment_policy`** block in `config.toml`:
```toml
[shell_environment_policy]
```
@@ -388,7 +388,7 @@ include_only = ["PATH", "HOME"]
| Field | Type | Default | Description |
| ------------------------- | -------------------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| `inherit` | string | `all` | Starting template for the environment:<br>`all` (clone full parent env), `core` (`HOME`, `PATH`, `USER`, …), or `none` (start empty). |
| `ignore_default_excludes` | boolean | `false` | When `false`, Codex removes any var whose **name** contains `KEY`, `SECRET`, or `TOKEN` (case-insensitive) before other rules run. |
| `ignore_default_excludes` | boolean | `false` | When `false`, LLMX removes any var whose **name** contains `KEY`, `SECRET`, or `TOKEN` (case-insensitive) before other rules run. |
| `exclude` | array<string> | `[]` | Case-insensitive glob patterns to drop after the default filter.<br>Examples: `"AWS_*"`, `"AZURE_*"`. |
| `set` | table<string,string> | `{}` | Explicit key/value overrides or additions always win over inherited values. |
| `include_only` | array<string> | `[]` | If non-empty, a whitelist of patterns; only variables that match _one_ pattern survive the final step. (Generally used with `inherit = "all"`.) |
@@ -413,7 +413,7 @@ Currently, `CODEX_SANDBOX_NETWORK_DISABLED=1` is also added to the environment,
### mcp_servers
You can configure Codex to use [MCP servers](https://modelcontextprotocol.io/about) to give Codex access to external applications, resources, or services.
You can configure LLMX to use [MCP servers](https://modelcontextprotocol.io/about) to give LLMX access to external applications, resources, or services.
#### Server configuration
@@ -430,7 +430,7 @@ command = "npx"
args = ["-y", "mcp-server"]
# Optional: propagate additional env vars to the MCP server.
# A default whitelist of env vars will be propagated to the MCP server.
# https://github.com/openai/codex/blob/main/codex-rs/rmcp-client/src/utils.rs#L82
# https://github.com/valknar/llmx/blob/main/llmx-rs/rmcp-client/src/utils.rs#L82
env = { "API_KEY" = "value" }
# or
[mcp_servers.server_name.env]
@@ -444,7 +444,7 @@ cwd = "/Users/<user>/code/my-server"
##### Streamable HTTP
[Streamable HTTP servers](https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#streamable-http) enable Codex to talk to resources that are accessed via an HTTP URL (either on localhost or another domain).
[Streamable HTTP servers](https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#streamable-http) enable LLMX to talk to resources that are accessed via an HTTP URL (either on localhost or another domain).
```toml
[mcp_servers.figma]
```
@@ -463,7 +463,7 @@ Streamable HTTP connections always use the experimental Rust MCP client under th
experimental_use_rmcp_client = true
```
After enabling it, run `codex mcp login <server-name>` when the server supports OAuth.
After enabling it, run `llmx mcp login <server-name>` when the server supports OAuth.
#### Other configuration options
@@ -480,7 +480,7 @@ enabled_tools = ["search", "summarize"]
disabled_tools = ["search"]
```
When both `enabled_tools` and `disabled_tools` are specified, Codex first restricts the server to the allow-list and then removes any tools that appear in the deny-list.
When both `enabled_tools` and `disabled_tools` are specified, LLMX first restricts the server to the allow-list and then removes any tools that appear in the deny-list.
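As a fuller sketch of that ordering (the server id and launcher command are hypothetical), the net effect below is that only `summarize` remains available:

```toml
[mcp_servers.docs]
command = "docs-server"
# The allow-list is applied first,
enabled_tools = ["search", "summarize"]
# then the deny-list removes entries, so only "summarize" survives.
disabled_tools = ["search"]
```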
#### Experimental RMCP client
@@ -497,32 +497,32 @@ experimental_use_rmcp_client = true
```shell
# List all available commands
codex mcp --help
llmx mcp --help
# Add a server (env can be repeated; `--` separates the launcher command)
codex mcp add docs -- docs-server --port 4000
llmx mcp add docs -- docs-server --port 4000
# List configured servers (pretty table or JSON)
codex mcp list
codex mcp list --json
llmx mcp list
llmx mcp list --json
# Show one server (table or JSON)
codex mcp get docs
codex mcp get docs --json
llmx mcp get docs
llmx mcp get docs --json
# Remove a server
codex mcp remove docs
llmx mcp remove docs
# Log in to a streamable HTTP server that supports oauth
codex mcp login SERVER_NAME
llmx mcp login SERVER_NAME
# Log out from a streamable HTTP server that supports oauth
codex mcp logout SERVER_NAME
llmx mcp logout SERVER_NAME
```
### Examples of useful MCPs
There is an ever growing list of useful MCP servers that can be helpful while you are working with Codex.
There is an ever-growing list of useful MCP servers that can be helpful while you are working with LLMX.
Some of the most common MCPs we've seen are:
@@ -530,14 +530,14 @@ Some of the most common MCPs we've seen are:
- Figma [Local](https://developers.figma.com/docs/figma-mcp-server/local-server-installation/) and [Remote](https://developers.figma.com/docs/figma-mcp-server/remote-server-installation/) - access to your Figma designs
- [Playwright](https://www.npmjs.com/package/@playwright/mcp) - control and inspect a browser using Playwright
- [Chrome Developer Tools](https://github.com/ChromeDevTools/chrome-devtools-mcp/) — control and inspect a Chrome browser
- [Sentry](https://docs.sentry.io/product/sentry-mcp/#codex) — access to your Sentry logs
- [Sentry](https://docs.sentry.io/product/sentry-mcp/#codex) — access to your Sentry logs
- [GitHub](https://github.com/github/github-mcp-server) — Control over your GitHub account beyond what git allows (like controlling PRs, issues, etc.)
## Observability and telemetry
### otel
Codex can emit [OpenTelemetry](https://opentelemetry.io/) **log events** that
LLMX can emit [OpenTelemetry](https://opentelemetry.io/) **log events** that
describe each run: outbound API requests, streamed responses, user input,
tool-approval decisions, and the result of every tool invocation. Export is
**disabled by default** so local runs remain self-contained. Opt in by adding an
@@ -550,7 +550,7 @@ exporter = "none" # defaults to "none"; set to otlp-http or otlp-grpc t
log_user_prompt = false # defaults to false; redact prompt text unless explicitly enabled
```
Codex tags every exported event with `service.name = $ORIGINATOR` (the same
LLMX tags every exported event with `service.name = $ORIGINATOR` (the same
value sent in the `originator` header, `codex_cli_rs` by default), the CLI
version, and an `env` attribute so downstream collectors can distinguish
dev/staging/prod traffic. Only telemetry produced inside the `codex_otel`
@@ -562,10 +562,10 @@ Every event shares a common set of metadata fields: `event.timestamp`,
`conversation.id`, `app.version`, `auth_mode` (when available),
`user.account_id` (when available), `user.email` (when available), `terminal.type`, `model`, and `slug`.
With OTEL enabled Codex emits the following event types (in addition to the
With OTEL enabled LLMX emits the following event types (in addition to the
metadata above):
- `codex.conversation_starts`
- `llmx.conversation_starts`
- `provider_name`
- `reasoning_effort` (optional)
- `reasoning_summary`
@@ -576,12 +576,12 @@ metadata above):
- `sandbox_policy`
- `mcp_servers` (comma-separated list)
- `active_profile` (optional)
- `codex.api_request`
- `llmx.api_request`
- `attempt`
- `duration_ms`
- `http.response.status_code` (optional)
- `error.message` (failures)
- `codex.sse_event`
- `llmx.sse_event`
- `event.kind`
- `duration_ms`
- `error.message` (failures)
@@ -590,15 +590,15 @@ metadata above):
- `cached_token_count` (responses only, optional)
- `reasoning_token_count` (responses only, optional)
- `tool_token_count` (responses only)
- `codex.user_prompt`
- `llmx.user_prompt`
- `prompt_length`
- `prompt` (redacted unless `log_user_prompt = true`)
- `codex.tool_decision`
- `llmx.tool_decision`
- `tool_name`
- `call_id`
- `decision` (`approved`, `approved_for_session`, `denied`, or `abort`)
- `source` (`config` or `user`)
- `codex.tool_result`
- `llmx.tool_result`
- `tool_name`
- `call_id` (optional)
- `arguments` (optional)
@@ -641,14 +641,14 @@ If the exporter is `none`, nothing is written anywhere; otherwise you must run your
own collector. All exporters run on a background batch worker that is flushed on
shutdown.
If you build Codex from source the OTEL crate is still behind an `otel` feature
If you build LLMX from source, the OTEL crate is still behind an `otel` feature
flag; the official prebuilt binaries ship with the feature enabled. When the
feature is disabled the telemetry hooks become no-ops so the CLI continues to
function without the extra dependencies.
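Putting the section together, here is a minimal opt-in sketch using the keys shown above; treat the exact exporter spelling and any endpoint wiring as assumptions to verify against your build:

```toml
[otel]
# Export telemetry instead of keeping runs self-contained ("none" is the default).
exporter = "otlp-grpc"
# Keep prompt text redacted in exported events.
log_user_prompt = false
```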
### notify
Specify a program that will be executed to get notified about events generated by Codex. Note that the program will receive the notification argument as a string of JSON, e.g.:
Specify a program that will be executed to get notified about events generated by LLMX. Note that the program will receive the notification argument as a string of JSON, e.g.:
```json
{
@@ -663,7 +663,7 @@ Specify a program that will be executed to get notified about events generated b
The `"type"` property will always be set. Currently, `"agent-turn-complete"` is the only notification type that is supported.
`"thread-id"` contains a string that identifies the Codex session that produced the notification; you can use it to correlate multiple turns that belong to the same task.
`"thread-id"` contains a string that identifies the LLMX session that produced the notification; you can use it to correlate multiple turns that belong to the same task.
`"cwd"` reports the absolute working directory for the session so scripts can disambiguate which project triggered the notification.
@@ -691,9 +691,9 @@ def main() -> int:
case "agent-turn-complete":
assistant_message = notification.get("last-assistant-message")
if assistant_message:
title = f"Codex: {assistant_message}"
title = f"LLMX: {assistant_message}"
else:
title = "Codex: Turn Complete!"
title = "LLMX: Turn Complete!"
input_messages = notification.get("input-messages", [])
message = " ".join(input_messages)
title += message
@@ -711,7 +711,7 @@ def main() -> int:
"-message",
message,
"-group",
"codex-" + thread_id,
"llmx-" + thread_id,
"-ignoreDnD",
"-activate",
"com.googlecode.iterm2",
@@ -725,18 +725,18 @@ if __name__ == "__main__":
sys.exit(main())
```
To have Codex use this script for notifications, you would configure it via `notify` in `~/.codex/config.toml` using the appropriate path to `notify.py` on your computer:
To have LLMX use this script for notifications, you would configure it via `notify` in `~/.llmx/config.toml` using the appropriate path to `notify.py` on your computer:
```toml
notify = ["python3", "/Users/mbolin/.codex/notify.py"]
notify = ["python3", "/Users/mbolin/.llmx/notify.py"]
```
> [!NOTE]
> Use `notify` for automation and integrations: Codex invokes your external program with a single JSON argument for each event, independent of the TUI. If you only want lightweight desktop notifications while using the TUI, prefer `tui.notifications`, which uses terminal escape codes and requires no external program. You can enable both; `tui.notifications` covers in-TUI alerts (e.g., approval prompts), while `notify` is best for system-level hooks or custom notifiers. Currently, `notify` emits only `agent-turn-complete`, whereas `tui.notifications` supports `agent-turn-complete` and `approval-requested` with optional filtering.
> Use `notify` for automation and integrations: LLMX invokes your external program with a single JSON argument for each event, independent of the TUI. If you only want lightweight desktop notifications while using the TUI, prefer `tui.notifications`, which uses terminal escape codes and requires no external program. You can enable both; `tui.notifications` covers in-TUI alerts (e.g., approval prompts), while `notify` is best for system-level hooks or custom notifiers. Currently, `notify` emits only `agent-turn-complete`, whereas `tui.notifications` supports `agent-turn-complete` and `approval-requested` with optional filtering.
### hide_agent_reasoning
Codex intermittently emits "reasoning" events that show the model's internal "thinking" before it produces a final answer. Some users may find these events distracting, especially in CI logs or minimal terminal output.
LLMX intermittently emits "reasoning" events that show the model's internal "thinking" before it produces a final answer. Some users may find these events distracting, especially in CI logs or minimal terminal output.
Setting `hide_agent_reasoning` to `true` suppresses these events in **both** the TUI as well as the headless `exec` sub-command:
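The snippet elided by the diff is presumably just the single flag; a minimal sketch:

```toml
# Suppress the model's interim "reasoning" events in the TUI and `exec` output.
hide_agent_reasoning = true
```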
@@ -804,11 +804,11 @@ Users can specify config values at multiple levels. Order of precedence is as fo
1. custom command-line argument, e.g., `--model o3`
2. as part of a profile, where the `--profile` is specified via a CLI (or in the config file itself)
3. as an entry in `config.toml`, e.g., `model = "o3"`
4. the default value that comes with Codex CLI (i.e., Codex CLI defaults to `gpt-5-codex`)
4. the default value that comes with LLMX CLI (i.e., LLMX CLI defaults to `gpt-5-llmx`)
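To make the precedence concrete, consider this sketch (the profile name and values are hypothetical):

```toml
# Plain entry in config.toml (precedence 3).
model = "o3"

# Selected via `--profile full_context` (precedence 2); overrides the entry above.
[profiles.full_context]
model = "gpt-5-llmx"
```

Passing `--model gpt-5` on the command line (precedence 1) would override both.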
### history
By default, Codex CLI records messages sent to the model in `$CODEX_HOME/history.jsonl`. Note that on UNIX, the file permissions are set to `o600`, so it should only be readable and writable by the owner.
By default, LLMX CLI records messages sent to the model in `$CODEX_HOME/history.jsonl`. Note that on UNIX, the file permissions are set to `o600`, so it should only be readable and writable by the owner.
To disable this behavior, configure `[history]` as follows:
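The block elided by the diff sets a single key; a sketch, assuming the key is named `persistence`:

```toml
[history]
# Stop recording messages to $CODEX_HOME/history.jsonl.
persistence = "none"
```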
@@ -831,7 +831,7 @@ Note this is **not** a general editor setting (like `$EDITOR`), as it only accep
- `"cursor"`
- `"none"` to explicitly disable this feature
Currently, `"vscode"` is the default, though Codex does not verify VS Code is installed. As such, `file_opener` may default to `"none"` or something else in the future.
Currently, `"vscode"` is the default, though LLMX does not verify VS Code is installed. As such, `file_opener` may default to `"none"` or something else in the future.
### project_doc_max_bytes
@@ -847,7 +847,7 @@ project_doc_fallback_filenames = ["CLAUDE.md", ".exampleagentrules.md"]
We recommend migrating instructions to AGENTS.md; other filenames may reduce model performance.
> See also [AGENTS.md discovery](./agents_md.md) for how Codex locates these files during a session.
> See also [AGENTS.md discovery](./agents_md.md) for how LLMX locates these files during a session.
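A sketch combining the two related keys; the 32 KiB cap shown here is an assumed illustration, not a documented default:

```toml
# Maximum number of bytes read from a project doc such as AGENTS.md.
project_doc_max_bytes = 32768
# Filenames tried when AGENTS.md is absent (migrating to AGENTS.md is recommended).
project_doc_fallback_filenames = ["CLAUDE.md", ".exampleagentrules.md"]
```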
### tui
@@ -865,7 +865,7 @@ notifications = [ "agent-turn-complete", "approval-requested" ]
```
> [!NOTE]
> Codex emits desktop notifications using terminal escape codes. Not all terminals support these (notably, macOS Terminal.app and VS Code's terminal do not support custom notifications. iTerm2, Ghostty and WezTerm do support these notifications).
> LLMX emits desktop notifications using terminal escape codes. Not all terminals support these (notably, macOS Terminal.app and VS Code's terminal do not support custom notifications. iTerm2, Ghostty and WezTerm do support these notifications).
> [!NOTE]
> `tui.notifications` is built-in and limited to the TUI session. For programmatic or cross-environment notifications—or to integrate with OS-specific notifiers—use the top-level `notify` option to run an external program that receives event JSON. The two settings are independent and can be used together.
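Since the two mechanisms are independent, they can be enabled side by side; a sketch (the notifier path is hypothetical):

```toml
# System-level hook: runs for each event with a single JSON argument.
notify = ["python3", "/path/to/notify.py"]

# Lightweight in-TUI alerts via terminal escape codes.
[tui]
notifications = ["agent-turn-complete", "approval-requested"]
```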
@@ -873,17 +873,17 @@ notifications = [ "agent-turn-complete", "approval-requested" ]
### Forcing a login method
To force users on a given machine to use a specific login method or workspace, use a combination of [managed configurations](https://developers.openai.com/codex/security#managed-configuration) as well as either or both of the following fields:
To force users on a given machine to use a specific login method or workspace, use a combination of [managed configurations](https://developers.openai.com/codex/security#managed-configuration) as well as either or both of the following fields:
```toml
# Force the user to log in with ChatGPT or via an api key.
forced_login_method = "chatgpt" or "api"
# When logging in with ChatGPT, only the specified workspace ID will be presented during the login
# flow and the id will be validated during the oauth callback as well as every time Codex starts.
# flow and the id will be validated during the oauth callback as well as every time LLMX starts.
forced_chatgpt_workspace_id = "00000000-0000-0000-0000-000000000000"
```
If the active credentials don't match the config, the user will be logged out and Codex will exit.
If the active credentials don't match the config, the user will be logged out and LLMX will exit.
If `forced_chatgpt_workspace_id` is set but `forced_login_method` is not set, API key login will still work.
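So, to pin logins to one workspace while still permitting API-key auth, a sketch would set only the workspace field:

```toml
# ChatGPT logins are restricted to this workspace; API-key login still works
# because forced_login_method is left unset.
forced_chatgpt_workspace_id = "00000000-0000-0000-0000-000000000000"
```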
@@ -907,7 +907,7 @@ Valid values:
| Key | Type / Values | Notes |
| ------------------------------------------------ | ----------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- |
| `model` | string | Model to use (e.g., `gpt-5-codex`). |
| `model` | string | Model to use (e.g., `gpt-5-llmx`). |
| `model_provider` | string | Provider id from `model_providers` (default: `openai`). |
| `model_context_window` | number | Context window tokens. |
| `model_max_output_tokens` | number | Max output tokens. |
@@ -925,7 +925,7 @@ Valid values:
| `mcp_servers.<id>.env` | map<string,string> | MCP server env vars (stdio servers only). |
| `mcp_servers.<id>.url` | string | MCP server url (streamable http servers only). |
| `mcp_servers.<id>.bearer_token_env_var` | string | environment variable containing a bearer token to use for auth (streamable http servers only). |
| `mcp_servers.<id>.enabled` | boolean | When false, Codex skips starting the server (default: true). |
| `mcp_servers.<id>.enabled` | boolean | When false, LLMX skips starting the server (default: true). |
| `mcp_servers.<id>.startup_timeout_sec` | number | Startup timeout in seconds (default: 10). Timeout is applied both for initializing MCP server and initially listing tools. |
| `mcp_servers.<id>.tool_timeout_sec` | number | Per-tool timeout in seconds (default: 60). Accepts fractional values; omit to use the default. |
| `mcp_servers.<id>.enabled_tools` | array<string> | Restrict the server to the listed tool names. |
@@ -960,7 +960,7 @@ Valid values:
| `experimental_use_exec_command_tool` | boolean | Use experimental exec command tool. |
| `projects.<path>.trust_level` | string | Mark project/worktree as trusted (only `"trusted"` is recognized). |
| `tools.web_search` | boolean | Enable web search tool (deprecated) (default: false). |
| `tools.view_image` | boolean | Enable or disable the `view_image` tool so Codex can attach local image files from the workspace (default: true). |
| `forced_login_method` | `chatgpt` \| `api` | Only allow Codex to be used with ChatGPT or API keys. |
| `forced_chatgpt_workspace_id` | string (uuid) | Only allow Codex to be used with the specified ChatGPT workspace. |
| `tools.view_image` | boolean | Enable or disable the `view_image` tool so LLMX can attach local image files from the workspace (default: true). |
| `forced_login_method` | `chatgpt` \| `api` | Only allow LLMX to be used with ChatGPT or API keys. |
| `forced_chatgpt_workspace_id` | string (uuid) | Only allow LLMX to be used with the specified ChatGPT workspace. |
| `cli_auth_credentials_store` | `file` \| `keyring` \| `auto` | Where to store CLI login credentials (default: `file`). |
@@ -18,7 +18,7 @@ If you want to add a new feature or change the behavior of an existing one, plea
1. **Start with an issue.** Open a new one or comment on an existing discussion so we can agree on the solution before code is written.
2. **Add or update tests.** Every new feature or bug-fix should come with test coverage that fails before your change and passes afterwards. 100% coverage is not required, but aim for meaningful assertions.
3. **Document behaviour.** If your change affects user-facing behaviour, update the README, inline help (`codex --help`), or relevant example projects.
3. **Document behaviour.** If your change affects user-facing behaviour, update the README, inline help (`llmx --help`), or relevant example projects.
4. **Keep commits atomic.** Each commit should compile and the tests should pass. This makes reviews and potential rollbacks easier.
### Opening a pull request
@@ -46,7 +46,7 @@ If you want to add a new feature or change the behavior of an existing one, plea
If you run into problems setting up the project, would like feedback on an idea, or just want to say _hi_ - please open a Discussion or jump into the relevant issue. We are happy to help.
Together we can make Codex CLI an incredible tool. **Happy hacking!** :rocket:
Together we can make LLMX CLI an incredible tool. **Happy hacking!** :rocket:
### Contributor license agreement (CLA)
@@ -71,7 +71,7 @@ No special Git commands, email attachments, or commit footers required.
The **DCO check** blocks merges until every commit in the PR carries the footer (with squash this is just the one).
### Releasing `codex`
### Releasing `llmx`
_For admins only._
@@ -79,16 +79,16 @@ Make sure you are on `main` and have no local changes. Then run:
```shell
VERSION=0.2.0 # Can also be 0.2.0-alpha.1 or any valid Rust version.
./codex-rs/scripts/create_github_release.sh "$VERSION"
./llmx-rs/scripts/create_github_release.sh "$VERSION"
```
This will make a local commit on top of `main` with `version` set to `$VERSION` in `codex-rs/Cargo.toml` (note that on `main`, we leave the version as `version = "0.0.0"`).
This will make a local commit on top of `main` with `version` set to `$VERSION` in `llmx-rs/Cargo.toml` (note that on `main`, we leave the version as `version = "0.0.0"`).
This will push the commit using the tag `rust-v${VERSION}`, which in turn kicks off [the release workflow](../.github/workflows/rust-release.yml). This will create a new GitHub Release named `$VERSION`.
If everything looks good in the generated GitHub Release, uncheck the **pre-release** box so it is the latest release.
Create a PR to update [`Cask/c/codex.rb`](https://github.com/Homebrew/homebrew-cask/blob/main/Formula/c/codex.rb) on Homebrew.
Create a PR to update [`Casks/c/llmx.rb`](https://github.com/Homebrew/homebrew-cask/blob/main/Casks/c/llmx.rb) on Homebrew.
### Security & responsible AI
@@ -1,11 +1,11 @@
# Example config.toml
Use this example configuration as a starting point. For an explanation of each field and additional context, see [Configuration](./config.md). Copy the snippet below to `~/.codex/config.toml` and adjust values as needed.
Use this example configuration as a starting point. For an explanation of each field and additional context, see [Configuration](./config.md). Copy the snippet below to `~/.llmx/config.toml` and adjust values as needed.
```toml
# Codex example configuration (config.toml)
# LLMX example configuration (config.toml)
#
# This file lists all keys Codex reads from config.toml, their default values,
# This file lists all keys LLMX reads from config.toml, their default values,
# and concise explanations. Values here mirror the effective defaults compiled
# into the CLI. Adjust as needed.
#
@@ -18,17 +18,17 @@ Use this example configuration as a starting point. For an explanation of each f
# Core Model Selection
################################################################################
# Primary model used by Codex. Default differs by OS; non-Windows defaults here.
# Linux/macOS default: "gpt-5-codex"; Windows default: "gpt-5".
model = "gpt-5-codex"
# Primary model used by LLMX. Default differs by OS; non-Windows defaults here.
# Linux/macOS default: "gpt-5-llmx"; Windows default: "gpt-5".
model = "gpt-5-llmx"
# Model used by the /review feature (code reviews). Default: "gpt-5-codex".
review_model = "gpt-5-codex"
# Model used by the /review feature (code reviews). Default: "gpt-5-llmx".
review_model = "gpt-5-llmx"
# Provider id selected from [model_providers]. Default: "openai".
model_provider = "openai"
# Optional manual model metadata. When unset, Codex auto-detects from model.
# Optional manual model metadata. When unset, LLMX auto-detects from model.
# Uncomment to force values.
# model_context_window = 128000 # tokens; default: auto for model
# model_max_output_tokens = 8192 # tokens; default: auto for model
@@ -153,10 +153,10 @@ disable_paste_burst = false
windows_wsl_setup_acknowledged = false
# External notifier program (argv array). When unset: disabled.
# Example: notify = ["notify-send", "Codex"]
# Example: notify = ["notify-send", "LLMX"]
# notify = [ ]
# In-product notices (mostly set automatically by Codex).
# In-product notices (mostly set automatically by LLMX).
[notice]
# hide_full_access_warning = true
# hide_rate_limit_model_nudge = true
@@ -174,7 +174,7 @@ chatgpt_base_url = "https://chatgpt.com/backend-api/"
# Restrict ChatGPT login to a specific workspace id. Default: unset.
# forced_chatgpt_workspace_id = ""
# Force login mechanism when Codex would normally auto-select. Default: unset.
# Force login mechanism when LLMX would normally auto-select. Default: unset.
# Allowed values: chatgpt | api
# forced_login_method = "chatgpt"
@@ -315,7 +315,7 @@ mcp_oauth_credentials_store = "auto"
[profiles]
# [profiles.default]
# model = "gpt-5-codex"
# model = "gpt-5-llmx"
# model_provider = "openai"
# approval_policy = "on-request"
# sandbox_mode = "read-only"
@@ -1,24 +1,24 @@
## Non-interactive mode
Use Codex in non-interactive mode to automate common workflows.
Use LLMX in non-interactive mode to automate common workflows.
```shell
codex exec "count the total number of lines of code in this project"
llmx exec "count the total number of lines of code in this project"
```
In non-interactive mode, Codex does not ask for command or edit approvals. By default it runs in `read-only` mode, so it cannot edit files or run commands that require network access.
In non-interactive mode, LLMX does not ask for command or edit approvals. By default it runs in `read-only` mode, so it cannot edit files or run commands that require network access.
Use `codex exec --full-auto` to allow file edits. Use `codex exec --sandbox danger-full-access` to allow edits and networked commands.
Use `llmx exec --full-auto` to allow file edits. Use `llmx exec --sandbox danger-full-access` to allow edits and networked commands.
### Default output mode
By default, Codex streams its activity to stderr and only writes the final message from the agent to stdout. This makes it easier to pipe `codex exec` into another tool without extra filtering.
By default, LLMX streams its activity to stderr and only writes the final message from the agent to stdout. This makes it easier to pipe `llmx exec` into another tool without extra filtering.
To write the output of `codex exec` to a file, in addition to using a shell redirect like `>`, there is also a dedicated flag to specify an output file: `-o`/`--output-last-message`.
To write the output of `llmx exec` to a file, in addition to using a shell redirect like `>`, there is also a dedicated flag to specify an output file: `-o`/`--output-last-message`.
### JSON output mode
`codex exec` supports a `--json` mode that streams events to stdout as JSON Lines (JSONL) while the agent runs.
`llmx exec` supports a `--json` mode that streams events to stdout as JSON Lines (JSONL) while the agent runs.
Supported event types:
@@ -75,40 +75,40 @@ Sample schema:
```
```shell
codex exec "Extract details of the project" --output-schema ~/schema.json
llmx exec "Extract details of the project" --output-schema ~/schema.json
...
{"project_name":"Codex CLI","programming_languages":["Rust","TypeScript","Shell"]}
{"project_name":"LLMX CLI","programming_languages":["Rust","TypeScript","Shell"]}
```
Combine `--output-schema` with `-o` to only print the final JSON output. You can also pass a file path to `-o` to save the JSON output to a file.
### Git repository requirement
Codex requires a Git repository to avoid destructive changes. To disable this check, use `codex exec --skip-git-repo-check`.
LLMX requires a Git repository to avoid destructive changes. To disable this check, use `llmx exec --skip-git-repo-check`.
### Resuming non-interactive sessions
Resume a previous non-interactive session with `codex exec resume <SESSION_ID>` or `codex exec resume --last`. This preserves conversation context so you can ask follow-up questions or give new tasks to the agent.
Resume a previous non-interactive session with `llmx exec resume <SESSION_ID>` or `llmx exec resume --last`. This preserves conversation context so you can ask follow-up questions or give new tasks to the agent.
```shell
codex exec "Review the change, look for use-after-free issues"
codex exec resume --last "Fix use-after-free issues"
llmx exec "Review the change, look for use-after-free issues"
llmx exec resume --last "Fix use-after-free issues"
```
Only the conversation context is preserved; you must still provide flags to customize Codex behavior.
Only the conversation context is preserved; you must still provide flags to customize LLMX behavior.
```shell
codex exec --model gpt-5-codex --json "Review the change, look for use-after-free issues"
codex exec --model gpt-5 --json resume --last "Fix use-after-free issues"
llmx exec --model gpt-5-llmx --json "Review the change, look for use-after-free issues"
llmx exec --model gpt-5 --json resume --last "Fix use-after-free issues"
```
## Authentication
By default, `codex exec` will use the same authentication method as Codex CLI and VSCode extension. You can override the api key by setting the `CODEX_API_KEY` environment variable.
By default, `llmx exec` will use the same authentication method as the LLMX CLI and VS Code extension. You can override the API key by setting the `CODEX_API_KEY` environment variable.
```shell
CODEX_API_KEY=your-api-key-here codex exec "Fix merge conflict"
CODEX_API_KEY=your-api-key-here llmx exec "Fix merge conflict"
```
NOTE: `CODEX_API_KEY` is only supported in `codex exec`.
NOTE: `CODEX_API_KEY` is only supported in `llmx exec`.
@@ -1,6 +1,6 @@
## Experimental technology disclaimer
Codex CLI is an experimental project under active development. It is not yet stable, may contain bugs, incomplete features, or undergo breaking changes. We're building it in the open with the community and welcome:
LLMX CLI is an experimental project under active development. It is not yet stable, may contain bugs, incomplete features, or undergo breaking changes. We're building it in the open with the community and welcome:
- Bug reports
- Feature requests
@@ -2,29 +2,29 @@
This FAQ highlights the most common questions and points you to the right deep-dive guides in `docs/`.
### OpenAI released a model called Codex in 2021 - is this related?
### OpenAI released a model called Codex in 2021 - is this related?
In 2021, OpenAI released Codex, an AI system designed to generate code from natural language prompts. That original Codex model was deprecated as of March 2023 and is separate from the CLI tool.
In 2021, OpenAI released Codex, an AI system designed to generate code from natural language prompts. That original Codex model was deprecated as of March 2023 and is separate from the CLI tool.
### Which models are supported?
We recommend using Codex with GPT-5 Codex, our best coding model. The default reasoning level is medium, and you can upgrade to high for complex tasks with the `/model` command.
We recommend using LLMX with GPT-5 LLMX, our best coding model. The default reasoning level is medium, and you can upgrade to high for complex tasks with the `/model` command.
You can also use older models by using API-based auth and launching codex with the `--model` flag.
You can also use older models by using API-based auth and launching llmx with the `--model` flag.
### How do approvals and sandbox modes work together?
Approvals are the mechanism Codex uses to ask before running a tool call with elevated permissions - typically to leave the sandbox or re-run a failed command without isolation. Sandbox mode provides the baseline isolation (`Read Only`, `Workspace Write`, or `Danger Full Access`; see [Sandbox & approvals](./sandbox.md)).
Approvals are the mechanism LLMX uses to ask before running a tool call with elevated permissions - typically to leave the sandbox or re-run a failed command without isolation. Sandbox mode provides the baseline isolation (`Read Only`, `Workspace Write`, or `Danger Full Access`; see [Sandbox & approvals](./sandbox.md)).
### Can I automate tasks without the TUI?
Yes. [`codex exec`](./exec.md) runs Codex in non-interactive mode with streaming logs, JSONL output, and structured schema support. The command respects the same sandbox and approval settings you configure in the [Config guide](./config.md).
Yes. [`llmx exec`](./exec.md) runs LLMX in non-interactive mode with streaming logs, JSONL output, and structured schema support. The command respects the same sandbox and approval settings you configure in the [Config guide](./config.md).
### How do I stop Codex from editing my files?
### How do I stop LLMX from editing my files?
By default, Codex can modify files in your current working directory (Auto mode). To prevent edits, run `codex` in read-only mode with the CLI flag `--sandbox read-only`. Alternatively, you can change the approval level mid-conversation with `/approvals`.
By default, LLMX can modify files in your current working directory (Auto mode). To prevent edits, run `llmx` in read-only mode with the CLI flag `--sandbox read-only`. Alternatively, you can change the approval level mid-conversation with `/approvals`.
### How do I connect Codex to MCP servers?
### How do I connect LLMX to MCP servers?
Configure MCP servers through your `config.toml` using the examples in [Config -> Connecting to MCP servers](./config.md#connecting-to-mcp-servers).
@@ -32,24 +32,24 @@ Configure MCP servers through your `config.toml` using the examples in [Config -
Confirm your setup in two steps:
1. Walk through the auth flows in [Authentication](./authentication.md) to ensure the correct credentials are present in `~/.codex/auth.json`.
1. Walk through the auth flows in [Authentication](./authentication.md) to ensure the correct credentials are present in `~/.llmx/auth.json`.
2. If you're on a headless or remote machine, make sure port-forwarding is configured as described in [Authentication -> Connecting on a "Headless" Machine](./authentication.md#connecting-on-a-headless-machine).
### Does it work on Windows?
Running Codex directly on Windows may work, but is not officially supported. We recommend using [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install).
Running LLMX directly on Windows may work, but is not officially supported. We recommend using [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install).
### Where should I start after installation?
Follow the quick setup in [Install & build](./install.md) and then jump into [Getting started](./getting-started.md) for interactive usage tips, prompt examples, and AGENTS.md guidance.
### `brew upgrade codex` isn't upgrading me
### `brew upgrade llmx` isn't upgrading me
If you're running Codex v0.46.0 or older, `brew upgrade codex` will not move you to the latest version because we migrated from a Homebrew formula to a cask. To upgrade, uninstall the existing oudated formula and then install the new cask:
If you're running LLMX v0.46.0 or older, `brew upgrade llmx` will not move you to the latest version because we migrated from a Homebrew formula to a cask. To upgrade, uninstall the existing outdated formula and then install the new cask:
```bash
brew uninstall --formula codex
brew install --cask codex
brew uninstall --formula llmx
brew install --cask llmx
```
After reinstalling, `brew upgrade --cask codex` will keep future releases up to date.
After reinstalling, `brew upgrade --cask llmx` will keep future releases up to date.
@@ -3,44 +3,44 @@
Looking for something specific? Jump ahead:
- [Tips & shortcuts](#tips--shortcuts) hotkeys, resume flow, prompts
- [Non-interactive runs](./exec.md): automate with `codex exec`
- [Non-interactive runs](./exec.md): automate with `llmx exec`
- Ready for deeper customization? Head to [`advanced.md`](./advanced.md)
### CLI usage
| Command | Purpose | Example |
| ------------------ | ---------------------------------- | ------------------------------- |
| `codex` | Interactive TUI | `codex` |
| `codex "..."` | Initial prompt for interactive TUI | `codex "fix lint errors"` |
| `codex exec "..."` | Non-interactive "automation mode" | `codex exec "explain utils.ts"` |
| `llmx` | Interactive TUI | `llmx` |
| `llmx "..."` | Initial prompt for interactive TUI | `llmx "fix lint errors"` |
| `llmx exec "..."` | Non-interactive "automation mode" | `llmx exec "explain utils.ts"` |
Key flags: `--model/-m`, `--ask-for-approval/-a`.
### Resuming interactive sessions
- Run `codex resume` to display the session picker UI
- Resume most recent: `codex resume --last`
- Resume by id: `codex resume <SESSION_ID>` (You can get session ids from /status or `~/.codex/sessions/`)
- Run `llmx resume` to display the session picker UI
- Resume most recent: `llmx resume --last`
- Resume by id: `llmx resume <SESSION_ID>` (You can get session ids from /status or `~/.llmx/sessions/`)
Examples:
```shell
# Open a picker of recent sessions
codex resume
llmx resume
# Resume the most recent session
codex resume --last
llmx resume --last
# Resume a specific session by id
codex resume 7f9f9a2e-1b3c-4c7a-9b0e-123456789abc
llmx resume 7f9f9a2e-1b3c-4c7a-9b0e-123456789abc
```
### Running with a prompt as input
You can also run Codex CLI with a prompt as input:
You can also run LLMX CLI with a prompt as input:
```shell
codex "explain this codebase to me"
llmx "explain this codebase to me"
```
### Example prompts
@@ -49,22 +49,22 @@ Below are a few bite-size examples you can copy-paste. Replace the text in quote
| ✨ | What you type | What happens |
| --- | ------------------------------------------------------------------------------- | -------------------------------------------------------------------------- |
| 1 | `codex "Refactor the Dashboard component to React Hooks"` | Codex rewrites the class component, runs `npm test`, and shows the diff. |
| 2 | `codex "Generate SQL migrations for adding a users table"` | Infers your ORM, creates migration files, and runs them in a sandboxed DB. |
| 3 | `codex "Write unit tests for utils/date.ts"` | Generates tests, executes them, and iterates until they pass. |
| 4 | `codex "Bulk-rename *.jpeg -> *.jpg with git mv"` | Safely renames files and updates imports/usages. |
| 5 | `codex "Explain what this regex does: ^(?=.*[A-Z]).{8,}$"` | Outputs a step-by-step human explanation. |
| 6 | `codex "Carefully review this repo, and propose 3 high impact well-scoped PRs"` | Suggests impactful PRs in the current codebase. |
| 7 | `codex "Look for vulnerabilities and create a security review report"` | Finds and explains security bugs. |
| 1 | `llmx "Refactor the Dashboard component to React Hooks"` | LLMX rewrites the class component, runs `npm test`, and shows the diff. |
| 2 | `llmx "Generate SQL migrations for adding a users table"` | Infers your ORM, creates migration files, and runs them in a sandboxed DB. |
| 3 | `llmx "Write unit tests for utils/date.ts"` | Generates tests, executes them, and iterates until they pass. |
| 4 | `llmx "Bulk-rename *.jpeg -> *.jpg with git mv"` | Safely renames files and updates imports/usages. |
| 5 | `llmx "Explain what this regex does: ^(?=.*[A-Z]).{8,}$"` | Outputs a step-by-step human explanation. |
| 6 | `llmx "Carefully review this repo, and propose 3 high impact well-scoped PRs"` | Suggests impactful PRs in the current codebase. |
| 7 | `llmx "Look for vulnerabilities and create a security review report"` | Finds and explains security bugs. |
Looking to reuse your own instructions? Create slash commands with [custom prompts](./prompts.md).
### Memory with AGENTS.md
You can give Codex extra instructions and guidance using `AGENTS.md` files. Codex looks for them in the following places, and merges them top-down:
You can give LLMX extra instructions and guidance using `AGENTS.md` files. LLMX looks for them in the following places, and merges them top-down:
1. `~/.codex/AGENTS.md` - personal global guidance
2. Every directory from the repository root down to your current working directory (inclusive). In each directory, Codex first looks for `AGENTS.override.md` and uses it if present; otherwise it falls back to `AGENTS.md`. Use the override form when you want to replace inherited instructions for that directory.
1. `~/.llmx/AGENTS.md` - personal global guidance
2. Every directory from the repository root down to your current working directory (inclusive). In each directory, LLMX first looks for `AGENTS.override.md` and uses it if present; otherwise it falls back to `AGENTS.md`. Use the override form when you want to replace inherited instructions for that directory.
For more information on how to use AGENTS.md, see the [official AGENTS.md documentation](https://agents.md/).
@@ -76,32 +76,32 @@ Typing `@` triggers a fuzzy-filename search over the workspace root. Use up/down
#### EscEsc to edit a previous message
When the chat composer is empty, press Esc to prime “backtrack” mode. Press Esc again to open a transcript preview highlighting the last user message; press Esc repeatedly to step to older user messages. Press Enter to confirm and Codex will fork the conversation from that point, trim the visible transcript accordingly, and prefill the composer with the selected user message so you can edit and resubmit it.
When the chat composer is empty, press Esc to prime “backtrack” mode. Press Esc again to open a transcript preview highlighting the last user message; press Esc repeatedly to step to older user messages. Press Enter to confirm and LLMX will fork the conversation from that point, trim the visible transcript accordingly, and prefill the composer with the selected user message so you can edit and resubmit it.
In the transcript preview, the footer shows an `Esc edit prev` hint while editing is active.
#### `--cd`/`-C` flag
Sometimes it is not convenient to `cd` to the directory you want Codex to use as the "working root" before running Codex. Fortunately, `codex` supports a `--cd` option so you can specify whatever folder you want. You can confirm that Codex is honoring `--cd` by double-checking the **workdir** it reports in the TUI at the start of a new session.
Sometimes it is not convenient to `cd` to the directory you want LLMX to use as the "working root" before running LLMX. Fortunately, `llmx` supports a `--cd` option so you can specify whatever folder you want. You can confirm that LLMX is honoring `--cd` by double-checking the **workdir** it reports in the TUI at the start of a new session.
#### `--add-dir` flag
Need to work across multiple projects in one run? Pass `--add-dir` one or more times to expose extra directories as writable roots for the current session while keeping the main working directory unchanged. For example:
```shell
codex --cd apps/frontend --add-dir ../backend --add-dir ../shared
llmx --cd apps/frontend --add-dir ../backend --add-dir ../shared
```
Codex can then inspect and edit files in each listed directory without leaving the primary workspace.
LLMX can then inspect and edit files in each listed directory without leaving the primary workspace.
#### Shell completions
Generate shell completion scripts via:
```shell
codex completion bash
codex completion zsh
codex completion fish
llmx completion bash
llmx completion zsh
llmx completion fish
```
#### Image input
@@ -109,6 +109,6 @@ codex completion fish
Paste images directly into the composer (Ctrl+V / Cmd+V) to attach them to your prompt. You can also attach files via the CLI using `-i/--image` (comma-separated):
```bash
codex -i screenshot.png "Explain this error"
codex --image img1.png,img2.jpg "Summarize these diagrams"
llmx -i screenshot.png "Explain this error"
llmx --image img1.png,img2.jpg "Summarize these diagrams"
```
@@ -10,14 +10,14 @@
### DotSlash
The GitHub Release also contains a [DotSlash](https://dotslash-cli.com/) file for the Codex CLI named `codex`. Using a DotSlash file makes it possible to make a lightweight commit to source control to ensure all contributors use the same version of an executable, regardless of what platform they use for development.
The GitHub Release also contains a [DotSlash](https://dotslash-cli.com/) file for the LLMX CLI named `llmx`. Using a DotSlash file makes it possible to make a lightweight commit to source control to ensure all contributors use the same version of an executable, regardless of what platform they use for development.
### Build from source
```bash
# Clone the repository and navigate to the root of the Cargo workspace.
git clone https://github.com/openai/codex.git
cd codex/codex-rs
git clone https://github.com/valknar/llmx.git
cd llmx/llmx-rs
# Install the Rust toolchain, if necessary.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
@@ -25,11 +25,11 @@ source "$HOME/.cargo/env"
rustup component add rustfmt
rustup component add clippy
# Build Codex.
# Build LLMX.
cargo build
# Launch the TUI with a sample prompt.
cargo run --bin codex -- "explain this codebase to me"
cargo run --bin llmx -- "explain this codebase to me"
# After making changes, ensure the code is clean.
cargo fmt -- --config imports_granularity=Item
@@ -1,8 +1,8 @@
## Codex open source fund
## LLMX open source fund
We're excited to launch a **$1 million initiative** supporting open source projects that use Codex CLI and other OpenAI models.
We're excited to launch a **$1 million initiative** supporting open source projects that use LLMX CLI and other OpenAI models.
- Grants are awarded up to **$25,000** API credits.
- Applications are reviewed **on a rolling basis**.
**Interested? [Apply here](https://openai.com/form/codex-open-source-fund/).**
**Interested? [Apply here](https://openai.com/form/codex-open-source-fund/).**
@@ -1,13 +1,13 @@
## Custom Prompts
Custom prompts turn your repeatable instructions into reusable slash commands, so you can trigger them without retyping or copy/pasting. Each prompt is a Markdown file that Codex expands into the conversation the moment you run it.
Custom prompts turn your repeatable instructions into reusable slash commands, so you can trigger them without retyping or copy/pasting. Each prompt is a Markdown file that LLMX expands into the conversation the moment you run it.
### Where prompts live
- Location: store prompts in `$CODEX_HOME/prompts/` (defaults to `~/.codex/prompts/`). Set `CODEX_HOME` if you want to use a different folder.
- File type: Codex only loads `.md` files. Non-Markdown files are ignored. Both regular files and symlinks to Markdown files are supported.
- Location: store prompts in `$CODEX_HOME/prompts/` (defaults to `~/.llmx/prompts/`). Set `CODEX_HOME` if you want to use a different folder.
- File type: LLMX only loads `.md` files. Non-Markdown files are ignored. Both regular files and symlinks to Markdown files are supported.
- Naming: The filename (without `.md`) becomes the prompt name. A file called `review.md` registers the prompt `review`.
- Refresh: Prompts are loaded when a session starts. Restart Codex (or start a new session) after adding or editing files.
- Refresh: Prompts are loaded when a session starts. Restart LLMX (or start a new session) after adding or editing files.
- Conflicts: Files whose names collide with built-in commands (like `init`) stay hidden in the slash popup, but you can still invoke them with `/prompts:<name>`.
### File format
@@ -27,24 +27,24 @@ Custom prompts turn your repeatable instructions into reusable slash commands, s
### Placeholders and arguments
- Numeric placeholders: `$1`-`$9` insert the first nine positional arguments you type after the command. `$ARGUMENTS` inserts all positional arguments joined by a single space. Use `$$` to emit a literal dollar sign (Codex leaves `$$` untouched).
- Numeric placeholders: `$1`-`$9` insert the first nine positional arguments you type after the command. `$ARGUMENTS` inserts all positional arguments joined by a single space. Use `$$` to emit a literal dollar sign (LLMX leaves `$$` untouched).
- Named placeholders: Tokens such as `$FILE` or `$TICKET_ID` expand from `KEY=value` pairs you supply. Keys are case-sensitive—use the same uppercase name in the command (for example, `FILE=...`).
- Quoted arguments: Double-quote any value that contains spaces, e.g. `TICKET_TITLE="Fix logging"`.
- Invocation syntax: Run prompts via `/prompts:<name> ...`. When the slash popup is open, typing either `prompts:` or the bare prompt name will surface `/prompts:<name>` suggestions.
- Error handling: If a prompt contains named placeholders, Codex requires them all. You will see a validation message if any are missing or malformed.
- Error handling: If a prompt contains named placeholders, LLMX requires them all. You will see a validation message if any are missing or malformed.
### Running a prompt
1. Start a new Codex session (ensures the prompt list is fresh).
1. Start a new LLMX session (ensures the prompt list is fresh).
2. In the composer, type `/` to open the slash popup.
3. Type `prompts:` (or start typing the prompt name) and select it with ↑/↓.
4. Provide any required arguments, press Enter, and Codex sends the expanded content.
4. Provide any required arguments, press Enter, and LLMX sends the expanded content.
### Examples
**Draft PR helper**
`~/.codex/prompts/draftpr.md`
`~/.llmx/prompts/draftpr.md`
```markdown
---
@@ -54,4 +54,4 @@ description: Create feature branch, commit and open draft PR.
Create a branch named `tibo/<feature_name>`, commit the changes, and open a draft PR.
```
Usage: type `/prompts:draftpr` to have codex perform the work.
Usage: type `/prompts:draftpr` to have llmx perform the work.
@@ -1,30 +1,30 @@
# Release Management
Currently, we made Codex binaries available in three places:
Currently, we make LLMX binaries available in three places:
- GitHub Releases https://github.com/openai/codex/releases/
- `@openai/codex` on npm: https://www.npmjs.com/package/@openai/codex
- `codex` on Homebrew: https://formulae.brew.sh/cask/codex
- GitHub Releases https://github.com/valknar/llmx/releases/
- `@llmx/llmx` on npm: https://www.npmjs.com/package/@llmx/llmx
- `llmx` on Homebrew: https://formulae.brew.sh/cask/llmx
# Cutting a Release
Run the `codex-rs/scripts/create_github_release` script in the repository to publish a new release. The script will choose the appropriate version number depending on the type of release you are creating.
Run the `llmx-rs/scripts/create_github_release` script in the repository to publish a new release. The script will choose the appropriate version number depending on the type of release you are creating.
To cut a new alpha release from `main` (feel free to cut alphas liberally):
```
./codex-rs/scripts/create_github_release --publish-alpha
./llmx-rs/scripts/create_github_release --publish-alpha
```
To cut a new _public_ release from `main` (which requires more caution), run:
```
./codex-rs/scripts/create_github_release --publish-release
./llmx-rs/scripts/create_github_release --publish-release
```
TIP: Add the `--dry-run` flag to report the next version number for the respective release and exit.
Running the publishing script will kick off a GitHub Action to build the release, so go to https://github.com/openai/codex/actions/workflows/rust-release.yml to find the corresponding workflow. (Note: we should automate finding the workflow URL with `gh`.)
Running the publishing script will kick off a GitHub Action to build the release, so go to https://github.com/valknar/llmx/actions/workflows/rust-release.yml to find the corresponding workflow. (Note: we should automate finding the workflow URL with `gh`.)
When the workflow finishes, the GitHub Release is "done," but you still have to consider npm and Homebrew.
@@ -34,12 +34,12 @@ The GitHub Action is responsible for publishing to npm.
## Publishing to Homebrew
For Homebrew, we ship Codex as a cask. Homebrew's automation system checks our GitHub repo every few hours for a new release and will open a PR to update the cask with the latest binary.
For Homebrew, we ship LLMX as a cask. Homebrew's automation system checks our GitHub repo every few hours for a new release and will open a PR to update the cask with the latest binary.
Inevitably, you just have to refresh this page periodically to see if the release has been picked up by their automation system:
https://github.com/Homebrew/homebrew-cask/pulls?q=%3Apr+codex
https://github.com/Homebrew/homebrew-cask/pulls?q=%3Apr+llmx
For reference, our Homebrew cask lives at:
https://github.com/Homebrew/homebrew-cask/blob/main/Casks/c/codex.rb
https://github.com/Homebrew/homebrew-cask/blob/main/Casks/c/llmx.rb
@@ -1,36 +1,36 @@
## Sandbox & approvals
What Codex is allowed to do is governed by a combination of **sandbox modes** (what Codex is allowed to do without supervision) and **approval policies** (when you must confirm an action). This page explains the options, how they interact, and how the sandbox behaves on each platform.
What LLMX is allowed to do is governed by a combination of **sandbox modes** (what LLMX is allowed to do without supervision) and **approval policies** (when you must confirm an action). This page explains the options, how they interact, and how the sandbox behaves on each platform.
### Approval policies
Codex starts conservatively. Until you explicitly tell it a workspace is trusted, the CLI defaults to **read-only sandboxing** with the `read-only` approval preset. Codex can inspect files and answer questions, but every edit or command requires approval.
LLMX starts conservatively. Until you explicitly tell it a workspace is trusted, the CLI defaults to **read-only sandboxing** with the `read-only` approval preset. LLMX can inspect files and answer questions, but every edit or command requires approval.
When you mark a workspace as trusted (for example via the onboarding prompt or `/approvals` → “Trust this directory”), Codex upgrades the default preset to **Auto**: sandboxed writes inside the workspace with `AskForApproval::OnRequest`. Codex only interrupts you when it needs to leave the workspace or rerun something outside the sandbox.
When you mark a workspace as trusted (for example via the onboarding prompt or `/approvals` → “Trust this directory”), LLMX upgrades the default preset to **Auto**: sandboxed writes inside the workspace with `AskForApproval::OnRequest`. LLMX only interrupts you when it needs to leave the workspace or rerun something outside the sandbox.
If you want maximum guardrails for a trusted repo, switch back to Read Only from the `/approvals` picker. If you truly need hands-off automation, use `Full Access`—but be deliberate, because that skips both the sandbox and approvals.
#### Defaults and recommendations
- Every session starts in a sandbox. Until a repo is trusted, Codex enforces read-only access and will prompt before any write or command.
- Marking a repo as trusted switches the default preset to Auto (`workspace-write` + `ask-for-approval on-request`) so Codex can keep iterating locally without nagging you.
- Every session starts in a sandbox. Until a repo is trusted, LLMX enforces read-only access and will prompt before any write or command.
- Marking a repo as trusted switches the default preset to Auto (`workspace-write` + `ask-for-approval on-request`) so LLMX can keep iterating locally without nagging you.
- The workspace always includes the current directory plus temporary directories like `/tmp`. Use `/status` to confirm the exact writable roots.
- You can override the defaults from the command line at any time:
- `codex --sandbox read-only --ask-for-approval on-request`
- `codex --sandbox workspace-write --ask-for-approval on-request`
- `llmx --sandbox read-only --ask-for-approval on-request`
- `llmx --sandbox workspace-write --ask-for-approval on-request`
### Can I run without ANY approvals?
Yes, you can disable all approval prompts with `--ask-for-approval never`. This option works with all `--sandbox` modes, so you still have full control over Codex's level of autonomy. It will make its best attempt with whatever constraints you provide.
Yes, you can disable all approval prompts with `--ask-for-approval never`. This option works with all `--sandbox` modes, so you still have full control over LLMX's level of autonomy. It will make its best attempt with whatever constraints you provide.
### Common sandbox + approvals combinations
| Intent | Flags | Effect |
| ---------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
| Safe read-only browsing | `--sandbox read-only --ask-for-approval on-request` | Codex can read files and answer questions. Codex requires approval to make edits, run commands, or access network. |
| Safe read-only browsing | `--sandbox read-only --ask-for-approval on-request` | LLMX can read files and answer questions. LLMX requires approval to make edits, run commands, or access network. |
| Read-only non-interactive (CI) | `--sandbox read-only --ask-for-approval never` | Reads only; never escalates |
| Let it edit the repo, ask if risky | `--sandbox workspace-write --ask-for-approval on-request` | Codex can read files, make edits, and run commands in the workspace. Codex requires approval for actions outside the workspace or for network access. |
| Auto (preset; trusted repos) | `--full-auto` (equivalent to `--sandbox workspace-write` + `--ask-for-approval on-request`) | Codex runs sandboxed commands that can write inside the workspace without prompting. Escalates only when it must leave the sandbox. |
| Let it edit the repo, ask if risky | `--sandbox workspace-write --ask-for-approval on-request` | LLMX can read files, make edits, and run commands in the workspace. LLMX requires approval for actions outside the workspace or for network access. |
| Auto (preset; trusted repos) | `--full-auto` (equivalent to `--sandbox workspace-write` + `--ask-for-approval on-request`) | LLMX runs sandboxed commands that can write inside the workspace without prompting. Escalates only when it must leave the sandbox. |
| YOLO (not recommended) | `--dangerously-bypass-approvals-and-sandbox` (alias: `--yolo`) | No sandbox; no prompts |
> Note: In `workspace-write`, network is disabled by default unless enabled in config (`[sandbox_workspace_write].network_access = true`).
@@ -65,9 +65,9 @@ sandbox_mode = "read-only"
### Sandbox mechanics by platform {#platform-sandboxing-details}
The mechanism Codex uses to enforce the sandbox policy depends on your OS:
The mechanism LLMX uses to enforce the sandbox policy depends on your OS:
- **macOS 12+** uses **Apple Seatbelt**. Codex invokes `sandbox-exec` with a profile that corresponds to the selected `--sandbox` mode, constraining filesystem and network access at the OS level.
- **macOS 12+** uses **Apple Seatbelt**. LLMX invokes `sandbox-exec` with a profile that corresponds to the selected `--sandbox` mode, constraining filesystem and network access at the OS level.
- **Linux** combines **Landlock** and **seccomp** APIs to approximate the same guarantees. Kernel support is required; older kernels may not expose the necessary features.
- **Windows (experimental)**:
- Launches commands inside a restricted token derived from an AppContainer profile.
@@ -76,20 +76,20 @@ The mechanism Codex uses to enforce the sandbox policy depends on your OS:
Windows sandbox support remains highly experimental. It cannot prevent file writes, deletions, or creations in any directory where the Everyone SID already has write permissions (for example, world-writable folders).
In containerized Linux environments (for example Docker), sandboxing may not work when the host or container configuration does not expose Landlock/seccomp. In those cases, configure the container to provide the isolation you need and run Codex with `--sandbox danger-full-access` (or the shorthand `--dangerously-bypass-approvals-and-sandbox`) inside that container.
In containerized Linux environments (for example Docker), sandboxing may not work when the host or container configuration does not expose Landlock/seccomp. In those cases, configure the container to provide the isolation you need and run LLMX with `--sandbox danger-full-access` (or the shorthand `--dangerously-bypass-approvals-and-sandbox`) inside that container.
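As a sketch of that setup (the image name and mount layout are illustrative, not prescribed by this repo):

```shell
# Sketch: let the container provide isolation, then disable LLMX's own
# sandbox inside it. "my-dev-image" is a placeholder for any image with
# llmx installed.
docker run --rm -it -v "$PWD":/workspace -w /workspace my-dev-image \
  llmx --sandbox danger-full-access "run the test suite"
```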
### Experimenting with the Codex Sandbox
### Experimenting with the LLMX Sandbox
To test how commands behave under Codex's sandbox, use the CLI helpers:
To test how commands behave under LLMX's sandbox, use the CLI helpers:
```
# macOS
codex sandbox macos [--full-auto] [COMMAND]...
llmx sandbox macos [--full-auto] [COMMAND]...
# Linux
codex sandbox linux [--full-auto] [COMMAND]...
llmx sandbox linux [--full-auto] [COMMAND]...
# Legacy aliases
codex debug seatbelt [--full-auto] [COMMAND]...
codex debug landlock [--full-auto] [COMMAND]...
llmx debug seatbelt [--full-auto] [COMMAND]...
llmx debug landlock [--full-auto] [COMMAND]...
```
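For example, a quick probe (a sketch; assuming the default read-only policy, the write should be denied):

```shell
# Attempt a write under the macOS sandbox to observe the denial.
llmx sandbox macos touch /tmp/sandbox-probe.txt
```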


@@ -8,24 +8,24 @@ Slash commands are special commands you can type that start with `/`.
### Built-in slash commands
Control Codexs behavior during an interactive session with slash commands.
Control LLMXs behavior during an interactive session with slash commands.
| Command | Purpose |
| ------------ | ----------------------------------------------------------- |
| `/model` | choose what model and reasoning effort to use |
| `/approvals` | choose what Codex can do without approval |
| `/approvals` | choose what LLMX can do without approval |
| `/review` | review my current changes and find issues |
| `/new` | start a new chat during a conversation |
| `/init` | create an AGENTS.md file with instructions for Codex |
| `/init` | create an AGENTS.md file with instructions for LLMX |
| `/compact` | summarize conversation to prevent hitting the context limit |
| `/undo` | ask Codex to undo a turn |
| `/undo` | ask LLMX to undo a turn |
| `/diff` | show git diff (including untracked files) |
| `/mention` | mention a file |
| `/status` | show current session configuration and token usage |
| `/mcp` | list configured MCP tools |
| `/logout` | log out of Codex |
| `/quit` | exit Codex |
| `/exit` | exit Codex |
| `/logout` | log out of LLMX |
| `/quit` | exit LLMX |
| `/exit` | exit LLMX |
| `/feedback` | send logs to maintainers |
---


@@ -1,3 +1,3 @@
## Zero data retention (ZDR) usage
Codex CLI natively supports OpenAI organizations with [Zero Data Retention (ZDR)](https://platform.openai.com/docs/guides/your-data#zero-data-retention) enabled.
LLMX CLI natively supports OpenAI organizations with [Zero Data Retention (ZDR)](https://platform.openai.com/docs/guides/your-data#zero-data-retention) enabled.


@@ -1,12 +1,12 @@
<h1 align="center">OpenAI Codex CLI</h1>
<h1 align="center">LLMX CLI</h1>
<p align="center">Lightweight coding agent that runs in your terminal</p>
<p align="center"><code>npm i -g @openai/codex</code></p>
<p align="center"><code>npm i -g @llmx/llmx</code></p>
> [!IMPORTANT]
> This is the documentation for the _legacy_ TypeScript implementation of the Codex CLI. It has been superseded by the _Rust_ implementation. See the [README in the root of the Codex repository](https://github.com/openai/codex/blob/main/README.md) for details.
> This is the documentation for the _legacy_ TypeScript implementation of the LLMX CLI. It has been superseded by the _Rust_ implementation. See the [README in the root of the LLMX repository](https://github.com/valknar/llmx/blob/main/README.md) for details.
![Codex demo GIF using: codex "explain this codebase to me"](../.github/demo.gif)
![LLMX demo GIF using: llmx "explain this codebase to me"](../.github/demo.gif)
---
@@ -17,7 +17,7 @@
- [Experimental technology disclaimer](#experimental-technology-disclaimer)
- [Quickstart](#quickstart)
- [Why Codex?](#why-codex)
- [Why LLMX?](#why-llmx)
- [Security model & permissions](#security-model--permissions)
- [Platform sandboxing details](#platform-sandboxing-details)
- [System requirements](#system-requirements)
@@ -37,7 +37,7 @@
- [Environment variables setup](#environment-variables-setup)
- [FAQ](#faq)
- [Zero data retention (ZDR) usage](#zero-data-retention-zdr-usage)
- [Codex open source fund](#codex-open-source-fund)
- [LLMX open source fund](#llmx-open-source-fund)
- [Contributing](#contributing)
- [Development workflow](#development-workflow)
- [Git hooks with Husky](#git-hooks-with-husky)
@@ -49,7 +49,7 @@
- [Getting help](#getting-help)
- [Contributor license agreement (CLA)](#contributor-license-agreement-cla)
- [Quick fixes](#quick-fixes)
- [Releasing `codex`](#releasing-codex)
- [Releasing `llmx`](#releasing-llmx)
- [Alternative build options](#alternative-build-options)
- [Nix flake development](#nix-flake-development)
- [Security & responsible AI](#security--responsible-ai)
@@ -63,7 +63,7 @@
## Experimental technology disclaimer
Codex CLI is an experimental project under active development. It is not yet stable: it may contain bugs or incomplete features, and it may undergo breaking changes. We're building it in the open with the community and welcome:

LLMX CLI is an experimental project under active development. It is not yet stable: it may contain bugs or incomplete features, and it may undergo breaking changes. We're building it in the open with the community and welcome:
- Bug reports
- Feature requests
@@ -77,7 +77,7 @@ Help us improve by filing issues or submitting PRs (see the section below for ho
Install globally:
```shell
npm install -g @openai/codex
npm install -g @llmx/llmx
```
Next, set your OpenAI API key as an environment variable:
@@ -97,7 +97,7 @@ export OPENAI_API_KEY="your-api-key-here"
<details>
<summary><strong>Use <code>--provider</code> to use other models</strong></summary>
> Codex also allows you to use other providers that support the OpenAI Chat Completions API. You can set the provider in the config file or use the `--provider` flag. The possible options for `--provider` are:
> LLMX also allows you to use other providers that support the OpenAI Chat Completions API. You can set the provider in the config file or use the `--provider` flag. The possible options for `--provider` are:
>
> - openai (default)
> - openrouter
@@ -129,28 +129,28 @@ export OPENAI_API_KEY="your-api-key-here"
Run interactively:
```shell
codex
llmx
```
Or, run with a prompt as input (and optionally in `Full Auto` mode):
```shell
codex "explain this codebase to me"
llmx "explain this codebase to me"
```
```shell
codex --approval-mode full-auto "create the fanciest todo-list app"
llmx --approval-mode full-auto "create the fanciest todo-list app"
```
That's it - Codex will scaffold a file, run it inside a sandbox, install any
That's it - LLMX will scaffold a file, run it inside a sandbox, install any
missing dependencies, and show you the live result. Approve the changes and
they'll be committed to your working directory.
---
## Why Codex?
## Why LLMX?
Codex CLI is built for developers who already **live in the terminal** and want
LLMX CLI is built for developers who already **live in the terminal** and want
ChatGPT-level reasoning **plus** the power to actually run code, manipulate
files, and iterate - all under version control. In short, it's _chat-driven
development_ that understands and executes your repo.
@@ -165,7 +165,7 @@ And it's **fully open-source** so you can see and contribute to how it develops!
## Security model & permissions
Codex lets you decide _how much autonomy_ the agent receives and which auto-approval policy applies via the

LLMX lets you decide _how much autonomy_ the agent receives and which auto-approval policy applies via the
`--approval-mode` flag (or the interactive onboarding prompt):
| Mode | What the agent may do without asking | Still requires approval |
@@ -175,7 +175,7 @@ Codex lets you decide _how much autonomy_ the agent receives and auto-approval p
| **Full Auto** | <li>Read/write files <li> Execute shell commands (network disabled, writes limited to your workdir) | - |
In **Full Auto** every command is run **network-disabled** and confined to the
current working directory (plus temporary files) for defense-in-depth. Codex
current working directory (plus temporary files) for defense-in-depth. LLMX
will also show a warning/confirmation if you start in **auto-edit** or
**full-auto** while the directory is _not_ tracked by Git, so you always have a
safety net.
@@ -185,21 +185,21 @@ the network enabled, once we're confident in additional safeguards.
### Platform sandboxing details
The hardening mechanism Codex uses depends on your OS:
The hardening mechanism LLMX uses depends on your OS:
- **macOS 12+** - commands are wrapped with **Apple Seatbelt** (`sandbox-exec`).
- Everything is placed in a read-only jail except for a small set of
writable roots (`$PWD`, `$TMPDIR`, `~/.codex`, etc.).
writable roots (`$PWD`, `$TMPDIR`, `~/.llmx`, etc.).
- Outbound network is _fully blocked_ by default - even if a child process
tries to `curl` somewhere it will fail.
- **Linux** - there is no sandboxing by default.
We recommend using Docker for sandboxing, where Codex launches itself inside a **minimal
We recommend using Docker for sandboxing, where LLMX launches itself inside a **minimal
container image** and mounts your repo _read/write_ at the same path. A
custom `iptables`/`ipset` firewall script denies all egress except the
OpenAI API. This gives you deterministic, reproducible runs without needing
root on the host. You can use the [`run_in_container.sh`](../codex-cli/scripts/run_in_container.sh) script to set up the sandbox.
root on the host. You can use the [`run_in_container.sh`](../llmx-cli/scripts/run_in_container.sh) script to set up the sandbox.
---
@@ -220,10 +220,10 @@ The hardening mechanism Codex uses depends on your OS:
| Command | Purpose | Example |
| ------------------------------------ | ----------------------------------- | ------------------------------------ |
| `codex` | Interactive REPL | `codex` |
| `codex "..."` | Initial prompt for interactive REPL | `codex "fix lint errors"` |
| `codex -q "..."` | Non-interactive "quiet mode" | `codex -q --json "explain utils.ts"` |
| `codex completion <bash\|zsh\|fish>` | Print shell completion script | `codex completion bash` |
| `llmx` | Interactive REPL | `llmx` |
| `llmx "..."` | Initial prompt for interactive REPL | `llmx "fix lint errors"` |
| `llmx -q "..."` | Non-interactive "quiet mode" | `llmx -q --json "explain utils.ts"` |
| `llmx completion <bash\|zsh\|fish>` | Print shell completion script | `llmx completion bash` |
Key flags: `--model/-m`, `--approval-mode/-a`, `--quiet/-q`, and `--notify`.
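For example, combining these (a sketch; the model id is illustrative):

```shell
# Pick a model and an approval mode for a one-off task.
llmx --model o4-mini --approval-mode full-auto "fix lint errors"
```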
@@ -231,9 +231,9 @@ Key flags: `--model/-m`, `--approval-mode/-a`, `--quiet/-q`, and `--notify`.
## Memory & project docs
You can give Codex extra instructions and guidance using `AGENTS.md` files. Codex looks for `AGENTS.md` files in the following places, and merges them top-down:
You can give LLMX extra instructions and guidance using `AGENTS.md` files. LLMX looks for `AGENTS.md` files in the following places, and merges them top-down:
1. `~/.codex/AGENTS.md` - personal global guidance
1. `~/.llmx/AGENTS.md` - personal global guidance
2. `AGENTS.md` at repo root - shared project notes
3. `AGENTS.md` in the current working directory - sub-folder/feature specifics
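For instance, seeding the first of these (a sketch; the guidance lines are illustrative):

```shell
# Create a personal global AGENTS.md (location 1 in the list above).
mkdir -p ~/.llmx
cat > ~/.llmx/AGENTS.md <<'EOF'
- Prefer small, focused diffs
- Run the project's tests before declaring a task done
EOF
```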
@@ -243,14 +243,14 @@ Disable loading of these files with `--no-project-doc` or the environment variab
## Non-interactive / CI mode
Run Codex head-less in pipelines. Example GitHub Action step:
Run LLMX head-less in pipelines. Example GitHub Action step:
```yaml
- name: Update changelog via Codex
- name: Update changelog via LLMX
run: |
npm install -g @openai/codex
npm install -g @llmx/llmx
export OPENAI_API_KEY="${{ secrets.OPENAI_KEY }}"
codex -a auto-edit --quiet "update CHANGELOG for next release"
llmx -a auto-edit --quiet "update CHANGELOG for next release"
```
Set `CODEX_QUIET_MODE=1` to silence interactive UI noise.
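For example (a sketch; the environment variable name is taken verbatim from above):

```shell
# Suppress interactive UI noise in a scripted invocation.
CODEX_QUIET_MODE=1 llmx -q "explain utils.ts"
```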
@@ -260,24 +260,24 @@ Set `CODEX_QUIET_MODE=1` to silence interactive UI noise.
Setting the environment variable `DEBUG=true` prints full API request and response details:
```shell
DEBUG=true codex
DEBUG=true llmx
```
---
## Recipes
Below are a few bite-size examples you can copy-paste. Replace the text in quotes with your own task. See the [prompting guide](https://github.com/openai/codex/blob/main/codex-cli/examples/prompting_guide.md) for more tips and usage patterns.
Below are a few bite-size examples you can copy-paste. Replace the text in quotes with your own task. See the [prompting guide](https://github.com/valknar/llmx/blob/main/llmx-cli/examples/prompting_guide.md) for more tips and usage patterns.
| ✨ | What you type | What happens |
| --- | ------------------------------------------------------------------------------- | -------------------------------------------------------------------------- |
| 1 | `codex "Refactor the Dashboard component to React Hooks"` | Codex rewrites the class component, runs `npm test`, and shows the diff. |
| 2 | `codex "Generate SQL migrations for adding a users table"` | Infers your ORM, creates migration files, and runs them in a sandboxed DB. |
| 3 | `codex "Write unit tests for utils/date.ts"` | Generates tests, executes them, and iterates until they pass. |
| 4 | `codex "Bulk-rename *.jpeg -> *.jpg with git mv"` | Safely renames files and updates imports/usages. |
| 5 | `codex "Explain what this regex does: ^(?=.*[A-Z]).{8,}$"` | Outputs a step-by-step human explanation. |
| 6 | `codex "Carefully review this repo, and propose 3 high impact well-scoped PRs"` | Suggests impactful PRs in the current codebase. |
| 7 | `codex "Look for vulnerabilities and create a security review report"` | Finds and explains security bugs. |
| 1 | `llmx "Refactor the Dashboard component to React Hooks"` | LLMX rewrites the class component, runs `npm test`, and shows the diff. |
| 2 | `llmx "Generate SQL migrations for adding a users table"` | Infers your ORM, creates migration files, and runs them in a sandboxed DB. |
| 3 | `llmx "Write unit tests for utils/date.ts"` | Generates tests, executes them, and iterates until they pass. |
| 4 | `llmx "Bulk-rename *.jpeg -> *.jpg with git mv"` | Safely renames files and updates imports/usages. |
| 5 | `llmx "Explain what this regex does: ^(?=.*[A-Z]).{8,}$"` | Outputs a step-by-step human explanation. |
| 6 | `llmx "Carefully review this repo, and propose 3 high impact well-scoped PRs"` | Suggests impactful PRs in the current codebase. |
| 7 | `llmx "Look for vulnerabilities and create a security review report"` | Finds and explains security bugs. |
---
@@ -287,13 +287,13 @@ Below are a few bite-size examples you can copy-paste. Replace the text in quote
<summary><strong>From npm (Recommended)</strong></summary>
```bash
npm install -g @openai/codex
npm install -g @llmx/llmx
# or
yarn global add @openai/codex
yarn global add @llmx/llmx
# or
bun install -g @openai/codex
bun install -g @llmx/llmx
# or
pnpm add -g @openai/codex
pnpm add -g @llmx/llmx
```
</details>
@@ -303,8 +303,8 @@ pnpm add -g @openai/codex
```bash
# Clone the repository and navigate to the CLI package
git clone https://github.com/openai/codex.git
cd codex/codex-cli
git clone https://github.com/valknar/llmx.git
cd llmx/llmx-cli
# Enable corepack
corepack enable
@@ -332,7 +332,7 @@ pnpm link
## Configuration guide
Codex configuration files can be placed in the `~/.codex/` directory, supporting both YAML and JSON formats.
LLMX configuration files can be placed in the `~/.llmx/` directory, supporting both YAML and JSON formats.
### Basic configuration parameters
@@ -365,7 +365,7 @@ In the `history` object, you can configure conversation history settings:
### Configuration examples
1. YAML format (save as `~/.codex/config.yaml`):
1. YAML format (save as `~/.llmx/config.yaml`):
```yaml
model: o4-mini
@@ -374,7 +374,7 @@ fullAutoErrorMode: ask-user
notify: true
```
2. JSON format (save as `~/.codex/config.json`):
2. JSON format (save as `~/.llmx/config.json`):
```json
{
@@ -455,7 +455,7 @@ Below is a comprehensive example of `config.json` with multiple custom providers
### Custom instructions
You can create a `~/.codex/AGENTS.md` file to define custom guidance for the agent:
You can create a `~/.llmx/AGENTS.md` file to define custom guidance for the agent:
```markdown
- Always respond with emojis
@@ -485,9 +485,9 @@ export OPENROUTER_API_KEY="your-openrouter-key-here"
## FAQ
<details>
<summary>OpenAI released a model called Codex in 2021 - is this related?</summary>
In 2021, OpenAI released Codex, an AI system designed to generate code from natural language prompts. That original Codex model was deprecated as of March 2023 and is separate from the CLI tool.
In 2021, OpenAI released Codex, an AI system designed to generate code from natural language prompts. That original Codex model was deprecated as of March 2023 and is separate from the LLMX CLI tool.
</details>
@@ -505,15 +505,15 @@ It's possible that your [API account needs to be verified](https://help.openai.c
</details>
<details>
<summary>How do I stop Codex from editing my files?</summary>
<summary>How do I stop LLMX from editing my files?</summary>
Codex runs model-generated commands in a sandbox. If a proposed command or file change doesn't look right, you can simply type **n** to deny the command or give the model feedback.
LLMX runs model-generated commands in a sandbox. If a proposed command or file change doesn't look right, you can simply type **n** to deny the command or give the model feedback.
</details>
<details>
<summary>Does it work on Windows?</summary>
Not directly. It requires [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install) - Codex is regularly tested on macOS and Linux with Node 20+, and also supports Node 16.
Not directly. It requires [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install) - LLMX is regularly tested on macOS and Linux with Node 20+, and also supports Node 16.
</details>
@@ -521,24 +521,24 @@ Not directly. It requires [Windows Subsystem for Linux (WSL2)](https://learn.mic
## Zero data retention (ZDR) usage
Codex CLI **does** support OpenAI organizations with [Zero Data Retention (ZDR)](https://platform.openai.com/docs/guides/your-data#zero-data-retention) enabled. If your OpenAI organization has Zero Data Retention enabled and you still encounter errors such as:
LLMX CLI **does** support OpenAI organizations with [Zero Data Retention (ZDR)](https://platform.openai.com/docs/guides/your-data#zero-data-retention) enabled. If your OpenAI organization has Zero Data Retention enabled and you still encounter errors such as:
```
OpenAI rejected the request. Error details: Status: 400, Code: unsupported_parameter, Type: invalid_request_error, Message: 400 Previous response cannot be used for this organization due to Zero Data Retention.
```
You may need to upgrade to a more recent version with: `npm i -g @openai/codex@latest`
You may need to upgrade to a more recent version with: `npm i -g @llmx/llmx@latest`
---
## Codex open source fund
## LLMX open source fund
We're excited to launch a **$1 million initiative** supporting open source projects that use Codex CLI and other OpenAI models.
We're excited to launch a **$1 million initiative** supporting open source projects that use LLMX CLI and other OpenAI models.
- Grants are awarded up to **$25,000** API credits.
- Applications are reviewed **on a rolling basis**.
**Interested? [Apply here](https://openai.com/form/codex-open-source-fund/).**
---
@@ -591,7 +591,7 @@ pnpm format:fix
### Debugging
To debug the CLI with a visual debugger, do the following in the `codex-cli` folder:
To debug the CLI with a visual debugger, do the following in the `llmx-cli` folder:
- Run `pnpm run build` to build the CLI, which will generate `cli.js.map` alongside `cli.js` in the `dist` folder.
- Run the CLI with `node --inspect-brk ./dist/cli.js` The program then waits until a debugger is attached before proceeding. Options:
@@ -602,7 +602,7 @@ To debug the CLI with a visual debugger, do the following in the `codex-cli` fol
1. **Start with an issue.** Open a new one or comment on an existing discussion so we can agree on the solution before code is written.
2. **Add or update tests.** Every new feature or bug-fix should come with test coverage that fails before your change and passes afterwards. 100% coverage is not required, but aim for meaningful assertions.
3. **Document behaviour.** If your change affects user-facing behaviour, update the README, inline help (`codex --help`), or relevant example projects.
3. **Document behaviour.** If your change affects user-facing behaviour, update the README, inline help (`llmx --help`), or relevant example projects.
4. **Keep commits atomic.** Each commit should compile and the tests should pass. This makes reviews and potential rollbacks easier.
### Opening a pull request
@@ -628,7 +628,7 @@ To debug the CLI with a visual debugger, do the following in the `codex-cli` fol
If you run into problems setting up the project, would like feedback on an idea, or just want to say _hi_ - please open a Discussion or jump into the relevant issue. We are happy to help.
Together we can make Codex CLI an incredible tool. **Happy hacking!** :rocket:
Together we can make LLMX CLI an incredible tool. **Happy hacking!** :rocket:
### Contributor license agreement (CLA)
@@ -653,11 +653,11 @@ No special Git commands, email attachments, or commit footers required.
The **DCO check** blocks merges until every commit in the PR carries the footer (with squash this is just the one).
### Releasing `codex`
### Releasing `llmx`
To publish a new version of the CLI you first need to stage the npm package. A
helper script in `codex-cli/scripts/` does all the heavy lifting. Inside the
`codex-cli` folder run:
helper script in `llmx-cli/scripts/` does all the heavy lifting. Inside the
`llmx-cli` folder run:
```bash
# Classic, JS implementation that includes small, native binaries for Linux sandboxing.
@@ -689,27 +689,27 @@ Enter a Nix development shell:
```bash
# Use either one of the commands according to which implementation you want to work with
nix develop .#codex-cli # For entering codex-cli specific shell
nix develop .#codex-rs # For entering codex-rs specific shell
nix develop .#llmx-cli # For entering llmx-cli specific shell
nix develop .#llmx-rs # For entering llmx-rs specific shell
```
This shell includes Node.js, installs dependencies, builds the CLI, and provides a `codex` command alias.
This shell includes Node.js, installs dependencies, builds the CLI, and provides an `llmx` command alias.
Build and run the CLI directly:
```bash
# Use either one of the commands according to which implementation you want to work with
nix build .#codex-cli # For building codex-cli
nix build .#codex-rs # For building codex-rs
./result/bin/codex --help
nix build .#llmx-cli # For building llmx-cli
nix build .#llmx-rs # For building llmx-rs
./result/bin/llmx --help
```
Run the CLI via the flake app:
```bash
# Use either one of the commands according to which implementation you want to work with
nix run .#codex-cli # For running codex-cli
nix run .#codex-rs # For running codex-rs
nix run .#llmx-cli # For running llmx-cli
nix run .#llmx-rs # For running llmx-rs
```
Use direnv with flakes
@@ -717,10 +717,10 @@ Use direnv with flakes
If you have direnv installed, you can use the following `.envrc` to automatically enter the Nix shell when you `cd` into the project directory:
```bash
cd codex-rs
echo "use flake ../flake.nix#codex-rs" >> .envrc && direnv allow
cd codex-cli
echo "use flake ../flake.nix#codex-cli" >> .envrc && direnv allow
cd llmx-rs
echo "use flake ../flake.nix#llmx-rs" >> .envrc && direnv allow
cd llmx-cli
echo "use flake ../flake.nix#llmx-cli" >> .envrc && direnv allow
```
---


@@ -6,14 +6,14 @@ example, to stage the CLI, responses proxy, and SDK packages for version `0.6.0`
```bash
./scripts/stage_npm_packages.py \
--release-version 0.6.0 \
--package codex \
--package codex-responses-api-proxy \
--package codex-sdk
--package llmx \
--package llmx-responses-api-proxy \
--package llmx-sdk
```
This downloads the native artifacts once, hydrates `vendor/` for each package, and writes
tarballs to `dist/npm/`.
If you need to invoke `build_npm_package.py` directly, run
`codex-cli/scripts/install_native_deps.py` first and pass `--vendor-src` pointing to the
`llmx-cli/scripts/install_native_deps.py` first and pass `--vendor-src` pointing to the
directory that contains the populated `vendor/` tree.


@@ -267,7 +267,7 @@ lto = "fat"
# remove everything to make the binary as small as possible.
strip = "symbols"
# See https://github.com/openai/codex/issues/1411 for details.
# See https://github.com/valknar/llmx/issues/1411 for details.
codegen-units = 1
[profile.ci-test]


@@ -1,74 +1,74 @@
# Codex CLI (Rust Implementation)
# LLMX CLI (Rust Implementation)
We provide Codex CLI as a standalone, native executable to ensure a zero-dependency install.
We provide LLMX CLI as a standalone, native executable to ensure a zero-dependency install.
## Installing Codex
## Installing LLMX
Today, the easiest way to install Codex is via `npm`:
Today, the easiest way to install LLMX is via `npm`:
```shell
npm i -g @openai/codex
codex
npm i -g @llmx/llmx
llmx
```
You can also install via Homebrew (`brew install --cask codex`) or download a platform-specific release directly from our [GitHub Releases](https://github.com/openai/codex/releases).
You can also install via Homebrew (`brew install --cask llmx`) or download a platform-specific release directly from our [GitHub Releases](https://github.com/valknar/llmx/releases).
## Documentation quickstart
- First run with Codex? Follow the walkthrough in [`docs/getting-started.md`](../docs/getting-started.md) for prompts, keyboard shortcuts, and session management.
- Already shipping with Codex and want deeper control? Jump to [`docs/advanced.md`](../docs/advanced.md) and the configuration reference at [`docs/config.md`](../docs/config.md).
- First run with LLMX? Follow the walkthrough in [`docs/getting-started.md`](../docs/getting-started.md) for prompts, keyboard shortcuts, and session management.
- Already shipping with LLMX and want deeper control? Jump to [`docs/advanced.md`](../docs/advanced.md) and the configuration reference at [`docs/config.md`](../docs/config.md).
## What's new in the Rust CLI
The Rust implementation is now the maintained Codex CLI and serves as the default experience. It includes a number of features that the legacy TypeScript CLI never supported.
The Rust implementation is now the maintained LLMX CLI and serves as the default experience. It includes a number of features that the legacy TypeScript CLI never supported.
### Config
Codex supports a rich set of configuration options. Note that the Rust CLI uses `config.toml` instead of `config.json`. See [`docs/config.md`](../docs/config.md) for details.
LLMX supports a rich set of configuration options. Note that the Rust CLI uses `config.toml` instead of `config.json`. See [`docs/config.md`](../docs/config.md) for details.
### Model Context Protocol Support
#### MCP client
Codex CLI functions as an MCP client, allowing both the CLI and the IDE extension to connect to MCP servers on startup. See the [`configuration documentation`](../docs/config.md#mcp_servers) for details.

LLMX CLI functions as an MCP client, allowing both the CLI and the IDE extension to connect to MCP servers on startup. See the [`configuration documentation`](../docs/config.md#mcp_servers) for details.
#### MCP server (experimental)
Codex can be launched as an MCP _server_ by running `codex mcp-server`. This allows _other_ MCP clients to use Codex as a tool for another agent.
LLMX can be launched as an MCP _server_ by running `llmx mcp-server`. This allows _other_ MCP clients to use LLMX as a tool for another agent.
Use the [`@modelcontextprotocol/inspector`](https://github.com/modelcontextprotocol/inspector) to try it out:
```shell
npx @modelcontextprotocol/inspector codex mcp-server
npx @modelcontextprotocol/inspector llmx mcp-server
```
Use `codex mcp` to add/list/get/remove MCP server launchers defined in `config.toml`, and `codex mcp-server` to run the MCP server directly.
Use `llmx mcp` to add/list/get/remove MCP server launchers defined in `config.toml`, and `llmx mcp-server` to run the MCP server directly.
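For example (a sketch; `list` is one of the verbs named above):

```shell
# Show the MCP server launchers currently defined in config.toml.
llmx mcp list
```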
### Notifications
You can enable notifications by configuring a script that is run whenever the agent finishes a turn. The [notify documentation](../docs/config.md#notify) includes a detailed example that explains how to get desktop notifications via [terminal-notifier](https://github.com/julienXX/terminal-notifier) on macOS.
### `codex exec` to run Codex programmatically/non-interactively
### `llmx exec` to run LLMX programmatically/non-interactively
To run Codex non-interactively, run `codex exec PROMPT` (you can also pass the prompt via `stdin`) and Codex will work on your task until it decides that it is done and exits. Output is printed to the terminal directly. You can set the `RUST_LOG` environment variable to see more about what's going on.
To run LLMX non-interactively, run `llmx exec PROMPT` (you can also pass the prompt via `stdin`) and LLMX will work on your task until it decides that it is done and exits. Output is printed to the terminal directly. You can set the `RUST_LOG` environment variable to see more about what's going on.
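A minimal sketch of both invocation styles described above (the prompt text is illustrative):

```shell
# Prompt as an argument:
llmx exec "summarize the failing tests"

# Prompt via stdin, with extra logging enabled:
echo "summarize the failing tests" | RUST_LOG=info llmx exec
```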
### Experimenting with the Codex Sandbox
### Experimenting with the LLMX Sandbox
To see what happens when a command is run under the sandbox provided by Codex, we provide the following subcommands in Codex CLI:

To see what happens when a command is run under the sandbox provided by LLMX, we provide the following subcommands in LLMX CLI:
```
# macOS
codex sandbox macos [--full-auto] [--log-denials] [COMMAND]...
llmx sandbox macos [--full-auto] [--log-denials] [COMMAND]...
# Linux
codex sandbox linux [--full-auto] [COMMAND]...
llmx sandbox linux [--full-auto] [COMMAND]...
# Windows
codex sandbox windows [--full-auto] [COMMAND]...
llmx sandbox windows [--full-auto] [COMMAND]...
# Legacy aliases
codex debug seatbelt [--full-auto] [--log-denials] [COMMAND]...
codex debug landlock [--full-auto] [COMMAND]...
llmx debug seatbelt [--full-auto] [--log-denials] [COMMAND]...
llmx debug landlock [--full-auto] [COMMAND]...
```
### Selecting a sandbox policy via `--sandbox`
@@ -76,23 +76,23 @@ codex debug landlock [--full-auto] [COMMAND]...
The Rust CLI exposes a dedicated `--sandbox` (`-s`) flag that lets you pick the sandbox policy **without** having to reach for the generic `-c/--config` option:
```shell
# Run Codex with the default, read-only sandbox
codex --sandbox read-only
# Run LLMX with the default, read-only sandbox
llmx --sandbox read-only
# Allow the agent to write within the current workspace while still blocking network access
codex --sandbox workspace-write
llmx --sandbox workspace-write
# Danger! Disable sandboxing entirely (only do this if you are already running in a container or other isolated env)
codex --sandbox danger-full-access
llmx --sandbox danger-full-access
```
The same setting can be persisted in `~/.codex/config.toml` via the top-level `sandbox_mode = "MODE"` key, e.g. `sandbox_mode = "workspace-write"`.
The same setting can be persisted in `~/.llmx/config.toml` via the top-level `sandbox_mode = "MODE"` key, e.g. `sandbox_mode = "workspace-write"`.
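For instance (a sketch; appending assumes the key is not already set in your config):

```shell
# Persist the sandbox policy across sessions.
cat >> ~/.llmx/config.toml <<'EOF'
sandbox_mode = "workspace-write"
EOF
```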
## Code Organization
This folder is the root of a Cargo workspace. It contains quite a bit of experimental code, but here are the key crates:
- [`core/`](./core) contains the business logic for Codex. Ultimately, we hope this will become a library crate that is generally useful for building other Rust/native applications that use Codex.

- [`core/`](./core) contains the business logic for LLMX. Ultimately, we hope this will become a library crate that is generally useful for building other Rust/native applications that use LLMX.
- [`exec/`](./exec) "headless" CLI for use in automation.
- [`tui/`](./tui) CLI that launches a fullscreen TUI built with [Ratatui](https://ratatui.rs/).
- [`cli/`](./cli) CLI multitool that provides the aforementioned CLIs via subcommands.


@@ -1,4 +1,4 @@
# oai-codex-ansi-escape
# oai-llmx-ansi-escape
Small helper functions that wrap functionality from
<https://crates.io/crates/ansi-to-tui>:


@@ -1,18 +1,18 @@
# codex-app-server
# llmx-app-server
`codex app-server` is the interface Codex uses to power rich interfaces such as the [Codex VS Code extension](https://marketplace.visualstudio.com/items?itemName=openai.chatgpt). The message schema is currently unstable, but those who wish to build experimental UIs on top of Codex may find it valuable.
`llmx app-server` is the interface LLMX uses to power rich interfaces such as the [LLMX VS Code extension](https://marketplace.visualstudio.com/items?itemName=openai.chatgpt). The message schema is currently unstable, but those who wish to build experimental UIs on top of LLMX may find it valuable.
## Protocol
Similar to [MCP](https://modelcontextprotocol.io/), `codex app-server` supports bidirectional communication, streaming JSONL over stdio. The protocol is JSON-RPC 2.0, though the `"jsonrpc":"2.0"` header is omitted.
Similar to [MCP](https://modelcontextprotocol.io/), `llmx app-server` supports bidirectional communication, streaming JSONL over stdio. The protocol is JSON-RPC 2.0, though the `"jsonrpc":"2.0"` header is omitted.
## Message Schema
Currently, you can dump a TypeScript version of the schema using `codex app-server generate-ts`, or a JSON Schema bundle via `codex app-server generate-json-schema`. Each output is specific to the version of Codex you used to run the command, so the generated artifacts are guaranteed to match that version.
Currently, you can dump a TypeScript version of the schema using `llmx app-server generate-ts`, or a JSON Schema bundle via `llmx app-server generate-json-schema`. Each output is specific to the version of LLMX you used to run the command, so the generated artifacts are guaranteed to match that version.
```
codex app-server generate-ts --out DIR
codex app-server generate-json-schema --out DIR
llmx app-server generate-ts --out DIR
llmx app-server generate-json-schema --out DIR
```
## Initialization
@@ -23,40 +23,40 @@ Example:
```json
{ "method": "initialize", "id": 0, "params": {
"clientInfo": { "name": "codex-vscode", "title": "Codex VS Code Extension", "version": "0.1.0" }
"clientInfo": { "name": "llmx-vscode", "title": "LLMX VS Code Extension", "version": "0.1.0" }
} }
{ "id": 0, "result": { "userAgent": "codex-app-server/0.1.0 codex-vscode/0.1.0" } }
{ "id": 0, "result": { "userAgent": "llmx-app-server/0.1.0 llmx-vscode/0.1.0" } }
{ "method": "initialized" }
```
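Because the transport is line-delimited JSON over stdio, the handshake can be exercised directly from a shell (a sketch; the `clientInfo` values are illustrative):

```shell
# Send initialize + initialized, then let stdin close so the server exits.
printf '%s\n' \
  '{ "method": "initialize", "id": 0, "params": { "clientInfo": { "name": "example-client", "title": "Example Client", "version": "0.0.1" } } }' \
  '{ "method": "initialized" }' \
  | llmx app-server
```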
## Core primitives
We have three top-level primitives:
- Thread - a conversation between the Codex agent and a user. Each thread contains multiple turns.
- Thread - a conversation between the LLMX agent and a user. Each thread contains multiple turns.
- Turn - one turn of the conversation, typically starting with a user message and finishing with an agent message. Each turn contains multiple items.
- Item - represents user inputs and agent outputs as part of the turn, persisted and used as the context for future conversations.
## Thread & turn endpoints
The JSON-RPC API exposes dedicated methods for managing Codex conversations. Threads store long-lived conversation metadata, and turns store the per-message exchange (input → Codex output, including streamed items). Use the thread APIs to create, list, or archive sessions, then drive the conversation with turn APIs and notifications.
The JSON-RPC API exposes dedicated methods for managing LLMX conversations. Threads store long-lived conversation metadata, and turns store the per-message exchange (input → LLMX output, including streamed items). Use the thread APIs to create, list, or archive sessions, then drive the conversation with turn APIs and notifications.
### Quick reference
- `thread/start` — create a new thread; emits `thread/started` and auto-subscribes you to turn/item events for that thread.
- `thread/resume` — reopen an existing thread by id so subsequent `turn/start` calls append to it.
- `thread/list` — page through stored rollouts; supports cursor-based pagination and optional `modelProviders` filtering.
- `thread/archive` — move a threads rollout file into the archived directory; returns `{}` on success.
- `turn/start` — add user input to a thread and begin Codex generation; responds with the initial `turn` object and streams `turn/started`, `item/*`, and `turn/completed` notifications.
- `turn/start` — add user input to a thread and begin LLMX generation; responds with the initial `turn` object and streams `turn/started`, `item/*`, and `turn/completed` notifications.
- `turn/interrupt` — request cancellation of an in-flight turn by `(thread_id, turn_id)`; success is an empty `{}` response and the turn finishes with `status: "interrupted"`.
### 1) Start or resume a thread
Start a fresh thread when you need a new Codex conversation.
Start a fresh thread when you need a new LLMX conversation.
```json
{ "method": "thread/start", "id": 10, "params": {
// Optionally set config settings. If not specified, will use the user's
// current config settings.
"model": "gpt-5-codex",
"model": "gpt-5-llmx",
"cwd": "/Users/me/project",
"approvalPolicy": "never",
"sandbox": "workspaceWrite",
@@ -117,7 +117,7 @@ An archived thread will not appear in future calls to `thread/list`.
### 4) Start a turn (send user input)
Turns attach user input (text or images) to a thread and trigger Codex generation. The `input` field is a list of discriminated unions:
Turns attach user input (text or images) to a thread and trigger LLMX generation. The `input` field is a list of discriminated unions:
- `{"type":"text","text":"Explain this diff"}`
- `{"type":"image","url":"https://…png"}`
@@ -137,7 +137,7 @@ You can optionally specify config overrides on the new turn. If specified, these
"writableRoots": ["/Users/me/project"],
"networkAccess": true
},
"model": "gpt-5-codex",
"model": "gpt-5-llmx",
"effort": "medium",
"summary": "concise"
} }
@@ -161,7 +161,7 @@ You can cancel a running Turn with `turn/interrupt`.
{ "id": 31, "result": {} }
```
The server requests cancellations for running subprocesses, then emits a `turn/completed` event with `status: "interrupted"`. Rely on the `turn/completed` notification to know when Codex-side cleanup is done.

The server requests cancellations for running subprocesses, then emits a `turn/completed` event with `status: "interrupted"`. Rely on the `turn/completed` notification to know when LLMX-side cleanup is done.
## Auth endpoints
@@ -193,7 +193,7 @@ Response examples:
Field notes:
- `refreshToken` (bool): set `true` to force a token refresh.
- `requiresOpenaiAuth` reflects the active provider; when `false`, Codex can run without OpenAI credentials.
- `requiresOpenaiAuth` reflects the active provider; when `false`, LLMX can run without OpenAI credentials.
### 2) Log in with an API key
@@ -255,6 +255,6 @@ Field notes:
### Dev notes
- `codex app-server generate-ts --out <dir>` emits v2 types under `v2/`.
- `codex app-server generate-json-schema --out <dir>` outputs `codex_app_server_protocol.schemas.json`.
- `llmx app-server generate-ts --out <dir>` emits v2 types under `v2/`.
- `llmx app-server generate-json-schema --out <dir>` outputs `codex_app_server_protocol.schemas.json`.
- See [“Authentication and authorization” in the config docs](../../docs/config.md#authentication-and-authorization) for configuration knobs.


@@ -1,5 +1,5 @@
# ChatGPT
This crate pertains to first party ChatGPT APIs and products such as Codex agent.
This crate pertains to first party ChatGPT APIs and products such as LLMX agent.
This crate should be primarily built and maintained by OpenAI employees. Please reach out to a maintainer before making an external contribution.


@@ -1,4 +1,4 @@
# codex-common
# llmx-common
This crate is designed for utilities that need to be shared across other crates in the workspace, but should not go in `core`.


@@ -1,10 +1,10 @@
# codex-core
# llmx-core
This crate implements the business logic for Codex. It is designed to be used by the various Codex UIs written in Rust.
This crate implements the business logic for LLMX. It is designed to be used by the various LLMX UIs written in Rust.
## Dependencies
Note that `codex-core` makes some assumptions about certain helper utilities being available in the environment. Currently, this support matrix is:
Note that `llmx-core` makes some assumptions about certain helper utilities being available in the environment. Currently, this support matrix is:
### macOS
@@ -12,8 +12,8 @@ Expects `/usr/bin/sandbox-exec` to be present.
### Linux
Expects the binary containing `codex-core` to run the equivalent of `codex sandbox linux` (legacy alias: `codex debug landlock`) when `arg0` is `codex-linux-sandbox`. See the `codex-arg0` crate for details.
Expects the binary containing `llmx-core` to run the equivalent of `llmx sandbox linux` (legacy alias: `llmx debug landlock`) when `arg0` is `llmx-linux-sandbox`. See the `llmx-arg0` crate for details.
### All Platforms
Expects the binary containing `codex-core` to simulate the virtual `apply_patch` CLI when `arg1` is `--codex-run-as-apply-patch`. See the `codex-arg0` crate for details.
Expects the binary containing `llmx-core` to simulate the virtual `apply_patch` CLI when `arg1` is `--llmx-run-as-apply-patch`. See the `llmx-arg0` crate for details.


@@ -1,4 +1,4 @@
You are Codex, based on GPT-5. You are running as a coding agent in the Codex CLI on a user's computer.
You are LLMX, based on GPT-5. You are running as a coding agent in the LLMX CLI on a user's computer.
## General
@@ -27,9 +27,9 @@ When using the planning tool:
- Do not make single-step plans.
- Once you have made a plan, update it after performing one of the sub-tasks that you shared in the plan.
## Codex CLI harness, sandboxing, and approvals
## LLMX CLI harness, sandboxing, and approvals
The Codex CLI harness supports several different configurations for sandboxing and escalation approvals that the user can choose from.
The LLMX CLI harness supports several different configurations for sandboxing and escalation approvals that the user can choose from.
Filesystem sandboxing defines which files can be read or written. The options for `sandbox_mode` are:
- **read-only**: The sandbox only permits reading files.


@@ -1,4 +1,4 @@
You are a coding agent running in the Codex CLI, a terminal-based coding assistant. Codex CLI is an open source project led by OpenAI. You are expected to be precise, safe, and helpful.
You are a coding agent running in the LLMX CLI, a terminal-based coding assistant. LLMX CLI is an open source project built on OpenAI's Codex CLI. You are expected to be precise, safe, and helpful.
Your capabilities:
@@ -6,7 +6,7 @@ Your capabilities:
- Communicate with the user by streaming thinking & responses, and by making & updating plans.
- Emit function calls to run terminal commands and apply patches. Depending on how this specific run is configured, you can request that these function calls be escalated to the user for approval before running. More on this in the "Sandbox and approvals" section.
Within this context, Codex refers to the open-source agentic coding interface (not the old Codex language model built by OpenAI).
Within this context, LLMX refers to the open-source agentic coding interface (not the old Codex language model built by OpenAI).
# How you work
@@ -148,7 +148,7 @@ If completing the user's task requires writing or modifying files, your code and
## Sandbox and approvals
The Codex CLI harness supports several different sandboxing and approval configurations that the user can choose from.

The LLMX CLI harness supports several different sandboxing and approval configurations that the user can choose from.
Filesystem sandboxing prevents you from editing files without user approval. The options are:


@@ -1,19 +1,19 @@
# Codex MCP Server Interface [experimental]
# LLMX MCP Server Interface [experimental]
This document describes Codexs experimental MCP server interface: a JSONRPC API that runs over the Model Context Protocol (MCP) transport to control a local Codex engine.
This document describes LLMXs experimental MCP server interface: a JSONRPC API that runs over the Model Context Protocol (MCP) transport to control a local LLMX engine.
- Status: experimental and subject to change without notice
- Server binary: `codex mcp-server` (or `codex-mcp-server`)
- Server binary: `llmx mcp-server` (or `llmx-mcp-server`)
- Transport: standard MCP over stdio (JSONRPC 2.0, linedelimited)
## Overview
Codex exposes a small set of MCPcompatible methods to create and manage conversations, send user input, receive live events, and handle approval prompts. The types are defined in `protocol/src/mcp_protocol.rs` and reused by the MCP server implementation in `mcp-server/`.
LLMX exposes a small set of MCPcompatible methods to create and manage conversations, send user input, receive live events, and handle approval prompts. The types are defined in `protocol/src/mcp_protocol.rs` and reused by the MCP server implementation in `mcp-server/`.
At a glance:
- Conversations
- `newConversation` → start a Codex session
- `newConversation` → start an LLMX session
- `sendUserMessage` / `sendUserTurn` → send user input into a conversation
- `interruptConversation` → stop the current turn
- `listConversations`, `resumeConversation`, `archiveConversation`
@@ -29,25 +29,25 @@ At a glance:
- `applyPatchApproval`, `execCommandApproval`
- Notifications (server → client)
- `loginChatGptComplete`, `authStatusChange`
- `codex/event` stream with agent events
- `llmx/event` stream with agent events
See code for full type definitions and exact shapes: `protocol/src/mcp_protocol.rs`.
## Starting the server
Run Codex as an MCP server and connect an MCP client:
Run LLMX as an MCP server and connect an MCP client:
```bash
codex mcp-server | your_mcp_client
llmx mcp-server | your_mcp_client
```
For a simple inspection UI, you can also try:
```bash
npx @modelcontextprotocol/inspector codex mcp-server
npx @modelcontextprotocol/inspector llmx mcp-server
```
Use the separate `codex mcp` subcommand to manage configured MCP server launchers in `config.toml`.
Use the separate `llmx mcp` subcommand to manage configured MCP server launchers in `config.toml`.
## Conversations
@@ -55,7 +55,7 @@ Start a new session with optional overrides:
Request `newConversation` params (subset):
- `model`: string model id (e.g. "o3", "gpt-5", "gpt-5-codex")
- `profile`: optional named profile
- `cwd`: optional working directory
- `approvalPolicy`: `untrusted` | `on-request` | `on-failure` | `never`
@@ -78,7 +78,7 @@ List/resume/archive: `listConversations`, `resumeConversation`, `archiveConversa
## Models
Fetch the catalog of models available in the current Codex build with `model/list`. The request accepts optional pagination inputs:
Fetch the catalog of models available in the current LLMX build with `model/list`. The request accepts optional pagination inputs:
- `pageSize` number of models to return (defaults to a server-selected value)
- `cursor` opaque string from the previous responses `nextCursor`
@@ -98,14 +98,14 @@ Each response yields:
While a conversation runs, the server sends notifications:
- `codex/event` with the serialized Codex event payload. The shape matches `core/src/protocol.rs`s `Event` and `EventMsg` types. Some notifications include a `_meta.requestId` to correlate with the originating request.
- `llmx/event` with the serialized LLMX event payload. The shape matches `core/src/protocol.rs`s `Event` and `EventMsg` types. Some notifications include a `_meta.requestId` to correlate with the originating request.
- Auth notifications via method names `loginChatGptComplete` and `authStatusChange`.
Clients should render events and, when present, surface approval requests (see next section).
## Approvals (server → client)
When Codex needs approval to apply changes or run commands, the server issues JSONRPC requests to the client:
When LLMX needs approval to apply changes or run commands, the server issues JSONRPC requests to the client:
- `applyPatchApproval { conversationId, callId, fileChanges, reason?, grantRoot? }`
- `execCommandApproval { conversationId, callId, command, cwd, reason? }`
@@ -131,10 +131,10 @@ Server responds:
Then send input:
```json
{ "jsonrpc": "2.0", "id": 2, "method": "sendUserMessage", "params": { "conversationId": "c7b0…", "items": [{ "type": "text", "text": "Hello Codex" }] } }
{ "jsonrpc": "2.0", "id": 2, "method": "sendUserMessage", "params": { "conversationId": "c7b0…", "items": [{ "type": "text", "text": "Hello LLMX" }] } }
```
While processing, the server emits `codex/event` notifications containing agent output, approvals, and status updates.
While processing, the server emits `llmx/event` notifications containing agent output, approvals, and status updates.
## Compatibility and stability


@@ -6,22 +6,22 @@ NOTE: The code might not completely match this spec. There are a few minor chang
## Entities
These are entities that exist on the codex backend. The intent of this section is to establish vocabulary and construct a shared mental model for the `Codex` core system.

These are entities that exist on the llmx backend. The intent of this section is to establish vocabulary and construct a shared mental model for the `LLMX` core system.
0. `Model`
- In our case, this is the Responses REST API
1. `Codex`
- The core engine of codex
1. `LLMX`
- The core engine of llmx
- Runs locally, either in a background thread or separate process
- Communicated to via a queue pair SQ (Submission Queue) / EQ (Event Queue)
- Takes user input, makes requests to the `Model`, executes commands and applies patches.
2. `Session`
- The `Codex`'s current configuration and state
- `Codex` starts with no `Session`, and it is initialized by `Op::ConfigureSession`, which should be the first message sent by the UI.
- The `LLMX`'s current configuration and state
- `LLMX` starts with no `Session`, and it is initialized by `Op::ConfigureSession`, which should be the first message sent by the UI.
- The current `Session` can be reconfigured with additional `Op::ConfigureSession` calls.
- Any running execution is aborted when the session is reconfigured.
3. `Task`
- A `Task` is `Codex` executing work in response to user input.
- A `Task` is `LLMX` executing work in response to user input.
- `Session` has at most one `Task` running at a time.
- Receiving `Op::UserInput` starts a `Task`
- Consists of a series of `Turn`s
@@ -35,28 +35,28 @@ These are entities exit on the codex backend. The intent of this section is to e
- One cycle of iteration in a `Task`, consists of:
- A request to the `Model` - (initially) prompt + (optional) `last_response_id`, or (in loop) previous turn output
- The `Model` streams responses back in an SSE, which are collected until a "completed" message arrives and the SSE terminates
- `Codex` then executes command(s), applies patch(es), and outputs message(s) returned by the `Model`
- `LLMX` then executes command(s), applies patch(es), and outputs message(s) returned by the `Model`
- Pauses to request approval when necessary
- The output of one `Turn` is the input to the next `Turn`
- A `Turn` yielding no output terminates the `Task`
The term "UI" is used to refer to the application driving `Codex`. This may be the CLI / TUI chat-like interface that users operate, or it may be a GUI interface like a VSCode extension. The UI is external to `Codex`, as `Codex` is intended to be operated by arbitrary UI implementations.
The term "UI" is used to refer to the application driving `LLMX`. This may be the CLI / TUI chat-like interface that users operate, or it may be a GUI interface like a VSCode extension. The UI is external to `LLMX`, as `LLMX` is intended to be operated by arbitrary UI implementations.
When a `Turn` completes, the `response_id` from the `Model`'s final `response.completed` message is stored in the `Session` state to resume the thread given the next `Op::UserInput`. The `response_id` is also returned in the `EventMsg::TurnComplete` to the UI, which can be used to fork the thread from an earlier point by providing it in the `Op::UserInput`.
Since only 1 `Task` can be run at a time, for parallel tasks it is recommended that a single `Codex` be run for each thread of work.
Since only 1 `Task` can be run at a time, for parallel tasks it is recommended that a single `LLMX` be run for each thread of work.
## Interface
- `Codex`
- `LLMX`
- Communicates with UI via a `SQ` (Submission Queue) and `EQ` (Event Queue).
- `Submission`
- These are messages sent on the `SQ` (UI -> `Codex`)
- These are messages sent on the `SQ` (UI -> `LLMX`)
- Has an string ID provided by the UI, referred to as `sub_id`
- `Op` refers to the enum of all possible `Submission` payloads
- This enum is `non_exhaustive`; variants can be added at future dates
- `Event`
- These are messages sent on the `EQ` (`Codex` -> UI)
- These are messages sent on the `EQ` (`LLMX` -> UI)
- Each `Event` has a non-unique ID, matching the `sub_id` from the `Op::UserInput` that started the current task.
- `EventMsg` refers to the enum of all possible `Event` payloads
- This enum is `non_exhaustive`; variants can be added at future dates
@@ -98,16 +98,16 @@ sequenceDiagram
participant user as User
end
box Daemon
participant codex as Codex
participant llmx as LLMX
participant session as Session
participant task as Task
end
box Rest API
participant agent as Model
end
user->>codex: Op::ConfigureSession
codex-->>session: create session
codex->>user: Event::SessionConfigured
user->>llmx: Op::ConfigureSession
llmx-->>session: create session
llmx->>user: Event::SessionConfigured
user->>session: Op::UserInput
session-->>+task: start task
task->>user: Event::TaskStarted


@@ -46,7 +46,7 @@ Of note:
- `foo` is tagged as a `ReadableFile`, so the caller should resolve `foo` relative to `getcwd()` and `realpath` it (as it may be a symlink) to determine whether `foo` is safe to read.
- While the specified executable is `ls`, `"system_path"` offers `/bin/ls` and `/usr/bin/ls` as viable alternatives to avoid using whatever `ls` happens to appear first on the user's `$PATH`. If either exists on the host, it is recommended to use it as the first argument to `execv(3)` instead of `ls`.
Further, "safety" in this system is not a guarantee that the command will execute successfully. As an example, `cat /Users/mbolin/code/codex/README.md` may be considered "safe" if the system has decided the agent is allowed to read anything under `/Users/mbolin/code/codex`, but it will fail at runtime if `README.md` does not exist. (Though this is "safe" in that the agent did not read any files that it was not authorized to read.)
Further, "safety" in this system is not a guarantee that the command will execute successfully. As an example, `cat /Users/mbolin/code/llmx/README.md` may be considered "safe" if the system has decided the agent is allowed to read anything under `/Users/mbolin/code/llmx`, but it will fail at runtime if `README.md` does not exist. (Though this is "safe" in that the agent did not read any files that it was not authorized to read.)
## Policy


@@ -1,5 +1,5 @@
# codex_file_search
Fast fuzzy file search tool for Codex.
Fast fuzzy file search tool for LLMX.
Uses <https://crates.io/crates/ignore> under the hood (which is what `ripgrep` uses) to traverse a directory (while honoring `.gitignore`, etc.) to produce the list of files to search and then uses <https://crates.io/crates/nucleo-matcher> to fuzzy-match the user supplied `PATTERN` against the corpus.
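A condensed sketch of that two-step pipeline (walk, then fuzzy-match) might look like the following; the real tool's option handling and scoring differ:

```rust
// Sketch of the two-step approach described above, using the `ignore` and
// `nucleo-matcher` crates.
use ignore::WalkBuilder;
use nucleo_matcher::pattern::{CaseMatching, Normalization, Pattern};
use nucleo_matcher::{Config, Matcher};

fn search(root: &str, pattern: &str) -> Vec<(String, u32)> {
    // Step 1: collect candidate paths while honoring .gitignore and friends.
    let files: Vec<String> = WalkBuilder::new(root)
        .build()
        .filter_map(Result::ok)
        .filter(|e| e.file_type().is_some_and(|t| t.is_file()))
        .map(|e| e.path().display().to_string())
        .collect();
    // Step 2: fuzzy-match PATTERN against the corpus, ranked by score.
    let mut matcher = Matcher::new(Config::DEFAULT);
    Pattern::parse(pattern, CaseMatching::Ignore, Normalization::Smart)
        .match_list(files, &mut matcher)
}
```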

View File

@@ -1,8 +1,8 @@
# codex-linux-sandbox
# llmx-linux-sandbox
This crate is responsible for producing:
- a `codex-linux-sandbox` standalone executable for Linux that is bundled with the Node.js version of the Codex CLI
- a `llmx-linux-sandbox` standalone executable for Linux that is bundled with the Node.js version of the LLMX CLI
- a lib crate that exposes the business logic of the executable as `run_main()` so that
- the `codex-exec` CLI can check if its arg0 is `codex-linux-sandbox` and, if so, execute as if it were `codex-linux-sandbox`
- this should also be true of the `codex` multitool CLI
- the `llmx-exec` CLI can check if its arg0 is `llmx-linux-sandbox` and, if so, execute as if it were `llmx-linux-sandbox`
- this should also be true of the `llmx` multitool CLI
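The arg0 dispatch might look roughly like this; apart from `run_main()`, which this README names, the crate path and surrounding code are illustrative:

```rust
// Illustrative arg0 dispatch; `llmx_linux_sandbox::run_main()` is named in
// this README, everything else here is a sketch.
use std::path::Path;

fn main() {
    let arg0 = std::env::args().next().unwrap_or_default();
    let program = Path::new(&arg0)
        .file_stem()
        .and_then(|s| s.to_str())
        .unwrap_or("");
    if program == "llmx-linux-sandbox" {
        // Invoked under the sandbox name: behave as the sandbox binary.
        llmx_linux_sandbox::run_main();
    } else {
        // Otherwise fall through to the normal `llmx-exec` entry point
        // (hypothetical function name).
        run_llmx_exec();
    }
}

fn run_llmx_exec() { /* normal CLI behavior */ }
```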

View File

@@ -1,4 +1,4 @@
# codex-process-hardening
# llmx-process-hardening
This crate provides `pre_main_hardening()`, which is designed to be called pre-`main()` (using `#[ctor::ctor]`) to perform various process hardening steps, such as

View File

@@ -1,6 +1,6 @@
# codex-protocol
# llmx-protocol
This crate defines the "types" for the protocol used by Codex CLI, which includes both "internal types" for communication between `codex-core` and `codex-tui`, as well as "external types" used with `codex app-server`.
This crate defines the "types" for the protocol used by LLMX CLI, which includes both "internal types" for communication between `llmx-core` and `llmx-tui`, as well as "external types" used with `llmx app-server`.
This crate should have minimal dependencies.

View File

@@ -1,23 +1,23 @@
# codex-responses-api-proxy
# llmx-responses-api-proxy
A strict HTTP proxy that only forwards `POST` requests to `/v1/responses` to the OpenAI API (`https://api.openai.com`), injecting the `Authorization: Bearer $OPENAI_API_KEY` header. Everything else is rejected with `403 Forbidden`.
## Expected Usage
**IMPORTANT:** `codex-responses-api-proxy` is designed to be run by a privileged user with access to `OPENAI_API_KEY` so that an unprivileged user cannot inspect or tamper with the process. Though if `--http-shutdown` is specified, an unprivileged user _can_ make a `GET` request to `/shutdown` to shutdown the server, as an unprivileged user could not send `SIGTERM` to kill the process.
**IMPORTANT:** `llmx-responses-api-proxy` is designed to be run by a privileged user with access to `OPENAI_API_KEY` so that an unprivileged user cannot inspect or tamper with the process. If `--http-shutdown` is specified, however, an unprivileged user _can_ make a `GET` request to `/shutdown` to shut down the server, since an unprivileged user cannot send `SIGTERM` to kill the process.
A privileged user (i.e., `root` or a user with `sudo`) who has access to `OPENAI_API_KEY` would run the following to start the server, as `codex-responses-api-proxy` reads the auth token from `stdin`:
A privileged user (i.e., `root` or a user with `sudo`) who has access to `OPENAI_API_KEY` would run the following to start the server, as `llmx-responses-api-proxy` reads the auth token from `stdin`:
```shell
printenv OPENAI_API_KEY | env -u OPENAI_API_KEY codex-responses-api-proxy --http-shutdown --server-info /tmp/server-info.json
printenv OPENAI_API_KEY | env -u OPENAI_API_KEY llmx-responses-api-proxy --http-shutdown --server-info /tmp/server-info.json
```
A non-privileged user would then run Codex as follows, specifying the `model_provider` dynamically:
A non-privileged user would then run LLMX as follows, specifying the `model_provider` dynamically:
```shell
PROXY_PORT=$(jq .port /tmp/server-info.json)
PROXY_BASE_URL="http://127.0.0.1:${PROXY_PORT}"
codex exec -c "model_providers.openai-proxy={ name = 'OpenAI Proxy', base_url = '${PROXY_BASE_URL}/v1', wire_api='responses' }" \
llmx exec -c "model_providers.openai-proxy={ name = 'OpenAI Proxy', base_url = '${PROXY_BASE_URL}/v1', wire_api='responses' }" \
-c model_provider="openai-proxy" \
'Your prompt here'
```
@@ -30,7 +30,7 @@ curl --fail --silent --show-error "${PROXY_BASE_URL}/shutdown"
## Behavior
- Reads the API key from `stdin`. All callers should pipe the key in (for example, `printenv OPENAI_API_KEY | codex-responses-api-proxy`).
- Reads the API key from `stdin`. All callers should pipe the key in (for example, `printenv OPENAI_API_KEY | llmx-responses-api-proxy`).
- Formats the header value as `Bearer <key>` and attempts to `mlock(2)` the memory holding that header so it is not swapped to disk (see the sketch after this list).
- Listens on the provided port or an ephemeral port if `--port` is not specified.
- Accepts exactly `POST /v1/responses` (no query string). The request body is forwarded to `https://api.openai.com/v1/responses` with `Authorization: Bearer <key>` set. All original request headers (except any incoming `Authorization`) are forwarded upstream, with `Host` overridden to `api.openai.com`. For other requests, it responds with `403`.
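The locking step can be sketched with the `libc` crate; this is a sketch of the technique, not the crate's actual code:

```rust
// Illustrative sketch: pin the header bytes in RAM so they cannot be swapped
// to disk. Errors (e.g., RLIMIT_MEMLOCK exceeded) surface as io::Error.
fn lock_in_memory(bytes: &[u8]) -> std::io::Result<()> {
    let ret = unsafe { libc::mlock(bytes.as_ptr().cast::<libc::c_void>(), bytes.len()) };
    if ret == 0 {
        Ok(())
    } else {
        Err(std::io::Error::last_os_error())
    }
}
```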
@@ -40,19 +40,19 @@ curl --fail --silent --show-error "${PROXY_BASE_URL}/shutdown"
## CLI
```
codex-responses-api-proxy [--port <PORT>] [--server-info <FILE>] [--http-shutdown] [--upstream-url <URL>]
llmx-responses-api-proxy [--port <PORT>] [--server-info <FILE>] [--http-shutdown] [--upstream-url <URL>]
```
- `--port <PORT>`: Port to bind on `127.0.0.1`. If omitted, an ephemeral port is chosen.
- `--server-info <FILE>`: If set, the proxy writes a single line of JSON with `{ "port": <PORT>, "pid": <PID> }` once listening (parsed in the sketch after this list).
- `--http-shutdown`: If set, enables `GET /shutdown` to exit the process with code `0`.
- `--upstream-url <URL>`: Absolute URL to forward requests to. Defaults to `https://api.openai.com/v1/responses`.
- Authentication is fixed to `Authorization: Bearer <key>` to match the Codex CLI expectations.
- Authentication is fixed to `Authorization: Bearer <key>` to match the LLMX CLI expectations.
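For callers that prefer not to shell out to `jq`, a hedged Rust sketch of reading the server-info file (assuming the `serde` and `serde_json` crates):

```rust
// Illustrative: parse { "port": <PORT>, "pid": <PID> } as documented above.
#[derive(serde::Deserialize)]
struct ServerInfo {
    port: u16,
    pid: u32,
}

fn read_server_info(path: &str) -> std::io::Result<ServerInfo> {
    let contents = std::fs::read_to_string(path)?;
    serde_json::from_str(&contents).map_err(std::io::Error::from)
}
```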
For Azure, for example (ensure your deployment accepts `Authorization: Bearer <key>`):
```shell
printenv AZURE_OPENAI_API_KEY | env -u AZURE_OPENAI_API_KEY codex-responses-api-proxy \
printenv AZURE_OPENAI_API_KEY | env -u AZURE_OPENAI_API_KEY llmx-responses-api-proxy \
--http-shutdown \
--server-info /tmp/server-info.json \
--upstream-url "https://YOUR_PROJECT_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT/responses?api-version=2025-04-01-preview"
@@ -67,7 +67,7 @@ printenv AZURE_OPENAI_API_KEY | env -u AZURE_OPENAI_API_KEY codex-responses-api-
Care is taken to restrict access to, and copying of, the value of `OPENAI_API_KEY` retained in memory:
- We leverage [`codex_process_hardening`](https://github.com/openai/codex/blob/main/codex-rs/process-hardening/README.md) so `codex-responses-api-proxy` is run with standard process-hardening techniques.
- We leverage [`codex_process_hardening`](https://github.com/valknar/llmx/blob/main/llmx-rs/process-hardening/README.md) so `llmx-responses-api-proxy` is run with standard process-hardening techniques.
- At startup, we allocate a `1024`-byte buffer on the stack and copy `"Bearer "` into the start of the buffer.
- We then read from `stdin`, copying the contents into the buffer after `"Bearer "`.
- After verifying the key matches `/^[a-zA-Z0-9_-]+$/` (and does not exceed the buffer), we create a `String` from that buffer (so the data is now on the heap).
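Putting those steps together, a simplified sketch (buffer handling is approximated; the real code is more careful about zeroing and error paths):

```rust
// Simplified sketch of the stdin key handling described above.
use std::io::Read;

fn read_auth_header() -> std::io::Result<String> {
    const PREFIX: &[u8] = b"Bearer ";
    let mut buf = [0u8; 1024];
    buf[..PREFIX.len()].copy_from_slice(PREFIX);
    // Read the key from stdin into the buffer, after the "Bearer " prefix.
    let n = std::io::stdin().read(&mut buf[PREFIX.len()..])?;
    let key = std::str::from_utf8(&buf[PREFIX.len()..PREFIX.len() + n])
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))?
        .trim_end();
    // Enforce /^[a-zA-Z0-9_-]+$/ before promoting the header to the heap.
    let valid = !key.is_empty()
        && key.bytes().all(|b| b.is_ascii_alphanumeric() || b == b'_' || b == b'-');
    if !valid {
        return Err(std::io::Error::new(std::io::ErrorKind::InvalidData, "malformed key"));
    }
    Ok(format!("Bearer {key}"))
}
```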

View File

@@ -1,13 +1,13 @@
# @openai/codex-responses-api-proxy
# @llmx/llmx-responses-api-proxy
<p align="center"><code>npm i -g @openai/codex-responses-api-proxy</code> to install <code>codex-responses-api-proxy</code></p>
<p align="center"><code>npm i -g @llmx/llmx-responses-api-proxy</code> to install <code>llmx-responses-api-proxy</code></p>
This package distributes the prebuilt [Codex Responses API proxy binary](https://github.com/openai/codex/tree/main/codex-rs/responses-api-proxy) for macOS, Linux, and Windows.
This package distributes the prebuilt [LLMX Responses API proxy binary](https://github.com/valknar/llmx/tree/main/llmx-rs/responses-api-proxy) for macOS, Linux, and Windows.
To see available options, run:
```
node ./bin/codex-responses-api-proxy.js --help
node ./bin/llmx-responses-api-proxy.js --help
```
Refer to [`codex-rs/responses-api-proxy/README.md`](https://github.com/openai/codex/blob/main/codex-rs/responses-api-proxy/README.md) for detailed documentation.
Refer to [`llmx-rs/responses-api-proxy/README.md`](https://github.com/valknar/llmx/blob/main/llmx-rs/responses-api-proxy/README.md) for detailed documentation.

View File

@@ -1,4 +1,4 @@
# codex-stdio-to-uds
# llmx-stdio-to-uds
Traditionally, there are two transport mechanisms for an MCP server: stdio and HTTP.
@@ -10,7 +10,7 @@ This crate helps enable a third, which is UNIX domain socket, because it has the
To that end, this crate provides an adapter between a UDS and stdio. The idea is that someone could start an MCP server that communicates over `/tmp/mcp.sock`. Then the user could specify this on the fly like so:
```
codex --config mcp_servers.example={command="codex-stdio-to-uds",args=["/tmp/mcp.sock"]}
llmx --config mcp_servers.example={command="llmx-stdio-to-uds",args=["/tmp/mcp.sock"]}
```
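Conceptually the adapter is just a bidirectional byte pump between stdio and the socket; a minimal Unix-only sketch (not the crate's actual code):

```rust
// Minimal Unix-only sketch of a stdio<->UDS pump; the real crate handles
// shutdown and errors more carefully.
use std::io::{self, copy};
use std::os::unix::net::UnixStream;

fn main() -> io::Result<()> {
    let path = std::env::args()
        .nth(1)
        .expect("usage: llmx-stdio-to-uds <SOCKET_PATH>");
    let mut to_socket = UnixStream::connect(path)?;
    let mut from_socket = to_socket.try_clone()?;
    // Forward the MCP client's stdin to the socket on a background thread...
    let pump_in = std::thread::spawn(move || {
        let _ = copy(&mut io::stdin().lock(), &mut to_socket);
    });
    // ...while streaming the server's replies back to stdout here.
    copy(&mut from_socket, &mut io::stdout().lock())?;
    let _ = pump_in.join();
    Ok(())
}
```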
Unfortunately, the Rust standard library does not provide support for UNIX domain sockets on Windows today even though support was added in October 2018 in Windows 10:

View File

@@ -10,7 +10,7 @@
- **User input tips, selection, and status indicators:** Use ANSI `cyan`.
- **Success and additions:** Use ANSI `green`.
- **Errors, failures and deletions:** Use ANSI `red`.
- **Codex:** Use ANSI `magenta`.
- **LLMX:** Use ANSI `magenta`.
# Avoid

View File

@@ -1,4 +1,4 @@
# codex-git
# llmx-git
Helpers for interacting with git, including patch application and worktree
snapshot utilities.

View File

@@ -1,13 +1,13 @@
# Codex SDK
# LLMX SDK
Embed the Codex agent in your workflows and apps.
Embed the LLMX agent in your workflows and apps.
The TypeScript SDK wraps the bundled `codex` binary. It spawns the CLI and exchanges JSONL events over stdin/stdout.
The TypeScript SDK wraps the bundled `llmx` binary. It spawns the CLI and exchanges JSONL events over stdin/stdout.
## Installation
```bash
npm install @openai/codex-sdk
npm install @llmx/llmx-sdk
```
Requires Node.js 18+.
@@ -15,10 +15,10 @@ Requires Node.js 18+.
## Quickstart
```typescript
import { Codex } from "@openai/codex-sdk";
import { LLMX } from "@llmx/llmx-sdk";
const codex = new Codex();
const thread = codex.startThread();
const llmx = new LLMX();
const thread = llmx.startThread();
const turn = await thread.run("Diagnose the test failure and propose a fix");
console.log(turn.finalResponse);
@@ -52,7 +52,7 @@ for await (const event of events) {
### Structured output
The Codex agent can produce a JSON response that conforms to a specified schema. The schema can be provided for each turn as a plain JSON object.
The LLMX agent can produce a JSON response that conforms to a specified schema. The schema can be provided for each turn as a plain JSON object.
```typescript
const schema = {
@@ -85,7 +85,7 @@ console.log(turn.finalResponse);
### Attaching images
Provide structured input entries when you need to include images alongside text. Text entries are concatenated into the final prompt while image entries are passed to the Codex CLI via `--image`.
Provide structured input entries when you need to include images alongside text. Text entries are concatenated into the final prompt while image entries are passed to the LLMX CLI via `--image`.
```typescript
const turn = await thread.run([
@@ -97,20 +97,20 @@ const turn = await thread.run([
### Resuming an existing thread
Threads are persisted in `~/.codex/sessions`. If you lose the in-memory `Thread` object, reconstruct it with `resumeThread()` and keep going.
Threads are persisted in `~/.llmx/sessions`. If you lose the in-memory `Thread` object, reconstruct it with `resumeThread()` and keep going.
```typescript
const savedThreadId = process.env.CODEX_THREAD_ID!;
const thread = codex.resumeThread(savedThreadId);
const thread = llmx.resumeThread(savedThreadId);
await thread.run("Implement the fix");
```
### Working directory controls
Codex runs in the current working directory by default. To avoid unrecoverable errors, Codex requires the working directory to be a Git repository. You can skip the Git repository check by passing the `skipGitRepoCheck` option when creating a thread.
LLMX runs in the current working directory by default. To avoid unrecoverable errors, LLMX requires the working directory to be a Git repository. You can skip the Git repository check by passing the `skipGitRepoCheck` option when creating a thread.
```typescript
const thread = codex.startThread({
const thread = llmx.startThread({
workingDirectory: "/path/to/project",
skipGitRepoCheck: true,
});