Phase 5: Configuration & Documentation
Updated all documentation and configuration files.

Documentation changes:
- Updated README.md to describe LLMX as a LiteLLM-powered fork
- Updated CLAUDE.md with LiteLLM integration details
- Updated 50+ markdown files across docs/, llmx-rs/, llmx-cli/, sdk/
- Changed all references: codex → llmx, Codex → LLMX
- Updated package references: @openai/codex → @llmx/llmx
- Updated repository URLs: github.com/openai/codex → github.com/valknar/llmx

Configuration changes:
- Updated .github/dependabot.yaml
- Updated .github workflow files
- Updated cliff.toml (changelog configuration)
- Updated Cargo.toml comments

Key branding updates:
- Project description: "coding agent from OpenAI" → "coding agent powered by LiteLLM"
- Added attribution to the original OpenAI Codex project
- Documented LiteLLM integration benefits

Files changed: 51 files (559 insertions, 559 deletions)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@@ -4,7 +4,7 @@ _Based on the Apache Software Foundation Individual CLA v 2.2._

 By commenting **“I have read the CLA Document and I hereby sign the CLA”**
 on a Pull Request, **you (“Contributor”) agree to the following terms** for any
-past and future “Contributions” submitted to the **OpenAI Codex CLI project
+past and future “Contributions” submitted to the **OpenAI LLMX CLI project
 (the “Project”)**.

 ---

@@ -1,6 +1,6 @@
 ## Advanced

-If you already lean on Codex every day and just need a little more control, this page collects the knobs you are most likely to reach for: tweak defaults in [Config](./config.md), add extra tools through [Model Context Protocol support](#model-context-protocol), and script full runs with [`codex exec`](./exec.md). Jump to the section you need and keep building.
+If you already lean on LLMX every day and just need a little more control, this page collects the knobs you are most likely to reach for: tweak defaults in [Config](./config.md), add extra tools through [Model Context Protocol support](#model-context-protocol), and script full runs with [`llmx exec`](./exec.md). Jump to the section you need and keep building.

 ## Config quickstart {#config-quickstart}

@@ -8,62 +8,62 @@ Most day-to-day tuning lives in `config.toml`: set approval + sandbox presets, p

 ## Tracing / verbose logging {#tracing-verbose-logging}

-Because Codex is written in Rust, it honors the `RUST_LOG` environment variable to configure its logging behavior.
+Because LLMX is written in Rust, it honors the `RUST_LOG` environment variable to configure its logging behavior.

-The TUI defaults to `RUST_LOG=codex_core=info,codex_tui=info,codex_rmcp_client=info` and log messages are written to `~/.codex/log/codex-tui.log`, so you can leave the following running in a separate terminal to monitor log messages as they are written:
+The TUI defaults to `RUST_LOG=codex_core=info,codex_tui=info,codex_rmcp_client=info` and log messages are written to `~/.llmx/log/llmx-tui.log`, so you can leave the following running in a separate terminal to monitor log messages as they are written:

 ```bash
-tail -F ~/.codex/log/codex-tui.log
+tail -F ~/.llmx/log/llmx-tui.log
 ```

-By comparison, the non-interactive mode (`codex exec`) defaults to `RUST_LOG=error`, but messages are printed inline, so there is no need to monitor a separate file.
+By comparison, the non-interactive mode (`llmx exec`) defaults to `RUST_LOG=error`, but messages are printed inline, so there is no need to monitor a separate file.

 See the Rust documentation on [`RUST_LOG`](https://docs.rs/env_logger/latest/env_logger/#enabling-logging) for more information on the configuration options.

## Model Context Protocol (MCP) {#model-context-protocol}

-The Codex CLI and IDE extension is a MCP client which means that it can be configured to connect to MCP servers. For more information, refer to the [`config docs`](./config.md#mcp-integration).
+The LLMX CLI and IDE extension is an MCP client, which means that it can be configured to connect to MCP servers. For more information, refer to the [`config docs`](./config.md#mcp-integration).

-## Using Codex as an MCP Server {#mcp-server}
+## Using LLMX as an MCP Server {#mcp-server}

-The Codex CLI can also be run as an MCP _server_ via `codex mcp-server`. For example, you can use `codex mcp-server` to make Codex available as a tool inside of a multi-agent framework like the OpenAI [Agents SDK](https://platform.openai.com/docs/guides/agents). Use `codex mcp` separately to add/list/get/remove MCP server launchers in your configuration.
+The LLMX CLI can also be run as an MCP _server_ via `llmx mcp-server`. For example, you can use `llmx mcp-server` to make LLMX available as a tool inside of a multi-agent framework like the OpenAI [Agents SDK](https://platform.openai.com/docs/guides/agents). Use `llmx mcp` separately to add/list/get/remove MCP server launchers in your configuration.

-### Codex MCP Server Quickstart {#mcp-server-quickstart}
+### LLMX MCP Server Quickstart {#mcp-server-quickstart}

-You can launch a Codex MCP server with the [Model Context Protocol Inspector](https://modelcontextprotocol.io/legacy/tools/inspector):
+You can launch an LLMX MCP server with the [Model Context Protocol Inspector](https://modelcontextprotocol.io/legacy/tools/inspector):

 ```bash
-npx @modelcontextprotocol/inspector codex mcp-server
+npx @modelcontextprotocol/inspector llmx mcp-server
 ```

Send a `tools/list` request and you will see that there are two tools available:

-**`codex`** - Run a Codex session. Accepts configuration parameters matching the Codex Config struct. The `codex` tool takes the following properties:
+**`llmx`** - Run an LLMX session. Accepts configuration parameters matching the LLMX Config struct. The `llmx` tool takes the following properties:

 | Property | Type | Description |
 | ----------------------- | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| **`prompt`** (required) | string | The initial user prompt to start the Codex conversation. |
+| **`prompt`** (required) | string | The initial user prompt to start the LLMX conversation. |
 | `approval-policy` | string | Approval policy for shell commands generated by the model: `untrusted`, `on-failure`, `on-request`, `never`. |
 | `base-instructions` | string | The set of instructions to use instead of the default ones. |
-| `config` | object | Individual [config settings](https://github.com/openai/codex/blob/main/docs/config.md#config) that will override what is in `$CODEX_HOME/config.toml`. |
+| `config` | object | Individual [config settings](https://github.com/valknar/llmx/blob/main/docs/config.md#config) that will override what is in `$CODEX_HOME/config.toml`. |
 | `cwd` | string | Working directory for the session. If relative, resolved against the server process's current directory. |
 | `model` | string | Optional override for the model name (e.g. `o3`, `o4-mini`). |
 | `profile` | string | Configuration profile from `config.toml` to specify default options. |
 | `sandbox` | string | Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`. |

-**`codex-reply`** - Continue a Codex session by providing the conversation id and prompt. The `codex-reply` tool takes the following properties:
+**`llmx-reply`** - Continue an LLMX session by providing the conversation id and prompt. The `llmx-reply` tool takes the following properties:

 | Property | Type | Description |
 | ------------------------------- | ------ | -------------------------------------------------------- |
-| **`prompt`** (required) | string | The next user prompt to continue the Codex conversation. |
+| **`prompt`** (required) | string | The next user prompt to continue the LLMX conversation. |
 | **`conversationId`** (required) | string | The id of the conversation to continue. |

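For orientation, a `tools/call` request against the `llmx` tool uses MCP's standard JSON-RPC envelope. This sketch uses illustrative argument values, not defaults:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "llmx",
    "arguments": {
      "prompt": "Write a unit test for the parser module",
      "approval-policy": "never",
      "sandbox": "workspace-write"
    }
  }
}
```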
### Trying it Out {#mcp-server-trying-it-out}

 > [!TIP]
-> Codex often takes a few minutes to run. To accommodate this, adjust the MCP inspector's Request and Total timeouts to 600000ms (10 minutes) under ⛭ Configuration.
+> LLMX often takes a few minutes to run. To accommodate this, adjust the MCP inspector's Request and Total timeouts to 600000ms (10 minutes) under ⛭ Configuration.

-Use the MCP inspector and `codex mcp-server` to build a simple tic-tac-toe game with the following settings:
+Use the MCP inspector and `llmx mcp-server` to build a simple tic-tac-toe game with the following settings:

 **approval-policy:** never

@@ -71,4 +71,4 @@ Use the MCP inspector and `codex mcp-server` to build a simple tic-tac-toe game

 **sandbox:** workspace-write

-Click "Run Tool" and you should see a list of events emitted from the Codex MCP server as it builds the game.
+Click "Run Tool" and you should see a list of events emitted from the LLMX MCP server as it builds the game.

@@ -1,38 +1,38 @@
 # AGENTS.md Discovery

-Codex uses [`AGENTS.md`](https://agents.md/) files to gather helpful guidance before it starts assisting you. This page explains how those files are discovered and combined, so you can decide where to place your instructions.
+LLMX uses [`AGENTS.md`](https://agents.md/) files to gather helpful guidance before it starts assisting you. This page explains how those files are discovered and combined, so you can decide where to place your instructions.

-## Global Instructions (`~/.codex`)
+## Global Instructions (`~/.llmx`)

-- Codex looks for global guidance in your Codex home directory (usually `~/.codex`; set `CODEX_HOME` to change it). For a quick overview, see the [Memory with AGENTS.md section](../docs/getting-started.md#memory-with-agentsmd) in the getting started guide.
-- If an `AGENTS.override.md` file exists there, it takes priority. If not, Codex falls back to `AGENTS.md`.
-- Only the first non-empty file is used. Other filenames, such as `instructions.md`, have no effect unless Codex is specifically instructed to use them.
-- Whatever Codex finds here stays active for the whole session, and Codex combines it with any project-specific instructions it discovers.
+- LLMX looks for global guidance in your LLMX home directory (usually `~/.llmx`; set `CODEX_HOME` to change it). For a quick overview, see the [Memory with AGENTS.md section](../docs/getting-started.md#memory-with-agentsmd) in the getting started guide.
+- If an `AGENTS.override.md` file exists there, it takes priority. If not, LLMX falls back to `AGENTS.md`.
+- Only the first non-empty file is used. Other filenames, such as `instructions.md`, have no effect unless LLMX is specifically instructed to use them.
+- Whatever LLMX finds here stays active for the whole session, and LLMX combines it with any project-specific instructions it discovers.

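The override-before-fallback lookup described above can be simulated in plain shell; this is a sketch of the documented behavior, not the llmx implementation:

```shell
# Prefer AGENTS.override.md; otherwise fall back to AGENTS.md.
dir=$(mktemp -d)
printf 'fallback\n' > "$dir/AGENTS.md"
printf 'override\n' > "$dir/AGENTS.override.md"
pick() {
  if [ -s "$1/AGENTS.override.md" ]; then
    cat "$1/AGENTS.override.md"
  else
    cat "$1/AGENTS.md"
  fi
}
pick "$dir"   # prints "override"
```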
## Project Instructions (per-repository)

-When you work inside a project, Codex builds on those global instructions by collecting project docs:
+When you work inside a project, LLMX builds on those global instructions by collecting project docs:

 - The search starts at the repository root and continues down to your current directory. If a Git root is not found, only the current directory is checked.
-- In each directory along that path, Codex looks for `AGENTS.override.md` first, then `AGENTS.md`, and then any fallback names listed in your Codex configuration (see [`project_doc_fallback_filenames`](../docs/config.md#project_doc_fallback_filenames)). At most one file per directory is included.
+- In each directory along that path, LLMX looks for `AGENTS.override.md` first, then `AGENTS.md`, and then any fallback names listed in your LLMX configuration (see [`project_doc_fallback_filenames`](../docs/config.md#project_doc_fallback_filenames)). At most one file per directory is included.
 - Files are read in order from root to leaf and joined together with blank lines. Empty files are skipped, and very large files are truncated once the combined size reaches 32 KiB (the default [`project_doc_max_bytes`](../docs/config.md#project_doc_max_bytes) limit). If you need more space, split guidance across nested directories or raise the limit in your configuration.

 ## How They Come Together

-Before Codex gets to work, the instructions are ingested in precedence order: global guidance from `~/.codex` comes first, then each project doc from the repository root down to your current directory. Guidance in deeper directories overrides earlier layers, so the most specific file controls the final behavior.
+Before LLMX gets to work, the instructions are ingested in precedence order: global guidance from `~/.llmx` comes first, then each project doc from the repository root down to your current directory. Guidance in deeper directories overrides earlier layers, so the most specific file controls the final behavior.

 ### Priority Summary

 1. Global `AGENTS.override.md` (if present), otherwise global `AGENTS.md`.
 2. For each directory from the repository root to your working directory: `AGENTS.override.md`, then `AGENTS.md`, then configured fallback names.

-Only these filenames are considered. To use a different name, add it to the fallback list in your Codex configuration or rename the file accordingly.
+Only these filenames are considered. To use a different name, add it to the fallback list in your LLMX configuration or rename the file accordingly.

 ## Fallback Filenames

-Codex can look for additional instruction filenames beyond the two defaults if you add them to `project_doc_fallback_filenames` in your Codex configuration. Each fallback is checked after `AGENTS.override.md` and `AGENTS.md` in every directory along the search path.
+LLMX can look for additional instruction filenames beyond the two defaults if you add them to `project_doc_fallback_filenames` in your LLMX configuration. Each fallback is checked after `AGENTS.override.md` and `AGENTS.md` in every directory along the search path.

-Example: suppose your configuration lists `["TEAM_GUIDE.md", ".agents.md"]`. Inside each directory Codex will look in this order:
+Example: suppose your configuration lists `["TEAM_GUIDE.md", ".agents.md"]`. Inside each directory LLMX will look in this order:

 1. `AGENTS.override.md`
 2. `AGENTS.md`
@@ -41,7 +41,7 @@ Example: suppose your configuration lists `["TEAM_GUIDE.md", ".agents.md"]`. Ins

 If the repository root contains `TEAM_GUIDE.md` and the `backend/` directory contains `AGENTS.override.md`, the overall instructions will combine the root `TEAM_GUIDE.md` (because no override or default file was present there) with the `backend/AGENTS.override.md` file (which takes precedence over the fallback names).

-You can configure those fallbacks in `~/.codex/config.toml` (or another profile) like this:
+You can configure those fallbacks in `~/.llmx/config.toml` (or another profile) like this:

 ```toml
 project_doc_fallback_filenames = ["TEAM_GUIDE.md", ".agents.md"]
 ```
@@ -5,13 +5,13 @@
 If you prefer to pay-as-you-go, you can still authenticate with your OpenAI API key:

 ```shell
-printenv OPENAI_API_KEY | codex login --with-api-key
+printenv OPENAI_API_KEY | llmx login --with-api-key
 ```

 Alternatively, read from a file:

 ```shell
-codex login --with-api-key < my_key.txt
+llmx login --with-api-key < my_key.txt
 ```

 The legacy `--api-key` flag now exits with an error instructing you to use `--with-api-key` so that the key never appears in shell history or process listings.
@@ -20,11 +20,11 @@ This key must, at minimum, have write access to the Responses API.

 ## Migrating to ChatGPT login from API key

-If you've used the Codex CLI before with usage-based billing via an API key and want to switch to using your ChatGPT plan, follow these steps:
+If you've used the LLMX CLI before with usage-based billing via an API key and want to switch to using your ChatGPT plan, follow these steps:

-1. Update the CLI and ensure `codex --version` is `0.20.0` or later
-2. Delete `~/.codex/auth.json` (on Windows: `C:\\Users\\USERNAME\\.codex\\auth.json`)
-3. Run `codex login` again
+1. Update the CLI and ensure `llmx --version` is `0.20.0` or later
+2. Delete `~/.llmx/auth.json` (on Windows: `C:\\Users\\USERNAME\\.llmx\\auth.json`)
+3. Run `llmx login` again

 ## Connecting on a "Headless" Machine

@@ -32,37 +32,37 @@ Today, the login process entails running a server on `localhost:1455`. If you ar

 ### Authenticate locally and copy your credentials to the "headless" machine

-The easiest solution is likely to run through the `codex login` process on your local machine such that `localhost:1455` _is_ accessible in your web browser. When you complete the authentication process, an `auth.json` file should be available at `$CODEX_HOME/auth.json` (on Mac/Linux, `$CODEX_HOME` defaults to `~/.codex` whereas on Windows, it defaults to `%USERPROFILE%\\.codex`).
+The easiest solution is likely to run through the `llmx login` process on your local machine such that `localhost:1455` _is_ accessible in your web browser. When you complete the authentication process, an `auth.json` file should be available at `$CODEX_HOME/auth.json` (on Mac/Linux, `$CODEX_HOME` defaults to `~/.llmx` whereas on Windows, it defaults to `%USERPROFILE%\\.llmx`).

-Because the `auth.json` file is not tied to a specific host, once you complete the authentication flow locally, you can copy the `$CODEX_HOME/auth.json` file to the headless machine and then `codex` should "just work" on that machine. Note to copy a file to a Docker container, you can do:
+Because the `auth.json` file is not tied to a specific host, once you complete the authentication flow locally, you can copy the `$CODEX_HOME/auth.json` file to the headless machine and then `llmx` should "just work" on that machine. Note that to copy a file to a Docker container, you can do:

 ```shell
 # substitute MY_CONTAINER with the name or id of your Docker container:
 CONTAINER_HOME=$(docker exec MY_CONTAINER printenv HOME)
-docker exec MY_CONTAINER mkdir -p "$CONTAINER_HOME/.codex"
-docker cp auth.json MY_CONTAINER:"$CONTAINER_HOME/.codex/auth.json"
+docker exec MY_CONTAINER mkdir -p "$CONTAINER_HOME/.llmx"
+docker cp auth.json MY_CONTAINER:"$CONTAINER_HOME/.llmx/auth.json"
 ```

whereas if you are `ssh`'d into a remote machine, you likely want to use [`scp`](https://en.wikipedia.org/wiki/Secure_copy_protocol):

 ```shell
-ssh user@remote 'mkdir -p ~/.codex'
-scp ~/.codex/auth.json user@remote:~/.codex/auth.json
+ssh user@remote 'mkdir -p ~/.llmx'
+scp ~/.llmx/auth.json user@remote:~/.llmx/auth.json
 ```

 or try this one-liner:

 ```shell
-ssh user@remote 'mkdir -p ~/.codex && cat > ~/.codex/auth.json' < ~/.codex/auth.json
+ssh user@remote 'mkdir -p ~/.llmx && cat > ~/.llmx/auth.json' < ~/.llmx/auth.json
 ```

### Connecting through VPS or remote

-If you run Codex on a remote machine (VPS/server) without a local browser, the login helper starts a server on `localhost:1455` on the remote host. To complete login in your local browser, forward that port to your machine before starting the login flow:
+If you run LLMX on a remote machine (VPS/server) without a local browser, the login helper starts a server on `localhost:1455` on the remote host. To complete login in your local browser, forward that port to your machine before starting the login flow:

 ```bash
 # From your local machine
 ssh -L 1455:localhost:1455 <user>@<remote-host>
 ```

-Then, in that SSH session, run `codex` and select "Sign in with ChatGPT". When prompted, open the printed URL (it will be `http://localhost:1455/...`) in your local browser. The traffic will be tunneled to the remote server.
+Then, in that SSH session, run `llmx` and select "Sign in with ChatGPT". When prompted, open the printed URL (it will be `http://localhost:1455/...`) in your local browser. The traffic will be tunneled to the remote server.

174 docs/config.md
@@ -1,6 +1,6 @@
 # Config

-Codex configuration gives you fine-grained control over the model, execution environment, and integrations available to the CLI. Use this guide alongside the workflows in [`codex exec`](./exec.md), the guardrails in [Sandbox & approvals](./sandbox.md), and project guidance from [AGENTS.md discovery](./agents_md.md).
+LLMX configuration gives you fine-grained control over the model, execution environment, and integrations available to the CLI. Use this guide alongside the workflows in [`llmx exec`](./exec.md), the guardrails in [Sandbox & approvals](./sandbox.md), and project guidance from [AGENTS.md discovery](./agents_md.md).

 ## Quick navigation

@@ -12,18 +12,18 @@ Codex configuration gives you fine-grained control over the model, execution env
 - [Profiles and overrides](#profiles-and-overrides)
 - [Reference table](#config-reference)

-Codex supports several mechanisms for setting config values:
+LLMX supports several mechanisms for setting config values:

 - Config-specific command-line flags, such as `--model o3` (highest precedence).
 - A generic `-c`/`--config` flag that takes a `key=value` pair, such as `--config model="o3"`.
   - The key can contain dots to set a value deeper than the root, e.g. `--config model_providers.openai.wire_api="chat"`.
   - For consistency with `config.toml`, values are a string in TOML format rather than JSON format, so use `key='{a = 1, b = 2}'` rather than `key='{"a": 1, "b": 2}'`.
-  - The quotes around the value are necessary, as without them your shell would split the config argument on spaces, resulting in `codex` receiving `-c key={a` with (invalid) additional arguments `=`, `1,`, `b`, `=`, `2}`.
+  - The quotes around the value are necessary, as without them your shell would split the config argument on spaces, resulting in `llmx` receiving `-c key={a` with (invalid) additional arguments `=`, `1,`, `b`, `=`, `2}`.
   - Values can contain any TOML object, such as `--config shell_environment_policy.include_only='["PATH", "HOME", "USER"]'`.
   - If `value` cannot be parsed as a valid TOML value, it is treated as a string value. This means that `-c model='"o3"'` and `-c model=o3` are equivalent.
     - In the first case, the value is the TOML string `"o3"`, while in the second the value is `o3`, which is not valid TOML and therefore treated as the TOML string `"o3"`.
   - Because quotes are interpreted by one's shell, `-c key="true"` will be correctly interpreted in TOML as `key = true` (a boolean) and not `key = "true"` (a string). If for some reason you needed the string `"true"`, you would need to use `-c key='"true"'` (note the two sets of quotes).
-- The `$CODEX_HOME/config.toml` configuration file where the `CODEX_HOME` environment value defaults to `~/.codex`. (Note `CODEX_HOME` will also be where logs and other Codex-related information are stored.)
+- The `$CODEX_HOME/config.toml` configuration file where the `CODEX_HOME` environment value defaults to `~/.llmx`. (Note `CODEX_HOME` will also be where logs and other LLMX-related information are stored.)

 Both the `--config` flag and the `config.toml` file support the following options:

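As a cross-check, the `-c` examples in the list above correspond to `config.toml` entries like the following (the values are the illustrative ones from the list, not recommendations):

```toml
model = "o3"

[model_providers.openai]
wire_api = "chat"

[shell_environment_policy]
include_only = ["PATH", "HOME", "USER"]
```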
@@ -61,15 +61,15 @@ Notes:

 ### model

-The model that Codex should use.
+The model that LLMX should use.

 ```toml
 model = "gpt-5" # overrides the default ("gpt-5-codex" on macOS/Linux, "gpt-5" on Windows)
 ```

 ### model_providers

-This option lets you add to the default set of model providers bundled with Codex. The map key becomes the value you use with `model_provider` to select the provider.
+This option lets you add to the default set of model providers bundled with LLMX. The map key becomes the value you use with `model_provider` to select the provider.

 > [!NOTE]
 > Built-in providers are not overwritten when you reuse their key. Entries you add only take effect when the key is **new**; for example `[model_providers.openai]` leaves the original OpenAI definition untouched. To customize the bundled OpenAI provider, prefer the dedicated knobs (for example the `OPENAI_BASE_URL` environment variable) or register a new provider key and point `model_provider` at it.
@@ -82,13 +82,13 @@ model = "gpt-4o"
 model_provider = "openai-chat-completions"

 [model_providers.openai-chat-completions]
-# Name of the provider that will be displayed in the Codex UI.
+# Name of the provider that will be displayed in the LLMX UI.
 name = "OpenAI using Chat Completions"
 # The path `/chat/completions` will be amended to this URL to make the POST
 # request for the chat completions.
 base_url = "https://api.openai.com/v1"
 # If `env_key` is set, identifies an environment variable that must be set when
-# using Codex with this provider. The value of the environment variable must be
+# using LLMX with this provider. The value of the environment variable must be
 # non-empty and will be used in the `Bearer TOKEN` HTTP header for the POST request.
 env_key = "OPENAI_API_KEY"
 # Valid values for wire_api are "chat" and "responses". Defaults to "chat" if omitted.
@@ -98,7 +98,7 @@ wire_api = "chat"
 query_params = {}
 ```

-Note this makes it possible to use Codex CLI with non-OpenAI models, so long as they use a wire API that is compatible with the OpenAI chat completions API. For example, you could define the following provider to use Codex CLI with Ollama running locally:
+Note this makes it possible to use LLMX CLI with non-OpenAI models, so long as they use a wire API that is compatible with the OpenAI chat completions API. For example, you could define the following provider to use LLMX CLI with Ollama running locally:

 ```toml
 [model_providers.ollama]
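# The hunk above is truncated mid-block. A plausible completion of the Ollama
# provider entry (hypothetical values; Ollama's OpenAI-compatible endpoint
# defaults to http://localhost:11434/v1) would be:
name = "Ollama"
base_url = "http://localhost:11434/v1"
wire_api = "chat"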
@@ -145,7 +145,7 @@ query_params = { api-version = "2025-04-01-preview" }
 wire_api = "responses"
 ```

-Export your key before launching Codex: `export AZURE_OPENAI_API_KEY=…`
+Export your key before launching LLMX: `export AZURE_OPENAI_API_KEY=…`

 #### Per-provider network tuning

@@ -166,15 +166,15 @@ stream_idle_timeout_ms = 300000 # 5m idle timeout

 ##### request_max_retries

-How many times Codex will retry a failed HTTP request to the model provider. Defaults to `4`.
+How many times LLMX will retry a failed HTTP request to the model provider. Defaults to `4`.

 ##### stream_max_retries

-Number of times Codex will attempt to reconnect when a streaming response is interrupted. Defaults to `5`.
+Number of times LLMX will attempt to reconnect when a streaming response is interrupted. Defaults to `5`.

 ##### stream_idle_timeout_ms

-How long Codex will wait for activity on a streaming response before treating the connection as lost. Defaults to `300_000` (5 minutes).
+How long LLMX will wait for activity on a streaming response before treating the connection as lost. Defaults to `300_000` (5 minutes).

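The three knobs above can be set together on a custom provider entry; the provider name and URL here are hypothetical, and the commented defaults are the ones stated above:

```toml
[model_providers.my-proxy]
name = "Internal LLM proxy"
base_url = "https://llm.internal.example/v1"
wire_api = "chat"
request_max_retries = 2          # default: 4
stream_max_retries = 3           # default: 5
stream_idle_timeout_ms = 300000  # default: 300_000 (5 minutes)
```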
### model_provider

@@ -191,7 +191,7 @@ model = "mistral"

 ### model_reasoning_effort

 If the selected model is known to support reasoning (for example: `o3`, `o4-mini`, `codex-*`, `gpt-5`, `gpt-5-codex`), reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#get-started-with-reasoning), this can be set to:

 - `"minimal"`
 - `"low"`
@@ -202,7 +202,7 @@ Note: to minimize reasoning, choose `"minimal"`.

 ### model_reasoning_summary

 If the model name starts with `"o"` (as in `"o3"` or `"o4-mini"`) or `"codex"`, reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#reasoning-summaries), this can be set to:

 - `"auto"` (default)
 - `"concise"`
@@ -222,7 +222,7 @@ Controls output length/detail on GPT‑5 family models when using the Responses
 - `"medium"` (default when omitted)
 - `"high"`

-When set, Codex includes a `text` object in the request payload with the configured verbosity, for example: `"text": { "verbosity": "low" }`.
+When set, LLMX includes a `text` object in the request payload with the configured verbosity, for example: `"text": { "verbosity": "low" }`.

 Example:

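The example that follows was truncated by this hunk. Assuming the key for this section is `model_verbosity` (the section heading itself is not shown in the hunk), it would look like:

```toml
model_verbosity = "low"
```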
@@ -245,26 +245,26 @@ model_supports_reasoning_summaries = true
|
||||
|
||||
The size of the context window for the model, in tokens.
|
||||
|
||||
In general, Codex knows the context window for the most common OpenAI models, but if you are using a new model with an old version of the Codex CLI, then you can use `model_context_window` to tell Codex what value to use to determine how much context is left during a conversation.
|
||||
In general, LLMX knows the context window for the most common OpenAI models, but if you are using a new model with an old version of the LLMX CLI, then you can use `model_context_window` to tell LLMX what value to use to determine how much context is left during a conversation.
|
||||
|
||||
### model_max_output_tokens
|
||||
|
||||
This is analogous to `model_context_window`, but for the maximum number of output tokens for the model.

> See also [`codex exec`](./exec.md) to see how these model settings influence non-interactive runs.
> See also [`llmx exec`](./exec.md) to see how these model settings influence non-interactive runs.

## Execution environment

### approval_policy

Determines when the user should be prompted to approve whether Codex can execute a command:
Determines when the user should be prompted to approve whether LLMX can execute a command:

```toml
# Codex has hardcoded logic that defines a set of "trusted" commands.
# Setting the approval_policy to `untrusted` means that Codex will prompt the
# LLMX has hardcoded logic that defines a set of "trusted" commands.
# Setting the approval_policy to `untrusted` means that LLMX will prompt the
# user before running a command not in the "trusted" set.
#
# See https://github.com/openai/codex/issues/1260 for the plan to enable
# See https://github.com/valknar/llmx/issues/1260 for the plan to enable
# end-users to define their own trusted commands.
approval_policy = "untrusted"
```
@@ -272,7 +272,7 @@ approval_policy = "untrusted"
If you want to be notified whenever a command fails, use "on-failure":

```toml
# If the command fails when run in the sandbox, Codex asks for permission to
# If the command fails when run in the sandbox, LLMX asks for permission to
# retry the command outside the sandbox.
approval_policy = "on-failure"
```
@@ -287,14 +287,14 @@ approval_policy = "on-request"
Alternatively, you can have the model run until it is done, and never ask to run a command with escalated permissions:

```toml
# User is never prompted: if the command fails, Codex will automatically try
# User is never prompted: if the command fails, LLMX will automatically try
# something out. Note the `exec` subcommand always uses this mode.
approval_policy = "never"
```

### sandbox_mode

Codex executes model-generated shell commands inside an OS-level sandbox.
LLMX executes model-generated shell commands inside an OS-level sandbox.

In most cases you can pick the desired behaviour with a single option:

@@ -306,9 +306,9 @@ sandbox_mode = "read-only"
The default policy is `read-only`, which means commands can read any file on
disk, but attempts to write a file or access the network will be blocked.

A more relaxed policy is `workspace-write`. When specified, the current working directory for the Codex task will be writable (as well as `$TMPDIR` on macOS). Note that the CLI defaults to using the directory where it was spawned as `cwd`, though this can be overridden using `--cwd/-C`.
A more relaxed policy is `workspace-write`. When specified, the current working directory for the LLMX task will be writable (as well as `$TMPDIR` on macOS). Note that the CLI defaults to using the directory where it was spawned as `cwd`, though this can be overridden using `--cwd/-C`.

On macOS (and soon Linux), all writable roots (including `cwd`) that contain a `.git/` folder _as an immediate child_ will configure the `.git/` folder to be read-only while the rest of the Git repository will be writable. This means that commands like `git commit` will fail, by default (as it entails writing to `.git/`), and will require Codex to ask for permission.
On macOS (and soon Linux), all writable roots (including `cwd`) that contain a `.git/` folder _as an immediate child_ will configure the `.git/` folder to be read-only while the rest of the Git repository will be writable. This means that commands like `git commit` will fail, by default (as it entails writing to `.git/`), and will require LLMX to ask for permission.

```toml
# same as `--sandbox workspace-write`
@@ -316,7 +316,7 @@ sandbox_mode = "workspace-write"

# Extra settings that only apply when `sandbox = "workspace-write"`.
[sandbox_workspace_write]
# By default, the cwd for the Codex session will be writable as well as $TMPDIR
# By default, the cwd for the LLMX session will be writable as well as $TMPDIR
# (if set) and /tmp (if it exists). Setting the respective options to `true`
# will override those defaults.
exclude_tmpdir_env_var = false
@@ -337,9 +337,9 @@ To disable sandboxing altogether, specify `danger-full-access` like so:
sandbox_mode = "danger-full-access"
```

This is reasonable to use if Codex is running in an environment that provides its own sandboxing (such as a Docker container) such that further sandboxing is unnecessary.
This is reasonable to use if LLMX is running in an environment that provides its own sandboxing (such as a Docker container) such that further sandboxing is unnecessary.

Though using this option may also be necessary if you try to use Codex in environments where its native sandboxing mechanisms are unsupported, such as older Linux kernels or on Windows.
Though using this option may also be necessary if you try to use LLMX in environments where its native sandboxing mechanisms are unsupported, such as older Linux kernels or on Windows.

### tools.\*

@@ -347,29 +347,29 @@ Use the optional `[tools]` table to toggle built-in tools that the agent may cal

```toml
[tools]
web_search = true # allow Codex to issue first-party web searches without prompting you (deprecated)
web_search = true # allow LLMX to issue first-party web searches without prompting you (deprecated)
view_image = false # disable image uploads (they're enabled by default)
```

`web_search` is deprecated; use the `web_search_request` feature flag instead.

The `view_image` toggle is useful when you want to include screenshots or diagrams from your repo without pasting them manually. Codex still respects sandboxing: it can only attach files inside the workspace roots you allow.
The `view_image` toggle is useful when you want to include screenshots or diagrams from your repo without pasting them manually. LLMX still respects sandboxing: it can only attach files inside the workspace roots you allow.

### approval_presets

Codex provides three main Approval Presets:
LLMX provides three main Approval Presets:

- Read Only: Codex can read files and answer questions; edits, running commands, and network access require approval.
- Auto: Codex can read files, make edits, and run commands in the workspace without approval; asks for approval outside the workspace or for network access.
- Read Only: LLMX can read files and answer questions; edits, running commands, and network access require approval.
- Auto: LLMX can read files, make edits, and run commands in the workspace without approval; asks for approval outside the workspace or for network access.
- Full Access: Full disk and network access without prompts; extremely risky.

You can further customize how Codex runs at the command line using the `--ask-for-approval` and `--sandbox` options.
You can further customize how LLMX runs at the command line using the `--ask-for-approval` and `--sandbox` options.

> See also [Sandbox & approvals](./sandbox.md) for in-depth examples and platform-specific behaviour.

### shell_environment_policy

Codex spawns subprocesses (e.g. when executing a `local_shell` tool-call suggested by the assistant). By default it now passes **your full environment** to those subprocesses. You can tune this behavior via the **`shell_environment_policy`** block in `config.toml`:
LLMX spawns subprocesses (e.g. when executing a `local_shell` tool-call suggested by the assistant). By default it now passes **your full environment** to those subprocesses. You can tune this behavior via the **`shell_environment_policy`** block in `config.toml`:

```toml
[shell_environment_policy]
@@ -388,7 +388,7 @@ include_only = ["PATH", "HOME"]
| Field                     | Type                 | Default | Description                                                                                                                                       |
| ------------------------- | -------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| `inherit`                 | string               | `all`   | Starting template for the environment:<br>`all` (clone full parent env), `core` (`HOME`, `PATH`, `USER`, …), or `none` (start empty).             |
| `ignore_default_excludes` | boolean              | `false` | When `false`, Codex removes any var whose **name** contains `KEY`, `SECRET`, or `TOKEN` (case-insensitive) before other rules run.                |
| `ignore_default_excludes` | boolean              | `false` | When `false`, LLMX removes any var whose **name** contains `KEY`, `SECRET`, or `TOKEN` (case-insensitive) before other rules run.                 |
| `exclude`                 | array<string>        | `[]`    | Case-insensitive glob patterns to drop after the default filter.<br>Examples: `"AWS_*"`, `"AZURE_*"`.                                             |
| `set`                     | table<string,string> | `{}`    | Explicit key/value overrides or additions – always win over inherited values.                                                                     |
| `include_only`            | array<string>        | `[]`    | If non-empty, a whitelist of patterns; only variables that match _one_ pattern survive the final step. (Generally used with `inherit = "all"`.)   |
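For example, a policy that inherits the full environment, strips cloud credentials, and pins one extra variable might look like this (the patterns and the `CI` variable are illustrative):

```toml
[shell_environment_policy]
inherit = "all"                  # start from the full parent environment
exclude = ["AWS_*", "AZURE_*"]   # drop cloud credentials on top of the default KEY/SECRET/TOKEN filter
set = { CI = "1" }               # explicit override, always wins
```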
@@ -413,7 +413,7 @@ Currently, `CODEX_SANDBOX_NETWORK_DISABLED=1` is also added to the environment,

### mcp_servers

You can configure Codex to use [MCP servers](https://modelcontextprotocol.io/about) to give Codex access to external applications, resources, or services.
You can configure LLMX to use [MCP servers](https://modelcontextprotocol.io/about) to give LLMX access to external applications, resources, or services.

#### Server configuration

@@ -430,7 +430,7 @@ command = "npx"
args = ["-y", "mcp-server"]
# Optional: propagate additional env vars to the MCP server.
# A default whitelist of env vars will be propagated to the MCP server.
# https://github.com/openai/codex/blob/main/codex-rs/rmcp-client/src/utils.rs#L82
# https://github.com/valknar/llmx/blob/main/llmx-rs/rmcp-client/src/utils.rs#L82
env = { "API_KEY" = "value" }
# or
[mcp_servers.server_name.env]
@@ -444,7 +444,7 @@ cwd = "/Users/<user>/code/my-server"

##### Streamable HTTP

[Streamable HTTP servers](https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#streamable-http) enable Codex to talk to resources that are accessed via an HTTP URL (either on localhost or another domain).
[Streamable HTTP servers](https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#streamable-http) enable LLMX to talk to resources that are accessed via an HTTP URL (either on localhost or another domain).

```toml
[mcp_servers.figma]
@@ -463,7 +463,7 @@ Streamable HTTP connections always use the experimental Rust MCP client under th
experimental_use_rmcp_client = true
```

After enabling it, run `codex mcp login <server-name>` when the server supports OAuth.
After enabling it, run `llmx mcp login <server-name>` when the server supports OAuth.

#### Other configuration options

@@ -480,7 +480,7 @@ enabled_tools = ["search", "summarize"]
disabled_tools = ["search"]
```

When both `enabled_tools` and `disabled_tools` are specified, Codex first restricts the server to the allow-list and then removes any tools that appear in the deny-list.
When both `enabled_tools` and `disabled_tools` are specified, LLMX first restricts the server to the allow-list and then removes any tools that appear in the deny-list.
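A sketch of that precedence (the server and tool names here are illustrative):

```toml
[mcp_servers.docs]
command = "docs-server"
enabled_tools = ["search", "summarize"]  # allow-list applied first
disabled_tools = ["search"]              # deny-list then removes "search"
# Net result: only "summarize" is exposed to the agent.
```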

#### Experimental RMCP client

@@ -497,32 +497,32 @@ experimental_use_rmcp_client = true

```shell
# List all available commands
codex mcp --help
llmx mcp --help

# Add a server (env can be repeated; `--` separates the launcher command)
codex mcp add docs -- docs-server --port 4000
llmx mcp add docs -- docs-server --port 4000

# List configured servers (pretty table or JSON)
codex mcp list
codex mcp list --json
llmx mcp list
llmx mcp list --json

# Show one server (table or JSON)
codex mcp get docs
codex mcp get docs --json
llmx mcp get docs
llmx mcp get docs --json

# Remove a server
codex mcp remove docs
llmx mcp remove docs

# Log in to a streamable HTTP server that supports oauth
codex mcp login SERVER_NAME
llmx mcp login SERVER_NAME

# Log out from a streamable HTTP server that supports oauth
codex mcp logout SERVER_NAME
llmx mcp logout SERVER_NAME
```

### Examples of useful MCPs

There is an ever-growing list of useful MCP servers that can be helpful while you are working with Codex.
There is an ever-growing list of useful MCP servers that can be helpful while you are working with LLMX.

Some of the most common MCPs we've seen are:

@@ -530,14 +530,14 @@ Some of the most common MCPs we've seen are:
- Figma [Local](https://developers.figma.com/docs/figma-mcp-server/local-server-installation/) and [Remote](https://developers.figma.com/docs/figma-mcp-server/remote-server-installation/) - access to your Figma designs
- [Playwright](https://www.npmjs.com/package/@playwright/mcp) - control and inspect a browser using Playwright
- [Chrome Developer Tools](https://github.com/ChromeDevTools/chrome-devtools-mcp/) — control and inspect a Chrome browser
- [Sentry](https://docs.sentry.io/product/sentry-mcp/#codex) — access to your Sentry logs
- [Sentry](https://docs.sentry.io/product/sentry-mcp/#llmx) — access to your Sentry logs
- [GitHub](https://github.com/github/github-mcp-server) — Control over your GitHub account beyond what git allows (like controlling PRs, issues, etc.)

## Observability and telemetry

### otel

Codex can emit [OpenTelemetry](https://opentelemetry.io/) **log events** that
LLMX can emit [OpenTelemetry](https://opentelemetry.io/) **log events** that
describe each run: outbound API requests, streamed responses, user input,
tool-approval decisions, and the result of every tool invocation. Export is
**disabled by default** so local runs remain self-contained. Opt in by adding an
@@ -550,7 +550,7 @@ exporter = "none" # defaults to "none"; set to otlp-http or otlp-grpc t
log_user_prompt = false # defaults to false; redact prompt text unless explicitly enabled
```

Codex tags every exported event with `service.name = $ORIGINATOR` (the same
LLMX tags every exported event with `service.name = $ORIGINATOR` (the same
value sent in the `originator` header, `codex_cli_rs` by default), the CLI
version, and an `env` attribute so downstream collectors can distinguish
dev/staging/prod traffic. Only telemetry produced inside the `codex_otel`
@@ -562,10 +562,10 @@ Every event shares a common set of metadata fields: `event.timestamp`,
`conversation.id`, `app.version`, `auth_mode` (when available),
`user.account_id` (when available), `user.email` (when available), `terminal.type`, `model`, and `slug`.

With OTEL enabled Codex emits the following event types (in addition to the
With OTEL enabled LLMX emits the following event types (in addition to the
metadata above):

- `codex.conversation_starts`
- `llmx.conversation_starts`
  - `provider_name`
  - `reasoning_effort` (optional)
  - `reasoning_summary`
@@ -576,12 +576,12 @@ metadata above):
  - `sandbox_policy`
  - `mcp_servers` (comma-separated list)
  - `active_profile` (optional)
- `codex.api_request`
- `llmx.api_request`
  - `attempt`
  - `duration_ms`
  - `http.response.status_code` (optional)
  - `error.message` (failures)
- `codex.sse_event`
- `llmx.sse_event`
  - `event.kind`
  - `duration_ms`
  - `error.message` (failures)
@@ -590,15 +590,15 @@ metadata above):
  - `cached_token_count` (responses only, optional)
  - `reasoning_token_count` (responses only, optional)
  - `tool_token_count` (responses only)
- `codex.user_prompt`
- `llmx.user_prompt`
  - `prompt_length`
  - `prompt` (redacted unless `log_user_prompt = true`)
- `codex.tool_decision`
- `llmx.tool_decision`
  - `tool_name`
  - `call_id`
  - `decision` (`approved`, `approved_for_session`, `denied`, or `abort`)
  - `source` (`config` or `user`)
- `codex.tool_result`
- `llmx.tool_result`
  - `tool_name`
  - `call_id` (optional)
  - `arguments` (optional)
@@ -641,14 +641,14 @@ If the exporter is `none` nothing is written anywhere; otherwise you must run or
own collector. All exporters run on a background batch worker that is flushed on
shutdown.

If you build Codex from source the OTEL crate is still behind an `otel` feature
If you build LLMX from source the OTEL crate is still behind an `otel` feature
flag; the official prebuilt binaries ship with the feature enabled. When the
feature is disabled the telemetry hooks become no-ops so the CLI continues to
function without the extra dependencies.

### notify

Specify a program that will be executed to get notified about events generated by Codex. Note that the program will receive the notification argument as a string of JSON, e.g.:
Specify a program that will be executed to get notified about events generated by LLMX. Note that the program will receive the notification argument as a string of JSON, e.g.:

```json
{
@@ -663,7 +663,7 @@ Specify a program that will be executed to get notified about events generated b

The `"type"` property will always be set. Currently, `"agent-turn-complete"` is the only notification type that is supported.

`"thread-id"` contains a string that identifies the Codex session that produced the notification; you can use it to correlate multiple turns that belong to the same task.
`"thread-id"` contains a string that identifies the LLMX session that produced the notification; you can use it to correlate multiple turns that belong to the same task.

`"cwd"` reports the absolute working directory for the session so scripts can disambiguate which project triggered the notification.
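To make the payload shape concrete, here is a minimal, hypothetical handler (separate from the `notify.py` example below) that turns such a JSON argument into a notification title:

```python
import json
import sys


def build_title(raw):
    """Return a notification title for an event payload, or None for unknown types."""
    notification = json.loads(raw)
    if notification.get("type") != "agent-turn-complete":
        return None
    message = notification.get("last-assistant-message")
    return f"LLMX: {message}" if message else "LLMX: Turn Complete!"


if __name__ == "__main__" and len(sys.argv) > 1:
    print(build_title(sys.argv[1]))
```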

@@ -691,9 +691,9 @@ def main() -> int:
        case "agent-turn-complete":
            assistant_message = notification.get("last-assistant-message")
            if assistant_message:
                title = f"Codex: {assistant_message}"
                title = f"LLMX: {assistant_message}"
            else:
                title = "Codex: Turn Complete!"
                title = "LLMX: Turn Complete!"
            input_messages = notification.get("input-messages", [])
            message = " ".join(input_messages)
            title += message
@@ -711,7 +711,7 @@ def main() -> int:
            "-message",
            message,
            "-group",
            "codex-" + thread_id,
            "llmx-" + thread_id,
            "-ignoreDnD",
            "-activate",
            "com.googlecode.iterm2",
@@ -725,18 +725,18 @@ if __name__ == "__main__":
    sys.exit(main())
```

To have Codex use this script for notifications, you would configure it via `notify` in `~/.codex/config.toml` using the appropriate path to `notify.py` on your computer:
To have LLMX use this script for notifications, you would configure it via `notify` in `~/.llmx/config.toml` using the appropriate path to `notify.py` on your computer:

```toml
notify = ["python3", "/Users/mbolin/.codex/notify.py"]
notify = ["python3", "/Users/mbolin/.llmx/notify.py"]
```

> [!NOTE]
> Use `notify` for automation and integrations: Codex invokes your external program with a single JSON argument for each event, independent of the TUI. If you only want lightweight desktop notifications while using the TUI, prefer `tui.notifications`, which uses terminal escape codes and requires no external program. You can enable both; `tui.notifications` covers in‑TUI alerts (e.g., approval prompts), while `notify` is best for system‑level hooks or custom notifiers. Currently, `notify` emits only `agent-turn-complete`, whereas `tui.notifications` supports `agent-turn-complete` and `approval-requested` with optional filtering.
> Use `notify` for automation and integrations: LLMX invokes your external program with a single JSON argument for each event, independent of the TUI. If you only want lightweight desktop notifications while using the TUI, prefer `tui.notifications`, which uses terminal escape codes and requires no external program. You can enable both; `tui.notifications` covers in‑TUI alerts (e.g., approval prompts), while `notify` is best for system‑level hooks or custom notifiers. Currently, `notify` emits only `agent-turn-complete`, whereas `tui.notifications` supports `agent-turn-complete` and `approval-requested` with optional filtering.

### hide_agent_reasoning

Codex intermittently emits "reasoning" events that show the model's internal "thinking" before it produces a final answer. Some users may find these events distracting, especially in CI logs or minimal terminal output.
LLMX intermittently emits "reasoning" events that show the model's internal "thinking" before it produces a final answer. Some users may find these events distracting, especially in CI logs or minimal terminal output.

Setting `hide_agent_reasoning` to `true` suppresses these events in **both** the TUI as well as the headless `exec` sub-command:
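For example:

```toml
hide_agent_reasoning = true
```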

@@ -804,11 +804,11 @@ Users can specify config values at multiple levels. Order of precedence is as fo
1. custom command-line argument, e.g., `--model o3`
2. as part of a profile, where the `--profile` is specified via a CLI (or in the config file itself)
3. as an entry in `config.toml`, e.g., `model = "o3"`
4. the default value that comes with Codex CLI (i.e., Codex CLI defaults to `gpt-5-codex`)
4. the default value that comes with LLMX CLI (i.e., LLMX CLI defaults to `gpt-5-llmx`)
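A sketch of levels 2 and 3 working together (the profile name and values are illustrative, assuming the `[profiles.<name>]` tables described in the Config docs):

```toml
model = "o3"            # level 3: plain config.toml entry

[profiles.deep-review]
model = "gpt-5-llmx"    # level 2: wins over the entry above when the CLI
                        # is started with `--profile deep-review`
```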

### history

By default, Codex CLI records messages sent to the model in `$CODEX_HOME/history.jsonl`. Note that on UNIX, the file permissions are set to `o600`, so it should only be readable and writable by the owner.
By default, LLMX CLI records messages sent to the model in `$CODEX_HOME/history.jsonl`. Note that on UNIX, the file permissions are set to `o600`, so it should only be readable and writable by the owner.

To disable this behavior, configure `[history]` as follows:
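For example (assuming the `persistence` key, where `"save-all"` is the default):

```toml
[history]
persistence = "none"
```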

@@ -831,7 +831,7 @@ Note this is **not** a general editor setting (like `$EDITOR`), as it only accep
- `"cursor"`
- `"none"` to explicitly disable this feature

Currently, `"vscode"` is the default, though Codex does not verify VS Code is installed. As such, `file_opener` may default to `"none"` or something else in the future.
Currently, `"vscode"` is the default, though LLMX does not verify VS Code is installed. As such, `file_opener` may default to `"none"` or something else in the future.

### project_doc_max_bytes

@@ -847,7 +847,7 @@ project_doc_fallback_filenames = ["CLAUDE.md", ".exampleagentrules.md"]

We recommend migrating instructions to AGENTS.md; other filenames may reduce model performance.

> See also [AGENTS.md discovery](./agents_md.md) for how Codex locates these files during a session.
> See also [AGENTS.md discovery](./agents_md.md) for how LLMX locates these files during a session.

### tui

@@ -865,7 +865,7 @@ notifications = [ "agent-turn-complete", "approval-requested" ]
```

> [!NOTE]
> Codex emits desktop notifications using terminal escape codes. Not all terminals support these (notably, macOS Terminal.app and VS Code's terminal do not; iTerm2, Ghostty, and WezTerm do).
> LLMX emits desktop notifications using terminal escape codes. Not all terminals support these (notably, macOS Terminal.app and VS Code's terminal do not; iTerm2, Ghostty, and WezTerm do).

> [!NOTE]
> `tui.notifications` is built‑in and limited to the TUI session. For programmatic or cross‑environment notifications—or to integrate with OS‑specific notifiers—use the top‑level `notify` option to run an external program that receives event JSON. The two settings are independent and can be used together.

@@ -873,17 +873,17 @@ notifications = [ "agent-turn-complete", "approval-requested" ]

### Forcing a login method

To force users on a given machine to use a specific login method or workspace, use a combination of [managed configurations](https://developers.openai.com/codex/security#managed-configuration) as well as either or both of the following fields:
To force users on a given machine to use a specific login method or workspace, use a combination of [managed configurations](https://developers.openai.com/llmx/security#managed-configuration) as well as either or both of the following fields:

```toml
# Force the user to log in with ChatGPT or via an API key.
forced_login_method = "chatgpt"  # or "api"
# When logging in with ChatGPT, only the specified workspace ID will be presented during the login
# flow and the id will be validated during the oauth callback as well as every time Codex starts.
# flow and the id will be validated during the oauth callback as well as every time LLMX starts.
forced_chatgpt_workspace_id = "00000000-0000-0000-0000-000000000000"
```

If the active credentials don't match the config, the user will be logged out and Codex will exit.
If the active credentials don't match the config, the user will be logged out and LLMX will exit.

If `forced_chatgpt_workspace_id` is set but `forced_login_method` is not set, API key login will still work.

@@ -907,7 +907,7 @@ Valid values:

| Key                                              | Type / Values                                                     | Notes                                                                                                                        |
| ------------------------------------------------ | ----------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- |
| `model`                                          | string                                                            | Model to use (e.g., `gpt-5-codex`).                                                                                          |
| `model`                                          | string                                                            | Model to use (e.g., `gpt-5-llmx`).                                                                                           |
| `model_provider`                                 | string                                                            | Provider id from `model_providers` (default: `openai`).                                                                      |
| `model_context_window`                           | number                                                            | Context window tokens.                                                                                                       |
| `model_max_output_tokens`                        | number                                                            | Max output tokens.                                                                                                           |
@@ -925,7 +925,7 @@ Valid values:
| `mcp_servers.<id>.env`                           | map<string,string>                                                | MCP server env vars (stdio servers only).                                                                                    |
| `mcp_servers.<id>.url`                           | string                                                            | MCP server url (streamable http servers only).                                                                               |
| `mcp_servers.<id>.bearer_token_env_var`          | string                                                            | environment variable containing a bearer token to use for auth (streamable http servers only).                               |
| `mcp_servers.<id>.enabled`                       | boolean                                                           | When false, Codex skips starting the server (default: true).                                                                 |
| `mcp_servers.<id>.enabled`                       | boolean                                                           | When false, LLMX skips starting the server (default: true).                                                                  |
| `mcp_servers.<id>.startup_timeout_sec`           | number                                                            | Startup timeout in seconds (default: 10). Timeout is applied both for initializing MCP server and initially listing tools.   |
| `mcp_servers.<id>.tool_timeout_sec`              | number                                                            | Per-tool timeout in seconds (default: 60). Accepts fractional values; omit to use the default.                               |
| `mcp_servers.<id>.enabled_tools`                 | array<string>                                                     | Restrict the server to the listed tool names.                                                                                |
@@ -960,7 +960,7 @@ Valid values:
| `experimental_use_exec_command_tool`             | boolean                                                           | Use experimental exec command tool.                                                                                          |
| `projects.<path>.trust_level`                    | string                                                            | Mark project/worktree as trusted (only `"trusted"` is recognized).                                                           |
| `tools.web_search`                               | boolean                                                           | Enable web search tool (deprecated) (default: false).                                                                        |
| `tools.view_image`                               | boolean                                                           | Enable or disable the `view_image` tool so Codex can attach local image files from the workspace (default: true).            |
| `forced_login_method`                            | `chatgpt` \| `api`                                                | Only allow Codex to be used with ChatGPT or API keys.                                                                        |
| `forced_chatgpt_workspace_id`                    | string (uuid)                                                     | Only allow Codex to be used with the specified ChatGPT workspace.                                                            |
| `tools.view_image`                               | boolean                                                           | Enable or disable the `view_image` tool so LLMX can attach local image files from the workspace (default: true).             |
| `forced_login_method`                            | `chatgpt` \| `api`                                                | Only allow LLMX to be used with ChatGPT or API keys.                                                                         |
| `forced_chatgpt_workspace_id`                    | string (uuid)                                                     | Only allow LLMX to be used with the specified ChatGPT workspace.                                                             |
| `cli_auth_credentials_store`                     | `file` \| `keyring` \| `auto`                                     | Where to store CLI login credentials (default: `file`).                                                                      |
|
||||
|
||||
@@ -18,7 +18,7 @@ If you want to add a new feature or change the behavior of an existing one, plea
|
||||
|
||||
1. **Start with an issue.** Open a new one or comment on an existing discussion so we can agree on the solution before code is written.
|
||||
2. **Add or update tests.** Every new feature or bug-fix should come with test coverage that fails before your change and passes afterwards. 100% coverage is not required, but aim for meaningful assertions.
|
||||
3. **Document behaviour.** If your change affects user-facing behaviour, update the README, inline help (`codex --help`), or relevant example projects.
|
||||
3. **Document behaviour.** If your change affects user-facing behaviour, update the README, inline help (`llmx --help`), or relevant example projects.
|
||||
4. **Keep commits atomic.** Each commit should compile and the tests should pass. This makes reviews and potential rollbacks easier.
|
||||
|
||||
### Opening a pull request
|
||||
@@ -46,7 +46,7 @@ If you want to add a new feature or change the behavior of an existing one, plea
|
||||
|
||||
If you run into problems setting up the project, would like feedback on an idea, or just want to say _hi_ - please open a Discussion or jump into the relevant issue. We are happy to help.
|
||||
|
||||
Together we can make Codex CLI an incredible tool. **Happy hacking!** :rocket:
|
||||
Together we can make LLMX CLI an incredible tool. **Happy hacking!** :rocket:
|
||||
|
||||
### Contributor license agreement (CLA)
|
||||
|
||||
@@ -71,7 +71,7 @@ No special Git commands, email attachments, or commit footers required.

The **DCO check** blocks merges until every commit in the PR carries the footer (with squash this is just the one).

### Releasing `codex`
### Releasing `llmx`

_For admins only._
@@ -79,16 +79,16 @@ Make sure you are on `main` and have no local changes. Then run:

```shell
VERSION=0.2.0 # Can also be 0.2.0-alpha.1 or any valid Rust version.
./codex-rs/scripts/create_github_release.sh "$VERSION"
./llmx-rs/scripts/create_github_release.sh "$VERSION"
```

This will make a local commit on top of `main` with `version` set to `$VERSION` in `codex-rs/Cargo.toml` (note that on `main`, we leave the version as `version = "0.0.0"`).
This will make a local commit on top of `main` with `version` set to `$VERSION` in `llmx-rs/Cargo.toml` (note that on `main`, we leave the version as `version = "0.0.0"`).

This will push the commit using the tag `rust-v${VERSION}`, which in turn kicks off [the release workflow](../.github/workflows/rust-release.yml). This will create a new GitHub Release named `$VERSION`.

If everything looks good in the generated GitHub Release, uncheck the **pre-release** box so it is the latest release.

Create a PR to update [`Cask/c/codex.rb`](https://github.com/Homebrew/homebrew-cask/blob/main/Formula/c/codex.rb) on Homebrew.
Create a PR to update [`Cask/c/llmx.rb`](https://github.com/Homebrew/homebrew-cask/blob/main/Formula/c/llmx.rb) on Homebrew.

### Security & responsible AI
@@ -1,11 +1,11 @@
# Example config.toml

Use this example configuration as a starting point. For an explanation of each field and additional context, see [Configuration](./config.md). Copy the snippet below to `~/.codex/config.toml` and adjust values as needed.
Use this example configuration as a starting point. For an explanation of each field and additional context, see [Configuration](./config.md). Copy the snippet below to `~/.llmx/config.toml` and adjust values as needed.

```toml
# Codex example configuration (config.toml)
# LLMX example configuration (config.toml)
#
# This file lists all keys Codex reads from config.toml, their default values,
# This file lists all keys LLMX reads from config.toml, their default values,
# and concise explanations. Values here mirror the effective defaults compiled
# into the CLI. Adjust as needed.
#
@@ -18,17 +18,17 @@ Use this example configuration as a starting point. For an explanation of each f
# Core Model Selection
################################################################################

# Primary model used by Codex. Default differs by OS; non-Windows defaults here.
# Linux/macOS default: "gpt-5-codex"; Windows default: "gpt-5".
model = "gpt-5-codex"
# Primary model used by LLMX. Default differs by OS; non-Windows defaults here.
# Linux/macOS default: "gpt-5-llmx"; Windows default: "gpt-5".
model = "gpt-5-llmx"

# Model used by the /review feature (code reviews). Default: "gpt-5-codex".
review_model = "gpt-5-codex"
# Model used by the /review feature (code reviews). Default: "gpt-5-llmx".
review_model = "gpt-5-llmx"

# Provider id selected from [model_providers]. Default: "openai".
model_provider = "openai"

# Optional manual model metadata. When unset, Codex auto-detects from model.
# Optional manual model metadata. When unset, LLMX auto-detects from model.
# Uncomment to force values.
# model_context_window = 128000 # tokens; default: auto for model
# model_max_output_tokens = 8192 # tokens; default: auto for model
@@ -153,10 +153,10 @@ disable_paste_burst = false
windows_wsl_setup_acknowledged = false

# External notifier program (argv array). When unset: disabled.
# Example: notify = ["notify-send", "Codex"]
# Example: notify = ["notify-send", "LLMX"]
# notify = [ ]

# In-product notices (mostly set automatically by Codex).
# In-product notices (mostly set automatically by LLMX).
[notice]
# hide_full_access_warning = true
# hide_rate_limit_model_nudge = true
@@ -174,7 +174,7 @@ chatgpt_base_url = "https://chatgpt.com/backend-api/"
# Restrict ChatGPT login to a specific workspace id. Default: unset.
# forced_chatgpt_workspace_id = ""

# Force login mechanism when Codex would normally auto-select. Default: unset.
# Force login mechanism when LLMX would normally auto-select. Default: unset.
# Allowed values: chatgpt | api
# forced_login_method = "chatgpt"
@@ -315,7 +315,7 @@ mcp_oauth_credentials_store = "auto"
[profiles]

# [profiles.default]
# model = "gpt-5-codex"
# model = "gpt-5-llmx"
# model_provider = "openai"
# approval_policy = "on-request"
# sandbox_mode = "read-only"

38 docs/exec.md
@@ -1,24 +1,24 @@
## Non-interactive mode

Use Codex in non-interactive mode to automate common workflows.
Use LLMX in non-interactive mode to automate common workflows.

```shell
codex exec "count the total number of lines of code in this project"
llmx exec "count the total number of lines of code in this project"
```

In non-interactive mode, Codex does not ask for command or edit approvals. By default it runs in `read-only` mode, so it cannot edit files or run commands that require network access.
In non-interactive mode, LLMX does not ask for command or edit approvals. By default it runs in `read-only` mode, so it cannot edit files or run commands that require network access.

Use `codex exec --full-auto` to allow file edits. Use `codex exec --sandbox danger-full-access` to allow edits and networked commands.
Use `llmx exec --full-auto` to allow file edits. Use `llmx exec --sandbox danger-full-access` to allow edits and networked commands.

### Default output mode

By default, Codex streams its activity to stderr and only writes the final message from the agent to stdout. This makes it easier to pipe `codex exec` into another tool without extra filtering.
By default, LLMX streams its activity to stderr and only writes the final message from the agent to stdout. This makes it easier to pipe `llmx exec` into another tool without extra filtering.

To write the output of `codex exec` to a file, in addition to using a shell redirect like `>`, there is also a dedicated flag to specify an output file: `-o`/`--output-last-message`.
To write the output of `llmx exec` to a file, in addition to using a shell redirect like `>`, there is also a dedicated flag to specify an output file: `-o`/`--output-last-message`.

### JSON output mode

`codex exec` supports a `--json` mode that streams events to stdout as JSON Lines (JSONL) while the agent runs.
`llmx exec` supports a `--json` mode that streams events to stdout as JSON Lines (JSONL) while the agent runs.

Supported event types:
@@ -75,40 +75,40 @@ Sample schema:
```

```shell
codex exec "Extract details of the project" --output-schema ~/schema.json
llmx exec "Extract details of the project" --output-schema ~/schema.json
...

{"project_name":"Codex CLI","programming_languages":["Rust","TypeScript","Shell"]}
{"project_name":"LLMX CLI","programming_languages":["Rust","TypeScript","Shell"]}
```

Combine `--output-schema` with `-o` to only print the final JSON output. You can also pass a file path to `-o` to save the JSON output to a file.
### Git repository requirement

Codex requires a Git repository to avoid destructive changes. To disable this check, use `codex exec --skip-git-repo-check`.
LLMX requires a Git repository to avoid destructive changes. To disable this check, use `llmx exec --skip-git-repo-check`.
### Resuming non-interactive sessions

Resume a previous non-interactive session with `codex exec resume <SESSION_ID>` or `codex exec resume --last`. This preserves conversation context so you can ask follow-up questions or give new tasks to the agent.
Resume a previous non-interactive session with `llmx exec resume <SESSION_ID>` or `llmx exec resume --last`. This preserves conversation context so you can ask follow-up questions or give new tasks to the agent.

```shell
codex exec "Review the change, look for use-after-free issues"
codex exec resume --last "Fix use-after-free issues"
llmx exec "Review the change, look for use-after-free issues"
llmx exec resume --last "Fix use-after-free issues"
```

Only the conversation context is preserved; you must still provide flags to customize Codex behavior.
Only the conversation context is preserved; you must still provide flags to customize LLMX behavior.

```shell
codex exec --model gpt-5-codex --json "Review the change, look for use-after-free issues"
codex exec --model gpt-5 --json resume --last "Fix use-after-free issues"
llmx exec --model gpt-5-llmx --json "Review the change, look for use-after-free issues"
llmx exec --model gpt-5 --json resume --last "Fix use-after-free issues"
```
## Authentication

By default, `codex exec` will use the same authentication method as the Codex CLI and VS Code extension. You can override the API key by setting the `CODEX_API_KEY` environment variable.
By default, `llmx exec` will use the same authentication method as the LLMX CLI and VS Code extension. You can override the API key by setting the `CODEX_API_KEY` environment variable.

```shell
CODEX_API_KEY=your-api-key-here codex exec "Fix merge conflict"
CODEX_API_KEY=your-api-key-here llmx exec "Fix merge conflict"
```

NOTE: `CODEX_API_KEY` is only supported in `codex exec`.
NOTE: `CODEX_API_KEY` is only supported in `llmx exec`.
@@ -1,6 +1,6 @@
## Experimental technology disclaimer

Codex CLI is an experimental project under active development. It is not yet stable, may contain bugs, incomplete features, or undergo breaking changes. We're building it in the open with the community and welcome:
LLMX CLI is an experimental project under active development. It is not yet stable, may contain bugs, incomplete features, or undergo breaking changes. We're building it in the open with the community and welcome:

- Bug reports
- Feature requests

32 docs/faq.md
@@ -2,29 +2,29 @@

This FAQ highlights the most common questions and points you to the right deep-dive guides in `docs/`.

### OpenAI released a model called Codex in 2021 - is this related?
### OpenAI released a model called LLMX in 2021 - is this related?

In 2021, OpenAI released Codex, an AI system designed to generate code from natural language prompts. That original Codex model was deprecated as of March 2023 and is separate from the CLI tool.
In 2021, OpenAI released LLMX, an AI system designed to generate code from natural language prompts. That original LLMX model was deprecated as of March 2023 and is separate from the CLI tool.

### Which models are supported?

We recommend using Codex with GPT-5 Codex, our best coding model. The default reasoning level is medium, and you can upgrade to high for complex tasks with the `/model` command.
We recommend using LLMX with GPT-5 LLMX, our best coding model. The default reasoning level is medium, and you can upgrade to high for complex tasks with the `/model` command.

You can also use older models by using API-based auth and launching codex with the `--model` flag.
You can also use older models by using API-based auth and launching llmx with the `--model` flag.

### How do approvals and sandbox modes work together?

Approvals are the mechanism Codex uses to ask before running a tool call with elevated permissions - typically to leave the sandbox or re-run a failed command without isolation. Sandbox mode provides the baseline isolation (`Read Only`, `Workspace Write`, or `Danger Full Access`; see [Sandbox & approvals](./sandbox.md)).
Approvals are the mechanism LLMX uses to ask before running a tool call with elevated permissions - typically to leave the sandbox or re-run a failed command without isolation. Sandbox mode provides the baseline isolation (`Read Only`, `Workspace Write`, or `Danger Full Access`; see [Sandbox & approvals](./sandbox.md)).

### Can I automate tasks without the TUI?

Yes. [`codex exec`](./exec.md) runs Codex in non-interactive mode with streaming logs, JSONL output, and structured schema support. The command respects the same sandbox and approval settings you configure in the [Config guide](./config.md).
Yes. [`llmx exec`](./exec.md) runs LLMX in non-interactive mode with streaming logs, JSONL output, and structured schema support. The command respects the same sandbox and approval settings you configure in the [Config guide](./config.md).

### How do I stop Codex from editing my files?
### How do I stop LLMX from editing my files?

By default, Codex can modify files in your current working directory (Auto mode). To prevent edits, run `codex` in read-only mode with the CLI flag `--sandbox read-only`. Alternatively, you can change the approval level mid-conversation with `/approvals`.
By default, LLMX can modify files in your current working directory (Auto mode). To prevent edits, run `llmx` in read-only mode with the CLI flag `--sandbox read-only`. Alternatively, you can change the approval level mid-conversation with `/approvals`.

### How do I connect Codex to MCP servers?
### How do I connect LLMX to MCP servers?

Configure MCP servers through your `config.toml` using the examples in [Config -> Connecting to MCP servers](./config.md#connecting-to-mcp-servers).
@@ -32,24 +32,24 @@ Configure MCP servers through your `config.toml` using the examples in [Config -

Confirm your setup in three steps:

1. Walk through the auth flows in [Authentication](./authentication.md) to ensure the correct credentials are present in `~/.codex/auth.json`.
1. Walk through the auth flows in [Authentication](./authentication.md) to ensure the correct credentials are present in `~/.llmx/auth.json`.
2. If you're on a headless or remote machine, make sure port-forwarding is configured as described in [Authentication -> Connecting on a "Headless" Machine](./authentication.md#connecting-on-a-headless-machine).

### Does it work on Windows?

Running Codex directly on Windows may work, but is not officially supported. We recommend using [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install).
Running LLMX directly on Windows may work, but is not officially supported. We recommend using [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install).

### Where should I start after installation?

Follow the quick setup in [Install & build](./install.md) and then jump into [Getting started](./getting-started.md) for interactive usage tips, prompt examples, and AGENTS.md guidance.

### `brew upgrade codex` isn't upgrading me
### `brew upgrade llmx` isn't upgrading me

If you're running Codex v0.46.0 or older, `brew upgrade codex` will not move you to the latest version because we migrated from a Homebrew formula to a cask. To upgrade, uninstall the existing outdated formula and then install the new cask:
If you're running LLMX v0.46.0 or older, `brew upgrade llmx` will not move you to the latest version because we migrated from a Homebrew formula to a cask. To upgrade, uninstall the existing outdated formula and then install the new cask:

```bash
brew uninstall --formula codex
brew install --cask codex
brew uninstall --formula llmx
brew install --cask llmx
```

After reinstalling, `brew upgrade --cask codex` will keep future releases up to date.
After reinstalling, `brew upgrade --cask llmx` will keep future releases up to date.
@@ -3,44 +3,44 @@
Looking for something specific? Jump ahead:

- [Tips & shortcuts](#tips--shortcuts) – hotkeys, resume flow, prompts
- [Non-interactive runs](./exec.md) – automate with `codex exec`
- [Non-interactive runs](./exec.md) – automate with `llmx exec`
- Ready for deeper customization? Head to [`advanced.md`](./advanced.md)

### CLI usage

| Command | Purpose | Example |
| ------------------ | ---------------------------------- | ------------------------------- |
| `codex` | Interactive TUI | `codex` |
| `codex "..."` | Initial prompt for interactive TUI | `codex "fix lint errors"` |
| `codex exec "..."` | Non-interactive "automation mode" | `codex exec "explain utils.ts"` |
| `llmx` | Interactive TUI | `llmx` |
| `llmx "..."` | Initial prompt for interactive TUI | `llmx "fix lint errors"` |
| `llmx exec "..."` | Non-interactive "automation mode" | `llmx exec "explain utils.ts"` |

Key flags: `--model/-m`, `--ask-for-approval/-a`.
### Resuming interactive sessions

- Run `codex resume` to display the session picker UI
- Resume most recent: `codex resume --last`
- Resume by id: `codex resume <SESSION_ID>` (You can get session ids from /status or `~/.codex/sessions/`)
- Run `llmx resume` to display the session picker UI
- Resume most recent: `llmx resume --last`
- Resume by id: `llmx resume <SESSION_ID>` (You can get session ids from /status or `~/.llmx/sessions/`)

Examples:

```shell
# Open a picker of recent sessions
codex resume
llmx resume

# Resume the most recent session
codex resume --last
llmx resume --last

# Resume a specific session by id
codex resume 7f9f9a2e-1b3c-4c7a-9b0e-123456789abc
llmx resume 7f9f9a2e-1b3c-4c7a-9b0e-123456789abc
```
### Running with a prompt as input

You can also run Codex CLI with a prompt as input:
You can also run LLMX CLI with a prompt as input:

```shell
codex "explain this codebase to me"
llmx "explain this codebase to me"
```

### Example prompts
@@ -49,22 +49,22 @@ Below are a few bite-size examples you can copy-paste. Replace the text in quote

| ✨ | What you type | What happens |
| --- | ------------------------------------------------------------------------------- | -------------------------------------------------------------------------- |
| 1 | `codex "Refactor the Dashboard component to React Hooks"` | Codex rewrites the class component, runs `npm test`, and shows the diff. |
| 2 | `codex "Generate SQL migrations for adding a users table"` | Infers your ORM, creates migration files, and runs them in a sandboxed DB. |
| 3 | `codex "Write unit tests for utils/date.ts"` | Generates tests, executes them, and iterates until they pass. |
| 4 | `codex "Bulk-rename *.jpeg -> *.jpg with git mv"` | Safely renames files and updates imports/usages. |
| 5 | `codex "Explain what this regex does: ^(?=.*[A-Z]).{8,}$"` | Outputs a step-by-step human explanation. |
| 6 | `codex "Carefully review this repo, and propose 3 high impact well-scoped PRs"` | Suggests impactful PRs in the current codebase. |
| 7 | `codex "Look for vulnerabilities and create a security review report"` | Finds and explains security bugs. |
| 1 | `llmx "Refactor the Dashboard component to React Hooks"` | LLMX rewrites the class component, runs `npm test`, and shows the diff. |
| 2 | `llmx "Generate SQL migrations for adding a users table"` | Infers your ORM, creates migration files, and runs them in a sandboxed DB. |
| 3 | `llmx "Write unit tests for utils/date.ts"` | Generates tests, executes them, and iterates until they pass. |
| 4 | `llmx "Bulk-rename *.jpeg -> *.jpg with git mv"` | Safely renames files and updates imports/usages. |
| 5 | `llmx "Explain what this regex does: ^(?=.*[A-Z]).{8,}$"` | Outputs a step-by-step human explanation. |
| 6 | `llmx "Carefully review this repo, and propose 3 high impact well-scoped PRs"` | Suggests impactful PRs in the current codebase. |
| 7 | `llmx "Look for vulnerabilities and create a security review report"` | Finds and explains security bugs. |

Looking to reuse your own instructions? Create slash commands with [custom prompts](./prompts.md).
### Memory with AGENTS.md

You can give Codex extra instructions and guidance using `AGENTS.md` files. Codex looks for them in the following places, and merges them top-down:
You can give LLMX extra instructions and guidance using `AGENTS.md` files. LLMX looks for them in the following places, and merges them top-down:

1. `~/.codex/AGENTS.md` - personal global guidance
2. Every directory from the repository root down to your current working directory (inclusive). In each directory, Codex first looks for `AGENTS.override.md` and uses it if present; otherwise it falls back to `AGENTS.md`. Use the override form when you want to replace inherited instructions for that directory.
1. `~/.llmx/AGENTS.md` - personal global guidance
2. Every directory from the repository root down to your current working directory (inclusive). In each directory, LLMX first looks for `AGENTS.override.md` and uses it if present; otherwise it falls back to `AGENTS.md`. Use the override form when you want to replace inherited instructions for that directory.

For more information on how to use AGENTS.md, see the [official AGENTS.md documentation](https://agents.md/).
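To make the merge order concrete, here is a minimal, hypothetical personal `AGENTS.md` — the file contents are illustrative only, not prescribed by the tool:

```markdown
<!-- ~/.llmx/AGENTS.md (personal global guidance, merged first) -->
- Prefer small, reviewable commits.
- Run the project's test suite before declaring a task done.
```

A repository-level `AGENTS.md` (or `AGENTS.override.md`) lower in the tree would then add to these instructions — or, in the override case, replace them — for that directory.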
@@ -76,32 +76,32 @@ Typing `@` triggers a fuzzy-filename search over the workspace root. Use up/down

#### Esc–Esc to edit a previous message

When the chat composer is empty, press Esc to prime “backtrack” mode. Press Esc again to open a transcript preview highlighting the last user message; press Esc repeatedly to step to older user messages. Press Enter to confirm and Codex will fork the conversation from that point, trim the visible transcript accordingly, and pre‑fill the composer with the selected user message so you can edit and resubmit it.
When the chat composer is empty, press Esc to prime “backtrack” mode. Press Esc again to open a transcript preview highlighting the last user message; press Esc repeatedly to step to older user messages. Press Enter to confirm and LLMX will fork the conversation from that point, trim the visible transcript accordingly, and pre‑fill the composer with the selected user message so you can edit and resubmit it.

In the transcript preview, the footer shows an `Esc edit prev` hint while editing is active.
#### `--cd`/`-C` flag

Sometimes it is not convenient to `cd` to the directory you want Codex to use as the "working root" before running Codex. Fortunately, `codex` supports a `--cd` option so you can specify whatever folder you want. You can confirm that Codex is honoring `--cd` by double-checking the **workdir** it reports in the TUI at the start of a new session.
Sometimes it is not convenient to `cd` to the directory you want LLMX to use as the "working root" before running LLMX. Fortunately, `llmx` supports a `--cd` option so you can specify whatever folder you want. You can confirm that LLMX is honoring `--cd` by double-checking the **workdir** it reports in the TUI at the start of a new session.
#### `--add-dir` flag

Need to work across multiple projects in one run? Pass `--add-dir` one or more times to expose extra directories as writable roots for the current session while keeping the main working directory unchanged. For example:

```shell
codex --cd apps/frontend --add-dir ../backend --add-dir ../shared
llmx --cd apps/frontend --add-dir ../backend --add-dir ../shared
```

Codex can then inspect and edit files in each listed directory without leaving the primary workspace.
LLMX can then inspect and edit files in each listed directory without leaving the primary workspace.
#### Shell completions

Generate shell completion scripts via:

```shell
codex completion bash
codex completion zsh
codex completion fish
llmx completion bash
llmx completion zsh
llmx completion fish
```

#### Image input

@@ -109,6 +109,6 @@ codex completion fish
Paste images directly into the composer (Ctrl+V / Cmd+V) to attach them to your prompt. You can also attach files via the CLI using `-i/--image` (comma‑separated):

```bash
codex -i screenshot.png "Explain this error"
codex --image img1.png,img2.jpg "Summarize these diagrams"
llmx -i screenshot.png "Explain this error"
llmx --image img1.png,img2.jpg "Summarize these diagrams"
```
@@ -10,14 +10,14 @@

### DotSlash

The GitHub Release also contains a [DotSlash](https://dotslash-cli.com/) file for the Codex CLI named `codex`. Using a DotSlash file makes it possible to make a lightweight commit to source control to ensure all contributors use the same version of an executable, regardless of what platform they use for development.
The GitHub Release also contains a [DotSlash](https://dotslash-cli.com/) file for the LLMX CLI named `llmx`. Using a DotSlash file makes it possible to make a lightweight commit to source control to ensure all contributors use the same version of an executable, regardless of what platform they use for development.

### Build from source

```bash
# Clone the repository and navigate to the root of the Cargo workspace.
git clone https://github.com/openai/codex.git
cd codex/codex-rs
git clone https://github.com/valknar/llmx.git
cd llmx/llmx-rs

# Install the Rust toolchain, if necessary.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
@@ -25,11 +25,11 @@ source "$HOME/.cargo/env"
rustup component add rustfmt
rustup component add clippy

# Build Codex.
# Build LLMX.
cargo build

# Launch the TUI with a sample prompt.
cargo run --bin codex -- "explain this codebase to me"
cargo run --bin llmx -- "explain this codebase to me"

# After making changes, ensure the code is clean.
cargo fmt -- --config imports_granularity=Item
@@ -1,8 +1,8 @@
## Codex open source fund
## LLMX open source fund

We're excited to launch a **$1 million initiative** supporting open source projects that use Codex CLI and other OpenAI models.
We're excited to launch a **$1 million initiative** supporting open source projects that use LLMX CLI and other OpenAI models.

- Grants are awarded up to **$25,000** in API credits.
- Applications are reviewed **on a rolling basis**.

**Interested? [Apply here](https://openai.com/form/codex-open-source-fund/).**
**Interested? [Apply here](https://openai.com/form/llmx-open-source-fund/).**
@@ -1,13 +1,13 @@
## Custom Prompts

Custom prompts turn your repeatable instructions into reusable slash commands, so you can trigger them without retyping or copy/pasting. Each prompt is a Markdown file that Codex expands into the conversation the moment you run it.
Custom prompts turn your repeatable instructions into reusable slash commands, so you can trigger them without retyping or copy/pasting. Each prompt is a Markdown file that LLMX expands into the conversation the moment you run it.

### Where prompts live

- Location: store prompts in `$CODEX_HOME/prompts/` (defaults to `~/.codex/prompts/`). Set `CODEX_HOME` if you want to use a different folder.
- File type: Codex only loads `.md` files. Non-Markdown files are ignored. Both regular files and symlinks to Markdown files are supported.
- Location: store prompts in `$CODEX_HOME/prompts/` (defaults to `~/.llmx/prompts/`). Set `CODEX_HOME` if you want to use a different folder.
- File type: LLMX only loads `.md` files. Non-Markdown files are ignored. Both regular files and symlinks to Markdown files are supported.
- Naming: The filename (without `.md`) becomes the prompt name. A file called `review.md` registers the prompt `review`.
- Refresh: Prompts are loaded when a session starts. Restart Codex (or start a new session) after adding or editing files.
- Refresh: Prompts are loaded when a session starts. Restart LLMX (or start a new session) after adding or editing files.
- Conflicts: Files whose names collide with built-in commands (like `init`) stay hidden in the slash popup, but you can still invoke them with `/prompts:<name>`.

### File format

@@ -27,24 +27,24 @@ Custom prompts turn your repeatable instructions into reusable slash commands, s
### Placeholders and arguments

- Numeric placeholders: `$1`–`$9` insert the first nine positional arguments you type after the command. `$ARGUMENTS` inserts all positional arguments joined by a single space. Use `$$` to emit a literal dollar sign (Codex leaves `$$` untouched).
- Numeric placeholders: `$1`–`$9` insert the first nine positional arguments you type after the command. `$ARGUMENTS` inserts all positional arguments joined by a single space. Use `$$` to emit a literal dollar sign (LLMX leaves `$$` untouched).
- Named placeholders: Tokens such as `$FILE` or `$TICKET_ID` expand from `KEY=value` pairs you supply. Keys are case-sensitive—use the same uppercase name in the command (for example, `FILE=...`).
- Quoted arguments: Double-quote any value that contains spaces, e.g. `TICKET_TITLE="Fix logging"`.
- Invocation syntax: Run prompts via `/prompts:<name> ...`. When the slash popup is open, typing either `prompts:` or the bare prompt name will surface `/prompts:<name>` suggestions.
- Error handling: If a prompt contains named placeholders, Codex requires them all. You will see a validation message if any are missing or malformed.
- Error handling: If a prompt contains named placeholders, LLMX requires them all. You will see a validation message if any are missing or malformed.
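As a sketch of the placeholder rules above, here is a hypothetical prompt file that combines both styles — the filename and key names are illustrative:

```markdown
<!-- ~/.llmx/prompts/fixticket.md (hypothetical example) -->
Investigate ticket $TICKET_ID in `$FILE`.
Start with this aspect: $1
Full request as typed: $ARGUMENTS
```

Under the rules above, this might be invoked as something like `/prompts:fixticket TICKET_ID=1234 FILE=src/main.rs urgent`.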
### Running a prompt

1. Start a new Codex session (ensures the prompt list is fresh).
1. Start a new LLMX session (ensures the prompt list is fresh).
2. In the composer, type `/` to open the slash popup.
3. Type `prompts:` (or start typing the prompt name) and select it with ↑/↓.
4. Provide any required arguments, press Enter, and Codex sends the expanded content.
4. Provide any required arguments, press Enter, and LLMX sends the expanded content.
### Examples
|
||||
|
||||
**Draft PR helper**
|
||||
|
||||
`~/.codex/prompts/draftpr.md`
|
||||
`~/.llmx/prompts/draftpr.md`
|
||||
|
||||
```markdown
|
||||
---
|
||||
@@ -54,4 +54,4 @@ description: Create feature branch, commit and open draft PR.
|
||||
Create a branch named `tibo/<feature_name>`, commit the changes, and open a draft PR.
|
||||
```
|
||||
|
||||
Usage: type `/prompts:draftpr` to have codex perform the work.
|
||||
Usage: type `/prompts:draftpr` to have llmx perform the work.
|
||||
|
||||
@@ -1,30 +1,30 @@
# Release Management

-Currently, we made Codex binaries available in three places:
+Currently, we made LLMX binaries available in three places:

-- GitHub Releases https://github.com/openai/codex/releases/
-- `@openai/codex` on npm: https://www.npmjs.com/package/@openai/codex
-- `codex` on Homebrew: https://formulae.brew.sh/cask/codex
+- GitHub Releases https://github.com/valknar/llmx/releases/
+- `@llmx/llmx` on npm: https://www.npmjs.com/package/@llmx/llmx
+- `llmx` on Homebrew: https://formulae.brew.sh/cask/llmx

# Cutting a Release

-Run the `codex-rs/scripts/create_github_release` script in the repository to publish a new release. The script will choose the appropriate version number depending on the type of release you are creating.
+Run the `llmx-rs/scripts/create_github_release` script in the repository to publish a new release. The script will choose the appropriate version number depending on the type of release you are creating.

To cut a new alpha release from `main` (feel free to cut alphas liberally):

```
-./codex-rs/scripts/create_github_release --publish-alpha
+./llmx-rs/scripts/create_github_release --publish-alpha
```

To cut a new _public_ release from `main` (which requires more caution), run:

```
-./codex-rs/scripts/create_github_release --publish-release
+./llmx-rs/scripts/create_github_release --publish-release
```

TIP: Add the `--dry-run` flag to report the next version number for the respective release and exit.

-Running the publishing script will kick off a GitHub Action to build the release, so go to https://github.com/openai/codex/actions/workflows/rust-release.yml to find the corresponding workflow. (Note: we should automate finding the workflow URL with `gh`.)
+Running the publishing script will kick off a GitHub Action to build the release, so go to https://github.com/valknar/llmx/actions/workflows/rust-release.yml to find the corresponding workflow. (Note: we should automate finding the workflow URL with `gh`.)

When the workflow finishes, the GitHub Release is "done," but you still have to consider npm and Homebrew.

@@ -34,12 +34,12 @@ The GitHub Action is responsible for publishing to npm.

## Publishing to Homebrew

-For Homebrew, we ship Codex as a cask. Homebrew's automation system checks our GitHub repo every few hours for a new release and will open a PR to update the cask with the latest binary.
+For Homebrew, we ship LLMX as a cask. Homebrew's automation system checks our GitHub repo every few hours for a new release and will open a PR to update the cask with the latest binary.

Inevitably, you just have to refresh this page periodically to see if the release has been picked up by their automation system:

-https://github.com/Homebrew/homebrew-cask/pulls?q=%3Apr+codex
+https://github.com/Homebrew/homebrew-cask/pulls?q=%3Apr+llmx

For reference, our Homebrew cask lives at:

-https://github.com/Homebrew/homebrew-cask/blob/main/Casks/c/codex.rb
+https://github.com/Homebrew/homebrew-cask/blob/main/Casks/c/llmx.rb
@@ -1,36 +1,36 @@
## Sandbox & approvals

-What Codex is allowed to do is governed by a combination of **sandbox modes** (what Codex is allowed to do without supervision) and **approval policies** (when you must confirm an action). This page explains the options, how they interact, and how the sandbox behaves on each platform.
+What LLMX is allowed to do is governed by a combination of **sandbox modes** (what LLMX is allowed to do without supervision) and **approval policies** (when you must confirm an action). This page explains the options, how they interact, and how the sandbox behaves on each platform.

### Approval policies

-Codex starts conservatively. Until you explicitly tell it a workspace is trusted, the CLI defaults to **read-only sandboxing** with the `read-only` approval preset. Codex can inspect files and answer questions, but every edit or command requires approval.
+LLMX starts conservatively. Until you explicitly tell it a workspace is trusted, the CLI defaults to **read-only sandboxing** with the `read-only` approval preset. LLMX can inspect files and answer questions, but every edit or command requires approval.

-When you mark a workspace as trusted (for example via the onboarding prompt or `/approvals` → “Trust this directory”), Codex upgrades the default preset to **Auto**: sandboxed writes inside the workspace with `AskForApproval::OnRequest`. Codex only interrupts you when it needs to leave the workspace or rerun something outside the sandbox.
+When you mark a workspace as trusted (for example via the onboarding prompt or `/approvals` → “Trust this directory”), LLMX upgrades the default preset to **Auto**: sandboxed writes inside the workspace with `AskForApproval::OnRequest`. LLMX only interrupts you when it needs to leave the workspace or rerun something outside the sandbox.

If you want maximum guardrails for a trusted repo, switch back to Read Only from the `/approvals` picker. If you truly need hands-off automation, use `Full Access`—but be deliberate, because that skips both the sandbox and approvals.

#### Defaults and recommendations

-- Every session starts in a sandbox. Until a repo is trusted, Codex enforces read-only access and will prompt before any write or command.
-- Marking a repo as trusted switches the default preset to Auto (`workspace-write` + `ask-for-approval on-request`) so Codex can keep iterating locally without nagging you.
+- Every session starts in a sandbox. Until a repo is trusted, LLMX enforces read-only access and will prompt before any write or command.
+- Marking a repo as trusted switches the default preset to Auto (`workspace-write` + `ask-for-approval on-request`) so LLMX can keep iterating locally without nagging you.
- The workspace always includes the current directory plus temporary directories like `/tmp`. Use `/status` to confirm the exact writable roots.
- You can override the defaults from the command line at any time:
-  - `codex --sandbox read-only --ask-for-approval on-request`
-  - `codex --sandbox workspace-write --ask-for-approval on-request`
+  - `llmx --sandbox read-only --ask-for-approval on-request`
+  - `llmx --sandbox workspace-write --ask-for-approval on-request`

### Can I run without ANY approvals?

-Yes, you can disable all approval prompts with `--ask-for-approval never`. This option works with all `--sandbox` modes, so you still have full control over Codex's level of autonomy. It will make its best attempt with whatever constraints you provide.
+Yes, you can disable all approval prompts with `--ask-for-approval never`. This option works with all `--sandbox` modes, so you still have full control over LLMX's level of autonomy. It will make its best attempt with whatever constraints you provide.

### Common sandbox + approvals combinations

| Intent | Flags | Effect |
| --- | --- | --- |
-| Safe read-only browsing | `--sandbox read-only --ask-for-approval on-request` | Codex can read files and answer questions. Codex requires approval to make edits, run commands, or access network. |
+| Safe read-only browsing | `--sandbox read-only --ask-for-approval on-request` | LLMX can read files and answer questions. LLMX requires approval to make edits, run commands, or access network. |
| Read-only non-interactive (CI) | `--sandbox read-only --ask-for-approval never` | Reads only; never escalates |
-| Let it edit the repo, ask if risky | `--sandbox workspace-write --ask-for-approval on-request` | Codex can read files, make edits, and run commands in the workspace. Codex requires approval for actions outside the workspace or for network access. |
-| Auto (preset; trusted repos) | `--full-auto` (equivalent to `--sandbox workspace-write` + `--ask-for-approval on-request`) | Codex runs sandboxed commands that can write inside the workspace without prompting. Escalates only when it must leave the sandbox. |
+| Let it edit the repo, ask if risky | `--sandbox workspace-write --ask-for-approval on-request` | LLMX can read files, make edits, and run commands in the workspace. LLMX requires approval for actions outside the workspace or for network access. |
+| Auto (preset; trusted repos) | `--full-auto` (equivalent to `--sandbox workspace-write` + `--ask-for-approval on-request`) | LLMX runs sandboxed commands that can write inside the workspace without prompting. Escalates only when it must leave the sandbox. |
| YOLO (not recommended) | `--dangerously-bypass-approvals-and-sandbox` (alias: `--yolo`) | No sandbox; no prompts |

> Note: In `workspace-write`, network is disabled by default unless enabled in config (`[sandbox_workspace_write].network_access = true`).
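As a concrete sketch of the note above, a config fragment enabling network access for workspace-write sessions might look like this (key names are taken from the note; the file path follows the `~/.llmx/` convention used elsewhere in these docs and is an assumption):

```toml
# Hypothetical ~/.llmx/config.toml fragment:
# opt in to network access inside the workspace-write sandbox
sandbox_mode = "workspace-write"

[sandbox_workspace_write]
network_access = true
```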
@@ -65,9 +65,9 @@ sandbox_mode = "read-only"

### Sandbox mechanics by platform {#platform-sandboxing-details}

-The mechanism Codex uses to enforce the sandbox policy depends on your OS:
+The mechanism LLMX uses to enforce the sandbox policy depends on your OS:

-- **macOS 12+** uses **Apple Seatbelt**. Codex invokes `sandbox-exec` with a profile that corresponds to the selected `--sandbox` mode, constraining filesystem and network access at the OS level.
+- **macOS 12+** uses **Apple Seatbelt**. LLMX invokes `sandbox-exec` with a profile that corresponds to the selected `--sandbox` mode, constraining filesystem and network access at the OS level.
- **Linux** combines **Landlock** and **seccomp** APIs to approximate the same guarantees. Kernel support is required; older kernels may not expose the necessary features.
- **Windows (experimental)**:
  - Launches commands inside a restricted token derived from an AppContainer profile.

@@ -76,20 +76,20 @@ The mechanism Codex uses to enforce the sandbox policy depends on your OS:

Windows sandbox support remains highly experimental. It cannot prevent file writes, deletions, or creations in any directory where the Everyone SID already has write permissions (for example, world-writable folders).

-In containerized Linux environments (for example Docker), sandboxing may not work when the host or container configuration does not expose Landlock/seccomp. In those cases, configure the container to provide the isolation you need and run Codex with `--sandbox danger-full-access` (or the shorthand `--dangerously-bypass-approvals-and-sandbox`) inside that container.
+In containerized Linux environments (for example Docker), sandboxing may not work when the host or container configuration does not expose Landlock/seccomp. In those cases, configure the container to provide the isolation you need and run LLMX with `--sandbox danger-full-access` (or the shorthand `--dangerously-bypass-approvals-and-sandbox`) inside that container.

-### Experimenting with the Codex Sandbox
+### Experimenting with the LLMX Sandbox

-To test how commands behave under Codex's sandbox, use the CLI helpers:
+To test how commands behave under LLMX's sandbox, use the CLI helpers:

```
# macOS
-codex sandbox macos [--full-auto] [COMMAND]...
+llmx sandbox macos [--full-auto] [COMMAND]...

# Linux
-codex sandbox linux [--full-auto] [COMMAND]...
+llmx sandbox linux [--full-auto] [COMMAND]...

# Legacy aliases
-codex debug seatbelt [--full-auto] [COMMAND]...
-codex debug landlock [--full-auto] [COMMAND]...
+llmx debug seatbelt [--full-auto] [COMMAND]...
+llmx debug landlock [--full-auto] [COMMAND]...
```
@@ -8,24 +8,24 @@ Slash commands are special commands you can type that start with `/`.

### Built-in slash commands

-Control Codex’s behavior during an interactive session with slash commands.
+Control LLMX’s behavior during an interactive session with slash commands.

| Command | Purpose |
| --- | --- |
| `/model` | choose what model and reasoning effort to use |
-| `/approvals` | choose what Codex can do without approval |
+| `/approvals` | choose what LLMX can do without approval |
| `/review` | review my current changes and find issues |
| `/new` | start a new chat during a conversation |
-| `/init` | create an AGENTS.md file with instructions for Codex |
+| `/init` | create an AGENTS.md file with instructions for LLMX |
| `/compact` | summarize conversation to prevent hitting the context limit |
-| `/undo` | ask Codex to undo a turn |
+| `/undo` | ask LLMX to undo a turn |
| `/diff` | show git diff (including untracked files) |
| `/mention` | mention a file |
| `/status` | show current session configuration and token usage |
| `/mcp` | list configured MCP tools |
-| `/logout` | log out of Codex |
-| `/quit` | exit Codex |
-| `/exit` | exit Codex |
+| `/logout` | log out of LLMX |
+| `/quit` | exit LLMX |
+| `/exit` | exit LLMX |
| `/feedback` | send logs to maintainers |

---
@@ -1,3 +1,3 @@
## Zero data retention (ZDR) usage

-Codex CLI natively supports OpenAI organizations with [Zero Data Retention (ZDR)](https://platform.openai.com/docs/guides/your-data#zero-data-retention) enabled.
+LLMX CLI natively supports OpenAI organizations with [Zero Data Retention (ZDR)](https://platform.openai.com/docs/guides/your-data#zero-data-retention) enabled.