Phase 5: Configuration & Documentation

Updated all documentation and configuration files:

Documentation changes:
- Updated README.md to describe LLMX as a LiteLLM-powered fork
- Updated CLAUDE.md with LiteLLM integration details
- Updated 50+ markdown files across docs/, llmx-rs/, llmx-cli/, sdk/
- Changed all references: codex → llmx, Codex → LLMX
- Updated package references: @openai/codex → @llmx/llmx
- Updated repository URLs: github.com/openai/codex → github.com/valknar/llmx (one way to script this sweep is sketched below)
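
A hypothetical sketch of how such a sweep could be scripted; the commit does not record the exact commands, and the use of `rg` and GNU `sed -i` here is an assumption:

```bash
# Rewrite the most specific strings first so the generic codex -> llmx
# pass does not clobber package names or repository URLs.
rg -l -e codex -e Codex --glob '*.md' docs llmx-rs llmx-cli sdk \
  | xargs sed -i \
      -e 's#@openai/codex#@llmx/llmx#g' \
      -e 's#github.com/openai/codex#github.com/valknar/llmx#g' \
      -e 's#Codex#LLMX#g' \
      -e 's#codex#llmx#g'
```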

Configuration changes:
- Updated .github/dependabot.yaml
- Updated .github workflow files
- Updated cliff.toml (changelog configuration)
- Updated Cargo.toml comments

Key branding updates:
- Project description: "coding agent from OpenAI" → "coding agent powered by LiteLLM"
- Added attribution to original OpenAI Codex project
- Documented LiteLLM integration benefits

Files changed: 51 files (559 insertions, 559 deletions)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
commit c493ea1347 (parent 0c2c36e14e)
Author: Sebastian Krüger
Date: 2025-11-11 14:45:40 +01:00

@@ -1,6 +1,6 @@
## Advanced
-If you already lean on Codex every day and just need a little more control, this page collects the knobs you are most likely to reach for: tweak defaults in [Config](./config.md), add extra tools through [Model Context Protocol support](#model-context-protocol), and script full runs with [`codex exec`](./exec.md). Jump to the section you need and keep building.
+If you already lean on LLMX every day and just need a little more control, this page collects the knobs you are most likely to reach for: tweak defaults in [Config](./config.md), add extra tools through [Model Context Protocol support](#model-context-protocol), and script full runs with [`llmx exec`](./exec.md). Jump to the section you need and keep building.
## Config quickstart {#config-quickstart}
@@ -8,62 +8,62 @@ Most day-to-day tuning lives in `config.toml`: set approval + sandbox presets, p
## Tracing / verbose logging {#tracing-verbose-logging}
-Because Codex is written in Rust, it honors the `RUST_LOG` environment variable to configure its logging behavior.
+Because LLMX is written in Rust, it honors the `RUST_LOG` environment variable to configure its logging behavior.
-The TUI defaults to `RUST_LOG=codex_core=info,codex_tui=info,codex_rmcp_client=info` and log messages are written to `~/.codex/log/codex-tui.log`, so you can leave the following running in a separate terminal to monitor log messages as they are written:
+The TUI defaults to `RUST_LOG=codex_core=info,codex_tui=info,codex_rmcp_client=info` and log messages are written to `~/.llmx/log/llmx-tui.log`, so you can leave the following running in a separate terminal to monitor log messages as they are written:
```bash
-tail -F ~/.codex/log/codex-tui.log
+tail -F ~/.llmx/log/llmx-tui.log
```
-By comparison, the non-interactive mode (`codex exec`) defaults to `RUST_LOG=error`, but messages are printed inline, so there is no need to monitor a separate file.
+By comparison, the non-interactive mode (`llmx exec`) defaults to `RUST_LOG=error`, but messages are printed inline, so there is no need to monitor a separate file.
See the Rust documentation on [`RUST_LOG`](https://docs.rs/env_logger/latest/env_logger/#enabling-logging) for more information on the configuration options.
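
To make the override concrete, here is a minimal sketch; it assumes the renamed binary is on your `PATH` as `llmx`, and the log targets follow the defaults quoted above (they may differ per release):

```bash
# Raise the TUI log level for one run; targets follow the defaults above.
RUST_LOG=codex_core=debug,codex_tui=debug llmx

# `llmx exec` logs at `error` by default; raise it inline instead:
RUST_LOG=info llmx exec "summarize the open TODOs in this repo"
```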
## Model Context Protocol (MCP) {#model-context-protocol}
-The Codex CLI and IDE extension is a MCP client which means that it can be configured to connect to MCP servers. For more information, refer to the [`config docs`](./config.md#mcp-integration).
+The LLMX CLI and IDE extension is a MCP client which means that it can be configured to connect to MCP servers. For more information, refer to the [`config docs`](./config.md#mcp-integration).
-## Using Codex as an MCP Server {#mcp-server}
+## Using LLMX as an MCP Server {#mcp-server}
-The Codex CLI can also be run as an MCP _server_ via `codex mcp-server`. For example, you can use `codex mcp-server` to make Codex available as a tool inside of a multi-agent framework like the OpenAI [Agents SDK](https://platform.openai.com/docs/guides/agents). Use `codex mcp` separately to add/list/get/remove MCP server launchers in your configuration.
+The LLMX CLI can also be run as an MCP _server_ via `llmx mcp-server`. For example, you can use `llmx mcp-server` to make LLMX available as a tool inside of a multi-agent framework like the OpenAI [Agents SDK](https://platform.openai.com/docs/guides/agents). Use `llmx mcp` separately to add/list/get/remove MCP server launchers in your configuration.
-### Codex MCP Server Quickstart {#mcp-server-quickstart}
+### LLMX MCP Server Quickstart {#mcp-server-quickstart}
-You can launch a Codex MCP server with the [Model Context Protocol Inspector](https://modelcontextprotocol.io/legacy/tools/inspector):
+You can launch a LLMX MCP server with the [Model Context Protocol Inspector](https://modelcontextprotocol.io/legacy/tools/inspector):
```bash
-npx @modelcontextprotocol/inspector codex mcp-server
+npx @modelcontextprotocol/inspector llmx mcp-server
```
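
If you prefer to stay in the terminal, the inspector also ships a CLI mode; a sketch, assuming a recent inspector version (verify the `--cli`/`--method` flags against the version you have installed):

```bash
# List the tools the LLMX MCP server exposes, without the inspector UI.
npx @modelcontextprotocol/inspector --cli llmx mcp-server --method tools/list
```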
Send a `tools/list` request and you will see that there are two tools available:
-**`codex`** - Run a Codex session. Accepts configuration parameters matching the Codex Config struct. The `codex` tool takes the following properties:
+**`llmx`** - Run a LLMX session. Accepts configuration parameters matching the LLMX Config struct. The `llmx` tool takes the following properties:
| Property | Type | Description |
| ----------------------- | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| **`prompt`** (required) | string | The initial user prompt to start the Codex conversation. |
+| **`prompt`** (required) | string | The initial user prompt to start the LLMX conversation. |
| `approval-policy` | string | Approval policy for shell commands generated by the model: `untrusted`, `on-failure`, `on-request`, `never`. |
| `base-instructions` | string | The set of instructions to use instead of the default ones. |
-| `config` | object | Individual [config settings](https://github.com/openai/codex/blob/main/docs/config.md#config) that will override what is in `$CODEX_HOME/config.toml`. |
+| `config` | object | Individual [config settings](https://github.com/valknar/llmx/blob/main/docs/config.md#config) that will override what is in `$CODEX_HOME/config.toml`. |
| `cwd` | string | Working directory for the session. If relative, resolved against the server process's current directory. |
| `model` | string | Optional override for the model name (e.g. `o3`, `o4-mini`). |
| `profile` | string | Configuration profile from `config.toml` to specify default options. |
| `sandbox` | string | Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`. |
-**`codex-reply`** - Continue a Codex session by providing the conversation id and prompt. The `codex-reply` tool takes the following properties:
+**`llmx-reply`** - Continue a LLMX session by providing the conversation id and prompt. The `llmx-reply` tool takes the following properties:
| Property | Type | Description |
| ------------------------------- | ------ | -------------------------------------------------------- |
-| **`prompt`** (required) | string | The next user prompt to continue the Codex conversation. |
+| **`prompt`** (required) | string | The next user prompt to continue the LLMX conversation. |
| **`conversationId`** (required) | string | The id of the conversation to continue. |
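
A sketch of continuing a session via the inspector's CLI mode, assuming the initial `llmx` call returned a conversation id (the `--tool-arg` syntax is the inspector CLI's; verify it for your version):

```bash
# Substitute the id returned by the first `llmx` tool call.
npx @modelcontextprotocol/inspector --cli llmx mcp-server \
  --method tools/call --tool-name llmx-reply \
  --tool-arg conversationId=<conversation-id> \
  --tool-arg prompt="Now add a win/draw detector"
```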
### Trying it Out {#mcp-server-trying-it-out}
> [!TIP]
-> Codex often takes a few minutes to run. To accommodate this, adjust the MCP inspector's Request and Total timeouts to 600000ms (10 minutes) under ⛭ Configuration.
+> LLMX often takes a few minutes to run. To accommodate this, adjust the MCP inspector's Request and Total timeouts to 600000ms (10 minutes) under ⛭ Configuration.
-Use the MCP inspector and `codex mcp-server` to build a simple tic-tac-toe game with the following settings:
+Use the MCP inspector and `llmx mcp-server` to build a simple tic-tac-toe game with the following settings:
**approval-policy:** never
@@ -71,4 +71,4 @@ Use the MCP inspector and `codex mcp-server` to build a simple tic-tac-toe game
**sandbox:** workspace-write
Click "Run Tool" and you should see a list of events emitted from the Codex MCP server as it builds the game.
Click "Run Tool" and you should see a list of events emitted from the LLMX MCP server as it builds the game.