## Advanced
If you already lean on LLMX every day and just need a little more control, this page collects the knobs you are most likely to reach for: tweak defaults in [Config](./config.md), add extra tools through [Model Context Protocol support](#model-context-protocol), and script full runs with [`llmx exec`](./exec.md). Jump to the section you need and keep building.
## Config quickstart {#config-quickstart}
Most day-to-day tuning lives in `config.toml`: set approval + sandbox presets, pin model defaults, and add MCP server launchers. The [Config guide](./config.md) walks through every option and provides copy-paste examples for common setups.
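
For orientation, a minimal `config.toml` might look like the sketch below. The keys shown (`model`, `approval_policy`, `sandbox_mode`) and their values are illustrative; treat the [Config guide](./config.md) as the authoritative reference:

```toml
# ~/.llmx/config.toml -- an illustrative sketch, not an exhaustive reference.
model = "o4-mini"                # default model for new sessions
approval_policy = "on-request"   # untrusted | on-failure | on-request | never
sandbox_mode = "workspace-write" # read-only | workspace-write | danger-full-access
```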
## Tracing / verbose logging {#tracing-verbose-logging}
Because LLMX is written in Rust, it honors the `RUST_LOG` environment variable to configure its logging behavior.
The TUI defaults to `RUST_LOG=codex_core=info,codex_tui=info,codex_rmcp_client=info` and log messages are written to `~/.llmx/log/llmx-tui.log`, so you can leave the following running in a separate terminal to monitor log messages as they are written:
```bash
tail -F ~/.llmx/log/llmx-tui.log
```
By comparison, the non-interactive mode (`llmx exec`) defaults to `RUST_LOG=error`, but messages are printed inline, so there is no need to monitor a separate file.
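
If you need more detail from a one-off run, you can override `RUST_LOG` inline. A sketch, reusing the log targets from the TUI default above and assuming `llmx exec` accepts the prompt as an argument (see the [exec guide](./exec.md)):

```bash
# One-off verbose run: raise the log level for a single invocation.
RUST_LOG=codex_core=debug llmx exec "summarize the recent changes in this repo"
```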
See the Rust documentation on [`RUST_LOG`](https://docs.rs/env_logger/latest/env_logger/#enabling-logging) for more information on the configuration options.
## Model Context Protocol (MCP) {#model-context-protocol}
The LLMX CLI and IDE extension are MCP clients, which means they can be configured to connect to MCP servers. For more information, refer to the [config docs](./config.md#mcp-integration).
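
As a sketch, a server launcher entry in `config.toml` might look like the following. The `[mcp_servers.*]` table and keys follow the pattern described in the config docs linked above; the name `notes` and the reference filesystem server are just examples:

```toml
# Launch the reference filesystem MCP server under the name "notes".
[mcp_servers.notes]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/notes"]
```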
## Using LLMX as an MCP Server {#mcp-server}
The LLMX CLI can also be run as an MCP _server_ via `llmx mcp-server`. For example, you can use `llmx mcp-server` to make LLMX available as a tool inside a multi-agent framework such as the OpenAI [Agents SDK](https://platform.openai.com/docs/guides/agents). Use the separate `llmx mcp` subcommand to add, list, get, or remove MCP server launchers in your configuration.
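
For example, the launcher-management subcommands might be used like this (a sketch; run `llmx mcp --help` for the exact syntax):

```bash
# Add a launcher named "notes", list what is configured, then remove it.
llmx mcp add notes -- npx -y @modelcontextprotocol/server-filesystem /path/to/notes
llmx mcp list
llmx mcp remove notes
```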
### LLMX MCP Server Quickstart {#mcp-server-quickstart}
You can launch an LLMX MCP server with the [Model Context Protocol Inspector](https://modelcontextprotocol.io/legacy/tools/inspector):
```bash
npx @modelcontextprotocol/inspector llmx mcp-server
```
Send a `tools/list` request and you will see that there are two tools available. In raw JSON-RPC, the request the inspector sends on your behalf is simply:
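
```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
```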
**`llmx`** - Run an LLMX session. Accepts configuration parameters matching the LLMX Config struct. The `llmx` tool takes the following properties:

| Property                | Type   | Description |
| ----------------------- | ------ | ----------- |
| **`prompt`** (required) | string | The initial user prompt to start the LLMX conversation. |
| `approval-policy`       | string | Approval policy for shell commands generated by the model: `untrusted`, `on-failure`, `on-request`, or `never`. |
| `base-instructions`     | string | The set of instructions to use instead of the default ones. |
| `config`                | object | Individual [config settings](https://github.com/valknar/llmx/blob/main/docs/config.md#config) that will override what is in `$CODEX_HOME/config.toml`. |
| `cwd`                   | string | Working directory for the session. If relative, resolved against the server process's current directory. |
| `model`                 | string | Optional override for the model name (e.g. `o3`, `o4-mini`). |
| `profile`               | string | Configuration profile from `config.toml` to specify default options. |
| `sandbox`               | string | Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`. |
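
Invoking the tool is an ordinary MCP `tools/call` request. A sketch of the message the inspector builds from your input, using the property names from the table above (the prompt and argument values are examples only):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "llmx",
    "arguments": {
      "prompt": "List the files in this repo and summarize what the project does.",
      "approval-policy": "never",
      "sandbox": "read-only"
    }
  }
}
```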
**`llmx-reply`** - Continue an LLMX session by providing the conversation id and the next prompt. The `llmx-reply` tool takes the following properties:

| Property                        | Type   | Description |
| ------------------------------- | ------ | ----------- |
| **`prompt`** (required)         | string | The next user prompt to continue the LLMX conversation. |
| **`conversationId`** (required) | string | The id of the conversation to continue. |
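
Continuing a session is another `tools/call`, this time targeting `llmx-reply`. The conversation id comes from the result of the initial `llmx` call; the value below is a placeholder:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "llmx-reply",
    "arguments": {
      "conversationId": "<id-from-the-llmx-result>",
      "prompt": "Great, now add unit tests."
    }
  }
}
```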
### Trying it Out {#mcp-server-trying-it-out}
> [!TIP]
> LLMX often takes a few minutes to run. To accommodate this, adjust the MCP inspector's Request and Total timeouts to 600000ms (10 minutes) under ⛭ Configuration.
Use the MCP inspector and `llmx mcp-server` to build a simple tic-tac-toe game with the following settings:
**approval-policy:** never

**prompt:** Implement a simple tic-tac-toe game with HTML, JavaScript, and CSS. Write the game in a single file called index.html.

**sandbox:** workspace-write
Click "Run Tool" and you should see a list of events emitted from the LLMX MCP server as it builds the game.