Non-interactive mode
Use LLMX in non-interactive mode to automate common workflows.
llmx exec "count the total number of lines of code in this project"
In non-interactive mode, LLMX does not ask for command or edit approvals. By default it runs in read-only mode, so it cannot edit files or run commands that require network access.
Use llmx exec --full-auto to allow file edits. Use llmx exec --sandbox danger-full-access to allow edits and networked commands.
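For example (prompts are illustrative):

llmx exec --full-auto "fix the failing unit test"
llmx exec --sandbox danger-full-access "update dependencies and run the test suite"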
Default output mode
By default, LLMX streams its activity to stderr and only writes the final message from the agent to stdout. This makes it easier to pipe llmx exec into another tool without extra filtering.
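For example, the final message can be piped directly into another command while progress stays on stderr (prompt illustrative):

llmx exec "suggest a one-line commit message for the staged changes" | git commit -F -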
To write the final message of llmx exec to a file, you can use a shell redirect like >, or the dedicated -o/--output-last-message flag.
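For example, either of the following captures the final message in summary.md (prompt and filename are illustrative):

llmx exec "summarize this project" > summary.md
llmx exec "summarize this project" -o summary.md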
JSON output mode
llmx exec supports a --json mode that streams events to stdout as JSON Lines (JSONL) while the agent runs.
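For example (prompt illustrative):

llmx exec --json "check whether this repo has a README"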
Supported event types:
- `thread.started` - when a thread is started or resumed.
- `turn.started` - when a turn starts. A turn encompasses all events between the user message and the assistant response.
- `turn.completed` - when a turn completes; includes token usage.
- `turn.failed` - when a turn fails; includes error details.
- `item.started` / `item.updated` / `item.completed` - when a thread item is added/updated/completed.
- `error` - when the stream reports an unrecoverable error; includes the error message.
Supported item types:
- `agent_message` - assistant message.
- `reasoning` - a summary of the assistant's thinking.
- `command_execution` - assistant executing a command.
- `file_change` - assistant making file changes.
- `mcp_tool_call` - assistant calling an MCP tool.
- `web_search` - assistant performing a web search.
- `todo_list` - the agent's running plan when the plan tool is active, updating as steps change.
Typically, an agent_message is added at the end of the turn.
Sample output:
{"type":"thread.started","thread_id":"0199a213-81c0-7800-8aa1-bbab2a035a53"}
{"type":"turn.started"}
{"type":"item.completed","item":{"id":"item_0","type":"reasoning","text":"**Searching for README files**"}}
{"type":"item.started","item":{"id":"item_1","type":"command_execution","command":"bash -lc ls","aggregated_output":"","status":"in_progress"}}
{"type":"item.completed","item":{"id":"item_1","type":"command_execution","command":"bash -lc ls","aggregated_output":"2025-09-11\nAGENTS.md\nCHANGELOG.md\ncliff.toml\ncodex-cli\ncodex-rs\ndocs\nexamples\nflake.lock\nflake.nix\nLICENSE\nnode_modules\nNOTICE\npackage.json\npnpm-lock.yaml\npnpm-workspace.yaml\nPNPM.md\nREADME.md\nscripts\nsdk\ntmp\n","exit_code":0,"status":"completed"}}
{"type":"item.completed","item":{"id":"item_2","type":"reasoning","text":"**Checking repository root for README**"}}
{"type":"item.completed","item":{"id":"item_3","type":"agent_message","text":"Yep — there’s a `README.md` in the repository root."}}
{"type":"turn.completed","usage":{"input_tokens":24763,"cached_input_tokens":24448,"output_tokens":122}}
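Because each line on stdout is a standalone JSON object, the stream is easy to post-process with standard tools. As a sketch (assuming jq is installed; prompt illustrative), this prints only the final agent message:

llmx exec --json "check whether this repo has a README" | jq -r 'select(.type == "item.completed" and .item.type == "agent_message") | .item.text'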
Structured output
By default, the agent responds with natural language. Use --output-schema to provide a JSON Schema that defines the expected JSON output.
The JSON Schema must follow the strict schema rules: as in the sample below, every object must list all of its properties in required and set additionalProperties to false.
Sample schema:
{
  "type": "object",
  "properties": {
    "project_name": { "type": "string" },
    "programming_languages": { "type": "array", "items": { "type": "string" } }
  },
  "required": ["project_name", "programming_languages"],
  "additionalProperties": false
}
llmx exec "Extract details of the project" --output-schema ~/schema.json
...
{"project_name":"LLMX CLI","programming_languages":["Rust","TypeScript","Shell"]}
Combine --output-schema with -o to print only the final JSON output. You can also pass a file path to -o to save the JSON output to a file.
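For example, this writes only the schema-conforming JSON to project.json (filename illustrative):

llmx exec "Extract details of the project" --output-schema ~/schema.json -o project.json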
Git repository requirement
LLMX requires a Git repository to avoid destructive changes. To disable this check, use llmx exec --skip-git-repo-check.
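For example, when working in a scratch directory that is not under version control (path illustrative):

cd /tmp/scratch && llmx exec --skip-git-repo-check "scaffold a hello-world script"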
Resuming non-interactive sessions
Resume a previous non-interactive session with llmx exec resume <SESSION_ID> or llmx exec resume --last. This preserves conversation context so you can ask follow-up questions or give new tasks to the agent.
llmx exec "Review the change, look for use-after-free issues"
llmx exec resume --last "Fix use-after-free issues"
Only the conversation context is preserved; you must still provide flags to customize LLMX behavior.
llmx exec --model gpt-5-llmx --json "Review the change, look for use-after-free issues"
llmx exec --model gpt-5-llmx --json resume --last "Fix use-after-free issues"
Authentication
By default, llmx exec uses the same authentication method as the LLMX CLI and the VS Code extension. You can override the API key by setting the CODEX_API_KEY environment variable.
CODEX_API_KEY=your-api-key-here llmx exec "Fix merge conflict"
NOTE: CODEX_API_KEY is only supported in llmx exec.