- Fixed environment variable names from LITELLM_* to LLMX_*
  - LITELLM_BASE_URL → LLMX_BASE_URL
  - LITELLM_API_KEY → LLMX_API_KEY
- Updated all documentation links to use absolute GitHub URLs instead of relative paths
  - Fixes 404 errors when viewing README on npm registry
  - All ./docs/ and ./LITELLM-SETUP.md links now point to github.com/valknarthing/llmx
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Update project branding to reflect community fork status:
- Changed welcome message from "OpenAI's command-line coding agent"
to "your command-line coding agent"
- Updated system prompt from "led by OpenAI" to "community project"
- Removed broken developers.openai.com/llmx documentation URLs
- Updated Windows setup instructions with correct npm package name
- Removed OpenAI CLA (not applicable to community fork)
- Removed OpenAI open source fund documentation
- Updated config docs to remove managed configuration references
- Changed npm package reference from @openai/llmx to @valknarthing/llmx
These changes complete the rebranding from OpenAI's project to an
independent community fork while maintaining proper attribution to
the original project in README.md.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Remove .github/*.png images (not needed)
- Remove Homebrew installation instructions (no cask available)
- Fix original project URL to point to openai/codex
- Fix GitHub URLs from valknar to valknarthing
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This release represents a comprehensive transformation of the codebase from Codex to LLMX,
enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.
## Major Changes
### Phase 1: Repository & Infrastructure Setup
- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration
### Phase 2: Rust Workspace Transformation
- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after mass rename
### Phase 3: LiteLLM Integration
- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between Responses API and Chat Completions API
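To make the renamed variables concrete, here is a minimal sketch of pointing LLMX at a LiteLLM proxy; the URL, key, and prompt below are illustrative placeholders rather than values taken from the codebase:

```bash
# Hypothetical values - substitute your own LiteLLM proxy URL and key.
export LLMX_BASE_URL="http://localhost:4000"   # LiteLLM proxy endpoint
export LLMX_API_KEY="sk-example-key"           # key accepted by that proxy

# With the proxy in place, llmx talks to whichever provider LiteLLM routes to.
llmx "explain what this repository does"
```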
### Phase 4: TypeScript/Node.js Components
- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated TypeScript SDK to use new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging
### Phase 5: Configuration & Documentation
- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation
### Phase 6: Testing & Validation
- Fixed all Rust tests for new naming scheme
- Updated snapshot tests in TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for new API endpoints
- Ensured compatibility with Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)
### Phase 7: Build & Release Pipeline
- Updated GitHub Actions workflows for LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for new package names
- Made Apple code signing optional in release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to workspace
- Updated dotslash configuration for new binary names
### Phase 8: Final Polish
- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references
## Technical Details
**Breaking Changes:**
- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*)
- npm package renamed to `@valknar/llmx`
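For existing Codex users, a rough migration sketch based on the renames above might look like this (paths and variable values are examples; adjust to your own setup):

```bash
# Sketch only - the values here are placeholders.
# 1. Move the config directory to its new location.
[ -d ~/.codex ] && mv ~/.codex ~/.llmx

# 2. Switch CODEX_* / OPENAI_BASE_URL settings in your shell profile
#    over to their LLMX_* equivalents.
export LLMX_BASE_URL="$OPENAI_BASE_URL"
export LLMX_API_KEY="your-key-here"

# 3. Install the renamed npm package; the binary is now `llmx`.
npm install -g @valknar/llmx
llmx --help
```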
**New Features:**
- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms
**Files Changed:**
- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing
**Dependencies:**
- Updated Cargo.lock with new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for LLMX backend
This release establishes LLMX as a standalone project with comprehensive LiteLLM
integration, maintaining full backward compatibility with existing functionality
while opening support for a wide ecosystem of LLM providers.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
I was missing an example config.toml, and following docs/config.md
alone was slow going. I had GPT-5 scan the codebase for every accepted
config key,
check the defaults, and generate a single example config.toml with
annotations. It lists all keys Codex reads from TOML, sets each to its
effective default where it exists, leaves optional ones commented, and
adds short comments on purpose and valid values. This should make
onboarding faster and reduce configuration errors. I can rename it to
config.example.toml or move it under docs/ if you prefer.
This pull request adds a new documentation section to explain the
available slash commands in Codex. The update introduces a clear
overview and a reference table for built-in commands, making it easier
for users to understand and utilize these features.
Documentation updates:
* Added a new section to `docs/slash_commands.md` describing what slash
commands are and listing all built-in commands with their purposes in a
formatted table.
I finished reading “Getting Started,” but couldn’t find the
“Configuration” section in the README. After following the link, I
realized “Configuration” is in a separate file, so I updated the README
accordingly.
Co-authored-by: Eric Traut <etraut@openai.com>
Expand the custom prompts documentation and link it from other guides. Show saved prompt metadata in the slash-command popup, with tests covering description fallbacks.
Apparently we were not running our `pnpm run prettier` check in CI, so
many files that were covered by the existing Prettier check were not
well-formatted.
This updates CI and formats the files.
The default install command causes unexpected code to be executed:
```
npm install -g @openai/codex # Alternatively: `brew install codex`
```
The problem is that some environments treat `#` as a literal string
rather than the start of a comment. As a result, the user ends up
executing this instead (because it's in backticks):
```
brew install codex
```
And then the npm command errors, because it tries to install a package
literally named `#`.
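A safer way to present the same instructions is to keep the alternative on its own line, so nothing after `#` can be mis-parsed:

```bash
npm install -g @openai/codex
# Alternatively: brew install codex
```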
This PR cleans up the monolithic README by breaking it into a set of
navigable pages under docs/ (install, getting started, configuration,
authentication, sandboxing and approvals, platform details, FAQ, ZDR,
contributing, license). The top‑level README is now more concise and
intuitive (with corrected screenshots).
It also consolidates overlapping content from codex-rs/README.md into
the top‑level docs and updates links accordingly. The codex-rs README
remains in place for now as a pointer and for continuity.
Finally, added an extensive config reference table at the bottom of
docs/config.md.
---------
Co-authored-by: easong-openai <easong@openai.com>
Motivation: we have users who use their API key even though they want
to use their ChatGPT account. We want to give them the chance to always
log in with their account.
This PR displays login options when the user is not signed in with
ChatGPT. Even if you have set an OpenAI API key as an environment
variable, you will still be prompted to log in with ChatGPT.
We’ve also added a new flag, `always_use_api_key_signing` (false by
default), which ensures you are never asked to log in with ChatGPT and
always defaults to using your API key.
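As a sketch of how that might be enabled, assuming the flag is an ordinary config key that can be set through the existing `-c` override (the exact mechanism is not spelled out in this description):

```bash
# Hypothetical usage - assumes always_use_api_key_signing is a plain config key.
codex -c always_use_api_key_signing=true
```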
https://github.com/user-attachments/assets/b61ebfa9-3c5e-4ab7-bf94-395c23a0e0af
After ChatGPT sign in:
https://github.com/user-attachments/assets/d58b366b-c46a-428f-a22f-2ac230f991c0
Users on "headless" machines, such as WSL users, are understandable
having trouble authenticating successfully. To date, I have been
providing one-off user support on issues such as
https://github.com/openai/codex/issues/2000, but we need a more detailed
explanation that we can link to so that users can self-serve. This PR
aims to provide detailed information that we can link to in response to
user issues going forward.
That said, it would also be helpful if we employed heuristics to detect
this issue at runtime, and/or we should just link to these docs as part
of the `codex login` flow.
This adds support for easily running Codex backed by a local Ollama
instance running our new open source models. See
https://github.com/openai/gpt-oss for details.
If you pass in `--oss` you'll be prompted to install/launch ollama, and
it will automatically download the 20b model and attempt to use it.
We'll likely want to expand this with some options later to make the
experience smoother for users who can't run the 20b or want to run the
120b.
Co-authored-by: Michael Bolin <mbolin@openai.com>
At a high level, we try to design `config.toml` so that you don't have
to "comment out a lot of stuff" when testing different options.
Previously, defining a sandbox policy was somewhat at odds with this
principle because you would define the policy as attributes of
`[sandbox]` like so:
```toml
[sandbox]
mode = "workspace-write"
writable_roots = [ "/tmp" ]
```
but if you wanted to temporarily change to a read-only sandbox, you
might feel compelled to modify your file to be:
```toml
[sandbox]
mode = "read-only"
# mode = "workspace-write"
# writable_roots = [ "/tmp" ]
```
Technically, commenting out `writable_roots` would not be strictly
necessary, as `mode = "read-only"` would ignore `writable_roots`, but
it's still a reasonable thing to do to keep things tidy.
Currently, the various values for `mode` do not support that many
attributes, so this is not that hard to maintain, but one could imagine
this becoming more complex in the future.
In this PR, we change Codex CLI so that it no longer recognizes
`[sandbox]`. Instead, it introduces a top-level option, `sandbox_mode`,
and `[sandbox_workspace_write]` is used to further configure the sandbox
when `sandbox_mode = "workspace-write"` is used:
```toml
sandbox_mode = "workspace-write"
[sandbox_workspace_write]
writable_roots = [ "/tmp" ]
```
This feels a bit more future-proof in that it is less tedious to
configure different sandboxes:
```toml
sandbox_mode = "workspace-write"
[sandbox_read_only]
# read-only options here...
[sandbox_workspace_write]
writable_roots = [ "/tmp" ]
[sandbox_danger_full_access]
# danger-full-access options here...
```
In this scheme, you never need to comment out the configuration for an
individual sandbox type: you only need to redefine `sandbox_mode`.
Relatedly, previous to this change, a user had to do `-c
sandbox.mode=read-only` to change the mode on the command line. With
this change, things are arguably a bit cleaner because the equivalent
option is `-c sandbox_mode=read-only` (and now `-c
sandbox_workspace_write=...` can be set separately).
Though more importantly, we introduce the `-s/--sandbox` option to the
CLI, which maps directly to `sandbox_mode` in `config.toml`, making
config override behavior easier to reason about. Moreover, as you can
see in the updates to the various Markdown files, it is much easier to
explain how to configure sandboxing when things like `--sandbox
read-only` can be used as an example.
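For instance, the new flag and its config-override equivalent from the description above look like:

```bash
# Select a sandbox policy directly from the command line.
codex --sandbox read-only
codex -s workspace-write

# Or via the generic config override, which maps to the same sandbox_mode key.
codex -c sandbox_mode=read-only
```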
Relatedly, this cleanup also made it straightforward to add support for
a `sandbox` option for Codex when used as an MCP server (see the changes
to `mcp-server/src/codex_tool_config.rs`).
Fixes https://github.com/openai/codex/issues/1248.
v0.2.0 of https://www.npmjs.com/package/@openai/codex now runs the Rust
CLI, so it makes sense to bring back the instructions to use `npm i -g
@openai/codex`.
In most places, I list `npm install` before `brew install` because I
believe `npm` is more readily available, though in the more detailed
part of the documentation I note that `brew install` will download
fewer bytes and, in that sense, is preferred.
As promised on https://github.com/openai/codex/discussions/1405, we are
making the first official release of the Rust CLI as v0.2.0. As part of
this move, we are making it available in Homebrew:
https://github.com/Homebrew/homebrew-core/pull/228615
Ultimately, we plan to continue making the CLI available on npm as
well, though brew is a bit nicer in that `brew install` will download
only the binary for your platform, whereas an npm module is expected to
contain the binaries for _all_ supported platforms, so it is a bit more
heavyweight.
A big part of this change is updating the root `README.md` to document
the behavior of the Rust CLI, which differs in a number of ways from the
TypeScript CLI. The existing `README.md` is moved to
`codex-cli/README.md` as part of this PR, as it is still applicable to
that folder.
As this is still early days for the Rust CLI, I encourage folks to
provide feedback on the command line flags and configuration options.
- Use Responses API for Azure provider endpoints
- Added a unit test to catch regression on the change from
`/chat/completions` to `/responses`
- Updated the default AOAI API version from `2025-03-01-preview` to
`2025-04-01-preview` to avoid user-facing 400 errors due to missing
summary support in the March API version.
- Changes have been tested locally on AOAI endpoints
Right now the repo has two different implementations of Codex, so the
flake was updated to work with both the TypeScript implementation and
the Rust implementation.
This PR introduces an optional build flag, `--native`, that will build a
version of the Codex npm module that:
- Includes both the Node.js and native Rust versions (for Mac and Linux)
- Will run the native version if `CODEX_RUST=1` is set
- Runs the TypeScript version otherwise
Note this PR also updates the workflow URL to
https://github.com/openai/codex/actions/runs/14872557396, as that is a
build from today that includes everything up through
https://github.com/openai/codex/pull/843.
Test Plan:
In `~/code/codex/codex-cli`, I ran:
```
pnpm stage-release --native
```
The end of the output was:
```
Staged version 0.1.2505121317 for release in /var/folders/wm/f209bc1n2bd_r0jncn9s6j_00000gp/T/tmp.xd2p5ETYGN
Test Node:
node /var/folders/wm/f209bc1n2bd_r0jncn9s6j_00000gp/T/tmp.xd2p5ETYGN/bin/codex.js --help
Test Rust:
CODEX_RUST=1 node /var/folders/wm/f209bc1n2bd_r0jncn9s6j_00000gp/T/tmp.xd2p5ETYGN/bin/codex.js --help
Next: cd "/var/folders/wm/f209bc1n2bd_r0jncn9s6j_00000gp/T/tmp.xd2p5ETYGN" && npm publish --tag native
```
I verified that running each of these commands ran the expected version
of Codex.
While here, I also added `bin` to the `files` list in `package.json`,
which should have been done as part of
https://github.com/openai/codex/pull/757, as that added new entries to
`bin` that were matched by `.gitignore` but should have been included in
a release.
- Added ArceeAI as a provider - https://conductor.arcee.ai/v1
- Compatible with ArceeAI SLMs (Virtuoso, Maestro)
- Works with ArceeAI's Conductor auto‑router models (auto, auto‑tool),
once #817 is merged
## Summary
This PR introduces support for Azure OpenAI as a provider within the
Codex CLI. Users can now configure the tool to leverage their Azure
OpenAI deployments by specifying `"azure"` as the provider in
`config.json` and setting the corresponding `AZURE_OPENAI_API_KEY` and
`AZURE_OPENAI_API_VERSION` environment variables. This functionality is
added alongside the existing provider options (OpenAI, OpenRouter,
etc.).
Related to #92
**Note:** This PR is currently in **Draft** status because tests on the
`main` branch are failing. It will be marked as ready for review once
the `main` branch is stable and tests are passing.
---
## What’s Changed
- **Configuration (`config.ts`, `providers.ts`, `README.md`):**
  - Added `"azure"` to the supported `providers` list in `providers.ts`,
    specifying its name, default base URL structure, and environment
    variable key (`AZURE_OPENAI_API_KEY`).
  - Defined the `AZURE_OPENAI_API_VERSION` environment variable in
    `config.ts` with a default value (`2025-03-01-preview`).
  - Updated `README.md` to:
    - Include "azure" in the list of providers.
    - Add a configuration section for Azure OpenAI, detailing the required
      environment variables (`AZURE_OPENAI_API_KEY`,
      `AZURE_OPENAI_API_VERSION`) with examples.
- **Client Instantiation (`terminal-chat.tsx`, `singlepass-cli-app.tsx`,
  `agent-loop.ts`, `compact-summary.ts`, `model-utils.ts`):**
  - Modified various components and utility functions where the OpenAI
    client is initialized.
  - Added conditional logic to check if the configured `provider` is
    `"azure"`.
  - If the provider is Azure, the `AzureOpenAI` client from the `openai`
    package is instantiated, using the configured `baseURL`, `apiKey` (from
    `AZURE_OPENAI_API_KEY`), and `apiVersion` (from
    `AZURE_OPENAI_API_VERSION`).
  - Otherwise, the standard `OpenAI` client is instantiated as before.
- **Dependencies:**
  - Relies on the `openai` package's built-in support for `AzureOpenAI`.
    No *new* external dependencies were added specifically for this Azure
    implementation beyond the `openai` package itself.
---
## How to Test
*This has been tested locally and confirmed working with Azure OpenAI.*
1. **Configure `config.json`:**
Ensure your `~/.codex/config.json` (or project-specific config) includes
Azure and sets it as the active provider:
```json
{
"providers": {
// ... other providers
"azure": {
"name": "AzureOpenAI",
"baseURL": "https://YOUR_RESOURCE_NAME.openai.azure.com", // Replace
with your Azure endpoint
"envKey": "AZURE_OPENAI_API_KEY"
}
},
"provider": "azure", // Set Azure as the active provider
"model": "o4-mini" // Use your Azure deployment name here
// ... other config settings
}
```
2. **Set up Environment Variables:**
```bash
# Set the API Key for your Azure OpenAI resource
export AZURE_OPENAI_API_KEY="your-azure-api-key-here"
# Set the API Version (Optional - defaults to `2025-03-01-preview` if not set)
# Ensure this version is supported by your Azure deployment and endpoint
export AZURE_OPENAI_API_VERSION="2025-03-01-preview"
```
3. **Get the Codex CLI by building from this PR branch:**
Clone your fork, checkout this branch (`feat/azure-openai`), navigate to
`codex-cli`, and build:
```bash
# cd /path/to/your/fork/codex
git checkout feat/azure-openai # Or your branch name
cd codex-cli
corepack enable
pnpm install
pnpm build
```
4. **Invoke Codex:**
Run the locally built CLI using `node` from the `codex-cli` directory:
```bash
node ./dist/cli.js "Explain the purpose of this PR"
```
*(Alternatively, if you ran `pnpm link` after building, you can use
`codex "Explain the purpose of this PR"` from anywhere)*.
5. **Verify:** Confirm that the command executes successfully and
interacts with your configured Azure OpenAI deployment.
---
## Tests
- [x] Tested locally against an Azure OpenAI deployment using API Key
authentication. Basic commands and interactions confirmed working.
---
## Checklist
- [x] Added Azure provider details to configuration files
(`providers.ts`, `config.ts`).
- [x] Implemented conditional `AzureOpenAI` client initialization based
on provider setting.
- [x] Ensured `apiVersion` is passed correctly to the Azure client.
- [x] Updated `README.md` with Azure OpenAI setup instructions.
- [x] Manually tested core functionality against a live Azure OpenAI
endpoint.
- [x] Add/update automated tests for the Azure code path (pending `main`
stability).
cc @theabhinavdas @nikodem-wrona @fouad-openai @tibo-openai (adjust as
needed)
---
I have read the CLA Document and I hereby sign the CLA
This introduces `./codex-cli/scripts/stage_release.sh`, which is a shell
script that stages a release for the Node.js module in a temp directory.
It updates the release to include these native binaries:
```
bin/codex-linux-sandbox-arm64
bin/codex-linux-sandbox-x64
```
though this PR does not update Codex CLI to use them yet.
When doing local development, run
`./codex-cli/scripts/install_native_deps.sh` to install these in your
own `bin/` folder.
This PR also updates `README.md` to document the new workflow.
---
I am working to simplify the build process. As a first step, update
`session.ts` so it reads the `version` from `package.json` at runtime so
we no longer have to modify it during the build process. I want to get
to a place where the build looks like:
```
cd codex-cli
pnpm i
pnpm build
RELEASE_DIR=$(mktemp -d)
cp -r bin "$RELEASE_DIR/bin"
cp -r dist "$RELEASE_DIR/dist"
cp -r src "$RELEASE_DIR/src" # important if we want sourcemaps to continue to work
cp ../README.md "$RELEASE_DIR"
VERSION=$(printf '0.1.%d' $(date +%y%m%d%H%M))
jq --arg version "$VERSION" '.version = $version' package.json > "$RELEASE_DIR/package.json"
```
Then the contents of `$RELEASE_DIR` should be good to `npm publish`, no?
close: #651
Hi! @tibo-openai 👋 Could you share some great examples of
`instructions.md` files? Thanks!
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
When using a non-built-in provider with the `--provider` option, users
are prompted:
```
Set the environment variable <provider>_API_KEY and re-run this command.
You can create a <provider>_API_KEY in the <provider> dashboard.
```
However, many users are confused because, even after correctly setting
`<provider>_API_KEY`, authentication may still fail unless
`OPENAI_API_KEY` is _also_ present in the environment. This is not
intuitive and leads to ambiguity about which API key is actually
required and used as a fallback, especially when using custom or
third-party (non-listed) providers.
Furthermore, the original README/documentation did not mention the
requirement to set `<provider>_BASE_URL` for non-built-in providers,
which is necessary for proper client behavior. This omission made the
configuration process more difficult for users trying to integrate with
custom endpoints.
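In practice, for a non-built-in provider this means exporting something like the following; `MYPROVIDER` is a placeholder for whatever provider name you pass, and the `OPENAI_API_KEY` line reflects the fallback behavior described above:

```bash
# Placeholder provider name - match it to the value you pass via --provider.
export MYPROVIDER_API_KEY="your-provider-key"
export MYPROVIDER_BASE_URL="https://api.example.com/v1"

# Current behavior: OPENAI_API_KEY may still need to be present, even though
# it is only used as a fallback.
export OPENAI_API_KEY="placeholder"
```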
This introduces a Python script (written by Codex!) to verify that the
table of contents in the root `README.md` matches the headings. Like
`scripts/asciicheck.py` in https://github.com/openai/codex/pull/513, it
reports differences by default (and exits non-zero if there are any) and
also has a `--fix` option to synchronize the ToC with the headings.
This will be enforced by CI and the changes to `README.md` in this PR
were generated by the script, so you can see that our ToC was missing
some entries prior to this PR.
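Usage would look roughly like the following; the script's filename is not given in this note, so the name below is a hypothetical placeholder in the style of `scripts/asciicheck.py`:

```bash
# Hypothetical filename - the PR adds a ToC-checking script under scripts/.
# Report mismatches between README.md headings and its table of contents:
python3 scripts/readme_toc_check.py

# Rewrite the ToC in place to match the headings:
python3 scripts/readme_toc_check.py --fix
```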