Reasoning effort was already available, but it was not surfaced in the help
text, so it was not discoverable.
Other issues were discovered, but they will be fixed in separate PRs since
they are larger:
* #816: `reasoningEffort` isn't displayed in the terminal header, making it
rather hard to see the state of the configuration
* I don't think the config file setting works, as the CLI option always
"wins" and overwrites it
Fix: retry on server_error responses that lack an HTTP status code
### What happened
1. An OpenAI endpoint returned a **5xx** (transient server-side failure).
2. The SDK surfaced it as an `APIError` with

       { "type": "server_error", "message": "...", "status": undefined }

   (The SDK does not always populate `status` for these cases.)
3. Our retry logic in `src/utils/agent/agent-loop.ts` computed

       isServerError = typeof status === "number" && status >= 500;

   Because `status` was *undefined*, the error was **not** recognised as
   retriable, the exception bubbled out, and the CLI crashed with a stack
   trace similar to:

       Error: An error occurred while processing the request.
           at .../cli.js:474:1514
### Root cause
The transient-error detector ignored the semantic flag
`type === "server_error"` that the SDK provides when the numeric status is
missing.
### Fix (1 LOC + comment)
Extend the check:

    const status = errCtx?.status ?? errCtx?.httpStatus ?? errCtx?.statusCode;
    const isServerError =
      (typeof status === "number" && status >= 500) || // classic 5xx
      errCtx?.type === "server_error"; // <-- NEW
Now the agent:
* Retries up to **5** times (existing logic) when the backend reports a
transient failure, even if `status` is absent.
* If all retries fail, surfaces the existing friendly system message
instead of an uncaught exception.
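For reference, here is a minimal sketch of that retry decision. The loop,
type, and constant names below are simplified stand-ins for the existing
logic in `agent-loop.ts`, not a copy of it:

```ts
// Simplified stand-in for the existing retry behaviour; the real loop in
// agent-loop.ts also handles backoff, rate limits, and other error classes.
const MAX_RETRIES = 5;

type ErrCtx = {
  status?: number;
  httpStatus?: number;
  statusCode?: number;
  type?: string;
};

function isRetriableServerError(errCtx: ErrCtx): boolean {
  const status = errCtx?.status ?? errCtx?.httpStatus ?? errCtx?.statusCode;
  return (
    (typeof status === "number" && status >= 500) || // classic 5xx
    errCtx?.type === "server_error" // SDK error without a numeric status
  );
}

async function withRetries<T>(fn: () => Promise<T>): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= MAX_RETRIES || !isRetriableServerError(err as ErrCtx)) {
        throw err; // surfaced upstream as the friendly system message
      }
    }
  }
}
```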
### Tests & validation
- `pnpm test`: all suites green (17 agent-level tests now include this path)
- `pnpm run lint`: 0 errors / warnings
- `pnpm run typecheck`

A new unit-test file isn't required; the behaviour is already covered by
`tests/agent-server-retry.test.ts`, which stubs `type: "server_error"` and
now passes with the updated logic.
### Impact
* No API-surface changes.
* Prevents CLI crashes on intermittent OpenAI outages.
* Adds robust handling for other providers that may return the same error
shape.
## Summary
This PR introduces support for Azure OpenAI as a provider within the
Codex CLI. Users can now configure the tool to leverage their Azure
OpenAI deployments by specifying `"azure"` as the provider in
`config.json` and setting the corresponding `AZURE_OPENAI_API_KEY` and
`AZURE_OPENAI_API_VERSION` environment variables. This functionality is
added alongside the existing provider options (OpenAI, OpenRouter,
etc.).
Related to #92
**Note:** This PR is currently in **Draft** status because tests on the
`main` branch are failing. It will be marked as ready for review once
the `main` branch is stable and tests are passing.
---
## What’s Changed
- **Configuration (`config.ts`, `providers.ts`, `README.md`):**
  - Added `"azure"` to the supported `providers` list in `providers.ts`,
    specifying its name, default base URL structure, and environment
    variable key (`AZURE_OPENAI_API_KEY`).
  - Defined the `AZURE_OPENAI_API_VERSION` environment variable in
    `config.ts` with a default value (`2025-03-01-preview`).
  - Updated `README.md` to:
    - Include "azure" in the list of providers.
    - Add a configuration section for Azure OpenAI, detailing the required
      environment variables (`AZURE_OPENAI_API_KEY`,
      `AZURE_OPENAI_API_VERSION`) with examples.
- **Client Instantiation (`terminal-chat.tsx`, `singlepass-cli-app.tsx`,
  `agent-loop.ts`, `compact-summary.ts`, `model-utils.ts`):**
  - Modified the components and utility functions where the OpenAI client is
    initialized (see the sketch after this list).
  - Added conditional logic to check whether the configured `provider` is
    `"azure"`.
  - If the provider is Azure, the `AzureOpenAI` client from the `openai`
    package is instantiated, using the configured `baseURL`, `apiKey` (from
    `AZURE_OPENAI_API_KEY`), and `apiVersion` (from
    `AZURE_OPENAI_API_VERSION`).
  - Otherwise, the standard `OpenAI` client is instantiated as before.
- **Dependencies:**
  - Relies on the `openai` package's built-in support for `AzureOpenAI`.
    No *new* external dependencies were added specifically for this Azure
    implementation beyond the `openai` package itself.
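For illustration, a condensed sketch of the conditional instantiation
described above. The `createClient` helper is illustrative only; the actual
PR modifies each call site directly:

```ts
import OpenAI, { AzureOpenAI } from "openai";

// Illustrative helper; the real changes construct the client inline.
function createClient(provider: string, baseURL: string, apiKey: string) {
  if (provider === "azure") {
    return new AzureOpenAI({
      apiKey,
      baseURL,
      apiVersion:
        process.env["AZURE_OPENAI_API_VERSION"] ?? "2025-03-01-preview",
    });
  }
  return new OpenAI({ apiKey, baseURL });
}
```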
---
## How to Test
*This has been tested locally and confirmed working with Azure OpenAI.*
1. **Configure `config.json`:**
Ensure your `~/.codex/config.json` (or project-specific config) includes
Azure and sets it as the active provider:
```json
{
  "providers": {
    // ... other providers
    "azure": {
      "name": "AzureOpenAI",
      "baseURL": "https://YOUR_RESOURCE_NAME.openai.azure.com", // Replace with your Azure endpoint
      "envKey": "AZURE_OPENAI_API_KEY"
    }
  },
  "provider": "azure", // Set Azure as the active provider
  "model": "o4-mini" // Use your Azure deployment name here
  // ... other config settings
}
```
2. **Set up Environment Variables:**
```bash
# Set the API Key for your Azure OpenAI resource
export AZURE_OPENAI_API_KEY="your-azure-api-key-here"
# Set the API Version (Optional - defaults to `2025-03-01-preview` if not set)
# Ensure this version is supported by your Azure deployment and endpoint
export AZURE_OPENAI_API_VERSION="2025-03-01-preview"
```
3. **Get the Codex CLI by building from this PR branch:**
Clone your fork, checkout this branch (`feat/azure-openai`), navigate to
`codex-cli`, and build:
```bash
# cd /path/to/your/fork/codex
git checkout feat/azure-openai # Or your branch name
cd codex-cli
corepack enable
pnpm install
pnpm build
```
4. **Invoke Codex:**
Run the locally built CLI using `node` from the `codex-cli` directory:
```bash
node ./dist/cli.js "Explain the purpose of this PR"
```
*(Alternatively, if you ran `pnpm link` after building, you can use
`codex "Explain the purpose of this PR"` from anywhere)*.
5. **Verify:** Confirm that the command executes successfully and
interacts with your configured Azure OpenAI deployment.
---
## Tests
- [x] Tested locally against an Azure OpenAI deployment using API Key
authentication. Basic commands and interactions confirmed working.
---
## Checklist
- [x] Added Azure provider details to configuration files
(`providers.ts`, `config.ts`).
- [x] Implemented conditional `AzureOpenAI` client initialization based
on provider setting.
- [x] Ensured `apiVersion` is passed correctly to the Azure client.
- [x] Updated `README.md` with Azure OpenAI setup instructions.
- [x] Manually tested core functionality against a live Azure OpenAI
endpoint.
- [x] Add/update automated tests for the Azure code path (pending `main`
stability).
cc @theabhinavdas @nikodem-wrona @fouad-openai @tibo-openai (adjust as
needed)
---
I have read the CLA Document and I hereby sign the CLA
This Pull Request addresses an issue where the output of commands executed
in the `raw-exec` utility was being truncated due to restrictive limits on
the number of lines and bytes collected. The truncation caused the message
`[Output truncated: too many lines or bytes]` to appear when processing
large outputs, which could hinder the functionality of the CLI.
## Changes Made
- Increased the maximum output limits in the
  [createTruncatingCollector](https://github.com/openai/codex/pull/575)
  utility (see the sketch after this list):
  - Bytes: increased from 10 KB to 100 KB.
  - Lines: increased from 256 lines to 1024 lines.
- Installed the `@types/node` package to resolve missing type definitions
  for [NodeJS](https://github.com/openai/codex/pull/575) and
  [Buffer](https://github.com/openai/codex/pull/575).
- Verified and fixed any related errors in the
  [createTruncatingCollector](https://github.com/openai/codex/pull/575)
  implementation.
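As a rough illustration of what such a collector does with the new limits
(the shape below is an assumption, not the actual `createTruncatingCollector`
source):

```ts
// Assumed shape for illustration; the real implementation differs in details.
const MAX_BYTES = 100 * 1024; // raised from 10 KB
const MAX_LINES = 1024; // raised from 256

function createTruncatingCollector(maxBytes = MAX_BYTES, maxLines = MAX_LINES) {
  let text = "";
  let bytes = 0;
  let lines = 0;
  let truncated = false;
  return {
    append(chunk: Buffer): void {
      if (truncated) {
        return;
      }
      bytes += chunk.byteLength;
      lines += chunk.toString("utf8").split("\n").length - 1;
      if (bytes > maxBytes || lines > maxLines) {
        truncated = true;
        return;
      }
      text += chunk.toString("utf8");
    },
    output(): string {
      return truncated
        ? `${text}\n[Output truncated: too many lines or bytes]`
        : text;
    },
  };
}
```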
## Issue Solved
This PR ensures that larger outputs can be processed without truncation,
improving the usability of the CLI for commands that generate extensive
output. https://github.com/openai/codex/issues/509
---------
Co-authored-by: Michael Bolin <bolinfest@gmail.com>
Building on top of https://github.com/openai/codex/pull/757, this PR
updates Codex to use the Landlock executor binary for sandboxing in the
Node.js CLI. Note that Codex has to be invoked with either `--full-auto`
or `--auto-edit` to activate sandboxing. (Using `--suggest` or
`--dangerously-auto-approve-everything` ensures the sandboxing codepath
will not be exercised.)
When I tested this on a Linux host (specifically, `Ubuntu 24.04.1 LTS`),
things worked as expected: I ran Codex CLI with `--full-auto` and then
asked it to do `echo 'hello mbolin' into hello_world.txt` and it
succeeded without prompting me.
However, in my testing, I discovered that the sandboxing did *not* work
when using `--full-auto` in a Linux Docker container from a macOS host.
I updated the code to throw a detailed error message when this happens.
This introduces `./codex-cli/scripts/stage_release.sh`, which is a shell
script that stages a release for the Node.js module in a temp directory.
It updates the release to include these native binaries:
```
bin/codex-linux-sandbox-arm64
bin/codex-linux-sandbox-x64
```
though this PR does not update Codex CLI to use them yet.
When doing local development, run
`./codex-cli/scripts/install_native_deps.sh` to install these in your
own `bin/` folder.
This PR also updates `README.md` to document the new workflow.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/757).
* #763
* __->__ #757
## `0.1.2504301751`
### 🚀 Features
- User config api key (#569)
- `@mention` files in codex (#701)
- Add `--reasoning` CLI flag (#314)
- Lower default retry wait time and increase number of tries (#720)
- Add common package registries domains to allowed-domains list (#414)
### 🪲 Bug Fixes
- Insufficient quota message (#758)
- Input keyboard shortcut opt+delete (#685)
- `/diff` should include untracked files (#686)
- Only allow running without sandbox if explicitly marked in safe
container (#699)
- Tighten up check for /usr/bin/sandbox-exec (#710)
- Check if sandbox-exec is available (#696)
- Duplicate messages in quiet mode (#680)
Solves #700
## State of the World Before
Prior to this PR, when users wanted to share file contents with Codex,
they had two options:
- Manually copy and paste file contents into the chat
- Wait for the assistant to use the shell tool to view the file
The second approach required the assistant to:
1. Recognize the need to view a file
2. Execute a shell tool call
3. Wait for the tool call to complete
4. Process the file contents
This consumed extra tokens and reduced user control over which files
were shared with the model.
## State of the World After
With this PR, users can now:
- Reference files directly in their chat input using the `@path` syntax
- Have file contents automatically expanded into XML blocks before being
sent to the LLM
For example, users can type `@src/utils/config.js` in their message, and
the file contents will be included in context. Within the terminal chat
history, these file blocks will be collapsed back to `@path` format in
the UI for clean presentation.
Tag File suggestions:
<img width="857" alt="file-suggestions"
src="https://github.com/user-attachments/assets/397669dc-ad83-492d-b5f0-164fab2ff4ba"
/>
Tagging files in action:
<img width="858" alt="tagging-files"
src="https://github.com/user-attachments/assets/0de9d559-7b7f-4916-aeff-87ae9b16550a"
/>
Demo video of file tagging:
[Watch the demo on YouTube](https://www.youtube.com/watch?v=vL4LqtBnqt8)
## Implementation Details
This PR consists of 2 main components:
1. **File Tag Utilities**:
- New `file-tag-utils.ts` utility module that handles both expansion and
collapsing of file tags
- `expandFileTags()` identifies `@path` tokens and replaces them with
XML blocks containing file contents
- `collapseXmlBlocks()` reverses the process, converting XML blocks back
to `@path` format for UI display
- Tokens are only expanded if they point to valid files (directories are
ignored)
- Expansion happens just before sending input to the model
2. **Terminal Chat Integration**:
- Leveraged the existing file system completion system for tabbing to
support the `@path` syntax
- Added `updateFsSuggestions` helper to manage filesystem suggestions
- Added `replaceFileSystemSuggestion` to replace input with filesystem
suggestions
- Applied `collapseXmlBlocks` in the chat response rendering so that
tagged files are shown as simple `@path` tags
The PR also includes test coverage for both the UI and the file tag
utilities.
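To make the expansion step concrete, here is a rough sketch of what
`expandFileTags()` does. The XML wrapper and path handling are assumptions;
the real implementation in `file-tag-utils.ts` may differ:

```ts
import fs from "fs/promises";

// Rough sketch only; the real expandFileTags() handles more edge cases.
async function expandFileTags(input: string): Promise<string> {
  const tokens = input.split(/(\s+)/); // keep whitespace so we can re-join
  const expanded = await Promise.all(
    tokens.map(async (token) => {
      if (!token.startsWith("@")) {
        return token;
      }
      const filePath = token.slice(1);
      try {
        const stat = await fs.stat(filePath);
        if (!stat.isFile()) {
          return token; // directories are ignored
        }
        const contents = await fs.readFile(filePath, "utf8");
        return `<file path="${filePath}">\n${contents}\n</file>`;
      } catch {
        return token; // not a valid file, leave the token as typed
      }
    }),
  );
  return expanded.join("");
}
```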
## Next Steps
Some ideas I'd like to implement if this feature gets merged:
- Line selection: `@path[50:80]` to grab specific sections of files
- Method selection: `@path#methodName` to grab just one function/class
- Visual improvements: highlight file tags in the UI to make them more
noticeable
This pull request includes a change to improve the error message
displayed when there is insufficient quota in the `AgentLoop` class. The
updated message provides more detailed information and a link for
managing or purchasing credits.
Error message improvement:
*
[`codex-cli/src/utils/agent/agent-loop.ts`](diffhunk://#diff-b15957eac2720c3f1f55aa32f172cdd0ac6969caf4e7be87983df747a9f97083L1140-R1140):
Updated the error message in the `AgentLoop` class to include the
specific error message (if available) and a link to manage or purchase
credits.
Fixes #751
I suspect this was done originally so that `execForSandbox()` had a
consistent signature for both the `SandboxType.NONE` and
`SandboxType.MACOS_SEATBELT` cases, but that is not really necessary and
turns out to make the upcoming Landlock support a bit more complicated
to implement, so I had Codex remove it and clean up the call sites.
I am working to simplify the build process. As a first step, update
`session.ts` so it reads the `version` from `package.json` at runtime so
we no longer have to modify it during the build process. I want to get
to a place where the build looks like:
```
cd codex-cli
pnpm i
pnpm build
RELEASE_DIR=$(mktemp -d)
cp -r bin "$RELEASE_DIR/bin"
cp -r dist "$RELEASE_DIR/dist"
cp -r src "$RELEASE_DIR/src" # important if we want sourcemaps to continue to work
cp ../README.md "$RELEASE_DIR"
VERSION=$(printf '0.1.%d' $(date +%y%m%d%H%M))
jq --arg version "$VERSION" '.version = $version' package.json > "$RELEASE_DIR/package.json"
```
Then the contents of `$RELEASE_DIR` should be good to `npm publish`, no?
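Back to the `session.ts` change: a minimal sketch of reading the version at
runtime, assuming `package.json` sits one directory above the compiled file
(the real `session.ts` may resolve the path differently):

```ts
import { readFileSync } from "fs";
import path from "path";
import { fileURLToPath } from "url";

// Assumption: package.json lives one level above this module after the build.
const pkgPath = path.join(
  path.dirname(fileURLToPath(import.meta.url)),
  "..",
  "package.json",
);

export const CLI_VERSION: string = JSON.parse(
  readFileSync(pkgPath, "utf8"),
).version;
```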
The `saveConfig()` function only includes a hardcoded subset of properties
when writing the config file. Any property not explicitly listed (like
`disableResponseStorage`) will be dropped.
I have added `disableResponseStorage` to the `configToSave` object as
the immediate fix.
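Sketch of the shape of the fix (field names besides `disableResponseStorage`
are placeholders; the real `configToSave` in `config.ts` lists more
properties):

```ts
// Placeholder shape for illustration only.
function buildConfigToSave(config: {
  model: string;
  provider?: string;
  disableResponseStorage?: boolean;
}) {
  return {
    model: config.model,
    provider: config.provider,
    // Previously omitted, so it was silently dropped on every save.
    disableResponseStorage: config.disableResponseStorage,
  };
}
```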
[Linking Issue this fixes.](https://github.com/openai/codex/issues/726)
This PR adds a new CLI flag: `--reasoning`, which allows users to
customize the reasoning effort level (`low`, `medium`, or `high`) used
by OpenAI's `o` models.
By introducing the `--reasoning` flag, users gain more flexibility when
working with the models. It enables optimization for either speed or
depth of reasoning, depending on specific use cases.
This PR resolves #107
- **Flag**: `--reasoning`
- **Accepted Values**: `low`, `medium`, `high`
- **Default Behavior**: If not specified, the model uses the default
reasoning level.
## Example Usage
```bash
codex --reasoning=low "Write a simple function to calculate factorial"
```
---------
Co-authored-by: Fouad Matin <169186268+fouad-openai@users.noreply.github.com>
Co-authored-by: yashrwealthy <yash.rastogi@wealthy.in>
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
* In both TypeScript and Rust, we now invoke `/usr/bin/sandbox-exec`
explicitly rather than whatever `sandbox-exec` happens to be on the `PATH`.
* Changed `isSandboxExecAvailable` to use `access()` rather than
`command -v` (see the sketch below) so that:
  * We only do the check once over the lifetime of the Codex process.
  * The check is specific to `/usr/bin/sandbox-exec`.
  * We now do a syscall rather than incur the overhead of spawning a
    process, dealing with timeouts, etc.
I think there is still room for improvement here where we should move
the `isSandboxExecAvailable` check earlier in the CLI, ideally right
after we do arg parsing to verify that we can provide the Seatbelt
sandbox if that is what the user has requested.
- Introduce `isSandboxExecAvailable()` helper and tidy import ordering
in `handle-exec-command.ts`.
- Add runtime check for the `sandbox-exec` binary on macOS; fall back to
`SandboxType.NONE` with a warning if it’s missing, preventing crashes.
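A minimal sketch of the `access()`-based check described above (caching and
error handling in the real `isSandboxExecAvailable` helper may differ):

```ts
import { access, constants } from "fs/promises";

const SANDBOX_EXEC_PATH = "/usr/bin/sandbox-exec";

// Computed once per process; every caller awaits the same promise.
const sandboxExecAvailable: Promise<boolean> = access(
  SANDBOX_EXEC_PATH,
  constants.X_OK,
).then(
  () => true,
  () => false,
);

export async function isSandboxExecAvailable(): Promise<boolean> {
  return sandboxExecAvailable;
}
```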
---------
Signed-off-by: Thibault Sottiaux <tibo@openai.com>
Co-authored-by: Fouad Matin <fouad@openai.com>
Adds support for reading OPENAI_API_KEY (and other variables) from a
user‑wide dotenv file (~/.codex.config). Precedence order is now:
1. explicit environment variable
2. project‑local .env (loaded earlier)
3. ~/.codex.config
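That precedence falls out of how dotenv behaves. A minimal sketch, assuming
the standard `dotenv` package (which never overrides variables that are
already set):

```ts
import { config as loadDotenv } from "dotenv";
import os from "os";
import path from "path";

// 1. Explicit environment variables always win (dotenv never overrides them).
// 2. The project-local .env is loaded first, so it wins over the user file.
loadDotenv();
// 3. ~/.codex.config fills in anything still missing.
loadDotenv({ path: path.join(os.homedir(), ".codex.config") });
```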
Also adds a regression test that ensures the multiline editor correctly
handles cases where printable text and the CSI‑u Shift+Enter sequence
arrive in the same input chunk.
Housekeeping: ran Prettier and removed a stray `temp.json` artifact.
Addressing #600 and #664 (partially)
## Bug
Codex was staging duplicate items in the output when the same response item
appeared in both streaming events. Specifically:
1. Items would be staged once when received as a
`response.output_item.done` event
2. The same items would be staged again when included in the final
`response.completed` payload
This duplication would result in each message being sent several times
in the quiet mode output.
## Changes
- Added a Set (`alreadyStagedItemIds`) to track items that have already
been staged
- Modified the `stageItem` function to check if an item's ID is already
in this set before staging it
- Added a regression test (`agent-dedupe-items.test.ts`) that verifies
items with the same ID are only staged once
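A simplified sketch of the guard described in the list above (the real
`stageItem` in the agent loop does more than shown here):

```ts
// Simplified illustration of the dedupe check.
const alreadyStagedItemIds = new Set<string>();

function stageItem(item: { id?: string }): void {
  if (item.id != null) {
    if (alreadyStagedItemIds.has(item.id)) {
      return; // already staged from an earlier stream event
    }
    alreadyStagedItemIds.add(item.id);
  }
  // ...existing staging logic (emit to onItem, etc.)...
}
```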
## Testing
Like other tests, the included test creates a mock OpenAI stream that
emits the same message twice (once as an incremental event and once in
the final response) and verifies the item is only passed to `onItem`
once.
## `0.1.2504251709`
### 🚀 Features
- Add openai model info configuration (#551)
- Added provider to run quiet mode function (#571)
- Create parent directories when creating new files (#552)
- Print bug report URL in terminal instead of opening browser (#510)
(#528)
- Add support for custom provider configuration in the user config
(#537)
- Add support for OpenAI-Organization and OpenAI-Project headers (#626)
- Add specific instructions for creating API keys in error msg (#581)
- Enhance toCodePoints to prevent potential unicode 14 errors (#615)
- More native keyboard navigation in multiline editor (#655)
- Display error on selection of invalid model (#594)
### 🪲 Bug Fixes
- Model selection (#643)
- Nits in apply patch (#640)
- Input keyboard shortcuts (#676)
- `apply_patch` unicode characters (#625)
- Don't clear turn input before retries (#611)
- More loosely match context for apply_patch (#610)
- Update bug report template - there is no --revision flag (#614)
- Remove outdated copy of text input and external editor feature (#670)
- Remove unreachable "disableResponseStorage" logic flow introduced in
#543 (#573)
- Non-openai mode - fix for gemini content: null, fix 429 to throw
before stream (#563)
- Only allow going up in history when not already in history if input is
empty (#654)
- Do not grant "node" user sudo access when using run_in_container.sh
(#627)
- Update scripts/build_container.sh to use pnpm instead of npm (#631)
- Update lint-staged config to use pnpm --filter (#582)
- Non-openai mode - don't default temp and top_p (#572)
- Fix error catching when checking for updates (#597)
- Close stdin when running an exec tool call (#636)
- Replace setTimeout(10ms) with queueMicrotask for immediate processing
- Add minimal 3ms setTimeout for rendering to maintain readable UX
- Reduces per-token delay while preserving streaming experience
- Add performance test to verify optimization works correctly
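A small sketch of the scheduling change (handler names are hypothetical
stand-ins for the real streaming code):

```ts
// Hypothetical handlers standing in for the real streaming/render callbacks.
const processToken = () => {
  /* handle one streamed token */
};
const renderFrame = () => {
  /* flush pending output to the terminal */
};

// Before: every step paid a ~10 ms timer delay.
// setTimeout(processToken, 10);

// After: tokens are handled on the microtask queue with no timer delay,
// while rendering keeps a minimal 3 ms timeout so the output stays readable.
queueMicrotask(processToken);
setTimeout(renderFrame, 3);
```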
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
+ Cleaned up the below-input help to read `ctrl+c to exit | "/" to see
commands | enter to send` now that we have command autocompletion
+ Minor other drive-by code cleanups
---------
Signed-off-by: Thibault Sottiaux <tibo@openai.com>
fix: pass correct selected model in ModelOverlay
The ModelOverlay component was incorrectly passing the current model
instead of the newly selected model to its onSelect callback. This
prevented model changes from being applied properly.
The fix ensures that when a user selects a new model, the parent
component receives the correct newly selected model value, allowing
model changes to work as intended.
## Description
This PR addresses the following improvements:
**Unify Prettier Version**: Currently, the Prettier version used in
`/package.json` and `/codex-cli/package.json` are different. In this PR,
we're updating both to use Prettier v3.
- Prettier v3 introduces improved support for JavaScript and TypeScript
(e.g., the formatting scenario shown in the image below, which is more
aligned with the TypeScript indentation standard).
<img width="1126" alt="image"
src="https://github.com/user-attachments/assets/6e237eb8-4553-4574-b336-ed9561c55370"
/>
**Add Prettier Auto-Formatting in lint-staged**: We've added a step to
automatically run `prettier --write` on JavaScript and TypeScript files as
part of the lint-staged process, before the ESLint checks.
- This will help ensure that all committed code is properly formatted
according to the project's Prettier configuration.
## Description
When `saveConfig` is called, the project doc is incorrectly saved into the
user instructions. This change ensures that only user instructions are
saved to `instructions.md` during `saveConfig`, preventing data corruption.
close: #576
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Solves #510
This PR changes the `/bug` command to print the URL into the terminal
(so it works in headless sessions) instead of trying to open a browser.
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Up-to-date version of #78. Fixes #32.
Addressed the requested changes @tibo-openai :) they made sense to me.
The previous rationale for passing the state up, though, was the assumption
that there could be a future need for shared state, with all available
models available to the parent.
I suspect this is why some contributors kept accidentally including a
new `codex-cli/package-lock.json` in their PRs.
Note the `Dockerfile` still uses `npm` instead of `pnpm`, but that
appears to be fine. (Probably nicer to globally install as few things as
possible in the image.)
This exploration came out of my review of
https://github.com/openai/codex/pull/414.
`run_in_container.sh` runs Codex in a Docker container like so:
bd1c3deed9/codex-cli/scripts/run_in_container.sh (L51-L58)
But then runs `init_firewall.sh` to set up the firewall to restrict
network access.
Previously, we did this by adding `/usr/local/bin/init_firewall.sh` to
the container and adding a special rule in `/etc/sudoers.d` so the
unprivileged user (`node`) could run the privileged `init_firewall.sh`
script to open up the firewall for `api.openai.com`:
31d0d7a305/codex-cli/Dockerfile (L51-L56)
Though I believe this is unnecessary, as we can use `docker exec --user
root` from _outside_ the container to run
`/usr/local/bin/init_firewall.sh` as `root` without adding a special
case in `/etc/sudoers.d`.
This appears to work as expected, as I tested it by doing the following:
```
./codex-cli/scripts/build_container.sh
./codex-cli/scripts/run_in_container.sh 'what is the output of `curl https://www.openai.com`'
```
This was a bit funny because in some of my runs, Codex wasn't convinced
it had network access, so I had to convince it to try the `curl` request.
When it ran `curl -s https\://www.openai.com`, it got a connection failure,
so the network policy appears to be working as intended.
Note this PR also removes `sudo` from the `apt-get install` list in the
`Dockerfile`.
## Description
The `as AppConfig` type assertion in the constructor may introduce
potential type safety risks. Removing the assertion and making `notify`
an optional parameter could enhance type robustness and prevent
unexpected runtime errors.
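A sketch of the direction suggested (field names are assumptions about
`AppConfig`, not the real interface):

```ts
// Hypothetical fields for illustration only.
interface AppConfig {
  model: string;
  instructions: string;
  notify?: boolean; // optional instead of forcing `as AppConfig` at the call site
}

// Construct the object so it satisfies AppConfig structurally, no assertion needed.
function toAppConfig(
  model: string,
  instructions: string,
  notify?: boolean,
): AppConfig {
  return { model, instructions, ...(notify !== undefined ? { notify } : {}) };
}
```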
close: #605
When using a non-built-in provider with the `--provider` option, users
are prompted:
```
Set the environment variable <provider>_API_KEY and re-run this command.
You can create a <provider>_API_KEY in the <provider> dashboard.
```
However, many users are confused because, even after correctly setting
`<provider>_API_KEY`, authentication may still fail unless
`OPENAI_API_KEY` is _also_ present in the environment. This is not
intuitive and leads to ambiguity about which API key is actually
required and used as a fallback, especially when using custom or
third-party (non-listed) providers.
Furthermore, the original README/documentation did not mention the
requirement to set `<provider>_BASE_URL` for non-built-in providers,
which is necessary for proper client behavior. This omission made the
configuration process more difficult for users trying to integrate with
custom endpoints.
More of a proposal than anything, but models seem to struggle to compose
valid patches for `apply_patch` when context matching involves Unicode
look-alikes. This would normalize them.
```
top-level # ASCII
top‑level # U+2011 NON-BREAKING HYPHEN
top–level # U+2013 EN DASH
top—level # U+2014 EM DASH
top‒level # U+2012 FIGURE DASH
```
thanks unicode.
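A minimal sketch of the kind of normalization being proposed (the exact set
of code points covered by `apply_patch` may be broader):

```ts
// Map common hyphen/dash look-alikes back to a plain ASCII hyphen before
// context matching. Code points per the example above.
const HYPHEN_LOOKALIKES = /[\u2011\u2012\u2013\u2014]/g;

function normalizeHyphens(line: string): string {
  return line.replace(HYPHEN_LOOKALIKES, "-");
}

// normalizeHyphens("top–level") === "top-level"
```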
Updates the error message for missing Gemini API keys to reference
"Google AI Studio" instead of the generic "GEMINI dashboard". This
provides users with more accurate information about where to obtain
their Gemini API keys.
This could be extended to other providers as well.
The current turn input in the agent loop is being discarded before the
stream events are consumed, which causes the stream reconnect (after a
rate-limit failure) to not include the inputs. Since the new stream includes
the previous response ID, this triggers a bad-request exception, because the
input doesn't match what OpenAI has stored on the server side, and
subsequently a very confusing error message: `No tool output
found for function call call_xyz`.
This should fix https://github.com/openai/codex/issues/586.
## Testing
I have a personal project that I'm working on that runs multiple Codex
CLIs in parallel and often runs into rate limit errors (as seen in the
OpenAI logs). After making this change, I am no longer experiencing
Codex crashing and it was able to retry and handle everything gracefully
until completion (even though I still see rate limiting in the OpenAI
logs).
This fixes https://github.com/openai/codex/issues/480, where the latest
code was crashing when run inside Docker: the update checker attempts to
reach out to `npm.antfu.dev`, but that domain is not allowed by the
firewall rules.
I believe the original code was attempting to catch and ignore any
errors when checking for updates but was doing so incorrectly. If you
use `await` on a promise, you have to use a standard try/catch instead of
`Promise.catch`, so this fixes that.
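The adopted pattern, sketched with a hypothetical stand-in for the real
update check (the actual call site lives in the CLI entry point):

```ts
// Hypothetical stand-in for the real update check, which fetches from the
// registry and may fail when the container firewall blocks the domain.
async function checkForUpdates(): Promise<void> {
  throw new TypeError("fetch failed");
}

async function safeCheckForUpdates(): Promise<void> {
  try {
    await checkForUpdates();
  } catch {
    // Offline or firewalled: skip the update check instead of crashing.
  }
}
```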
## Testing
### Before
```
$ scripts/run_in_container.sh "explain this project to me"
7d1aa845edf9a36fe4d5b331474b5cb8ba79537b682922b554ea677f14996c6b
Resolving api.openai.com...
Adding 162.159.140.245 for api.openai.com
Adding 172.66.0.243 for api.openai.com
Host network detected as: 172.17.0.0/24
Firewall configuration complete
Verifying firewall rules...
Firewall verification passed - unable to reach https://example.com as expected
Firewall verification passed - able to reach https://api.openai.com as expected
TypeError: fetch failed
at node:internal/deps/undici/undici:13510:13
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async getLatestVersionBatch (file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:132669:17)
at async getLatestVersion (file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:132674:19)
at async getUpdateCheckInfo (file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:132748:20)
at async checkForUpdates (file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:132772:23)
at async file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:142027:1 {
[cause]: AggregateError [ECONNREFUSED]:
at internalConnectMultiple (node:net:1122:18)
at afterConnectMultiple (node:net:1689:7) {
code: 'ECONNREFUSED',
[errors]: [ [Error], [Error] ]
}
}
```
### After
```
$ scripts/run_in_container.sh "explain this project to me"
91aa716e3d3f86c9cf6013dd567be31b2c44eb5d7ab184d55ef498731020bb8d
Resolving api.openai.com...
Adding 162.159.140.245 for api.openai.com
Adding 172.66.0.243 for api.openai.com
Host network detected as: 172.17.0.0/24
Firewall configuration complete
Verifying firewall rules...
Firewall verification passed - unable to reach https://example.com as expected
Firewall verification passed - able to reach https://api.openai.com as expected
╭──────────────────────────────────────────────────────────────╮
│ ● OpenAI Codex (research preview) v0.1.2504221401 │
╰──────────────────────────────────────────────────────────────╯
╭──────────────────────────────────────────────────────────────╮
│ localhost session: 7c782f196ae04503866e39f071e26a69 │
│ ↳ model: o4-mini │
│ ↳ provider: openai │
│ ↳ approval: full-auto │
╰──────────────────────────────────────────────────────────────╯
user
explain this project to me
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│( ● ) 2s Thinking │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
send q or ctrl+c to exit | send "/clear" to reset | send "/help" for commands | press enter to send | shift+enter for new line — 100% context left
```
### What
- Add support for loading and merging custom provider configurations
from a local `providers.json` file.
- Allow users to override or extend default providers with their own
settings.
### Why
This change enables users to flexibly customize and extend provider
endpoints and API keys without modifying the codebase, making the CLI
more adaptable for various LLM backends and enterprise use cases.
### How
- Introduced `loadProvidersFromFile` and `getMergedProviders` in the config
logic (see the sketch after this list).
- Added/updated related tests in `tests/config.test.tsx`.
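A rough sketch of the merge, assuming a `providers.json` keyed by provider
id (the real `loadProvidersFromFile`/`getMergedProviders` may validate
entries and resolve paths differently):

```ts
import fs from "fs";
import path from "path";

type ProviderInfo = { name: string; baseURL: string; envKey: string };

function loadProvidersFromFile(file: string): Record<string, ProviderInfo> {
  if (!fs.existsSync(file)) {
    return {};
  }
  return JSON.parse(fs.readFileSync(file, "utf8"));
}

function getMergedProviders(
  defaults: Record<string, ProviderInfo>,
  configDir: string,
): Record<string, ProviderInfo> {
  const custom = loadProvidersFromFile(path.join(configDir, "providers.json"));
  return { ...defaults, ...custom }; // user entries override the defaults
}
```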
### Checklist
- [x] Lint passes for changed files
- [x] Tests pass for all files
- [x] Documentation/comments updated as needed
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>