## Description
When `saveConfig` is called, the project doc is incorrectly saved into
user instructions. This change ensures that only user instructions are
saved to `instructions.md` during saveConfig, preventing data
corruption.
close: #576
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Solves #510
This PR changes the `/bug` command to print the URL to the terminal
(so it works in headless sessions) instead of trying to open a browser.
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
An up-to-date version of #78. Fixes #32.
Addressed the requested changes @tibo-openai :) They made sense to me,
though the previous rationale for passing the state up was that there
could be a future need for shared state, with all available models
exposed to the parent.
I suspect this is why some contributors kept accidentally including a
new `codex-cli/package-lock.json` in their PRs.
Note the `Dockerfile` still uses `npm` instead of `pnpm`, but that
appears to be fine. (Probably nicer to globally install as few things as
possible in the image.)
This exploration came out of my review of
https://github.com/openai/codex/pull/414.
`run_in_container.sh` runs Codex in a Docker container like so:
bd1c3deed9/codex-cli/scripts/run_in_container.sh (L51-L58)
But then runs `init_firewall.sh` to set up the firewall to restrict
network access.
Previously, we did this by adding `/usr/local/bin/init_firewall.sh` to
the container and adding a special rule in `/etc/sudoers.d` so the
unprivileged user (`node`) could run the privileged `init_firewall.sh`
script to open up the firewall for `api.openai.com`:
31d0d7a305/codex-cli/Dockerfile (L51-L56)
Though I believe this is unnecessary, as we can use `docker exec --user
root` from _outside_ the container to run
`/usr/local/bin/init_firewall.sh` as `root` without adding a special
case in `/etc/sudoers.d`.
This appears to work as expected, as I tested it by doing the following:
```
./codex-cli/scripts/build_container.sh
./codex-cli/scripts/run_in_container.sh 'what is the output of `curl https://www.openai.com`'
```
This was a bit funny because in some of my runs, Codex wasn't convinced
it had network access, so I had to convince it to try the `curl`
request:

As you can see, when it ran `curl -s https\://www.openai.com`, it hit a
connection failure, so the network policy appears to be working as
intended.
Note this PR also removes `sudo` from the `apt-get install` list in the
`Dockerfile`.
## Description
The `as AppConfig` type assertion in the constructor may introduce
potential type safety risks. Removing the assertion and making `notify`
an optional parameter could enhance type robustness and prevent
unexpected runtime errors.
close: #605
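A minimal sketch of the idea, assuming a simplified config shape (the real `AppConfig` and constructor differ):
```ts
// Hypothetical sketch -- names and shapes here are illustrative only.
type AppConfig = {
  model: string;
  notify?: (message: string) => void; // optional rather than asserted
};

class AgentLoop {
  private readonly config: AppConfig;

  // Accepting a fully typed AppConfig removes the need for `as AppConfig`,
  // so a missing field fails at compile time instead of at runtime.
  constructor(config: AppConfig) {
    this.config = config;
  }
}
```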
When using a non-built-in provider with the `--provider` option, users
are prompted:
```
Set the environment variable <provider>_API_KEY and re-run this command.
You can create a <provider>_API_KEY in the <provider> dashboard.
```
However, many users are confused because, even after correctly setting
`<provider>_API_KEY`, authentication may still fail unless
`OPENAI_API_KEY` is _also_ present in the environment. This is not
intuitive and leads to ambiguity about which API key is actually
required and used as a fallback, especially when using custom or
third-party (non-listed) providers.
Furthermore, the original README/documentation did not mention the
requirement to set `<provider>_BASE_URL` for non-built-in providers,
which is necessary for proper client behavior. This omission made the
configuration process more difficult for users trying to integrate with
custom endpoints.
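A sketch of the fallback behavior that causes the confusion, with hypothetical helper and variable names (the real lookup logic differs):
```ts
// Hypothetical sketch of the lookup order described above.
function getApiKey(provider: string): string | undefined {
  // <provider>_API_KEY is checked first...
  const providerKey = process.env[`${provider.toUpperCase()}_API_KEY`];
  // ...but OPENAI_API_KEY may still be consulted as a fallback, which is
  // what surprises users of custom or third-party providers.
  return providerKey ?? process.env["OPENAI_API_KEY"];
}
```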
More of a proposal than anything, but models seem to struggle to compose
valid patches for `apply_patch` when context matching involves unicode
look-alikes. This would normalize them.
```
top-level # ASCII
top-level # U+2011 NON-BREAKING HYPHEN
top–level # U+2013 EN DASH
top—level # U+2014 EM DASH
top‒level # U+2012 FIGURE DASH
```
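A minimal sketch of the normalization, assuming a simple regex fold (the real mapping might cover more code points):
```ts
// Hypothetical sketch: fold common hyphen/dash look-alikes to ASCII "-"
// before context matching. U+2010..U+2015 covers the examples above.
const HYPHEN_LOOKALIKES = /[\u2010-\u2015]/g;

function normalizeHyphens(line: string): string {
  return line.replace(HYPHEN_LOOKALIKES, "-");
}
```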
thanks unicode.
Updates the error message for missing Gemini API keys to reference
"Google AI Studio" instead of the generic "GEMINI dashboard". This
provides users with more accurate information about where to obtain
their Gemini API keys.
This could be extended to other providers as well.
The current turn input in the agent loop is discarded before the stream
events are consumed, which causes the stream reconnect (after a
rate-limit failure) to not include the inputs. Since the new stream
includes the previous response ID, this triggers a bad-request exception
because the input doesn't match what OpenAI has stored on the server
side, and subsequently a very confusing error message: `No tool output
found for function call call_xyz`.
This should fix https://github.com/openai/codex/issues/586.
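In sketch form, the ordering fix looks roughly like this (names are illustrative, not the actual agent-loop code):
```ts
// Hypothetical sketch -- keep the turn input until the stream has been
// fully consumed, so a mid-stream reconnect can resend the same input.
async function runTurn(
  turnInput: Array<unknown>,
  createStream: (input: Array<unknown>, prevId?: string) => Promise<AsyncIterable<unknown>>,
  handleEvent: (event: unknown) => void,
  previousResponseId?: string,
): Promise<Array<unknown>> {
  const input = [...turnInput];          // snapshot; don't discard yet
  const stream = await createStream(input, previousResponseId);
  for await (const event of stream) {
    handleEvent(event);                  // may throw mid-stream (e.g. a 429)
  }
  return [];                             // clear the input only after success
}
```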
## Testing
I have a personal project that I'm working on that runs multiple Codex
CLIs in parallel and often runs into rate limit errors (as seen in the
OpenAI logs). After making this change, I am no longer experiencing
Codex crashing and it was able to retry and handle everything gracefully
until completion (even though I still see rate limiting in the OpenAI
logs).
This fixes https://github.com/openai/codex/issues/480, where the latest
code was crashing when run inside docker: the update checker attempts to
reach out to `npm.antfu.dev`, but that domain is not allowed by the
firewall rules.
I believe the original code was attempting to catch and ignore any
errors when checking for updates but was doing so incorrectly. When you
`await` a promise, errors should be handled with a standard try/catch
rather than `Promise.catch`, so this fixes that.
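In sketch form, with hypothetical helper names:
```ts
// Hypothetical sketch of the fix: wrap the awaited call in try/catch so a
// failed fetch (e.g. DNS blocked by the container firewall) is swallowed.
async function safeCheckForUpdates(checkForUpdates: () => Promise<void>): Promise<void> {
  try {
    await checkForUpdates();
  } catch {
    // Update checks are best-effort; ignore network failures entirely.
  }
}
```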
## Testing
### Before
```
$ scripts/run_in_container.sh "explain this project to me"
7d1aa845edf9a36fe4d5b331474b5cb8ba79537b682922b554ea677f14996c6b
Resolving api.openai.com...
Adding 162.159.140.245 for api.openai.com
Adding 172.66.0.243 for api.openai.com
Host network detected as: 172.17.0.0/24
Firewall configuration complete
Verifying firewall rules...
Firewall verification passed - unable to reach https://example.com as expected
Firewall verification passed - able to reach https://api.openai.com as expected
TypeError: fetch failed
at node:internal/deps/undici/undici:13510:13
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async getLatestVersionBatch (file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:132669:17)
at async getLatestVersion (file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:132674:19)
at async getUpdateCheckInfo (file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:132748:20)
at async checkForUpdates (file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:132772:23)
at async file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:142027:1 {
[cause]: AggregateError [ECONNREFUSED]:
at internalConnectMultiple (node:net:1122:18)
at afterConnectMultiple (node:net:1689:7) {
code: 'ECONNREFUSED',
[errors]: [ [Error], [Error] ]
}
}
```
### After
```
$ scripts/run_in_container.sh "explain this project to me"
91aa716e3d3f86c9cf6013dd567be31b2c44eb5d7ab184d55ef498731020bb8d
Resolving api.openai.com...
Adding 162.159.140.245 for api.openai.com
Adding 172.66.0.243 for api.openai.com
Host network detected as: 172.17.0.0/24
Firewall configuration complete
Verifying firewall rules...
Firewall verification passed - unable to reach https://example.com as expected
Firewall verification passed - able to reach https://api.openai.com as expected
╭──────────────────────────────────────────────────────────────╮
│ ● OpenAI Codex (research preview) v0.1.2504221401 │
╰──────────────────────────────────────────────────────────────╯
╭──────────────────────────────────────────────────────────────╮
│ localhost session: 7c782f196ae04503866e39f071e26a69 │
│ ↳ model: o4-mini │
│ ↳ provider: openai │
│ ↳ approval: full-auto │
╰──────────────────────────────────────────────────────────────╯
user
explain this project to me
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│( ● ) 2s Thinking │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
send q or ctrl+c to exit | send "/clear" to reset | send "/help" for commands | press enter to send | shift+enter for new line — 100% context left
```
### What
- Add support for loading and merging custom provider configurations
from a local `providers.json` file.
- Allow users to override or extend default providers with their own
settings.
### Why
This change enables users to flexibly customize and extend provider
endpoints and API keys without modifying the codebase, making the CLI
more adaptable for various LLM backends and enterprise use cases.
### How
- Introduced `loadProvidersFromFile` and `getMergedProviders` in config
logic.
- Added/updated related tests in [tests/config.test.tsx]
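A sketch of what the merge might look like (field names and shapes here are assumptions, not the real schema):
```ts
// Hypothetical sketch of loadProvidersFromFile / getMergedProviders.
import { readFileSync } from "fs";

type ProviderConfig = { name: string; baseURL: string; envKey: string };
type Providers = Record<string, ProviderConfig>;

function loadProvidersFromFile(path: string): Providers {
  try {
    return JSON.parse(readFileSync(path, "utf8")) as Providers;
  } catch {
    return {}; // a missing or invalid providers.json means no overrides
  }
}

function getMergedProviders(defaults: Providers, filePath: string): Providers {
  // Custom entries override built-ins that share the same key.
  return { ...defaults, ...loadProvidersFromFile(filePath) };
}
```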
### Checklist
- [x] Lint passes for changed files
- [x] Tests pass for all files
- [x] Documentation/comments updated as needed
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Adds support for running other models in quiet mode,
e.g.: `codex --approval-mode full-auto -q "explain the current directory"
--provider xai --model grok-3-beta`
I haven't seen any actual errors due to this, but it's been bothering me
that I had it defaulted to 1. I think it's best to leave it undefined and
have each provider do its own thing.
Gemini's API is finicky: it returns a 400 without an error message when
you pass `content: null`.
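A sketch of the null-content sanitization, assuming a generic message shape:
```ts
// Hypothetical sketch: drop null content before sending, since Gemini
// rejects `content: null` with an opaque 400.
type Message = { role: string; content?: string | null };

function stripNullContent(messages: Array<Message>): Array<Message> {
  return messages.map(({ content, ...rest }) =>
    content == null ? rest : { ...rest, content },
  );
}
```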
Also fixed the rate limiting issues by throwing outside of the iterator.
I think there's a separate issue with the second isRateLimit check in
agent-loop - turnInput is cleared by that time, so it retries without
the last message.
apply_patch doesn't create parent directories when creating a new file,
leading to confusion and flailing by the agent. This will create parent
directories automatically when absent.
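A minimal sketch of the behavior, assuming Node's `fs`/`path` APIs:
```ts
// Hypothetical sketch -- create missing parent directories before writing.
import { mkdirSync, writeFileSync } from "fs";
import { dirname } from "path";

function writeFileCreatingParents(filePath: string, contents: string): void {
  mkdirSync(dirname(filePath), { recursive: true }); // no-op if present
  writeFileSync(filePath, contents);
}
```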
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
## `0.1.2504221401`
### 🚀 Features
- Show actionable errors when api keys are missing (#523)
- Add CLI `--version` flag (#492)
### 🐛 Bug Fixes
- Agent loop for ZDR (`disableResponseStorage`) (#543)
- Fix relative `workdir` check for `apply_patch` (#556)
- Minimal mid-stream HTTP 429 retry loop using existing back-off (#506)
- Inconsistent usage of base URL and API key (#507)
- Remove requirement for api key for ollama (#546)
- Support `[provider]_BASE_URL` (#542)
Previously, we were ignoring the `workdir` field in an `ExecInput` when
running it through `canAutoApprove()`. For ordinary `exec()` calls, that
was sufficient, but for `apply_patch`, we need the `workdir` to resolve
relative paths in the `apply_patch` argument so that we can check them
in `isPathConstrainedToWritablePaths()`.
Likewise, we also need the workdir when running `execApplyPatch()`
because the paths need to be resolved again.
Ideally, the `ApplyPatchCommand` returned by `canAutoApprove()` would
not be a simple `patch: string`, but the parsed patch with all of the
paths resolved, in which case `execApplyPatch()` could expect absolute
paths and would not need `workdir`.
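In sketch form, the resolution step is roughly:
```ts
// Hypothetical sketch: resolve each relative path from the patch against
// the exec workdir before checking it against the writable paths.
import { resolve } from "path";

function resolvePatchPath(workdir: string | undefined, patchPath: string): string {
  return resolve(workdir ?? process.cwd(), patchPath);
}
```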
Fixes #540
# Skip API key validation for Ollama provider
## Description
This PR modifies the CLI to not require an API key when using Ollama as
the provider
## Changes
- Modified the validation logic to skip API key checks for the Ollama
provider
- Updated the README to clarify that Ollama doesn't require an API key
As requested by @tibo-openai at
https://github.com/openai/codex/pull/357#issuecomment-2816554203, this
attempts a more minimal implementation of #357 that preserves as much as
possible of the existing code's exponential backoff logic.
Adds a small retry wrapper around the streaming for‑await loop so that
HTTP 429s which occur *after* the stream has started no longer crash the
CLI.
Highlights
• Re‑uses existing RATE_LIMIT_RETRY_WAIT_MS constant and 5‑attempt
limit.
• Exponential back‑off identical to initial request handling.
This comment is probably more useful here in the PR:
// The OpenAI SDK may raise a 429 (rate‑limit) *after* the stream has
// started. Prior logic already retries the initial `responses.create`
// call, but we need to add equivalent resilience for mid‑stream
// failures. We keep the implementation minimal by wrapping the
// existing `for‑await` loop in a small retry‑for‑loop that re‑creates
// the stream with exponential back‑off.
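In sketch form, the wrapper looks roughly like this (the constants and helpers here stand in for the real ones):
```ts
// Hypothetical sketch of the retry-for-loop described in the comment above.
const MAX_ATTEMPTS = 5;
const RATE_LIMIT_RETRY_WAIT_MS = 500;

async function consumeWithRetry(
  createStream: () => Promise<AsyncIterable<unknown>>,
  handleEvent: (event: unknown) => void,
  isRateLimit: (err: unknown) => boolean,
): Promise<void> {
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      const stream = await createStream();   // re-create on each retry
      for await (const event of stream) {
        handleEvent(event);
      }
      return;                                // stream fully consumed
    } catch (err) {
      if (!isRateLimit(err) || attempt === MAX_ATTEMPTS) {
        throw err;
      }
      // Exponential back-off, matching the initial-request handling.
      const wait = RATE_LIMIT_RETRY_WAIT_MS * 2 ** (attempt - 1);
      await new Promise((res) => setTimeout(res, wait));
    }
  }
}
```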
A recent commit introduced the ability to use third-party model
providers. (Really appreciate it!)
However, the usage is inconsistent: some pieces of code use the custom
providers, whereas others still have the old behavior. Additionally,
`OPENAI_BASE_URL` is now being disregarded when it shouldn't be.
This PR normalizes the usage to `getApiKey` and `getBaseUrl`, and
enables the use of `OPENAI_BASE_URL` if present.
---------
Co-authored-by: Gabriel Bianconi <GabrielBianconi@users.noreply.github.com>
Without this I get an issue running codex in a docker container. I
receive:
```
{
"answer": "{\"role\":\"user\",\"content\":[{\"type\":\"input_text\",\"text\":\"\\\"Say hello world\\\"\"}],\"type\":\"message\"}\n{\"id\":\"error-1745325184914\",\"type\":\"message\",\"role\":\"system\",\"content\":[{\"type\":\"input_text\",\"text\":\"⚠️ OpenAI rejected the request (request ID: req_f9027b59ebbce00061e9cd2dbb2d529a). Error details: Status: 400, Code: invalid_function_parameters, Type: invalid_request_error, Message: 400 Invalid schema for function 'shell': In context=(), 'required' is required to be supplied and to be an array including every key in properties. Missing 'workdir'.. Please verify your settings and try again.\"}]}\n"
}
```
This fix makes it work.
I saw cases where the first chunk of output from `ls -R` could be large
enough to exceed `MAX_OUTPUT_BYTES` or `MAX_OUTPUT_LINES`, in which case
the loop would exit early in `createTruncatingCollector()` such that
nothing was appended to the `chunks` array. As a result, the reported
`stdout` of `ls -R` would be empty.
I asked Codex to add logic to handle this edge case and write a unit
test. I used this as my test:
```
./codex-cli/dist/cli.js -q 'what is the output of `ls -R`'
```
Now it appears to include a ton of stuff, whereas before this change I
saw:
```
{"type":"function_call_output","call_id":"call_a2QhVt7HRJYKjb3dIc8w1aBB","output":"{\"output\":\"\\n\\n[Output truncated: too many lines or bytes]\",\"metadata\":{\"exit_code\":0,\"duration_seconds\":0.5}}"}
```
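A sketch of the edge case being handled (the real collector differs):
```ts
// Hypothetical sketch: if even the first chunk exceeds the byte budget,
// keep a truncated prefix instead of dropping the output entirely.
function appendChunk(
  chunks: Array<Buffer>,
  chunk: Buffer,
  bytesSoFar: number,
  maxBytes: number,
): number {
  const remaining = maxBytes - bytesSoFar;
  if (remaining <= 0) {
    return bytesSoFar; // budget exhausted; caller reports truncation
  }
  const slice = chunk.byteLength > remaining ? chunk.subarray(0, remaining) : chunk;
  chunks.push(slice);
  return bytesSoFar + slice.byteLength;
}
```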
## `0.1.2504211509`
### 🚀 Features
- Support multiple providers via Responses-Completion transformation
(#247)
- Add user-defined safe commands configuration and approval logic #380
(#386)
- Allow switching approval modes when prompted to approve an
edit/command (#400)
- Add support for `/diff` command autocomplete in TerminalChatInput
(#431)
- Auto-open model selector if user selects deprecated model (#427)
- Read approvalMode from config file (#298)
- `/diff` command to view git diff (#426)
- Tab completions for file paths (#279)
- Add /command autocomplete (#317)
- Allow multi-line input (#438)
### 🐛 Bug Fixes
- `full-auto` support in quiet mode (#374)
- Enable shell option for child process execution (#391)
- Configure husky and lint-staged for pnpm monorepo (#384)
- Command pipe execution by improving shell detection (#437)
- Name of the file not matching the name of the component (#354)
- Allow proper exit from new Switch approval mode dialog (#453)
- Ensure /clear resets context and exclude system messages from
approximateTokenUsed count (#443)
- `/clear` now clears terminal screen and resets context left indicator
(#425)
- Correct fish completion function name in CLI script (#485)
- Auto-open model-selector when model is not found (#448)
- Remove unnecessary isLoggingEnabled() checks (#420)
- Improve test reliability for `raw-exec` (#434)
- Unintended tear down of agent loop (#483)
- Remove extraneous type casts (#462)
Updates `isSafeCommand()` so that an invocation of `find` is not
auto-approved if it contains any of: `-exec`, `-execdir`, `-ok`,
`-okdir`, `-delete`, `-fls`, `-fprint`, `-fprint0`, `-fprintf`.
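A sketch of the check (names are illustrative):
```ts
// Hypothetical sketch: any of these flags makes `find` mutate state or
// execute commands, so the invocation must not be auto-approved.
const UNSAFE_FIND_OPTIONS: ReadonlyArray<string> = [
  "-exec", "-execdir", "-ok", "-okdir",
  "-delete", "-fls", "-fprint", "-fprint0", "-fprintf",
];

function isSafeFindCommand(args: ReadonlyArray<string>): boolean {
  return !args.some((arg) => UNSAFE_FIND_OPTIONS.includes(arg));
}
```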
The published package shouldn't need the source files shipped along with
the distributable bin.
`src` is being shipped to the registry right now:
https://www.npmjs.com/package/@openai/codex?activeTab=code
You can verify that the src is not needed by packing the project
manually after removing src from the files:
```sh
# from the codex-cli dir
rm -rf dist # just for hygiene
pnpm run build
pnpm pack
mkdir /tmp/codex-tar-test
mv openai-codex-0.1.2504181820.tgz /tmp/codex-tar-test
cd /tmp/codex-tar-test
pnpm init
pnpm add ./openai-codex-0.1.2504181820.tgz
pnpm exec codex --full-auto "run a bash -c command to echo hello world"
```
The CLI is operational.
> noticed this when checking the screenshot included in
https://github.com/openai/codex/pull/461
Reverts https://github.com/openai/codex/pull/386 because:
* The parsing logic for shell commands was unsafe (`split(/\s+/)`
instead of something like `shell-quote`)
* We have a different plan for supporting auto-approved commands.
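For illustration, here is why naive whitespace splitting is unsafe compared to a real parser (using the `shell-quote` package's `parse`):
```ts
import { parse } from "shell-quote";

// Naive splitting ignores quoting, so quoted arguments get mangled:
"rm -rf 'my docs'".split(/\s+/);   // => ["rm", "-rf", "'my", "docs'"]

// A shell-aware parser keeps the quoted argument intact:
parse("rm -rf 'my docs'");         // => ["rm", "-rf", "my docs"]
```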
## What does this PR do?
* Implements the full `/clear` command in **codex‑cli**:
* Resets chat history **and** wipes the terminal screen.
* Shows a single system message: `Context cleared`.
* Adds comprehensive unit tests for the new behaviour.
## Why is it needed?
* Fixes user‑reported bugs:
* **#395**
* **#405**
## How is it implemented?
* **Code** – Adds `process.stdout.write('\x1b[3J\x1b[H\x1b[2J')` in
`terminal.tsx` (clears the scrollback, homes the cursor, and clears the
screen). Removes the reference to `prev` in
`setItems((prev) => [...prev, ...])` in `terminal-chat-new-input.tsx` &
`terminal-chat-input.tsx`.
## CI / QA
All commands pass locally:
```bash
pnpm test # green
pnpm run lint # green
pnpm run typecheck # zero TS errors
```
## Results
https://github.com/user-attachments/assets/11dcf05c-e054-495a-8ecb-ac6ef21a9da4
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Another one that I noticed.
The dist structure is very simple right now, so you're unlikely to run
into orphaned files, as you're emitting a single built artifact which
will be overwritten on build, but I always prefer to do clean builds as
"hygiene".
As an example, I personally had a dirty dist after local development and
testing some things.
An alternative could be to create a `clean` script with the
cross-platform `rimraf dist`.
This PR tidies up primitives under storage/.
**Noop changes:**
* Promote logger implementation to top-level utility outside of agent/
* Use logger within storage primitives
* Cleanup doc strings and comments
**Functional changes:**
* Increase command history size to 10_000
* Remove unnecessary debounce implementation and ensure a session ID is
created only once per agent loop
---------
Signed-off-by: Thibault Sottiaux <tibo@openai.com>
## Background
Addressing feedback from
https://github.com/openai/codex/pull/333#discussion_r2050893224, this PR
adds support for Bun alongside npm and pnpm while keeping the code simple.
## Summary
The update‑check flow is refactored to use a direct registry lookup
(`fast-npm-meta` + `semver`) instead of shelling out to `npm outdated`,
and adds a lightweight installer‑detection mechanism that:
1. Checks if the invoked script lives under a known global‑bin directory
(npm, pnpm, or bun)
2. If not, falls back to local detection via `getUserAgent()` (the
`package‑manager‑detector` library)
## What’s Changed
- **Registry‑based version check**
- Replace `execFile("npm", ["outdated"])` with `getLatestVersion()` and
`semver.gt()`
- **Multi‑manager support**
- New `renderUpdateCommand` handles update commands for `npm`, `pnpm`,
and `bun`.
- Detect global installer first via `detectInstallerByPath()`
- Fallback to local detection via `getUserAgent()`
- **Module cleanup**
- Extract `detectInstallerByPath` into
`utils/package-manager-detector.ts`
- Remove legacy `checkOutdated`, `getNPMCommandPath`, and child‑process
JSON parsing
- **Flow improvements in `checkForUpdates`**
1. Short‑circuit by `UPDATE_CHECK_FREQUENCY`
2. Fetch & compare versions
3. Persist new timestamp immediately
4. Render & display styled box only when an update exists
- **Maintain simplicity**
- All multi‑manager logic lives in one small helper and a concise lookup
rather than a complex adapter hierarchy
- Core `checkForUpdates` remains a single, easy‑to‑follow async function
- **Dependencies added**
- `fast-npm-meta`, `semver`, `package-manager-detector`, `@types/semver`
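A sketch of the per-manager command rendering (the exact strings may differ from the implementation):
```ts
// Hypothetical sketch of renderUpdateCommand -- actual strings may differ.
type Manager = "npm" | "pnpm" | "bun";

function renderUpdateCommand(manager: Manager, pkg: string): string {
  switch (manager) {
    case "npm":
      return `npm install -g ${pkg}`;
    case "pnpm":
      return `pnpm add -g ${pkg}`;
    case "bun":
      return `bun add -g ${pkg}`;
  }
}
```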
## Considerations
If we decide to drop the interactive update‑message (`npm install -g
@openai/codex`) rendering altogether, we could remove most of the
installer‑detection code and dependencies, which would simplify the
codebase further but result in a less friendly UX.
## Preview
*(screenshots of the rendered update message for npm and bun installs)*
## Simple Flow Chart
```mermaid
flowchart TD
A(Start) --> B[Read state]
B --> C{Recent check?}
C -- Yes --> Z[End]
C -- No --> D[Fetch latest version]
D --> E[Save check time]
E --> F{Version data OK?}
F -- No --> Z
F -- Yes --> G{Update available?}
G -- No --> Z
G -- Yes --> H{Global install?}
H -- Yes --> I[Select global manager]
H -- No --> K{Local install?}
K -- No --> Z
K -- Yes --> L[Select local manager]
I & L --> M[Render update message]
M --> N[Format with boxen]
N --> O[Print update]
O --> Z
```
This PR removes always included files and folders from the
[`package.json#files`
field](https://docs.npmjs.com/cli/v11/configuring-npm/package-json#files):
> Certain files are always included, regardless of settings:
> - package.json
> - README
> - LICENSE / LICENCE
> - The file in the "main" field
> - The file(s) in the "bin" field
Validated by running `pnpm i && cd codex-cli && pnpm build && pnpm
release:readme && pnpm pack` and confirming both the `README.md` file
and `bin` directory are still included in the tarball:
<img width="227" alt="image"
src="https://github.com/user-attachments/assets/ecd90a07-73c7-4940-8c83-cb1d51dfcf96"
/>