Commit Graph

143 Commits

Author SHA1 Message Date
Misha Davidov
9b102965b9 feat: more loosely match context for apply_patch (#610)
More of a proposal than anything, but models seem to struggle to compose
valid patches for `apply_patch` when context matching involves Unicode
look-alikes. This would normalize them.

```
top-level          # ASCII
top‑level          # U+2011 NON-BREAKING HYPHEN
top–level          # U+2013 EN DASH
top—level          # U+2014 EM DASH
top‒level          # U+2012 FIGURE DASH
```

thanks unicode.
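
For illustration, a minimal sketch of the kind of normalization this proposes, assuming a hypothetical `normalizeDashes` helper applied to both sides before matching:

```ts
// Hypothetical sketch: map common Unicode hyphen/dash look-alikes to an
// ASCII hyphen before comparing patch context against file contents.
const DASH_LOOKALIKES = /[\u2010\u2011\u2012\u2013\u2014\u2212]/g;

function normalizeDashes(text: string): string {
  return text.replace(DASH_LOOKALIKES, "-");
}

// Both sides get normalized, so "top–level" (EN DASH) in a patch context
// still matches "top-level" (ASCII) in the file.
function contextMatches(patchContext: string, fileLine: string): boolean {
  return normalizeDashes(patchContext) === normalizeDashes(fileLine);
}
```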
2025-04-24 09:05:19 -07:00
theg1239
ad1e39c903 feat: add specific instructions for creating API keys in error msg (#581)
Updates the error message for missing Gemini API keys to reference
"Google AI Studio" instead of the generic "GEMINI dashboard". This
provides users with more accurate information about where to obtain
their Gemini API keys.

This could be extended to other providers as well.
2025-04-24 06:33:34 -05:00
Connor Christie
622323a59b fix: don't clear turn input before retries (#611)
The current turn input in the agent loop is discarded before the stream
events are consumed, which causes the stream reconnect (after a rate-limit
failure) to not include the inputs. Since the new stream includes the
previous response ID, the request is rejected because the input doesn't match
what OpenAI has stored on the server side, which surfaces as a very confusing
error message: `No tool output found for function call call_xyz`.

This should fix https://github.com/openai/codex/issues/586.
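
A rough ordering sketch of the fix (names like `turnInput` and `consumeStream` are illustrative, not the actual agent-loop code):

```ts
// Illustrative only: the turn input is cleared after the stream has been
// fully consumed, so a reconnect after a rate-limit error can still resend
// it alongside the previous response ID.
async function processTurn<T>(
  turnInput: Array<T>,
  consumeStream: (input: Array<T>) => Promise<void>, // retries internally on 429
): Promise<void> {
  await consumeStream(turnInput); // input stays intact across retries
  turnInput.length = 0;           // ...and is discarded only on success
}
```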

## Testing

I have a personal project that runs multiple Codex CLIs in parallel and
often runs into rate-limit errors (as seen in the OpenAI logs). After making
this change, Codex no longer crashes; it retries and handles everything
gracefully until completion (even though I still see rate limiting in the
OpenAI logs).
2025-04-24 06:29:36 -05:00
Connor Christie
c75cb507f0 bug: fix error catching when checking for updates (#597)
This fixes https://github.com/openai/codex/issues/480, where the latest
code crashed when run inside docker: the update checker attempts to reach
`npm.antfu.dev`, but that DNS lookup is not allowed by the firewall rules.

I believe the original code was attempting to catch and ignore any errors
when checking for updates but was doing so incorrectly. If you `await` a
promise, you have to use a standard try/catch instead of `Promise.catch`, so
this fixes that.
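
A hedged sketch of the pattern change (function names are illustrative):

```ts
// Illustrative sketch: swallow network errors from an awaited update check
// with try/catch so an unreachable registry can't crash startup.
async function checkForUpdatesSafely(
  checkForUpdates: () => Promise<void>,
): Promise<void> {
  try {
    await checkForUpdates();
  } catch {
    // Offline or firewalled environments (like the docker sandbox) land
    // here; silently skipping the update check is fine.
  }
}
```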

## Testing

### Before

```
$ scripts/run_in_container.sh "explain this project to me"
7d1aa845edf9a36fe4d5b331474b5cb8ba79537b682922b554ea677f14996c6b
Resolving api.openai.com...
Adding 162.159.140.245 for api.openai.com
Adding 172.66.0.243 for api.openai.com
Host network detected as: 172.17.0.0/24
Firewall configuration complete
Verifying firewall rules...
Firewall verification passed - unable to reach https://example.com as expected
Firewall verification passed - able to reach https://api.openai.com as expected
TypeError: fetch failed
    at node:internal/deps/undici/undici:13510:13
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async getLatestVersionBatch (file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:132669:17)
    at async getLatestVersion (file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:132674:19)
    at async getUpdateCheckInfo (file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:132748:20)
    at async checkForUpdates (file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:132772:23)
    at async file:///usr/local/share/npm-global/lib/node_modules/@openai/codex/dist/cli.js:142027:1 {
  [cause]: AggregateError [ECONNREFUSED]: 
      at internalConnectMultiple (node:net:1122:18)
      at afterConnectMultiple (node:net:1689:7) {
    code: 'ECONNREFUSED',
    [errors]: [ [Error], [Error] ]
  }
}
```

### After

```
$ scripts/run_in_container.sh "explain this project to me"
91aa716e3d3f86c9cf6013dd567be31b2c44eb5d7ab184d55ef498731020bb8d
Resolving api.openai.com...
Adding 162.159.140.245 for api.openai.com
Adding 172.66.0.243 for api.openai.com
Host network detected as: 172.17.0.0/24
Firewall configuration complete
Verifying firewall rules...
Firewall verification passed - unable to reach https://example.com as expected
Firewall verification passed - able to reach https://api.openai.com as expected
╭──────────────────────────────────────────────────────────────╮
│ ● OpenAI Codex (research preview) v0.1.2504221401            │
╰──────────────────────────────────────────────────────────────╯
╭──────────────────────────────────────────────────────────────╮
│ localhost session: 7c782f196ae04503866e39f071e26a69          │
│ ↳ model: o4-mini                                             │
│ ↳ provider: openai                                           │
│ ↳ approval: full-auto                                        │
╰──────────────────────────────────────────────────────────────╯
user
explain this project to me
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│( ●    ) 2s  Thinking                                                                                                                                                                                                                                                  │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
  send q or ctrl+c to exit | send "/clear" to reset | send "/help" for commands | press enter to send | shift+enter for new line — 100% context left
```
2025-04-23 15:21:00 -07:00
kshern
146a61b073 feat: add support for custom provider configuration in the user config (#537)
### What

- Add support for loading and merging custom provider configurations
from a local `providers.json` file.
- Allow users to override or extend default providers with their own
settings.

### Why

This change enables users to flexibly customize and extend provider
endpoints and API keys without modifying the codebase, making the CLI
more adaptable for various LLM backends and enterprise use cases.

### How

- Introduced `loadProvidersFromFile` and `getMergedProviders` in the config
logic (a rough sketch follows below).
- Added/updated related tests in `tests/config.test.tsx`.
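
A rough sketch of the merge, assuming a provider shape of `{ name, baseURL, envKey }` and a plain JSON file; the actual schema and file location may differ:

```ts
import { existsSync, readFileSync } from "fs";

type ProviderConfig = { name: string; baseURL: string; envKey: string };

const DEFAULT_PROVIDERS: Record<string, ProviderConfig> = {
  openai: {
    name: "OpenAI",
    baseURL: "https://api.openai.com/v1",
    envKey: "OPENAI_API_KEY",
  },
};

function loadProvidersFromFile(path: string): Record<string, ProviderConfig> {
  if (!existsSync(path)) {
    return {};
  }
  return JSON.parse(readFileSync(path, "utf8"));
}

// User-supplied entries override or extend the defaults.
function getMergedProviders(path: string): Record<string, ProviderConfig> {
  return { ...DEFAULT_PROVIDERS, ...loadProvidersFromFile(path) };
}
```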


### Checklist

- [x] Lint passes for changed files
- [x] Tests pass for all files
- [x] Documentation/comments updated as needed

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-23 01:45:56 -04:00
Erick
b428d66f2b feat: added provider to run quiet mode function (#571)
Adds support for running other models in quiet mode, e.g.:

`codex --approval-mode full-auto -q "explain the current directory"
--provider xai --model grok-3-beta`
2025-04-23 01:12:18 -04:00
Connor Christie
cbeb5c3057 fix: remove unreachable "disableResponseStorage" logic flow introduced in #543 (#573)
This PR cleans up unreachable code that was added as a result of
https://github.com/openai/codex/pull/543.

The code being removed is already being handled above:

23f0887df3/codex-cli/src/utils/agent/agent-loop.ts (L535-L539)
2025-04-23 01:08:52 -04:00
Daniel Nakov
4261973467 bug: non-openai mode - don't default temp and top_p (#572)
I haven't seen any actual errors due to this, but it's been bothering me
that these were defaulted to 1. I think it's best to leave them undefined and
let each provider apply its own defaults.
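
One way to express "leave them undefined", as a hedged sketch rather than the actual diff:

```ts
// Sketch: include temperature / top_p only when explicitly configured, so
// each provider can fall back to its own defaults.
function buildSamplingParams(opts: { temperature?: number; topP?: number }) {
  return {
    ...(opts.temperature !== undefined ? { temperature: opts.temperature } : {}),
    ...(opts.topP !== undefined ? { top_p: opts.topP } : {}),
  };
}
```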
2025-04-23 01:07:40 -04:00
Daniel Nakov
23f0887df3 bug: non-openai mode - fix for gemini content: null, fix 429 to throw before stream (#563)
Gemini's API is finicky: it returns a 400 without an error message when you
pass `content: null`.
Also fixed the rate-limiting issues by throwing outside of the iterator.
I think there's a separate issue with the second isRateLimit check in
agent-loop: turnInput is cleared by that time, so it retries without the
last message.
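
A hedged sketch of the kind of sanitizing this implies (message shape is illustrative, and dropping the field or the whole message may be the better fix depending on the payload):

```ts
type ChatMessage = { role: string; content: string | null };

// Sketch only: normalize `content: null` away before the request is sent to
// a Completions-style backend that rejects it.
function sanitizeMessages(messages: Array<ChatMessage>): Array<ChatMessage> {
  return messages.map((m) => (m.content === null ? { ...m, content: "" } : m));
}
```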
2025-04-22 20:37:48 -04:00
Misha Davidov
20b6ef0de8 feat: create parent directories when creating new files. (#552)
apply_patch doesn't create parent directories when creating a new file,
which leads to confusion and flailing by the agent. This change creates
parent directories automatically when they are absent.
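
The underlying Node pattern is roughly this (a sketch, not the actual apply_patch code):

```ts
import { mkdirSync, writeFileSync } from "fs";
import { dirname } from "path";

// Ensure the parent directory exists before writing a brand-new file.
function writeFileCreatingParents(filePath: string, contents: string): void {
  mkdirSync(dirname(filePath), { recursive: true }); // no-op if it already exists
  writeFileSync(filePath, contents, "utf8");
}
```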

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-22 19:45:17 -04:00
chunterb
750d97e8ad feat: add openai model info configuration (#551)
In reference to [Issue 548](https://github.com/openai/codex/issues/548)
- part 1.
2025-04-22 17:31:25 -04:00
Fouad Matin
12bc2dcc4e bump(version): 0.1.2504221401 (#559)
## `0.1.2504221401`

### 🚀 Features

- Show actionable errors when api keys are missing (#523)
- Add CLI `--version` flag (#492)

### 🐛 Bug Fixes

- Agent loop for ZDR (`disableResponseStorage`) (#543)
- Fix relative `workdir` check for `apply_patch` (#556)
- Minimal mid-stream #429 retry loop using existing back-off (#506)
- Inconsistent usage of base URL and API key (#507)
- Remove requirement for api key for ollama (#546)
- Support `[provider]_BASE_URL` (#542)
2025-04-22 14:18:04 -07:00
Nick Carchedi
dc096302e5 fix typo in prompt (#558) 2025-04-22 17:15:28 -04:00
Michael Bolin
7c1f2d7deb when a shell tool call invokes apply_patch, resolve relative paths against workdir, if specified (#556)
Previously, we were ignoring the `workdir` field in an `ExecInput` when
running it through `canAutoApprove()`. For ordinary `exec()` calls, that
was sufficient, but for `apply_patch`, we need the `workdir` to resolve
relative paths in the `apply_patch` argument so that we can check them
in `isPathConstrainedToWritablePaths()`.

Likewise, we also need the workdir when running `execApplyPatch()`
because the paths need to be resolved again.
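
The resolution itself is roughly this (a sketch; the real change threads `workdir` through the approval and exec paths):

```ts
import { resolve } from "path";

// Resolve a path from an apply_patch hunk against the tool call's workdir,
// falling back to the process cwd when no workdir was specified.
function resolvePatchPath(relPath: string, workdir?: string): string {
  return resolve(workdir ?? process.cwd(), relPath);
}
```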

Ideally, the `ApplyPatchCommand` returned by `canAutoApprove()` would
not be a simple `patch: string`, but the parsed patch with all of the
paths resolved, in which case `execApplyPatch()` could expect absolute
paths and would not need `workdir`.
2025-04-22 14:07:47 -07:00
Fouad Matin
a30e79b768 fix: agent loop for disable response storage (#543)
- Fixes post-merge of #506

---------

Co-authored-by: Ilan Bigio <ilan@openai.com>
2025-04-22 13:49:10 -07:00
Daniil Davydov
f99c9080fd fix: support [provider]_BASE_URL (#542)
Resolved issue where an OLLAMA_BASE_URL was not properly handled
(openai/codex#516).
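
The lookup is roughly this shape (a sketch, not the exact config code):

```ts
// Look up a per-provider base URL such as OLLAMA_BASE_URL or OPENAI_BASE_URL
// from the environment, with an optional fallback.
function getBaseUrl(provider: string, fallback?: string): string | undefined {
  return process.env[`${provider.toUpperCase()}_BASE_URL`] ?? fallback;
}
```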
2025-04-22 15:05:48 -04:00
moppywhip
549fc650c3 fix: remove requirement for api key for ollama (#546)
Fixes #540 
# Skip API key validation for Ollama provider

## Description
This PR modifies the CLI to not require an API key when using Ollama as
the provider

## Changes
- Modified the validation logic to skip API key checks for these
providers
- Updated the README to clarify that Ollama doesn't require an API key
2025-04-22 13:59:31 -04:00
Naveen Kumar Battula
fcd1d4bdf9 feat: show actionable errors when api keys are missing (#523)
Changes the error shown when an API key is missing for other providers, from:
https://github.com/user-attachments/assets/f488a247-5040-4b02-92d6-90a2204419ff
(missing DeepSeek key, but the error still points at OpenAI)
to:
https://github.com/user-attachments/assets/8333d24a-91f8-4ba8-9a51-ed22a7e8a074

This should help new users figure out the issue more easily and go to the
right place to get API keys.

A missing OpenAI key still pops up with the right link:
https://github.com/user-attachments/assets/0ecc9320-380f-425c-972e-4312bf610955
2025-04-22 12:55:08 -04:00
narenoai
dd330646d2 feat: add CLI --version flag (#492)
Adds a new `--version` flag to the CLI that prints the current version and
exits.

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-22 12:28:21 -04:00
Scott Leibrand
ee6e1765fa agent-loop: minimal mid-stream #429 retry loop using existing back-off (#506)
As requested by @tibo-openai at
https://github.com/openai/codex/pull/357#issuecomment-2816554203, this
attempts a more minimal implementation of #357 that preserves as much as
possible of the existing code's exponential backoff logic.

Adds a small retry wrapper around the streaming for‑await loop so that
HTTP 429s which occur *after* the stream has started no longer crash the
CLI.

Highlights
• Re‑uses existing RATE_LIMIT_RETRY_WAIT_MS constant and 5‑attempt limit.
• Exponential back‑off identical to initial request handling.

This comment is probably more useful here in the PR:

// The OpenAI SDK may raise a 429 (rate‑limit) *after* the stream has
// started. Prior logic already retries the initial `responses.create`
// call, but we need to add equivalent resilience for mid‑stream
// failures. We keep the implementation minimal by wrapping the
// existing `for‑await` loop in a small retry‑for‑loop that re‑creates
// the stream with exponential back‑off.
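
A compact sketch of that shape (helpers and constants are illustrative, not the real agent-loop code):

```ts
const RATE_LIMIT_RETRY_WAIT_MS = 500; // placeholder value for the sketch
const MAX_ATTEMPTS = 5;

// Wrap the streaming for-await loop in a retry loop so a 429 raised
// mid-stream re-creates the stream with exponential back-off.
async function consumeWithRetry<T>(
  createStream: () => Promise<AsyncIterable<T>>,
  onEvent: (event: T) => void,
  isRateLimit: (err: unknown) => boolean,
): Promise<void> {
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      const stream = await createStream();
      for await (const event of stream) {
        onEvent(event);
      }
      return; // stream completed without error
    } catch (err) {
      if (!isRateLimit(err) || attempt === MAX_ATTEMPTS) {
        throw err;
      }
      await new Promise((r) =>
        setTimeout(r, RATE_LIMIT_RETRY_WAIT_MS * 2 ** (attempt - 1)),
      );
    }
  }
}
```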
2025-04-22 11:02:10 -04:00
Gabriel Bianconi
98a22273d9 fix: inconsistent usage of base URL and API key (#507)
A recent commit introduced the ability to use third-party model
providers. (Really appreciate it!)

However, the usage is inconsistent: some pieces of code use the custom
providers, whereas others still have the old behavior. Additionally,
`OPENAI_BASE_URL` is now being disregarded when it shouldn't be.

This PR normalizes the usage to `getApiKey` and `getBaseUrl`, and
enables the use of `OPENAI_BASE_URL` if present.

---------

Co-authored-by: Gabriel Bianconi <GabrielBianconi@users.noreply.github.com>
2025-04-22 10:51:26 -04:00
Thomas
d78f77edb7 fix(agent-loop): update required properties to include workdir and ti… (#530)
Without this I get an issue running codex in a docker container. I
receive:

```
{
    "answer": "{\"role\":\"user\",\"content\":[{\"type\":\"input_text\",\"text\":\"\\\"Say hello world\\\"\"}],\"type\":\"message\"}\n{\"id\":\"error-1745325184914\",\"type\":\"message\",\"role\":\"system\",\"content\":[{\"type\":\"input_text\",\"text\":\"⚠️  OpenAI rejected the request (request ID: req_f9027b59ebbce00061e9cd2dbb2d529a). Error details: Status: 400, Code: invalid_function_parameters, Type: invalid_request_error, Message: 400 Invalid schema for function 'shell': In context=(), 'required' is required to be supplied and to be an array including every key in properties. Missing 'workdir'.. Please verify your settings and try again.\"}]}\n"
}
```

This fix makes it work.
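
For context, the constraint in that error looks roughly like this: with strict function schemas, `required` must list every key in `properties`. The exact property set of the `shell` tool is assumed here (the `timeout` name is a guess based on the truncated title):

```ts
// Sketch of a strict-mode function schema: every key in `properties` must
// also appear in `required`, even if it is optional at runtime.
const shellTool = {
  type: "function",
  name: "shell",
  parameters: {
    type: "object",
    properties: {
      command: { type: "array", items: { type: "string" } },
      workdir: { type: "string" },
      timeout: { type: "number" }, // assumed name
    },
    required: ["command", "workdir", "timeout"],
    additionalProperties: false,
  },
};
```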
2025-04-22 10:32:36 -04:00
Fouad Matin
2cb8355968 bump(version): 0.1.2504220136 (#518)
## `0.1.2504220136`

### 🚀 Features

- Add support for ZDR orgs (#481)
- Include fractional portion of chunk that exceeds stdout/stderr limit
(#497)
2025-04-22 01:45:30 -07:00
Fouad Matin
9f5ccbb618 feat: add support for ZDR orgs (#481)
- Add `store: boolean` to `AgentLoop` to enable client-side storage of
response items
- Add `--disable-response-storage` arg + `disableResponseStorage` config
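
A hedged sketch of what that means at the API level (not the actual `AgentLoop` wiring):

```ts
import OpenAI from "openai";

const openai = new OpenAI();
const disableResponseStorage = true; // from --disable-response-storage / config

// With storage disabled, pass `store: false` and resend the conversation
// client-side instead of relying on previous_response_id.
const response = await openai.responses.create({
  model: "o4-mini",
  input: "explain this project to me", // full client-side context goes here
  store: !disableResponseStorage,
});
console.log(response.id);
```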
2025-04-22 01:30:16 -07:00
Michael Bolin
3eba86a553 include fractional portion of chunk that exceeds stdout/stderr limit (#497)
I saw cases where the first chunk of output from `ls -R` could be large
enough to exceed `MAX_OUTPUT_BYTES` or `MAX_OUTPUT_LINES`, in which case
the loop would exit early in `createTruncatingCollector()` such that
nothing was appended to the `chunks` array. As a result, the reported
`stdout` of `ls -R` would be empty.

I asked Codex to add logic to handle this edge case and write a unit
test. I used this as my test:

```
./codex-cli/dist/cli.js -q 'what is the output of `ls -R`'
```

now it appears to include a ton of stuff whereas before this change, I
saw:

```
{"type":"function_call_output","call_id":"call_a2QhVt7HRJYKjb3dIc8w1aBB","output":"{\"output\":\"\\n\\n[Output truncated: too many lines or bytes]\",\"metadata\":{\"exit_code\":0,\"duration_seconds\":0.5}}"}
```
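
A hedged sketch of the idea (byte limit only, for brevity; not the actual `createTruncatingCollector()`): when a single chunk blows past the limit, keep the prefix that still fits instead of dropping the chunk entirely.

```ts
function createByteLimitedCollector(maxBytes: number) {
  const chunks: Array<Buffer> = [];
  let bytes = 0;
  let truncated = false;
  return {
    append(chunk: Buffer): void {
      if (truncated) return;
      const remaining = maxBytes - bytes;
      if (chunk.length <= remaining) {
        chunks.push(chunk);
        bytes += chunk.length;
      } else {
        chunks.push(chunk.subarray(0, remaining)); // keep the fractional portion
        bytes = maxBytes;
        truncated = true;
      }
    },
    output: () => Buffer.concat(chunks).toString("utf8"),
    wasTruncated: () => truncated,
  };
}
```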
2025-04-21 19:06:03 -07:00
Fouad Matin
99ed27ad1b bump(version): 0.1.2504211509 (#493)
## `0.1.2504211509`

### 🚀 Features

- Support multiple providers via Responses-Completion transformation
(#247)
- Add user-defined safe commands configuration and approval logic #380
(#386)
- Allow switching approval modes when prompted to approve an
edit/command (#400)
- Add support for `/diff` command autocomplete in TerminalChatInput
(#431)
- Auto-open model selector if user selects deprecated model (#427)
- Read approvalMode from config file (#298)
- `/diff` command to view git diff (#426)
- Tab completions for file paths (#279)
- Add /command autocomplete (#317)
- Allow multi-line input (#438)

### 🐛 Bug Fixes

- `full-auto` support in quiet mode (#374)
- Enable shell option for child process execution (#391)
- Configure husky and lint-staged for pnpm monorepo (#384)
- Command pipe execution by improving shell detection (#437)
- Name of the file not matching the name of the component (#354)
- Allow proper exit from new Switch approval mode dialog (#453)
- Ensure /clear resets context and exclude system messages from
approximateTokenUsed count (#443)
- `/clear` now clears terminal screen and resets context left indicator
(#425)
- Correct fish completion function name in CLI script (#485)
- Auto-open model-selector when model is not found (#448)
- Remove unnecessary isLoggingEnabled() checks (#420)
- Improve test reliability for `raw-exec` (#434)
- Unintended tear down of agent loop (#483)
- Remove extraneous type casts (#462)
2025-04-21 15:13:33 -07:00
Daniel Nakov
e7a3eec942 docs: Add note about non-openai providers; add --provider cli flag to the help (#484) 2025-04-21 14:53:09 -07:00
Mitchell Kutchuk
09f0ae3899 fix: unintended tear down of agent loop (#483)
fixes #465
2025-04-21 15:01:09 -04:00
Benny Yen
f72cfd7ef3 fix: correct fish completion function name in CLI script (#485)
Missing an underscore.

fish function:
https://github.com/fish-shell/fish-shell/blob/master/share/functions/__fish_complete_path.fish

fixes https://github.com/openai/codex/issues/469
2025-04-21 14:45:49 -04:00
Michael Bolin
12ec57b330 do not auto-approve the find command if it contains options that write files or spawn commands (#482)
Updates `isSafeCommand()` so that an invocation of `find` is not
auto-approved if it contains any of: `-exec`, `-execdir`, `-ok`,
`-okdir`, `-delete`, `-fls`, `-fprint`, `-fprint0`, `-fprintf`.
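
The check amounts to roughly this (a sketch, not the actual `isSafeCommand()`):

```ts
// `find` stays auto-approvable only when none of its arguments can write
// files or spawn other commands.
const UNSAFE_FIND_OPTIONS = new Set([
  "-exec", "-execdir", "-ok", "-okdir",
  "-delete", "-fls", "-fprint", "-fprint0", "-fprintf",
]);

function isFindSafeToAutoApprove(command: Array<string>): boolean {
  return (
    command[0] === "find" &&
    !command.some((arg) => UNSAFE_FIND_OPTIONS.has(arg))
  );
}
```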
2025-04-21 10:29:58 -07:00
Jon Church
7346f4388e chore: drop src from publish (#474)
Publish shouldn't need the source files shipped along with the
distributable bin.

`src` is currently being shipped to the registry:
https://www.npmjs.com/package/@openai/codex?activeTab=code

You can verify that the src is not needed by packing the project
manually after removing src from the files:

```sh
# from the codex-cli dir
rm -rf dist # just for hygiene
pnpm run build
pnpm pack

mkdir /tmp/codex-tar-test
mv openai-codex-0.1.2504181820.tgz /tmp/codex-tar-test
cd /tmp/codex-tar-test

pnpm init
pnpm add ./openai-codex-0.1.2504181820.tgz /tmp/codex-tar-test
pnpm exec codex --full-auto "run a bash -c command to echo hello world"
```

The cli is operational

> noticed this when checking the screenshot included in
https://github.com/openai/codex/pull/461
2025-04-21 09:56:00 -07:00
Michael Bolin
d36d295a1a revert #386 due to unsafe shell command parsing (#478)
Reverts https://github.com/openai/codex/pull/386 because:

* The parsing logic for shell commands was unsafe (`split(/\s+/)`
instead of something like `shell-quote`)
* We have a different plan for supporting auto-approved commands.
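
For context on why `split(/\s+/)` is unsafe, a quick comparison with a real tokenizer such as `shell-quote` (illustration only, not the reverted code):

```ts
import { parse } from "shell-quote";

const cmd = `rm -rf "My Documents"`;

// Naive splitting tears quoted arguments apart:
cmd.split(/\s+/); // ["rm", "-rf", "\"My", "Documents\""]

// A proper tokenizer keeps the quoted argument intact:
parse(cmd);       // ["rm", "-rf", "My Documents"]
```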
2025-04-21 12:52:11 -04:00
nerdielol
797eba4930 fix: /clear now clears terminal screen and resets context left indicator (#425)
## What does this PR do?
* Implements the full `/clear` command in **codex‑cli**:
  * Resets chat history **and** wipes the terminal screen.
  * Shows a single system message: `Context cleared`.
* Adds comprehensive unit tests for the new behaviour.

## Why is it needed?
* Fixes user‑reported bugs:  
  * **#395**  
  * **#405**

## How is it implemented?
* **Code** – Adds `process.stdout.write('\x1b[3J\x1b[H\x1b[2J')` in
`terminal.tsx`. Removed the reference to `prev` in the
`setItems((prev) => [...prev, ...])` call in `terminal-chat-new-input.tsx` &
`terminal-chat-input.tsx`.

## CI / QA
All commands pass locally:
```bash
pnpm test      # green
pnpm run lint  # green
pnpm run typecheck  # zero TS errors
```

## Results



https://github.com/user-attachments/assets/11dcf05c-e054-495a-8ecb-ac6ef21a9da4

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-21 12:39:46 -04:00
Jon Church
5b19451770 chore(build): cleanup dist before build (#477)
Another one that I noticed.

The dist structure is very simple right now, so you're unlikely to run into
orphaned files, since a single built artifact is emitted and overwritten on
each build, but I always prefer to do clean builds as "hygiene".

I had a dirty dist personally after local development and testing some
things, as an example.

An alternative would be to create a `clean` script using the cross-platform
`rimraf dist`.
2025-04-21 12:35:25 -04:00
Thibault Sottiaux
3c4f1fea9b chore: consolidate model utils and drive-by cleanups (#476)
Signed-off-by: Thibault Sottiaux <tibo@openai.com>
2025-04-21 12:33:57 -04:00
Thibault Sottiaux
dc276999a9 chore: improve storage/ implementation; use log(...) consistently (#473)
This PR tidies up primitives under storage/.

**Noop changes:**

* Promote logger implementation to top-level utility outside of agent/
* Use logger within storage primitives
* Cleanup doc strings and comments

**Functional changes:**

* Increase command history size to 10_000
* Remove unnecessary debounce implementation and ensure a session ID is
created only once per agent loop

---------

Signed-off-by: Thibault Sottiaux <tibo@openai.com>
2025-04-21 09:51:34 -04:00
Aaron Casanova
8f1ea7fa85 fix: remove extraneous type casts (#462) 2025-04-21 00:00:56 -07:00
Benny Yen
3e71c87708 refactor(updates): fetch version from registry instead of npm CLI to support multiple managers (#446)
## Background  
Addressing feedback from
https://github.com/openai/codex/pull/333#discussion_r2050893224, this PR
adds support for Bun alongside npm and pnpm while keeping the code simple.

## Summary  
The update‑check flow is refactored to use a direct registry lookup
(`fast-npm-meta` + `semver`) instead of shelling out to `npm outdated`,
and adds a lightweight installer‑detection mechanism that:

1. Checks if the invoked script lives under a known global‑bin directory
(npm, pnpm, or bun)
2. If not, falls back to local detection via `getUserAgent()` (the
`package‑manager‑detector` library)

## What’s Changed  
- **Registry‑based version check**  
- Replace `execFile("npm", ["outdated"])` with `getLatestVersion()` and
`semver.gt()` (see the sketch after this list)
- **Multi‑manager support**  
- New `renderUpdateCommand` handles update commands for `npm`, `pnpm`,
and `bun`.
  - Detect global installer first via `detectInstallerByPath()`  
  - Fallback to local detection via `getUserAgent()`  
- **Module cleanup**  
- Extract `detectInstallerByPath` into
`utils/package-manager-detector.ts`
- Remove legacy `checkOutdated`, `getNPMCommandPath`, and child‑process
JSON parsing
- **Flow improvements in `checkForUpdates`**  
  1. Short‑circuit by `UPDATE_CHECK_FREQUENCY`  
  2. Fetch & compare versions  
  3. Persist new timestamp immediately  
  4. Render & display styled box only when an update exists  
- **Maintain simplicity**
- All multi‑manager logic lives in one small helper and a concise lookup
rather than a complex adapter hierarchy
- Core `checkForUpdates` remains a single, easy‑to‑follow async function
- **Dependencies added**  
- `fast-npm-meta`, `semver`, `package-manager-detector`, `@types/semver`
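
A hedged sketch of the registry-based check (assuming the metadata resolved by `getLatestVersion` exposes a `version` string):

```ts
import { getLatestVersion } from "fast-npm-meta";
import semver from "semver";

// Ask the registry for the latest published version and compare with semver
// instead of shelling out to `npm outdated`.
async function isUpdateAvailable(pkg: string, currentVersion: string): Promise<boolean> {
  const meta = await getLatestVersion(pkg);
  const latest = (meta as { version?: string | null }).version;
  return latest != null && semver.gt(latest, currentVersion);
}
```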

## Considerations
If we decide to drop the interactive update‑message (`npm install -g
@openai/codex`) rendering altogether, we could remove most of the
installer‑detection code and dependencies, which would simplify the
codebase further but result in a less friendly UX.

## Preview

* npm

![refactor-update-check-flow-npm](https://github.com/user-attachments/assets/57320114-3fb6-4985-8780-3388a1d1ec85)

* bun

![refactor-update-check-flow-bun](https://github.com/user-attachments/assets/d93bf0ae-a687-412a-ab92-581b4f967307)

## Simple Flow Chart

```mermaid
flowchart TD
  A(Start) --> B[Read state]
  B --> C{Recent check?}
  C -- Yes --> Z[End]
  C -- No --> D[Fetch latest version]
  D --> E[Save check time]
  E --> F{Version data OK?}
  F -- No --> Z
  F -- Yes --> G{Update available?}
  G -- No --> Z
  G -- Yes --> H{Global install?}
  H -- Yes --> I[Select global manager]
  H -- No --> K{Local install?}
  K -- No --> Z
  K -- Yes --> L[Select local manager]
  I & L --> M[Render update message]
  M --> N[Format with boxen]
  N --> O[Print update]
  O --> Z
```
2025-04-21 00:00:20 -07:00
Abdelrhman Kamal Mahmoud Ali Slim
655564f25d fix: auto-open model-selector when model is not found (#448)
Changes the check from whether the model has been deprecated to whether the
error is `model_not_found`.


https://github.com/user-attachments/assets/ad0f7daf-5eb4-4e4b-89e5-04240044c64f
2025-04-20 23:56:20 -07:00
Aaron Casanova
ee3a9bc14b Remove README.md and bin from package.json#files field (#461)
This PR removes always included files and folders from the
[`package.json#files`
field](https://docs.npmjs.com/cli/v11/configuring-npm/package-json#files):

> Certain files are always included, regardless of settings:
> - package.json
> - README
> - LICENSE / LICENCE
> - The file in the "main" field
> - The file(s) in the "bin" field

Validated by running `pnpm i && cd codex-cli && pnpm build && pnpm
release:readme && pnpm pack` and confirming both the `README.md` file
and `bin` directory are still included in the tarball:

https://github.com/user-attachments/assets/ecd90a07-73c7-4940-8c83-cb1d51dfcf96
2025-04-20 23:54:27 -07:00
Aiden Cline
ee7ce5b601 feat: tab completions for file paths (#279)
Made a PR as requested in #113.
2025-04-20 22:34:27 -07:00
Jordan Docherty
f6b12aa994 refactor(history-overlay): split into modular functions & add tests (fixes #402) (#403)
## What
This PR targets #402 and refactors the `history-overlay.tsx` component to
reduce cognitive complexity by splitting the `buildLists` function into
smaller, focused helper functions. It also adds comprehensive test
coverage to ensure the functionality remains intact.

## Why
The original `buildLists` function had high cognitive complexity due to
multiple nested conditionals, complex string manipulation, and mixed
responsibilities. This refactor makes the code more maintainable and
easier to understand while preserving all existing functionality.

## How
- Split `buildLists` into focused helper functions
- Added comprehensive test coverage for all functionality
- Maintained existing behavior and keyboard interactions
- Improved code organization and readability

## Testing
All tests pass, including:
- Command mode functionality
- File mode functionality
- Keyboard interactions
- Error handling
2025-04-20 22:27:06 -07:00
Abdelrhman Kamal Mahmoud Ali Slim
f97557f59f refactor(component): rename component to match its filename (#432)
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-20 22:21:49 -07:00
Scott Leibrand
e3c8ce49ca fix: allow proper exit from new Switch approval mode dialog (#453)
As described in
https://github.com/openai/codex/issues/392#issuecomment-2817090022
introduced by #400

The testing I'd done worked correctly because I was using the (s)
shortcut, but selecting the same option using arrow‑key → Enter on
“Switch approval mode” was preventing the user from subsequently exiting
the Switch approval mode dialog, requiring a ^C to quit codex entirely.
With this fix, both entry methods work correctly in my testing.

Per codex:

Issue

- When you navigated down (↓) to “Switch approval mode (s)” in the Shell
Command review dialog and pressed Enter, the ApprovalModeOverlay would
open—but because the underlying `TerminalChatCommandReview` component
stayed mounted (albeit disabled), its own Ink input handlers immediately
re‑captured the same key events and re‑opened the overlay as soon as you
hit Esc or Enter again. In practice this made it impossible to exit the
submenu.

Root cause

- We only disabled the SelectInput via `isDisabled`, but never fully
unmounted the review UI when an overlay was shown, so its `useInput` and
`<Select>` hooks were still active and “stealing” keys.

Fix

- In `terminal-chat.tsx` we now only render `<TerminalChatInput>` (and
by extension `TerminalChatCommandReview`) when `overlayMode === "none"`.
That unmounts all of its key handlers whenever any overlay (history,
model, approval, help, diff) is open, so no input leaks through.

Files changed

- **src/components/chat/terminal-chat.tsx**: Wrapped the entire
`<TerminalChatInput>` block in `overlayMode === "none" && agent`

With that in place, arrow‑key → Enter on “Switch approval mode”
correctly opens the overlay, and then you can use Enter/Esc inside the
overlay without getting stuck or immediately re‑opening it.
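
A minimal sketch of that gating (not the real component; props are elided):

```tsx
import React from "react";

// Stand-in for the real component in terminal-chat-input.tsx.
const TerminalChatInput: React.FC = () => null;

type Props = { overlayMode: string; agent: unknown | null };

// The chat input subtree (and the command review it renders) is mounted only
// when no overlay is open, so its useInput handlers are torn down while an
// overlay has focus.
function ChatInputGate({ overlayMode, agent }: Props): React.ReactElement | null {
  return overlayMode === "none" && agent ? <TerminalChatInput /> : null;
}
```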
2025-04-20 22:19:53 -07:00
Brayden Moon
8dd1125681 fix: command pipe execution by improving shell detection (#437)
## Description
This PR fixes Issue #421 where commands with pipes (e.g., `grep -R ...
-n | head -n 20`) were failing to execute properly after PR #391 was
merged.

## Changes
- Modified the `requiresShell` function to only enable shell mode when
the command is a single string containing shell operators
- Added logic to handle the case where shell operators are passed as
separate arguments
- Added comprehensive tests to verify the fix

## Root Cause
The issue was that the `requiresShell` function was detecting shell
operators like `|` even when they were passed as separate arguments,
which caused the command to be executed with `shell: true`
unnecessarily. This was causing syntax errors when running commands with
pipes.
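
The resulting check is roughly (a sketch, not the exact implementation):

```ts
// Shell mode is only needed when a *single* string embeds shell operators;
// operators that arrive as separate argv entries no longer trigger it.
const SHELL_OPERATOR = /(\|\|?|&&|;|>>?|<)/;

function requiresShell(cmd: Array<string>): boolean {
  return cmd.length === 1 && SHELL_OPERATOR.test(cmd[0]);
}
```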

## Testing
- Added unit tests to verify the fix
- Manually tested with real commands using pipes
- Ensured all existing tests pass

Fixes #421
2025-04-20 21:11:19 -07:00
Daniel Nakov
eafbc75612 feat: support multiple providers via Responses-Completion transformation (#247)
https://github.com/user-attachments/assets/9ecb51be-fa65-4e99-8512-abb898dda569

Implemented it as a transformation between Responses API and Completion
API so that it supports existing providers that implement the Completion
API and minimizes the changes needed to the codex repo.
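
A heavily simplified sketch of the transformation's direction (the real one also covers tool calls, streaming deltas, and the reverse mapping):

```ts
// Map Responses-style input items onto Chat Completions messages so a
// Completions-only provider can serve the same agent loop.
type ResponseInputItem = {
  type: "message";
  role: "user" | "assistant" | "system";
  content: Array<{ type: string; text: string }>;
};

type ChatCompletionMessage = { role: string; content: string };

function toChatMessages(
  items: Array<ResponseInputItem>,
): Array<ChatCompletionMessage> {
  return items.map((item) => ({
    role: item.role,
    content: item.content.map((part) => part.text).join(""),
  }));
}
```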

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Co-authored-by: Fouad Matin <169186268+fouad-openai@users.noreply.github.com>
Co-authored-by: Fouad Matin <fouad@openai.com>
2025-04-20 20:59:34 -07:00
Jordan Docherty
693bd59ecc fix(raw-exec-process-group): improve test reliability (#434)
This PR improves the reliability of `raw-exec-process-group.test`,
addressing [#415](https://github.com/openai/codex/issues/415)

Before: The test would fail sporadically in CI because it checked for
process termination immediately after abort, without accounting for the
time it takes for processes to fully terminate.

Now: We've added a robust `ensureProcessGone` helper that:
- Polls the process status with a 500ms timeout
- Retries every 50ms if the process is still alive
- Provides clear error messages if termination takes too long

We now wait for the child process to fully exit after sending abort
signals, instead of assuming instant death, fixing flakiness caused by
asynchronous process termination.
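
A sketch of such a helper (the real one lives in the test; details may differ):

```ts
// Poll until the process no longer exists, retrying every 50ms for up to
// 500ms, instead of asserting immediately after abort.
async function ensureProcessGone(
  pid: number,
  timeoutMs = 500,
  intervalMs = 50,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      process.kill(pid, 0); // signal 0 only checks for existence
    } catch {
      return; // ESRCH: the process has exited
    }
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error(`Process ${pid} is still alive after ${timeoutMs}ms`);
}
```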

Changes:
- Added `ensureProcessGone` helper function with retry logic
- Improved error handling and timeout management

See [this bash
demo](https://gist.github.com/jdocherty/a84dbca2fbf7b47e5f95c87a07034ae8)
for a minimal reproduction of why process death is asynchronous and why
the test needs to retry after aborting.
2025-04-20 12:21:02 -07:00
Michael Bolin
b554b522f7 fix: remove unnecessary isLoggingEnabled() checks (#420)
It appears that use of `isLoggingEnabled()` was erroneously copy-pasted
in many places. This PR updates its docstring to clarify that it should
only be used to avoid constructing a potentially expensive log message.
With this change, the only function that merits/uses this check is
`execCommand()`.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/420).
* #423
* __->__ #420
* #419
2025-04-20 09:58:06 -07:00
Abdelrhman Kamal Mahmoud Ali Slim
81cf47e591 feat: auto-open model selector if user selects deprecated model (#427)
https://github.com/user-attachments/assets/d254dff6-f1f9-4492-9dfd-d185c38d3a75
2025-04-20 09:51:49 -07:00
Michael Bolin
e372e4667b Make it so CONFIG_DIR is not in the list of writable roots by default (#419)
To play it safe, let's keep `CONFIG_DIR` out of the default list of
writable roots.

This also fixes an issue where `execWithSeatbelt()` was modifying
`writableRoots` instead of creating a new array.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/419).
* #423
* #420
* __->__ #419
2025-04-20 09:37:07 -07:00