Commit Graph

131 Commits

Author SHA1 Message Date
Misha Davidov
9b102965b9 feat: more loosely match context for apply_patch (#610)
More of a proposal than anything, but models seem to struggle to compose valid
patches for `apply_patch` when the context they are matching against contains
Unicode look-alikes. This change would normalize them.

```
top-level          # ASCII
top-level          # U+2011 NON-BREAKING HYPHEN
top–level          # U+2013 EN DASH
top—level          # U+2014 EM DASH
top‒level          # U+2012 FIGURE DASH
```

thanks unicode.
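
A rough sketch of the normalization idea (the character set and helper name are illustrative, not the actual implementation):

```ts
// A minimal sketch (not the actual apply_patch code): normalize common Unicode
// dash look-alikes to an ASCII hyphen before comparing patch context lines
// against file contents.
const DASH_LOOKALIKES = /[\u2011\u2012\u2013\u2014]/g; // ‑ ‒ – —

function canonicalizeForMatch(line: string): string {
  return line.replace(DASH_LOOKALIKES, "-");
}

// "top–level" (en dash) now matches "top-level" once both sides are canonicalized.
console.log(canonicalizeForMatch("top–level") === canonicalizeForMatch("top-level")); // true
```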
2025-04-24 09:05:19 -07:00
Connor Christie
622323a59b fix: don't clear turn input before retries (#611)
The current turn input in the agent loop is being discarded before the stream
events are consumed, which causes the stream reconnect (after a rate-limit
failure) to not include the inputs. Since the new stream includes the previous
response ID, this triggers a bad-request exception because the input doesn't
match what OpenAI has stored on the server side, and subsequently a very
confusing error message of: `No tool output found for function call call_xyz`.

This should fix https://github.com/openai/codex/issues/586.
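
A minimal sketch of the intended ordering, with assumed names rather than the actual agent-loop code:

```ts
// Keep the turn input until the stream has been fully consumed, so a reconnect
// after a rate-limit failure can resend the same input items instead of an
// empty list. Names here are illustrative.
type InputItem = { role: string; content: string };

async function runTurn(
  turnInput: Array<InputItem>,
  openStream: (input: Array<InputItem>) => AsyncIterable<unknown>,
  handleEvent: (event: unknown) => void,
): Promise<void> {
  // openStream is assumed to reconnect on 429s and needs the input each time,
  // which is why turnInput must still be populated at this point.
  for await (const event of openStream(turnInput)) {
    handleEvent(event);
  }
  // Only clear the input once the stream has finished successfully.
  turnInput.length = 0;
}
```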

## Testing

I have a personal project that I'm working on that runs multiple Codex
CLIs in parallel and often runs into rate limit errors (as seen in the
OpenAI logs). After making this change, I am no longer experiencing
Codex crashing and it was able to retry and handle everything gracefully
until completion (even though I still see rate limiting in the OpenAI
logs).
2025-04-24 06:29:36 -05:00
kshern
146a61b073 feat: add support for custom provider configuration in the user config (#537)
### What

- Add support for loading and merging custom provider configurations
from a local `providers.json` file.
- Allow users to override or extend default providers with their own
settings.

### Why

This change enables users to flexibly customize and extend provider
endpoints and API keys without modifying the codebase, making the CLI
more adaptable for various LLM backends and enterprise use cases.

### How

- Introduced `loadProvidersFromFile` and `getMergedProviders` in the config
logic (see the sketch below).
- Added/updated related tests in `tests/config.test.tsx`.
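
A minimal sketch of the merge step, assuming providers are keyed by name and user entries from `providers.json` override the built-in defaults field by field (the exact file shape is not shown in this PR):

```ts
interface ProviderConfig {
  baseURL: string;
  envKey: string; // environment variable that holds the API key
}

// Merge user-supplied providers over the built-in defaults; unknown providers
// are added, known ones are overridden field by field.
function getMergedProviders(
  defaults: Record<string, ProviderConfig>,
  userProviders: Record<string, Partial<ProviderConfig>>,
): Record<string, ProviderConfig> {
  const merged: Record<string, ProviderConfig> = { ...defaults };
  for (const [name, override] of Object.entries(userProviders)) {
    merged[name] = { ...(defaults[name] ?? {}), ...override } as ProviderConfig;
  }
  return merged;
}
```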


### Checklist

- [x] Lint passes for changed files
- [x] Tests pass for all files
- [x] Documentation/comments updated as needed

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-23 01:45:56 -04:00
Connor Christie
cbeb5c3057 fix: remove unreachable "disableResponseStorage" logic flow introduced in #543 (#573)
This PR cleans up unreachable code that was added as a result of
https://github.com/openai/codex/pull/543.

The code being removed is already being handled above:

23f0887df3/codex-cli/src/utils/agent/agent-loop.ts (L535-L539)
2025-04-23 01:08:52 -04:00
Daniel Nakov
4261973467 bug: non-openai mode - don't default temp and top_p (#572)
I haven't seen any actual errors due to this, but it's been bothering me
that I had them defaulted to 1. I think it's best to leave them undefined and
have each provider use its own defaults.
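
A minimal sketch of the idea (helper name is illustrative): include the sampling parameters only when the user actually set them, so each provider falls back to its own defaults.

```ts
function buildSamplingParams(temperature?: number, topP?: number): Record<string, number> {
  // Omit unset values entirely instead of sending a default of 1.
  return {
    ...(temperature !== undefined ? { temperature } : {}),
    ...(topP !== undefined ? { top_p: topP } : {}),
  };
}
```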
2025-04-23 01:07:40 -04:00
Daniel Nakov
23f0887df3 bug: non-openai mode - fix for gemini content: null, fix 429 to throw before stream (#563)
Gemini's API is finicky: it returns a 400 without an error message when you
pass `content: null`.
Also fixed the rate-limiting issues by throwing outside of the iterator.
I think there's a separate issue with the second isRateLimit check in
agent-loop: turnInput is cleared by that time, so it retries without
the last message.
2025-04-22 20:37:48 -04:00
Misha Davidov
20b6ef0de8 feat: create parent directories when creating new files. (#552)
apply_patch doesn't create parent directories when creating a new file, which
leads to confusion and flailing by the agent. This change creates parent
directories automatically when they are absent.
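
A minimal sketch of the behavior (not the actual apply_patch implementation):

```ts
import fs from "fs";
import path from "path";

// Ensure the parent directory exists before writing a newly created file.
function writeNewFile(filePath: string, contents: string): void {
  fs.mkdirSync(path.dirname(filePath), { recursive: true }); // no-op if it already exists
  fs.writeFileSync(filePath, contents, "utf8");
}
```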

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-22 19:45:17 -04:00
chunterb
750d97e8ad feat: add openai model info configuration (#551)
In reference to [Issue 548](https://github.com/openai/codex/issues/548)
- part 1.
2025-04-22 17:31:25 -04:00
Fouad Matin
12bc2dcc4e bump(version): 0.1.2504221401 (#559)
## `0.1.2504221401`

### 🚀 Features

- Show actionable errors when api keys are missing (#523)
- Add CLI `--version` flag (#492)

### 🐛 Bug Fixes

- Agent loop for ZDR (`disableResponseStorage`) (#543)
- Fix relative `workdir` check for `apply_patch` (#556)
- Minimal mid-stream #429 retry loop using existing back-off (#506)
- Inconsistent usage of base URL and API key (#507)
- Remove requirement for api key for ollama (#546)
- Support `[provider]_BASE_URL` (#542)
2025-04-22 14:18:04 -07:00
Nick Carchedi
dc096302e5 fix typo in prompt (#558) 2025-04-22 17:15:28 -04:00
Michael Bolin
7c1f2d7deb when a shell tool call invokes apply_patch, resolve relative paths against workdir, if specified (#556)
Previously, we were ignoring the `workdir` field in an `ExecInput` when
running it through `canAutoApprove()`. For ordinary `exec()` calls, that
was sufficient, but for `apply_patch`, we need the `workdir` to resolve
relative paths in the `apply_patch` argument so that we can check them
in `isPathConstrainedToWritablePaths()`.

Likewise, we also need the workdir when running `execApplyPatch()`
because the paths need to be resolved again.

Ideally, the `ApplyPatchCommand` returned by `canAutoApprove()` would
not be a simple `patch: string`, but the parsed patch with all of the
paths resolved, in which case `execApplyPatch()` could expect absolute
paths and would not need `workdir`.
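
A minimal sketch of the resolution step described above (function name is illustrative):

```ts
import path from "path";

// Resolve an apply_patch target against the tool call's workdir (when given)
// before checking it against the writable roots or touching the filesystem.
function resolvePatchPath(target: string, workdir?: string): string {
  const base = workdir ?? process.cwd();
  return path.isAbsolute(target) ? target : path.resolve(base, target);
}
```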
2025-04-22 14:07:47 -07:00
Fouad Matin
a30e79b768 fix: agent loop for disable response storage (#543)
- Fixes post-merge of #506

---------

Co-authored-by: Ilan Bigio <ilan@openai.com>
2025-04-22 13:49:10 -07:00
Daniil Davydov
f99c9080fd fix: support [provider]_BASE_URL (#542)
Resolved issue where an OLLAMA_BASE_URL was not properly handled
(openai/codex#516).
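
A minimal sketch of the lookup (helper name and fallback are illustrative):

```ts
// Look up a provider-specific base URL such as OLLAMA_BASE_URL, falling back
// to the provider's default endpoint when the variable is not set.
function getProviderBaseUrl(provider: string, defaultUrl: string): string {
  const envVar = `${provider.toUpperCase()}_BASE_URL`;
  return process.env[envVar] ?? defaultUrl;
}
```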
2025-04-22 15:05:48 -04:00
Scott Leibrand
ee6e1765fa agent-loop: minimal mid-stream #429 retry loop using existing back-off (#506)
As requested by @tibo-openai at
https://github.com/openai/codex/pull/357#issuecomment-2816554203, this
attempts a more minimal implementation of #357 that preserves as much as
possible of the existing code's exponential backoff logic.

Adds a small retry wrapper around the streaming for‑await loop so that
HTTP 429s which occur *after* the stream has started no longer crash the
CLI.

Highlights:

* Re‑uses existing `RATE_LIMIT_RETRY_WAIT_MS` constant and 5‑attempt limit.
* Exponential back‑off identical to initial request handling.

This comment is probably more useful here in the PR:

```
// The OpenAI SDK may raise a 429 (rate‑limit) *after* the stream has
// started. Prior logic already retries the initial `responses.create`
// call, but we need to add equivalent resilience for mid‑stream
// failures. We keep the implementation minimal by wrapping the
// existing `for‑await` loop in a small retry‑for‑loop that re‑creates
// the stream with exponential back‑off.
```
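
A minimal, self-contained sketch of that wrapper (constants and names are assumptions, not the actual agent-loop code):

```ts
const RATE_LIMIT_RETRY_WAIT_MS = 500; // assumed base delay
const MAX_ATTEMPTS = 5;

async function consumeWithRetries(
  createStream: () => Promise<AsyncIterable<unknown>>,
  handleEvent: (event: unknown) => void,
): Promise<void> {
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      const stream = await createStream();
      for await (const event of stream) {
        handleEvent(event);
      }
      return; // stream completed without a mid-stream failure
    } catch (err) {
      const status = (err as { status?: number }).status;
      if (status !== 429 || attempt === MAX_ATTEMPTS) {
        throw err;
      }
      // Same exponential back-off shape as the initial-request handling.
      await new Promise((r) => setTimeout(r, RATE_LIMIT_RETRY_WAIT_MS * 2 ** (attempt - 1)));
    }
  }
}
```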
2025-04-22 11:02:10 -04:00
Gabriel Bianconi
98a22273d9 fix: inconsistent usage of base URL and API key (#507)
A recent commit introduced the ability to use third-party model
providers. (Really appreciate it!)

However, the usage is inconsistent: some pieces of code use the custom
providers, whereas others still have the old behavior. Additionally,
`OPENAI_BASE_URL` is now being disregarded when it shouldn't be.

This PR normalizes the usage to `getApiKey` and `getBaseUrl`, and
enables the use of `OPENAI_BASE_URL` if present.

---------

Co-authored-by: Gabriel Bianconi <GabrielBianconi@users.noreply.github.com>
2025-04-22 10:51:26 -04:00
Thomas
d78f77edb7 fix(agent-loop): update required properties to include workdir and ti… (#530)
Without this I get an issue running codex in a docker container. I receive:

```
{
    "answer": "{\"role\":\"user\",\"content\":[{\"type\":\"input_text\",\"text\":\"\\\"Say hello world\\\"\"}],\"type\":\"message\"}\n{\"id\":\"error-1745325184914\",\"type\":\"message\",\"role\":\"system\",\"content\":[{\"type\":\"input_text\",\"text\":\"⚠️  OpenAI rejected the request (request ID: req_f9027b59ebbce00061e9cd2dbb2d529a). Error details: Status: 400, Code: invalid_function_parameters, Type: invalid_request_error, Message: 400 Invalid schema for function 'shell': In context=(), 'required' is required to be supplied and to be an array including every key in properties. Missing 'workdir'.. Please verify your settings and try again.\"}]}\n"
}
```

This fix makes it work.
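
For context, a minimal illustrative schema (field names other than `workdir` are assumptions): with strict function schemas, `required` must list every key in `properties`.

```ts
const shellToolParameters = {
  type: "object",
  properties: {
    command: { type: "array", items: { type: "string" } },
    workdir: { type: "string" },
    timeout: { type: "number" }, // illustrative field
  },
  // Every key under `properties` must also appear here, per the 400 above.
  required: ["command", "workdir", "timeout"],
  additionalProperties: false,
} as const;
```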
2025-04-22 10:32:36 -04:00
Fouad Matin
2cb8355968 bump(version): 0.1.2504220136 (#518)
## `0.1.2504220136`

### 🚀 Features

- Add support for ZDR orgs (#481)
- Include fractional portion of chunk that exceeds stdout/stderr limit
(#497)
2025-04-22 01:45:30 -07:00
Fouad Matin
9f5ccbb618 feat: add support for ZDR orgs (#481)
- Add `store: boolean` to `AgentLoop` to enable client-side storage of
response items
- Add `--disable-response-storage` arg + `disableResponseStorage` config
2025-04-22 01:30:16 -07:00
Michael Bolin
3eba86a553 include fractional portion of chunk that exceeds stdout/stderr limit (#497)
I saw cases where the first chunk of output from `ls -R` could be large
enough to exceed `MAX_OUTPUT_BYTES` or `MAX_OUTPUT_LINES`, in which case
the loop would exit early in `createTruncatingCollector()` such that
nothing was appended to the `chunks` array. As a result, the reported
`stdout` of `ls -R` would be empty.
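
A minimal sketch of the idea behind the fix (not the actual `createTruncatingCollector()` code): keep the fraction of an oversized chunk that still fits in the byte budget instead of dropping it entirely.

```ts
// Returns the new byte count after appending as much of `chunk` as fits.
function appendChunk(
  chunks: Array<Buffer>,
  chunk: Buffer,
  bytesUsed: number,
  maxBytes: number,
): number {
  const remaining = maxBytes - bytesUsed;
  if (remaining <= 0) {
    return bytesUsed; // budget already exhausted
  }
  const slice = chunk.length > remaining ? chunk.subarray(0, remaining) : chunk;
  chunks.push(slice);
  return bytesUsed + slice.length;
}
```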

I asked Codex to add logic to handle this edge case and write a unit
test. I used this as my test:

```
./codex-cli/dist/cli.js -q 'what is the output of `ls -R`'
```

now it appears to include a ton of stuff whereas before this change, I
saw:

```
{"type":"function_call_output","call_id":"call_a2QhVt7HRJYKjb3dIc8w1aBB","output":"{\"output\":\"\\n\\n[Output truncated: too many lines or bytes]\",\"metadata\":{\"exit_code\":0,\"duration_seconds\":0.5}}"}
```
2025-04-21 19:06:03 -07:00
Fouad Matin
99ed27ad1b bump(version): 0.1.2504211509 (#493)
## `0.1.2504211509`

### 🚀 Features

- Support multiple providers via Responses-Completion transformation
(#247)
- Add user-defined safe commands configuration and approval logic #380
(#386)
- Allow switching approval modes when prompted to approve an
edit/command (#400)
- Add support for `/diff` command autocomplete in TerminalChatInput
(#431)
- Auto-open model selector if user selects deprecated model (#427)
- Read approvalMode from config file (#298)
- `/diff` command to view git diff (#426)
- Tab completions for file paths (#279)
- Add /command autocomplete (#317)
- Allow multi-line input (#438)

### 🐛 Bug Fixes

- `full-auto` support in quiet mode (#374)
- Enable shell option for child process execution (#391)
- Configure husky and lint-staged for pnpm monorepo (#384)
- Command pipe execution by improving shell detection (#437)
- Name of the file not matching the name of the component (#354)
- Allow proper exit from new Switch approval mode dialog (#453)
- Ensure /clear resets context and exclude system messages from
approximateTokenUsed count (#443)
- `/clear` now clears terminal screen and resets context left indicator
(#425)
- Correct fish completion function name in CLI script (#485)
- Auto-open model-selector when model is not found (#448)
- Remove unnecessary isLoggingEnabled() checks (#420)
- Improve test reliability for `raw-exec` (#434)
- Unintended tear down of agent loop (#483)
- Remove extraneous type casts (#462)
2025-04-21 15:13:33 -07:00
Michael Bolin
d36d295a1a revert #386 due to unsafe shell command parsing (#478)
Reverts https://github.com/openai/codex/pull/386 because:

* The parsing logic for shell commands was unsafe (`split(/\s+/)`
instead of something like `shell-quote`)
* We have a different plan for supporting auto-approved commands.
2025-04-21 12:52:11 -04:00
nerdielol
797eba4930 fix: /clear now clears terminal screen and resets context left indicator (#425)
## What does this PR do?
* Implements the full `/clear` command in **codex‑cli**:
  * Resets chat history **and** wipes the terminal screen.
  * Shows a single system message: `Context cleared`.
* Adds comprehensive unit tests for the new behaviour.

## Why is it needed?
* Fixes user‑reported bugs:  
  * **#395**  
  * **#405**

## How is it implemented?
* **Code** – Adds `process.stdout.write('\x1b[3J\x1b[H\x1b[2J')` in
`terminal.tsx`. Removed the reference to `prev` in
`setItems((prev) => [...prev, ...])` in `terminal-chat-new-input.tsx` &
`terminal-chat-input.tsx`.

## CI / QA
All commands pass locally:
```bash
pnpm test      # green
pnpm run lint  # green
pnpm run typecheck  # zero TS errors
```

## Results



https://github.com/user-attachments/assets/11dcf05c-e054-495a-8ecb-ac6ef21a9da4

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-21 12:39:46 -04:00
Thibault Sottiaux
3c4f1fea9b chore: consolidate model utils and drive-by cleanups (#476)
Signed-off-by: Thibault Sottiaux <tibo@openai.com>
2025-04-21 12:33:57 -04:00
Thibault Sottiaux
dc276999a9 chore: improve storage/ implementation; use log(...) consistently (#473)
This PR tidies up primitives under storage/.

**Noop changes:**

* Promote logger implementation to top-level utility outside of agent/
* Use logger within storage primitives
* Cleanup doc strings and comments

**Functional changes:**

* Increase command history size to 10_000
* Remove unnecessary debounce implementation and ensure a session ID is
created only once per agent loop

---------

Signed-off-by: Thibault Sottiaux <tibo@openai.com>
2025-04-21 09:51:34 -04:00
Benny Yen
3e71c87708 refactor(updates): fetch version from registry instead of npm CLI to support multiple managers (#446)
## Background  
Addressing feedback from
https://github.com/openai/codex/pull/333#discussion_r2050893224, this PR
adds support for Bun alongside npm and pnpm while keeping the code simple.

## Summary  
The update‑check flow is refactored to use a direct registry lookup
(`fast-npm-meta` + `semver`) instead of shelling out to `npm outdated`,
and adds a lightweight installer‑detection mechanism that:

1. Checks if the invoked script lives under a known global‑bin directory
(npm, pnpm, or bun)
2. If not, falls back to local detection via `getUserAgent()` (the
`package‑manager‑detector` library)

## What’s Changed  
- **Registry‑based version check**
  - Replace `execFile("npm", ["outdated"])` with `getLatestVersion()` and `semver.gt()` (see the sketch after this list)
- **Multi‑manager support**
  - New `renderUpdateCommand` handles update commands for `npm`, `pnpm`, and `bun`.
  - Detect global installer first via `detectInstallerByPath()`
  - Fallback to local detection via `getUserAgent()` (the `package‑manager‑detector` library)
- **Module cleanup**
  - Extract `detectInstallerByPath` into `utils/package-manager-detector.ts`
  - Remove legacy `checkOutdated`, `getNPMCommandPath`, and child‑process JSON parsing
- **Flow improvements in `checkForUpdates`**
  1. Short‑circuit by `UPDATE_CHECK_FREQUENCY`
  2. Fetch & compare versions
  3. Persist new timestamp immediately
  4. Render & display the styled box only when an update exists
- **Maintain simplicity**
  - All multi‑manager logic lives in one small helper and a concise lookup rather than a complex adapter hierarchy
  - Core `checkForUpdates` remains a single, easy‑to‑follow async function
- **Dependencies added**
  - `fast-npm-meta`, `semver`, `package-manager-detector`, `@types/semver`
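
A minimal sketch of the registry-based check (assuming `fast-npm-meta`'s `getLatestVersion` resolves to an object with a `version` field):

```ts
import { getLatestVersion } from "fast-npm-meta";
import semver from "semver";

// True when the registry has a strictly newer version than the one running.
async function isUpdateAvailable(currentVersion: string): Promise<boolean> {
  const meta = await getLatestVersion("@openai/codex");
  return meta?.version != null && semver.gt(meta.version, currentVersion);
}
```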

## Considerations
If we decide to drop the interactive update‑message (`npm install -g
@openai/codex`) rendering altogether, we could remove most of the
installer‑detection code and dependencies, which would simplify the
codebase further but result in a less friendly UX.

## Preview

* npm

![refactor-update-check-flow-npm](https://github.com/user-attachments/assets/57320114-3fb6-4985-8780-3388a1d1ec85)

* bun

![refactor-update-check-flow-bun](https://github.com/user-attachments/assets/d93bf0ae-a687-412a-ab92-581b4f967307)

## Simple Flow Chart

```mermaid
flowchart TD
  A(Start) --> B[Read state]
  B --> C{Recent check?}
  C -- Yes --> Z[End]
  C -- No --> D[Fetch latest version]
  D --> E[Save check time]
  E --> F{Version data OK?}
  F -- No --> Z
  F -- Yes --> G{Update available?}
  G -- No --> Z
  G -- Yes --> H{Global install?}
  H -- Yes --> I[Select global manager]
  H -- No --> K{Local install?}
  K -- No --> Z
  K -- Yes --> L[Select local manager]
  I & L --> M[Render update message]
  M --> N[Format with boxen]
  N --> O[Print update]
  O --> Z
```
2025-04-21 00:00:20 -07:00
Aiden Cline
ee7ce5b601 feat: tab completions for file paths (#279)
Made a PR as was requested in #113.
2025-04-20 22:34:27 -07:00
Brayden Moon
8dd1125681 fix: command pipe execution by improving shell detection (#437)
## Description
This PR fixes Issue #421 where commands with pipes (e.g., `grep -R ...
-n | head -n 20`) were failing to execute properly after PR #391 was
merged.

## Changes
- Modified the `requiresShell` function to only enable shell mode when
the command is a single string containing shell operators
- Added logic to handle the case where shell operators are passed as
separate arguments
- Added comprehensive tests to verify the fix

## Root Cause
The issue was that the `requiresShell` function was detecting shell
operators like `|` even when they were passed as separate arguments,
which caused the command to be executed with `shell: true`
unnecessarily. This was causing syntax errors when running commands with
pipes.
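
A minimal sketch of the refined check (operator list and details are illustrative, not the exact implementation):

```ts
const SHELL_OPERATORS = ["|", "&&", "||", ";", ">", ">>", "<"];

// Only fall back to `shell: true` when the whole command is a single string
// containing shell operators; operators passed as separate argv entries are
// treated as literal arguments and never trigger shell mode.
function requiresShell(cmd: ReadonlyArray<string>): boolean {
  if (cmd.length === 1) {
    return SHELL_OPERATORS.some((op) => cmd[0].includes(op));
  }
  return false;
}
```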

## Testing
- Added unit tests to verify the fix
- Manually tested with real commands using pipes
- Ensured all existing tests pass

Fixes #421
2025-04-20 21:11:19 -07:00
Daniel Nakov
eafbc75612 feat: support multiple providers via Responses-Completion transformation (#247)
https://github.com/user-attachments/assets/9ecb51be-fa65-4e99-8512-abb898dda569

Implemented it as a transformation between Responses API and Completion
API so that it supports existing providers that implement the Completion
API and minimizes the changes needed to the codex repo.
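
A minimal sketch of the transformation idea with simplified types (the real adapter also handles tool calls, streaming, and more):

```ts
type ResponsesItem = {
  role: "system" | "user" | "assistant";
  content: Array<{ type: string; text?: string }>;
};
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Flatten Responses-style input items into Chat Completions messages so
// providers that only implement the Completions API can be used.
function toChatMessages(items: ReadonlyArray<ResponsesItem>): Array<ChatMessage> {
  return items.map((item) => ({
    role: item.role,
    content: item.content.map((part) => part.text ?? "").join(""),
  }));
}
```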

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Co-authored-by: Fouad Matin <169186268+fouad-openai@users.noreply.github.com>
Co-authored-by: Fouad Matin <fouad@openai.com>
2025-04-20 20:59:34 -07:00
Michael Bolin
b554b522f7 fix: remove unnecessary isLoggingEnabled() checks (#420)
It appears that use of `isLoggingEnabled()` was erroneously copypasta'd
in many places. This PR updates its docstring to clarify that it should
only be used to avoid constructing a potentially expensive log message.
With this change, the only function that merits/uses this check is
`execCommand()`.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/420).
* #423
* __->__ #420
* #419
2025-04-20 09:58:06 -07:00
Michael Bolin
e372e4667b Make it so CONFIG_DIR is not in the list of writable roots by default (#419)
To play it safe, let's keep `CONFIG_DIR` out of the default list of
writable roots.

This also fixes an issue where `execWithSeatbelt()` was modifying
`writableRoots` instead of creating a new array.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/419).
* #423
* #420
* __->__ #419
2025-04-20 09:37:07 -07:00
Abdelrhman Kamal Mahmoud Ali Slim
425430debb fix(cli): ensure /clear resets context and exclude system messages from approximateTokenUsed count (#443)
https://github.com/user-attachments/assets/58b8f261-a359-4cd8-8c84-90d8d91a5592
2025-04-20 08:52:14 -07:00
Fouad Matin
b3b195351e feat: /diff command to view git diff (#426)
Adds `/diff` command to view git diff
2025-04-19 16:23:27 -07:00
Thomas
081786eaa6 feat: add /command autocomplete (#317)
Add interactive slash‑command autocomplete & navigation in chat input

**Description**

This PR enhances the chat input component by adding first‑class support for slash commands (`/help`, `/clear`, `/compact`, etc.) with:

* **Live filtering:** As soon as the user types a leading `/`, a list of matching commands is shown below the prompt.
* **Arrow‑key navigation:** Up/Down arrows cycle through suggestions.
* **Enter to autocomplete:** Pressing Enter on a partial command will fill it (without submitting) so you can add arguments or simply press Enter again to execute.
* **Type‑safe registry:** A new `slash-commands.ts` file declares all supported commands in one place, along with TypeScript types to prevent drift.
* **Validation:** Only registered commands will ever autocomplete or be suggested; unknown single‑word slash inputs still show an “Invalid command” system message.
* **Automated tests:**
  * Unit tests for the command registry and prefix filtering
  * Existing tests continue passing with no regressions

**Motivation**

Slash commands provide a quick, discoverable way to control the agent (clearing history, compacting context, opening overlays, etc.). Before, users had to memorize the exact command or rely on the generic `/help` list—autocomplete makes them far more accessible and reduces typos.

**Changes**

* `src/utils/slash-commands.ts` – defines `SlashCommand` and exports a flat list of supported commands + descriptions
* `terminal-chat-input.tsx`
  * Import and type the command registry
  * Render filtered suggestions under the prompt when input starts with `/`
  * Hook into `useInput` to handle Up/Down and Enter for selection & fill
  * Flag to swallow the first Enter (autocomplete) and only submit on the next
* Updated tests in `tests/slash-commands.test.ts` to cover registry contents and filtering logic
* Removed the old JS version and fixed a stray `@ts-expect-error`

**How to test locally**

1. Type `/` in the prompt—you should see matching commands.
2. Use arrows to move the highlight, press Enter to fill, then Enter again to execute.
3. Run the full test suite (`npm test`) to verify no regressions.

**Notes**

* Future work could include fuzzy matching, paging long lists, or more visual styling.
* This change is purely additive and does not affect non‑slash inputs or existing slash handlers.
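
For illustration, a minimal sketch of what such a registry and prefix filter could look like (command list and names are assumptions, not the actual `slash-commands.ts`):

```ts
interface SlashCommand {
  command: string; // e.g. "/clear"
  description: string;
}

const SLASH_COMMANDS: ReadonlyArray<SlashCommand> = [
  { command: "/help", description: "Show available commands" },
  { command: "/clear", description: "Clear conversation history" },
  { command: "/compact", description: "Summarize and compact context" },
];

// Live filtering: return only the registered commands that start with the input.
function filterSlashCommands(input: string): ReadonlyArray<SlashCommand> {
  if (!input.startsWith("/")) {
    return [];
  }
  return SLASH_COMMANDS.filter((c) => c.command.startsWith(input));
}
```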

---------

Co-authored-by: Fouad Matin <169186268+fouad-openai@users.noreply.github.com>
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-19 07:25:46 -07:00
John Gardner
965420cfc5 feat: read approvalMode from config file (#298)
This PR implements support for reading the approvalMode setting from the
user's config file (`~/.codex/config.json` or `~/.codex/config.yaml`),
allowing users to set a persistent default approval mode without needing
to specify command-line flags for each session.

Changes:
- Added approvalMode to the AppConfig type in config.ts
- Updated loadConfig() to read the approval mode from the config file
- Modified saveConfig() to persist the approval mode setting
- Updated CLI logic to respect the config-defined approval mode (while
maintaining CLI flag priority)
- Added comprehensive tests for approval mode config functionality
- Updated README to document the new config option in both YAML and JSON
formats
- additions to `.gitignore` for other CLI tools

Motivation:
As a user who regularly works with CLI-tools, I found it odd to have to
alias this with the command flags I wanted when `approvalMode` simply
wasn't being parsed even though it was an optional prop in `config.ts`.
This change allows me (and other users) to set the preference once in
the config file, streamlining daily usage while maintaining the ability
to override via command-line flags when needed.
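
A minimal sketch of the precedence rule (mode names are illustrative): an explicit CLI flag wins over the value persisted in the config file, which wins over the built-in default.

```ts
type ApprovalMode = "suggest" | "auto-edit" | "full-auto";

function resolveApprovalMode(
  cliFlag: ApprovalMode | undefined,
  config: { approvalMode?: ApprovalMode },
): ApprovalMode {
  return cliFlag ?? config.approvalMode ?? "suggest";
}
```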

Testing:
I've added a new test case loads and saves approvalMode correctly that
verifies:
- Reading the approvalMode from the config file works correctly
- Saving the approvalMode to the config file works as expected
- The value persists through load/save operations

All tests related to the implementation are passing.
2025-04-19 07:25:25 -07:00
Suyash-K
b37b257e63 gracefully handle SSE parse errors and suppress raw parser code (#367)
Closes #187
Closes #358

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-19 07:24:29 -07:00
Robbie Hammond
8e2760e83d Add fallback text for missing images (#397)
# What?
* When a prompt references an image path that doesn’t exist, replace it
with
  ```[missing image: <path>]``` instead of throwing an ENOENT.
* Adds a few unit tests for input-utils as there weren't any beforehand.

# Why?
Right now if you enter an invalid image path (e.g. it doesn't exist),
codex immediately crashes with a ENOENT error like so:
```
Error: ENOENT: no such file or directory, open 'test.png'
   ...
 {
  errno: -2,
  code: 'ENOENT',
  syscall: 'open',
  path: 'test.png'
}
```
This aborts the entire session. A soft fallback lets the rest of the
input continue.

# How?
Wraps the image encoding + inputItem content pushing in a try-catch. 

This is a minimal patch to avoid completely crashing — future work could
surface a warning to the user when this happens, or something to that
effect.
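
A minimal sketch of the soft fallback (not the exact input-utils code):

```ts
import fs from "fs/promises";

// If the referenced image can't be read, substitute a text marker instead of
// letting the ENOENT abort the whole session.
async function toImageInputItem(
  imagePath: string,
): Promise<{ type: string; text?: string; image_url?: string }> {
  try {
    const data = await fs.readFile(imagePath);
    return { type: "input_image", image_url: `data:image/png;base64,${data.toString("base64")}` };
  } catch {
    return { type: "input_text", text: `[missing image: ${imagePath}]` };
  }
}
```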

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-18 22:55:24 -07:00
Shuto Otaki
b46b596e5f fix: enable shell option for child process execution (#391)
## Changes

- Added a `requiresShell` function to detect when a command contains
shell operators
- In the `exec` function, enabled the `shell: true` option if shell
operators are present

## Why This Is Necessary

See the discussion in this issue comment:  
https://github.com/openai/codex/issues/320#issuecomment-2816528014

## Code Explanation

The `requiresShell` function parses the command arguments and checks for
any shell‑specific operators. If it finds shell operators, it adds the
`shell: true` option when running the command so that it’s executed
through a shell interpreter.
2025-04-18 22:42:19 -07:00
autotaker
ca7ab76569 feat: add user-defined safe commands configuration and approval logic #380 (#386)
This pull request adds a feature that allows users to configure
auto-approved commands via a `safeCommands` array in the configuration
file.

## Related Issue
#380 

## Changes
- Added loading and validation of the `safeCommands` array in
`src/utils/config.ts`
- Implemented auto-approval logic for commands matching `safeCommands`
prefixes in `src/approvals.ts`
- Added test cases in `src/tests/approvals.test.ts` to verify
`safeCommands` behavior
- Updated documentation with examples and explanations of the
configuration
2025-04-18 22:35:32 -07:00
salama-openai
1a8610cd9e feat: add flex mode option for cost savings (#372)
Adding in an option to turn on flex processing mode to reduce costs when
running the agent.

Bumped the openai typescript version to add the new feature.
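
A minimal sketch, assuming the option maps to the OpenAI API's `service_tier` parameter (the model name shown is illustrative):

```ts
import OpenAI from "openai";

const openai = new OpenAI();

// Opt a single request into flex processing for lower cost at higher latency.
const response = await openai.responses.create({
  model: "o3",
  input: "Summarize the repository layout",
  service_tier: "flex",
});
```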

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-18 22:15:01 -07:00
Fouad Matin
6f2271e8cd bump(version): 0.1.2504181820 (#385) 2025-04-18 18:25:33 -07:00
Fouad Matin
aa32e22d4b fix: /bug report command, thinking indicator (#381)
- Fix `/bug` report command
- Fix thinking indicator
2025-04-18 18:13:34 -07:00
Benny Yen
d61da89ed3 feat: notify when a newer version is available (#333)
**Summary**  
This change introduces a new startup check that notifies users if a
newer `@openai/codex` version is available. To avoid spamming, it writes
a small state file recording the last check time and will only re‑check
once every 24 hours.

**What’s Changed**  
- **New file** `src/utils/check-updates.ts`  
  - Runs `npm outdated --global @openai/codex`  
  - Reads/writes `codex-state.json` under `CONFIG_DIR`  
  - Limits checks to once per day (`UPDATE_CHECK_FREQUENCY = 24h`)  
- Uses `boxen` for a styled alert and `which` to locate the npm binary
- **Hooked into** `src/cli.tsx` entrypoint:
  ```ts
  import { checkForUpdates } from "./utils/check-updates";
  // …
  // after loading config
  await checkForUpdates().catch();
  ```
- **Dependencies**  
  - Added `boxen@^8.0.1`, `which@^5.0.0`, `@types/which@^3.0.4`  
- **Tests**  
  - Vitest suite under `tests/check-updates.test.ts`  
  - Snapshot in `__snapshots__/check-updates.test.ts.snap`  

**Motivation**  
Addresses issue #244. Users running a stale global install will now see
a friendly reminder—at most once per day—to upgrade and enjoy the latest
features.

**Test Plan**  
- `getNPMCommandPath()` resolves npm correctly  
- `checkOutdated()` parses `npm outdated` JSON  
- State file prevents repeat alerts within 24h  
- Boxen snapshot matches expected output  
- No console output when state indicates a recent check  

**Related Issue**  
try resolves #244


**Preview**
Shows a pnpm‑style alert when the installed version is outdated

![outdated‑alert](https://github.com/user-attachments/assets/294dad45-d858-45d1-bf34-55e672ab883a)

Let me know if you’d tweak any of the messaging, throttle frequency,
placement in the startup flow, or anything else.

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-18 17:00:45 -07:00
Alpha Diop
e2fe2572ba chore: migrate to pnpm for improved monorepo management (#287)
# Migrate to pnpm for improved monorepo management

## Summary
This PR migrates the Codex repository from npm to pnpm, providing faster
dependency installation, better disk space usage, and improved monorepo
management.

## Changes
- Added `pnpm-workspace.yaml` to define workspace packages
- Added `.npmrc` with optimal pnpm configuration
- Updated root package.json with workspace scripts
- Moved resolutions and overrides to the root package.json
- Updated scripts to use pnpm instead of npm
- Added documentation for the migration
- Updated GitHub Actions workflow for pnpm

## Benefits
- **Faster installations**: pnpm is significantly faster than npm
- **Disk space savings**: pnpm's content-addressable store avoids
duplication
- **Strict dependency management**: prevents phantom dependencies
- **Simplified monorepo management**: better workspace coordination
- **Preparation for Turborepo**: as discussed, this is the first step
before adding Turborepo

## Testing
- Verified that `pnpm install` works correctly
- Verified that `pnpm run build` completes successfully
- Ensured all existing functionality is preserved

## Documentation
Added a detailed migration guide in `PNPM_MIGRATION.md` explaining:
- Why we're migrating to pnpm
- How to use pnpm with this repository
- Common commands and workspace-specific commands
- Monorepo structure and configuration

## Next Steps
As discussed, once this change is stable, we can consider adding
Turborepo as a follow-up enhancement.
2025-04-18 16:25:15 -07:00
Jon Church
9a046dfcaa Revert "fix: canonicalize the writeable paths used in seatbelt policy… (#370)
This reverts commit 3356ac0aef.

related #330
2025-04-18 16:11:34 -07:00
Fouad Matin
8e2e77fafb feat: add /bug report command (#312)
Add `/bug` command for chat session
2025-04-18 14:09:35 -07:00
Fouad Matin
6f3278eae8 bump(version): 0.1.2504172351 (#310)
Release `@openai/codex@0.1.2504172351`
2025-04-17 23:53:41 -07:00
Fouad Matin
a7edfb0444 add: changelog (#308)
- Release `@openai/codex@0.1.2504172304`
- Add changelog
2025-04-17 23:34:05 -07:00
Jon Church
3356ac0aef fix: canonicalize the writeable paths used in seatbelt policy (#275)
closes #207

I'd be lying if I said I was familiar with these particulars more than a
couple hours ago, but after investigating and testing locally, this does
fix the go issue, I prefer it over #272 which is a lot of code and a one
off fix
---- 

cc @bolinfest do you mind taking a look here?

1. Seatbelt compares the paths it gets from the kernel to its policies.
1. Go is attempting to write to `os.tmpdir()`, which we have allowlisted.
1. The kernel rewrites /var/… to /private/var/… before the sandbox check.
1. The policy still said /var/…, so writes were denied.

Fix: canonicalise every writable root we feed into the policy
(realpathSync(...)).
We do not have to touch runtime file paths—the kernel already
canonicalises those.
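
A minimal sketch of the canonicalization step (not the exact code):

```ts
import fs from "fs";

// Canonicalize each writable root (e.g. /var/... -> /private/var/... on macOS)
// before it is embedded in the seatbelt policy, without mutating the input array.
function canonicalizeRoots(writableRoots: ReadonlyArray<string>): Array<string> {
  return writableRoots.map((root) => {
    try {
      return fs.realpathSync(root);
    } catch {
      return root; // keep the original path if it cannot be resolved
    }
  });
}
```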



### before
see that the command exited 1, and that the command was reported to be
prohibited, despite using the allowlisted tmpdir


https://github.com/user-attachments/assets/23911101-0ec0-4a59-a0a1-423be04063f0


### after
command exits 0


https://github.com/user-attachments/assets/6ab2bcd6-68bd-4f89-82bb-2c8612e39ac3
2025-04-17 23:01:15 -07:00
Thomas
9a948836bf feat: add /compact (#289)
Added the ability to compact. Not sure if I should switch the model over
to gpt-4.1 for longer context or if keeping the current model is fine.
Also I'm not sure if storing the compacted summary as a system message is
best practice; would love feedback 😄

Mentioned in this issue: https://github.com/openai/codex/issues/230
2025-04-17 22:48:30 -07:00
Fouad Matin
35d47a5ab4 bump(version): 0.1.2504161551 (#254)
Bump version

---------

Signed-off-by: Fouad Matin <fouad@openai.com>
Co-authored-by: Jon Church <me@jonchurch.com>
2025-04-17 20:54:40 -07:00