Commit Graph

190 Commits

Michael Bolin
6cf4b96f9d fix: check flags to ripgrep when deciding whether the invocation is "trusted" (#1644)
With this change, if any of `--pre`, `--hostname-bin`, `--search-zip`, or `-z` is used in a proposed invocation of `rg`, the invocation is not auto-approved.
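
A minimal TypeScript sketch of the kind of check described (names are illustrative, not the actual implementation):

```typescript
// Flags that make an `rg` invocation untrusted: `--pre` and `--hostname-bin`
// can execute arbitrary programs, and `--search-zip`/`-z` reads archives.
const UNTRUSTED_RG_FLAGS = ["--pre", "--hostname-bin", "--search-zip", "-z"];

function isTrustedRipgrepInvocation(argv: ReadonlyArray<string>): boolean {
  return (
    argv[0] === "rg" &&
    !argv.some((arg) =>
      UNTRUSTED_RG_FLAGS.some((flag) => arg === flag || arg.startsWith(`${flag}=`)),
    )
  );
}
```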
2025-07-21 22:38:50 -07:00
Eric Wright
ed5e848f3e add: responses api support for azure (#1321)
- Use Responses API for Azure provider endpoints
- Added a unit test to catch regression on the change from
`/chat/completions` to `/responses`
- Updated the default AOAI API version from `2025-03-01-preview` to
`2025-04-01-preview` to avoid user-facing 400 errors caused by missing
summary support in the March API version.
- Changes have been tested locally on AOAI endpoints
2025-06-22 18:01:13 -07:00
Govind Kamtamneni
5aafe190e2 feat(ts): provider‑specific API‑key discovery and clearer Azure guidance (#1324)
## Summary

This PR refactors the Codex CLI authentication flow so that
**non-OpenAI** providers (for example **azure**, or any future addition)
can supply their API key through a dedicated environment variable
without triggering the OpenAI login flow.

Key behaviours introduced:

* When `provider !== "openai"` the CLI consults `src/utils/providers.ts`
to locate the correct environment variable (`AZURE_OPENAI_API_KEY`,
`GEMINI_API_KEY`, and so on) before considering any interactive login.
* Credit redemption (`--free`) and PKCE login now run **only** when the
provider is OpenAI, eliminating unwanted browser prompts for Azure and
others.
* User-facing error messages are revamped to guide Azure users to
**[https://ai.azure.com/](https://ai.azure.com)** and show the exact
variable name they must set.
* All code paths still export `OPENAI_API_KEY` so legacy scripts
continue to operate unchanged.
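
A minimal sketch of the lookup order described above (helper name and provider-table shape are assumptions, not the exact codex-cli source):

```typescript
import { providers } from "./utils/providers";

// Resolve the provider's dedicated env var first; mirror the result into
// OPENAI_API_KEY so legacy scripts keep working.
export function discoverApiKey(provider: string): string | undefined {
  const envKey =
    provider === "openai" ? "OPENAI_API_KEY" : providers[provider]?.envKey;
  const apiKey = envKey ? process.env[envKey] : undefined;
  if (apiKey) {
    process.env["OPENAI_API_KEY"] = apiKey;
  }
  return apiKey;
}
```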

---

## Example `config.json`

```jsonc
{
  "model": "codex-mini",
  "provider": "azure",
  "providers": {
    "azure": {
      "name": "AzureOpenAI",
      "baseURL": "https://ai-<project-name>.openai.azure.com/openai",
      "envKey": "AZURE_OPENAI_API_KEY"
    }
  },
  "history": {
    "maxSize": 1000,
    "saveHistory": true,
    "sensitivePatterns": []
  }
}
```

With this file in `~/.codex/config.json`, a single command line is
enough:

```bash
export AZURE_OPENAI_API_KEY="<your-key>"
codex "Hello from Azure"
```

No browser window opens, and the CLI works in entirely non-interactive
mode.

---

## Rationale

The new flow enables Codex to run **asynchronously** in sandboxed
environments such as GitHub Actions pipelines. By passing `--provider
azure` (or setting it in `config.json`) and exporting the correct key,
CI/CD jobs can invoke Codex without any ChatGPT-style login or PKCE
round-trip. This unlocks fully automated testing and deployment
scenarios.

---

## What’s changed

| File | Type | Description |
| --- | --- | --- |
| `codex-cli/src/cli.tsx` | **feat / refactor** | +43 / -20 lines. Imports `providers`, adds early provider-specific key lookup, gates `--free` redemption, rewrites help text. |
| `src/utils/providers.ts` | **chore** | Now consumed by CLI for env-var discovery. |

---

## How to test

```bash
# Azure example
export AZURE_OPENAI_API_KEY="<your-key>"
codex --provider azure "Automated run in CI"

# OpenAI example (unchanged behaviour)
codex --provider openai --login "Standard OpenAI flow"
```

Expected outcomes:

* Azure and other provider paths are non-interactive when the provider
flag is passed.
* The CLI always sets `OPENAI_API_KEY` for backward compatibility.

---

## Checklist

* [x] Logic behind provider-specific env-var lookup added.
* [x] Redundant OpenAI login steps removed for other providers.
* [x] Unit tests cover new branches.
* [x] README and sample config updated.
* [x] CI passes on all supported Node versions.

---

**Related work**

* #92
* #769 
* #1321



I have read the CLA Document and I hereby sign the CLA.
2025-06-22 17:56:36 -07:00
Michael Bolin
515b6331bd feat: add support for login with ChatGPT (#1212)
This does not implement the full Login with ChatGPT experience, but it
should unblock people.

**What works**

* The `codex` multitool now has a `login` subcommand, so you can run
`codex login`, which should write `CODEX_HOME/auth.json` if you complete
the flow successfully. The TUI will now read the `OPENAI_API_KEY` from
`auth.json`.
* The TUI should refresh the token if it has expired and the necessary
information is in `auth.json`.
* There is a `LoginScreen` in the TUI that tells you to run `codex
login` if both (1) your model provider expects to use `OPENAI_API_KEY`
as its env var, and (2) `OPENAI_API_KEY` is not set.

**What does not work**

* The `LoginScreen` does not support the login flow from within the TUI.
Instead, it tells you to quit, run `codex login`, and then run `codex`
again.
* `codex exec` does not read from `auth.json` yet, nor does it direct the
user to go through the login flow if `OPENAI_API_KEY` is not found.
* The `maybeRedeemCredits()` function from `get-api-key.tsx` has not
been ported from TypeScript to `login_with_chatgpt.py` yet:


a67a67f325/codex-cli/src/utils/get-api-key.tsx (L84-L89)

**Implementation**

Currently, the OAuth flow requires running a local webserver on
`127.0.0.1:1455`. It seemed wasteful to incur the additional binary cost
of a webserver dependency in the Rust CLI just to support login, so
instead we implement this logic in Python, as Python has a `http.server`
module as part of its standard library. Specifically, we bundle the
contents of a single Python file as a string in the Rust CLI and then
use it to spawn a subprocess as `python3 -c
{{SOURCE_FOR_PYTHON_SERVER}}`.

As such, the most significant files in this PR are:

```
codex-rs/login/src/login_with_chatgpt.py
codex-rs/login/src/lib.rs
```

Now that the CLI may load `OPENAI_API_KEY` from the environment _or_
`CODEX_HOME/auth.json`, we need a new abstraction for reading/writing
this variable, so we introduce:

```
codex-rs/core/src/openai_api_key.rs
```

Note that `std::env::set_var()` is [rightfully] `unsafe` in Rust 2024,
so we use a `LazyLock<RwLock<Option<String>>>` to store `OPENAI_API_KEY`
so it is read in a thread-safe manner.

Ultimately, it should be possible to go through the entire login flow
from the TUI. This PR introduces a placeholder `LoginScreen` UI for that
right now, though the new `codex login` subcommand introduced in this PR
should be a viable workaround until the UI is ready.

**Testing**

Because the login flow is currently implemented in a standalone Python
file, you can test it without building any Rust code as follows:

```
rm -rf /tmp/codex_home && mkdir /tmp/codex_home
CODEX_HOME=/tmp/codex_home python3 codex-rs/login/src/login_with_chatgpt.py
```

For reference:

* the original TypeScript implementation was introduced in
https://github.com/openai/codex/pull/963
* support for redeeming credits was later added in
https://github.com/openai/codex/pull/974
2025-06-04 08:44:17 -07:00
Fouad Matin
a86270f581 fix: add node version check (#1007) 2025-05-17 21:27:41 -07:00
Fouad Matin
835eb77a7d fix: persist token after refresh (#1006)
After a token refresh/exchange, persist the new refresh and id token
2025-05-17 21:27:02 -07:00
Fouad Matin
9b4c2984d4 add: codex --login + codex --free (#998)
## Summary
- add `--login` and `--free` flags to cli help
- handle `--login` and `--free` logic in cli
- factor out redeem flow into `maybeRedeemCredits`
- call new helper from login callback
2025-05-17 16:13:12 -07:00
Fouad Matin
3e19e8fd59 add: sign in with chatgpt credits (#974) 2025-05-16 17:55:08 -07:00
Fouad Matin
c6e08ad8c1 add: sign in with chatgpt (#963)
Sign in with ChatGPT to get an API key (flow to grant API credits for Plus/Pro coming later today!)
2025-05-16 12:28:54 -07:00
Fouad Matin
cabf83f2ed add: session history viewer (#912)
- A new “/sessions” command is available for browsing previous sessions,
as shown in the updated slash command list

- The CLI now documents and parses a new “--history” flag to browse past
sessions from the command line

- A dedicated `SessionsOverlay` component loads session metadata and
allows toggling between viewing and resuming sessions

- When the sessions overlay is opened during a chat, selecting a session
can either show the saved rollout or resume it
2025-05-16 12:28:22 -07:00
hanson-openai
7edfbae062 fix: diff command for filenames with special characters (#954)
## Summary
- fix quoting issues in `/diff` to correctly handle files with special
characters
- add regression test for `getGitDiff` when filenames contain `$`
- relax timeout in raw-exec-process-group test

Fixes https://github.com/openai/codex/issues/943
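
One way to sidestep this class of quoting bug (a sketch, not necessarily the exact fix here) is to pass filenames as separate argv entries rather than interpolating them into a shell string:

```typescript
import { execFileSync } from "node:child_process";

// Arguments passed as argv entries are never re-interpreted by a shell,
// so characters like `$` in filenames stay literal.
export function getGitDiff(): string {
  return execFileSync("git", ["diff"], { encoding: "utf8" });
}
```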

## Testing
- `pnpm test`
2025-05-16 09:10:44 -07:00
Fouad Matin
070499f534 add: codex-mini-latest (#951)
💽

---------

Co-authored-by: Trevor Creech <tcreech@openai.com>
2025-05-16 08:04:00 -07:00
Yaroslav Halchenko
327cf41f0f Add codespell support (config, workflow to detect/not fix) and make it fix some typos (#903)
More about codespell: https://github.com/codespell-project/codespell .

I personally introduced it to dozens if not hundreds of projects already
and so far only positive feedback.

CI workflow has 'permissions' set only to 'read' so also should be safe.

Let me know if just want to take typo fixes in and get rid of the CI

---------

Signed-off-by: Yaroslav O. Halchenko <debian@onerussian.com>
2025-05-14 09:39:49 -07:00
Fouad Matin
73259351ff fix: reasoning default to medium, show workdir when supplied (#931) 2025-05-14 08:38:41 -07:00
Fouad Matin
77347d268d fix: gpt-4.1 apply_patch handling (#930) 2025-05-14 08:34:09 -07:00
Fouad Matin
678f0dbfec add: dynamic instructions (#927) 2025-05-14 01:27:46 -07:00
Michael Bolin
a786c1d188 feat: auto-approve nl and support piping to sed (#920)
Auto-approved:

```
["nl", "-ba", "README.md"]
["sed", "-n", "1,200p", "filename.txt"]
["bash", "-lc", "sed -n '1,200p' filename.txt"]
["bash", "-lc", "nl -ba README.md | sed -n '1,200p'"]
```

Not auto approved:

```
["sed", "-n", "'1,200p'", "filename.txt"]
["sed", "-n", "1,200p", "file1.txt", "file2.txt"]
```
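
A sketch of the sort of argument validation this implies (hypothetical; the real approval policy lives in the CLI):

```typescript
// Only a bare NUM or NUM,NUM range followed by `p` is considered safe. The
// shell-quoted form "'1,200p'" fails the regex, and a second filename fails
// the arity check, matching the examples above.
const SED_PRINT_RANGE = /^\d+(,\d+)?p$/;

function isAutoApprovedSed(argv: ReadonlyArray<string>): boolean {
  return (
    argv.length === 4 &&
    argv[0] === "sed" &&
    argv[1] === "-n" &&
    SED_PRINT_RANGE.test(argv[2])
  );
}
```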
2025-05-13 13:06:35 -07:00
Michael Bolin
0ac7e8d55b fix: tweak the label for citations for better rendering (#919)
Adds a space so that sequential citations have some more breathing room.

As I had to update the tests for this change, I also introduced a
`toDiffableString()` helper to make the test easier to update as we make
formatting changes to the output.
2025-05-13 12:46:21 -07:00
Michael Bolin
dd354e2134 fix: remember to set lastIndex = 0 on shared RegExp (#918)
I had not observed an issue in the wild because of this yet, but it
feels like it was only a matter of time...
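
For reference, the pitfall with a shared global `RegExp` (hypothetical example, not the actual pattern from this change):

```typescript
// A /g regex is stateful: test() and exec() advance lastIndex, so a shared
// instance can silently skip matches on the next input unless reset.
const CITATION_PATTERN = /【[^】]+】/g;

function containsCitation(text: string): boolean {
  CITATION_PATTERN.lastIndex = 0; // reset leftover state before reuse
  return CITATION_PATTERN.test(text);
}
```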
2025-05-13 12:01:06 -07:00
Michael Bolin
557f608f25 fix: add support for fileOpener in config.json (#911)
This PR introduces the following type:

```typescript
export type FileOpenerScheme = "vscode" | "cursor" | "windsurf";
```

and uses it as the new type for a `fileOpener` option in `config.json`.
If set, this will be used to linkify file annotations in the output
using the URI-based file opener supported in VS Code-based IDEs.
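
A hypothetical sketch of the linkification this enables, using the `<scheme>://file<absolute-path>:<line>` URI format that VS Code-family IDEs accept:

```typescript
export type FileOpenerScheme = "vscode" | "cursor" | "windsurf";

// Turn a file citation into an editor deep link, e.g.
// vscode://file/Users/me/repo/src/app.ts:42
function citationLink(scheme: FileOpenerScheme, absPath: string, line = 1): string {
  return `${scheme}://file${absPath}:${line}`;
}
```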

Updated `codex-cli/tests/markdown.test.tsx` to verify the new behavior.
Note it required mocking `supports-hyperlinks` and temporarily modifying
`chalk.level` to yield the desired output.
2025-05-13 09:45:46 -07:00
Michael Bolin
05bb5d7d46 fix: always load version from package.json at runtime (#909)
Note the high-level motivation behind this change is to avoid the need
to make temporary changes in the source tree in order to cut a release
build since that runs the risk of leaving things in an inconsistent
state in the event of a failure. The existing code:

```
import pkg from "../../package.json" assert { type: "json" };
```

did not work as intended because, as written, ESBuild would bake the
contents of the local `package.json` into the release build at build
time whereas we want it to read the contents at runtime so we can use
the `package.json` in the tree to build the code and later inject a
modified version into the release package with a timestamped build
version.

Changes:

* move `CLI_VERSION` out of `src/utils/session.ts` and into
`src/version.ts` so `../package.json` is a correct relative path both
from `src/version.ts` in the source tree and also in the final
`dist/cli.js` build output
* change `assert` to `with` in `import pkg` as apparently `with` became
standard in Node 22
* mark `"../package.json"` as external in `build.mjs` so the version is
not baked into the `.js` at build time
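
A minimal sketch of the resulting runtime import (file layout assumed from the description above):

```typescript
// src/version.ts -- `with { type: "json" }` is the import-attributes syntax
// that became standard in Node 22; the path stays external so ESBuild does
// not inline package.json at build time.
import pkg from "../package.json" with { type: "json" };

export const CLI_VERSION: string = pkg.version;
```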

After using `pnpm stage-release` to build a release version, if I use
Node 22.0 to run Codex, I see the following printed to stderr at
startup:

```
(node:71308) ExperimentalWarning: Importing JSON modules is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
```

Note it is a warning and does not prevent Codex from running.

In Node 22.12, the warning goes away, but the warning still appears in
Node 22.11. For Node 22, 22.15.0 is the current LTS version, so LTS
users will not see this.

Also, something about moving the definition of `CLI_VERSION` caused a
problem with the mocks in `check-updates.test.ts`. I asked Codex to fix
it, and it came up with the change to the test configs. I don't know
enough about vitest to understand what it did, but the tests seem
healthy again, so I'm going with it.
2025-05-12 21:27:15 -07:00
Avi Rosenberg
ab4cb94227 fix: Normalize paths in resolvePathAgainstWorkdir to prevent path traversal vulnerability (#895)
This PR fixes a potential path traversal vulnerability by ensuring all
paths are properly normalized in the `resolvePathAgainstWorkdir`
function.

## Changes
- Added path normalization for both absolute and relative paths
- Ensures normalized paths are used in all subsequent operations
- Prevents potential path traversal attacks through non-normalized paths

This minimal change addresses the security concern without adding
unnecessary complexity, while maintaining compatibility with existing
code.
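
A minimal sketch of the normalize-before-use idea, assuming a workdir-relative resolver (not the exact function body):

```typescript
import path from "node:path";

// Normalizing collapses segments like "a/../../etc" so later checks and
// file operations see a canonical path instead of a traversal trick.
export function resolvePathAgainstWorkdir(candidate: string, workdir: string): string {
  return path.isAbsolute(candidate)
    ? path.normalize(candidate)
    : path.normalize(path.join(workdir, candidate));
}
```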
2025-05-12 13:44:00 -07:00
Corry Haines
b42ad670f1 fix: flex-mode via config/flag (#813)
* Add flexMode to stored config, and use it during config loading unless
the flag is explicitly passed.
* If the config asks for flexMode and the model doesn't support it,
silently disable flexMode.

Resolves #803
2025-05-10 16:18:20 -07:00
Pranav
646e7e9c11 feat: added arceeai as a provider (#818)
- Added ArceeAI as a provider  - https://conductor.arcee.ai/v1
- Compatible with ArceeAI SLMs (Virtuoso, Maestro)
- Works with ArceeAI's Conductor auto‑router models (auto, auto‑tool),
once #817 is merged
2025-05-10 16:16:28 -07:00
Pranav
19262f632f fix: guard against missing choices (#817)
- Fixes guard by using optional chaining to safely check
`chunk.choices?.[0]` before accessing.
- Currently, accessing `chunk.choices[0]` without checking could throw if
`choices` was missing from the chunk.
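
A small sketch of the guard (chunk shape abbreviated):

```typescript
type StreamChunk = { choices?: Array<{ delta?: { content?: string } }> };

// Optional chaining returns undefined instead of throwing when a provider
// omits `choices` from a streamed chunk.
function contentOf(chunk: StreamChunk): string | undefined {
  return chunk.choices?.[0]?.delta?.content;
}
```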
2025-05-10 16:16:19 -07:00
Corry Haines
fcc76cf3e7 Add reasoning effort option to CLI help text (#815)
Reasoning effort was already available, but not expressed into the help
text, so it was non-discoverable.

Other issues discovered, but will fix in separate PR since they are
larger:
* #816 reasoningEffort isn't displayed in the terminal-header, making it
rather hard to see the state of configuration
* I don't think the config file setting works, as the CLI option always
"wins" and overwrites it
2025-05-10 15:58:59 -07:00
Fouad Matin
3104d81b7b fix: migrate to AGENTS.md (#764)
Migrate from `codex.md` to `AGENTS.md`
2025-05-10 15:57:49 -07:00
Tomas Cupr
e307d007aa fix: retry on OpenAI server_error even without status code (#814)
Fix: retry on server_error responses that lack an HTTP status code

### What happened

1. An OpenAI endpoint returned a **5xx** (transient server-side
failure).
2. The SDK surfaced it as an `APIError` with

   ```js
   { "type": "server_error", "message": "...", "status": undefined }
   ```

   (The SDK does not always populate `status` for these cases.)
3. Our retry logic in `src/utils/agent/agent-loop.ts` determined

   ```typescript
   isServerError = typeof status === "number" && status >= 500;
   ```

   Because `status` was *undefined*, the error was **not** recognised as
   retriable, the exception bubbled out, and the CLI crashed with a stack
   trace similar to:

   ```
   Error: An error occurred while processing the request.
       at .../cli.js:474:1514
   ```

### Root cause

The transient-error detector ignored the semantic flag `type ===
"server_error"` that the SDK provides when the numeric status is missing.

#### Fix (1 loc + comment)

Extend the check:

```typescript
const status = errCtx?.status ?? errCtx?.httpStatus ?? errCtx?.statusCode;

const isServerError =
  (typeof status === "number" && status >= 500) || // classic 5xx
  errCtx?.type === "server_error";                 // <-- NEW
```

Now the agent:

* Retries up to **5** times (existing logic) when the backend reports a
transient failure, even if `status` is absent.
* If all retries fail, surfaces the existing friendly system message
instead of an uncaught exception.

### Tests & validation

```bash
pnpm test        # all suites green (17 agent-level tests now include this path)
pnpm run lint    # 0 errors / warnings
pnpm run typecheck
```

A new unit-test file isn't required; the behaviour is already covered by
`tests/agent-server-retry.test.ts`, which stubs `type: "server_error"` and
now passes with the updated logic.

### Impact

* No API-surface changes.
* Prevents CLI crashes on intermittent OpenAI outages.
* Adds robust handling for other providers that may follow the same
error-shape.
2025-05-10 15:43:03 -07:00
Govind Kamtamneni
7795272282 Adds Azure OpenAI support (#769)
## Summary

This PR introduces support for Azure OpenAI as a provider within the
Codex CLI. Users can now configure the tool to leverage their Azure
OpenAI deployments by specifying `"azure"` as the provider in
`config.json` and setting the corresponding `AZURE_OPENAI_API_KEY` and
`AZURE_OPENAI_API_VERSION` environment variables. This functionality is
added alongside the existing provider options (OpenAI, OpenRouter,
etc.).

Related to #92

**Note:** This PR is currently in **Draft** status because tests on the
`main` branch are failing. It will be marked as ready for review once
the `main` branch is stable and tests are passing.

---

## What’s Changed

- **Configuration (`config.ts`, `providers.ts`, `README.md`):**
  - Added `"azure"` to the supported `providers` list in `providers.ts`,
specifying its name, default base URL structure, and environment
variable key (`AZURE_OPENAI_API_KEY`).
  - Defined the `AZURE_OPENAI_API_VERSION` environment variable in
`config.ts` with a default value (`2025-03-01-preview`).
  - Updated `README.md` to:
    - Include "azure" in the list of providers.
    - Add a configuration section for Azure OpenAI, detailing the required
environment variables (`AZURE_OPENAI_API_KEY`,
`AZURE_OPENAI_API_VERSION`) with examples.
- **Client Instantiation (`terminal-chat.tsx`, `singlepass-cli-app.tsx`,
`agent-loop.ts`, `compact-summary.ts`, `model-utils.ts`):**
  - Modified various components and utility functions where the OpenAI
client is initialized.
  - Added conditional logic to check if the configured `provider` is
`"azure"`.
  - If the provider is Azure, the `AzureOpenAI` client from the `openai`
package is instantiated, using the configured `baseURL`, `apiKey` (from
`AZURE_OPENAI_API_KEY`), and `apiVersion` (from
`AZURE_OPENAI_API_VERSION`).
  - Otherwise, the standard `OpenAI` client is instantiated as before.
- **Dependencies:**
  - Relies on the `openai` package's built-in support for `AzureOpenAI`.
No *new* external dependencies were added specifically for this Azure
implementation beyond the `openai` package itself.
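
A sketch of the conditional instantiation described above (config shape assumed, not the exact codex-cli code):

```typescript
import OpenAI, { AzureOpenAI } from "openai";

function createClient(config: { provider: string; baseURL?: string; apiKey: string }) {
  if (config.provider === "azure") {
    return new AzureOpenAI({
      baseURL: config.baseURL,
      apiKey: config.apiKey, // from AZURE_OPENAI_API_KEY
      apiVersion: process.env["AZURE_OPENAI_API_VERSION"] ?? "2025-03-01-preview",
    });
  }
  return new OpenAI({ baseURL: config.baseURL, apiKey: config.apiKey });
}
```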

---

## How to Test

*This has been tested locally and confirmed working with Azure OpenAI.*

1.  **Configure `config.json`:**
Ensure your `~/.codex/config.json` (or project-specific config) includes
Azure and sets it as the active provider:
    ```jsonc
    {
      "providers": {
        // ... other providers
        "azure": {
          "name": "AzureOpenAI",
          "baseURL": "https://YOUR_RESOURCE_NAME.openai.azure.com", // Replace with your Azure endpoint
          "envKey": "AZURE_OPENAI_API_KEY"
        }
      },
      "provider": "azure", // Set Azure as the active provider
      "model": "o4-mini" // Use your Azure deployment name here
      // ... other config settings
    }
    ```
2.  **Set up Environment Variables:**
    ```bash
    # Set the API Key for your Azure OpenAI resource
    export AZURE_OPENAI_API_KEY="your-azure-api-key-here"

    # Set the API Version (optional - defaults to `2025-03-01-preview` if not set)
    # Ensure this version is supported by your Azure deployment and endpoint
    export AZURE_OPENAI_API_VERSION="2025-03-01-preview"
    ```
3.  **Get the Codex CLI by building from this PR branch:**
Clone your fork, checkout this branch (`feat/azure-openai`), navigate to
`codex-cli`, and build:
    ```bash
    # cd /path/to/your/fork/codex
    git checkout feat/azure-openai # Or your branch name
    cd codex-cli
    corepack enable
    pnpm install
    pnpm build
    ```
4.  **Invoke Codex:**
Run the locally built CLI using `node` from the `codex-cli` directory:
    ```bash
    node ./dist/cli.js "Explain the purpose of this PR"
    ```
*(Alternatively, if you ran `pnpm link` after building, you can use
`codex "Explain the purpose of this PR"` from anywhere)*.
5. **Verify:** Confirm that the command executes successfully and
interacts with your configured Azure OpenAI deployment.

---

## Tests

- [x] Tested locally against an Azure OpenAI deployment using API Key
authentication. Basic commands and interactions confirmed working.

---

## Checklist

- [x] Added Azure provider details to configuration files
(`providers.ts`, `config.ts`).
- [x] Implemented conditional `AzureOpenAI` client initialization based
on provider setting.
- [x] Ensured `apiVersion` is passed correctly to the Azure client.
- [x] Updated `README.md` with Azure OpenAI setup instructions.
- [x] Manually tested core functionality against a live Azure OpenAI
endpoint.
- [x] Add/update automated tests for the Azure code path (pending `main`
stability).

cc @theabhinavdas @nikodem-wrona @fouad-openai @tibo-openai (adjust as
needed)

---

I have read the CLA Document and I hereby sign the CLA
2025-05-09 18:11:32 -07:00
Anil Karaka
76a979007e fix: increase output limits for truncating collector (#575)
This Pull Request addresses an issue where the output of commands
executed in the raw-exec utility was being truncated due to restrictive
limits on the number of lines and bytes collected. The truncation caused
the message [Output truncated: too many lines or bytes] to appear when
processing large outputs, which could hinder the functionality of the
CLI.

Changes Made

- Increased the maximum output limits in the `createTruncatingCollector`
utility:
  - Bytes: increased from 10 KB to 100 KB.
  - Lines: increased from 256 lines to 1024 lines.
- Installed the `@types/node` package to resolve missing type definitions
for `NodeJS` and `Buffer`.
- Verified and fixed any related errors in the `createTruncatingCollector`
implementation.

Issue Solved:

This PR ensures that larger outputs can be processed without truncation,
improving the usability of the CLI for commands that generate extensive
output. Fixes https://github.com/openai/codex/issues/509
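
A hedged sketch of a truncating collector with the raised limits (the real `createTruncatingCollector` may differ in detail):

```typescript
const MAX_BYTES = 100 * 1024; // raised from 10 KB
const MAX_LINES = 1024;       // raised from 256

export function createTruncatingCollector() {
  const chunks: Array<Buffer> = [];
  let bytes = 0;
  let lines = 0;
  let truncated = false;

  return {
    append(chunk: Buffer): void {
      if (truncated) return;
      bytes += chunk.length;
      lines += chunk.toString("utf8").split("\n").length - 1;
      if (bytes > MAX_BYTES || lines > MAX_LINES) {
        truncated = true;
        return;
      }
      chunks.push(chunk);
    },
    getOutput(): string {
      const text = Buffer.concat(chunks).toString("utf8");
      return truncated
        ? `${text}\n[Output truncated: too many lines or bytes]`
        : text;
    },
  };
}
```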

---------

Co-authored-by: Michael Bolin <bolinfest@gmail.com>
2025-05-05 10:26:55 -07:00
anup-openai
f6b1ce2e3a Configure HTTPS agent for proxies (#775)
- Some workflows require you to route OpenAI API traffic through a proxy
- See
https://github.com/openai/openai-node/tree/v4?tab=readme-ov-file#configuring-an-https-agent-eg-for-proxies
for more details
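
Following the linked openai-node v4 docs, the setup looks roughly like this (the proxy URL is a placeholder):

```typescript
import OpenAI from "openai";
import { HttpsProxyAgent } from "https-proxy-agent";

// Route all OpenAI API traffic through an HTTPS proxy.
const client = new OpenAI({
  httpAgent: new HttpsProxyAgent(
    process.env["HTTPS_PROXY"] ?? "http://proxy.example.com:8080",
  ),
});
```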

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Co-authored-by: Fouad Matin <fouad@openai.com>
2025-05-02 12:08:13 -07:00
Michael Bolin
a4b51f6b67 feat: use Landlock for sandboxing on Linux in TypeScript CLI (#763)
Building on top of https://github.com/openai/codex/pull/757, this PR
updates Codex to use the Landlock executor binary for sandboxing in the
Node.js CLI. Note that Codex has to be invoked with either `--full-auto`
or `--auto-edit` to activate sandboxing. (Using `--suggest` or
`--dangerously-auto-approve-everything` ensures the sandboxing codepath
will not be exercised.)

When I tested this on a Linux host (specifically, `Ubuntu 24.04.1 LTS`),
things worked as expected: I ran Codex CLI with `--full-auto` and then
asked it to do `echo 'hello mbolin' into hello_world.txt` and it
succeeded without prompting me.

However, in my testing, I discovered that the sandboxing did *not* work
when using `--full-auto` in a Linux Docker container from a macOS host.
I updated the code to throw a detailed error message when this happens:


![image](https://github.com/user-attachments/assets/e5b99def-f00e-4ade-a0c5-2394d30df52e)
2025-05-01 12:34:56 -07:00
Fouad Matin
985fd44ec0 fix: input keyboard shortcut opt+delete (#685) 2025-04-30 17:17:13 -07:00
moppywhip
bc4e6db749 feat: @mention files in codex (#701)
Solves #700

## State of the World Before

Prior to this PR, when users wanted to share file contents with Codex,
they had two options:
- Manually copy and paste file contents into the chat
- Wait for the assistant to use the shell tool to view the file

The second approach required the assistant to:
1. Recognize the need to view a file
2. Execute a shell tool call
3. Wait for the tool call to complete
4. Process the file contents

This consumed extra tokens and reduced user control over which files
were shared with the model.

## State of the World After

With this PR, users can now:
- Reference files directly in their chat input using the `@path` syntax
- Have file contents automatically expanded into XML blocks before being
sent to the LLM

For example, users can type `@src/utils/config.js` in their message, and
the file contents will be included in context. Within the terminal chat
history, these file blocks will be collapsed back to `@path` format in
the UI for clean presentation.

Tag File suggestions:
<img width="857" alt="file-suggestions"
src="https://github.com/user-attachments/assets/397669dc-ad83-492d-b5f0-164fab2ff4ba"
/>

Tagging files in action:
<img width="858" alt="tagging-files"
src="https://github.com/user-attachments/assets/0de9d559-7b7f-4916-aeff-87ae9b16550a"
/>

Demo video of file tagging:
[![Demo video of file
tagging](https://img.youtube.com/vi/vL4LqtBnqt8/0.jpg)](https://www.youtube.com/watch?v=vL4LqtBnqt8)

## Implementation Details

This PR consists of 2 main components:

1. **File Tag Utilities**:
- New `file-tag-utils.ts` utility module that handles both expansion and
collapsing of file tags
- `expandFileTags()` identifies `@path` tokens and replaces them with
XML blocks containing file contents
- `collapseXmlBlocks()` reverses the process, converting XML blocks back
to `@path` format for UI display
- Tokens are only expanded if they point to valid files (directories are
ignored)
   - Expansion happens just before sending input to the model

2. **Terminal Chat Integration**:
- Leveraged the existing file system completion system for tabbing to
support the `@path` syntax
   - Added `updateFsSuggestions` helper to manage filesystem suggestions
- Added `replaceFileSystemSuggestion` to replace input with filesystem
suggestions
- Applied `collapseXmlBlocks` in the chat response rendering so that
tagged files are shown as simple `@path` tags

The PR also includes test coverage for both the UI and the file tag
utilities.
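
A hedged sketch of the expansion step (tag format assumed from the description; the real `expandFileTags()` in `file-tag-utils.ts` may differ):

```typescript
import * as fs from "node:fs";

// Replace @path tokens with XML blocks containing the file contents.
// Directories and non-existent paths are left untouched.
export function expandFileTags(input: string): string {
  return input.replace(/@(\S+)/g, (token, p: string) => {
    try {
      if (!fs.statSync(p).isFile()) return token;
      return `<file path="${p}">\n${fs.readFileSync(p, "utf8")}\n</file>`;
    } catch {
      return token;
    }
  });
}
```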

## Next Steps

Some ideas I'd like to implement if this feature gets merged:

- Line selection: `@path[50:80]` to grab specific sections of files
- Method selection: `@path#methodName` to grab just one function/class
- Visual improvements: highlight file tags in the UI to make them more
noticeable
2025-04-30 16:19:55 -07:00
Kevin Alwell
bd82101859 fix: insufficient quota message (#758)
This pull request includes a change to improve the error message
displayed when there is insufficient quota in the `AgentLoop` class. The
updated message provides more detailed information and a link for
managing or purchasing credits.

Error message improvement:

* `codex-cli/src/utils/agent/agent-loop.ts`: Updated the error message in
the `AgentLoop` class to include the specific error message (if available)
and a link to manage or purchase credits.


Fixes #751
2025-04-30 16:00:50 -07:00
Michael Bolin
033d379eca fix: remove unused _writableRoots arg to exec() function (#762)
I suspect this was done originally so that `execForSandbox()` had a
consistent signature for both the `SandboxType.NONE` and
`SandboxType.MACOS_SEATBELT` cases, but that is not really necessary and
turns out to make the upcoming Landlock support a bit more complicated
to implement, so I had Codex remove it and clean up the call sites.
2025-04-30 14:08:27 -07:00
Michael Bolin
2f1d96e77d fix: remove errant eslint-disable so pnpm run lint passes again (#756)
My bad: introduced in https://github.com/openai/codex/pull/753.
2025-04-30 11:37:11 -07:00
Michael Bolin
84aaefa102 fix: read version from package.json instead of modifying session.ts (#753)
I am working to simplify the build process. As a first step, update
`session.ts` so it reads the `version` from `package.json` at runtime so
we no longer have to modify it during the build process. I want to get
to a place where the build looks like:

```
cd codex-cli
pnpm i
pnpm build
RELEASE_DIR=$(mktemp -d)
cp -r bin "$RELEASE_DIR/bin"
cp -r dist "$RELEASE_DIR/dist"
cp -r src "$RELEASE_DIR/src" # important if we want sourcemaps to continue to work
cp ../README.md "$RELEASE_DIR"
VERSION=$(printf '0.1.%d' $(date +%y%m%d%H%M))
jq --arg version "$VERSION" '.version = $version' package.json > "$RELEASE_DIR/package.json"
```

Then the contents of `$RELEASE_DIR` should be good to `npm publish`, no?
2025-04-30 11:03:10 -07:00
Kevin Alwell
a6ed7ff103 Fixes issue #726 by adding config to configToSave object (#728)
The saveConfig() function only includes a hardcoded subset of properties
when writing the config file. Any property not explicitly listed (like
disableResponseStorage) will be dropped.
I have added `disableResponseStorage` to the `configToSave` object as
the immediate fix.

[Linking Issue this fixes.](https://github.com/openai/codex/issues/726)
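
A sketch of the shape of the fix (property names beyond `disableResponseStorage` are illustrative):

```typescript
type StoredConfig = {
  model: string;
  provider?: string;
  disableResponseStorage?: boolean;
};

// saveConfig() serializes an explicit subset of properties, so any field
// missing from this object is silently dropped on save.
function buildConfigToSave(config: StoredConfig): StoredConfig {
  return {
    model: config.model,
    provider: config.provider,
    disableResponseStorage: config.disableResponseStorage, // the added property
  };
}
```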
2025-04-29 13:10:16 -04:00
Rashim
892242ef7c feat: add --reasoning CLI flag (#314)
This PR adds a new CLI flag: `--reasoning`, which allows users to
customize the reasoning effort level (`low`, `medium`, or `high`) used
by OpenAI's `o` models.
By introducing the `--reasoning` flag, users gain more flexibility when
working with the models. It enables optimization for either speed or
depth of reasoning, depending on specific use cases.
This PR resolves #107

- **Flag**: `--reasoning`
- **Accepted Values**: `low`, `medium`, `high`
- **Default Behavior**: If not specified, the model uses the default
reasoning level.

## Example Usage

```bash
codex --reasoning=low "Write a simple function to calculate factorial"
```

---------

Co-authored-by: Fouad Matin <169186268+fouad-openai@users.noreply.github.com>
Co-authored-by: yashrwealthy <yash.rastogi@wealthy.in>
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-29 07:30:49 -07:00
Thibault Sottiaux
d09dbba7ec feat: lower default retry wait time and increase number of tries (#720)
In total we now guarantee that we will wait for at least 60s before
giving up.

---------

Signed-off-by: Thibault Sottiaux <tibo@openai.com>
2025-04-28 21:11:30 -07:00
Michael Bolin
40460faf2a fix: tighten up check for /usr/bin/sandbox-exec (#710)
* In both TypeScript and Rust, we now invoke `/usr/bin/sandbox-exec`
explicitly rather than whatever `sandbox-exec` happens to be on the
`PATH`.
* Changed `isSandboxExecAvailable` to use `access()` rather than
`command -v` so that:
  * We only do the check once over the lifetime of the Codex process.
  * The check is specific to `/usr/bin/sandbox-exec`.
  * We now do a syscall rather than incur the overhead of spawning a
process, dealing with timeouts, etc.
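
A sketch of a one-time `access()`-style check, memoized for the process lifetime (assumed shape, not the exact helper):

```typescript
import { accessSync, constants } from "node:fs";

let sandboxExecAvailable: boolean | undefined;

export function isSandboxExecAvailable(): boolean {
  if (sandboxExecAvailable === undefined) {
    try {
      // A single syscall against the absolute path; no subprocess, no PATH lookup.
      accessSync("/usr/bin/sandbox-exec", constants.X_OK);
      sandboxExecAvailable = true;
    } catch {
      sandboxExecAvailable = false;
    }
  }
  return sandboxExecAvailable;
}
```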

I think there is still room for improvement here where we should move
the `isSandboxExecAvailable` check earlier in the CLI, ideally right
after we do arg parsing to verify that we can provide the Seatbelt
sandbox if that is what the user has requested.
2025-04-28 13:42:04 -07:00
Thibault Sottiaux
fa5fa8effc fix: only allow running without sandbox if explicitly marked in safe container (#699)
Signed-off-by: Thibault Sottiaux <tibo@openai.com>
2025-04-28 07:48:38 -07:00
Thibault Sottiaux
e9d16d3c2b fix: check if sandbox-exec is available (#696)
- Introduce `isSandboxExecAvailable()` helper and tidy import ordering
in `handle-exec-command.ts`.
- Add runtime check for the `sandbox-exec` binary on macOS; fall back to
`SandboxType.NONE` with a warning if it’s missing, preventing crashes.

---------

Signed-off-by: Thibault Sottiaux <tibo@openai.com>
Co-authored-by: Fouad Matin <fouad@openai.com>
2025-04-27 17:04:47 -07:00
Fouad Matin
523996b5cb fix: /diff should include untracked files (#686) 2025-04-26 12:43:51 -07:00
Tomas Cupr
bc500d3009 feat: user config api key (#569)
Adds support for reading OPENAI_API_KEY (and other variables) from a
user‑wide dotenv file (~/.codex.config). Precedence order is now:
  1. explicit environment variable
  2. project‑local .env (loaded earlier)
  3. ~/.codex.config
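
Because dotenv never overrides variables that are already set, loading the user-wide file last preserves that order. A minimal sketch:

```typescript
import { config as loadDotenv } from "dotenv";
import os from "node:os";
import path from "node:path";

// 1) explicit env var and 2) project-local .env were loaded earlier;
// 3) ~/.codex.config fills in only what is still unset.
loadDotenv({ path: path.join(os.homedir(), ".codex.config") });
```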

Also adds a regression test that ensures the multiline editor correctly
handles cases where printable text and the CSI‑u Shift+Enter sequence
arrive in the same input chunk.

House‑kept with Prettier; removed stray temp.json artifact.
2025-04-26 10:13:30 -07:00
moppywhip
9b0ccf9aeb fix: duplicate messages in quiet mode (#680)
Addressing #600 and #664 (partially)

## Bug
Codex was staging duplicate items in quiet-mode output when the same
response item appeared in both streaming events. Specifically:
1. Items would be staged once when received as a
`response.output_item.done` event
2. The same items would be staged again when included in the final
`response.completed` payload

This duplication would result in each message being sent several times
in the quiet mode output.

## Changes
- Added a Set (`alreadyStagedItemIds`) to track items that have already
been staged
- Modified the `stageItem` function to check if an item's ID is already
in this set before staging it
- Added a regression test (`agent-dedupe-items.test.ts`) that verifies
items with the same ID are only staged once
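
A minimal sketch of the dedupe (callback shape assumed):

```typescript
const alreadyStagedItemIds = new Set<string>();

function stageItem(item: { id?: string }, onItem: (item: { id?: string }) => void): void {
  if (item.id != null) {
    if (alreadyStagedItemIds.has(item.id)) {
      return; // already staged via the other streaming event
    }
    alreadyStagedItemIds.add(item.id);
  }
  onItem(item);
}
```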

## Testing
Like other tests, the included test creates a mock OpenAI stream that
emits the same message twice (once as an incremental event and once in
the final response) and verifies the item is only passed to `onItem`
once.
2025-04-26 09:14:50 -07:00
Fouad Matin
103093f793 bump(version): 0.1.2504251709 (#660)
## `0.1.2504251709`

### 🚀 Features

- Add openai model info configuration (#551)
- Added provider to run quiet mode function (#571)
- Create parent directories when creating new files (#552)
- Print bug report URL in terminal instead of opening browser (#510)
(#528)
- Add support for custom provider configuration in the user config
(#537)
- Add support for OpenAI-Organization and OpenAI-Project headers (#626)
- Add specific instructions for creating API keys in error msg (#581)
- Enhance toCodePoints to prevent potential unicode 14 errors (#615)
- More native keyboard navigation in multiline editor (#655)
- Display error on selection of invalid model (#594)

### 🪲 Bug Fixes

- Model selection (#643)
- Nits in apply patch (#640)
- Input keyboard shortcuts (#676)
- `apply_patch` unicode characters (#625)
- Don't clear turn input before retries (#611)
- More loosely match context for apply_patch (#610)
- Update bug report template - there is no --revision flag (#614)
- Remove outdated copy of text input and external editor feature (#670)
- Remove unreachable "disableResponseStorage" logic flow introduced in
#543 (#573)
- Non-openai mode - fix for gemini content: null, fix 429 to throw
before stream (#563)
- Only allow going up in history when not already in history if input is
empty (#654)
- Do not grant "node" user sudo access when using run_in_container.sh
(#627)
- Update scripts/build_container.sh to use pnpm instead of npm (#631)
- Update lint-staged config to use pnpm --filter (#582)
- Non-openai mode - don't default temp and top_p (#572)
- Fix error catching when checking for updates (#597)
- Close stdin when running an exec tool call (#636)
2025-04-25 17:15:40 -07:00
Fouad Matin
3f4762d969 fix: input keyboard shortcuts (#676)
Fixes keyboard shortcuts:
- ctrl+a/e
- opt+arrow keys
2025-04-25 16:58:09 -07:00
Thibault Sottiaux
44d68f9dbf fix: remove outdated copy of text input and external editor feature (#670)
Signed-off-by: Thibault Sottiaux <tibo@openai.com>
2025-04-25 16:11:16 -07:00