Commit Graph

30 Commits

Author SHA1 Message Date
Michael Bolin
557f608f25 fix: add support for fileOpener in config.json (#911)
This PR introduces the following type:

```typescript
export type FileOpenerScheme = "vscode" | "cursor" | "windsurf";
```

and uses it as the new type for a `fileOpener` option in `config.json`.
If set, this will be used to linkify file annotations in the output
using the URI-based file opener supported in VS Code-based IDEs.
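
For reference, a minimal sketch of how this linkification could work (the helper name and annotation format here are assumptions; VS Code-family IDEs accept `<scheme>://file<absolute path>:<line>` URIs, and terminals render links via OSC 8 escapes):

```typescript
import supportsHyperlinks from "supports-hyperlinks";

export type FileOpenerScheme = "vscode" | "cursor" | "windsurf";

// Hypothetical helper: wrap a file reference in an OSC 8 terminal
// hyperlink that targets the IDE's URI-based file opener.
function linkifyFileAnnotation(
  scheme: FileOpenerScheme,
  absolutePath: string,
  line: number,
): string {
  const label = `${absolutePath}:${line}`;
  if (!supportsHyperlinks.stdout) {
    return label; // terminal cannot render hyperlinks; keep plain text
  }
  const uri = `${scheme}://file${absolutePath}:${line}`;
  return `\u001B]8;;${uri}\u0007${label}\u001B]8;;\u0007`;
}
```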

Updated `codex-cli/tests/markdown.test.tsx` to verify the new behavior.
Note it required mocking `supports-hyperlinks` and temporarily modifying
`chalk.level` to yield the desired output.
2025-05-13 09:45:46 -07:00
Michael Bolin
05bb5d7d46 fix: always load version from package.json at runtime (#909)
Note the high-level motivation behind this change is to avoid the need
to make temporary changes in the source tree in order to cut a release
build since that runs the risk of leaving things in an inconsistent
state in the event of a failure. The existing code:

```
import pkg from "../../package.json" assert { type: "json" };
```

did not work as intended because, as written, ESBuild would bake the
contents of the local `package.json` into the release build at build
time whereas we want it to read the contents at runtime so we can use
the `package.json` in the tree to build the code and later inject a
modified version into the release package with a timestamped build
version.

Changes:

* move `CLI_VERSION` out of `src/utils/session.ts` and into
`src/version.ts` (sketched after this list) so `../package.json` is a
correct relative path both from `src/version.ts` in the source tree and
also in the final `dist/cli.js` build output
* change `assert` to `with` in `import pkg` as apparently `with` became
standard in Node 22
* mark `"../package.json"` as external in `build.mjs` so the version is
not baked into the `.js` at build time
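
A minimal sketch of what the resulting `src/version.ts` could look like (the commit message doesn't show the file, so the exact contents are an assumption):

```typescript
// src/version.ts
// `with { type: "json" }` is the standardized import-attributes syntax
// for JSON modules in Node 22. "../package.json" is marked external in
// build.mjs, so the file is read at runtime instead of being baked into
// dist/cli.js by ESBuild.
import pkg from "../package.json" with { type: "json" };

export const CLI_VERSION: string = pkg.version;
```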

After using `pnpm stage-release` to build a release version, if I use
Node 22.0 to run Codex, I see the following printed to stderr at
startup:

```
(node:71308) ExperimentalWarning: Importing JSON modules is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
```

Note it is a warning and does not prevent Codex from running.

The warning goes away in Node 22.12, though it still appears in Node
22.11. Since 22.15.0 is the current Node 22 LTS release, LTS users will
not see it.

Also, something about moving the definition of `CLI_VERSION` caused a
problem with the mocks in `check-updates.test.ts`. I asked Codex to fix
it, and it came up with the change to the test configs. I don't know
enough about vitest to understand what it did, but the tests seem
healthy again, so I'm going with it.
2025-05-12 21:27:15 -07:00
Govind Kamtamneni
7795272282 Adds Azure OpenAI support (#769)
## Summary

This PR introduces support for Azure OpenAI as a provider within the
Codex CLI. Users can now configure the tool to leverage their Azure
OpenAI deployments by specifying `"azure"` as the provider in
`config.json` and setting the corresponding `AZURE_OPENAI_API_KEY` and
`AZURE_OPENAI_API_VERSION` environment variables. This functionality is
added alongside the existing provider options (OpenAI, OpenRouter,
etc.).

Related to #92

**Note:** This PR is currently in **Draft** status because tests on the
`main` branch are failing. It will be marked as ready for review once
the `main` branch is stable and tests are passing.

---

## What’s Changed

-   **Configuration (`config.ts`, `providers.ts`, `README.md`):**
    -   Added `"azure"` to the supported `providers` list in
        `providers.ts`, specifying its name, default base URL structure,
        and environment variable key (`AZURE_OPENAI_API_KEY`).
    -   Defined the `AZURE_OPENAI_API_VERSION` environment variable in
        `config.ts` with a default value (`2025-03-01-preview`).
    -   Updated `README.md` to:
        -   Include "azure" in the list of providers.
        -   Add a configuration section for Azure OpenAI, detailing the
            required environment variables (`AZURE_OPENAI_API_KEY`,
            `AZURE_OPENAI_API_VERSION`) with examples.
-   **Client Instantiation (`terminal-chat.tsx`, `singlepass-cli-app.tsx`,
    `agent-loop.ts`, `compact-summary.ts`, `model-utils.ts`):**
    -   Modified the components and utility functions where the OpenAI
        client is initialized.
    -   Added conditional logic to check whether the configured
        `provider` is `"azure"`.
    -   If the provider is Azure, the `AzureOpenAI` client from the
        `openai` package is instantiated, using the configured `baseURL`,
        `apiKey` (from `AZURE_OPENAI_API_KEY`), and `apiVersion` (from
        `AZURE_OPENAI_API_VERSION`). See the sketch after this list.
    -   Otherwise, the standard `OpenAI` client is instantiated as before.
-   **Dependencies:**
    -   Relies on the `openai` package's built-in support for
        `AzureOpenAI`. No *new* external dependencies were added
        specifically for this Azure implementation beyond the `openai`
        package itself.
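
A sketch of the conditional instantiation described above (the config shape is an assumption; `AzureOpenAI` is a named export of the `openai` package):

```typescript
import OpenAI, { AzureOpenAI } from "openai";

// Sketch: choose the client implementation based on the provider.
function createClient(config: {
  provider: string;
  baseURL: string;
  apiKey: string;
}): OpenAI {
  if (config.provider === "azure") {
    return new AzureOpenAI({
      baseURL: config.baseURL,
      apiKey: config.apiKey, // from AZURE_OPENAI_API_KEY
      apiVersion:
        process.env["AZURE_OPENAI_API_VERSION"] ?? "2025-03-01-preview",
    });
  }
  // Standard client for OpenAI, OpenRouter, and other providers.
  return new OpenAI({ baseURL: config.baseURL, apiKey: config.apiKey });
}
```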

---

## How to Test

*This has been tested locally and confirmed working with Azure OpenAI.*

1.  **Configure `config.json`:**
    Ensure your `~/.codex/config.json` (or project-specific config)
    includes Azure and sets it as the active provider:
    ```json
    {
      "providers": {
        // ... other providers
        "azure": {
          "name": "AzureOpenAI",
          "baseURL": "https://YOUR_RESOURCE_NAME.openai.azure.com", // Replace with your Azure endpoint
          "envKey": "AZURE_OPENAI_API_KEY"
        }
      },
      "provider": "azure", // Set Azure as the active provider
      "model": "o4-mini" // Use your Azure deployment name here
      // ... other config settings
    }
    ```
2.  **Set up Environment Variables:**
    ```bash
    # Set the API Key for your Azure OpenAI resource
    export AZURE_OPENAI_API_KEY="your-azure-api-key-here"

    # Set the API Version (optional - defaults to `2025-03-01-preview` if not set)
    # Ensure this version is supported by your Azure deployment and endpoint
    export AZURE_OPENAI_API_VERSION="2025-03-01-preview"
    ```
3.  **Get the Codex CLI by building from this PR branch:**
    Clone your fork, check out this branch (`feat/azure-openai`),
    navigate to `codex-cli`, and build:
    ```bash
    # cd /path/to/your/fork/codex
    git checkout feat/azure-openai # Or your branch name
    cd codex-cli
    corepack enable
    pnpm install
    pnpm build
    ```
4.  **Invoke Codex:**
    Run the locally built CLI using `node` from the `codex-cli` directory:
    ```bash
    node ./dist/cli.js "Explain the purpose of this PR"
    ```
    *(Alternatively, if you ran `pnpm link` after building, you can use
    `codex "Explain the purpose of this PR"` from anywhere.)*
5.  **Verify:** Confirm that the command executes successfully and
    interacts with your configured Azure OpenAI deployment.

---

## Tests

- [x] Tested locally against an Azure OpenAI deployment using API Key
authentication. Basic commands and interactions confirmed working.

---

## Checklist

- [x] Added Azure provider details to configuration files
(`providers.ts`, `config.ts`).
- [x] Implemented conditional `AzureOpenAI` client initialization based
on provider setting.
-   [x] Ensured `apiVersion` is passed correctly to the Azure client.
-   [x] Updated `README.md` with Azure OpenAI setup instructions.
- [x] Manually tested core functionality against a live Azure OpenAI
endpoint.
- [x] Add/update automated tests for the Azure code path (pending `main`
stability).

cc @theabhinavdas @nikodem-wrona @fouad-openai @tibo-openai (adjust as
needed)

---

I have read the CLA Document and I hereby sign the CLA
2025-05-09 18:11:32 -07:00
sooraj
36a5a02d5c feat: display error on selection of invalid model (#594)
Up-to-date version of #78.

Fixes #32

Addressed the requested changes @tibo-openai :) they made sense to me.


Though, the previous rationale for passing the state up assumed there
could be a future need for shared state, with all available models
exposed to the parent.
2025-04-24 16:56:00 -07:00
Luci
e84fa6793d fix(agent-loop): notify type (#608)
## Description

The `as AppConfig` type assertion in the constructor may introduce
potential type safety risks. Removing the assertion and making `notify`
an optional parameter could enhance type robustness and prevent
unexpected runtime errors.
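
A sketch of the shape change (field names besides `notify` are assumptions):

```typescript
// Before: the constructor coerced a partial object into the full type,
// hiding a possibly-missing field from the compiler.
//   this.config = { ...params.config } as AppConfig;

// After: `notify` is declared optional, so no assertion is needed and
// callers that omit it still type-check.
type AppConfig = {
  model: string; // ...other required config fields
  notify?: boolean;
};
```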

close: #605
2025-04-24 11:08:52 -07:00
kshern
146a61b073 feat: add support for custom provider configuration in the user config (#537)
### What

- Add support for loading and merging custom provider configurations
from a local `providers.json` file.
- Allow users to override or extend default providers with their own
settings.

### Why

This change enables users to flexibly customize and extend provider
endpoints and API keys without modifying the codebase, making the CLI
more adaptable for various LLM backends and enterprise use cases.

### How

- Introduced `loadProvidersFromFile` and `getMergedProviders` in config
logic.
- Added/updated related tests in `tests/config.test.tsx` (merge logic
sketched below).
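
A minimal sketch of that merge logic (function signatures are assumptions based on the description):

```typescript
import { existsSync, readFileSync } from "node:fs";

type ProviderConfig = { name: string; baseURL: string; envKey: string };

// Load user-defined providers from providers.json, if present.
function loadProvidersFromFile(path: string): Record<string, ProviderConfig> {
  if (!existsSync(path)) {
    return {};
  }
  return JSON.parse(readFileSync(path, "utf8"));
}

// User entries override or extend the built-in defaults.
function getMergedProviders(
  defaults: Record<string, ProviderConfig>,
  userConfigPath: string,
): Record<string, ProviderConfig> {
  return { ...defaults, ...loadProvidersFromFile(userConfigPath) };
}
```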


### Checklist

- [x] Lint passes for changed files
- [x] Tests pass for all files
- [x] Documentation/comments updated as needed

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-23 01:45:56 -04:00
Gabriel Bianconi
98a22273d9 fix: inconsistent usage of base URL and API key (#507)
A recent commit introduced the ability to use third-party model
providers. (Really appreciate it!)

However, the usage is inconsistent: some pieces of code use the custom
providers, whereas others still have the old behavior. Additionally,
`OPENAI_BASE_URL` is now being disregarded when it shouldn't be.

This PR normalizes the usage to `getApiKey` and `getBaseUrl`, and
enables the use of `OPENAI_BASE_URL` if present.
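
The normalized helpers plausibly look something like this (a sketch; the exact signatures and provider table are assumptions):

```typescript
const providers: Record<string, { baseURL: string; envKey: string }> = {
  openai: { baseURL: "https://api.openai.com/v1", envKey: "OPENAI_API_KEY" },
  // ...other providers
};

// Sketch: one shared code path for resolving the base URL, honoring
// OPENAI_BASE_URL when it is set.
function getBaseUrl(provider: string): string {
  if (provider === "openai" && process.env["OPENAI_BASE_URL"]) {
    return process.env["OPENAI_BASE_URL"];
  }
  return providers[provider]?.baseURL ?? "https://api.openai.com/v1";
}

// Sketch: look up the API key via the provider's configured env var.
function getApiKey(provider: string): string | undefined {
  const envKey = providers[provider]?.envKey ?? "OPENAI_API_KEY";
  return process.env[envKey];
}
```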

---------

Co-authored-by: Gabriel Bianconi <GabrielBianconi@users.noreply.github.com>
2025-04-22 10:51:26 -04:00
Fouad Matin
9f5ccbb618 feat: add support for ZDR orgs (#481)
- Add `store: boolean` to `AgentLoop` to enable client-side storage of
response items
- Add `--disable-response-storage` arg + `disableResponseStorage` config
(sketched below)
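
A sketch of what the flag implies for requests (`store` and `previous_response_id` follow the Responses API; the surrounding shape is an assumption):

```typescript
// Sketch: for zero-data-retention (ZDR) orgs the server must not store
// response items, so the CLI threads the transcript itself.
const disableResponseStorage = true; // from --disable-response-storage
const transcriptItems: Array<{ role: string; content: string }> = [];

const request = {
  model: "o4-mini",
  input: transcriptItems, // resend full history when storage is off
  store: !disableResponseStorage,
  // previous_response_id is only usable when store is true
};
```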
2025-04-22 01:30:16 -07:00
Mitchell Kutchuk
09f0ae3899 fix: unintended tear down of agent loop (#483)
fixes #465
2025-04-21 15:01:09 -04:00
Thibault Sottiaux
3c4f1fea9b chore: consolidate model utils and drive-by cleanups (#476)
Signed-off-by: Thibault Sottiaux <tibo@openai.com>
2025-04-21 12:33:57 -04:00
Thibault Sottiaux
dc276999a9 chore: improve storage/ implementation; use log(...) consistently (#473)
This PR tidies up primitives under storage/.

**Noop changes:**

* Promote logger implementation to top-level utility outside of agent/
* Use logger within storage primitives
* Cleanup doc strings and comments

**Functional changes:**

* Increase command history size to 10_000
* Remove unnecessary debounce implementation and ensure a session ID is
created only once per agent loop

---------

Signed-off-by: Thibault Sottiaux <tibo@openai.com>
2025-04-21 09:51:34 -04:00
Scott Leibrand
e3c8ce49ca fix: allow proper exit from new Switch approval mode dialog (#453)
As described in
https://github.com/openai/codex/issues/392#issuecomment-2817090022
introduced by #400

The testing I'd done worked correctly because I was using the (s)
shortcut, but selecting the same option using arrow‑key → Enter on
“Switch approval mode” was preventing the user from subsequently exiting
the Switch approval mode dialog, requiring a ^C to quit codex entirely.
With this fix, both entry methods work correctly in my testing.

Per codex:

Issue

- When you navigated down (↓) to “Switch approval mode (s)” in the Shell
Command review dialog and pressed Enter, the ApprovalModeOverlay would
open—but because the underlying `TerminalChatCommandReview` component
stayed mounted (albeit disabled), its own Ink input handlers immediately
re‑captured the same key events and re‑opened the overlay as soon as you
hit Esc or Enter again. In practice this made it impossible to exit the
submenu.

Root cause

- We only disabled the SelectInput via `isDisabled`, but never fully
unmounted the review UI when an overlay was shown, so its `useInput` and
`<Select>` hooks were still active and “stealing” keys.

Fix

- In `terminal-chat.tsx` we now only render `<TerminalChatInput>` (and
by extension `TerminalChatCommandReview`) when `overlayMode === "none"`.
That unmounts all of its key handlers whenever any overlay (history,
model, approval, help, diff) is open, so no input leaks through (see the
sketch below).
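
In JSX terms, the gate is roughly (a sketch of the described change, not the exact diff):

```tsx
// terminal-chat.tsx (sketch): render the input block only when no
// overlay is open, so its useInput handlers are fully unmounted while
// the history/model/approval/help/diff overlays are showing.
{overlayMode === "none" && agent && (
  <TerminalChatInput /* ...existing props... */ />
)}
```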

Files changed

- **src/components/chat/terminal-chat.tsx**: Wrapped the entire
`<TerminalChatInput>` block in `overlayMode === "none" && agent`

With that in place, arrow‑key → Enter on “Switch approval mode”
correctly opens the overlay, and then you can use Enter/Esc inside the
overlay without getting stuck or immediately re‑opening it.
2025-04-20 22:19:53 -07:00
Daniel Nakov
eafbc75612 feat: support multiple providers via Responses-Completion transformation (#247)
https://github.com/user-attachments/assets/9ecb51be-fa65-4e99-8512-abb898dda569

Implemented it as a transformation between Responses API and Completion
API so that it supports existing providers that implement the Completion
API and minimizes the changes needed to the codex repo.
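
Conceptually the adapter translates shapes in both directions; a highly simplified sketch of the streaming side (real chunks and events carry much more structure):

```typescript
// Sketch: adapt a Chat Completions stream chunk into the Responses-style
// text-delta event the rest of the CLI already consumes.
type CompletionChunk = { choices: Array<{ delta: { content?: string } }> };
type ResponseTextDelta = { type: "response.output_text.delta"; delta: string };

function toResponseDelta(chunk: CompletionChunk): ResponseTextDelta | null {
  const text = chunk.choices[0]?.delta?.content;
  return text ? { type: "response.output_text.delta", delta: text } : null;
}
```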

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Co-authored-by: Fouad Matin <169186268+fouad-openai@users.noreply.github.com>
Co-authored-by: Fouad Matin <fouad@openai.com>
2025-04-20 20:59:34 -07:00
Michael Bolin
b554b522f7 fix: remove unnecessary isLoggingEnabled() checks (#420)
It appears that use of `isLoggingEnabled()` was erroneously copypasta'd
in many places. This PR updates its docstring to clarify that it should
only be used to avoid constructing a potentially expensive log message.
With this change, the only function that merits/uses this check is
`execCommand()`.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/420).
* #423
* __->__ #420
* #419
2025-04-20 09:58:06 -07:00
Abdelrhman Kamal Mahmoud Ali Slim
81cf47e591 feat: auto-open model selector if user selects deprecated model (#427)
https://github.com/user-attachments/assets/d254dff6-f1f9-4492-9dfd-d185c38d3a75
2025-04-20 09:51:49 -07:00
Michael Bolin
63c99e7d82 use spawn instead of exec to avoid injection vulnerability (#416)
https://github.com/openai/codex/pull/160 introduced a call to `exec()`
that takes a format string as an argument, but it is not clear that the
expansions within the format string are escaped safely. As written, it
is possible a carefully crafted command (e.g., if `cwd` were `"; && rm
-rf` or something...) could run arbitrary code.

Moving to `spawn()` makes this a bit better, as now at least `spawn()`
itself won't run an arbitrary process, though I suppose `osascript`
itself still could if the value passed to `-e` were abused. I'm not
clear on the escaping rules for AppleScript to ensure that `safePreview`
and `cwd` are injected safely.
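
The safer pattern, roughly (a sketch of the idea rather than the exact patch; as noted above, AppleScript's own quoting rules would still need auditing):

```typescript
import { spawn } from "node:child_process";

// Sketch: pass arguments as an array so no shell ever interpolates
// them. exec() with a format string hands attacker-influenced text
// (e.g. a hostile cwd) straight to /bin/sh.
function notifyMacOS(title: string, message: string): void {
  // JSON.stringify as a rough double-quote escaping strategy for the
  // AppleScript string literals.
  const script = `display notification ${JSON.stringify(
    message,
  )} with title ${JSON.stringify(title)}`;
  spawn("osascript", ["-e", script]);
}
```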

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/416).
* #423
* #420
* #419
* __->__ #416
2025-04-19 18:29:00 -07:00
Fouad Matin
b3b195351e feat: /diff command to view git diff (#426)
Adds `/diff` command to view git diff
2025-04-19 16:23:27 -07:00
Michael Bolin
419f085cc4 re-enable Prettier check for codex-cli in CI (#417)
This check was lost in https://github.com/openai/codex/pull/287. Both
the root folder and `codex-cli/` have their own `pnpm format` commands
that check the formatting of different things.

Also ran `pnpm format:fix` to fix the formatting violations that got in
while this was disabled in CI.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/417).
* #420
* #419
* #416
* __->__ #417
2025-04-19 11:22:45 -07:00
Abdelrhman Kamal Mahmoud Ali Slim
f3cab736b4 fix: name of the file not matching the name of the component (#354)
before

![image](https://github.com/user-attachments/assets/e47595bc-8b5b-470d-8cca-1d4c4afbf802)
after

![image](https://github.com/user-attachments/assets/c04bc384-534e-44f0-9917-08a15f7280dc)

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-19 07:23:02 -07:00
Scott Leibrand
9eeb78e54f feat: allow switching approval modes when prompted to approve an edit/command (#400)
Implements https://github.com/openai/codex/issues/392

When the user is in suggest or auto-edit mode and gets an approval
request, they now have an option in the `Shell Command` dialog to:
`Switch approval mode (v)`

That option brings up the standard `Switch approval mode` dialog,
allowing the user to switch into the desired mode, then drops them back
to the `Shell Command` dialog's `Allow command?` prompt, allowing them
to approve the current command and let the agent continue doing the rest
of what it was doing without interruption.
```
╭────────────────────────────────────────────────────────
│Shell Command
│                          
│$ apply_patch << 'PATCH'    
│*** Begin Patch                      
│*** Update File: foo.txt          
│@@ -1 +1 @@                         
│-foo                                          
│+bar                                         
│*** End Patch                         
│PATCH                                    
│                                                
│                                                
│Allow command?                   
│                                                
│    Yes (y)                                
│    Explain this command (x) 
│    Edit or give feedback (e)  
│    Switch approval mode (v)
│    No, and keep going (n)    
│    No, and stop for now (esc)
╰────────────────────────────────────────────────────────╭────────────────────────────────────────────────────────
│ Switch approval mode      
│ Current mode: suggest  
│                                          
│                                          
│                                          
│ ❯ suggest                        
│   auto-edit                       
│   full-auto                        
│ type to search · enter to confirm · esc to cancel 
╰────────────────────────────────────────────────────────
```
2025-04-19 07:21:19 -07:00
salama-openai
1a8610cd9e feat: add flex mode option for cost savings (#372)
Adding in an option to turn on flex processing mode to reduce costs when
running the agent.

Bumped the openai typescript version to add the new feature.
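
Flex processing is selected per request via the SDK's `service_tier` parameter; a sketch (the model choice here is illustrative):

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Sketch: opt a request into flex processing, trading speed for cost.
const response = await openai.responses.create({
  model: "o3",
  input: "Summarize this repository's build steps.",
  service_tier: "flex",
});
```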

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-18 22:15:01 -07:00
Fouad Matin
aa32e22d4b fix: /bug report command, thinking indicator (#381)
- Fix `/bug` report command
- Fix thinking indicator
2025-04-18 18:13:34 -07:00
Fouad Matin
8e2e77fafb feat: add /bug report command (#312)
Add `/bug` command for chat session
2025-04-18 14:09:35 -07:00
Thomas
9a948836bf feat: add /compact (#289)
Added the ability to compact. Not sure if I should switch the model over
to gpt-4.1 for longer context or if keeping the current model is fine.
Also I'm not sure if emitting the compacted summary as a system message
is best practice; would love feedback 😄

Mentioned in this issue: https://github.com/openai/codex/issues/230
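
A sketch of the compaction idea (names are assumptions; the summary's role is exactly the open question raised above):

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

// Sketch: /compact replaces the transcript with a single summary
// message so the conversation continues with a much smaller context.
async function compact(
  history: Array<Message>,
  summarize: (h: Array<Message>) => Promise<string>, // model call
): Promise<Array<Message>> {
  const summary = await summarize(history);
  return [
    { role: "system", content: `Summary of prior conversation: ${summary}` },
  ];
}
```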
2025-04-17 22:48:30 -07:00
kchro3
0a2e416b7a feat: add notifications for MacOS using Applescript (#160)
yolo'ed it with codex. Let me know if this looks good to you.

https://github.com/openai/codex/issues/148

tested with:
```
npm run build:dev
```

<img width="377" alt="Screenshot 2025-04-16 at 18 12 01"
src="https://github.com/user-attachments/assets/79aa799b-b0b9-479d-84f1-bfb83d34bfb9"
/>
2025-04-17 16:19:26 -07:00
Michael Bolin
ae5b1b5cb5 add support for -w,--writable-root to add more writable roots for sandbox (#263)
This adds support for a new flag, `-w,--writable-root`, that can be
specified multiple times to _amend_ the list of folders that should be
configured as "writable roots" by the sandbox used in `full-auto` mode.
Values that are passed as relative paths will be resolved to absolute
paths.

Incidentally, this required updating a number of the `agent*.test.ts`
files: it feels like some of the setup logic across those tests could be
consolidated.

In my testing, it seems that this might be slightly out of distribution
for the model, as I had to explicitly tell it to run `apply_patch` and
that it had the permissions to write those files (initially, it just
showed me a diff and told me to apply it myself). Nevertheless, I think
this is a good starting point.
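
The path handling is the one subtle detail; roughly (a sketch):

```typescript
import path from "node:path";

// Sketch: each -w/--writable-root value amends the sandbox's writable
// roots; relative paths are resolved against the current directory.
function resolveWritableRoots(flagValues: Array<string>): Array<string> {
  return flagValues.map((p) => path.resolve(process.cwd(), p));
}
```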
2025-04-17 15:39:26 -07:00
Brayden Moon
f3d085aaf8 feat: shell command explanation option (#173)
# Shell Command Explanation Option

## Description
This PR adds an option to explain shell commands when the user is
prompted to approve them (Fixes #110). When reviewing a shell command,
users can now select "Explain this command" to get a detailed
explanation of what the command does before deciding whether to approve
or reject it.

## Changes
- Added a new "EXPLAIN" option to the `ReviewDecision` enum (sketched
below)
- Updated the command review UI to include an "Explain this command (x)"
option
- Implemented the logic to send the command to the LLM for explanation
using the same model as the agent
- Added a display for the explanation in the command review UI
- Updated all relevant components to pass the explanation through the
component tree
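
A sketch of the enum change (member names other than `EXPLAIN` are assumptions):

```typescript
// Sketch: the review dialog's outcome, extended with EXPLAIN.
enum ReviewDecision {
  YES = "yes",
  NO_CONTINUE = "no_continue",
  NO_EXIT = "no_exit",
  EXPLAIN = "explain", // new: ask the model to explain the command first
}
```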

## Benefits
- Improves user understanding of shell commands before approving them
- Reduces the risk of approving potentially harmful commands
- Enhances the educational aspect of the tool, helping users learn about
shell commands
- Maintains the same workflow with minimal UI changes

## Testing
- Manually tested the explanation feature with various shell commands
- Verified that the explanation is displayed correctly in the UI
- Confirmed that the user can still approve or reject the command after
viewing the explanation

## Screenshots

![improved_shell_explanation_demo](https://github.com/user-attachments/assets/05923481-29db-4eba-9cc6-5e92301d2be0)


## Additional Notes
The explanation is generated using the same model as the agent, ensuring
consistency in the quality and style of explanations.

---------

Signed-off-by: crazywolf132 <crazywolf132@gmail.com>
2025-04-17 13:28:58 -07:00
Brayden Moon
b0ccca5556 fix: allow continuing after interrupting assistant (#178)
## Description
This PR fixes the issue where the CLI can't continue after interrupting
the assistant with ESC ESC (Fixes #114). The problem was caused by
duplicate code in the `cancel()` method and improper state reset after
cancellation.

## Changes
- Fixed duplicate code in the `cancel()` method of the `AgentLoop` class
- Added proper reset of the `currentStream` property in the `cancel()`
method
- Created a new `AbortController` after aborting the current one to
ensure future tool calls work (see the sketch after this list)
- Added a system message to indicate the interruption to the user
- Added a comprehensive test to verify the fix
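
The heart of the fix, roughly (a sketch based on the description above):

```typescript
// Sketch of cancel(): abort in-flight work, then immediately create a
// fresh AbortController so the next turn is not born pre-aborted.
class AgentLoop {
  private abortController = new AbortController();
  private currentStream: unknown = null;

  cancel(): void {
    this.currentStream = null; // reset the interrupted stream
    this.abortController.abort(); // stop pending tool calls
    this.abortController = new AbortController(); // ready for new input
  }
}
```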

## Benefits
- Users can now continue using the CLI after interrupting the assistant
- Improved user experience by providing feedback when interruption
occurs
- Better state management in the agent loop

## Testing
- Added a dedicated test that verifies the agent can process new input
after cancellation
- Manually tested the fix by interrupting the assistant and confirming
that new input is processed correctly

---------

Signed-off-by: crazywolf132 <crazywolf132@gmail.com>
2025-04-16 22:20:19 -07:00
Michael Bolin
9b733fc48f Back out @lib indirection in tsconfig.json (#111) 2025-04-16 14:16:53 -07:00
Ilan Bigio
59a180ddec Initial commit
Signed-off-by: Ilan Bigio <ilan@openai.com>
2025-04-16 12:56:08 -04:00