Introduces support for slash commands, similar to the TypeScript CLI. We do
not support the full set of commands yet, but the core abstraction is in
place now.
In particular, we have a `SlashCommand` enum, and thanks to thoughtful use
of the [strum](https://crates.io/crates/strum) crate, adding a new command
to the list requires minimal boilerplate.
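For illustration, here is a minimal sketch of what such a strum-backed enum can look like; the variants and derives shown are placeholders rather than the actual Codex definition:

```rust
// Minimal sketch, not the actual Codex `SlashCommand`. Requires the `strum`
// and `strum_macros` crates (or `strum` with the "derive" feature).
use strum::IntoEnumIterator;
use strum_macros::{AsRefStr, EnumIter, EnumString};

#[derive(Debug, Clone, Copy, PartialEq, Eq, AsRefStr, EnumIter, EnumString)]
#[strum(serialize_all = "kebab-case")]
enum SlashCommand {
    Clear,
    History,
    Quit,
}

fn main() {
    // The popup can enumerate every command without a hand-maintained list.
    for cmd in SlashCommand::iter() {
        println!("/{}", cmd.as_ref());
    }
    // Parsing user input back into a command comes for free as well.
    assert_eq!("quit".parse::<SlashCommand>().ok(), Some(SlashCommand::Quit));
}
```

With this setup, adding a command is just adding a variant (plus its description), which is where the "minimal boilerplate" claim comes from.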
The key new piece of UI is `CommandPopup`, though the keyboard events
are still handled by `ChatComposer`. The behavior is roughly as follows:
* if the first character in the composer is `/`, the command popup is
displayed (if you really want to send a message to Codex that starts
with a `/`, simply put a space before the `/`)
* while the popup is displayed, up/down can be used to change the
selection of the popup
* if there is a selection, hitting tab completes the command, but does
not send it
* if there is a selection, hitting enter sends the command
* if the text in the composer starts with the name of a command, that
command is shown in the popup so the user can see its description (commands
can take arguments, so additional text may appear after the command name
itself); a rough sketch of this dispatch logic follows the list
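The following is a hypothetical sketch of that popup dispatch, with stand-in types; the real `ChatComposer` and `CommandPopup` work with crossterm key events and differ in detail:

```rust
// Hypothetical sketch of the popup key handling described above; not the
// actual `CommandPopup` API.
enum Key {
    Up,
    Down,
    Tab,
    Enter,
}

struct CommandPopup {
    matches: Vec<&'static str>, // commands whose names match the current prefix
    selected: usize,
}

impl CommandPopup {
    /// Returns `Some(command)` when Enter should send the selected command.
    fn handle_key(&mut self, key: Key, composer_text: &mut String) -> Option<String> {
        match key {
            Key::Up => self.selected = self.selected.saturating_sub(1),
            Key::Down => {
                self.selected = (self.selected + 1).min(self.matches.len().saturating_sub(1))
            }
            // Tab completes the selected command into the composer but does not send it.
            Key::Tab => {
                if let Some(cmd) = self.matches.get(self.selected) {
                    *composer_text = format!("/{cmd} ");
                }
            }
            // Enter sends the selected command.
            Key::Enter => {
                if let Some(cmd) = self.matches.get(self.selected) {
                    return Some(format!("/{cmd}"));
                }
            }
        }
        None
    }
}

fn main() {
    let mut popup = CommandPopup { matches: vec!["clear", "history"], selected: 0 };
    let mut composer = String::from("/");
    popup.handle_key(Key::Down, &mut composer);
    popup.handle_key(Key::Tab, &mut composer);
    assert_eq!(composer, "/history ");
    assert_eq!(popup.handle_key(Key::Enter, &mut composer), Some("/history".to_string()));
}
```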
https://github.com/user-attachments/assets/39c3e6ee-eeb7-4ef7-a911-466d8184975f
Incidentally, Codex wrote almost all the code for this PR!
`BottomPane` was getting a bit unwieldy because it maintained a
`PaneState` enum with three variants and many of its methods had `match`
statements to handle each variant. To replace the enum, this PR:
* Introduces a `trait BottomPaneView` that has two implementations:
`StatusIndicatorView` and `ApprovalModalView`.
* Migrates `PaneState::TextInput` into its own struct, `ChatComposer`,
that does **not** implement `BottomPaneView`.
* Updates `BottomPane` so it has `composer: ChatComposer` and
`active_view: Option<Box<dyn BottomPaneView<'a> + 'a>>`. The idea is
that `active_view` takes priority and is displayed when it is `Some`;
otherwise, `ChatComposer` is displayed.
* While methods of `BottomPane` often have to check whether
`active_view` is present to decide which component to delegate to, the
code is more straightforward than before and introducing new
implementations of `BottomPaneView` should be less painful.
Because we want to retain the `TextArea` owned by `ChatComposer` even when
another view is displayed, keeping `ChatComposer` distinct from
`BottomPaneView` keeps the ownership logic simple.
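Here is a hedged, self-contained sketch of that shape with stand-in types; the real trait also handles rendering with ratatui, key events, and lifetimes tied to the composer's `TextArea`:

```rust
// Illustrative sketch of the `BottomPane` structure described above; not the
// actual implementation.
struct ChatComposer {
    text: String, // stands in for the owned `TextArea`
}

trait BottomPaneView {
    fn render(&self) -> String;
}

struct StatusIndicatorView;
impl BottomPaneView for StatusIndicatorView {
    fn render(&self) -> String {
        "working...".to_string()
    }
}

struct ApprovalModalView;
impl BottomPaneView for ApprovalModalView {
    fn render(&self) -> String {
        "approve command? [y/N]".to_string()
    }
}

struct BottomPane {
    composer: ChatComposer,
    active_view: Option<Box<dyn BottomPaneView>>,
}

impl BottomPane {
    // `active_view` takes priority when present; otherwise the composer shows.
    fn render(&self) -> String {
        match &self.active_view {
            Some(view) => view.render(),
            None => self.composer.text.clone(),
        }
    }
}

fn main() {
    let mut pane = BottomPane {
        composer: ChatComposer { text: "> draft message".to_string() },
        active_view: None,
    };
    assert_eq!(pane.render(), "> draft message");
    pane.active_view = Some(Box::new(StatusIndicatorView));
    assert_eq!(pane.render(), "working...");
    pane.active_view = Some(Box::new(ApprovalModalView));
    assert_eq!(pane.render(), "approve command? [y/N]");
    // Clearing the view brings the composer back with its state intact.
    pane.active_view = None;
    assert_eq!(pane.render(), "> draft message");
}
```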
More about codespell: https://github.com/codespell-project/codespell .
I have personally introduced it to dozens, if not hundreds, of projects
already, and so far the feedback has been only positive.
The CI workflow has `permissions` set only to `read`, so it should also be safe.
Let me know if you would rather just take the typo fixes and drop the CI workflow.
---------
Signed-off-by: Yaroslav O. Halchenko <debian@onerussian.com>
## `0.1.2505140839`
### 🪲 Bug Fixes
- Gpt-4.1 apply_patch handling (#930)
- Add support for fileOpener in config.json (#911)
- Patch in #366 and #367 for marked-terminal (#916)
- Remember to set lastIndex = 0 on shared RegExp (#918)
- Always load version from package.json at runtime (#909)
- Tweak the label for citations for better rendering (#919)
- Tighten up some logic around session timestamps and ids (#922)
- Change EventMsg enum so every variant takes a single struct (#925)
- Reasoning default to medium, show workdir when supplied (#931)
- Test_dev_null_write() was not using echo as intended (#923)
While the `TextArea` used in the Rust TUI is "multiline," it is not like
an HTML `<textarea>` in that it does not wrap, so there was not much
benefit to setting `MIN_TEXTAREA_ROWS` to `3`; this PR changes it to
`1`. There are still three ways to "increase" the height via actual
line breaks:
* paste in multiline content (this worked before this PR)
* pressing `Ctrl+J` will insert a newline
* if you have your terminal emulator configured such that it is possible to
press something that `crossterm` interprets as "Enter plus some
modifier," that will now also work
Now things look a bit more compact on startup:
<img width="745" alt="image"
src="https://github.com/user-attachments/assets/86e2857f-f31c-46f5-a80b-1ab2120b266e"
/>
I believe this test was meant to verify that echoing content to `/dev/null`
succeeded, but instead it was testing the equivalent of `echo
'blah > /dev/null'`.
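To make the difference concrete, here is a small sketch (not the original test) comparing the two invocations on a Unix-like system:

```rust
// Sketch, not the original test: the first invocation redirects to /dev/null
// and prints nothing; the second echoes the literal string, redirection and all.
use std::process::Command;

fn run(script: &str) -> String {
    let out = Command::new("/bin/sh")
        .arg("-c")
        .arg(script)
        .output()
        .expect("failed to run /bin/sh");
    String::from_utf8_lossy(&out.stdout).into_owned()
}

fn main() {
    assert_eq!(run("echo blah > /dev/null"), "");
    assert_eq!(run("echo 'blah > /dev/null'"), "blah > /dev/null\n");
}
```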
https://github.com/openai/codex/pull/922 did this for the
`SessionConfigured` enum variant, and I think it is generally helpful to
be able to work with the payload of each enum variant as its own type,
so this converts the remaining variants and updates all of the
callsites.
Added a simple unit test to verify that the JSON-serialized version of
`Event` does not have any unexpected nesting.
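A hedged sketch of the pattern and the kind of assertion such a test can make; the variant and field names here are illustrative rather than the actual `EventMsg` definition:

```rust
// Illustrative sketch, not the actual Codex `EventMsg`: every variant carries
// exactly one struct payload, and serde's internal tagging keeps the JSON flat.
// Requires `serde` (with the "derive" feature) and `serde_json`.
use serde::Serialize;

#[derive(Serialize)]
struct SessionConfiguredEvent {
    session_id: String,
}

#[derive(Serialize)]
struct AgentMessageEvent {
    message: String,
}

#[derive(Serialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum EventMsg {
    SessionConfigured(SessionConfiguredEvent),
    AgentMessage(AgentMessageEvent),
}

fn main() {
    let ev = EventMsg::AgentMessage(AgentMessageEvent {
        message: "hello".to_string(),
    });
    let json = serde_json::to_value(&ev).expect("event should serialize");
    // No unexpected nesting: the payload fields sit next to the "type" tag.
    assert_eq!(json["type"], "agent_message");
    assert_eq!(json["message"], "hello");
}
```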
* update `SessionConfigured` event to include the UUID for the session
* show the UUID in the Rust TUI
* use local timestamps in log files instead of UTC
* include timestamps in log file names for easier discovery
This introduces a much-needed "profile" concept where users can specify
a collection of options under one name and then pass that via
`--profile` to the CLI.
This PR introduces the `ConfigProfile` struct and makes it a field of
`ConfigToml`. It further updates
`Config::load_from_base_config_with_overrides()` to respect
`ConfigProfile`, overriding default values where appropriate. A detailed
unit test is added at the end of `config.rs` to verify this behavior.
Details on how to use this feature have also been added to
`codex-rs/README.md`.
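A rough sketch of the precedence this implies; this is illustrative only, as the real `Config::load_from_base_config_with_overrides()` handles many more fields:

```rust
// Illustrative precedence sketch, not the actual Codex implementation:
// CLI overrides win over the selected profile, which wins over the base
// config.toml value, which wins over a built-in default.
#[derive(Default)]
struct ConfigProfile {
    model: Option<String>,
}

#[derive(Default)]
struct Overrides {
    model: Option<String>,
}

fn resolve_model(base: Option<String>, profile: ConfigProfile, overrides: Overrides) -> String {
    overrides
        .model
        .or(profile.model)
        .or(base)
        .unwrap_or_else(|| "o4-mini".to_string()) // placeholder built-in default
}

fn main() {
    // The profile value is used when the CLI does not override it...
    let model = resolve_model(
        Some("gpt-4.1".to_string()),
        ConfigProfile { model: Some("o3".to_string()) },
        Overrides::default(),
    );
    assert_eq!(model, "o3");

    // ...but an explicit CLI override still wins.
    let model = resolve_model(
        None,
        ConfigProfile { model: Some("o3".to_string()) },
        Overrides { model: Some("gpt-4o".to_string()) },
    );
    assert_eq!(model, "gpt-4o");
}
```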
Since the repo currently has two different implementations of Codex, the
flake was updated to work with both the TypeScript and Rust
implementations.
Adds a space so that sequential citations have some more breathing room.
As I had to update the tests for this change, I also introduced a
`toDiffableString()` helper to make the test easier to update as we make
formatting changes to the output.
This PR uses [`pnpm
patch`](https://www.petermekhaeil.com/til/pnpm-patch/) to pull in the
following proposed fixes for `marked-terminal`:
* https://github.com/mikaelbr/marked-terminal/pull/366
* https://github.com/mikaelbr/marked-terminal/pull/367
This adds a substantial test to `codex-cli/tests/markdown.test.tsx` to
verify the new behavior.
Note that one of the tests shows two citations being split across a line
even though the rendered version would fit comfortably on one line.
Changing this likely requires a subtle fix to `marked-terminal` to
account for "rendered length" when determining line breaks.
This PR introduces the following type:
```typescript
export type FileOpenerScheme = "vscode" | "cursor" | "windsurf";
```
and uses it as the new type for a `fileOpener` option in `config.json`.
If set, this will be used to linkify file annotations in the output
using the URI-based file opener supported in VS Code-based IDEs.
Currently, this does not pass:
Updated `codex-cli/tests/markdown.test.tsx` to verify the new behavior.
Note it required mocking `supports-hyperlinks` and temporarily modifying
`chalk.level` to yield the desired output.
Note the high-level motivation behind this change is to avoid the need
to make temporary changes in the source tree in order to cut a release
build since that runs the risk of leaving things in an inconsistent
state in the event of a failure. The existing code:
```
import pkg from "../../package.json" assert { type: "json" };
```
did not work as intended because, as written, ESBuild would bake the
contents of the local `package.json` into the release build at build
time whereas we want it to read the contents at runtime so we can use
the `package.json` in the tree to build the code and later inject a
modified version into the release package with a timestamped build
version.
Changes:
* move `CLI_VERSION` out of `src/utils/session.ts` and into
`src/version.ts` so `../package.json` is a correct relative path both
from `src/version.ts` in the source tree and also in the final
`dist/cli.js` build output
* change `assert` to `with` in `import pkg` as apparently `with` became
standard in Node 22
* mark `"../package.json"` as external in `build.mjs` so the version is
not baked into the `.js` at build time
After using `pnpm stage-release` to build a release version, if I use
Node 22.0 to run Codex, I see the following printed to stderr at
startup:
```
(node:71308) ExperimentalWarning: Importing JSON modules is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
```
Note it is a warning and does not prevent Codex from running.
In Node 22.12, the warning goes away, but the warning still appears in
Node 22.11. For Node 22, 22.15.0 is the current LTS version, so LTS
users will not see this.
Also, something about moving the definition of `CLI_VERSION` caused a
problem with the mocks in `check-updates.test.ts`. I asked Codex to fix
it, and it came up with the change to the test configs. I don't know
enough about vitest to understand what it did, but the tests seem
healthy again, so I'm going with it.
I had seen issues where `codex-rs` would not always write files without
me pressuring it to do so, and between that and the report of
https://github.com/openai/codex/issues/900, I decided to look into this
further. I found two serious issues with agent instructions:
(1) We were only sending agent instructions on the first turn, but
looking at the TypeScript code, we should be sending them on every turn.
(2) There was a serious issue where the agent instructions were
frequently lost:
* The TypeScript CLI appears to keep writing `~/.codex/instructions.md`:
55142e3e6c/codex-cli/src/utils/config.ts (L586)
* If `instructions.md` is present, the Rust CLI uses the contents of it
INSTEAD OF the default prompt, even if `instructions.md` is empty:
55142e3e6c/codex-rs/core/src/config.rs (L202-L203)
The combination of these two things means that I have been using
`codex-rs` without these key instructions:
https://github.com/openai/codex/blob/main/codex-rs/core/prompt.md
Looking at the TypeScript code, it appears we should be concatenating
these three items every time (if they exist):
* `prompt.md`
* `~/.codex/instructions.md`
* nearest `AGENTS.md`
This PR fixes things so that:
* `Config.instructions` is `None` if `instructions.md` is empty
* `Payload.instructions` is now `&'a str` instead of `Option<&'a
String>` because we should always have _something_ to send
* `Prompt` now has a `get_full_instructions()` helper that returns a
`Cow<str>` that will always include the agent instructions first.
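A hedged sketch of the idea behind `get_full_instructions()`; the constant, names, and exact concatenation here are illustrative, not the real implementation:

```rust
// Illustrative sketch: always lead with the built-in agent instructions and
// only allocate when user instructions need to be appended.
use std::borrow::Cow;

const BASE_INSTRUCTIONS: &str = "You are Codex. Always write files when asked.";

struct Prompt {
    // e.g. the contents of instructions.md and/or AGENTS.md, if any.
    user_instructions: Option<String>,
}

impl Prompt {
    fn get_full_instructions(&self) -> Cow<'_, str> {
        match &self.user_instructions {
            // Borrow the static prompt when there is nothing to append.
            None => Cow::Borrowed(BASE_INSTRUCTIONS),
            // Otherwise build an owned string with the agent instructions first.
            Some(extra) => Cow::Owned(format!("{BASE_INSTRUCTIONS}\n\n{extra}")),
        }
    }
}

fn main() {
    let p = Prompt { user_instructions: None };
    assert_eq!(p.get_full_instructions(), BASE_INSTRUCTIONS);

    let p = Prompt { user_instructions: Some("Prefer small diffs.".to_string()) };
    assert!(p.get_full_instructions().starts_with(BASE_INSTRUCTIONS));
}
```

Either way, an empty `instructions.md` can no longer silently replace the default prompt.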
This PR fixes a potential path traversal vulnerability by ensuring all
paths are properly normalized in the `resolvePathAgainstWorkdir`
function.
## Changes
- Added path normalization for both absolute and relative paths
- Ensures normalized paths are used in all subsequent operations
- Prevents potential path traversal attacks through non-normalized paths
This minimal change addresses the security concern without adding
unnecessary complexity, while maintaining compatibility with existing
code.
This PR introduces an optional build flag, `--native`, that will build a
version of the Codex npm module that:
- Includes both the Node.js and native Rust versions (for Mac and Linux)
- Will run the native version if `CODEX_RUST=1` is set
- Runs the TypeScript version otherwise
Note this PR also updates the workflow URL to
https://github.com/openai/codex/actions/runs/14872557396, as that is a
build from today that includes everything up through
https://github.com/openai/codex/pull/843.
Test Plan:
In `~/code/codex/codex-cli`, I ran:
```
pnpm stage-release --native
```
The end of the output was:
```
Staged version 0.1.2505121317 for release in /var/folders/wm/f209bc1n2bd_r0jncn9s6j_00000gp/T/tmp.xd2p5ETYGN
Test Node:
node /var/folders/wm/f209bc1n2bd_r0jncn9s6j_00000gp/T/tmp.xd2p5ETYGN/bin/codex.js --help
Test Rust:
CODEX_RUST=1 node /var/folders/wm/f209bc1n2bd_r0jncn9s6j_00000gp/T/tmp.xd2p5ETYGN/bin/codex.js --help
Next: cd "/var/folders/wm/f209bc1n2bd_r0jncn9s6j_00000gp/T/tmp.xd2p5ETYGN" && npm publish --tag native
```
I verified that running each of these commands ran the expected version
of Codex.
While here, I also added `bin` to the `files` list in `package.json`,
which should have been done as part of
https://github.com/openai/codex/pull/757, as that added new entries to
`bin` that were matched by `.gitignore` but should have been included in
a release.
Adds `expect()` as a denied lint. The same deal as with `unwrap()` applies:
we now need to put `#[expect(...)]` on the calls we legitimately want. Took
care to allow `expect()` in test contexts.
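For reference, a small sketch of what that looks like at a call site, assuming the lint being denied is `clippy::expect_used`:

```rust
// Sketch of opting a single call out of the denied lint; assumes the workspace
// denies `clippy::expect_used` (and `clippy::unwrap_used`) outside of tests.
#[expect(clippy::expect_used, reason = "PATH is required for the CLI to run at all")]
fn path_from_env() -> String {
    std::env::var("PATH").expect("PATH must be set")
}

fn main() {
    println!("{}", path_from_env());
}
```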
# Tests
```
cargo fmt
cargo clippy --all-features --all-targets --no-deps -- -D warnings
cargo test
```
This PR fixes things so that:
* when the `BottomPane` is in the `StatusIndicator` state, the border
should be dim
* when the `BottomPane` does not have input focus, the border should be
dim
To make it easier to enforce this invariant, this PR introduces
`BottomPane::set_state()` that will:
* update `self.state`
* call `update_border_for_input_focus()`
* request a repaint
This should make it easier to enforce other updates for state changes
going forward.
As shown in the screenshot, we now include reasoning messages from the
model in the TUI under the heading "codex reasoning":

To ensure these are visible by default when using `o4-mini`, this also
changes the default value for `summary` (formerly `generate_summary`,
which is deprecated in favor of `summary` according to the docs) from
unset to `"auto"`.
The TypeScript CLI already has support for including the contents of
`AGENTS.md` in the instructions sent with the first turn of a
conversation. This PR brings this functionality to the Rust CLI.
To be considered, `AGENTS.md` must be in the `cwd` of the session, or in
one of the parent folders up to a Git/filesystem root (whichever is
encountered first).
By default, a maximum of 32 KiB of `AGENTS.md` will be included, though
this is configurable using the new-in-this-PR `project_doc_max_bytes`
option in `config.toml`.
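A rough sketch of the lookup rule; it is illustrative only, since the real implementation also enforces `project_doc_max_bytes` and handles I/O errors more carefully:

```rust
// Illustrative sketch of AGENTS.md discovery: walk from `cwd` toward the
// filesystem root and stop at the first directory that has AGENTS.md or that
// looks like a Git root.
use std::path::{Path, PathBuf};

fn find_project_doc(cwd: &Path) -> Option<PathBuf> {
    let mut dir = Some(cwd);
    while let Some(d) = dir {
        let candidate = d.join("AGENTS.md");
        if candidate.is_file() {
            return Some(candidate);
        }
        // A Git root bounds the search even if it has no AGENTS.md.
        if d.join(".git").exists() {
            return None;
        }
        dir = d.parent();
    }
    None
}

fn main() {
    let cwd = std::env::current_dir().expect("cwd should be readable");
    match find_project_doc(&cwd) {
        Some(path) => println!("would include {}", path.display()),
        None => println!("no AGENTS.md found before the Git/filesystem root"),
    }
}
```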
* Add flexMode to stored config, and use it during config loading unless
the flag is explicitly passed.
* If the config asks for flexMode and the model doesn't support it,
silently disable flexMode.
Resolves #803
- Added ArceeAI as a provider - https://conductor.arcee.ai/v1
- Compatible with ArceeAI SLMs (Virtuoso, Maestro)
- Works with ArceeAI's Conductor auto‑router models (auto, auto‑tool),
once #817 is merged
- Fixes a guard by using optional chaining to safely check
`chunk.choices?.[0]` before accessing it.
- Without the check, accessing `chunk.choices[0]` could throw if
`choices` was missing from the chunk.
Reasoning effort was already available, but it was not mentioned in the help
text, so it was not discoverable.
Other issues discovered, but will fix in separate PR since they are
larger:
* #816 reasoningEffort isn't displayed in the terminal-header, making it
rather hard to see the state of configuration
* I don't think the config file setting works, as the CLI option always
"wins" and overwrites it
Fix: retry on server_error responses that lack an HTTP status code
### What happened
1. An OpenAI endpoint returned a **5xx** (transient server-side
failure).
2. The SDK surfaced it as an `APIError` with
`{ "type": "server_error", "message": "...", "status": undefined }`
(the SDK does not always populate `status` for these cases).
3. Our retry logic in `src/utils/agent/agent-loop.ts` determined
`isServerError = typeof status === "number" && status >= 500;`
Because `status` was *undefined*, the error was **not** recognised as
retriable, the exception bubbled out, and the CLI crashed with a stack
trace similar to `Error: An error occurred while processing the request.
at .../cli.js:474:1514`.
### Root cause
The transient-error detector ignored the semantic flag
`type === "server_error"` that the SDK provides when the numeric status is missing.
### Fix (1 LOC + comment)
Extend the check:

    const status = errCtx?.status ?? errCtx?.httpStatus ?? errCtx?.statusCode;
    const isServerError =
      (typeof status === "number" && status >= 500) || // classic 5xx
      errCtx?.type === "server_error"; // <-- NEW

Now the agent:
* Retries up to **5** times (existing logic) when the backend reports a
transient failure, even if `status` is absent.
* If all retries fail, surfaces the existing friendly system message
instead of an uncaught exception.
### Tests & validation

    pnpm test          # all suites green (17 agent-level tests now include this path)
    pnpm run lint      # 0 errors / warnings
    pnpm run typecheck

A new unit-test file isn't required; the behaviour is already covered by
`tests/agent-server-retry.test.ts`, which stubs `type: "server_error"` and
now passes with the updated logic.
### Impact
* No API-surface changes.
* Prevents CLI crashes on intermittent OpenAI outages.
* Adds robust handling for other providers that may follow the same
error-shape.
When using Codex to develop Codex itself, I noticed that sometimes it
would try to add `#[ignore]` to the following tests:
```
keeps_previous_response_id_between_tasks()
retries_on_early_close()
```
Both of these tests start a `MockServer` that launches an HTTP server on
an ephemeral port and requires network access to hit it, which the
Seatbelt policy associated with `--full-auto` correctly denies. If I
wasn't paying attention to the code that Codex was generating, one of
these `#[ignore]` annotations could have slipped into the codebase,
effectively disabling the test for everyone.
To that end, this PR enables an experimental environment variable named
`CODEX_SANDBOX_NETWORK_DISABLED` that is set to `1` if the
`SandboxPolicy` used to spawn the process does not have full network
access. I say it is "experimental" because I'm not convinced this API is
quite right, but we need to start somewhere. (It might be more
appropriate to have an env var like `CODEX_SANDBOX=full-auto`, but the
challenge is that our newer `SandboxPolicy` abstraction does not map to
a simple set of enums like in the TypeScript CLI.)
We leverage this new functionality by adding the following code to the
aforementioned tests as a way to "dynamically disable" them:
```rust
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
```
We can use the `debug seatbelt --full-auto` command to verify that
`cargo test` fails when run under Seatbelt prior to this change:
```
$ cargo run --bin codex -- debug seatbelt --full-auto -- cargo test
---- keeps_previous_response_id_between_tasks stdout ----
thread 'keeps_previous_response_id_between_tasks' panicked at /Users/mbolin/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/wiremock-0.6.3/src/mock_server/builder.rs:107:46:
Failed to bind an OS port for a mock server.: Os { code: 1, kind: PermissionDenied, message: "Operation not permitted" }
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
keeps_previous_response_id_between_tasks
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
error: test failed, to rerun pass `-p codex-core --test previous_response_id`
```
Though after this change, the above command succeeds! This means that,
going forward, when Codex operates on Codex itself and runs `cargo test`,
only "real failures" should cause the command to fail.
As part of this change, I decided to tighten up the codepaths for
running `exec()` for shell tool calls. In particular, we do it in `core`
for the main Codex business logic itself, but we also expose this logic
via `debug` subcommands in the CLI in the `cli` crate. The logic for the
`debug` subcommands was not quite as faithful to the true business logic
as I liked, so I:
* refactored a bit of the Linux code, splitting `linux.rs` into
`linux_exec.rs` and `landlock.rs` in the `core` crate.
* gated less code behind `#[cfg(target_os = "linux")]`, because such
code does not get built by default when I develop on a Mac, which means I
either have to build the code in Docker or wait for a CI signal
* introduced `macro_rules! configure_command` in `exec.rs` so we can
have both sync and async versions of this code. The synchronous version
seems more appropriate for straight threads or potentially fork/exec.
## Summary
This PR introduces support for Azure OpenAI as a provider within the
Codex CLI. Users can now configure the tool to leverage their Azure
OpenAI deployments by specifying `"azure"` as the provider in
`config.json` and setting the corresponding `AZURE_OPENAI_API_KEY` and
`AZURE_OPENAI_API_VERSION` environment variables. This functionality is
added alongside the existing provider options (OpenAI, OpenRouter,
etc.).
Related to #92
**Note:** This PR is currently in **Draft** status because tests on the
`main` branch are failing. It will be marked as ready for review once
the `main` branch is stable and tests are passing.
---
## What’s Changed
- **Configuration (`config.ts`, `providers.ts`, `README.md`):**
- Added `"azure"` to the supported `providers` list in `providers.ts`,
specifying its name, default base URL structure, and environment
variable key (`AZURE_OPENAI_API_KEY`).
- Defined the `AZURE_OPENAI_API_VERSION` environment variable in
`config.ts` with a default value (`2025-03-01-preview`).
- Updated `README.md` to:
- Include "azure" in the list of providers.
- Add a configuration section for Azure OpenAI, detailing the required
environment variables (`AZURE_OPENAI_API_KEY`,
`AZURE_OPENAI_API_VERSION`) with examples.
- **Client Instantiation (`terminal-chat.tsx`, `singlepass-cli-app.tsx`,
`agent-loop.ts`, `compact-summary.ts`, `model-utils.ts`):**
- Modified various components and utility functions where the OpenAI
client is initialized.
- Added conditional logic to check if the configured `provider` is
`"azure"`.
- If the provider is Azure, the `AzureOpenAI` client from the `openai`
package is instantiated, using the configured `baseURL`, `apiKey` (from
`AZURE_OPENAI_API_KEY`), and `apiVersion` (from
`AZURE_OPENAI_API_VERSION`).
- Otherwise, the standard `OpenAI` client is instantiated as before.
- **Dependencies:**
- Relies on the `openai` package's built-in support for `AzureOpenAI`.
No *new* external dependencies were added specifically for this Azure
implementation beyond the `openai` package itself.
---
## How to Test
*This has been tested locally and confirmed working with Azure OpenAI.*
1. **Configure `config.json`:**
Ensure your `~/.codex/config.json` (or project-specific config) includes
Azure and sets it as the active provider:
```json
{
"providers": {
// ... other providers
"azure": {
"name": "AzureOpenAI",
"baseURL": "https://YOUR_RESOURCE_NAME.openai.azure.com", // Replace
with your Azure endpoint
"envKey": "AZURE_OPENAI_API_KEY"
}
},
"provider": "azure", // Set Azure as the active provider
"model": "o4-mini" // Use your Azure deployment name here
// ... other config settings
}
```
2. **Set up Environment Variables:**
```bash
# Set the API Key for your Azure OpenAI resource
export AZURE_OPENAI_API_KEY="your-azure-api-key-here"
# Set the API Version (Optional - defaults to `2025-03-01-preview` if
not set)
# Ensure this version is supported by your Azure deployment and endpoint
export AZURE_OPENAI_API_VERSION="2025-03-01-preview"
```
3. **Get the Codex CLI by building from this PR branch:**
Clone your fork, checkout this branch (`feat/azure-openai`), navigate to
`codex-cli`, and build:
```bash
# cd /path/to/your/fork/codex
git checkout feat/azure-openai # Or your branch name
cd codex-cli
corepack enable
pnpm install
pnpm build
```
4. **Invoke Codex:**
Run the locally built CLI using `node` from the `codex-cli` directory:
```bash
node ./dist/cli.js "Explain the purpose of this PR"
```
*(Alternatively, if you ran `pnpm link` after building, you can use
`codex "Explain the purpose of this PR"` from anywhere)*.
5. **Verify:** Confirm that the command executes successfully and
interacts with your configured Azure OpenAI deployment.
---
## Tests
- [x] Tested locally against an Azure OpenAI deployment using API Key
authentication. Basic commands and interactions confirmed working.
---
## Checklist
- [x] Added Azure provider details to configuration files
(`providers.ts`, `config.ts`).
- [x] Implemented conditional `AzureOpenAI` client initialization based
on provider setting.
- [x] Ensured `apiVersion` is passed correctly to the Azure client.
- [x] Updated `README.md` with Azure OpenAI setup instructions.
- [x] Manually tested core functionality against a live Azure OpenAI
endpoint.
- [x] Add/update automated tests for the Azure code path (pending `main`
stability).
cc @theabhinavdas @nikodem-wrona @fouad-openai @tibo-openai (adjust as
needed)
---
I have read the CLA Document and I hereby sign the CLA
Noticed that when pasting multi-line blocks, each newline was treated
like a new submission.
Update the TUI to handle Paste events directly and map newlines to Shift+Enter.
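A hypothetical sketch of the idea, with stand-in types rather than the real crossterm/TUI plumbing:

```rust
// Hypothetical sketch: a pasted block is inserted character by character, with
// '\n' routed through the "insert newline" path (Shift+Enter) instead of the
// plain Enter "submit" path.
enum ComposerInput {
    Char(char),
    InsertNewline, // what Shift+Enter produces
    Submit,        // what plain Enter produces
}

fn inputs_for_paste(pasted: &str) -> Vec<ComposerInput> {
    pasted
        .chars()
        .map(|c| {
            if c == '\n' {
                ComposerInput::InsertNewline
            } else {
                ComposerInput::Char(c)
            }
        })
        .collect()
}

fn main() {
    let inputs = inputs_for_paste("Do nothing.\nExplain this repo to me.");
    // The paste itself never generates a Submit event.
    assert!(!inputs.iter().any(|i| matches!(i, ComposerInput::Submit)));
}
```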
# Test
Copied this into clipboard:
```
Do nothing.
Explain this repo to me.
```
Pasted in and saw multi-line input. Hitting Enter then submitted the
full block.
This PR is a straight refactor so that creating the `Child` process for
a `shell` tool call and consuming its output can be separate concerns.
For the actual tool call, we will always apply
`consume_truncated_output()`, but for the top-level debug commands in
the CLI (e.g., `debug seatbelt` and `debug landlock`), we only want to
use the `spawn_child()` part of `exec()`.
We want the subcommands to match the `shell` tool call usage as
faithfully as possible. This becomes more important when we introduce a
new parameter to `spawn_child()` in
https://github.com/openai/codex/pull/879.
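A hedged, synchronous sketch of that split using only the standard library; the real functions are async and have different signatures:

```rust
// Sketch of the separation described above (Unix-flavored example):
// `spawn_child()` only launches the process, while
// `consume_truncated_output()` separately collects a bounded amount of stdout.
use std::io::Read;
use std::process::{Child, Command, Stdio};

fn spawn_child(program: &str, args: &[&str]) -> std::io::Result<Child> {
    Command::new(program)
        .args(args)
        .stdout(Stdio::piped())
        .spawn()
}

fn consume_truncated_output(mut child: Child, max_bytes: usize) -> std::io::Result<String> {
    let mut buf = Vec::new();
    if let Some(mut stdout) = child.stdout.take() {
        // Drain the pipe fully, then keep only the first `max_bytes`; the real
        // implementation streams and truncates more carefully.
        stdout.read_to_end(&mut buf)?;
        buf.truncate(max_bytes);
    }
    child.wait()?;
    Ok(String::from_utf8_lossy(&buf).into_owned())
}

fn main() -> std::io::Result<()> {
    // The `shell` tool-call path uses both halves...
    let child = spawn_child("echo", &["hello"])?;
    println!("{}", consume_truncated_output(child, 10 * 1024)?);
    // ...while a `debug seatbelt`-style command only needs `spawn_child()`.
    Ok(())
}
```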
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/878).
* #879
* __->__ #878
I inadvertently regressed support for the Responses API when adding
support for the chat completions API in
https://github.com/openai/codex/pull/862. This should get both APIs
working again, but the chat completions codepath seems more complex than
necessary. I'll try to clean that up shortly, but I want to get things
working again ASAP.
This is a substantial PR to add support for the chat completions API,
which in turn makes it possible to use non-OpenAI model providers (just
like in the TypeScript CLI):
* It moves a number of structs from `client.rs` to `client_common.rs` so
they can be shared.
* It introduces support for the chat completions API in
`chat_completions.rs`.
* It updates `ModelProviderInfo` so that `env_key` is `Option<String>`
instead of `String` (e.g., for ollama) and adds a `wire_api` field
* It updates `client.rs` to choose between `stream_responses()` and
`stream_chat_completions()` based on the `wire_api` for the
`ModelProviderInfo`
* It updates the `exec` and TUI CLIs to no longer fail if the
`OPENAI_API_KEY` environment variable is not set
* It updates the TUI so that `EventMsg::Error` is displayed more
prominently when it occurs, particularly now that it is important to
alert users to the `CodexErr::EnvVar` variant.
* `CodexErr::EnvVar` was updated to include an optional `instructions`
field so we can preserve the behavior where we direct users to
https://platform.openai.com if `OPENAI_API_KEY` is not set.
* Cleaned up the "welcome message" in the TUI to ensure the model
provider is displayed.
* Updated the docs in `codex-rs/README.md`.
To exercise the chat completions API from OpenAI models, I added the
following to my `config.toml`:
```toml
model = "gpt-4o"
model_provider = "openai-chat-completions"
[model_providers.openai-chat-completions]
name = "OpenAI using Chat Completions"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"
```
Though to test a non-OpenAI provider, I installed ollama with mistral
locally on my Mac because ChatGPT said that would be a good match for my
hardware:
```shell
brew install ollama
ollama serve
ollama pull mistral
```
Then I added the following to my `~/.codex/config.toml`:
```toml
model = "mistral"
model_provider = "ollama"
```
Note this code could certainly use more test coverage, but I want to get
this in so folks can start playing with it.
For reference, I believe https://github.com/openai/codex/pull/247 was
roughly the comparable PR on the TypeScript side.
I installed the GitHub Actions extension for VS Code and it started
giving me lint warnings about this line:
a9adb4175c/.github/workflows/rust-ci.yml (L99)
Using an env var to track the state of individual steps was not great,
so I did some research about GitHub Actions, which led to the discovery
of combining `continue-on-error: true` with
`if: steps.STEP.outcome == 'failure'`.
Apparently there is also a `failure()` status function that is supposed to
make this simpler, but I saw a number of complaints online about it not
working as expected. Checking `outcome` seems more reliable, at the
cost of being slightly more verbose.
https://github.com/openai/codex/pull/855 added the clippy warning to
disallow `unwrap()`, but apparently we were not verifying that tests
were "clippy clean" in CI, so I ended up with a lot of local errors in
VS Code.
This turns on the check in CI and fixes the offenders.
I noticed that sometimes I would enter a new message, but it would not
show up in the conversation history. Even if I focused the conversation
history and tried to scroll it to the bottom, I could not bring it into
view. At first, I was concerned that messages were not making it to the
UI layer, but I added debug statements and verified that was not the
issue.
It turned out that, prior to this PR, lines wider than the viewport took
up multiple lines of vertical space because `wrap()` was set on the
`Paragraph` inside the scroll pane. Unfortunately, that broke our
"scrollbar math," which assumed each `Line` contributes one line of
height in the UI.
This PR removes the `wrap()` but introduces a new issue: you can no longer
see long lines in full without resizing your terminal window. For
now, I filed an issue here:
https://github.com/openai/codex/issues/869
I think the long-term fix is to update our math so that it calculates the
height of a `Line` after it is wrapped, given the current width of the viewport.
Sets submodules to use workspace lints. Added denying `unwrap()` as a
workspace-level lint, which found a couple of cases where we could have
propagated errors. Also manually annotated the ones that looked fine to my eye.
This is the first step in supporting other model providers in the Rust
CLI. Specifically, this PR adds support for the new entries in `Config`
and `ConfigOverrides` to specify a `ModelProviderInfo`, which is the
basic config needed for an LLM provider. This PR does not get us all the
way there yet because `client.rs` still categorically appends
`/responses` to the URL and expects the endpoint to support the OpenAI
Responses API. Will fix that next!
I discovered that I accidentally introduced a change in
https://github.com/openai/codex/pull/829 where we load a fresh `Config`
in the middle of `codex.rs`:
c3e10e180a/codex-rs/core/src/codex.rs (L515-L522)
This is not good because the `Config` could differ from the one that has
the user's overrides specified from the CLI. Also, in unit tests, it
means the `Config` was picking up my personal settings as opposed to
using a vanilla config, which was problematic.
This PR cleans things up by moving the common case where
`Op::ConfigureSession` is derived from `Config` (originally done in
`codex_wrapper.rs`) and making it the standard way to initialize `Codex`
by putting it in `Codex::spawn()`. Note this also eliminates quite a bit
of boilerplate from the tests and relieves the caller of the
responsibility of minting out unique IDs when invoking `submit()`.
These abstractions were originally created exclusively for the REPL,
which was removed in https://github.com/openai/codex/pull/754.
Currently, they create some unnecessary Tokio tasks, so we are better off
without them. (We can always bring them back if we have a new use case.)