Adds an option to turn on flex processing mode to reduce costs when
running the agent.
Bumps the openai TypeScript package version to pick up the new feature.
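For context, the OpenAI API selects flex processing via the `service_tier`
request parameter. A minimal sketch of opting in from the TypeScript SDK
(the surrounding wiring and timeout policy are assumptions, not the actual
change):

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Hypothetical: opt into flex processing when the user enables the option.
// Flex trades latency for lower cost, so a generous timeout is advisable.
const response = await client.responses.create(
  {
    model: "o3",
    input: "Summarize the failing test output.",
    service_tier: "flex", // assumption: gated behind the new option
  },
  { timeout: 15 * 60 * 1000 }, // flex calls can queue longer than default
);

console.log(response.output_text);
```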
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
# Migrate to pnpm for improved monorepo management
## Summary
This PR migrates the Codex repository from npm to pnpm, providing faster
dependency installation, better disk space usage, and improved monorepo
management.
## Changes
- Added `pnpm-workspace.yaml` to define workspace packages (see the
sketch after this list)
- Added `.npmrc` with optimal pnpm configuration
- Updated root package.json with workspace scripts
- Moved resolutions and overrides to the root package.json
- Updated scripts to use pnpm instead of npm
- Added documentation for the migration
- Updated GitHub Actions workflow for pnpm
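For reference, a sketch of what the workspace definition might look like
(the package globs are illustrative, not the exact file contents):

```yaml
# pnpm-workspace.yaml (illustrative; the real globs may differ)
packages:
  - "codex-cli"
  - "packages/*"
```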
## Benefits
- **Faster installations**: pnpm is significantly faster than npm
- **Disk space savings**: pnpm's content-addressable store avoids
duplication
- **Strict dependency management**: prevents phantom dependencies
- **Simplified monorepo management**: better workspace coordination
- **Preparation for Turborepo**: as discussed, this is the first step
before adding Turborepo
## Testing
- Verified that `pnpm install` works correctly
- Verified that `pnpm run build` completes successfully
- Ensured all existing functionality is preserved
## Documentation
Added a detailed migration guide in `PNPM_MIGRATION.md` explaining:
- Why we're migrating to pnpm
- How to use pnpm with this repository
- Common commands and workspace-specific commands
- Monorepo structure and configuration
## Next Steps
As discussed, once this change is stable, we can consider adding
Turborepo as a follow-up enhancement.
This adds support for a new flag, `-w,--writable-root`, that can be
specified multiple times to _amend_ the list of folders that should be
configured as "writable roots" by the sandbox used in `full-auto` mode.
Values that are passed as relative paths will be resolved to absolute
paths.
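A sketch of that resolution step (illustrative; the real logic lives in
the CLI's flag handling):

```typescript
import * as path from "node:path";

// Each `--writable-root` value is normalized to an absolute path so the
// sandbox receives unambiguous roots regardless of the invocation cwd.
function resolveWritableRoots(values: Array<string>): Array<string> {
  return values.map((root) => path.resolve(process.cwd(), root));
}

// e.g. `codex -w ../shared -w /tmp/out` run from /home/user/project yields
// ["/home/user/shared", "/tmp/out"]
```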
Incidentally, this required updating a number of the `agent*.test.ts`
files: it feels like some of the setup logic across those tests could be
consolidated.
In my testing, it seems that this might be slightly out of distribution
for the model, as I had to explicitly tell it to run `apply_patch` and
that it had the permissions to write those files (initially, it just
showed me a diff and told me to apply it myself). Nevertheless, I think
this is a good starting point.
# Shell Command Explanation Option
## Description
This PR adds an option to explain shell commands when the user is
prompted to approve them (Fixes #110). When reviewing a shell command,
users can now select "Explain this command" to get a detailed
explanation of what the command does before deciding whether to approve
or reject it.
## Changes
- Added a new "EXPLAIN" option to the `ReviewDecision` enum (sketched
after this list)
- Updated the command review UI to include an "Explain this command (x)"
option
- Implemented the logic to send the command to the LLM for explanation
using the same model as the agent
- Added a display for the explanation in the command review UI
- Updated all relevant components to pass the explanation through the
component tree
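A minimal sketch of the enum change (the existing member names and values
are assumptions; only the new `EXPLAIN` entry is the point):

```typescript
// Possible shape after this change; existing members are paraphrased.
export enum ReviewDecision {
  YES = "yes",
  NO_CONTINUE = "no-continue",
  NO_EXIT = "no-exit",
  EXPLAIN = "explain", // new: ask the model to explain the command first
}
```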
## Benefits
- Improves user understanding of shell commands before approving them
- Reduces the risk of approving potentially harmful commands
- Enhances the educational aspect of the tool, helping users learn about
shell commands
- Maintains the same workflow with minimal UI changes
## Testing
- Manually tested the explanation feature with various shell commands
- Verified that the explanation is displayed correctly in the UI
- Confirmed that the user can still approve or reject the command after
viewing the explanation
## Additional Notes
The explanation is generated using the same model as the agent, ensuring
consistency in the quality and style of explanations.
---------
Signed-off-by: crazywolf132 <crazywolf132@gmail.com>
I think the retry issue is just that the regex is wrong; check out the
error messages folks are reporting:
> message: 'Rate limit reached for o4-mini in organization
org-{redacted} on tokens per min (TPM): Limit 200000, Used 152566,
Requested 60651. Please try again in 3.965s. Visit
https://platform.openai.com/account/rate-limits to learn more.',
The error message says `try again`, not `retry again`.
peep this regexpal: https://www.regexpal.com/?fam=155648
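A pattern keyed on the actual wording matches the message above
(illustrative; the constant name is made up):

```typescript
// Matches "Please try again in 3.965s." — a pattern expecting
// "retry again" never fires on this message.
const RETRY_AFTER_RE = /try again in (\d+(?:\.\d+)?)s/i;

const msg =
  "Rate limit reached for o4-mini ... Please try again in 3.965s. Visit ...";
console.log(msg.match(RETRY_AFTER_RE)?.[1]); // "3.965"
```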
**What is added?**
Additional error handling is added before the errors are thrown to
upstream handlers. The changes improve the user experience and make the
error handling smoother (and more informative).
**Why is it added?**
Before this addition, when a user tried to use a model that required
prior setup, the program crashed. A crash is unnecessary here; an
informative message is sufficient and improves the user experience. This
also adheres to the specifications stated in the code file by not masking
potential logical errors. Before and after:

*(before/after screenshots not shown)*
Moreover, AFAIK no logic was present to handle this or a similar issue
in upstream handlers.
**How is it scoped? Why won't this mask other errors?**
The new branch triggers *only* for `invalid_request_error` events whose
`code` is model-related (`model_not_found`).
This also doesn't prevent the detection (for the case of masking logical
errors) of wrong model names, as they would have been caught earlier on.
The code passes tests, lint, and type checks. I believe the relevant
documentation is included, but I would be more than happy to make further
fixes if necessary.
## Description
This PR fixes the issue where the CLI can't continue after interrupting
the assistant with ESC ESC (Fixes #114). The problem was caused by
duplicate code in the `cancel()` method and improper state reset after
cancellation.
## Changes
- Fixed duplicate code in the `cancel()` method of the `AgentLoop` class
- Added proper reset of the `currentStream` property in the `cancel()`
method
- Created a new `AbortController` after aborting the current one to
ensure future tool calls work (see the sketch after this list)
- Added a system message to indicate the interruption to the user
- Added a comprehensive test to verify the fix
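A reduced sketch of the controller reset (names follow the description
above, not the exact source):

```typescript
// Illustrative: abort in-flight work, then create a fresh AbortController
// so the next request gets a live signal instead of an already-aborted one.
class AgentLoop {
  private abortController = new AbortController();
  private currentStream: unknown = null;

  cancel(): void {
    this.currentStream = null; // drop the stale stream reference
    this.abortController.abort(); // stop in-flight requests and tool calls
    // Without this line the aborted controller would be reused and every
    // subsequent call would fail immediately:
    this.abortController = new AbortController();
  }

  get signal(): AbortSignal {
    return this.abortController.signal;
  }
}
```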
## Benefits
- Users can now continue using the CLI after interrupting the assistant
- Improved user experience by providing feedback when interruption
occurs
- Better state management in the agent loop
## Testing
- Added a dedicated test that verifies the agent can process new input
after cancellation
- Manually tested the fix by interrupting the assistant and confirming
that new input is processed correctly
---------
Signed-off-by: crazywolf132 <crazywolf132@gmail.com>
# Description
This PR fixes a typo where the prompt prefix for the agent loop was
missing the word "as".
# Changes
* Added missing word "as" within the agent loop prompt prefix
# Benefits
* The prompt is now grammatically correct and clearer
# Testing
* Manually tested the fix
...and try to parse the suggested time from the error message while we
don't yet have it in a structured form.
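A sketch of that parsing with a backoff fallback (the helper name and
fallback policy are assumptions):

```typescript
// Pull the server-suggested wait out of the error text; fall back to
// exponential backoff when the message doesn't include one.
function suggestedDelayMs(message: string, attempt: number): number {
  const m = /try again in (\d+(?:\.\d+)?)s/i.exec(message);
  return m ? Math.ceil(parseFloat(m[1]) * 1000) : 1000 * 2 ** attempt;
}
```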
---------
Signed-off-by: Thibault Sottiaux <tibo@openai.com>