Commit Graph

19 Commits

Author SHA1 Message Date
Suyash-K
b37b257e63 gracefully handle SSE parse errors and suppress raw parser code (#367)
Closes #187
Closes #358
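
No implementation details are given in this message, but a minimal sketch of what graceful SSE parse handling can look like (all names below are illustrative, not the actual Codex code):

```ts
// Hypothetical helper: parse one SSE "data:" payload without crashing the stream.
function parseSseData(raw: string): unknown | null {
  try {
    return JSON.parse(raw);
  } catch {
    // Surface a short, friendly note instead of dumping raw parser output.
    console.error("Skipping a malformed SSE event from the server.");
    return null; // callers drop null events and keep reading the stream
  }
}
```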

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-19 07:24:29 -07:00
salama-openai
1a8610cd9e feat: add flex mode option for cost savings (#372)
Adding in an option to turn on flex processing mode to reduce costs when
running the agent.

Bumped the openai typescript version to add the new feature.
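
A minimal sketch of how flex processing can be requested through the OpenAI TypeScript SDK; the exact option Codex exposes isn't shown here, and `service_tier` is an assumption about the underlying request field:

```ts
import OpenAI from "openai";

// Assumption: flex processing is selected via the `service_tier` request field.
async function runWithFlex(prompt: string): Promise<string> {
  const openai = new OpenAI();
  const response = await openai.responses.create({
    model: "o4-mini",
    input: prompt,
    service_tier: "flex", // slower, cheaper processing tier
  });
  return response.output_text;
}
```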

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-04-18 22:15:01 -07:00
Alpha Diop
e2fe2572ba chore: migrate to pnpm for improved monorepo management (#287)
# Migrate to pnpm for improved monorepo management

## Summary
This PR migrates the Codex repository from npm to pnpm, providing faster
dependency installation, better disk space usage, and improved monorepo
management.

## Changes
- Added `pnpm-workspace.yaml` to define workspace packages
- Added `.npmrc` with optimal pnpm configuration
- Updated root package.json with workspace scripts
- Moved resolutions and overrides to the root package.json
- Updated scripts to use pnpm instead of npm
- Added documentation for the migration
- Updated GitHub Actions workflow for pnpm

## Benefits
- **Faster installations**: pnpm is significantly faster than npm
- **Disk space savings**: pnpm's content-addressable store avoids
duplication
- **Strict dependency management**: prevents phantom dependencies
- **Simplified monorepo management**: better workspace coordination
- **Preparation for Turborepo**: as discussed, this is the first step
before adding Turborepo

## Testing
- Verified that `pnpm install` works correctly
- Verified that `pnpm run build` completes successfully
- Ensured all existing functionality is preserved

## Documentation
Added a detailed migration guide in `PNPM_MIGRATION.md` explaining:
- Why we're migrating to pnpm
- How to use pnpm with this repository
- Common commands and workspace-specific commands
- Monorepo structure and configuration

## Next Steps
As discussed, once this change is stable, we can consider adding
Turborepo as a follow-up enhancement.
2025-04-18 16:25:15 -07:00
Michael Bolin
ae5b1b5cb5 add support for -w,--writable-root to add more writable roots for sandbox (#263)
This adds support for a new flag, `-w,--writable-root`, that can be
specified multiple times to _amend_ the list of folders that should be
configured as "writable roots" by the sandbox used in `full-auto` mode.
Values that are passed as relative paths will be resolved to absolute
paths.
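
A rough sketch of how repeated `-w` values could be collected and normalized; the function below is illustrative, not the actual CLI wiring:

```ts
import path from "node:path";

// Assumption: `rawRoots` holds every value passed via -w/--writable-root.
function resolveWritableRoots(rawRoots: string[], defaults: string[]): string[] {
  // Relative paths are resolved against the current working directory,
  // and the result amends (rather than replaces) the default writable roots.
  const extra = rawRoots.map((p) => path.resolve(process.cwd(), p));
  return [...defaults, ...extra];
}

// e.g. `codex --full-auto -w ../shared -w /tmp/cache`
// → defaults plus two absolute extra roots
```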

Incidentally, this required updating a number of the `agent*.test.ts`
files: it feels like some of the setup logic across those tests could be
consolidated.

In my testing, it seems that this might be slightly out of distribution
for the model, as I had to explicitly tell it to run `apply_patch` and
that it had the permissions to write those files (initially, it just
showed me a diff and told me to apply it myself). Nevertheless, I think
this is a good starting point.
2025-04-17 15:39:26 -07:00
Brayden Moon
f3d085aaf8 feat: shell command explanation option (#173)
# Shell Command Explanation Option

## Description
This PR adds an option to explain shell commands when the user is
prompted to approve them (Fixes #110). When reviewing a shell command,
users can now select "Explain this command" to get a detailed
explanation of what the command does before deciding whether to approve
or reject it.

## Changes
- Added a new "EXPLAIN" option to the `ReviewDecision` enum
- Updated the command review UI to include an "Explain this command (x)"
option
- Implemented the logic to send the command to the LLM for explanation
using the same model as the agent (see the sketch after this list)
- Added a display for the explanation in the command review UI
- Updated all relevant components to pass the explanation through the
component tree
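
A hypothetical sketch of the enum addition and the explanation request mentioned above; names and prompt wording are illustrative, not the actual implementation:

```ts
import OpenAI from "openai";

// Assumption: ReviewDecision is a string enum in the approvals module.
enum ReviewDecision {
  APPROVE = "approve",
  REJECT = "reject",
  EXPLAIN = "explain", // new: ask for an explanation before deciding
}

// Hypothetical helper: ask the same model the agent uses to explain a command.
async function explainCommand(model: string, command: string[]): Promise<string> {
  const openai = new OpenAI();
  const response = await openai.chat.completions.create({
    model,
    messages: [
      {
        role: "user",
        content: `Explain what this shell command does:\n${command.join(" ")}`,
      },
    ],
  });
  return response.choices[0]?.message?.content ?? "";
}
```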

## Benefits
- Improves user understanding of shell commands before approving them
- Reduces the risk of approving potentially harmful commands
- Enhances the educational aspect of the tool, helping users learn about
shell commands
- Maintains the same workflow with minimal UI changes

## Testing
- Manually tested the explanation feature with various shell commands
- Verified that the explanation is displayed correctly in the UI
- Confirmed that the user can still approve or reject the command after
viewing the explanation

## Screenshots

![improved_shell_explanation_demo](https://github.com/user-attachments/assets/05923481-29db-4eba-9cc6-5e92301d2be0)


## Additional Notes
The explanation is generated using the same model as the agent, ensuring
consistency in the quality and style of explanations.

---------

Signed-off-by: crazywolf132 <crazywolf132@gmail.com>
2025-04-17 13:28:58 -07:00
Jon Church
693a6f96cf fix: update regex to better match the retry error messages (#266)
I think the retry issue is just that the regex is wrong; check out the
reported error messages folks are seeing:

> message: 'Rate limit reached for o4-mini in organization
org-{redacted} on tokens per min (TPM): Limit 200000, Used 152566,
Requested 60651. Please try again in 3.965s. Visit
https://platform.openai.com/account/rate-limits to learn more.',

The error message uses `try again`, not `retry again`.

peep this regexpal: https://www.regexpal.com/?fam=155648
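
A sketch of the corrected matching, assuming the retry logic also pulls the suggested wait out of the message (the exact regex in the codebase may differ):

```ts
// The API says "Please try again in 3.965s", not "retry again".
const RATE_LIMIT_RETRY_RE = /try again in (\d+(?:\.\d+)?)s/i;

function suggestedDelayMs(errorMessage: string): number | null {
  const match = RATE_LIMIT_RETRY_RE.exec(errorMessage);
  return match ? Math.ceil(parseFloat(match[1]) * 1000) : null;
}

// suggestedDelayMs("... Please try again in 3.965s. Visit ...") → 3965
```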
2025-04-17 13:15:01 -07:00
Mehmet Vecdi Gönül
4e7403e5ea bugfix: additional error handling logic for model errors that occur in stream (#203)
**What is added?**

Additional error handling is added before errors are thrown to be handled
by upstream handlers. The changes improve the user experience and make the
error handling smoother (and more informative).

**Why is it added?**
Before this change, when a user tried to use a model that required prior
setup, the program crashed. A crash is not necessary here; an informative
message is sufficient and improves the user experience. This also adheres
to the specifications stated in the code file by not masking potential
logical errors. Before and after:


![first](https://github.com/user-attachments/assets/0ce7c57d-8159-4cf7-8a53-3062cfd04dc8)

![second](https://github.com/user-attachments/assets/a9f24410-d76d-43d4-a0e2-ec513026843d)

Moreover, AFAIK no logic was present to handle this or a similar issue
in upstream handlers.

**How is it scoped? Why won't this mask other errors?**
The new branch triggers *only* for `invalid_request_error` events whose
`code` is model-related (`model_not_found`).

This also doesn't prevent detection of wrong model names (the case of
masking logical errors), as those would have been caught earlier on.

The code passes tests, lint, and type checks. I believe the relevant
documentation has been added, but I would be more than happy to make
further fixes to the code if necessary.
2025-04-17 08:09:27 -07:00
Brayden Moon
b0ccca5556 fix: allow continuing after interrupting assistant (#178)
## Description
This PR fixes the issue where the CLI can't continue after interrupting
the assistant with ESC ESC (Fixes #114). The problem was caused by
duplicate code in the `cancel()` method and improper state reset after
cancellation.

## Changes
- Fixed duplicate code in the `cancel()` method of the `AgentLoop` class
- Added proper reset of the `currentStream` property in the `cancel()`
method
- Created a new `AbortController` after aborting the current one to
ensure future tool calls work (see the sketch after this list)
- Added a system message to indicate the interruption to the user
- Added a comprehensive test to verify the fix
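
A simplified sketch of the `cancel()` shape described in this list; the class and fields are illustrative stand-ins for the real `AgentLoop`:

```ts
// Assumption: the loop tracks the in-flight stream and an AbortController.
class AgentLoopSketch {
  private currentStream: { controller?: { abort(): void } } | null = null;
  private execAbortController = new AbortController();

  cancel(emitSystemMessage: (text: string) => void): void {
    // Stop the in-flight response and drop the stale reference.
    this.currentStream?.controller?.abort();
    this.currentStream = null;

    // Abort pending tool calls, then create a fresh controller so the
    // next turn does not start in an already-aborted state.
    this.execAbortController.abort();
    this.execAbortController = new AbortController();

    emitSystemMessage("Interrupted. You can keep typing to continue.");
  }
}
```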

## Benefits
- Users can now continue using the CLI after interrupting the assistant
- Improved user experience by providing feedback when interruption
occurs
- Better state management in the agent loop

## Testing
- Added a dedicated test that verifies the agent can process new input
after cancellation
- Manually tested the fix by interrupting the assistant and confirming
that new input is processed correctly

---------

Signed-off-by: crazywolf132 <crazywolf132@gmail.com>
2025-04-16 22:20:19 -07:00
Jake Kay
b5fad66e2c fix: add missing "as" in prompt prefix in agent loop (#186)
# Description

This PR fixes a typo where the prompt prefix for the agent loop was
missing the word "as"

# Changes

* Added missing word "as" within the agent loop prompt prefix

# Benefits

* The prompt is now grammatically correct and clearer

# Testing

* Manually tested the fix
2025-04-16 22:16:16 -07:00
Thibault Sottiaux
47c683480f (feat) exponential back-off when encountering rate limit errors (#153)
...and try to parse the suggested wait time from the error message, since we
don't yet get it in a structured way
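
A sketch of the back-off behavior described above; the wrapper name, retry cap, and rate-limit check are assumptions, not the actual implementation:

```ts
// Hypothetical retry wrapper: exponential back-off, honoring the wait the
// error message suggests ("Please try again in 3.965s") when it is present.
async function withRateLimitRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 5,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      if (attempt >= maxRetries || !/rate limit/i.test(message)) {
        throw err;
      }
      const hinted = /try again in (\d+(?:\.\d+)?)s/i.exec(message);
      const delayMs = hinted
        ? Math.ceil(parseFloat(hinted[1]) * 1000)
        : 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```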

---------

Signed-off-by: Thibault Sottiaux <tibo@openai.com>
2025-04-16 17:37:12 -07:00
Michael Bolin
9b733fc48f Back out @lib indirection in tsconfig.json (#111) 2025-04-16 14:16:53 -07:00
Thibault Sottiaux
1c4e2e19ea (feat) basic retries when hitting rate limit errors (#105)
Signed-off-by: Thibault Sottiaux <tibo@openai.com>
2025-04-16 13:47:23 -07:00
Varun Khalate
71a1ff6ee2 fix: prompt typo (#81)
* fix: developer typo

* fix: typo
2025-04-16 12:43:10 -07:00
Thibault Sottiaux
e323b2cc95 remove rg requirement (#50)
Signed-off-by: Thibault Sottiaux <tibo@openai.com>
2025-04-16 11:37:16 -07:00
Adam Montgomery
94889dd76e (feat) add request error details (#31)
Signed-off-by: Adam Montgomery <montgomery.adam@gmail.com>
2025-04-16 11:23:42 -07:00
Yashraj Yadav
e9f84eab01 (fix) o3 instead of o3-mini (#37)
* o3 instead of o3-mini
2025-04-16 11:18:41 -07:00
Trevor Creech
443ffb7373 update summary to auto (#1) 2025-04-16 10:44:19 -07:00
Thibault Sottiaux
1c26c272c8 Add link to cookbook (#2) 2025-04-16 13:15:46 -04:00
Ilan Bigio
59a180ddec Initial commit
Signed-off-by: Ilan Bigio <ilan@openai.com>
2025-04-16 12:56:08 -04:00