Adds an option to turn on flex processing mode to reduce costs when
running the agent.
Bumps the openai TypeScript package version to pick up the new feature.
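For context, flex processing in the OpenAI API is selected with the
`service_tier` parameter; a rough sketch of a request using it (the
CLI's actual config plumbing is not shown here):

```ts
import OpenAI from "openai";

// Rough illustration only; the CLI exposes this through its own
// config/flag rather than a hard-coded call like this.
async function runWithFlex(prompt: string): Promise<string> {
  const client = new OpenAI();
  const completion = await client.chat.completions.create({
    model: "o3",
    service_tier: "flex", // cheaper, potentially slower processing tier
    messages: [{ role: "user", content: prompt }],
  });
  return completion.choices[0]?.message?.content ?? "";
}
```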
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Fix: Shift + Enter no longer prints “[27;2;13~” in the single‑line
input. Validated as working and necessary in Ghostty on Linux.
## Key points
- src/components/vendor/ink-text-input.tsx
- Added an early handler that recognises the two modifyOtherKeys
escape sequences (see the sketch below):
  - `ESC[13;<mod>u` (mode 2 / CSI-u)
  - `ESC[27;<mod>;13~` (mode 1 / legacy CSI-~)
- If Ctrl is held (hasCtrl flag) → call onSubmit() (same as plain
Enter).
- Otherwise → insert a real newline at the caret (same as Option+Enter).
- Prevents the raw sequence from being inserted into the buffer.
- src/components/chat/multiline-editor.tsx
- Replaced non‑breaking spaces with normal spaces to satisfy eslint
no‑irregular‑whitespace rule (no behaviour change).
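A sketch of the handler logic described above; the real code lives in
`ink-text-input.tsx` and may differ in detail:

```ts
// Recognise the two Enter-with-modifiers escape sequences and decide
// what to do with them. Returns true if the sequence was handled.
function handleEnterEscapeSequence(
  input: string,
  onSubmit: () => void,
  insertNewline: () => void,
): boolean {
  // CSI-u (mode 2):        ESC [ 13 ; <mod> u
  // Legacy CSI-~ (mode 1): ESC [ 27 ; <mod> ; 13 ~
  const match =
    /^\x1b?\[13;(\d+)u$/.exec(input) ?? /^\x1b?\[27;(\d+);13~$/.exec(input);
  if (!match) {
    return false; // not one of the two sequences; normal handling applies
  }
  const modifiers = Number(match[1]) - 1; // xterm encodes modifiers as bitmask + 1
  const hasCtrl = (modifiers & 4) !== 0;
  if (hasCtrl) {
    onSubmit(); // Ctrl+Enter submits, same as plain Enter
  } else {
    insertNewline(); // Shift+Enter inserts a real newline at the caret
  }
  return true; // swallow the raw sequence so it never reaches the buffer
}
```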
All unit tests (114) and ESLint now pass:
npm test ✔️
npm run lint ✔️
### What is added?
I extend the if-else blocks with an additional condition that checks the
command's validity. This only applies to entered inputs that start with
'/' and are a single word, so it doesn't restrict anything the program
allowed before. When an invalid command is detected, an error message is
printed along with directions for retrieving the command list.
### Why is it added?
There are three main reasons for this change:
**1. Model Hallucination**: When invalid commands are passed as prompts
to models, the models hallucinate behavior. Since invalid commands
previously fell through, the models took them as prompts and hallucinated
that they had completed the prompted task. An example of this behavior is
below: in this case, the model thought it had access to a
`/clearhistory` tool when in reality that isn't the case.
A before-and-after comparison of this change is shown below:


**2. Saves Users Credits and Time**: Since false and invalid commands
aren't processed by the model, the user doesn't spend money on requests
that could easily have been avoided. The time savings are especially
significant for reasoning models.
**3. Saves GPU Time**: GPU time is valuable, and it is not necessary to
spend it on unnecessary/invalid requests.
### Code Explanation
If no command is matched, we check whether the inputValue starts with
`/`, which indicates the input is a command (I will address the case
where it isn't below). If the inputValue starts with `/`, we enter the
else-if statement. I used a nested if statement for readability and
further extensibility in the future.
There are alternative ways to check besides regex, but regex keeps the
code short and clean.
**Check Conditions**: The reason I only check the single-word (command)
case is to allow prompts where the user might decide to start with `/`
but isn't entering a command. The nested if statements also come in
handy if other contributors want to extend this checking in the future
(see the sketch below).
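A minimal sketch of the check; the command list and error handling below
are placeholders, not the exact names used in the CLI:

```ts
// Placeholder command list for illustration.
const SLASH_COMMANDS = ["/help", "/clear", "/clearhistory", "/compact"];

function isUnknownSlashCommand(inputValue: string): boolean {
  const trimmed = inputValue.trim();
  // Only single-word inputs starting with "/" are treated as commands,
  // so prompts that merely begin with "/" still go to the model.
  if (!/^\/\S+$/.test(trimmed)) {
    return false;
  }
  return !SLASH_COMMANDS.includes(trimmed);
}

// Inside the input handler (pseudo-usage):
// if (isUnknownSlashCommand(value)) {
//   showError(`Invalid command "${value}". Use /help to see the command list.`);
//   return; // never forwarded to the model
// }
```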
The code passes type, lint and test checks.
Added the ability to compact. Not sure if I should switch the model over
to gpt-4.1 for longer context or if keeping the current model is fine.
Also, I'm not sure whether storing the compacted summary as a system
message is best practice; would love feedback 😄
Mentioned in this issue: https://github.com/openai/codex/issues/230
Updated the cursor position in the user input box so that it moves to
the end of the text when the user navigates the input history with the
arrow keys, to better match the behavior of a terminal.
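A minimal sketch of the intended behaviour, assuming hypothetical setter
names rather than the actual component internals:

```ts
// When an arrow key recalls a history entry, place the caret at the end
// of the recalled text, the way a shell does.
function recallHistoryEntry(
  entry: string,
  setValue: (value: string) => void,
  setCursorOffset: (offset: number) => void,
): void {
  setValue(entry);
  setCursorOffset(entry.length); // cursor jumps to the end of the text
}
```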
This adds support for a new flag, `-w,--writable-root`, that can be
specified multiple times to _amend_ the list of folders that should be
configured as "writable roots" by the sandbox used in `full-auto` mode.
Values that are passed as relative paths will be resolved to absolute
paths.
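A rough sketch of the path handling, using a hypothetical helper rather
than the actual CLI option wiring:

```ts
import path from "path";

// Collect repeated -w/--writable-root values and make them absolute so
// the sandbox always receives absolute writable roots.
function resolveWritableRoots(rawValues: Array<string>): Array<string> {
  return rawValues.map((value) => path.resolve(process.cwd(), value));
}

// e.g. `-w build -w /tmp/out` → ["/current/dir/build", "/tmp/out"]
```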
Incidentally, this required updating a number of the `agent*.test.ts`
files: it feels like some of the setup logic across those tests could be
consolidated.
In my testing, it seems that this might be slightly out of distribution
for the model, as I had to explicitly tell it to run `apply_patch` and
that it had the permissions to write those files (initially, it just
showed me a diff and told me to apply it myself). Nevertheless, I think
this is a good starting point.
# Shell Command Explanation Option
## Description
This PR adds an option to explain shell commands when the user is
prompted to approve them (Fixes #110). When reviewing a shell command,
users can now select "Explain this command" to get a detailed
explanation of what the command does before deciding whether to approve
or reject it.
## Changes
- Added a new "EXPLAIN" option to the `ReviewDecision` enum
- Updated the command review UI to include an "Explain this command (x)"
option
- Implemented the logic to send the command to the LLM for explanation
using the same model as the agent
- Added a display for the explanation in the command review UI
- Updated all relevant components to pass the explanation through the
component tree
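A hedged sketch of the explanation flow; the enum values and helper
below are assumptions based on this description, not the actual source:

```ts
import OpenAI from "openai";

// The PR adds an EXPLAIN option alongside the existing decisions; the
// exact enum members here are assumptions.
enum ReviewDecision {
  APPROVE = "approve",
  REJECT = "reject",
  EXPLAIN = "explain",
}

// Ask the same model that drives the agent to explain the command.
async function explainCommand(command: string, model: string): Promise<string> {
  const client = new OpenAI();
  const completion = await client.chat.completions.create({
    model,
    messages: [
      {
        role: "user",
        content: `Explain what this shell command does before I decide whether to run it:\n\n${command}`,
      },
    ],
  });
  return completion.choices[0]?.message?.content ?? "";
}

// Pseudo-usage in the review flow:
// if (decision === ReviewDecision.EXPLAIN) {
//   showExplanation(await explainCommand(command, agentModel));
// }
```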
## Benefits
- Improves user understanding of shell commands before approving them
- Reduces the risk of approving potentially harmful commands
- Enhances the educational aspect of the tool, helping users learn about
shell commands
- Maintains the same workflow with minimal UI changes
## Testing
- Manually tested the explanation feature with various shell commands
- Verified that the explanation is displayed correctly in the UI
- Confirmed that the user can still approve or reject the command after
viewing the explanation
## Screenshots

## Additional Notes
The explanation is generated using the same model as the agent, ensuring
consistency in the quality and style of explanations.
---------
Signed-off-by: crazywolf132 <crazywolf132@gmail.com>
This PR adds a command history persistence feature to Codex CLI that:
1. **Stores command history**: Commands are saved to
`~/.codex/history.json` and persist between CLI sessions.
2. **Navigates history**: Users can use the up/down arrow keys to
navigate through command history, similar to a traditional shell.
3. **Filters sensitive data**: Built-in regex patterns prevent commands
containing API keys, passwords, or tokens from being saved.
4. **Configurable**: Added configuration options for history size,
enabling/disabling history, and custom regex patterns for sensitive
content.
5. **New command**: Added `/clearhistory` command to clear command
history.
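A simplified sketch of the save path with sensitive-data filtering; the
file location and default patterns come from the description above, and
the function names are placeholders:

```ts
import fs from "fs";
import os from "os";
import path from "path";

const HISTORY_FILE = path.join(os.homedir(), ".codex", "history.json");

// Default filters; custom patterns can be supplied via config.
const SENSITIVE_PATTERNS = [/api[_-]?key/i, /password/i, /token/i];

function saveToHistory(command: string, maxSize = 1000): void {
  // Never persist anything that looks like it contains a secret.
  if (SENSITIVE_PATTERNS.some((pattern) => pattern.test(command))) {
    return;
  }
  const existing: Array<string> = fs.existsSync(HISTORY_FILE)
    ? JSON.parse(fs.readFileSync(HISTORY_FILE, "utf8"))
    : [];
  const updated = [...existing, command].slice(-maxSize);
  fs.mkdirSync(path.dirname(HISTORY_FILE), { recursive: true });
  fs.writeFileSync(HISTORY_FILE, JSON.stringify(updated, null, 2));
}
```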
## Code Changes
- Added `src/utils/storage/command-history.ts` with functions for
history management
- Extended config system to support history settings
- Updated terminal input components to use persistent history
- Added help text for the new `/clearhistory` command
- Added CLAUDE.md file for guidance when working with the codebase
## Testing
- All tests are passing
- Core functionality works with both input components (standard and
multiline)
- History navigation behaves correctly at line boundaries with the
multiline editor
I had Codex read #182 and draft a PR to fix it. This is its suggested
approach. I've tested it and it works. It removes the purple `thinking
for 386s` type lines entirely, and replaces them with a single yellow
`thinking for #s` line:
```
thinking for 31s
╭────────────────────────────────────────╮
│( ● ) Thinking..
╰────────────────────────────────────────╯
```
prompt. I've been using it that way via `npm run dev`, and prefer it.
## What
Empty "reasoning" updates were showing up as blank lines in the terminal
chat history. We now short-circuit and return `null` whenever
`message.summary` is empty, so those no-ops are suppressed.
## How
- In `TerminalChatResponseReasoning`, return early if `message.summary`
is falsy or empty.
- In `TerminalMessageHistory`, drop any reasoning items whose
`summary.length === 0`.
- Swapped out the loose `any` cast for a safer `unknown`-based cast.
- Rolled back the temporary Vitest script hacks that were causing stack
overflows.
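A minimal sketch of the guard, with the summary shape assumed from this
description rather than the actual component types:

```tsx
import React from "react";
import { Text } from "ink";

type ReasoningMessage = { summary?: Array<{ text: string }> };

function TerminalChatResponseReasoning({
  message,
}: {
  message: ReasoningMessage;
}): JSX.Element | null {
  // Short-circuit: nothing to render for empty reasoning updates.
  if (!message.summary || message.summary.length === 0) {
    return null;
  }
  return (
    <Text color="yellow">
      {message.summary.map((item) => item.text).join("\n")}
    </Text>
  );
}
```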
## Why
Cluttering the chat with empty lines was confusing; this change ensures
only real reasoning text is rendered.
Reference: openai/codex#182
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
## Description
This PR fixes the issue where the CLI can't continue after interrupting
the assistant with ESC ESC (Fixes #114). The problem was caused by
duplicate code in the `cancel()` method and improper state reset after
cancellation.
## Changes
- Fixed duplicate code in the `cancel()` method of the `AgentLoop` class
- Added proper reset of the `currentStream` property in the `cancel()`
method
- Created a new `AbortController` after aborting the current one to
ensure future tool calls work
- Added a system message to indicate the interruption to the user
- Added a comprehensive test to verify the fix
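A hedged sketch of the `cancel()` reset; the field names below are
assumptions, not the actual `AgentLoop` internals:

```ts
class AgentLoopSketch {
  private abortController = new AbortController();
  private currentStream: unknown = null;

  cancel(): void {
    // Abort whatever request is in flight and drop the stale stream.
    this.abortController.abort();
    this.currentStream = null;
    // Re-create the controller so future tool calls don't inherit an
    // already-aborted signal.
    this.abortController = new AbortController();
  }
}
```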
## Benefits
- Users can now continue using the CLI after interrupting the assistant
- Improved user experience by providing feedback when interruption
occurs
- Better state management in the agent loop
## Testing
- Added a dedicated test that verifies the agent can process new input
after cancellation
- Manually tested the fix by interrupting the assistant and confirming
that new input is processed correctly
---------
Signed-off-by: crazywolf132 <crazywolf132@gmail.com>
Previously, `parseToolCall()` was using `computeAutoApproval()`, which
was a somewhat parallel implementation of `canAutoApprove()` in order to
get `SafeCommandReason` metadata for presenting information to the user.
The only function that was using `SafeCommandReason` was
`useMessageGrouping()`, but it turns out that function was unused, so
this PR removes `computeAutoApproval()` and all code related to it.
More importantly, I believe this fixes
https://github.com/openai/codex/issues/87 because
`computeAutoApproval()` was calling `parse()` from `shell-quote` without
wrapping it in a try-catch. This PR updates `canAutoApprove()` to use a
tighter try-catch block that is specific to `parse()` and returns an
appropriate `SafetyAssessment` in the event of an error, based on the
`ApprovalPolicy`.
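A sketch of the tighter try/catch around `parse()`; the
`SafetyAssessment` shape and the fallback are assumptions based on this
description:

```ts
import { parse } from "shell-quote";

// Assumed result shape; the real SafetyAssessment type differs.
type SafetyAssessment = { type: "auto-approve" } | { type: "ask-user" };

function checkCommandParses(command: string): SafetyAssessment | null {
  try {
    parse(command);
    return null; // parsed fine; continue with the normal safety checks
  } catch {
    // shell-quote threw, so we can't reason about the command's safety.
    // Never auto-approve it; the real code picks the exact fallback
    // based on the ApprovalPolicy.
    return { type: "ask-user" };
  }
}
```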
Signed-off-by: Michael Bolin <mbolin@openai.com>