## Description
This PR implements multi-line input support for Codex when it asks for
user feedback (Issue #344). Users can now use Shift+Enter to add new
lines in their responses, making it easier to provide formatted code
snippets, lists, or other structured content.
## Changes
- Replace the single-line TextInput component with the
MultilineTextEditor component in terminal-chat-input.tsx
- Add support for Shift+Enter to create new lines
- Update key handling logic to properly handle history navigation in a
multi-line context
- Add reference to the editor to access cursor position information
- Update help text to inform users about the Shift+Enter functionality
- Add tests for the new functionality
## Testing
- Added new test file (terminal-chat-input-multiline.test.tsx) to test
the multi-line input functionality
- All existing tests continue to pass
- Manually tested the feature to ensure it works as expected
## Fixes
Closes #344
## Screenshots
N/A
## Additional Notes
This implementation maintains backward compatibility while adding the
requested multi-line input functionality. The UI remains clean and
intuitive, with a simple hint about using Shift+Enter for new lines.
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
https://github.com/openai/codex/pull/160 introduced a call to `exec()`
that takes a format string as an argument, but it is not clear that the
expansions within the format string are escaped safely. As written, it
is possible a carefully crafted command (e.g., if `cwd` were `"; && rm
-rf` or something...) could run arbitrary code.
Moving to `spawn()` makes this a bit better, as now at least `spawn()`
itself won't run an arbitrary process, though I suppose `osascript`
itself still could if the value passed to `-e` were abused. I'm not
clear on the escaping rules for AppleScript to ensure that `safePreview`
and `cwd` are injected safely.
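As a hedged illustration of the difference (the actual notification command is not reproduced here; `safePreview` and `cwd` simply stand in for the interpolated values):

```ts
import { spawn } from "node:child_process";

// Risky pattern (what exec() with a format string allows): the values are
// interpolated into a shell string, so a crafted cwd such as `"; rm -rf ...`
// can change which command the shell actually runs.
//   exec(`osascript -e 'display notification "${safePreview}" with title "${cwd}"'`);

// Safer pattern: spawn() takes an argv array and does not invoke a shell, so
// the AppleScript source is passed to osascript as a single literal argument.
// Escaping *inside* the AppleScript string handed to -e is still a separate
// concern, as noted above.
function notify(safePreview: string, cwd: string): void {
  const script = `display notification "${safePreview}" with title "${cwd}"`;
  spawn("osascript", ["-e", script]);
}
```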
This check was lost in https://github.com/openai/codex/pull/287. Both
the root folder and `codex-cli/` have their own `pnpm format` commands
that check the formatting of different things.
Also ran `pnpm format:fix` to fix the formatting violations that got in
while this was disabled in CI.
# Add interactive slash-command autocomplete & navigation in chat input
## Description
This PR enhances the chat input component by adding first-class support
for slash commands (/help, /clear, /compact, etc.) with:
* **Live filtering:** As soon as the user types a leading `/`, a list of
matching commands is shown below the prompt.
* **Arrow-key navigation:** Up/Down arrows cycle through suggestions.
* **Enter to autocomplete:** Pressing Enter on a partial command will
fill it (without submitting) so you can add
arguments or simply press Enter again to execute.
* **Type-safe registry:** A new `slash-commands.ts` file declares all
supported commands in one place, along with TypeScript types to prevent
drift.
* **Validation:** Only registered commands will ever autocomplete or be
suggested; unknown single‑word slash inputs still
show an “Invalid command” system message.
* **Automated tests:**
  * Unit tests for the command registry and prefix filtering
  * Existing tests continue passing with no regressions
## Motivation
Slash commands provide a quick, discoverable way to control the agent
(clearing history, compacting context, opening overlays,
etc.). Before, users had to memorize the exact command or rely on the
generic /help list—autocomplete makes them far more
accessible and reduces typos.
## Changes
* `src/utils/slash-commands.ts` – defines `SlashCommand` and exports a
flat list of supported commands + descriptions (see the sketch after
this list)
* `terminal-chat-input.tsx`
  * Import and type the command registry
  * Render filtered suggestions under the prompt when input starts with `/`
  * Hook into `useInput` to handle Up/Down and Enter for selection & fill
  * Flag to swallow the first Enter (autocomplete) and only submit on
    the next
* Updated tests in `tests/slash-commands.test.ts` to cover registry
contents and filtering logic
* Removed the old JS version and fixed a stray `@ts-expect-error`
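A hedged sketch of what the registry and prefix filtering can look like (the exact `SlashCommand` fields and export names are assumptions, not the real `slash-commands.ts`):

```ts
// Illustrative registry shape; field names are assumptions.
export type SlashCommand = {
  command: string;
  description: string;
};

export const SLASH_COMMANDS: SlashCommand[] = [
  { command: "/help", description: "Show available commands" },
  { command: "/clear", description: "Clear the conversation history" },
  { command: "/compact", description: "Summarize and compact the context" },
];

// Prefix filter that backs the live suggestion list under the prompt.
export function filterSlashCommands(input: string): SlashCommand[] {
  if (!input.startsWith("/")) {
    return [];
  }
  const prefix = input.trim();
  return SLASH_COMMANDS.filter((c) => c.command.startsWith(prefix));
}
```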
## How to test locally
1. Type `/` in the prompt—you should see matching commands.
2. Use arrows to move the highlight, press Enter to fill, then Enter
again to execute.
3. Run the full test suite (`npm test`) to verify no regressions.
## Notes
* Future work could include fuzzy matching, paging long lists, or more
visual styling.
* This change is purely additive and does not affect non‑slash inputs or
existing slash handlers.
---------
Co-authored-by: Fouad Matin <169186268+fouad-openai@users.noreply.github.com>
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Implements https://github.com/openai/codex/issues/392
When the user is in suggest or auto-edit mode and gets an approval
request, they now have an option in the `Shell Command` dialog to:
`Switch approval mode (v)`
That option brings up the standard `Switch approval mode` dialog, lets
the user switch into the desired mode, and then drops them back to the
`Shell Command` dialog's `Allow command?` prompt, so they can approve
the current command and let the agent continue what it was doing without
interruption.
```
╭────────────────────────────────────────────────────────
│Shell Command
│
│$ apply_patch << 'PATCH'
│*** Begin Patch
│*** Update File: foo.txt
│@@ -1 +1 @@
│-foo
│+bar
│*** End Patch
│PATCH
│
│
│Allow command?
│
│ Yes (y)
│ Explain this command (x)
│ Edit or give feedback (e)
│ Switch approval mode (v)
│ No, and keep going (n)
│ No, and stop for now (esc)
╰────────────────────────────────────────────────────────
╭────────────────────────────────────────────────────────
│ Switch approval mode
│ Current mode: suggest
│
│
│
│ ❯ suggest
│ auto-edit
│ full-auto
│ type to search · enter to confirm · esc to cancel
╰────────────────────────────────────────────────────────
```
Adds an option to turn on flex processing mode to reduce costs when
running the agent.
Bumped the openai TypeScript SDK version to add the new feature.
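For context, a hedged sketch of how flex processing is typically requested through the OpenAI SDK's `service_tier` parameter; the flag Codex exposes and where this is wired in are not shown here, so treat the plumbing as an assumption:

```ts
import OpenAI from "openai";

const openai = new OpenAI();

// Flex processing trades higher latency for lower per-token cost; it is
// opted into per request via service_tier.
const response = await openai.responses.create({
  model: "o3",
  input: "Summarize the changes in this pull request.",
  service_tier: "flex",
});

console.log(response.output_text);
```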
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Fix: Shift + Enter no longer prints “[27;2;13~” in the single‑line
input. Validated as working and necessary in Ghostty on Linux.
## Key points
- src/components/vendor/ink-text-input.tsx
  - Added an early handler that recognises the two modifyOtherKeys
    escape sequences (see the sketch after this list):
    - `[13;<mod>u` (mode 2 / CSI-u)
    - `[27;<mod>;13~` (mode 1 / legacy CSI-~)
  - If Ctrl is held (hasCtrl flag) → call onSubmit() (same as plain
    Enter).
  - Otherwise → insert a real newline at the caret (same as Option+Enter).
  - Prevents the raw sequence from being inserted into the buffer.
- src/components/chat/multiline-editor.tsx
  - Replaced non-breaking spaces with normal spaces to satisfy the
    eslint `no-irregular-whitespace` rule (no behaviour change).
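A minimal sketch of recognising those two sequences, assuming the raw chunk arrives as a string (the regexes and handler wiring are illustrative, not the exact code in `ink-text-input.tsx`):

```ts
// "ESC [ 13 ; <mod> u"  (modifyOtherKeys mode 2 / CSI-u)
const CSI_U_ENTER = /^\x1b\[13;(\d+)u$/;
// "ESC [ 27 ; <mod> ; 13 ~"  (modifyOtherKeys mode 1 / legacy CSI-~)
const LEGACY_ENTER = /^\x1b\[27;(\d+);13~$/;

function handleModifiedEnter(
  raw: string,
  onSubmit: () => void,
  insertNewline: () => void,
): boolean {
  const match = CSI_U_ENTER.exec(raw) ?? LEGACY_ENTER.exec(raw);
  if (!match) {
    return false; // not one of the two sequences; normal key handling proceeds
  }
  // xterm encodes modifiers as 1 + bitmask (Shift=1, Alt=2, Ctrl=4, ...).
  const modifierBits = Number(match[1]) - 1;
  const hasCtrl = (modifierBits & 4) !== 0;
  if (hasCtrl) {
    onSubmit(); // Ctrl+Enter submits, same as plain Enter
  } else {
    insertNewline(); // Shift+Enter (and other modifiers) insert a real newline
  }
  return true; // swallow the sequence so the raw bytes never reach the buffer
}
```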
All unit tests (114) and ESLint now pass:
npm test ✔️
npm run lint ✔️
### What is added?
I extend the if-else blocks with an additional condition that checks the
command's validity. This only applies to inputs that start with '/' and
are a single word, so it doesn't restrict the program's previous
behavior. When an invalid command is detected, an error message is
printed with directions for retrieving the command list.
### Why is it added?
There are three main reasons for this change:
**1. Model Hallucination**: When invalid commands are passed as prompts
to models, the models hallucinate behavior. Because invalid commands
previously fell through, the models took them as prompts and
hallucinated that they had completed the requested task. In the example
below, the model thought it had access to a `/clearhistory` tool when in
reality that isn't the case.
A before-and-after comparison is shown below:


**2. Saves Users Credits and Time**: Since invalid commands are no
longer processed by the model, the user doesn't spend money on requests
that could easily have been avoided. The time savings are especially
relevant for reasoning models.
**3. Saves GPU Time**: GPU time is valuable, and it shouldn't be spent
on unnecessary or invalid requests.
### Code Explanation
If no command is matched, we check whether the inputValue starts with
`/`, which indicates the input is a command (the case where it isn't is
addressed below). If the inputValue starts with `/`, we enter the
else-if statement. I used a nested if statement for readability and
easier extension in the future.
There are alternative ways to check besides a regex, but the regex is
short and reads cleanly.
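A rough sketch of the check described above (the command list and function name are hypothetical; the real logic lives inside the input handler's if-else chain):

```ts
// Hypothetical, for illustration only.
const KNOWN_COMMANDS = new Set(["/help", "/clear", "/clearhistory", "/compact"]);

function classifyInput(inputValue: string): "command" | "invalid-command" | "prompt" {
  const trimmed = inputValue.trim();
  // Only single-word inputs beginning with "/" are treated as commands.
  if (/^\/\S+$/.test(trimmed)) {
    return KNOWN_COMMANDS.has(trimmed) ? "command" : "invalid-command";
  }
  // Multi-word text that merely starts with "/" is still sent to the model.
  return "prompt";
}
```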
**Check Conditions**: The reason I only check the single-word (command)
case is to allow prompts that the user might decide to start with `/`
but that aren't commands. The nested if statements also come in handy if
other contributors want to extend this check in the future.
The code passes type, lint and test checks.
Added the ability to compact. Not sure if I should switch the model over
to gpt-4.1 for longer context or if keeping the current model is fine.
Also I'm not sure whether storing the compacted summary as a system
message is best practice; would love feedback 😄
Mentioned in this issue: https://github.com/openai/codex/issues/230
This adds support for a new flag, `-w,--writable-root`, that can be
specified multiple times to _amend_ the list of folders that should be
configured as "writable roots" by the sandbox used in `full-auto` mode.
Values that are passed as relative paths will be resolved to absolute
paths.
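A minimal sketch of the resolution step, assuming the flag values arrive as an array of strings (variable and function names are illustrative):

```ts
import path from "node:path";

// Each -w/--writable-root value is resolved against the current working
// directory so the sandbox always receives absolute paths.
function resolveWritableRoots(flagValues: string[]): string[] {
  return flagValues.map((value) => path.resolve(process.cwd(), value));
}
```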
Incidentally, this required updating a number of the `agent*.test.ts`
files: it feels like some of the setup logic across those tests could be
consolidated.
In my testing, it seems that this might be slightly out of distribution
for the model, as I had to explicitly tell it to run `apply_patch` and
that it had the permissions to write those files (initially, it just
showed me a diff and told me to apply it myself). Nevertheless, I think
this is a good starting point.
# Shell Command Explanation Option
## Description
This PR adds an option to explain shell commands when the user is
prompted to approve them (Fixes #110). When reviewing a shell command,
users can now select "Explain this command" to get a detailed
explanation of what the command does before deciding whether to approve
or reject it.
## Changes
- Added a new "EXPLAIN" option to the `ReviewDecision` enum (see the
sketch after this list)
- Updated the command review UI to include an "Explain this command (x)"
option
- Implemented the logic to send the command to the LLM for explanation
using the same model as the agent
- Added a display for the explanation in the command review UI
- Updated all relevant components to pass the explanation through the
component tree
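A hedged sketch of the `ReviewDecision` change described in the first bullet; the other members and their string values are assumptions:

```ts
// Illustrative only; the real enum lives in the approvals code.
enum ReviewDecision {
  YES = "yes",
  NO_CONTINUE = "no_continue",
  NO_EXIT = "no_exit",
  EXPLAIN = "explain", // new: ask the model to explain the command before deciding
}
```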
## Benefits
- Improves user understanding of shell commands before approving them
- Reduces the risk of approving potentially harmful commands
- Enhances the educational aspect of the tool, helping users learn about
shell commands
- Maintains the same workflow with minimal UI changes
## Testing
- Manually tested the explanation feature with various shell commands
- Verified that the explanation is displayed correctly in the UI
- Confirmed that the user can still approve or reject the command after
viewing the explanation
## Screenshots

## Additional Notes
The explanation is generated using the same model as the agent, ensuring
consistency in the quality and style of explanations.
---------
Signed-off-by: crazywolf132 <crazywolf132@gmail.com>
This PR adds a command history persistence feature to Codex CLI that:
1. **Stores command history**: Commands are saved to
`~/.codex/history.json` and persist between CLI sessions.
2. **Navigates history**: Users can use the up/down arrow keys to
navigate through command history, similar to a traditional shell.
3. **Filters sensitive data**: Built-in regex patterns prevent commands
containing API keys, passwords, or tokens from being saved.
4. **Configurable**: Added configuration options for history size,
enabling/disabling history, and custom regex patterns for sensitive
content.
5. **New command**: Added `/clearhistory` command to clear command
history.
## Code Changes
- Added `src/utils/storage/command-history.ts` with functions for
history management (see the sketch after this list)
- Extended config system to support history settings
- Updated terminal input components to use persistent history
- Added help text for the new `/clearhistory` command
- Added CLAUDE.md file for guidance when working with the codebase
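A rough sketch of the sensitive-data filter described above (the default patterns and function name are assumptions, not the exact contents of `command-history.ts`):

```ts
// Illustrative defaults; the real patterns are configurable.
const DEFAULT_SENSITIVE_PATTERNS: RegExp[] = [
  /api[_-]?key/i,
  /password/i,
  /secret/i,
  /\btoken\b/i,
];

// Returns false for anything that looks like it contains credentials, so the
// entry is dropped instead of being written to ~/.codex/history.json.
function shouldSaveToHistory(
  command: string,
  patterns: RegExp[] = DEFAULT_SENSITIVE_PATTERNS,
): boolean {
  return !patterns.some((pattern) => pattern.test(command));
}
```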
## Testing
- All tests are passing
- Core functionality works with both input components (standard and
multiline)
- History navigation behaves correctly at line boundaries with the
multiline editor
I had Codex read #182 and draft a PR to fix it. This is its suggested
approach. I've tested it and it works. It removes the purple `thinking
for 386s` type lines entirely, and replaces them with a single yellow
`thinking for #s` line:
```
thinking for 31s
╭────────────────────────────────────────╮
│( ● ) Thinking..
╰────────────────────────────────────────╯
```
I've been using it that way via `npm run dev`, and prefer it.
## What
Empty "reasoning" updates were showing up as blank lines in the terminal
chat history. We now short-circuit and return `null` whenever
`message.summary` is empty, so those no-ops are suppressed.
## How
- In `TerminalChatResponseReasoning`, return early if `message.summary`
is falsy or empty.
- In `TerminalMessageHistory`, drop any reasoning items whose
`summary.length === 0`.
- Swapped out the loose `any` cast for a safer `unknown`-based cast.
- Rolled back the temporary Vitest script hacks that were causing stack
overflows.
## Why
Cluttering the chat with empty lines was confusing; this change ensures
only real reasoning text is rendered.
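A minimal sketch of the early return described above (the component name follows the PR text; the summary type and rendering are assumptions):

```tsx
import React from "react";
import { Text } from "ink";

// Assumed shape: reasoning items carry an array of summary entries.
type ReasoningMessage = { summary?: Array<{ text: string }> };

export function TerminalChatResponseReasoning({ message }: { message: ReasoningMessage }) {
  // Short-circuit: an empty or missing summary renders nothing instead of a blank line.
  if (!message.summary || message.summary.length === 0) {
    return null;
  }
  return <Text dimColor>{message.summary.map((s) => s.text).join("\n")}</Text>;
}
```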
Reference: openai/codex#182
---------
Co-authored-by: Thibault Sottiaux <tibo@openai.com>
## Description
This PR fixes the issue where the CLI can't continue after interrupting
the assistant with ESC ESC (Fixes #114). The problem was caused by
duplicate code in the `cancel()` method and improper state reset after
cancellation.
## Changes
- Fixed duplicate code in the `cancel()` method of the `AgentLoop` class
- Added proper reset of the `currentStream` property in the `cancel()`
method
- Created a new `AbortController` after aborting the current one to
ensure future tool calls work (see the sketch after this list)
- Added a system message to indicate the interruption to the user
- Added a comprehensive test to verify the fix
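A minimal sketch of the cancellation pattern described in the list above (field names are assumptions; the real logic lives in `AgentLoop.cancel()`):

```ts
// Illustrative shape: abort in-flight work, clear the stream reference, then
// create a fresh controller so future tool calls are not born pre-aborted.
class AgentLoopSketch {
  private abortController = new AbortController();
  private currentStream: unknown = null;

  cancel(): void {
    this.abortController.abort();
    this.currentStream = null;
    // Without a new controller, every later request would see an aborted signal.
    this.abortController = new AbortController();
  }
}
```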
## Benefits
- Users can now continue using the CLI after interrupting the assistant
- Improved user experience by providing feedback when interruption
occurs
- Better state management in the agent loop
## Testing
- Added a dedicated test that verifies the agent can process new input
after cancellation
- Manually tested the fix by interrupting the assistant and confirming
that new input is processed correctly
---------
Signed-off-by: crazywolf132 <crazywolf132@gmail.com>
Previously, `parseToolCall()` was using `computeAutoApproval()`, which
was a somewhat parallel implementation of `canAutoApprove()` in order to
get `SafeCommandReason` metadata for presenting information to the user.
The only function that was using `SafeCommandReason` was
`useMessageGrouping()`, but it turns out that function was unused, so
this PR removes `computeAutoApproval()` and all code related to it.
More importantly, I believe this fixes
https://github.com/openai/codex/issues/87 because
`computeAutoApproval()` was calling `parse()` from `shell-quote` without
wrapping it in a try-catch. This PR updates `canAutoApprove()` to use a
tighter try-catch block that is specific to `parse()` and returns an
appropriate `SafetyAssessment` in the event of an error, based on the
`ApprovalPolicy`.
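A rough sketch of the tightened try/catch (the `SafetyAssessment` shape and the fallback decision are assumptions; the real behavior depends on the `ApprovalPolicy`):

```ts
import { parse } from "shell-quote";

type SafetyAssessment =
  | { type: "auto-approve"; reason: string }
  | { type: "ask-user"; reason?: string };

// The try/catch is scoped to parse() alone, so a malformed command string can
// no longer throw out of canAutoApprove(); it falls back to asking the user.
function tryParse(command: string): ReturnType<typeof parse> | undefined {
  try {
    return parse(command);
  } catch {
    return undefined;
  }
}

function assessCommand(command: string): SafetyAssessment {
  const tokens = tryParse(command);
  if (tokens === undefined) {
    return { type: "ask-user", reason: "command could not be parsed" };
  }
  // ...the rest of the safety checks would inspect `tokens` here.
  return { type: "ask-user" };
}
```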
Signed-off-by: Michael Bolin <mbolin@openai.com>