bugfix: remove redundant thinking updates and put a thinking timer above the prompt instead (#216)

I had Codex read #182 and draft a PR to fix it. This is its suggested
approach. I've tested it and it works. It removes the purple `thinking
for 386s` type lines entirely, and replaces them with a single yellow
`thinking for #s` line above the prompt:
```
thinking for 31s
╭────────────────────────────────────────╮
│(  ●   )  Thinking..                    │
╰────────────────────────────────────────╯
```
I've been using it that way via `npm run dev`, and prefer it.

## What

Empty "reasoning" updates were showing up as blank lines in the terminal
chat history. We now short-circuit and return `null` whenever
`message.summary` is empty, so those no-ops are suppressed.
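
For illustration, the render path now effectively starts with that guard. Below is a minimal sketch, not the exact component: the simplified `ReasoningItem` type stands in for the SDK's `ResponseReasoningItem`, and the JSX body is trimmed down.

```tsx
import React from "react";
import { Box, Text } from "ink";

// Simplified stand-in for the SDK's reasoning item type (illustration only).
type ReasoningItem = {
  summary?: Array<{ text: string }>;
  duration_ms?: number;
};

export function TerminalChatResponseReasoning({
  message,
}: {
  message: ReasoningItem;
}): React.ReactElement | null {
  // Only render when there is a reasoning summary; empty updates become no-ops.
  if (!message.summary || message.summary.length === 0) {
    return null;
  }
  return (
    <Box flexDirection="column">
      {message.summary.map((part, key) => (
        <Text key={key}>{part.text}</Text>
      ))}
    </Box>
  );
}
```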

## How

- In `TerminalChatResponseReasoning`, return early if `message.summary`
is falsy or empty.
- In `TerminalMessageHistory`, drop any reasoning items whose
`summary.length === 0` (see the sketch after this list).
- Swapped out the loose `any` cast for a safer `unknown`-based cast.
- Rolled back the temporary Vitest script hacks that were causing stack
overflows.
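
To make the second and third bullets concrete, here is a rough sketch of the check the history component now performs inline. The `isEmptyReasoning` helper name and the standalone shape are just for illustration; the real code sits inside the render loop, as the diff below shows.

```ts
// Minimal structural view of a reasoning item; the real code works on the SDK's ResponseItem.
type MaybeReasoning = { summary?: Array<unknown> };

// Hypothetical helper mirroring the inline check in the diff below.
function isEmptyReasoning(item: object): boolean {
  // Cast through `unknown` rather than `any`, so only the one field we read is asserted.
  const msg = item as unknown as MaybeReasoning;
  return msg.summary?.length === 0;
}

// Usage sketch inside the render callback:
//   if (isEmptyReasoning(message)) {
//     return null;
//   }
```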

## Why

Cluttering the chat with empty lines was confusing; this change ensures
only real reasoning text is rendered.
Reference: openai/codex#182

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Scott Leibrand authored 2025-04-17 08:12:38 -07:00, committed by GitHub
commit 4386dfc67b (parent 4e7403e5ea)
2 changed files with 13 additions and 27 deletions

@@ -72,30 +72,13 @@ export function TerminalChatResponseReasoning({
 }: {
   message: ResponseReasoningItem & { duration_ms?: number };
 }): React.ReactElement | null {
-  // prefer the real duration if present
-  const thinkingTime = message.duration_ms
-    ? Math.round(message.duration_ms / 1000)
-    : Math.max(
-        1,
-        Math.ceil(
-          (message.summary || [])
-            .map((t) => t.text.length)
-            .reduce((a, b) => a + b, 0) / 300,
-        ),
-      );
-  if (thinkingTime <= 0) {
+  // Only render when there is a reasoning summary
+  if (!message.summary || message.summary.length === 0) {
     return null;
   }
   return (
     <Box gap={1} flexDirection="column">
-      <Box gap={1}>
-        <Text bold color="magenta">
-          thinking
-        </Text>
-        <Text dimColor>for {thinkingTime}s</Text>
-      </Box>
-      {message.summary?.map((summary, key) => {
+      {message.summary.map((summary, key) => {
         const s = summary as { headline?: string; text: string };
         return (
           <Box key={key} flexDirection="column">

@@ -30,16 +30,14 @@ const MessageHistory: React.FC<MessageHistoryProps> = ({
   thinkingSeconds,
   fullStdout,
 }) => {
-  const [messages, debug] = useMemo(
-    () => [batch.map(({ item }) => item!), process.env["DEBUG"]],
-    [batch],
-  );
+  // Flatten batch entries to response items.
+  const messages = useMemo(() => batch.map(({ item }) => item!), [batch]);
   return (
     <Box flexDirection="column">
-      {loading && debug && (
+      {loading && (
         <Box marginTop={1}>
-          <Text color="yellow">{`(${thinkingSeconds}s)`}</Text>
+          <Text color="yellow">{`thinking for ${thinkingSeconds}s`}</Text>
         </Box>
       )}
       <Static items={["header", ...messages]}>
@@ -48,8 +46,13 @@ const MessageHistory: React.FC<MessageHistoryProps> = ({
           return <TerminalHeader key="header" {...headerProps} />;
         }
-        // After the guard above `item` can only be a ResponseItem.
+        // After the guard above, item is a ResponseItem
         const message = item as ResponseItem;
+        // Suppress empty reasoning updates (i.e. items with an empty summary).
+        const msg = message as unknown as { summary?: Array<unknown> };
+        if (msg.summary?.length === 0) {
+          return null;
+        }
         return (
           <Box
             key={`${message.id}-${index}`}