llmx/codex-cli
Scott Leibrand ee6e1765fa agent-loop: minimal mid-stream #429 retry loop using existing back-off (#506)
As requested by @tibo-openai at
https://github.com/openai/codex/pull/357#issuecomment-2816554203, this
attempts a more minimal implementation of #357 that preserves as much as
possible of the existing code's exponential backoff logic.

Adds a small retry wrapper around the streaming for‑await loop so that
HTTP 429s that occur *after* the stream has started no longer crash the
CLI.

Highlights
• Re‑uses the existing RATE_LIMIT_RETRY_WAIT_MS constant and 5‑attempt limit.
• Exponential back‑off identical to the initial-request handling.

This comment is probably more useful here in the PR:
// The OpenAI SDK may raise a 429 (rate‑limit) *after* the stream has
// started. Prior logic already retries the initial `responses.create`
// call, but we need to add equivalent resilience for mid‑stream
// failures. We keep the implementation minimal by wrapping the
// existing `for‑await` loop in a small retry‑for‑loop that re‑creates
// the stream with exponential back‑off.
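
For readers skimming the PR, here is a minimal sketch of that retry‑for‑loop, not the exact patch. The names `oai`, `params`, `processEvent`, and `runWithMidStreamRetry` are hypothetical stand‑ins for the surrounding agent-loop code, and the 500 ms base value is illustrative; the real change re‑uses the CLI's own RATE_LIMIT_RETRY_WAIT_MS constant and 5‑attempt limit.

import OpenAI from "openai";

// Hypothetical stand-ins for the surrounding agent-loop state.
const oai = new OpenAI();
const params = { model: "gpt-4o", input: "hello" };
function processEvent(event: unknown): void {
  console.log(event);
}

const MAX_ATTEMPTS = 5;               // matches the existing 5-attempt limit
const RATE_LIMIT_RETRY_WAIT_MS = 500; // illustrative; the CLI's own constant is re-used

async function runWithMidStreamRetry(): Promise<void> {
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      // Re-create the stream on each attempt, then drain it.
      const stream = await oai.responses.create({ ...params, stream: true });
      for await (const event of stream) {
        processEvent(event);
      }
      return; // stream finished without a mid-stream failure
    } catch (err) {
      const status = (err as { status?: number }).status;
      if (status !== 429 || attempt === MAX_ATTEMPTS) {
        throw err; // non-429 errors and exhausted retries propagate as before
      }
      // Exponential back-off, same shape as the initial-request handling.
      const waitMs = RATE_LIMIT_RETRY_WAIT_MS * 2 ** (attempt - 1);
      await new Promise<void>((resolve) => setTimeout(resolve, waitMs));
    }
  }
}

Re‑creating the stream from scratch keeps the change small, at the cost of replaying the request rather than resuming the interrupted stream mid‑way.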
2025-04-22 11:02:10 -04:00