When using Codex to develop Codex itself, I noticed that sometimes it
would try to add `#[ignore]` to the following tests:
```
keeps_previous_response_id_between_tasks()
retries_on_early_close()
```
Both of these tests start a `MockServer` that launches an HTTP server on
an ephemeral port, and they require network access to hit it, which the
Seatbelt policy associated with `--full-auto` correctly denies. If I
hadn't been paying attention to the code that Codex was generating, one of
these `#[ignore]` annotations could have slipped into the codebase,
effectively disabling the test for everyone.
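For context, an `#[ignore]`d test compiles fine but is silently skipped by every plain `cargo test` run and only executes when someone passes `--ignored`, so the change would have been easy to miss (signature simplified here; the real tests are async tokio tests):
```rust
#[test]
#[ignore] // skipped by default; only runs with `cargo test -- --ignored`
fn keeps_previous_response_id_between_tasks() {
    // test body elided
}
```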
To that end, this PR introduces an experimental environment variable named
`CODEX_SANDBOX_NETWORK_DISABLED` that is set to `1` when the
`SandboxPolicy` used to spawn the process does not have full network
access. I call it "experimental" because I'm not convinced this API is
quite right, but we need to start somewhere. (It might be more
appropriate to have an env var like `CODEX_SANDBOX=full-auto`, but the
challenge is that our newer `SandboxPolicy` abstraction does not map to
a simple set of enums the way the TypeScript CLI's does.)
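Concretely, the producer side amounts to something like the following sketch (simplified; the helper name here is made up, and the real logic lives next to the spawn code in `core`):
```rust
use std::process::Command;

/// Exported from `codex_core::exec` as part of this change.
pub const CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR: &str = "CODEX_SANDBOX_NETWORK_DISABLED";

/// Hypothetical helper: when the sandbox policy denies full network access,
/// tell the child process about it via the environment.
fn mark_network_disabled(cmd: &mut Command, has_full_network_access: bool) {
    if !has_full_network_access {
        cmd.env(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR, "1");
    }
}
```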
We leverage this new functionality by adding the following code to the
aforementioned tests as a way to "dynamically disable" them:
```rust
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
    println!(
        "Skipping test because it cannot execute when network is disabled in a Codex sandbox."
    );
    return;
}
```
We can use the `debug seatbelt --full-auto` command to verify that
`cargo test` fails when run under Seatbelt prior to this change:
```
$ cargo run --bin codex -- debug seatbelt --full-auto -- cargo test
---- keeps_previous_response_id_between_tasks stdout ----
thread 'keeps_previous_response_id_between_tasks' panicked at /Users/mbolin/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/wiremock-0.6.3/src/mock_server/builder.rs:107:46:
Failed to bind an OS port for a mock server.: Os { code: 1, kind: PermissionDenied, message: "Operation not permitted" }
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
keeps_previous_response_id_between_tasks
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
error: test failed, to rerun pass `-p codex-core --test previous_response_id`
```
After this change, however, the above command succeeds! This means that,
going forward, when Codex operates on Codex itself and runs `cargo
test`, only "real failures" should cause the command to fail.
As part of this change, I decided to tighten up the codepaths for
running `exec()` for shell tool calls. In particular, we do it in `core`
for the main Codex business logic itself, but we also expose this logic
via `debug` subcommands in the `cli` crate. The logic for the
`debug` subcommands was not quite as faithful to the true business logic
as I would have liked, so I:
* refactored a bit of the Linux code, splitting `linux.rs` into
`linux_exec.rs` and `landlock.rs` in the `core` crate;
* gated less code behind `#[cfg(target_os = "linux")]`, because such
code does not get built by default when I develop on a Mac, which means I
either have to build the code in Docker or wait for CI signal;
* introduced `macro_rules! configure_command` in `exec.rs` so we can
have both sync and async versions of this code (a sketch of the idea
follows this list). The synchronous version seems more appropriate for
straight threads or potentially fork/exec.
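The macro itself is not reproduced here, but the shape of the idea is roughly the following (illustrative only; the real `configure_command` does more than this). A macro sidesteps the fact that `std::process::Command` and `tokio::process::Command` share method names but no common trait:
```rust
// One macro body configures either a std::process::Command or a
// tokio::process::Command, so the sync and async spawn paths cannot drift.
macro_rules! configure_command {
    ($cmd:expr, $cwd:expr, $network_disabled:expr) => {{
        let cmd = $cmd;
        cmd.current_dir($cwd);
        if $network_disabled {
            // Surface the sandbox restriction to the child process.
            cmd.env("CODEX_SANDBOX_NETWORK_DISABLED", "1");
        }
        cmd
    }};
}

fn spawn_blocking_cmd(
    program: &str,
    cwd: &std::path::Path,
    network_disabled: bool,
) -> std::io::Result<std::process::Child> {
    let mut cmd = std::process::Command::new(program);
    configure_command!(&mut cmd, cwd, network_disabled);
    cmd.spawn()
}

async fn spawn_async_cmd(
    program: &str,
    cwd: &std::path::Path,
    network_disabled: bool,
) -> std::io::Result<tokio::process::Child> {
    let mut cmd = tokio::process::Command::new(program);
    configure_command!(&mut cmd, cwd, network_disabled);
    cmd.spawn()
}
```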
For reference, here is the updated test file in full (123 lines of Rust):
```rust
//! Verifies that the agent retries when the SSE stream terminates before
//! delivering a `response.completed` event.

use std::time::Duration;

use codex_core::Codex;
use codex_core::ModelProviderInfo;
use codex_core::config::Config;
use codex_core::exec::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use tokio::time::timeout;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::Request;
use wiremock::Respond;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;

fn sse_incomplete() -> String {
    // Only a single line; missing the completed event.
    "event: response.output_item.done\n\n".to_string()
}

fn sse_completed(id: &str) -> String {
    format!(
        "event: response.completed\n\
         data: {{\"type\":\"response.completed\",\"response\":{{\"id\":\"{}\",\"output\":[]}}}}\n\n\n",
        id
    )
}

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn retries_on_early_close() {
    #![allow(clippy::unwrap_used)]

    if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
        println!(
            "Skipping test because it cannot execute when network is disabled in a Codex sandbox."
        );
        return;
    }

    let server = MockServer::start().await;

    struct SeqResponder;
    impl Respond for SeqResponder {
        fn respond(&self, _: &Request) -> ResponseTemplate {
            use std::sync::atomic::AtomicUsize;
            use std::sync::atomic::Ordering;
            static CALLS: AtomicUsize = AtomicUsize::new(0);
            let n = CALLS.fetch_add(1, Ordering::SeqCst);
            if n == 0 {
                ResponseTemplate::new(200)
                    .insert_header("content-type", "text/event-stream")
                    .set_body_raw(sse_incomplete(), "text/event-stream")
            } else {
                ResponseTemplate::new(200)
                    .insert_header("content-type", "text/event-stream")
                    .set_body_raw(sse_completed("resp_ok"), "text/event-stream")
            }
        }
    }

    Mock::given(method("POST"))
        .and(path("/v1/responses"))
        .respond_with(SeqResponder {})
        .expect(2)
        .mount(&server)
        .await;

    // Environment
    //
    // As of Rust 2024 `std::env::set_var` has been made `unsafe` because
    // mutating the process environment is inherently racy when other threads
    // are running. We therefore have to wrap every call in an explicit
    // `unsafe` block. These are limited to the test-setup section so the
    // scope is very small and clearly delineated.

    unsafe {
        std::env::set_var("OPENAI_REQUEST_MAX_RETRIES", "0");
        std::env::set_var("OPENAI_STREAM_MAX_RETRIES", "1");
        std::env::set_var("OPENAI_STREAM_IDLE_TIMEOUT_MS", "2000");
    }

    let model_provider = ModelProviderInfo {
        name: "openai".into(),
        base_url: format!("{}/v1", server.uri()),
        // Environment variable that should exist in the test environment.
        // ModelClient will return an error if the environment variable for the
        // provider is not set.
        env_key: Some("PATH".into()),
        env_key_instructions: None,
        wire_api: codex_core::WireApi::Responses,
    };

    let ctrl_c = std::sync::Arc::new(tokio::sync::Notify::new());
    let mut config = Config::load_default_config_for_test();
    config.model_provider = model_provider;
    let (codex, _init_id) = Codex::spawn(config, ctrl_c).await.unwrap();

    codex
        .submit(Op::UserInput {
            items: vec![InputItem::Text {
                text: "hello".into(),
            }],
        })
        .await
        .unwrap();

    // Wait until TaskComplete (should succeed after retry).
    loop {
        let ev = timeout(Duration::from_secs(10), codex.next_event())
            .await
            .unwrap()
            .unwrap();
        if matches!(ev.msg, codex_core::protocol::EventMsg::TaskComplete) {
            break;
        }
    }
}
```