llmx/llmx-rs/core/tests/suite/otel.rs
Sebastian Krüger 3c7efc58c8 feat: Complete LLMX v0.1.0 - Rebrand from Codex with LiteLLM Integration
This release represents a comprehensive transformation of the codebase from Codex to LLMX,
enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup
- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation
- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after mass rename

### Phase 3: LiteLLM Integration
- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between Responses API and Chat Completions API
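
In outline, the fallback takes roughly the shape below. This is a minimal sketch with hypothetical names; the real logic lives in llmx-core's client code:

```rust
// Illustrative sketch of the Responses -> Chat Completions fallback.
// All names here are hypothetical; the real implementation lives in llmx-core.
#[derive(Debug)]
enum ApiError {
    UnsupportedEndpoint,
    Other(String),
}

struct ResponseStream; // placeholder for the streamed model response

trait ModelClient {
    fn responses_api(&self, prompt: &str) -> Result<ResponseStream, ApiError>;
    fn chat_completions(&self, prompt: &str) -> Result<ResponseStream, ApiError>;
}

fn send_with_fallback<C: ModelClient>(
    client: &C,
    prompt: &str,
) -> Result<ResponseStream, ApiError> {
    match client.responses_api(prompt) {
        Ok(stream) => Ok(stream),
        // Providers that only speak the OpenAI-compatible Chat Completions
        // format are retried against that endpoint.
        Err(ApiError::UnsupportedEndpoint) => client.chat_completions(prompt),
        Err(err) => Err(err),
    }
}
```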

### Phase 4: TypeScript/Node.js Components
- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated TypeScript SDK to use new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation
- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation
- Fixed all Rust tests for new naming scheme
- Updated snapshot tests in TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for new API endpoints
- Ensured compatibility with the Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline
- Updated GitHub Actions workflows for LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for new package names
- Made Apple code signing optional in release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to workspace
- Updated dotslash configuration for new binary names

### Phase 8: Final Polish
- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**
- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*; see the sketch after this list)
- npm package renamed to `@valknar/llmx`
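
A hedged sketch of how client code might read the renamed variables; the helper name and the localhost default (a typical local LiteLLM proxy address) are illustrative assumptions, not the shipped API:

```rust
use std::env;

// Illustrative only: resolve the renamed LLMX_* variables. The helper name and
// the default URL (a common local LiteLLM proxy address) are assumptions.
fn resolve_llmx_endpoint() -> (String, Option<String>) {
    let base_url =
        env::var("LLMX_BASE_URL").unwrap_or_else(|_| "http://localhost:4000".to_string());
    let api_key = env::var("LLMX_API_KEY").ok();
    (base_url, api_key)
}
```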

**New Features:**
- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**
- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**
- Updated Cargo.lock with new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM
integration. Aside from the breaking renames listed above (binary, config directory,
environment variables, npm package), existing functionality is preserved, while
support opens to a much wider ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
2025-11-12 20:40:44 +01:00


use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
use core_test_support::responses::ev_custom_tool_call;
use core_test_support::responses::ev_function_call;
use core_test_support::responses::ev_local_shell_call;
use core_test_support::responses::mount_sse;
use core_test_support::responses::mount_sse_once;
use core_test_support::responses::sse;
use core_test_support::responses::start_mock_server;
use core_test_support::test_llmx::TestLlmx;
use core_test_support::test_llmx::test_llmx;
use core_test_support::wait_for_event;
use llmx_core::features::Feature;
use llmx_protocol::protocol::AskForApproval;
use llmx_protocol::protocol::EventMsg;
use llmx_protocol::protocol::Op;
use llmx_protocol::protocol::ReviewDecision;
use llmx_protocol::protocol::SandboxPolicy;
use llmx_protocol::user_input::UserInput;
use tracing_test::traced_test;
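
// Each test drives a mock SSE server; #[traced_test] captures tracing output
// so that logs_assert can verify the telemetry events (llmx.api_request,
// llmx.sse_event, llmx.tool_result, llmx.tool_decision) emitted by llmx-core.
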
#[tokio::test]
#[traced_test]
async fn responses_api_emits_api_request_event() {
    let server = start_mock_server().await;
    mount_sse_once(&server, sse(vec![ev_completed("done")])).await;
    let TestLlmx { llmx, .. } = test_llmx().build(&server).await.unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
    logs_assert(|lines: &[&str]| {
        lines
            .iter()
            .find(|line| line.contains("llmx.api_request"))
            .map(|_| Ok(()))
            .unwrap_or_else(|| Err("expected llmx.api_request event".to_string()))
    });
    logs_assert(|lines: &[&str]| {
        lines
            .iter()
            .find(|line| line.contains("llmx.conversation_starts"))
            .map(|_| Ok(()))
            .unwrap_or_else(|| Err("expected llmx.conversation_starts event".to_string()))
    });
}

#[tokio::test]
#[traced_test]
async fn process_sse_emits_tracing_for_output_item() {
    let server = start_mock_server().await;
    mount_sse_once(
        &server,
        sse(vec![ev_assistant_message("id1", "hi"), ev_completed("id2")]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx().build(&server).await.unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
    logs_assert(|lines: &[&str]| {
        lines
            .iter()
            .find(|line| {
                line.contains("llmx.sse_event")
                    && line.contains("event.kind=response.output_item.done")
            })
            .map(|_| Ok(()))
            .unwrap_or(Err("missing response.output_item.done event".to_string()))
    });
}

#[tokio::test]
#[traced_test]
async fn process_sse_emits_failed_event_on_parse_error() {
    let server = start_mock_server().await;
    mount_sse_once(&server, "data: not-json\n\n".to_string()).await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(move |config| {
            config.features.disable(Feature::GhostCommit);
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
    logs_assert(|lines: &[&str]| {
        lines
            .iter()
            .find(|line| {
                line.contains("llmx.sse_event")
                    && line.contains("error.message")
                    && line.contains("expected ident at line 1 column 2")
            })
            .map(|_| Ok(()))
            .unwrap_or(Err("missing llmx.sse_event".to_string()))
    });
}

#[tokio::test]
#[traced_test]
async fn process_sse_records_failed_event_when_stream_closes_without_completed() {
    let server = start_mock_server().await;
    mount_sse_once(&server, sse(vec![ev_assistant_message("id", "hi")])).await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(move |config| {
            config.features.disable(Feature::GhostCommit);
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
    logs_assert(|lines: &[&str]| {
        lines
            .iter()
            .find(|line| {
                line.contains("llmx.sse_event")
                    && line.contains("error.message")
                    && line.contains("stream closed before response.completed")
            })
            .map(|_| Ok(()))
            .unwrap_or(Err("missing llmx.sse_event".to_string()))
    });
}

#[tokio::test]
#[traced_test]
async fn process_sse_failed_event_records_response_error_message() {
    let server = start_mock_server().await;
    mount_sse_once(
        &server,
        sse(vec![serde_json::json!({
            "type": "response.failed",
            "response": {
                "error": {
                    "message": "boom",
                    "code": "bad"
                }
            }
        })]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(move |config| {
            config.features.disable(Feature::GhostCommit);
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
    logs_assert(|lines: &[&str]| {
        lines
            .iter()
            .find(|line| {
                line.contains("llmx.sse_event")
                    && line.contains("event.kind=response.failed")
                    && line.contains("error.message")
                    && line.contains("boom")
            })
            .map(|_| Ok(()))
            .unwrap_or(Err("missing llmx.sse_event".to_string()))
    });
}

#[tokio::test]
#[traced_test]
async fn process_sse_failed_event_logs_parse_error() {
    let server = start_mock_server().await;
    mount_sse_once(
        &server,
        sse(vec![serde_json::json!({
            "type": "response.failed",
            "response": {
                "error": "not-an-object"
            }
        })]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(move |config| {
            config.features.disable(Feature::GhostCommit);
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
    logs_assert(|lines: &[&str]| {
        lines
            .iter()
            .find(|line| {
                line.contains("llmx.sse_event") && line.contains("event.kind=response.failed")
            })
            .map(|_| Ok(()))
            .unwrap_or(Err("missing llmx.sse_event".to_string()))
    });
}

#[tokio::test]
#[traced_test]
async fn process_sse_failed_event_logs_missing_error() {
    let server = start_mock_server().await;
    mount_sse_once(
        &server,
        sse(vec![serde_json::json!({
            "type": "response.failed",
            "response": {}
        })]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(move |config| {
            config.features.disable(Feature::GhostCommit);
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
    logs_assert(|lines: &[&str]| {
        lines
            .iter()
            .find(|line| {
                line.contains("llmx.sse_event") && line.contains("event.kind=response.failed")
            })
            .map(|_| Ok(()))
            .unwrap_or(Err("missing llmx.sse_event".to_string()))
    });
}

#[tokio::test]
#[traced_test]
async fn process_sse_failed_event_logs_response_completed_parse_error() {
    let server = start_mock_server().await;
    mount_sse_once(
        &server,
        sse(vec![serde_json::json!({
            "type": "response.completed",
            "response": {}
        })]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(move |config| {
            config.features.disable(Feature::GhostCommit);
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
    logs_assert(|lines: &[&str]| {
        lines
            .iter()
            .find(|line| {
                line.contains("llmx.sse_event")
                    && line.contains("event.kind=response.completed")
                    && line.contains("error.message")
                    && line.contains("failed to parse ResponseCompleted")
            })
            .map(|_| Ok(()))
            .unwrap_or(Err("missing llmx.sse_event".to_string()))
    });
}

#[tokio::test]
#[traced_test]
async fn process_sse_emits_completed_telemetry() {
    let server = start_mock_server().await;
    mount_sse_once(
        &server,
        sse(vec![serde_json::json!({
            "type": "response.completed",
            "response": {
                "id": "resp1",
                "usage": {
                    "input_tokens": 3,
                    "input_tokens_details": { "cached_tokens": 1 },
                    "output_tokens": 5,
                    "output_tokens_details": { "reasoning_tokens": 2 },
                    "total_tokens": 9
                }
            }
        })]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx().build(&server).await.unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
    logs_assert(|lines: &[&str]| {
        lines
            .iter()
            .find(|line| {
                line.contains("llmx.sse_event")
                    && line.contains("event.kind=response.completed")
                    && line.contains("input_token_count=3")
                    && line.contains("output_token_count=5")
                    && line.contains("cached_token_count=1")
                    && line.contains("reasoning_token_count=2")
                    && line.contains("tool_token_count=9")
            })
            .map(|_| Ok(()))
            .unwrap_or(Err("missing response.completed telemetry".to_string()))
    });
}
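
// The remaining tests cover tool-call handling: llmx.tool_result telemetry for
// each kind of call, then llmx.tool_decision telemetry for approval flows.
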
#[tokio::test]
#[traced_test]
async fn handle_response_item_records_tool_result_for_custom_tool_call() {
    let server = start_mock_server().await;
    mount_sse(
        &server,
        sse(vec![
            ev_custom_tool_call(
                "custom-tool-call",
                "unsupported_tool",
                "{\"key\":\"value\"}",
            ),
            ev_completed("done"),
        ]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(move |config| {
            config.features.disable(Feature::GhostCommit);
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TokenCount(_))).await;
    logs_assert(|lines: &[&str]| {
        let line = lines
            .iter()
            .find(|line| {
                line.contains("llmx.tool_result") && line.contains("call_id=custom-tool-call")
            })
            .ok_or_else(|| "missing llmx.tool_result event".to_string())?;
        if !line.contains("tool_name=unsupported_tool") {
            return Err("missing tool_name field".to_string());
        }
        if !line.contains("arguments={\"key\":\"value\"}") {
            return Err("missing arguments field".to_string());
        }
        if !line.contains("output=unsupported custom tool call: unsupported_tool") {
            return Err("missing output field".to_string());
        }
        if !line.contains("success=false") {
            return Err("missing success field".to_string());
        }
        Ok(())
    });
}

#[tokio::test]
#[traced_test]
async fn handle_response_item_records_tool_result_for_function_call() {
    let server = start_mock_server().await;
    mount_sse(
        &server,
        sse(vec![
            ev_function_call("function-call", "nonexistent", "{\"value\":1}"),
            ev_completed("done"),
        ]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(move |config| {
            config.features.disable(Feature::GhostCommit);
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TokenCount(_))).await;
    logs_assert(|lines: &[&str]| {
        let line = lines
            .iter()
            .find(|line| {
                line.contains("llmx.tool_result") && line.contains("call_id=function-call")
            })
            .ok_or_else(|| "missing llmx.tool_result event".to_string())?;
        if !line.contains("tool_name=nonexistent") {
            return Err("missing tool_name field".to_string());
        }
        if !line.contains("arguments={\"value\":1}") {
            return Err("missing arguments field".to_string());
        }
        if !line.contains("output=unsupported call: nonexistent") {
            return Err("missing output field".to_string());
        }
        if !line.contains("success=false") {
            return Err("missing success field".to_string());
        }
        Ok(())
    });
}

#[tokio::test]
#[traced_test]
async fn handle_response_item_records_tool_result_for_local_shell_missing_ids() {
    let server = start_mock_server().await;
    mount_sse(
        &server,
        sse(vec![
            serde_json::json!({
                "type": "response.output_item.done",
                "item": {
                    "type": "local_shell_call",
                    "status": "completed",
                    "action": {
                        "type": "exec",
                        "command": vec!["/bin/echo", "hello"],
                    }
                }
            }),
            ev_completed("done"),
        ]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(move |config| {
            config.features.disable(Feature::GhostCommit);
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TokenCount(_))).await;
    logs_assert(|lines: &[&str]| {
        let line = lines
            .iter()
            .find(|line| {
                line.contains("llmx.tool_result")
                    && line.contains("tool_name=local_shell")
                    && line.contains("output=LocalShellCall without call_id or id")
            })
            .ok_or_else(|| "missing llmx.tool_result event".to_string())?;
        if !line.contains("success=false") {
            return Err("missing success field".to_string());
        }
        Ok(())
    });
}
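
// macOS-only: this test executes a real /bin/echo through the local shell tool.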
#[cfg(target_os = "macos")]
#[tokio::test]
#[traced_test]
async fn handle_response_item_records_tool_result_for_local_shell_call() {
    let server = start_mock_server().await;
    mount_sse(
        &server,
        sse(vec![
            ev_local_shell_call("shell-call", "completed", vec!["/bin/echo", "shell"]),
            ev_completed("done"),
        ]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(move |config| {
            config.features.disable(Feature::GhostCommit);
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TokenCount(_))).await;
    logs_assert(|lines: &[&str]| {
        let line = lines
            .iter()
            .find(|line| line.contains("llmx.tool_result") && line.contains("call_id=shell-call"))
            .ok_or_else(|| "missing llmx.tool_result event".to_string())?;
        if !line.contains("tool_name=local_shell") {
            return Err("missing tool_name field".to_string());
        }
        if !line.contains("arguments=/bin/echo shell") {
            return Err("missing arguments field".to_string());
        }
        let output_idx = line
            .find("output=")
            .ok_or_else(|| "missing output field".to_string())?;
        if line[output_idx + "output=".len()..].is_empty() {
            return Err("empty output field".to_string());
        }
        if !line.contains("success=false") {
            return Err("missing success field".to_string());
        }
        Ok(())
    });
}
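
/// Builds a `logs_assert` closure that finds the `llmx.tool_decision` line for
/// `call_id` and checks the recorded `decision` and `source` values.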
fn tool_decision_assertion<'a>(
    call_id: &'a str,
    expected_decision: &'a str,
    expected_source: &'a str,
) -> impl Fn(&[&str]) -> Result<(), String> + 'a {
    let call_id = call_id.to_string();
    let expected_decision = expected_decision.to_string();
    let expected_source = expected_source.to_string();
    move |lines: &[&str]| {
        let line = lines
            .iter()
            .find(|line| {
                line.contains("llmx.tool_decision") && line.contains(&format!("call_id={call_id}"))
            })
            .ok_or_else(|| format!("missing llmx.tool_decision event for {call_id}"))?;
        let lower = line.to_lowercase();
        if !lower.contains("tool_name=local_shell") {
            return Err("missing tool_name for local_shell".to_string());
        }
        if !lower.contains(&format!("decision={expected_decision}")) {
            return Err(format!("unexpected decision for {call_id}"));
        }
        if !lower.contains(&format!("source={expected_source}")) {
            return Err(format!("unexpected source for {call_id}"));
        }
        Ok(())
    }
}

#[tokio::test]
#[traced_test]
async fn handle_container_exec_autoapprove_from_config_records_tool_decision() {
    let server = start_mock_server().await;
    mount_sse(
        &server,
        sse(vec![
            ev_local_shell_call("auto_config_call", "completed", vec!["/bin/echo", "hello"]),
            ev_completed("done"),
        ]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(|config| {
            config.approval_policy = AskForApproval::OnRequest;
            config.sandbox_policy = SandboxPolicy::DangerFullAccess;
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "hello".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TokenCount(_))).await;
    // Add a small delay to ensure telemetry is flushed to logs on slower platforms (e.g., ARM)
    tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
    logs_assert(tool_decision_assertion(
        "auto_config_call",
        "approved",
        "config",
    ));
}

#[tokio::test]
#[traced_test]
async fn handle_container_exec_user_approved_records_tool_decision() {
    let server = start_mock_server().await;
    mount_sse(
        &server,
        sse(vec![
            ev_local_shell_call("user_approved_call", "completed", vec!["/bin/date"]),
            ev_completed("done"),
        ]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(|config| {
            config.approval_policy = AskForApproval::UnlessTrusted;
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "approved".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::ExecApprovalRequest(_))).await;
    llmx.submit(Op::ExecApproval {
        id: "0".into(),
        decision: ReviewDecision::Approved,
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TokenCount(_))).await;
    logs_assert(tool_decision_assertion(
        "user_approved_call",
        "approved",
        "user",
    ));
}

#[tokio::test]
#[traced_test]
async fn handle_container_exec_user_approved_for_session_records_tool_decision() {
    let server = start_mock_server().await;
    mount_sse(
        &server,
        sse(vec![
            ev_local_shell_call("user_approved_session_call", "completed", vec!["/bin/date"]),
            ev_completed("done"),
        ]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(|config| {
            config.approval_policy = AskForApproval::UnlessTrusted;
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "persist".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::ExecApprovalRequest(_))).await;
    llmx.submit(Op::ExecApproval {
        id: "0".into(),
        decision: ReviewDecision::ApprovedForSession,
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TokenCount(_))).await;
    logs_assert(tool_decision_assertion(
        "user_approved_session_call",
        "approvedforsession",
        "user",
    ));
}

#[tokio::test]
#[traced_test]
async fn handle_sandbox_error_user_approves_retry_records_tool_decision() {
    let server = start_mock_server().await;
    mount_sse(
        &server,
        sse(vec![
            ev_local_shell_call("sandbox_retry_call", "completed", vec!["/bin/date"]),
            ev_completed("done"),
        ]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(|config| {
            config.approval_policy = AskForApproval::UnlessTrusted;
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "retry".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::ExecApprovalRequest(_))).await;
    llmx.submit(Op::ExecApproval {
        id: "0".into(),
        decision: ReviewDecision::Approved,
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TokenCount(_))).await;
    logs_assert(tool_decision_assertion(
        "sandbox_retry_call",
        "approved",
        "user",
    ));
}

#[tokio::test]
#[traced_test]
async fn handle_container_exec_user_denies_records_tool_decision() {
    let server = start_mock_server().await;
    mount_sse(
        &server,
        sse(vec![
            ev_local_shell_call("user_denied_call", "completed", vec!["/bin/date"]),
            ev_completed("done"),
        ]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(|config| {
            config.approval_policy = AskForApproval::UnlessTrusted;
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "deny".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::ExecApprovalRequest(_))).await;
    llmx.submit(Op::ExecApproval {
        id: "0".into(),
        decision: ReviewDecision::Denied,
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TokenCount(_))).await;
    logs_assert(tool_decision_assertion(
        "user_denied_call",
        "denied",
        "user",
    ));
}

#[tokio::test]
#[traced_test]
async fn handle_sandbox_error_user_approves_for_session_records_tool_decision() {
    let server = start_mock_server().await;
    mount_sse(
        &server,
        sse(vec![
            ev_local_shell_call("sandbox_session_call", "completed", vec!["/bin/date"]),
            ev_completed("done"),
        ]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(|config| {
            config.approval_policy = AskForApproval::UnlessTrusted;
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "persist".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::ExecApprovalRequest(_))).await;
    llmx.submit(Op::ExecApproval {
        id: "0".into(),
        decision: ReviewDecision::ApprovedForSession,
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TokenCount(_))).await;
    logs_assert(tool_decision_assertion(
        "sandbox_session_call",
        "approvedforsession",
        "user",
    ));
}

#[tokio::test]
#[traced_test]
async fn handle_sandbox_error_user_denies_records_tool_decision() {
    let server = start_mock_server().await;
    mount_sse(
        &server,
        sse(vec![
            ev_local_shell_call("sandbox_deny_call", "completed", vec!["/bin/date"]),
            ev_completed("done"),
        ]),
    )
    .await;
    let TestLlmx { llmx, .. } = test_llmx()
        .with_config(|config| {
            config.approval_policy = AskForApproval::UnlessTrusted;
            config.model_provider.request_max_retries = Some(0);
            config.model_provider.stream_max_retries = Some(0);
        })
        .build(&server)
        .await
        .unwrap();
    llmx.submit(Op::UserInput {
        items: vec![UserInput::Text {
            text: "deny".into(),
        }],
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::ExecApprovalRequest(_))).await;
    llmx.submit(Op::ExecApproval {
        id: "0".into(),
        decision: ReviewDecision::Denied,
    })
    .await
    .unwrap();
    wait_for_event(&llmx, |ev| matches!(ev, EventMsg::TokenCount(_))).await;
    logs_assert(tool_decision_assertion(
        "sandbox_deny_call",
        "denied",
        "user",
    ));
}