feat: Complete LLMX v0.1.0 - Rebrand from Codex with LiteLLM Integration
This release represents a comprehensive transformation of the codebase from Codex to LLMX, enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup
- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation
- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after the mass rename

### Phase 3: LiteLLM Integration
- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL (see the sketch after this message)
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between the Responses API and the Chat Completions API

### Phase 4: TypeScript/Node.js Components
- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated TypeScript SDK to use the new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced the mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation
- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added a comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation
- Fixed all Rust tests for the new naming scheme
- Updated snapshot tests in the TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for the new API endpoints
- Ensured compatibility with the Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline
- Updated GitHub Actions workflows for LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for the new package names
- Made Apple code signing optional in the release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to the workspace
- Updated the dotslash configuration for the new binary names

### Phase 8: Final Polish
- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across the codebase
- Updated issue and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**
- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*)
- npm package renamed to `@valknar/llmx`

**New Features:**
- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**
- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**
- Updated Cargo.lock with new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for the LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM integration, preserving existing functionality while adding support for a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
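As a minimal sketch of the environment-variable rename described in Phase 3: the variable names come from this commit, but the legacy-name fallback and the default URL are illustrative assumptions, not behavior confirmed here.

```rust
// Illustrative sketch only: resolving the LLMX_* variables introduced in this
// release. The OPENAI_BASE_URL fallback and the default URL are assumptions.
use std::env;

fn resolve_base_url() -> String {
    env::var("LLMX_BASE_URL")
        // Hypothetical courtesy fallback for users migrating from the old name.
        .or_else(|_| env::var("OPENAI_BASE_URL"))
        // Assumed default: a local LiteLLM proxy, which listens on 4000 by default.
        .unwrap_or_else(|_| "http://localhost:4000".to_string())
}

fn resolve_api_key() -> Option<String> {
    env::var("LLMX_API_KEY").ok()
}

fn main() {
    println!("base_url = {}", resolve_base_url());
    println!("api_key set = {}", resolve_api_key().is_some());
}
```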
llmx-rs/backend-client/src/client.rs (new file, 317 lines)
@@ -0,0 +1,317 @@
```rust
use crate::types::CodeTaskDetailsResponse;
use crate::types::PaginatedListTaskListItem;
use crate::types::RateLimitStatusPayload;
use crate::types::RateLimitWindowSnapshot;
use crate::types::TurnAttemptsSiblingTurnsResponse;
use anyhow::Result;
use llmx_core::auth::LlmxAuth;
use llmx_core::default_client::get_llmx_user_agent;
use llmx_protocol::protocol::RateLimitSnapshot;
use llmx_protocol::protocol::RateLimitWindow;
use reqwest::header::AUTHORIZATION;
use reqwest::header::CONTENT_TYPE;
use reqwest::header::HeaderMap;
use reqwest::header::HeaderName;
use reqwest::header::HeaderValue;
use reqwest::header::USER_AGENT;
use serde::de::DeserializeOwned;

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum PathStyle {
    /// /api/llmx/…
    LlmxApi,
    /// /wham/…
    ChatGptApi,
}

impl PathStyle {
    pub fn from_base_url(base_url: &str) -> Self {
        if base_url.contains("/backend-api") {
            PathStyle::ChatGptApi
        } else {
            PathStyle::LlmxApi
        }
    }
}

#[derive(Clone, Debug)]
pub struct Client {
    base_url: String,
    http: reqwest::Client,
    bearer_token: Option<String>,
    user_agent: Option<HeaderValue>,
    chatgpt_account_id: Option<String>,
    path_style: PathStyle,
}

impl Client {
    pub fn new(base_url: impl Into<String>) -> Result<Self> {
        let mut base_url = base_url.into();
        // Normalize common ChatGPT hostnames to include /backend-api so we hit the WHAM paths.
        // Also trim trailing slashes for consistent URL building.
        while base_url.ends_with('/') {
            base_url.pop();
        }
        if (base_url.starts_with("https://chatgpt.com")
            || base_url.starts_with("https://chat.openai.com"))
            && !base_url.contains("/backend-api")
        {
            base_url = format!("{base_url}/backend-api");
        }
        let http = reqwest::Client::builder().build()?;
        let path_style = PathStyle::from_base_url(&base_url);
        Ok(Self {
            base_url,
            http,
            bearer_token: None,
            user_agent: None,
            chatgpt_account_id: None,
            path_style,
        })
    }

    pub async fn from_auth(base_url: impl Into<String>, auth: &LlmxAuth) -> Result<Self> {
        let token = auth.get_token().await.map_err(anyhow::Error::from)?;
        let mut client = Self::new(base_url)?
            .with_user_agent(get_llmx_user_agent())
            .with_bearer_token(token);
        if let Some(account_id) = auth.get_account_id() {
            client = client.with_chatgpt_account_id(account_id);
        }
        Ok(client)
    }

    pub fn with_bearer_token(mut self, token: impl Into<String>) -> Self {
        self.bearer_token = Some(token.into());
        self
    }

    pub fn with_user_agent(mut self, ua: impl Into<String>) -> Self {
        if let Ok(hv) = HeaderValue::from_str(&ua.into()) {
            self.user_agent = Some(hv);
        }
        self
    }

    pub fn with_chatgpt_account_id(mut self, account_id: impl Into<String>) -> Self {
        self.chatgpt_account_id = Some(account_id.into());
        self
    }

    pub fn with_path_style(mut self, style: PathStyle) -> Self {
        self.path_style = style;
        self
    }

    fn headers(&self) -> HeaderMap {
        let mut h = HeaderMap::new();
        if let Some(ua) = &self.user_agent {
            h.insert(USER_AGENT, ua.clone());
        } else {
            h.insert(USER_AGENT, HeaderValue::from_static("llmx-cli"));
        }
        if let Some(token) = &self.bearer_token {
            let value = format!("Bearer {token}");
            if let Ok(hv) = HeaderValue::from_str(&value) {
                h.insert(AUTHORIZATION, hv);
            }
        }
        if let Some(acc) = &self.chatgpt_account_id
            && let Ok(name) = HeaderName::from_bytes(b"ChatGPT-Account-Id")
            && let Ok(hv) = HeaderValue::from_str(acc)
        {
            h.insert(name, hv);
        }
        h
    }

    async fn exec_request(
        &self,
        req: reqwest::RequestBuilder,
        method: &str,
        url: &str,
    ) -> Result<(String, String)> {
        let res = req.send().await?;
        let status = res.status();
        let ct = res
            .headers()
            .get(CONTENT_TYPE)
            .and_then(|v| v.to_str().ok())
            .unwrap_or("")
            .to_string();
        let body = res.text().await.unwrap_or_default();
        if !status.is_success() {
            anyhow::bail!("{method} {url} failed: {status}; content-type={ct}; body={body}");
        }
        Ok((body, ct))
    }

    fn decode_json<T: DeserializeOwned>(&self, url: &str, ct: &str, body: &str) -> Result<T> {
        match serde_json::from_str::<T>(body) {
            Ok(v) => Ok(v),
            Err(e) => {
                anyhow::bail!("Decode error for {url}: {e}; content-type={ct}; body={body}");
            }
        }
    }

    pub async fn get_rate_limits(&self) -> Result<RateLimitSnapshot> {
        let url = match self.path_style {
            PathStyle::LlmxApi => format!("{}/api/llmx/usage", self.base_url),
            PathStyle::ChatGptApi => format!("{}/wham/usage", self.base_url),
        };
        let req = self.http.get(&url).headers(self.headers());
        let (body, ct) = self.exec_request(req, "GET", &url).await?;
        let payload: RateLimitStatusPayload = self.decode_json(&url, &ct, &body)?;
        Ok(Self::rate_limit_snapshot_from_payload(payload))
    }

    pub async fn list_tasks(
        &self,
        limit: Option<i32>,
        task_filter: Option<&str>,
        environment_id: Option<&str>,
    ) -> Result<PaginatedListTaskListItem> {
        let url = match self.path_style {
            PathStyle::LlmxApi => format!("{}/api/llmx/tasks/list", self.base_url),
            PathStyle::ChatGptApi => format!("{}/wham/tasks/list", self.base_url),
        };
        let req = self.http.get(&url).headers(self.headers());
        let req = if let Some(lim) = limit {
            req.query(&[("limit", lim)])
        } else {
            req
        };
        let req = if let Some(tf) = task_filter {
            req.query(&[("task_filter", tf)])
        } else {
            req
        };
        let req = if let Some(id) = environment_id {
            req.query(&[("environment_id", id)])
        } else {
            req
        };
        let (body, ct) = self.exec_request(req, "GET", &url).await?;
        self.decode_json::<PaginatedListTaskListItem>(&url, &ct, &body)
    }

    pub async fn get_task_details(&self, task_id: &str) -> Result<CodeTaskDetailsResponse> {
        let (parsed, _body, _ct) = self.get_task_details_with_body(task_id).await?;
        Ok(parsed)
    }

    pub async fn get_task_details_with_body(
        &self,
        task_id: &str,
    ) -> Result<(CodeTaskDetailsResponse, String, String)> {
        let url = match self.path_style {
            PathStyle::LlmxApi => format!("{}/api/llmx/tasks/{}", self.base_url, task_id),
            PathStyle::ChatGptApi => format!("{}/wham/tasks/{}", self.base_url, task_id),
        };
        let req = self.http.get(&url).headers(self.headers());
        let (body, ct) = self.exec_request(req, "GET", &url).await?;
        let parsed: CodeTaskDetailsResponse = self.decode_json(&url, &ct, &body)?;
        Ok((parsed, body, ct))
    }

    pub async fn list_sibling_turns(
        &self,
        task_id: &str,
        turn_id: &str,
    ) -> Result<TurnAttemptsSiblingTurnsResponse> {
        let url = match self.path_style {
            PathStyle::LlmxApi => format!(
                "{}/api/llmx/tasks/{}/turns/{}/sibling_turns",
                self.base_url, task_id, turn_id
            ),
            PathStyle::ChatGptApi => format!(
                "{}/wham/tasks/{}/turns/{}/sibling_turns",
                self.base_url, task_id, turn_id
            ),
        };
        let req = self.http.get(&url).headers(self.headers());
        let (body, ct) = self.exec_request(req, "GET", &url).await?;
        self.decode_json::<TurnAttemptsSiblingTurnsResponse>(&url, &ct, &body)
    }

    /// Create a new task (user turn) by POSTing to the appropriate backend path
    /// based on `path_style`. Returns the created task id.
    pub async fn create_task(&self, request_body: serde_json::Value) -> Result<String> {
        let url = match self.path_style {
            PathStyle::LlmxApi => format!("{}/api/llmx/tasks", self.base_url),
            PathStyle::ChatGptApi => format!("{}/wham/tasks", self.base_url),
        };
        let req = self
            .http
            .post(&url)
            .headers(self.headers())
            .header(CONTENT_TYPE, HeaderValue::from_static("application/json"))
            .json(&request_body);
        let (body, ct) = self.exec_request(req, "POST", &url).await?;
        // Extract id from JSON: prefer `task.id`; fallback to top-level `id` when present.
        match serde_json::from_str::<serde_json::Value>(&body) {
            Ok(v) => {
                if let Some(id) = v
                    .get("task")
                    .and_then(|t| t.get("id"))
                    .and_then(|s| s.as_str())
                {
                    Ok(id.to_string())
                } else if let Some(id) = v.get("id").and_then(|s| s.as_str()) {
                    Ok(id.to_string())
                } else {
                    anyhow::bail!(
                        "POST {url} succeeded but no task id found; content-type={ct}; body={body}"
                    );
                }
            }
            Err(e) => anyhow::bail!("Decode error for {url}: {e}; content-type={ct}; body={body}"),
        }
    }

    // rate limit helpers
    fn rate_limit_snapshot_from_payload(payload: RateLimitStatusPayload) -> RateLimitSnapshot {
        let Some(details) = payload
            .rate_limit
            .and_then(|inner| inner.map(|boxed| *boxed))
        else {
            return RateLimitSnapshot {
                primary: None,
                secondary: None,
            };
        };

        RateLimitSnapshot {
            primary: Self::map_rate_limit_window(details.primary_window),
            secondary: Self::map_rate_limit_window(details.secondary_window),
        }
    }

    fn map_rate_limit_window(
        window: Option<Option<Box<RateLimitWindowSnapshot>>>,
    ) -> Option<RateLimitWindow> {
        let snapshot = match window {
            Some(Some(snapshot)) => *snapshot,
            _ => return None,
        };

        let used_percent = f64::from(snapshot.used_percent);
        let window_minutes = Self::window_minutes_from_seconds(snapshot.limit_window_seconds);
        let resets_at = Some(i64::from(snapshot.reset_at));
        Some(RateLimitWindow {
            used_percent,
            window_minutes,
            resets_at,
        })
    }

    fn window_minutes_from_seconds(seconds: i32) -> Option<i64> {
        if seconds <= 0 {
            return None;
        }

        let seconds_i64 = i64::from(seconds);
        Some((seconds_i64 + 59) / 60)
    }
}
```
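To make the builder flow above concrete, here is a hedged usage sketch. It only calls methods defined in this file; the crate name `llmx_backend_client` (inferred from the directory layout), the tokio runtime, and `Debug` on `RateLimitSnapshot` are assumptions, and the token is a placeholder.

```rust
// Hedged usage sketch of the Client above; crate name and runtime are assumed.
use llmx_backend_client::Client;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // "https://chatgpt.com" is normalized to ".../backend-api", which selects
    // PathStyle::ChatGptApi, so this request goes to GET /wham/usage.
    let client = Client::new("https://chatgpt.com")?
        .with_bearer_token("placeholder-token"); // placeholder credential
    let snapshot = client.get_rate_limits().await?;
    // RateLimitSnapshot comes from llmx_protocol; Debug is assumed here.
    println!("{snapshot:?}");
    Ok(())
}
```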
llmx-rs/backend-client/src/lib.rs (new file, 9 lines)
@@ -0,0 +1,9 @@
```rust
mod client;
pub mod types;

pub use client::Client;
pub use types::CodeTaskDetailsResponse;
pub use types::CodeTaskDetailsResponseExt;
pub use types::PaginatedListTaskListItem;
pub use types::TaskListItem;
pub use types::TurnAttemptsSiblingTurnsResponse;
```
llmx-rs/backend-client/src/types.rs (new file, 373 lines)
@@ -0,0 +1,373 @@
```rust
pub use llmx_backend_openapi_models::models::PaginatedListTaskListItem;
pub use llmx_backend_openapi_models::models::PlanType;
pub use llmx_backend_openapi_models::models::RateLimitStatusDetails;
pub use llmx_backend_openapi_models::models::RateLimitStatusPayload;
pub use llmx_backend_openapi_models::models::RateLimitWindowSnapshot;
pub use llmx_backend_openapi_models::models::TaskListItem;

use serde::Deserialize;
use serde::de::Deserializer;
use serde_json::Value;
use std::collections::HashMap;

/// Hand-rolled models for the Cloud Tasks task-details response.
/// The generated OpenAPI models are pretty bad. This is a half-step
/// towards hand-rolling them.
#[derive(Clone, Debug, Deserialize)]
pub struct CodeTaskDetailsResponse {
    #[serde(default)]
    pub current_user_turn: Option<Turn>,
    #[serde(default)]
    pub current_assistant_turn: Option<Turn>,
    #[serde(default)]
    pub current_diff_task_turn: Option<Turn>,
}

#[derive(Clone, Debug, Default, Deserialize)]
pub struct Turn {
    #[serde(default)]
    pub id: Option<String>,
    #[serde(default)]
    pub attempt_placement: Option<i64>,
    #[serde(default, rename = "turn_status")]
    pub turn_status: Option<String>,
    #[serde(default, deserialize_with = "deserialize_vec")]
    pub sibling_turn_ids: Vec<String>,
    #[serde(default, deserialize_with = "deserialize_vec")]
    pub input_items: Vec<TurnItem>,
    #[serde(default, deserialize_with = "deserialize_vec")]
    pub output_items: Vec<TurnItem>,
    #[serde(default)]
    pub worklog: Option<Worklog>,
    #[serde(default)]
    pub error: Option<TurnError>,
}

#[derive(Clone, Debug, Default, Deserialize)]
pub struct TurnItem {
    #[serde(rename = "type", default)]
    pub kind: String,
    #[serde(default)]
    pub role: Option<String>,
    #[serde(default, deserialize_with = "deserialize_vec")]
    pub content: Vec<ContentFragment>,
    #[serde(default)]
    pub diff: Option<String>,
    #[serde(default)]
    pub output_diff: Option<DiffPayload>,
}

#[derive(Clone, Debug, Deserialize)]
#[serde(untagged)]
pub enum ContentFragment {
    Structured(StructuredContent),
    Text(String),
}

#[derive(Clone, Debug, Default, Deserialize)]
pub struct StructuredContent {
    #[serde(rename = "content_type", default)]
    pub content_type: Option<String>,
    #[serde(default)]
    pub text: Option<String>,
}

#[derive(Clone, Debug, Default, Deserialize)]
pub struct DiffPayload {
    #[serde(default)]
    pub diff: Option<String>,
}

#[derive(Clone, Debug, Default, Deserialize)]
pub struct Worklog {
    #[serde(default, deserialize_with = "deserialize_vec")]
    pub messages: Vec<WorklogMessage>,
}

#[derive(Clone, Debug, Default, Deserialize)]
pub struct WorklogMessage {
    #[serde(default)]
    pub author: Option<Author>,
    #[serde(default)]
    pub content: Option<WorklogContent>,
}

#[derive(Clone, Debug, Default, Deserialize)]
pub struct Author {
    #[serde(default)]
    pub role: Option<String>,
}

#[derive(Clone, Debug, Default, Deserialize)]
pub struct WorklogContent {
    #[serde(default)]
    pub parts: Vec<ContentFragment>,
}

#[derive(Clone, Debug, Default, Deserialize)]
pub struct TurnError {
    #[serde(default)]
    pub code: Option<String>,
    #[serde(default)]
    pub message: Option<String>,
}

impl ContentFragment {
    fn text(&self) -> Option<&str> {
        match self {
            ContentFragment::Structured(inner) => {
                if inner
                    .content_type
                    .as_deref()
                    .map(|ct| ct.eq_ignore_ascii_case("text"))
                    .unwrap_or(false)
                {
                    inner.text.as_deref().filter(|s| !s.is_empty())
                } else {
                    None
                }
            }
            ContentFragment::Text(raw) => {
                if raw.trim().is_empty() {
                    None
                } else {
                    Some(raw.as_str())
                }
            }
        }
    }
}

impl TurnItem {
    fn text_values(&self) -> Vec<String> {
        self.content
            .iter()
            .filter_map(|fragment| fragment.text().map(str::to_string))
            .collect()
    }

    fn diff_text(&self) -> Option<String> {
        if self.kind == "output_diff" {
            if let Some(diff) = &self.diff
                && !diff.is_empty()
            {
                return Some(diff.clone());
            }
        } else if self.kind == "pr"
            && let Some(payload) = &self.output_diff
            && let Some(diff) = &payload.diff
            && !diff.is_empty()
        {
            return Some(diff.clone());
        }
        None
    }
}

impl Turn {
    fn unified_diff(&self) -> Option<String> {
        self.output_items.iter().find_map(TurnItem::diff_text)
    }

    fn message_texts(&self) -> Vec<String> {
        let mut out: Vec<String> = self
            .output_items
            .iter()
            .filter(|item| item.kind == "message")
            .flat_map(TurnItem::text_values)
            .collect();

        if let Some(log) = &self.worklog {
            for message in &log.messages {
                if message.is_assistant() {
                    out.extend(message.text_values());
                }
            }
        }

        out
    }

    fn user_prompt(&self) -> Option<String> {
        let parts: Vec<String> = self
            .input_items
            .iter()
            .filter(|item| item.kind == "message")
            .filter(|item| {
                item.role
                    .as_deref()
                    .map(|r| r.eq_ignore_ascii_case("user"))
                    .unwrap_or(true)
            })
            .flat_map(TurnItem::text_values)
            .collect();

        if parts.is_empty() {
            None
        } else {
            Some(parts.join("\n\n"))
        }
    }

    fn error_summary(&self) -> Option<String> {
        self.error.as_ref().and_then(TurnError::summary)
    }
}

impl WorklogMessage {
    fn is_assistant(&self) -> bool {
        self.author
            .as_ref()
            .and_then(|a| a.role.as_deref())
            .map(|role| role.eq_ignore_ascii_case("assistant"))
            .unwrap_or(false)
    }

    fn text_values(&self) -> Vec<String> {
        self.content
            .as_ref()
            .map(|content| {
                content
                    .parts
                    .iter()
                    .filter_map(|fragment| fragment.text().map(str::to_string))
                    .collect()
            })
            .unwrap_or_default()
    }
}

impl TurnError {
    fn summary(&self) -> Option<String> {
        let code = self.code.as_deref().unwrap_or("");
        let message = self.message.as_deref().unwrap_or("");
        match (code.is_empty(), message.is_empty()) {
            (true, true) => None,
            (false, true) => Some(code.to_string()),
            (true, false) => Some(message.to_string()),
            (false, false) => Some(format!("{code}: {message}")),
        }
    }
}

pub trait CodeTaskDetailsResponseExt {
    /// Attempt to extract a unified diff string from the assistant or diff turn.
    fn unified_diff(&self) -> Option<String>;
    /// Extract assistant text output messages (no diff) from current turns.
    fn assistant_text_messages(&self) -> Vec<String>;
    /// Extract the user's prompt text from the current user turn, when present.
    fn user_text_prompt(&self) -> Option<String>;
    /// Extract an assistant error message (if the turn failed and provided one).
    fn assistant_error_message(&self) -> Option<String>;
}

impl CodeTaskDetailsResponseExt for CodeTaskDetailsResponse {
    fn unified_diff(&self) -> Option<String> {
        [
            self.current_diff_task_turn.as_ref(),
            self.current_assistant_turn.as_ref(),
        ]
        .into_iter()
        .flatten()
        .find_map(Turn::unified_diff)
    }

    fn assistant_text_messages(&self) -> Vec<String> {
        let mut out = Vec::new();
        for turn in [
            self.current_diff_task_turn.as_ref(),
            self.current_assistant_turn.as_ref(),
        ]
        .into_iter()
        .flatten()
        {
            out.extend(turn.message_texts());
        }
        out
    }

    fn user_text_prompt(&self) -> Option<String> {
        self.current_user_turn.as_ref().and_then(Turn::user_prompt)
    }

    fn assistant_error_message(&self) -> Option<String> {
        self.current_assistant_turn
            .as_ref()
            .and_then(Turn::error_summary)
    }
}

fn deserialize_vec<'de, D, T>(deserializer: D) -> Result<Vec<T>, D::Error>
where
    D: Deserializer<'de>,
    T: Deserialize<'de>,
{
    Option::<Vec<T>>::deserialize(deserializer).map(|opt| opt.unwrap_or_default())
}

#[derive(Clone, Debug, Deserialize)]
pub struct TurnAttemptsSiblingTurnsResponse {
    #[serde(default)]
    pub sibling_turns: Vec<HashMap<String, Value>>,
}

#[cfg(test)]
mod tests {
    use super::*;
    use pretty_assertions::assert_eq;

    fn fixture(name: &str) -> CodeTaskDetailsResponse {
        let json = match name {
            "diff" => include_str!("../tests/fixtures/task_details_with_diff.json"),
            "error" => include_str!("../tests/fixtures/task_details_with_error.json"),
            other => panic!("unknown fixture {other}"),
        };
        serde_json::from_str(json).expect("fixture should deserialize")
    }

    #[test]
    fn unified_diff_prefers_current_diff_task_turn() {
        let details = fixture("diff");
        let diff = details.unified_diff().expect("diff present");
        assert!(diff.contains("diff --git"));
    }

    #[test]
    fn unified_diff_falls_back_to_pr_output_diff() {
        let details = fixture("error");
        let diff = details.unified_diff().expect("diff from pr output");
        assert!(diff.contains("lib.rs"));
    }

    #[test]
    fn assistant_text_messages_extracts_text_content() {
        let details = fixture("diff");
        let messages = details.assistant_text_messages();
        assert_eq!(messages, vec!["Assistant response".to_string()]);
    }

    #[test]
    fn user_text_prompt_joins_parts_with_spacing() {
        let details = fixture("diff");
        let prompt = details.user_text_prompt().expect("prompt present");
        assert_eq!(prompt, "First line\n\nSecond line");
    }

    #[test]
    fn assistant_error_message_combines_code_and_message() {
        let details = fixture("error");
        let msg = details
            .assistant_error_message()
            .expect("error should be present");
        assert_eq!(msg, "APPLY_FAILED: Patch could not be applied");
    }
}
```
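A small self-contained sketch of how the hand-rolled models and the `CodeTaskDetailsResponseExt` trait fit together: the crate name is assumed from the directory layout, and the JSON payload is invented for illustration (it is not one of the commit's fixtures).

```rust
// Illustrative sketch: deserialize a minimal task-details payload and pull
// the user prompt out via the extension trait. Payload contents are made up.
use llmx_backend_client::{CodeTaskDetailsResponse, CodeTaskDetailsResponseExt};

fn main() -> anyhow::Result<()> {
    let json = r#"{
        "current_user_turn": {
            "input_items": [
                { "type": "message", "role": "user",
                  "content": [ { "content_type": "text", "text": "Fix the bug" } ] }
            ]
        }
    }"#;
    // Missing fields (worklog, error, other turns) all fall back to defaults.
    let details: CodeTaskDetailsResponse = serde_json::from_str(json)?;
    assert_eq!(details.user_text_prompt().as_deref(), Some("Fix the bug"));
    // No diff turn in this payload, so no unified diff is found.
    assert!(details.unified_diff().is_none());
    Ok(())
}
```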