feat: Complete LLMX v0.1.0 - Rebrand from Codex with LiteLLM Integration
This release represents a comprehensive transformation of the codebase from Codex to LLMX, enhanced with LiteLLM integration to support 100+ LLM providers through a unified API.

## Major Changes

### Phase 1: Repository & Infrastructure Setup

- Established new repository structure and branching strategy
- Created comprehensive project documentation (CLAUDE.md, LITELLM-SETUP.md)
- Set up development environment and tooling configuration

### Phase 2: Rust Workspace Transformation

- Renamed all Rust crates from `codex-*` to `llmx-*` (30+ crates)
- Updated package names, binary names, and workspace members
- Renamed core modules: codex.rs → llmx.rs, codex_delegate.rs → llmx_delegate.rs
- Updated all internal references, imports, and type names
- Renamed directories: codex-rs/ → llmx-rs/, codex-backend-openapi-models/ → llmx-backend-openapi-models/
- Fixed all Rust compilation errors after the mass rename

### Phase 3: LiteLLM Integration

- Integrated LiteLLM for multi-provider LLM support (Anthropic, OpenAI, Azure, Google AI, AWS Bedrock, etc.)
- Implemented OpenAI-compatible Chat Completions API support
- Added model family detection and provider-specific handling
- Updated authentication to support LiteLLM API keys
- Renamed environment variables: OPENAI_BASE_URL → LLMX_BASE_URL
- Added LLMX_API_KEY for unified authentication
- Enhanced error handling for Chat Completions API responses
- Implemented fallback mechanisms between the Responses API and the Chat Completions API

### Phase 4: TypeScript/Node.js Components

- Renamed npm package: @codex/codex-cli → @valknar/llmx
- Updated TypeScript SDK to use new LLMX APIs and endpoints
- Fixed all TypeScript compilation and linting errors
- Updated SDK tests to support both API backends
- Enhanced mock server to handle multiple API formats
- Updated build scripts for cross-platform packaging

### Phase 5: Configuration & Documentation

- Updated all configuration files to use LLMX naming
- Rewrote README and documentation for LLMX branding
- Updated config paths: ~/.codex/ → ~/.llmx/
- Added comprehensive LiteLLM setup guide
- Updated all user-facing strings and help text
- Created release plan and migration documentation

### Phase 6: Testing & Validation

- Fixed all Rust tests for the new naming scheme
- Updated snapshot tests in the TUI (36 frame files)
- Fixed authentication storage tests
- Updated Chat Completions payload and SSE tests
- Fixed SDK tests for the new API endpoints
- Ensured compatibility with the Claude Sonnet 4.5 model
- Fixed test environment variables (LLMX_API_KEY, LLMX_BASE_URL)

### Phase 7: Build & Release Pipeline

- Updated GitHub Actions workflows for LLMX binary names
- Fixed rust-release.yml to reference llmx-rs/ instead of codex-rs/
- Updated CI/CD pipelines for new package names
- Made Apple code signing optional in the release workflow
- Enhanced npm packaging resilience for partial platform builds
- Added Windows sandbox support to the workspace
- Updated dotslash configuration for new binary names

### Phase 8: Final Polish

- Renamed all assets (.github images, labels, templates)
- Updated VSCode and DevContainer configurations
- Fixed all clippy warnings and formatting issues
- Applied cargo fmt and prettier formatting across the codebase
- Updated issue templates and pull request templates
- Fixed all remaining UI text references

## Technical Details

**Breaking Changes:**

- Binary name changed from `codex` to `llmx`
- Config directory changed from `~/.codex/` to `~/.llmx/`
- Environment variables renamed (CODEX_* → LLMX_*)
- npm package renamed to `@valknar/llmx`
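For existing setups, the breaking changes above amount to renaming the config directory and switching to the new environment variables. Below is a minimal sketch of how a tool embedding LLMX might read them; only the variable names (LLMX_API_KEY, LLMX_BASE_URL) and the `~/.llmx/` path come from this release, while the helper functions and the fallback URL are purely illustrative:

```rust
use std::env;
use std::path::PathBuf;

/// Hypothetical helper: resolve the LiteLLM-compatible base URL.
/// LLMX_BASE_URL replaces OPENAI_BASE_URL in this release; the default
/// shown here is only an illustrative placeholder.
fn llmx_base_url() -> String {
    env::var("LLMX_BASE_URL").unwrap_or_else(|_| "http://localhost:4000".to_string())
}

/// Hypothetical helper: unified API key introduced by this release.
fn llmx_api_key() -> Option<String> {
    env::var("LLMX_API_KEY").ok()
}

/// Hypothetical helper: config directory, now ~/.llmx/ instead of ~/.codex/.
/// Unix-style HOME lookup is used here for brevity.
fn llmx_config_dir() -> Option<PathBuf> {
    env::var_os("HOME").map(|home| PathBuf::from(home).join(".llmx"))
}
```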
**New Features:**

- Support for 100+ LLM providers via LiteLLM
- Unified authentication with LLMX_API_KEY
- Enhanced model provider detection and handling
- Improved error handling and fallback mechanisms

**Files Changed:**

- 578 files modified across Rust, TypeScript, and documentation
- 30+ Rust crates renamed and updated
- Complete rebrand of UI, CLI, and documentation
- All tests updated and passing

**Dependencies:**

- Updated Cargo.lock with new package names
- Updated npm dependencies in llmx-cli
- Enhanced OpenAPI models for the LLMX backend

This release establishes LLMX as a standalone project with comprehensive LiteLLM integration, preserving the existing functionality while opening support for a wide ecosystem of LLM providers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Sebastian Krüger <support@pivoine.art>
llmx-rs/tui/src/file_search.rs (new file, 199 lines)
@@ -0,0 +1,199 @@
//! Helper that owns the debounce/cancellation logic for `@` file searches.
//!
//! `ChatComposer` publishes *every* change of the `@token` as
//! `AppEvent::StartFileSearch(query)`.
//! This struct receives those events and decides when to actually spawn the
//! expensive search (handled in the main `App` thread). It tries to ensure:
//!
//! - Even when the user types long text quickly, they will start seeing results
//!   after a short delay using an early version of what they typed.
//! - At most one search is in-flight at any time.
//!
//! It works as follows:
//!
//! 1. First query starts a debounce timer.
//! 2. While the timer is pending, the latest query from the user is stored.
//! 3. When the timer fires, it is cleared, and a search is done for the most
//!    recent query.
//! 4. If there is an in-flight search that is not a prefix of the latest thing
//!    the user typed, it is cancelled.

use llmx_file_search as file_search;
use std::num::NonZeroUsize;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering;
use std::thread;
use std::time::Duration;

use crate::app_event::AppEvent;
use crate::app_event_sender::AppEventSender;

const MAX_FILE_SEARCH_RESULTS: NonZeroUsize = NonZeroUsize::new(8).unwrap();
const NUM_FILE_SEARCH_THREADS: NonZeroUsize = NonZeroUsize::new(2).unwrap();

/// How long to wait after a keystroke before firing the first search when none
/// is currently running. Keeps early queries more meaningful.
const FILE_SEARCH_DEBOUNCE: Duration = Duration::from_millis(100);

const ACTIVE_SEARCH_COMPLETE_POLL_INTERVAL: Duration = Duration::from_millis(20);

/// State machine for file-search orchestration.
pub(crate) struct FileSearchManager {
    /// Unified state guarded by one mutex.
    state: Arc<Mutex<SearchState>>,

    search_dir: PathBuf,
    app_tx: AppEventSender,
}

struct SearchState {
    /// Latest query typed by user (updated every keystroke).
    latest_query: String,

    /// true if a search is currently scheduled.
    is_search_scheduled: bool,

    /// If there is an active search, this will be the query being searched.
    active_search: Option<ActiveSearch>,
}

struct ActiveSearch {
    query: String,
    cancellation_token: Arc<AtomicBool>,
}

impl FileSearchManager {
    pub fn new(search_dir: PathBuf, tx: AppEventSender) -> Self {
        Self {
            state: Arc::new(Mutex::new(SearchState {
                latest_query: String::new(),
                is_search_scheduled: false,
                active_search: None,
            })),
            search_dir,
            app_tx: tx,
        }
    }

    /// Call whenever the user edits the `@` token.
    pub fn on_user_query(&self, query: String) {
        {
            #[expect(clippy::unwrap_used)]
            let mut st = self.state.lock().unwrap();
            if query == st.latest_query {
                // No change, nothing to do.
                return;
            }

            // Update latest query.
            st.latest_query.clear();
            st.latest_query.push_str(&query);

            // If there is an in-flight search that is definitely obsolete,
            // cancel it now.
            if let Some(active_search) = &st.active_search
                && !query.starts_with(&active_search.query)
            {
                active_search
                    .cancellation_token
                    .store(true, Ordering::Relaxed);
                st.active_search = None;
            }

            // Schedule a search to run after debounce.
            if !st.is_search_scheduled {
                st.is_search_scheduled = true;
            } else {
                return;
            }
        }

        // If we are here, we set `st.is_search_scheduled = true` before
        // dropping the lock. This means we are the only thread that can spawn a
        // debounce timer.
        let state = self.state.clone();
        let search_dir = self.search_dir.clone();
        let tx_clone = self.app_tx.clone();
        thread::spawn(move || {
            // Always do a minimum debounce, but then poll until the
            // `active_search` is cleared.
            thread::sleep(FILE_SEARCH_DEBOUNCE);
            loop {
                #[expect(clippy::unwrap_used)]
                if state.lock().unwrap().active_search.is_none() {
                    break;
                }
                thread::sleep(ACTIVE_SEARCH_COMPLETE_POLL_INTERVAL);
            }

            // The debounce timer has expired, so start a search using the
            // latest query.
            let cancellation_token = Arc::new(AtomicBool::new(false));
            let token = cancellation_token.clone();
            let query = {
                #[expect(clippy::unwrap_used)]
                let mut st = state.lock().unwrap();
                let query = st.latest_query.clone();
                st.is_search_scheduled = false;
                st.active_search = Some(ActiveSearch {
                    query: query.clone(),
                    cancellation_token: token,
                });
                query
            };

            FileSearchManager::spawn_file_search(
                query,
                search_dir,
                tx_clone,
                cancellation_token,
                state,
            );
        });
    }

    fn spawn_file_search(
        query: String,
        search_dir: PathBuf,
        tx: AppEventSender,
        cancellation_token: Arc<AtomicBool>,
        search_state: Arc<Mutex<SearchState>>,
    ) {
        let compute_indices = true;
        std::thread::spawn(move || {
            let matches = file_search::run(
                &query,
                MAX_FILE_SEARCH_RESULTS,
                &search_dir,
                Vec::new(),
                NUM_FILE_SEARCH_THREADS,
                cancellation_token.clone(),
                compute_indices,
                true,
            )
            .map(|res| res.matches)
            .unwrap_or_default();

            let is_cancelled = cancellation_token.load(Ordering::Relaxed);
            if !is_cancelled {
                tx.send(AppEvent::FileSearchResult { query, matches });
            }

            // Reset the active search state. Do a pointer comparison to verify
            // that we are clearing the ActiveSearch that corresponds to the
            // cancellation token we were given.
            {
                #[expect(clippy::unwrap_used)]
                let mut st = search_state.lock().unwrap();
                if let Some(active_search) = &st.active_search
                    && Arc::ptr_eq(&active_search.cancellation_token, &cancellation_token)
                {
                    st.active_search = None;
                }
            }
        });
    }
}
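For context, here is a hypothetical usage sketch (not part of this commit) of how a caller inside the TUI crate might drive `FileSearchManager`. Only `new` and `on_user_query` are taken from the file above; where the `AppEventSender` comes from and how `AppEvent::FileSearchResult` is rendered are assumed to live in the existing app wiring.

```rust
// Hypothetical usage sketch, not part of the diff above.
use std::path::PathBuf;

use crate::app_event_sender::AppEventSender;
use crate::file_search::FileSearchManager;

fn wire_up_file_search(tx: AppEventSender) {
    let manager = FileSearchManager::new(PathBuf::from("."), tx);

    // Forward every edit of the `@` token. The manager debounces keystrokes,
    // keeps at most one search in flight, and cancels an in-flight search
    // whose query is no longer a prefix of what the user has typed.
    manager.on_user_query("src/".to_string());
    manager.on_user_query("src/file_se".to_string());

    // Matching paths arrive asynchronously on the app event channel as
    // AppEvent::FileSearchResult { query, matches }.
}
```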