8 Commits

Author SHA1 Message Date
7d2842885a chore: Bump version to 0.1.5
Some checks failed: ci / build-test (5m45s) and sdk / sdks (5m2s) failed; Codespell (11s) and rust-release / tag-check (3s) succeeded; all remaining rust-ci and rust-release jobs (lint/build, tests, release builds, release, publish-npm) were cancelled.
- Fix orphaned tool_use error by tracking skip state per call_id
- Handle retries with same call_id correctly
- Add comprehensive debug logging for troubleshooting

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 21:08:14 +01:00
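The per-call_id retry handling this commit describes can be sketched std-only. The types below (`Item`, `outputs_to_keep`, `args_valid`) are illustrative stand-ins, not the crate's real `ResponseItem` API; `args_valid` stands in for the serde_json argument check:

```rust
use std::collections::HashMap;

// Simplified transcript items; names are illustrative only.
enum Item {
    Call { call_id: String, args_valid: bool },
    Output { call_id: String },
}

// An output is forwarded only if the MOST RECENT call with that call_id was
// not skipped, so a retry with valid arguments "revives" the call_id.
fn outputs_to_keep(items: &[Item]) -> Vec<String> {
    let mut skipped: HashMap<String, bool> = HashMap::new();
    let mut kept = Vec::new();
    for item in items {
        match item {
            Item::Call { call_id, args_valid } => {
                // Last write wins: a retried call overwrites the old state.
                skipped.insert(call_id.clone(), !args_valid);
            }
            Item::Output { call_id } => {
                if skipped.get(call_id) != Some(&true) {
                    kept.push(call_id.clone());
                }
            }
        }
    }
    kept
}

fn main() {
    let items = vec![
        Item::Call { call_id: "a".into(), args_valid: false }, // skipped
        Item::Output { call_id: "a".into() },                  // dropped
        Item::Call { call_id: "a".into(), args_valid: true },  // retry, valid
        Item::Output { call_id: "a".into() },                  // kept
    ];
    println!("{:?}", outputs_to_keep(&items)); // prints ["a"]
}
```

Tracking only the latest state per call_id (rather than a set of skipped ids) is what makes retries with the same call_id safe.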
67ff31104f chore: Bump version to 0.1.4
- Fix: Skip empty/whitespace text content blocks
- Fix: Validate function call arguments and skip malformed calls
- Fix: Skip outputs for skipped function calls to maintain consistency
- Resolves Anthropic API errors:
  - "messages: text content blocks must contain non-whitespace text"
  - "Extra data: line 1 column 26 (char 25)" (invalid JSON)
  - "unexpected `tool_use_id` found in `tool_result` blocks"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-14 19:52:13 +01:00
866ca2a372 chore: Bump version to 0.1.3
- Fix: Skip empty/whitespace-only text content blocks in chat completions
- This resolves Anthropic API errors:
  - "messages: text content blocks must contain non-whitespace text"
  - "messages: text content blocks must be non-empty"
- Updated version strings in all test files and snapshots

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-14 14:30:46 +01:00
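The whitespace filter this commit describes amounts to dropping any text block with no non-whitespace characters before it reaches the API. A minimal std-only sketch (`keep_text_blocks` is an illustrative name, not the crate's):

```rust
// Text blocks are dropped when they contain no non-whitespace characters;
// this is what prevents "text content blocks must contain non-whitespace
// text" / "must be non-empty" errors from the Anthropic API.
fn keep_text_blocks(blocks: &[&str]) -> Vec<String> {
    blocks
        .iter()
        .filter(|t| !t.trim().is_empty())
        .map(|t| t.to_string())
        .collect()
}

fn main() {
    let kept = keep_text_blocks(&["hello", "   ", "", "\n\t", "world"]);
    println!("{:?}", kept); // prints ["hello", "world"]
}
```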
Sebastian Krüger
4b2e4a1d48 fix: Add scope and update npm for OIDC publishing
- Added scope: "@valknarthing" to setup-node action (required for scoped packages)
- Added npm update step to ensure npm CLI v11.5.1+ (required for OIDC support)

Matches the original OpenAI Codex workflow configuration.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-13 13:24:05 +01:00
Sebastian Krüger
207a0e2333 fix: Remove NPM_TOKEN for OIDC auth and disable alpha branch update
- Removed NODE_AUTH_TOKEN env var from publish-npm job
  OIDC/Trusted Publishers authentication doesn't need the NPM_TOKEN secret
- Commented out update-branch job since latest-alpha-cli branch doesn't exist

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-13 12:52:52 +01:00
Sebastian Krüger
df6e9f8e0e chore: Bump version to 0.1.2
Updated all version references from 0.1.1 to 0.1.2:
- Workspace version in llmx-rs/Cargo.toml
- Package version in llmx-cli/package.json
- Updated Cargo.lock with all workspace crate versions
- Updated test hardcoded version strings in:
  - mcp-server/tests/common/mcp_process.rs
  - app-server/tests/suite/user_agent.rs
  - app-server/tests/common/mcp_process.rs
- Updated TUI snapshot tests with new version number

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-13 12:21:18 +01:00
Sebastian Krüger
c9f903a83e fix: Correct environment variables and documentation links in README
- Fixed environment variable names from LITELLM_* to LLMX_*
  - LITELLM_BASE_URL → LLMX_BASE_URL
  - LITELLM_API_KEY → LLMX_API_KEY
- Updated all documentation links to use absolute GitHub URLs instead of relative paths
  - Fixes 404 errors when viewing README on npm registry
  - All ./docs/ and ./LITELLM-SETUP.md links now point to github.com/valknarthing/llmx

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-13 12:18:31 +01:00
Sebastian Krüger
00eed932c0 feat: Use npm Trusted Publishers (OIDC) for automated publishing
- Add id-token: write permission for OIDC authentication
- Add --provenance flag to npm publish for supply chain security
- Use NODE_AUTH_TOKEN environment variable (set by setup-node)
- Remove manual .npmrc token writing (handled by setup-node with OIDC)

This enables automated npm publishing without storing tokens as secrets.
Requires Trusted Publisher to be configured at:
https://www.npmjs.com/package/@valknarthing/llmx/access

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-13 12:10:09 +01:00
17 changed files with 225 additions and 123 deletions

View File

@@ -476,7 +476,7 @@ jobs:
tag: ${{ github.ref_name }}
config: .github/dotslash-config.json
# Publish to npm using authentication token
# Publish to npm using Trusted Publishers (OIDC)
publish-npm:
# Publish to npm for stable releases and alpha pre-releases with numeric suffixes.
if: ${{ needs.release.outputs.should_publish_npm == 'true' }}
@@ -485,6 +485,7 @@ jobs:
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write # Required for OIDC authentication
steps:
- name: Setup Node.js
@@ -492,6 +493,10 @@ jobs:
with:
node-version: 22
registry-url: "https://registry.npmjs.org"
scope: "@valknarthing"
- name: Update npm
run: npm install -g npm@latest
- name: Download npm tarballs from release
env:
@@ -511,10 +516,6 @@ jobs:
VERSION: ${{ needs.release.outputs.version }}
NPM_TAG: ${{ needs.release.outputs.npm_tag }}
run: |
# Write auth token to the .npmrc file that setup-node created
echo "//registry.npmjs.org/:_authToken=${{ secrets.NPM_TOKEN }}" >> ${NPM_CONFIG_USERCONFIG}
set -euo pipefail
tag_args=()
if [[ -n "${NPM_TAG}" ]]; then
@@ -526,24 +527,24 @@ jobs:
)
for tarball in "${tarballs[@]}"; do
npm publish "${GITHUB_WORKSPACE}/dist/npm/${tarball}" --access public "${tag_args[@]}"
npm publish "${GITHUB_WORKSPACE}/dist/npm/${tarball}" --provenance --access public "${tag_args[@]}"
done
update-branch:
name: Update latest-alpha-cli branch
permissions:
contents: write
needs: release
runs-on: ubuntu-latest
steps:
- name: Update latest-alpha-cli branch
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
set -euo pipefail
gh api \
repos/${GITHUB_REPOSITORY}/git/refs/heads/latest-alpha-cli \
-X PATCH \
-f sha="${GITHUB_SHA}" \
-F force=true
# update-branch:
# name: Update latest-alpha-cli branch
# permissions:
# contents: write
# needs: release
# runs-on: ubuntu-latest
#
# steps:
# - name: Update latest-alpha-cli branch
# env:
# GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# run: |
# set -euo pipefail
# gh api \
# repos/${GITHUB_REPOSITORY}/git/refs/heads/latest-alpha-cli \
# -X PATCH \
# -f sha="${GITHUB_SHA}" \
# -F force=true

View File

@@ -47,56 +47,56 @@ LLMX is powered by [LiteLLM](https://docs.litellm.ai/), which provides access to
```bash
# Set your LiteLLM server URL (default: http://localhost:4000/v1)
export LITELLM_BASE_URL="http://localhost:4000/v1"
export LITELLM_API_KEY="your-api-key"
export LLMX_BASE_URL="http://localhost:4000/v1"
export LLMX_API_KEY="your-api-key"
# Run LLMX
llmx "hello world"
```
**Configuration:** See [LITELLM-SETUP.md](./LITELLM-SETUP.md) for detailed setup instructions.
**Configuration:** See [LITELLM-SETUP.md](https://github.com/valknarthing/llmx/blob/main/LITELLM-SETUP.md) for detailed setup instructions.
You can also use LLMX with ChatGPT or OpenAI API keys. For authentication options, see the [authentication docs](./docs/authentication.md).
You can also use LLMX with ChatGPT or OpenAI API keys. For authentication options, see the [authentication docs](https://github.com/valknarthing/llmx/blob/main/docs/authentication.md).
### Model Context Protocol (MCP)
LLMX can access MCP servers. To configure them, refer to the [config docs](./docs/config.md#mcp_servers).
LLMX can access MCP servers. To configure them, refer to the [config docs](https://github.com/valknarthing/llmx/blob/main/docs/config.md#mcp_servers).
### Configuration
LLMX CLI supports a rich set of configuration options, with preferences stored in `~/.llmx/config.toml`. For full configuration options, see [Configuration](./docs/config.md).
LLMX CLI supports a rich set of configuration options, with preferences stored in `~/.llmx/config.toml`. For full configuration options, see [Configuration](https://github.com/valknarthing/llmx/blob/main/docs/config.md).
---
### Docs & FAQ
- [**Getting started**](./docs/getting-started.md)
- [CLI usage](./docs/getting-started.md#cli-usage)
- [Slash Commands](./docs/slash_commands.md)
- [Running with a prompt as input](./docs/getting-started.md#running-with-a-prompt-as-input)
- [Example prompts](./docs/getting-started.md#example-prompts)
- [Custom prompts](./docs/prompts.md)
- [Memory with AGENTS.md](./docs/getting-started.md#memory-with-agentsmd)
- [**Configuration**](./docs/config.md)
- [Example config](./docs/example-config.md)
- [**Sandbox & approvals**](./docs/sandbox.md)
- [**Authentication**](./docs/authentication.md)
- [Auth methods](./docs/authentication.md#forcing-a-specific-auth-method-advanced)
- [Login on a "Headless" machine](./docs/authentication.md#connecting-on-a-headless-machine)
- [**Getting started**](https://github.com/valknarthing/llmx/blob/main/docs/getting-started.md)
- [CLI usage](https://github.com/valknarthing/llmx/blob/main/docs/getting-started.md#cli-usage)
- [Slash Commands](https://github.com/valknarthing/llmx/blob/main/docs/slash_commands.md)
- [Running with a prompt as input](https://github.com/valknarthing/llmx/blob/main/docs/getting-started.md#running-with-a-prompt-as-input)
- [Example prompts](https://github.com/valknarthing/llmx/blob/main/docs/getting-started.md#example-prompts)
- [Custom prompts](https://github.com/valknarthing/llmx/blob/main/docs/prompts.md)
- [Memory with AGENTS.md](https://github.com/valknarthing/llmx/blob/main/docs/getting-started.md#memory-with-agentsmd)
- [**Configuration**](https://github.com/valknarthing/llmx/blob/main/docs/config.md)
- [Example config](https://github.com/valknarthing/llmx/blob/main/docs/example-config.md)
- [**Sandbox & approvals**](https://github.com/valknarthing/llmx/blob/main/docs/sandbox.md)
- [**Authentication**](https://github.com/valknarthing/llmx/blob/main/docs/authentication.md)
- [Auth methods](https://github.com/valknarthing/llmx/blob/main/docs/authentication.md#forcing-a-specific-auth-method-advanced)
- [Login on a "Headless" machine](https://github.com/valknarthing/llmx/blob/main/docs/authentication.md#connecting-on-a-headless-machine)
- **Automating LLMX**
- [GitHub Action](https://github.com/valknarthing/llmx-action)
- [TypeScript SDK](./sdk/typescript/README.md)
- [Non-interactive mode (`llmx exec`)](./docs/exec.md)
- [**Advanced**](./docs/advanced.md)
- [Tracing / verbose logging](./docs/advanced.md#tracing--verbose-logging)
- [Model Context Protocol (MCP)](./docs/advanced.md#model-context-protocol-mcp)
- [**Zero data retention (ZDR)**](./docs/zdr.md)
- [**Contributing**](./docs/contributing.md)
- [**Install & build**](./docs/install.md)
- [System Requirements](./docs/install.md#system-requirements)
- [DotSlash](./docs/install.md#dotslash)
- [Build from source](./docs/install.md#build-from-source)
- [**FAQ**](./docs/faq.md)
- [TypeScript SDK](https://github.com/valknarthing/llmx/blob/main/sdk/typescript/README.md)
- [Non-interactive mode (`llmx exec`)](https://github.com/valknarthing/llmx/blob/main/docs/exec.md)
- [**Advanced**](https://github.com/valknarthing/llmx/blob/main/docs/advanced.md)
- [Tracing / verbose logging](https://github.com/valknarthing/llmx/blob/main/docs/advanced.md#tracing--verbose-logging)
- [Model Context Protocol (MCP)](https://github.com/valknarthing/llmx/blob/main/docs/advanced.md#model-context-protocol-mcp)
- [**Zero data retention (ZDR)**](https://github.com/valknarthing/llmx/blob/main/docs/zdr.md)
- [**Contributing**](https://github.com/valknarthing/llmx/blob/main/docs/contributing.md)
- [**Install & build**](https://github.com/valknarthing/llmx/blob/main/docs/install.md)
- [System Requirements](https://github.com/valknarthing/llmx/blob/main/docs/install.md#system-requirements)
- [DotSlash](https://github.com/valknarthing/llmx/blob/main/docs/install.md#dotslash)
- [Build from source](https://github.com/valknarthing/llmx/blob/main/docs/install.md#build-from-source)
- [**FAQ**](https://github.com/valknarthing/llmx/blob/main/docs/faq.md)
---

View File

@@ -1,6 +1,6 @@
{
"name": "@valknarthing/llmx",
"version": "0.1.1",
"version": "0.1.2",
"license": "Apache-2.0",
"description": "LLMX CLI - Multi-provider coding agent powered by LiteLLM",
"bin": {

llmx-rs/Cargo.lock (generated, 94 changed lines)
View File

@@ -178,7 +178,7 @@ checksum = "a23eb6b1614318a8071c9b2521f36b424b2c83db5eb3a0fead4a6c0809af6e61"
[[package]]
name = "app_test_support"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"assert_cmd",
@@ -945,7 +945,7 @@ checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b"
[[package]]
name = "core_test_support"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"assert_cmd",
@@ -1192,7 +1192,7 @@ dependencies = [
[[package]]
name = "deadpool-runtime"
version = "0.1.4"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "092966b41edc516079bdf31ec78a2e0588d1d0c08f78b91d8307215928642b2b"
@@ -2501,7 +2501,7 @@ dependencies = [
[[package]]
name = "inout"
version = "0.1.4"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "879f10e63c20629ecabbb64a8010319738c66a5cd0c29b02d63d272b03751d01"
dependencies = [
@@ -2822,7 +2822,7 @@ checksum = "241eaef5fd12c88705a01fc1066c48c4b36e0dd4377dcdc7ec3942cea7a69956"
[[package]]
name = "llmx-ansi-escape"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"ansi-to-tui",
"ratatui",
@@ -2831,7 +2831,7 @@ dependencies = [
[[package]]
name = "llmx-app-server"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"app_test_support",
@@ -2866,7 +2866,7 @@ dependencies = [
[[package]]
name = "llmx-app-server-protocol"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"clap",
@@ -2884,7 +2884,7 @@ dependencies = [
[[package]]
name = "llmx-apply-patch"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"assert_cmd",
@@ -2899,7 +2899,7 @@ dependencies = [
[[package]]
name = "llmx-arg0"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"dotenvy",
@@ -2912,7 +2912,7 @@ dependencies = [
[[package]]
name = "llmx-async-utils"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"async-trait",
"pretty_assertions",
@@ -2936,7 +2936,7 @@ dependencies = [
[[package]]
name = "llmx-backend-openapi-models"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"serde",
"serde_json",
@@ -2945,7 +2945,7 @@ dependencies = [
[[package]]
name = "llmx-chatgpt"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"clap",
@@ -2960,7 +2960,7 @@ dependencies = [
[[package]]
name = "llmx-cli"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"assert_cmd",
@@ -3000,7 +3000,7 @@ dependencies = [
[[package]]
name = "llmx-cloud-tasks"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"async-trait",
@@ -3026,7 +3026,7 @@ dependencies = [
[[package]]
name = "llmx-cloud-tasks-client"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"async-trait",
@@ -3041,7 +3041,7 @@ dependencies = [
[[package]]
name = "llmx-common"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"clap",
"llmx-app-server-protocol",
@@ -3053,7 +3053,7 @@ dependencies = [
[[package]]
name = "llmx-core"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"askama",
@@ -3134,7 +3134,7 @@ dependencies = [
[[package]]
name = "llmx-exec"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"assert_cmd",
@@ -3167,7 +3167,7 @@ dependencies = [
[[package]]
name = "llmx-execpolicy"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"allocative",
"anyhow",
@@ -3187,7 +3187,7 @@ dependencies = [
[[package]]
name = "llmx-feedback"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"llmx-protocol",
@@ -3198,7 +3198,7 @@ dependencies = [
[[package]]
name = "llmx-file-search"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"clap",
@@ -3211,7 +3211,7 @@ dependencies = [
[[package]]
name = "llmx-git"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"assert_matches",
"once_cell",
@@ -3227,7 +3227,7 @@ dependencies = [
[[package]]
name = "llmx-keyring-store"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"keyring",
"tracing",
@@ -3235,7 +3235,7 @@ dependencies = [
[[package]]
name = "llmx-linux-sandbox"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"clap",
"landlock",
@@ -3248,7 +3248,7 @@ dependencies = [
[[package]]
name = "llmx-login"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"base64",
@@ -3272,7 +3272,7 @@ dependencies = [
[[package]]
name = "llmx-mcp-server"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"assert_cmd",
@@ -3299,7 +3299,7 @@ dependencies = [
[[package]]
name = "llmx-ollama"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"assert_matches",
"async-stream",
@@ -3315,7 +3315,7 @@ dependencies = [
[[package]]
name = "llmx-otel"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"chrono",
"eventsource-stream",
@@ -3336,14 +3336,14 @@ dependencies = [
[[package]]
name = "llmx-process-hardening"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"libc",
]
[[package]]
name = "llmx-protocol"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"base64",
@@ -3369,7 +3369,7 @@ dependencies = [
[[package]]
name = "llmx-responses-api-proxy"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"clap",
@@ -3385,7 +3385,7 @@ dependencies = [
[[package]]
name = "llmx-rmcp-client"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"axum",
@@ -3414,7 +3414,7 @@ dependencies = [
[[package]]
name = "llmx-stdio-to-uds"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"assert_cmd",
@@ -3425,7 +3425,7 @@ dependencies = [
[[package]]
name = "llmx-tui"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"arboard",
@@ -3490,7 +3490,7 @@ dependencies = [
[[package]]
name = "llmx-utils-cache"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"lru",
"sha1",
@@ -3499,7 +3499,7 @@ dependencies = [
[[package]]
name = "llmx-utils-image"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"base64",
"image",
@@ -3511,7 +3511,7 @@ dependencies = [
[[package]]
name = "llmx-utils-json-to-toml"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"pretty_assertions",
"serde_json",
@@ -3520,7 +3520,7 @@ dependencies = [
[[package]]
name = "llmx-utils-pty"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"portable-pty",
@@ -3529,7 +3529,7 @@ dependencies = [
[[package]]
name = "llmx-utils-readiness"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"assert_matches",
"async-trait",
@@ -3540,11 +3540,11 @@ dependencies = [
[[package]]
name = "llmx-utils-string"
version = "0.1.1"
version = "0.1.5"
[[package]]
name = "llmx-utils-tokenizer"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"pretty_assertions",
@@ -3655,7 +3655,7 @@ checksum = "47e1ffaa40ddd1f3ed91f717a33c8c0ee23fff369e3aa8772b9605cc1d22f4c3"
[[package]]
name = "mcp-types"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"schemars 0.8.22",
"serde",
@@ -3665,7 +3665,7 @@ dependencies = [
[[package]]
name = "mcp_test_support"
version = "0.1.1"
version = "0.1.5"
dependencies = [
"anyhow",
"assert_cmd",
@@ -3942,7 +3942,7 @@ checksum = "51d515d32fb182ee37cda2ccdcb92950d6a3c2893aa280e540671c2cd0f3b1d9"
[[package]]
name = "num-integer"
version = "0.1.46"
version = "0.1.56"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7969661fd2958a5cb096e56c8e1ad0444ac2bbcd0061bd28660485a44879858f"
dependencies = [
@@ -3951,7 +3951,7 @@ dependencies = [
[[package]]
name = "num-iter"
version = "0.1.45"
version = "0.1.55"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1429034a0490724d0075ebb2bc9e875d6503c3cf69e235a8941aa757d83ef5bf"
dependencies = [
@@ -4520,7 +4520,7 @@ dependencies = [
[[package]]
name = "potential_utf"
version = "0.1.4"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b73949432f5e2a09657003c25bca5e19a0e9c84f8058ca374f49e0ebe605af77"
dependencies = [
@@ -6580,7 +6580,7 @@ checksum = "8df9b6e13f2d32c91b9bd719c00d1958837bc7dec474d94952798cc8e69eeec3"
[[package]]
name = "tracing"
version = "0.1.41"
version = "0.1.51"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "784e0ac535deb450455cbfa28a6f0df145ea1bb7ae51b821cf5e7927fdcfbdd0"
dependencies = [

View File

@@ -43,7 +43,7 @@ members = [
resolver = "2"
[workspace.package]
version = "0.1.1"
version = "0.1.5"
# Track the edition for all workspace crates in one place. Individual
# crates can still override this value, but keeping it here means new
# crates created with `cargo new -w ...` automatically inherit the 2024
@@ -191,7 +191,7 @@ tokio-util = "0.7.16"
toml = "0.9.5"
toml_edit = "0.23.4"
tonic = "0.13.1"
tracing = "0.1.41"
tracing = "0.1.51"
tracing-appender = "0.2.3"
tracing-subscriber = "0.3.20"
tracing-test = "0.2.5"

View File

@@ -138,7 +138,7 @@ impl McpProcess {
client_info: ClientInfo {
name: "llmx-app-server-tests".to_string(),
title: None,
version: "0.1.1".to_string(),
version: "0.1.5".to_string(),
},
})?);
let req_id = self.send_request("initialize", params).await?;

View File

@@ -26,7 +26,7 @@ async fn get_user_agent_returns_current_llmx_user_agent() -> Result<()> {
let os_info = os_info::get();
let user_agent = format!(
"llmx_cli_rs/0.1.1 ({} {}; {}) {} (llmx-app-server-tests; 0.1.1)",
"llmx_cli_rs/0.1.5 ({} {}; {}) {} (llmx-app-server-tests; 0.1.5)",
os_info.os_type(),
os_info.version(),
os_info.architecture().unwrap_or("unknown"),

View File

@@ -161,7 +161,65 @@ pub(crate) async fn stream_chat_completions(
// aggregated assistant message was recorded alongside an earlier partial).
let mut last_assistant_text: Option<String> = None;
// Build a map of which call_ids have outputs
// We'll use this to ensure we never send a FunctionCall without its corresponding output
let mut call_ids_with_outputs: std::collections::HashSet<String> = std::collections::HashSet::new();
// First pass: collect all call_ids that have outputs
for item in input.iter() {
if let ResponseItem::FunctionCallOutput { call_id, .. } = item {
call_ids_with_outputs.insert(call_id.clone());
}
}
debug!("=== Chat Completions Request Debug ===");
debug!("Input items count: {}", input.len());
debug!("Call IDs with outputs: {:?}", call_ids_with_outputs);
// Second pass: find the first FunctionCall that doesn't have an output
let mut cutoff_at_idx: Option<usize> = None;
for (idx, item) in input.iter().enumerate() {
if let ResponseItem::FunctionCall { call_id, name, .. } = item {
if !call_ids_with_outputs.contains(call_id) {
debug!("Found unanswered function call '{}' (call_id: {}) at index {}", name, call_id, idx);
cutoff_at_idx = Some(idx);
break;
}
}
}
if let Some(cutoff) = cutoff_at_idx {
debug!("Cutting off at index {} to avoid orphaned tool calls", cutoff);
} else {
debug!("No unanswered function calls found, processing all items");
}
// Track whether the MOST RECENT FunctionCall with each call_id was skipped
// This allows the same call_id to be retried - we only skip outputs for the specific skipped calls
let mut call_id_skip_state: std::collections::HashMap<String, bool> = std::collections::HashMap::new();
for (idx, item) in input.iter().enumerate() {
// Stop processing if we've reached an unanswered function call
if let Some(cutoff) = cutoff_at_idx {
if idx >= cutoff {
debug!("Stopping at index {} due to unanswered function call", idx);
break;
}
}
debug!("Processing item {} of type: {}", idx, match item {
ResponseItem::Message { role, .. } => format!("Message(role={})", role),
ResponseItem::FunctionCall { name, call_id, .. } => format!("FunctionCall(name={}, call_id={})", name, call_id),
ResponseItem::FunctionCallOutput { call_id, .. } => format!("FunctionCallOutput(call_id={})", call_id),
ResponseItem::LocalShellCall { .. } => "LocalShellCall".to_string(),
ResponseItem::CustomToolCall { .. } => "CustomToolCall".to_string(),
ResponseItem::CustomToolCallOutput { .. } => "CustomToolCallOutput".to_string(),
ResponseItem::Reasoning { .. } => "Reasoning".to_string(),
ResponseItem::WebSearchCall { .. } => "WebSearchCall".to_string(),
ResponseItem::GhostSnapshot { .. } => "GhostSnapshot".to_string(),
ResponseItem::Other => "Other".to_string(),
});
match item {
ResponseItem::Message { role, content, .. } => {
// Build content either as a plain string (typical for assistant text)
@@ -175,7 +233,10 @@ pub(crate) async fn stream_chat_completions(
ContentItem::InputText { text: t }
| ContentItem::OutputText { text: t } => {
text.push_str(t);
items.push(json!({"type":"text","text": t}));
// Only add text content blocks that are non-empty
if !t.trim().is_empty() {
items.push(json!({"type":"text","text": t}));
}
}
ContentItem::InputImage { image_url } => {
saw_image = true;
@@ -184,6 +245,11 @@ pub(crate) async fn stream_chat_completions(
}
}
// Skip messages with empty or whitespace-only text content (unless they contain images)
if text.trim().is_empty() && !saw_image {
continue;
}
// Skip exact-duplicate assistant messages.
if role == "assistant" {
if let Some(prev) = &last_assistant_text
@@ -219,6 +285,18 @@ pub(crate) async fn stream_chat_completions(
call_id,
..
} => {
// Validate that arguments is valid JSON before sending to API
// If invalid, skip this function call to avoid API errors
if serde_json::from_str::<serde_json::Value>(arguments).is_err() {
debug!("Skipping malformed function call with invalid JSON arguments: {}", arguments);
// Mark this call_id's most recent state as skipped
call_id_skip_state.insert(call_id.clone(), true);
continue;
}
// Mark this call_id's most recent state as NOT skipped (valid call)
call_id_skip_state.insert(call_id.clone(), false);
let mut msg = json!({
"role": "assistant",
"content": null,
@@ -263,6 +341,12 @@ pub(crate) async fn stream_chat_completions(
messages.push(msg);
}
ResponseItem::FunctionCallOutput { call_id, output } => {
// Skip outputs only if the MOST RECENT FunctionCall with this call_id was skipped
if call_id_skip_state.get(call_id) == Some(&true) {
debug!("Skipping function call output for most recent skipped call_id: {}", call_id);
continue;
}
// Prefer structured content items when available (e.g., images)
// otherwise fall back to the legacy plain-string content.
let content_value = if let Some(items) = &output.content_items {
@@ -328,14 +412,23 @@ pub(crate) async fn stream_chat_completions(
}
}
debug!("Built {} messages for API request", messages.len());
debug!("=== End Chat Completions Request Debug ===");
let tools_json = create_tools_json_for_chat_completions_api(&prompt.tools)?;
let payload = json!({
let mut payload = json!({
"model": model_family.slug,
"messages": messages,
"stream": true,
"tools": tools_json,
});
// Add max_tokens - required by Anthropic Messages API
// Use a sensible default of 8192 if not configured
if let Some(obj) = payload.as_object_mut() {
obj.insert("max_tokens".to_string(), json!(8192));
}
debug!(
"POST to {}: {}",
provider.get_full_url(&None),

View File

@@ -693,7 +693,7 @@ pub(crate) fn create_tools_json_for_chat_completions_api(
// We start with the JSON for the Responses API and then rewrite it to match
// the chat completions tool call format.
let responses_api_tools_json = create_tools_json_for_responses_api(tools)?;
let tools_json = responses_api_tools_json
let mut tools_json = responses_api_tools_json
.into_iter()
.filter_map(|mut tool| {
if tool.get("type") != Some(&serde_json::Value::String("function".to_string())) {
@@ -712,6 +712,14 @@ pub(crate) fn create_tools_json_for_chat_completions_api(
}
})
.collect::<Vec<serde_json::Value>>();
// Add cache_control to the last tool to enable Anthropic prompt caching
if let Some(last_tool) = tools_json.last_mut() {
if let Some(obj) = last_tool.as_object_mut() {
obj.insert("cache_control".to_string(), json!({"type": "ephemeral"}));
}
}
Ok(tools_json)
}
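The cache_control tweak above marks only the last tool in the list, which (under Anthropic's prompt-caching rules, as I understand them) caches the entire tools prefix up to that block. A std-only sketch, where `Tool` is an illustrative stand-in for the `serde_json::Value` the real code manipulates:

```rust
#[derive(Debug, PartialEq)]
struct Tool {
    name: String,
    cache_control: Option<&'static str>,
}

// Only the LAST tool gets cache_control, mirroring the diff above.
fn mark_last_for_caching(tools: &mut Vec<Tool>) {
    if let Some(last) = tools.last_mut() {
        last.cache_control = Some("ephemeral");
    }
}

fn main() {
    let mut tools = vec![
        Tool { name: "read".into(), cache_control: None },
        Tool { name: "write".into(), cache_control: None },
    ];
    mark_last_for_caching(&mut tools);
    println!("{:?}", tools.last().unwrap().cache_control); // prints Some("ephemeral")
}
```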

View File

@@ -144,7 +144,7 @@ impl McpProcess {
let initialized = self.read_jsonrpc_message().await?;
let os_info = os_info::get();
let user_agent = format!(
"llmx_cli_rs/0.1.1 ({} {}; {}) {} (elicitation test; 0.0.0)",
"llmx_cli_rs/0.1.5 ({} {}; {}) {} (elicitation test; 0.0.0)",
os_info.os_type(),
os_info.version(),
os_info.architecture().unwrap_or("unknown"),
@@ -163,7 +163,7 @@ impl McpProcess {
"serverInfo": {
"name": "llmx-mcp-server",
"title": "LLMX",
"version": "0.1.1",
"version": "0.1.5",
"user_agent": user_agent
},
"protocolVersion": mcp_types::MCP_SCHEMA_VERSION

View File

@@ -5,7 +5,7 @@ expression: sanitized
/status
╭───────────────────────────────────────────────────────────────────────────╮
│ >_ LLMX (v0.1.1) │
│ >_ LLMX (v0.1.5) │
│ │
│ Visit https://chatgpt.com/llmx/settings/usage for up-to-date │
│ information on rate limits and credits │

View File

@@ -5,7 +5,7 @@ expression: sanitized
/status
╭─────────────────────────────────────────────────────────────────╮
│ >_ LLMX (v0.1.1) │
│ >_ LLMX (v0.1.5) │
│ │
│ Visit https://chatgpt.com/llmx/settings/usage for up-to-date │
│ information on rate limits and credits │

View File

@@ -5,7 +5,7 @@ expression: sanitized
/status
╭──────────────────────────────────────────────────────────────╮
│ >_ LLMX (v0.1.1) │
│ >_ LLMX (v0.1.5) │
│ │
│ Visit https://chatgpt.com/llmx/settings/usage for up-to-date │
│ information on rate limits and credits │

View File

@@ -5,7 +5,7 @@ expression: sanitized
/status
╭──────────────────────────────────────────────────────────────╮
│ >_ LLMX (v0.1.1) │
│ >_ LLMX (v0.1.5) │
│ │
│ Visit https://chatgpt.com/llmx/settings/usage for up-to-date │
│ information on rate limits and credits │

View File

@@ -5,7 +5,7 @@ expression: sanitized
/status
╭───────────────────────────────────────────────────────────────────╮
│ >_ LLMX (v0.1.1) │
│ >_ LLMX (v0.1.5) │
│ │
│ Visit https://chatgpt.com/llmx/settings/usage for up-to-date │
│ information on rate limits and credits │

View File

@@ -5,7 +5,7 @@ expression: sanitized
/status
╭────────────────────────────────────────────╮
│ >_ LLMX (v0.1.1) │
│ >_ LLMX (v0.1.5) │
│ │
│ Visit https://chatgpt.com/llmx/settings/ │
│ usage for up-to-date │

File diff suppressed because one or more lines are too long