7 Commits

Author SHA1 Message Date
67ff31104f chore: Bump version to 0.1.4
- Fix: Skip empty/whitespace text content blocks
- Fix: Validate function call arguments and skip malformed calls
- Fix: Skip outputs for skipped function calls to maintain consistency
- Resolves Anthropic API errors:
  - "messages: text content blocks must contain non-whitespace text"
  - "Extra data: line 1 column 26 (char 25)" (invalid JSON)
  - "unexpected `tool_use_id` found in `tool_result` blocks"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-14 19:52:13 +01:00
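The argument-validation fix described above (parse the arguments, skip the call on failure, and remember its `call_id` so the matching output is skipped too) can be sketched as follows. The real fix parses with `serde_json::from_str::<serde_json::Value>`; to keep this sketch dependency-free, a naive stand-in detects only the "Extra data" case (non-whitespace after a complete top-level object). The `call_id` values and payloads are hypothetical.

```rust
use std::collections::HashSet;

/// Dependency-free stand-in for full JSON validation (the actual fix
/// parses with serde_json): report non-whitespace characters after a
/// complete top-level object, i.e. the "Extra data" error case above.
fn has_trailing_data(s: &str) -> bool {
    let (mut depth, mut in_string, mut escaped) = (0usize, false, false);
    for (i, c) in s.char_indices() {
        if in_string {
            if escaped {
                escaped = false;
            } else if c == '\\' {
                escaped = true;
            } else if c == '"' {
                in_string = false;
            }
            continue;
        }
        match c {
            '"' => in_string = true,
            '{' => depth += 1,
            '}' if depth > 0 => {
                depth -= 1;
                if depth == 0 {
                    // Anything but whitespace after the object is extra data.
                    return !s[i + 1..].trim().is_empty();
                }
            }
            _ => {}
        }
    }
    false
}

fn main() {
    // Hypothetical (call_id, arguments) pairs for illustration.
    let calls = [
        ("call_1", r#"{"query": "weather"}"#),
        ("call_2", r#"{"query": "weather"} trailing"#), // malformed
    ];

    let mut skipped: HashSet<&str> = HashSet::new();
    for (call_id, args) in calls {
        if has_trailing_data(args) {
            // Remember the call so its output is skipped too, keeping
            // tool_use / tool_result pairs consistent for the API.
            skipped.insert(call_id);
        }
    }

    assert!(!skipped.contains("call_1"));
    assert!(skipped.contains("call_2"));
    println!("skipped {} malformed call(s)", skipped.len());
}
```

Tracking the skipped `call_id`s is what prevents the third error: dropping a `tool_use` block while keeping its `tool_result` would leave an unexpected `tool_use_id` in the request.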
866ca2a372 chore: Bump version to 0.1.3
- Fix: Skip empty/whitespace-only text content blocks in chat completions
- This resolves Anthropic API errors:
  - "messages: text content blocks must contain non-whitespace text"
  - "messages: text content blocks must be non-empty"
- Updated version strings in all test files and snapshots

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-14 14:30:46 +01:00
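The whitespace filter behind this fix is simple enough to sketch in isolation; the content strings below are hypothetical:

```rust
/// Keep only text blocks containing non-whitespace characters,
/// mirroring the `!t.trim().is_empty()` guard in the fix.
fn keep_nonempty<'a>(blocks: &[&'a str]) -> Vec<&'a str> {
    blocks
        .iter()
        .copied()
        .filter(|t| !t.trim().is_empty())
        .collect()
}

fn main() {
    // Hypothetical content blocks as they might appear in a request.
    let blocks = ["Here is the answer.", "", "   ", "\n\t", "Second part."];
    let kept = keep_nonempty(&blocks);
    assert_eq!(kept, ["Here is the answer.", "Second part."]);
    println!("{} of {} blocks kept", kept.len(), blocks.len());
}
```

Both `""` and whitespace-only strings are dropped, which covers the two API errors quoted above ("must be non-empty" and "must contain non-whitespace text").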
Sebastian Krüger
4b2e4a1d48 fix: Add scope and update npm for OIDC publishing
- Added scope: "@valknarthing" to setup-node action (required for scoped packages)
- Added npm update step to ensure npm CLI v11.5.1+ (required for OIDC support)

Matches the original OpenAI Codex workflow configuration.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-13 13:24:05 +01:00
Sebastian Krüger
207a0e2333 fix: Remove NPM_TOKEN for OIDC auth and disable alpha branch update
- Removed NODE_AUTH_TOKEN env var from the publish-npm job
  (OIDC/Trusted Publishers authentication doesn't need the NPM_TOKEN secret)
- Commented out update-branch job since latest-alpha-cli branch doesn't exist

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-13 12:52:52 +01:00
Sebastian Krüger
df6e9f8e0e chore: Bump version to 0.1.2
Updated all version references from 0.1.1 to 0.1.2:
- Workspace version in llmx-rs/Cargo.toml
- Package version in llmx-cli/package.json
- Updated Cargo.lock with all workspace crate versions
- Updated test hardcoded version strings in:
  - mcp-server/tests/common/mcp_process.rs
  - app-server/tests/suite/user_agent.rs
  - app-server/tests/common/mcp_process.rs
- Updated TUI snapshot tests with new version number

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-13 12:21:18 +01:00
Sebastian Krüger
c9f903a83e fix: Correct environment variables and documentation links in README
- Fixed environment variable names from LITELLM_* to LLMX_*
  - LITELLM_BASE_URL → LLMX_BASE_URL
  - LITELLM_API_KEY → LLMX_API_KEY
- Updated all documentation links to use absolute GitHub URLs instead of relative paths
  - Fixes 404 errors when viewing README on npm registry
  - All ./docs/ and ./LITELLM-SETUP.md links now point to github.com/valknarthing/llmx

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-13 12:18:31 +01:00
Sebastian Krüger
00eed932c0 feat: Use npm Trusted Publishers (OIDC) for automated publishing
- Add id-token: write permission for OIDC authentication
- Add --provenance flag to npm publish for supply chain security
- Use NODE_AUTH_TOKEN environment variable (set by setup-node)
- Remove manual .npmrc token writing (handled by setup-node with OIDC)

This enables automated npm publishing without storing tokens as secrets.
Requires Trusted Publisher to be configured at:
https://www.npmjs.com/package/@valknarthing/llmx/access

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-13 12:10:09 +01:00
15 changed files with 136 additions and 109 deletions


@@ -476,7 +476,7 @@ jobs:
tag: ${{ github.ref_name }}
config: .github/dotslash-config.json
# Publish to npm using authentication token
# Publish to npm using Trusted Publishers (OIDC)
publish-npm:
# Publish to npm for stable releases and alpha pre-releases with numeric suffixes.
if: ${{ needs.release.outputs.should_publish_npm == 'true' }}
@@ -485,6 +485,7 @@ jobs:
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write # Required for OIDC authentication
steps:
- name: Setup Node.js
@@ -492,6 +493,10 @@ jobs:
with:
node-version: 22
registry-url: "https://registry.npmjs.org"
scope: "@valknarthing"
- name: Update npm
run: npm install -g npm@latest
- name: Download npm tarballs from release
env:
@@ -511,10 +516,6 @@ jobs:
VERSION: ${{ needs.release.outputs.version }}
NPM_TAG: ${{ needs.release.outputs.npm_tag }}
run: |
# Write auth token to the .npmrc file that setup-node created
echo "//registry.npmjs.org/:_authToken=${{ secrets.NPM_TOKEN }}" >> ${NPM_CONFIG_USERCONFIG}
set -euo pipefail
tag_args=()
if [[ -n "${NPM_TAG}" ]]; then
@@ -526,24 +527,24 @@ jobs:
)
for tarball in "${tarballs[@]}"; do
npm publish "${GITHUB_WORKSPACE}/dist/npm/${tarball}" --access public "${tag_args[@]}"
npm publish "${GITHUB_WORKSPACE}/dist/npm/${tarball}" --provenance --access public "${tag_args[@]}"
done
update-branch:
name: Update latest-alpha-cli branch
permissions:
contents: write
needs: release
runs-on: ubuntu-latest
steps:
- name: Update latest-alpha-cli branch
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
set -euo pipefail
gh api \
repos/${GITHUB_REPOSITORY}/git/refs/heads/latest-alpha-cli \
-X PATCH \
-f sha="${GITHUB_SHA}" \
-F force=true
# update-branch:
# name: Update latest-alpha-cli branch
# permissions:
# contents: write
# needs: release
# runs-on: ubuntu-latest
#
# steps:
# - name: Update latest-alpha-cli branch
# env:
# GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# run: |
# set -euo pipefail
# gh api \
# repos/${GITHUB_REPOSITORY}/git/refs/heads/latest-alpha-cli \
# -X PATCH \
# -f sha="${GITHUB_SHA}" \
# -F force=true


@@ -47,56 +47,56 @@ LLMX is powered by [LiteLLM](https://docs.litellm.ai/), which provides access to
```bash
# Set your LiteLLM server URL (default: http://localhost:4000/v1)
export LITELLM_BASE_URL="http://localhost:4000/v1"
export LITELLM_API_KEY="your-api-key"
export LLMX_BASE_URL="http://localhost:4000/v1"
export LLMX_API_KEY="your-api-key"
# Run LLMX
llmx "hello world"
```
**Configuration:** See [LITELLM-SETUP.md](./LITELLM-SETUP.md) for detailed setup instructions.
**Configuration:** See [LITELLM-SETUP.md](https://github.com/valknarthing/llmx/blob/main/LITELLM-SETUP.md) for detailed setup instructions.
You can also use LLMX with ChatGPT or OpenAI API keys. For authentication options, see the [authentication docs](./docs/authentication.md).
You can also use LLMX with ChatGPT or OpenAI API keys. For authentication options, see the [authentication docs](https://github.com/valknarthing/llmx/blob/main/docs/authentication.md).
### Model Context Protocol (MCP)
LLMX can access MCP servers. To configure them, refer to the [config docs](./docs/config.md#mcp_servers).
LLMX can access MCP servers. To configure them, refer to the [config docs](https://github.com/valknarthing/llmx/blob/main/docs/config.md#mcp_servers).
### Configuration
LLMX CLI supports a rich set of configuration options, with preferences stored in `~/.llmx/config.toml`. For full configuration options, see [Configuration](./docs/config.md).
LLMX CLI supports a rich set of configuration options, with preferences stored in `~/.llmx/config.toml`. For full configuration options, see [Configuration](https://github.com/valknarthing/llmx/blob/main/docs/config.md).
---
### Docs & FAQ
- [**Getting started**](./docs/getting-started.md)
- [CLI usage](./docs/getting-started.md#cli-usage)
- [Slash Commands](./docs/slash_commands.md)
- [Running with a prompt as input](./docs/getting-started.md#running-with-a-prompt-as-input)
- [Example prompts](./docs/getting-started.md#example-prompts)
- [Custom prompts](./docs/prompts.md)
- [Memory with AGENTS.md](./docs/getting-started.md#memory-with-agentsmd)
- [**Configuration**](./docs/config.md)
- [Example config](./docs/example-config.md)
- [**Sandbox & approvals**](./docs/sandbox.md)
- [**Authentication**](./docs/authentication.md)
- [Auth methods](./docs/authentication.md#forcing-a-specific-auth-method-advanced)
- [Login on a "Headless" machine](./docs/authentication.md#connecting-on-a-headless-machine)
- [**Getting started**](https://github.com/valknarthing/llmx/blob/main/docs/getting-started.md)
- [CLI usage](https://github.com/valknarthing/llmx/blob/main/docs/getting-started.md#cli-usage)
- [Slash Commands](https://github.com/valknarthing/llmx/blob/main/docs/slash_commands.md)
- [Running with a prompt as input](https://github.com/valknarthing/llmx/blob/main/docs/getting-started.md#running-with-a-prompt-as-input)
- [Example prompts](https://github.com/valknarthing/llmx/blob/main/docs/getting-started.md#example-prompts)
- [Custom prompts](https://github.com/valknarthing/llmx/blob/main/docs/prompts.md)
- [Memory with AGENTS.md](https://github.com/valknarthing/llmx/blob/main/docs/getting-started.md#memory-with-agentsmd)
- [**Configuration**](https://github.com/valknarthing/llmx/blob/main/docs/config.md)
- [Example config](https://github.com/valknarthing/llmx/blob/main/docs/example-config.md)
- [**Sandbox & approvals**](https://github.com/valknarthing/llmx/blob/main/docs/sandbox.md)
- [**Authentication**](https://github.com/valknarthing/llmx/blob/main/docs/authentication.md)
- [Auth methods](https://github.com/valknarthing/llmx/blob/main/docs/authentication.md#forcing-a-specific-auth-method-advanced)
- [Login on a "Headless" machine](https://github.com/valknarthing/llmx/blob/main/docs/authentication.md#connecting-on-a-headless-machine)
- **Automating LLMX**
- [GitHub Action](https://github.com/valknarthing/llmx-action)
- [TypeScript SDK](./sdk/typescript/README.md)
- [Non-interactive mode (`llmx exec`)](./docs/exec.md)
- [**Advanced**](./docs/advanced.md)
- [Tracing / verbose logging](./docs/advanced.md#tracing--verbose-logging)
- [Model Context Protocol (MCP)](./docs/advanced.md#model-context-protocol-mcp)
- [**Zero data retention (ZDR)**](./docs/zdr.md)
- [**Contributing**](./docs/contributing.md)
- [**Install & build**](./docs/install.md)
- [System Requirements](./docs/install.md#system-requirements)
- [DotSlash](./docs/install.md#dotslash)
- [Build from source](./docs/install.md#build-from-source)
- [**FAQ**](./docs/faq.md)
- [TypeScript SDK](https://github.com/valknarthing/llmx/blob/main/sdk/typescript/README.md)
- [Non-interactive mode (`llmx exec`)](https://github.com/valknarthing/llmx/blob/main/docs/exec.md)
- [**Advanced**](https://github.com/valknarthing/llmx/blob/main/docs/advanced.md)
- [Tracing / verbose logging](https://github.com/valknarthing/llmx/blob/main/docs/advanced.md#tracing--verbose-logging)
- [Model Context Protocol (MCP)](https://github.com/valknarthing/llmx/blob/main/docs/advanced.md#model-context-protocol-mcp)
- [**Zero data retention (ZDR)**](https://github.com/valknarthing/llmx/blob/main/docs/zdr.md)
- [**Contributing**](https://github.com/valknarthing/llmx/blob/main/docs/contributing.md)
- [**Install & build**](https://github.com/valknarthing/llmx/blob/main/docs/install.md)
- [System Requirements](https://github.com/valknarthing/llmx/blob/main/docs/install.md#system-requirements)
- [DotSlash](https://github.com/valknarthing/llmx/blob/main/docs/install.md#dotslash)
- [Build from source](https://github.com/valknarthing/llmx/blob/main/docs/install.md#build-from-source)
- [**FAQ**](https://github.com/valknarthing/llmx/blob/main/docs/faq.md)
---


@@ -1,6 +1,6 @@
{
"name": "@valknarthing/llmx",
"version": "0.1.1",
"version": "0.1.2",
"license": "Apache-2.0",
"description": "LLMX CLI - Multi-provider coding agent powered by LiteLLM",
"bin": {

llmx-rs/Cargo.lock generated

@@ -178,7 +178,7 @@ checksum = "a23eb6b1614318a8071c9b2521f36b424b2c83db5eb3a0fead4a6c0809af6e61"
[[package]]
name = "app_test_support"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"assert_cmd",
@@ -945,7 +945,7 @@ checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b"
[[package]]
name = "core_test_support"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"assert_cmd",
@@ -2822,7 +2822,7 @@ checksum = "241eaef5fd12c88705a01fc1066c48c4b36e0dd4377dcdc7ec3942cea7a69956"
[[package]]
name = "llmx-ansi-escape"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"ansi-to-tui",
"ratatui",
@@ -2831,7 +2831,7 @@ dependencies = [
[[package]]
name = "llmx-app-server"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"app_test_support",
@@ -2866,7 +2866,7 @@ dependencies = [
[[package]]
name = "llmx-app-server-protocol"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"clap",
@@ -2884,7 +2884,7 @@ dependencies = [
[[package]]
name = "llmx-apply-patch"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"assert_cmd",
@@ -2899,7 +2899,7 @@ dependencies = [
[[package]]
name = "llmx-arg0"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"dotenvy",
@@ -2912,7 +2912,7 @@ dependencies = [
[[package]]
name = "llmx-async-utils"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"async-trait",
"pretty_assertions",
@@ -2936,7 +2936,7 @@ dependencies = [
[[package]]
name = "llmx-backend-openapi-models"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"serde",
"serde_json",
@@ -2945,7 +2945,7 @@ dependencies = [
[[package]]
name = "llmx-chatgpt"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"clap",
@@ -2960,7 +2960,7 @@ dependencies = [
[[package]]
name = "llmx-cli"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"assert_cmd",
@@ -3000,7 +3000,7 @@ dependencies = [
[[package]]
name = "llmx-cloud-tasks"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"async-trait",
@@ -3026,7 +3026,7 @@ dependencies = [
[[package]]
name = "llmx-cloud-tasks-client"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"async-trait",
@@ -3041,7 +3041,7 @@ dependencies = [
[[package]]
name = "llmx-common"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"clap",
"llmx-app-server-protocol",
@@ -3053,7 +3053,7 @@ dependencies = [
[[package]]
name = "llmx-core"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"askama",
@@ -3134,7 +3134,7 @@ dependencies = [
[[package]]
name = "llmx-exec"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"assert_cmd",
@@ -3167,7 +3167,7 @@ dependencies = [
[[package]]
name = "llmx-execpolicy"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"allocative",
"anyhow",
@@ -3187,7 +3187,7 @@ dependencies = [
[[package]]
name = "llmx-feedback"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"llmx-protocol",
@@ -3198,7 +3198,7 @@ dependencies = [
[[package]]
name = "llmx-file-search"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"clap",
@@ -3211,7 +3211,7 @@ dependencies = [
[[package]]
name = "llmx-git"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"assert_matches",
"once_cell",
@@ -3227,7 +3227,7 @@ dependencies = [
[[package]]
name = "llmx-keyring-store"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"keyring",
"tracing",
@@ -3235,7 +3235,7 @@ dependencies = [
[[package]]
name = "llmx-linux-sandbox"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"clap",
"landlock",
@@ -3248,7 +3248,7 @@ dependencies = [
[[package]]
name = "llmx-login"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"base64",
@@ -3272,7 +3272,7 @@ dependencies = [
[[package]]
name = "llmx-mcp-server"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"assert_cmd",
@@ -3299,7 +3299,7 @@ dependencies = [
[[package]]
name = "llmx-ollama"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"assert_matches",
"async-stream",
@@ -3315,7 +3315,7 @@ dependencies = [
[[package]]
name = "llmx-otel"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"chrono",
"eventsource-stream",
@@ -3336,14 +3336,14 @@ dependencies = [
[[package]]
name = "llmx-process-hardening"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"libc",
]
[[package]]
name = "llmx-protocol"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"base64",
@@ -3369,7 +3369,7 @@ dependencies = [
[[package]]
name = "llmx-responses-api-proxy"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"clap",
@@ -3385,7 +3385,7 @@ dependencies = [
[[package]]
name = "llmx-rmcp-client"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"axum",
@@ -3414,7 +3414,7 @@ dependencies = [
[[package]]
name = "llmx-stdio-to-uds"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"assert_cmd",
@@ -3425,7 +3425,7 @@ dependencies = [
[[package]]
name = "llmx-tui"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"arboard",
@@ -3490,7 +3490,7 @@ dependencies = [
[[package]]
name = "llmx-utils-cache"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"lru",
"sha1",
@@ -3499,7 +3499,7 @@ dependencies = [
[[package]]
name = "llmx-utils-image"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"base64",
"image",
@@ -3511,7 +3511,7 @@ dependencies = [
[[package]]
name = "llmx-utils-json-to-toml"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"pretty_assertions",
"serde_json",
@@ -3520,7 +3520,7 @@ dependencies = [
[[package]]
name = "llmx-utils-pty"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"portable-pty",
@@ -3529,7 +3529,7 @@ dependencies = [
[[package]]
name = "llmx-utils-readiness"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"assert_matches",
"async-trait",
@@ -3540,11 +3540,11 @@ dependencies = [
[[package]]
name = "llmx-utils-string"
version = "0.1.1"
version = "0.1.4"
[[package]]
name = "llmx-utils-tokenizer"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"pretty_assertions",
@@ -3655,7 +3655,7 @@ checksum = "47e1ffaa40ddd1f3ed91f717a33c8c0ee23fff369e3aa8772b9605cc1d22f4c3"
[[package]]
name = "mcp-types"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"schemars 0.8.22",
"serde",
@@ -3665,7 +3665,7 @@ dependencies = [
[[package]]
name = "mcp_test_support"
version = "0.1.1"
version = "0.1.4"
dependencies = [
"anyhow",
"assert_cmd",


@@ -43,7 +43,7 @@ members = [
resolver = "2"
[workspace.package]
version = "0.1.1"
version = "0.1.4"
# Track the edition for all workspace crates in one place. Individual
# crates can still override this value, but keeping it here means new
# crates created with `cargo new -w ...` automatically inherit the 2024


@@ -138,7 +138,7 @@ impl McpProcess {
client_info: ClientInfo {
name: "llmx-app-server-tests".to_string(),
title: None,
version: "0.1.1".to_string(),
version: "0.1.4".to_string(),
},
})?);
let req_id = self.send_request("initialize", params).await?;


@@ -26,7 +26,7 @@ async fn get_user_agent_returns_current_llmx_user_agent() -> Result<()> {
let os_info = os_info::get();
let user_agent = format!(
"llmx_cli_rs/0.1.1 ({} {}; {}) {} (llmx-app-server-tests; 0.1.1)",
"llmx_cli_rs/0.1.4 ({} {}; {}) {} (llmx-app-server-tests; 0.1.4)",
os_info.os_type(),
os_info.version(),
os_info.architecture().unwrap_or("unknown"),


@@ -161,6 +161,9 @@ pub(crate) async fn stream_chat_completions(
// aggregated assistant message was recorded alongside an earlier partial).
let mut last_assistant_text: Option<String> = None;
// Track call_ids of skipped function calls so we can also skip their outputs
let mut skipped_call_ids: std::collections::HashSet<String> = std::collections::HashSet::new();
for (idx, item) in input.iter().enumerate() {
match item {
ResponseItem::Message { role, content, .. } => {
@@ -175,7 +178,10 @@ pub(crate) async fn stream_chat_completions(
ContentItem::InputText { text: t }
| ContentItem::OutputText { text: t } => {
text.push_str(t);
items.push(json!({"type":"text","text": t}));
// Only add text content blocks that are non-empty
if !t.trim().is_empty() {
items.push(json!({"type":"text","text": t}));
}
}
ContentItem::InputImage { image_url } => {
saw_image = true;
@@ -184,6 +190,11 @@ pub(crate) async fn stream_chat_completions(
}
}
// Skip messages with empty or whitespace-only text content (unless they contain images)
if text.trim().is_empty() && !saw_image {
continue;
}
// Skip exact-duplicate assistant messages.
if role == "assistant" {
if let Some(prev) = &last_assistant_text
@@ -219,6 +230,15 @@ pub(crate) async fn stream_chat_completions(
call_id,
..
} => {
// Validate that arguments is valid JSON before sending to API
// If invalid, skip this function call to avoid API errors
if serde_json::from_str::<serde_json::Value>(arguments).is_err() {
debug!("Skipping malformed function call with invalid JSON arguments: {}", arguments);
// Track this call_id so we can also skip its corresponding output
skipped_call_ids.insert(call_id.clone());
continue;
}
let mut msg = json!({
"role": "assistant",
"content": null,
@@ -263,6 +283,12 @@ pub(crate) async fn stream_chat_completions(
messages.push(msg);
}
ResponseItem::FunctionCallOutput { call_id, output } => {
// Skip outputs for function calls that were skipped due to malformed arguments
if skipped_call_ids.contains(call_id) {
debug!("Skipping function call output for skipped call_id: {}", call_id);
continue;
}
// Prefer structured content items when available (e.g., images)
// otherwise fall back to the legacy plain-string content.
let content_value = if let Some(items) = &output.content_items {


@@ -144,7 +144,7 @@ impl McpProcess {
let initialized = self.read_jsonrpc_message().await?;
let os_info = os_info::get();
let user_agent = format!(
"llmx_cli_rs/0.1.1 ({} {}; {}) {} (elicitation test; 0.0.0)",
"llmx_cli_rs/0.1.4 ({} {}; {}) {} (elicitation test; 0.0.0)",
os_info.os_type(),
os_info.version(),
os_info.architecture().unwrap_or("unknown"),
@@ -163,7 +163,7 @@ impl McpProcess {
"serverInfo": {
"name": "llmx-mcp-server",
"title": "LLMX",
"version": "0.1.1",
"version": "0.1.4",
"user_agent": user_agent
},
"protocolVersion": mcp_types::MCP_SCHEMA_VERSION


@@ -5,7 +5,7 @@ expression: sanitized
/status
╭───────────────────────────────────────────────────────────────────────────╮
│ >_ LLMX (v0.1.1) │
│ >_ LLMX (v0.1.4) │
│ │
│ Visit https://chatgpt.com/llmx/settings/usage for up-to-date │
│ information on rate limits and credits │


@@ -5,7 +5,7 @@ expression: sanitized
/status
╭─────────────────────────────────────────────────────────────────╮
│ >_ LLMX (v0.1.1) │
│ >_ LLMX (v0.1.4) │
│ │
│ Visit https://chatgpt.com/llmx/settings/usage for up-to-date │
│ information on rate limits and credits │


@@ -5,7 +5,7 @@ expression: sanitized
/status
╭──────────────────────────────────────────────────────────────╮
│ >_ LLMX (v0.1.1) │
│ >_ LLMX (v0.1.4) │
│ │
│ Visit https://chatgpt.com/llmx/settings/usage for up-to-date │
│ information on rate limits and credits │


@@ -5,7 +5,7 @@ expression: sanitized
/status
╭──────────────────────────────────────────────────────────────╮
│ >_ LLMX (v0.1.1) │
│ >_ LLMX (v0.1.4) │
│ │
│ Visit https://chatgpt.com/llmx/settings/usage for up-to-date │
│ information on rate limits and credits │


@@ -5,7 +5,7 @@ expression: sanitized
/status
╭───────────────────────────────────────────────────────────────────╮
│ >_ LLMX (v0.1.1) │
│ >_ LLMX (v0.1.4) │
│ │
│ Visit https://chatgpt.com/llmx/settings/usage for up-to-date │
│ information on rate limits and credits │


@@ -5,7 +5,7 @@ expression: sanitized
/status
╭────────────────────────────────────────────╮
│ >_ LLMX (v0.1.1) │
│ >_ LLMX (v0.1.4) │
│ │
│ Visit https://chatgpt.com/llmx/settings/ │
│ usage for up-to-date │