
Conversation

@hksdpc255
Contributor

@hksdpc255 hksdpc255 commented Nov 2, 2025

Generalized and streaming-capable XML-style tool-call parsing with grammar enforcement and automatic template fixing.

Based on PR #15904, this patch introduces a generalized implementation for almost all XML-style tool-call formats.

Supported models

  • GLM 4.5/4.6
  • MiniMax M2
  • SeedOSS
  • Kimi-K2 (Thinking and non-thinking)
  • Qwen3-Coder (Thinking and non-thinking)
  • Apriel-1.5
  • Xiaomi-MiMo

Grammar-constrained tool-call outputs

Tool-call messages generated by the model are now strictly validated against a defined grammar.
A new automatic grammar generator simplifies the process of creating grammars for new models.
This ensures that all tool-call outputs are well-formed, structurally consistent, and reliably parsed.
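
To give a rough idea of what such a grammar looks like, here is a minimal sketch (in C++, with made-up rule and tag names) that emits a GBNF-style grammar constraining generation to <tool_call> blocks for a fixed set of tool names. It only shows the general shape; the actual generator in this PR derives its grammar per model.

// Illustrative sketch only: emit a GBNF-style grammar that restricts output
// to <tool_call>...</tool_call> blocks for a known set of tool names.
#include <iostream>
#include <string>
#include <vector>

static std::string xml_toolcall_grammar(const std::vector<std::string> & tools) {
    std::string alts;
    for (size_t i = 0; i < tools.size(); ++i) {
        if (i) alts += " | ";
        alts += "\"" + tools[i] + "\"";
    }
    std::string g;
    g += "root      ::= content (tool-call content)*\n";
    g += "content   ::= [^<]*\n";
    g += "tool-call ::= \"<tool_call>\" name args \"</tool_call>\"\n";
    g += "name      ::= \"<name>\" (" + alts + ") \"</name>\"\n";
    g += "args      ::= \"<arguments>\" [^<]* \"</arguments>\"\n";
    return g;
}

int main() {
    std::cout << xml_toolcall_grammar({"get_weather", "simple_addition_tool"});
}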

Streaming support for tool-call parsing

The parser now supports streaming parsing, enabling incremental processing of tool-call messages as they are generated.
This enhancement improves responsiveness and allows real-time interaction during model inference.
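
The core buffering idea can be sketched as follows (illustrative C++ only; the scanner name, tag, and interface are assumptions, not this PR's parser): text is surfaced as soon as it can no longer be the beginning of a tool-call tag, while a partially received tag is held back until more bytes arrive.

// Illustrative sketch only: surface plain content immediately, but hold back
// any trailing bytes that might still turn into the "<tool_call>" open tag.
#include <algorithm>
#include <iostream>
#include <string>

struct StreamScanner {
    std::string buf;

    // Returns the prefix of the buffered text that is safe to surface now.
    std::string feed(const std::string & chunk) {
        static const std::string open = "<tool_call>";
        buf += chunk;
        // Hold back the longest suffix of buf that is a prefix of the open tag.
        size_t hold = 0;
        for (size_t n = std::min(buf.size(), open.size() - 1); n > 0; --n) {
            if (buf.compare(buf.size() - n, n, open, 0, n) == 0) { hold = n; break; }
        }
        size_t tag      = buf.find(open);
        size_t emit_end = (tag != std::string::npos) ? tag : buf.size() - hold;
        std::string out = buf.substr(0, emit_end);
        buf.erase(0, emit_end); // the retained part (a possible tag) stays buffered
        return out;
    }
};

int main() {
    StreamScanner s;
    std::cout << s.feed("The answer is 4. <tool_") << "\n"; // "<tool_" is held back
    std::cout << s.feed("call><name>add</name>") << "\n";   // empty: now inside a tool call
}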

Automatic chat-template fixing

A lightweight Jinja2-based patcher has been added to automatically fix official chat templates before use.
With this change, official templates now work out of the box, eliminating the need for custom modifications.
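
One construct that trips up some templates is dict.items(), which can be rewritten to the dict | items filter form. The sketch below shows that rewrite as a plain text substitution in C++; it is only an illustration of the kind of fix involved, not necessarily how the patcher in this PR is implemented.

// Illustrative sketch only: rewrite "x.items()" into "x | items" in a template.
#include <iostream>
#include <regex>
#include <string>

static std::string patch_template(std::string tmpl) {
    tmpl = std::regex_replace(tmpl, std::regex(R"((\w+)\.items\(\))"), "$1 | items");
    return tmpl;
}

int main() {
    std::cout << patch_template("{% for k, v in _args.items() %}{{ k }}={{ v }}{% endfor %}\n");
    // prints: {% for k, v in _args | items %}{{ k }}={{ v }}{% endfor %}
}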

In-context reasoning

The parser now supports multiple reasoning blocks within a single generation, even when interleaved with tool calls.
All reasoning content is preserved. No information is lost during parsing or streaming.

Enhanced unit tests

Adds a unit test for the streaming-mode parser. It simulates the generation phase by feeding content character by character, comparing the parsed results and verifying that streaming and non-streaming modes reach the same final state.
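
The shape of that test is sketched below (the Parser type is a stand-in with an assumed interface; the real test drives the actual chat parser in both modes).

// Illustrative sketch only: feed the same generation character by character
// and in one shot, then check that both paths reach the same final state.
#include <cassert>
#include <string>

struct Parser {                                         // stand-in, not the real parser
    std::string content;
    void feed(const std::string & s) { content += s; }  // incremental path
    static Parser parse_all(const std::string & s) {    // one-shot path
        Parser p;
        p.feed(s);
        return p;
    }
};

static void test_streaming_matches_oneshot(const std::string & generation) {
    Parser streamed;
    for (char c : generation) {
        streamed.feed(std::string(1, c));                // character-by-character feeding
    }
    Parser oneshot = Parser::parse_all(generation);
    assert(streamed.content == oneshot.content);         // final states must agree
}

int main() {
    test_streaming_matches_oneshot("<think>plan</think><tool_call><name>add</name></tool_call>");
}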

Additional Notes

  • All unit tests have passed.
  • Community testing is welcome! Please try it out with your model integrations.
  • If your OpenAI-compatible client does not support sending reasoning_content back to the server, use the option --reasoning-format none.
  • When reporting issues, it’s recommended to add -lv 1 in the command line to enable more detailed logging.

Please use the chat template included in this PR, or any other chat template that you are certain will work correctly.

@MikeLP

MikeLP commented Nov 2, 2025

I'm looking forward to getting this PR merged!

@hksdpc255 Does it require the custom Jinja template from the previous PR, or does it work well as is?

@hksdpc255
Contributor Author

hksdpc255 commented Nov 2, 2025

For now, I’d recommend using a custom template if you’re running more complex workloads.
As for the embedded/official template, it won’t fail at the start, but it may be missing some features that your agent requires.

Edit: The official template is now working properly. There's no longer any need for a custom template.

Edit2: Official template support for Minimax-M2 has been removed. See comment and ochafik/minja#7 (comment) for details.

@ochafik
Collaborator

ochafik commented Nov 2, 2025

FYI I've updated (my fork of) Minja w/ support for GLM 4.6's template.
Might affect how you deal w/ the polyfills, as it should now detect GLM's tool call capability properly.

@hksdpc255
Contributor Author

@ochafik Excellent work! Once llama.cpp syncs your changes, some parts of this PR can be safely removed.

However, there are still a few small patches needed — for example, replacing dict.items() with dict | items.

@hksdpc255
Contributor Author

Currently, the official Minimax-M2 chat template fails to run tool calls because dict.items() and list[-1] are not supported by llama.cpp’s Jinja2 rendering engine.

@ochafik
Collaborator

ochafik commented Nov 3, 2025

Currently, the official Minimax-M2 chat template fails to run tool calls because dict.items() and list[-1] are not supported by llama.cpp’s Jinja2 rendering engine.

@hksdpc255 Both should be supported. The confusing error you probably got was because minja implements items() on dict but not on str. It should detect whether the template expects arguments to be an object instead of the more common JSON string of said object (see requires_object_arguments) and adjust the inputs accordingly; this now hopefully works for GLM 4.6.

As for list[-1], it's supported, but MiniMax M2's template has a bug; see this comment.

And please feel free to file bugs on https://github.com/ochafik/minja; it should be cleaner to add syntax support there than to patch things up in llama.cpp.

@hksdpc255
Contributor Author

@ochafik Thank you for pointing that out. I’m currently applying your suggested fix in llama.cpp and will test whether it works as expected. Thanks again for the help!

@hksdpc255
Contributor Author

Good news! The Minimax M2 tool call is now working.

I’ll push the fix later.

@hksdpc255
Contributor Author

hksdpc255 commented Nov 3, 2025

Screenshot from the Zed editor: [screenshot]

Model: unsloth's UD-Q3_K_XL

@hksdpc255 hksdpc255 mentioned this pull request Nov 3, 2025
@emuchogu

emuchogu commented Nov 3, 2025

Hi @hksdpc255,
I cloned your repo https://github.com/hksdpc255/llama.cpp/tree/xml_toolcall and unfortunately it's still not producing the initial <think> tag, at least in the CLI. See below.

Model: unsloth--MiniMax-M2-GGUF Q8_0

./llama-cli \
  -m /models/hub/models--unsloth--MiniMax-M2-GGUF/snapshots/*/Q8_0/MiniMax-M2-Q8_0-00001-of-00005.gguf \
  -ngl 99 \
  -sm layer \
  -ts 1,1,1,1,1,1,1,1 \
  -c 78000 \
  -t 16 \
  --jinja \
  -i

Output:

> what is the capital of france?
Okay, the user asked a straightforward question: "What is the capital of France?" This is basic geography knowledge, so the answer should be simple. I don't need to overcomplicate things. 

Hmm, maybe the user is just testing if I know basic facts, or perhaps they're new to this kind of question. Either way, the response should be clear and concise. No need for extra details unless they ask follow-ups. 

I recall that Paris is the capital of France. It's one of the most well-known capitals globally, so this should be an easy one. The user might be a student working on homework, or someone prepping for trivia. Or maybe they're just curious—either way, I should confirm it confidently. 

No signs of confusion or deeper needs here. The question is very direct. I'll just state the answer plainly. If they want more info later, like landmarks or history, they'll ask. For now, keep it simple: Paris is the capital. 

Wait, should I add that it's also a major cultural hub? Nah, overcomplicating it. Just the fact. Done.
</think>

The capital of France is **Paris**. 

Paris is not only the political center but also a major cultural, economic, and gastronomic hub, famous for landmarks like the Eiffel Tower, the Louvre Museum, Notre-Dame Cathedral, and the Champs-Élysées.

@hksdpc255
Contributor Author

@emuchogu Sorry, I haven’t tested it with llama-cli — only with llama-server.

If you want <think> and </think> to appear in the content, append --reasoning-format none when running llama-server.

I’m not sure whether llama-cli uses the same parsing logic.

ServeurpersoCom added a commit to ServeurpersoCom/llama.cpp that referenced this pull request Nov 3, 2025
@ServeurpersoCom
Collaborator

ServeurpersoCom commented Nov 3, 2025

I’ve reverted my previous PR (reasoning-format-minimax-m2) and merged PR #16932 into my testing-branch16 for isolated testing.
I’m running llama-swap with the new XML tool-call parser to check MiniMax-M2 compatibility without any synthetic injection, using --reasoning-format none to observe the parser’s raw behavior.

sendLoadingState: true

macros:
  llama-server: >
    ../llama.cpp.pascal/build/bin/llama-server
    --port 8081
    -ngl 999
    -ctk q8_0
    -ctv q8_0
    -fa on
    --mlock
    -np 1
    --jinja
  models: /var/www/ia/models
  proxy: http://127.0.0.1:8081

  MoE-MiniMax-M2-230B-A10B:
    cmd: |
      ${llama-server}
      -m ${models}/unsloth/MiniMax-M2-GGUF/MiniMax-M2-UD-Q2_K_XL-00001-of-00002.gguf
      --temp 1.0
      --top-p 0.95
      --top-k 40
      --n-cpu-moe 50
      --ctx-size 65536
      --reasoning-format none
    proxy: ${proxy}
    filters:
      strip_params: "temperature, top_p, top_k"

Without this PR:

Streaming, no initial <think> tag in the output:
[screenshot]

Curl without streaming, no initial <think> tag in the output:

(root|~/llama.cpp.pascal) curl http://127.0.0.1:8081/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MoE-MiniMax-M2-230B-A10B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,
    "stream": false
  }' | jq .
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1192  100   973  100   219    259     58  0:00:03  0:00:03 --:--:--   317
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The user asks: \"What is the capital of France?\" The answer is Paris. This is a simple question. There's no disallowed content. So the answer is \"Paris.\" Possibly also mention that it's Paris. So answer: \"The capital of France is Paris.\" There's no reason to go beyond that. There's no conflict with policy. So final answer: \"Paris.\"\n</think>\n\nThe capital of France is **Paris**."
      }
    }
  ],
  "created": 1762152163,
  "model": "MoE-MiniMax-M2-230B-A10B",
  "system_fingerprint": "b6942-5698549e7",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 85,
    "prompt_tokens": 29,
    "total_tokens": 114
  },
  "id": "chatcmpl-gfe455eld4ThdT1D7Ji6CtuJm6md4V7W",
  "timings": {
    "cache_n": 15,
    "prompt_n": 14,
    "prompt_ms": 273.966,
    "prompt_per_token_ms": 19.569,
    "prompt_per_second": 51.1012315396801,
    "predicted_n": 85,
    "predicted_ms": 3458.452,
    "predicted_per_token_ms": 40.6876705882353,
    "predicted_per_second": 24.577469920068282
  }
}
(root|~/llama.cpp.pascal)

With this PR:

Streaming:
Reasoning goes into reasoning_content:
[screenshot]

Curl without streaming, no initial <think> tag in the output:

(root|~/llama.cpp.pascal) curl http://127.0.0.1:8081/v1/chat/completions   -H "Content-Type: application/json"   -d '{
    "model": "MoE-MiniMax-M2-230B-A10B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,
    "stream": false
  }' | jq .
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1265  100  1046  100   219    251     52  0:00:04  0:00:04 --:--:--   304
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I'm looking at how to respond to the question: \"What is the capital of France?\" The user expects a straightforward answer, which is \"Paris.\" I’ll keep it simple and concise, but I might consider adding a brief note about the Eiffel Tower. However, since the user didn't ask for extra information, I’ll focus on just saying \"Paris\" to fulfill their request. I want to ensure I’m following their guidelines accurately.\n</think>\n\nParis."
      }
    }
  ],
  "created": 1762152603,
  "model": "MoE-MiniMax-M2-230B-A10B",
  "system_fingerprint": "b6943-0619a5b7d",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 92,
    "prompt_tokens": 29,
    "total_tokens": 121
  },
  "id": "chatcmpl-WqvR2S73aa7cZEyIN7lm42yuuatYZwqO",
  "timings": {
    "cache_n": 15,
    "prompt_n": 14,
    "prompt_ms": 278.533,
    "prompt_per_token_ms": 19.895214285714285,
    "prompt_per_second": 50.263344020277664,
    "predicted_n": 92,
    "predicted_ms": 3852.551,
    "predicted_per_token_ms": 41.87555434782609,
    "predicted_per_second": 23.88028088401685
  }
}
(root|~/llama.cpp.pascal)

@hksdpc255
Contributor Author

Oh! It seems you’re using non-streaming mode. I can now reproduce your issue with stream: false.

Let me dig into what’s happening…

@ServeurpersoCom
Collaborator

Oh! It seems you’re using non-streaming mode. I can now reproduce your issue with stream: false.

Let me dig into what’s happening…

Yes, exactly: it works correctly in streaming mode (tested through the SvelteUI, which is specifically designed to be debug-friendly without needing curl -N), but not in non-streaming mode.
So the initial <think> tag still doesn't appear when stream: false.

@ServeurpersoCom
Collaborator

ServeurpersoCom commented Nov 3, 2025

Toolcall debug on SvelteUI with your #16932 + #16618 :)

Custom JSON:

{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "simple_addition_tool",
        "description": "A dummy calculator tool used for testing multi-argument tool call streaming.",
        "parameters": {
          "type": "object",
          "properties": {
            "a": {
              "type": "number",
              "description": "The first number to add."
            },
            "b": {
              "type": "number",
              "description": "The second number to add."
            }
          },
          "required": ["a", "b"]
        }
      }
    }
  ]
}
[screenshots]

@hksdpc255
Contributor Author

hksdpc255 commented Nov 3, 2025

@ServeurpersoCom The problem is that I added some code that makes it fall back to llama.cpp’s original parser when there are no tools, so the new parser is never called.

llama.cpp/common/chat.cpp

Lines 2748 to 2753 in af5216e

if (!builder.syntax().parse_tool_calls) {
    // MiniMax-M2 uses <think>...</think> tags for reasoning content
    builder.try_parse_reasoning("<think>", "</think>");
    builder.add_content(builder.consume_rest());
    return;
}

Simply deleting the code above should fix the issue. I’ll run more tests before pushing a new commit.

[screenshot]

@ServeurpersoCom
Collaborator

ServeurpersoCom commented Nov 3, 2025

@ServeurpersoCom The problem is that I added some code that makes it fall back to llama.cpp’s original parser when there are no tools, so the new parser is never called.

I’ve successfully tested it without these lines of code and confirmed it works as expected for streaming / non-streaming / reasoning_content / tool calls.

@ServeurpersoCom
Collaborator

ServeurpersoCom commented Nov 3, 2025

I just realized this, and it seems strange: shouldn’t --reasoning-format none completely bypass any parsing logic instead of still going through it? It’s meant to be the raw passthrough mode for observing the model’s native output.

The .cpp files are already becoming huge and monolithic, making them harder to touch or refactor safely. The --reasoning-format options are also poorly named and not very explicit. In the long run, a modular templating system would help avoid piling up even more C++ parsing code.

If this work is meant to unify several next-generation parsers, maybe we could add a new keyword to --reasoning-format instead? It’s important to keep none as a truly no-parsing mode, since it’s essential for debugging new models.

Also, the current "auto" mode is actually just "deepseek" in practice, so it might be clearer to rename or document it that way to avoid confusion: and your unified detection logic could be implemented directly under auto (or deepseek, since they’re basically aliases) ?

hksdpc255 and others added 2 commits November 16, 2025 10:04
Co-authored-by: Sigbjørn Skjæret <[email protected]>
@hksdpc255
Contributor Author

hksdpc255 commented Nov 16, 2025

Not better here:

[@continuedev] error: 500 Unknown method: items at row 75, column 22:
{% set _args = tc.arguments or {} %}
{% for k, v in _args.items() %}
                     ^
<arg_key>{{ k }}</arg_key>
 at row 75, column 1:
{% set _args = tc.arguments or {} %}
{% for k, v in _args.items() %}
^
<arg_key>{{ k }}</arg_key>
 at row 69, column 29:
{% if m.tool_calls %}
{% for tc in m.tool_calls %}
                            ^
{%- if tc.function %}
 at row 69, column 1:
{% if m.tool_calls %}
{% for tc in m.tool_calls %}
^
{%- if tc.function %}
 at row 68, column 22:
{%- endif -%}
{% if m.tool_calls %}
                     ^
{% for tc in m.tool_calls %}
 at row 68, column 1:
{%- endif -%}
{% if m.tool_calls %}
^
{% for tc in m.tool_calls %}
 at row 48, column 35:
{{- '/nothink' if (enable_thinking is defined and not enable_thinking and not visible_text(m.content).endswith("/nothink")) else '' -}}
{%- elif m.role == 'assistant' -%}
                                  ^
<|assistant|>
 at row 45, column 1:
{% for m in messages %}
{%- if m.role == 'user' -%}<|user|>
^
{{ visible_text(m.content) }}
 at row 44, column 24:
{%- endfor %}
{% for m in messages %}
                       ^
{%- if m.role == 'user' -%}<|user|>
 at row 44, column 1:
{%- endfor %}
{% for m in messages %}
^
{%- if m.role == 'user' -%}<|user|>
 at row 1, column 1:
[gMASK]<sop>
^
{%- if tools -%}
 {"context":"llm_stream_chat","model":"unsloth/glm-4.5-air","provider":"openai","useOpenAIAdapter":true,"streamEnabled":true,"templateMessages":false}

Is that embedded / inline python? I can maybe try to fix myself

Could you try this?
https://github.com/ggml-org/llama.cpp/blob/ea4f0ac2dac4441a6d860b9ae2b9d6d0dbdec4d7/models/templates/GLM-4.6.jinja

Or even further, use the fixed template here: #15904

@CISC
Collaborator

CISC commented Nov 16, 2025

@hksdpc255 I don't think it's worth trying to fix what is basically invalid input (unless there is evidence it is caused by the parser, and not bad model output). Unless you have pending changes, shall we merge?

@hksdpc255
Contributor Author

hksdpc255 commented Nov 17, 2025

@CISC It’s impossible that the issue was caused by the parser.

However, there are still two problems:

  1. I want to revert commit b93a015. It may not be elegant, but it does handle more edge cases.
  2. Kimi-K2’s chat template still seems to have unresolved issues, especially this part; I don't think message.tool_call_id will be rendered as expected:
  {%- elif message['role'] == 'tool' -%}
    {%- set tool_call_id = message.tool_call_id -%}
    ## Return of {{ tool_call_id }}
{{render_content(message)}}
  {%- elif message['content'] is not none -%}

If you feel that’s acceptable, I will revert the changes in b93a015.

Regarding the Kimi-K2 chat template: I’m not very familiar with the template system myself, so I’m not sure how to fix that part. However, it doesn’t cause crashes; it only reduces model performance. So, if you think it’s fine, it should be safe to merge for now. I can help refine the template later.

@hksdpc255
Contributor Author

@CISC With help from the community, I’ve finally identified the root cause of Kimi-K2’s instability:

ikawrakow/ik_llama.cpp#958 (comment)

Now I should ask the maintainer whether partial implementation for a model is acceptable.
If not, we may need to remove the current Kimi-K2 support and wait for a more robust implementation.

@CISC
Collaborator

CISC commented Nov 17, 2025

I want to revert commit b93a015. It may not be elegant, but it does handle more edge cases.

My concern is that it doesn't really handle it; it just ignores the bad input. That said, crashing the template doesn't really handle it either. :)

Ultimately this is a problem with the model and would fail on transformers as well, so may I suggest a more informative change instead?

{% set _args = tc.arguments or {} %}
{% if _args is not mapping %}
    {{ raise_exception("Invalid tool call arguments passed: " + _args | string) }}
{% endif %}

Now I should ask the maintainer whether partial implementation for a model is acceptable. If not, we may need to remove the current Kimi-K2 support and wait for a more robust implementation.

A partial implementation is better than no implementation, and it can always be improved later; I'm fine with leaving it as-is for now.

@hksdpc255
Contributor Author

@CISC Applied your suggestions.

After reviewing the new commit cf92deb (mainly the chat-template improvements), it’s safe to merge.

@pwilkin
Collaborator

pwilkin commented Nov 17, 2025

Let me drop in quickly and say that your conclusions about Kimi are completely wrong :)

The reason this tool call happens within the thinking block is that it's not real tool calling. It's Roo / Cline using their own custom XML tool-calling format. They implemented it back in the days when a lot of models could actually be used for coding but didn't have support for tool calling (notably DeepSeek). Because of that, models like Kimi that will not perform their own tool calls within the thinking block might still do so with the custom format, because for them it's not really a tool call.

That's why Roo Code is finally adding native tool calling support, the beta for that is already out: RooCodeInc/Roo-Code#9159

@hksdpc255
Contributor Author

Perfect. Now all problems seem solved. Ready to merge.

@MikeLP

MikeLP commented Nov 17, 2025

@hksdpc255

Getting an error for glm4.5-air (unsloth) with tool-call streaming.

ERROR:    Error during streaming response: Error code: 500 - {'error': {'code': 500, 'message': 'Unknown method: items at row 75, column 22:\n{% set _args = tc.arguments %}\n{% for k, v in _args.items() %}\n                     ^\n<arg_key>{{ k }}</arg_key>\n at row 75, column 1:\n{% set _args = tc.arguments %}\n{% for k, v in _args.items() %}\n^\n<arg_key>{{ k }}</arg_key>\n at row 69, column 29:\n{% if m.tool_calls %}\n{% for tc in m.tool_calls %}\n                            ^\n{%- if tc.function %}\n at row 69, column 1:\n{% if m.tool_calls %}\n{% for tc in m.tool_calls %}\n^\n{%- if tc.function %}\n at row 68, column 22:\n{%- endif -%}\n{% if m.tool_calls %}\n                     ^\n{% for tc in m.tool_calls %}\n at row 68, column 1:\n{%- endif -%}\n{% if m.tool_calls %}\n^\n{% for tc in m.tool_calls %}\n at row 48, column 35:\n{{- \'/nothink\' if (enable_thinking is defined and not enable_thinking and not content.endswith("/nothink")) else \'\' -}}\n{%- elif m.role == \'assistant\' -%}\n                                  ^\n<|assistant|>\n at row 45, column 1:\n{% for m in messages %}\n{%- if m.role == \'user\' -%}<|user|>\n^\n{% set content = visible_text(m.content) %}{{ content }}\n at row 44, column 24:\n{%- endfor %}\n{% for m in messages %}\n                       ^\n{%- if m.role == \'user\' -%}<|user|>\n at row 44, column 1:\n{%- endfor %}\n{% for m in messages %}\n^\n{%- if m.role == \'user\' -%}<|user|>\n at row 1, column 1:\n[gMASK]<sop>\n^\n{%- if tools -%}\n', 'type': 'server_error'}}

@CISC
Collaborator

CISC commented Nov 17, 2025

Getting error for glm4.5-air (unslot) with tool call streaming.

ERROR:    Error during streaming response: Error code: 500 - {'error': {'code': 500, 'message': 'Unknown method: items at row 75, column 22:\n{% set _args = tc.arguments %}\n{% for k, v in _args.items() %}\n

Use the latest template from this PR.

..then you'll probably get a more informative error. :)

@MikeLP

MikeLP commented Nov 17, 2025

Getting error for glm4.5-air (unslot) with tool call streaming.

ERROR:    Error during streaming response: Error code: 500 - {'error': {'code': 500, 'message': 'Unknown method: items at row 75, column 22:\n{% set _args = tc.arguments %}\n{% for k, v in _args.items() %}\n

Use the latest template from this PR.

..then you'll probably get a more informative error. :)

I see. So it doesn't work with built-in templates. I just missed this part. Let me try again then.

@MikeLP

MikeLP commented Nov 17, 2025

I re-ran the web_search tool call with glm4.5-air and the custom Jinja template from the PR.

I'm getting this error:

 ERROR:    Error during streaming response: Error code: 500 - {'error': {'code': 500, 'message': 'Invalid tool call arguments passed: {"query":"\\"From Zero\\" Linkin Park album tracklist complete songs"limit":3,"type":"text"} at row 76, column 80:\n{% if _args is not mapping %}\n    {{ raise_exception("Invalid tool call arguments passed: " + _args | string) }}\n                                                                               ^\n{% endif %}\n at row 76, column 5:\n{% if _args is not mapping %}\n    {{ raise_exception("Invalid tool call arguments passed: " + _args | string) }}\n    ^\n{% endif %}\n at row 75, column 30:\n{% set _args = tc.arguments or {} %}\n{% if _args is not mapping %}\n                             ^\n    {{ raise_exception("Invalid tool call arguments passed: " + _args | string) }}\n at row 75, column 1:\n{% set _args = tc.arguments or {} %}\n{% if _args is not mapping %}\n^\n    {{ raise_exception("Invalid tool call arguments passed: " + _args | string) }}\n at row 69, column 29:\n{% if m.tool_calls %}\n{% for tc in m.tool_calls %}\n                            ^\n{%- if tc.function %}\n at row 69, column 1:\n{% if m.tool_calls %}\n{% for tc in m.tool_calls %}\n^\n{%- if tc.function %}\n at row 68, column 22:\n{%- endif -%}\n{% if m.tool_calls %}\n                     ^\n{% for tc in m.tool_calls %}\n at row 68, column 1:\n{%- endif -%}\n{% if m.tool_calls %}\n^\n{% for tc in m.tool_calls %}\n at row 48, column 35:\n{{- \'/nothink\' if (enable_thinking is defined and not enable_thinking and not visible_text(m.content).endswith("/nothink")) else \'\' -}}\n{%- elif m.role == \'assistant\' -%}\n                                  ^\n<|assistant|>\n at row 45, column 1:\n{% for m in messages %}\n{%- if m.role == \'user\' -%}<|user|>\n^\n{{ visible_text(m.content) }}\n at row 44, column 24:\n{%- endfor %}\n{% for m in messages %}\n                       ^\n{%- if m.role == \'user\' -%}<|user|>\n at row 44, column 1:\n{%- endfor %}\n{% for m in messages %}\n^\n{%- if m.role == \'user\' -%}<|user|>\n at row 1, column 1:\n[gMASK]<sop>\n^\n{%- if tools -%}\n', 'type': 'server_error'}}

I use --chat-template-file glm-4.6.jinja (path is correct)
Obviously the issue is in {"query":"\\"From Zero\\" Linkin Park album tracklist complete songs"limit":3,"type":"text"} with the missing comma.
But when I used the earliest versions of this PR, I never had this issue with the same model weights/params.

What am I doing wrong?

P.S.

I used the older template from the previous closed PR and it doesn't trigger this issue, at least for now. I'm going to test it more.

@CISC
Collaborator

CISC commented Nov 17, 2025

I use --chat-template-file glm-4.6.jinja (path is correct) Obviously issue is in {"query":"\\"From Zero\\" Linkin Park album tracklist complete songs"limit":3,"type":"text"} with the missing coma. But when I used the earliest versions of this PR, I never had this issue with the same model weights/params.

What I'm doing wrong?

Excellent, the error message works as intended, so you're doing everything correctly. But that is obviously bad model output: not just the missing comma, but also an unterminated string.

@MikeLP

MikeLP commented Nov 17, 2025

@hksdpc255 After the merge, can all the custom templates from this PR be used via --chat-template? Or should the client pass one as a file?

@CISC
Collaborator

CISC commented Nov 17, 2025

@hksdpc255 After merge all custom templates from this PR can be used via --chat-template? Or client should pass it as a file?

You have to use --jinja --chat-template-file.

@hksdpc255
Contributor Author

I re-run web_search tool call with glm4.5-air and custom jinja template from the PR.

I'm getting error

 ERROR:    Error during streaming response: Error code: 500 - {'error': {'code': 500, 'message': 'Invalid tool call arguments passed: {"query":"\\"From Zero\\" Linkin Park album tracklist complete songs"limit":3,"type":"text"} at row 76, column 80:\n{% if _args is not mapping %}\n    {{ raise_exception("Invalid tool call arguments passed: " + _args | string) }}\n                                                                               ^\n{% endif %}\n at row 76, column 5:\n{% if _args is not mapping %}\n    {{ raise_exception("Invalid tool call arguments passed: " + _args | string) }}\n    ^\n{% endif %}\n at row 75, column 30:\n{% set _args = tc.arguments or {} %}\n{% if _args is not mapping %}\n                             ^\n    {{ raise_exception("Invalid tool call arguments passed: " + _args | string) }}\n at row 75, column 1:\n{% set _args = tc.arguments or {} %}\n{% if _args is not mapping %}\n^\n    {{ raise_exception("Invalid tool call arguments passed: " + _args | string) }}\n at row 69, column 29:\n{% if m.tool_calls %}\n{% for tc in m.tool_calls %}\n                            ^\n{%- if tc.function %}\n at row 69, column 1:\n{% if m.tool_calls %}\n{% for tc in m.tool_calls %}\n^\n{%- if tc.function %}\n at row 68, column 22:\n{%- endif -%}\n{% if m.tool_calls %}\n                     ^\n{% for tc in m.tool_calls %}\n at row 68, column 1:\n{%- endif -%}\n{% if m.tool_calls %}\n^\n{% for tc in m.tool_calls %}\n at row 48, column 35:\n{{- \'/nothink\' if (enable_thinking is defined and not enable_thinking and not visible_text(m.content).endswith("/nothink")) else \'\' -}}\n{%- elif m.role == \'assistant\' -%}\n                                  ^\n<|assistant|>\n at row 45, column 1:\n{% for m in messages %}\n{%- if m.role == \'user\' -%}<|user|>\n^\n{{ visible_text(m.content) }}\n at row 44, column 24:\n{%- endfor %}\n{% for m in messages %}\n                       ^\n{%- if m.role == \'user\' -%}<|user|>\n at row 44, column 1:\n{%- endfor %}\n{% for m in messages %}\n^\n{%- if m.role == \'user\' -%}<|user|>\n at row 1, column 1:\n[gMASK]<sop>\n^\n{%- if tools -%}\n', 'type': 'server_error'}}

I use --chat-template-file glm-4.6.jinja (path is correct) Obviously issue is in {"query":"\\"From Zero\\" Linkin Park album tracklist complete songs"limit":3,"type":"text"} with the missing coma. But when I used the earliest versions of this PR, I never had this issue with the same model weights/params.

What I'm doing wrong?

P.S.

I used older template from the previous closed PR and it doesn't trigger this issue. At least for now. I'm going to test it more.

@MikeLP @CISC Wait, this problem seems to be caused by the parser. I may have introduced a regression while optimizing for Kimi-K2. I’ll take a look and prepare a quick fix.

In a rare case, the model may emit a raw string that begins with a valid JSON string. This commit adds unit tests to cover that scenario and fixes the regression introduced during the Kimi-K2 adaptation.
@hksdpc255
Contributor Author

@MikeLP @CISC Fixed. Unit test for this case also added.

@CISC
Collaborator

CISC commented Nov 18, 2025

Great catch! Congested CIs were useful for once... :)

ikawrakow pushed a commit to ikawrakow/ik_llama.cpp that referenced this pull request Nov 18, 2025
#958)

* port upstream ggml-org/llama.cpp#16932

* Add fixed chat templates.

* fix grammar when tool have no argument

* Insert additional stops for Kimi-K2

* Fix `no triggers set for lazy grammar!` for GLM4.5/4.6

* update chat.cpp

* fix grammar for GLM 4.5/4.6

* chat: Fix streaming parser for granite models (#15682)

* fix(chat): fix streaming parser for granite models

* tests: add test cases for Granite models chat parser

* common : Fix corrupted memory error on json grammar initialization (#16038)

Initializing RESERVED_NAME in is_reserved_name() is not thread
safe and leads to corrupted memory when used from multiple threads
as can be seen in the asan trace below. This fixes the initialization
to make it thread-safe.

    #0 0x000100abd018 in std::__1::pair<std::__1::__hash_iterator<std::__1::__hash_node<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, void*>*>, bool> std::__1::__hash_table<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>::__emplace_unique_key_args<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) __hash_table:1565
    #1 0x000100ab0320 in SchemaConverter::visit(nlohmann::json_abi_v3_12_0::basic_json<nlohmann::json_abi_v3_12_0::ordered_map, std::__1::vector, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, bool, long long, unsigned long long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector<unsigned char, std::__1::allocator<unsigned char>>, void> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) json-schema-to-grammar.cpp:802
    #2 0x000100aafc48 in std::__1::__function::__func<build_grammar(std::__1::function<void (common_grammar_builder const&)> const&, common_grammar_options const&)::$_2, std::__1::allocator<build_grammar(std::__1::function<void (common_grammar_builder const&)> const&, common_grammar_options const&)::$_2>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, nlohmann::json_abi_v3_12_0::basic_json<nlohmann::json_abi_v3_12_0::ordered_map, std::__1::vector, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, bool, long long, unsigned long long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector<unsigned char, std::__1::allocator<unsigned char>>, void> const&)>::operator()(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, nlohmann::json_abi_v3_12_0::basic_json<nlohmann::json_abi_v3_12_0::ordered_map, std::__1::vector, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, bool, long long, unsigned long long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector<unsigned char, std::__1::allocator<unsigned char>>, void> const&) function.h:319
    #3 0x000100a2c938 in std::__1::__function::__func<common_chat_params_init_llama_3_x(minja::chat_template const&, templates_params const&, bool)::$_0::operator()(common_grammar_builder const&) const::'lambda'(nlohmann::json_abi_v3_12_0::basic_json<nlohmann::json_abi_v3_12_0::ordered_map, std::__1::vector, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, bool, long long, unsigned long long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector<unsigned char, std::__1::allocator<unsigned char>>, void> const&), std::__1::allocator<common_chat_params_init_llama_3_x(minja::chat_template const&, templates_params const&, bool)::$_0::operator()(common_grammar_builder const&) const::'lambda'(nlohmann::json_abi_v3_12_0::basic_json<nlohmann::json_abi_v3_12_0::ordered_map, std::__1::vector, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, bool, long long, unsigned long long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector<unsigned char, std::__1::allocator<unsigned char>>, void> const&)>, void (nlohmann::json_abi_v3_12_0::basic_json<nlohmann::json_abi_v3_12_0::ordered_map, std::__1::vector, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, bool, long long, unsigned long long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector<unsigned char, std::__1::allocator<unsigned char>>, void> const&)>::operator()(nlohmann::json_abi_v3_12_0::basic_json<nlohmann::json_abi_v3_12_0::ordered_map, std::__1::vector, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, bool, long long, unsigned long long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector<unsigned char, std::__1::allocator<unsigned char>>, void> const&) function.h:319
    #4 0x000100a139f8 in foreach_function(nlohmann::json_abi_v3_12_0::basic_json<nlohmann::json_abi_v3_12_0::ordered_map, std::__1::vector, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, bool, long long, unsigned long long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector<unsigned char, std::__1::allocator<unsigned char>>, void> const&, std::__1::function<void (nlohmann::json_abi_v3_12_0::basic_json<nlohmann::json_abi_v3_12_0::ordered_map, std::__1::vector, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, bool, long long, unsigned long long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector<unsigned char, std::__1::allocator<unsigned char>>, void> const&)> const&) chat.cpp:762
    #5 0x000100a2a7f4 in std::__1::__function::__func<common_chat_params_init_llama_3_x(minja::chat_template const&, templates_params const&, bool)::$_0, std::__1::allocator<common_chat_params_init_llama_3_x(minja::chat_template const&, templates_params const&, bool)::$_0>, void (common_grammar_builder const&)>::operator()(common_grammar_builder const&) function.h:319
    #6 0x000100aa98f4 in build_grammar(std::__1::function<void (common_grammar_builder const&)> const&, common_grammar_options const&) json-schema-to-grammar.cpp:982
    #7 0x0001009c9314 in common_chat_params_init_llama_3_x(minja::chat_template const&, templates_params const&, bool) chat.cpp:1110
    #8 0x0001009b8afc in common_chat_templates_apply_jinja(common_chat_templates const*, common_chat_templates_inputs const&) chat.cpp:1992
    #9 0x0001009b533c in common_chat_templates_apply(common_chat_templates const*, common_chat_templates_inputs const&) chat.cpp:2074
    #10 0x000100810120 in llamacpp_apply_chat_template+0x724 (predict_oai-98384e17fb94e863:arm64+0x100090120)
    ...

==45482==Register values:
 x[0] = 0x00006020004147f8   x[1] = 0x00006080000013c8   x[2] = 0x0000000000000000   x[3] = 0x0000604006289738
 x[4] = 0x0000000000000002   x[5] = 0x0000000000000001   x[6] = 0x04034000004b4000   x[7] = 0x0000000000000001
 x[8] = 0xbebebebebebebebe   x[9] = 0x17d7d7d7d7d7d7d7  x[10] = 0x00000c04000828ff  x[11] = 0x0000000000000001
x[12] = 0x000000002018d383  x[13] = 0x0000000000000000  x[14] = 0xfa0000000000fafa  x[15] = 0x000010700001ffff
x[16] = 0x000000019dc012c0  x[17] = 0x00000001021284f8  x[18] = 0x0000000000000000  x[19] = 0x00000001700acdc0
x[20] = 0x0000000000000002  x[21] = 0x000000002018d384  x[22] = 0x16dd16fd2e731151  x[23] = 0x0000007000020000
x[24] = 0x0000000100c69c08  x[25] = 0x0000000100c69c20  x[26] = 0x00006080000013c7  x[27] = 0x0000000100c69c00
x[28] = 0x00000001700acd60     fp = 0x00000001700aceb0     lr = 0x0000000100abce30     sp = 0x00000001700acd60
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV __hash_table:1565 in std::__1::pair<std::__1::__hash_iterator<std::__1::__hash_node<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, void*>*>, bool> std::__1::__hash_table<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>::__emplace_unique_key_args<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&)
Thread T5 created by T0 here:
    #0 0x0001020b99d4 in pthread_create+0x5c (libclang_rt.asan_osx_dynamic.dylib:arm64e+0x359d4)
    #1 0x000100873910 in std::sys::pal::unix::thread::Thread::new::h77254fdd87a28e05+0x118 (predict_oai-98384e17fb94e863:arm64+0x1000f3910)
    #2 0x0001007c7a1c in test::run_test::haeb3c2bcd5ed6cf6+0x76c (predict_oai-98384e17fb94e863:arm64+0x100047a1c)
    #3 0x0001007aedb0 in test::console::run_tests_console::he9d142d704f3a986+0x149c (predict_oai-98384e17fb94e863:arm64+0x10002edb0)
    #4 0x0001007c5758 in test::test_main::hf86a5e20735245b9+0x118 (predict_oai-98384e17fb94e863:arm64+0x100045758)
    #5 0x0001007c5da0 in test::test_main_static::h61ee9c8fd30abca0+0x54 (predict_oai-98384e17fb94e863:arm64+0x100045da0)
    ...

==45482==ABORTING

* common : fix reasoning before forced tool call via tool_choice = required (#16264)

* common : fix reasoning before forced tool call via tool_choice = required

* common : improve reasoning and commentary handling when tool_choice is required

(cherry picked from commit c746984956d6882c2de73d53ae2bb3bdf889e475)

---------

Co-authored-by: Alde Rojas <[email protected]>

* Try fix Jinja template for GLM

* Improve Kimi-K2 chat template

* Fix "Invalid tool call arguments passed" in a rare case.

In a rare case, the model may emit a raw string that begins with a valid JSON string. This commit adds unit tests to cover that scenario and fixes the regression introduced during the Kimi-K2 adaptation.

---------

Co-authored-by: shun095 <[email protected]>
Co-authored-by: David Ribeiro Alves <[email protected]>
Co-authored-by: crat0z <[email protected]>
Co-authored-by: Alde Rojas <[email protected]>
@CISC CISC merged commit 1920345 into ggml-org:master Nov 18, 2025
71 of 72 checks passed
@aaronnewsome

Congrats to everyone on FINALLY getting this into main. I really appreciate all your work on this, @hksdpc255. I've been running the PR build for 6 days straight of HEAVY usage and not a single crash with MiniMax-M2. Bravo!

@pwilkin
Collaborator

pwilkin commented Nov 18, 2025

Did a little advertising for this PR on Reddit; kudos to @hksdpc255 for all his hard work and all the adjustments he made to make this PR work.

@hksdpc255 hksdpc255 deleted the xml_toolcall branch November 19, 2025 03:05

Labels: testing (Everything test related)
