
Conversation

@qthequartermasterman
Contributor

@qthequartermasterman qthequartermasterman commented Sep 17, 2025

Purpose

The implementation of prompt logprobs is incompatible with prompt embeds in both the existing v0 engine and in #24278. In that PR, @DarkLight1337 requested that requests containing both prompt embeds and prompt logprobs be blocked at the API level.

This PR carries changes originally made in #24278, but since these fixes apply to both engines and aren't strictly related to enabling prompt embedding support in the v1 engine, it made sense to split them into a separate PR for easier review and to reduce the diff in #24278.
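For context, a minimal sketch of the kind of API-level guard this PR adds (the function name, signature, and error message below are illustrative assumptions, not the exact code in serving_completion.py):

```python
# Illustrative sketch only -- not the exact vLLM implementation.
from typing import Any, Optional


def reject_prompt_embeds_with_prompt_logprobs(
        prompt_embeds: Optional[Any],
        prompt_logprobs: Optional[int]) -> None:
    """Raise if a request combines prompt embeds with prompt logprobs,
    which the engine cannot currently support together."""
    if prompt_embeds is not None and prompt_logprobs is not None:
        raise ValueError(
            "prompt_logprobs is not supported with prompt_embeds; "
            "remove one of the two fields from the request.")
```

The same kind of check can also be mirrored at the engine level as a defensive safeguard, so requests that bypass the API entrypoint are still rejected.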

Test Plan

Updated the prompt-embeds unit tests to verify that requests containing both prompt embeds and prompt logprobs are explicitly rejected.
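A rough sketch of the shape of such a test (pytest style; the `client` and `dummy_prompt_embeds` fixtures, request fields, and assertion details are assumptions rather than the exact test code):

```python
# Hedged sketch of the rejection test -- names and fields are illustrative.
import pytest


def test_prompt_logprobs_rejected_with_prompt_embeds(client, dummy_prompt_embeds):
    # `client` is assumed to be an OpenAI-compatible test client fixture and
    # `dummy_prompt_embeds` a base64-encoded embedding tensor.
    with pytest.raises(Exception) as exc_info:
        client.completions.create(
            model="dummy-model",
            prompt="",
            extra_body={
                "prompt_embeds": dummy_prompt_embeds,
                "prompt_logprobs": 1,
            },
        )
    # The API should reject the combination with an error naming the parameter.
    assert "prompt_logprobs" in str(exc_info.value)
```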

Test Result

New tests pass locally. Pending CI, this should be good to go. The full CI run on the original PR #24278 never failed in relation to these changes.


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces validation that prohibits the simultaneous use of prompt_logprobs and prompt_embeds in completion requests, addressing an incompatibility between the two features. The validation is implemented at two levels: at the API entrypoint in serving_completion.py, to give the user a clear error, and at the engine level in llm_engine.py, as a safeguard. A corresponding unit test verifies that the API correctly rejects such requests. The changes are logical, well tested, and effectively prevent the unsupported combination of parameters.

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) September 17, 2025 15:15
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Sep 17, 2025
qthequartermasterman added a commit to protopia-ai/vllm that referenced this pull request Sep 17, 2025
@qthequartermasterman
Contributor Author

@DarkLight1337 is it normal for Readthedocs to take over 10 hours to complete?

@DarkLight1337
Member

No, let me rebuild it

@DarkLight1337 DarkLight1337 merged commit bec060f into vllm-project:main Sep 18, 2025
53 of 55 checks passed
845473182 pushed a commit to dsxsteven/vllm_splitPR that referenced this pull request Sep 18, 2025
…litPR into model_register

* 'model_register' of https://github.com/dsxsteven/vllm_splitPR: (138 commits)
  Retrieve `sliding_window` from text config in Gemma3 MM (vllm-project#25085)
  [Docs] Fix API Reference (vllm-project#25140)
  [Kernel] Better inf handling for grouped topk cu (vllm-project#24886)
  [CLI] Use streaming in CLI chat and completion commands (vllm-project#23769)
  [benchmark] add peak throughput metrics and plot (vllm-project#23867)
  [Spec Decode] Efficient padded speculation (vllm-project#24539)
  [V0 Deprecation] Remove more V0 tests (vllm-project#25117)
  [EPLB] Add EPLB support for hunyuan_v1 (vllm-project#23078)
  [XPU] Whisper model support on XPU Platform (vllm-project#25123)
  Mark prompt logprobs as incompatible with prompt embeds at API level (vllm-project#25077)
  [Model] enable data parallel for InternVL vision encoder (vllm-project#23909)
  [Kernels] Overlap shared experts with combine instead of dispatch (vllm-project#24254)
  [Bugfix][Qwen3-Next] add prefixes to shared_expert in qwen3-next and mlp in qwen2moe to successfully load ignored params in quantized models (vllm-project#24960)
  [Core][MM] Cleanup `MultiModalCache` (vllm-project#25006)
  [Docs] Clean up the contributing README (vllm-project#25099)
  [MM Encoder] Apply DP ViT for Qwen3-VL model series (vllm-project#24955)
  [Kernels] Enable DeepGEMM by default (vllm-project#24462)
  [V0 Deprecation] Skip PP test (vllm-project#25128)
  [V0 Deprecation] Remove misc V0 tests (vllm-project#25118)
  [V0 Deprecation] Remove V0 Tracing & Metrics tests (vllm-project#25115)
  ...
debroy-rh pushed a commit to debroy-rh/vllm that referenced this pull request Sep 19, 2025
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
charlifu pushed a commit to ROCm/vllm that referenced this pull request Sep 25, 2025
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request Oct 11, 2025
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 24, 2025

Labels: frontend, ready
