Conversation


pco111 (Contributor) commented Jul 24, 2025

Fixes #19353

Description

This pull request addresses an issue where MockLLM.stream_complete() raises a pydantic.ValidationError when called with an empty prompt ("") and max_tokens=None.

The root cause was that the internal generator for the stream would complete without yielding any CompletionResponse objects. The llm_completion_callback decorator would then attempt to build a final event with a None response, leading to the validation failure.
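For context, a minimal reproduction of the failure described above (exact traceback may vary by version):

from llama_index.core.llms import MockLLM

llm = MockLLM(max_tokens=None)
# Before this fix, fully consuming the stream raised a
# pydantic.ValidationError: the inner generator yielded no
# CompletionResponse objects for an empty prompt, so the
# llm_completion_callback decorator built its final event
# with a None response.
list(llm.stream_complete(""))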

This fix introduces a check at the beginning of the gen_prompt inner function. If the provided prompt is empty, it now yields a single CompletionResponse with empty text and delta attributes. This ensures the generator always produces a valid stream, even for empty inputs, thus resolving the bug.
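A sketch of the fix as described above. The pre-existing body of gen_prompt is paraphrased from MockLLM's echo-the-prompt streaming behavior and may not match the source exactly; the enclosing wrapper and import path are illustrative assumptions:

# Import path assumed; these types are re-exported by llama_index.core.llms.
from llama_index.core.llms import CompletionResponse, CompletionResponseGen

def stream_complete_sketch(prompt: str) -> CompletionResponseGen:
    # Hypothetical stand-in for the real stream_complete, shown only to
    # give gen_prompt an enclosing scope for `prompt`.
    def gen_prompt() -> CompletionResponseGen:
        # New guard: an empty prompt yields a single empty response, so
        # the stream always produces at least one CompletionResponse and
        # llm_completion_callback has a response for its final event.
        if not prompt:
            yield CompletionResponse(text="", delta="")
            return
        # Pre-existing behavior (paraphrased): stream the prompt back.
        for ch in prompt:
            yield CompletionResponse(text=prompt, delta=ch)

    return gen_prompt()

With the guard in place, list(stream_complete_sketch("")) yields exactly one empty response, mirroring the assertions in the test below.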

New Package?

  • Yes
  • No

Version Bump?

  • Yes
  • No

Type of Change

  • Bug fix (non-breaking change which fixes an issue)

How Has This Been Tested?

  • I added a new unit test to cover this change
  • I believe this change is already covered by existing unit tests

The unit test added for this change:

from llama_index.core.llms import MockLLM

def test_mock_llm_stream_complete_empty_prompt_no_max_tokens() -> None:
    """
    Test that MockLLM.stream_complete with an empty prompt and max_tokens=None
    does not raise a validation error.
    This test case is based on issue #19353.
    """
    llm = MockLLM(max_tokens=None)
    response_gen = llm.stream_complete("")
    
    # Consume the generator to trigger the potential error
    responses = list(response_gen)
    
    # Check that we received a single, empty response
    assert len(responses) == 1
    assert responses[0].text == ""
    assert responses[0].delta == ""

Note: A temporary, specific unit test was created to validate this fix, which passed successfully. The test was then removed to maintain a clean test suite, as its sole purpose was to confirm the resolution of this specific bug.

Suggested Checklist:

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added Google Colab support for the newly added notebooks.
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I ran uv run make format; uv run make lint to appease the lint gods

dosubot added the size:XS label (This PR changes 0-9 lines, ignoring generated files) on Jul 24, 2025
dosubot added the lgtm label (This PR has been approved by a maintainer) on Jul 25, 2025
logan-markewich enabled auto-merge (squash) on July 25, 2025 03:49
auto-merge was automatically disabled on July 25, 2025 16:49

Head branch was pushed to by a user without write access

pco111 force-pushed the fix/issue-19353-mockllm-stream-complete branch from a7868c6 to 89d3c88 on July 25, 2025 16:49
dosubot added the size:S label (This PR changes 10-29 lines, ignoring generated files) and removed the size:XS label on Jul 25, 2025
pco111 (Contributor, Author) commented Jul 25, 2025

Hello @logan-markewich, thank you for your previous approval. I found that one workflow check could not pass (because the latest changes lacked direct test coverage), so I added a test, which passes. Please take a look. Thank you!

AstraBert enabled auto-merge (squash) on July 28, 2025 11:41
AstraBert merged commit dd1f35b into run-llama:main on Jul 28, 2025
11 checks passed


Development

Successfully merging this pull request may close these issues:

[Bug]: MockLLM.stream_complete raises error on empty prompt when max_tokens=None
