Conversation

@mgoin mgoin commented Jun 27, 2025

Purpose

Improve the error message and properly raise a ValueError when a request specifies an empty "choice" list for structured outputs.
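
A minimal sketch of the kind of check this adds, for orientation only (the helper name _validate_structured_output comes from the review summary below; the exact signature and surrounding code are assumptions, not the actual diff):

def _validate_structured_output(self, params: SamplingParams) -> None:
    # ... existing structured-output validation ...
    if params.guided_decoding is not None:
        choice = params.guided_decoding.choice
        # Reject an empty choice list up front, so the request fails with a
        # clear ValueError (surfaced to the client as HTTP 400) instead of
        # reaching grammar compilation and crashing the engine core.
        if choice is not None and len(choice) == 0:
            raise ValueError(f"Choice '{choice}' cannot be an empty list")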

Test Plan

Server:

vllm serve Qwen/Qwen3-0.6B --enforce-eager

Client:

from openai import OpenAI

# Point the OpenAI client at the local vLLM server started above.
client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")

chat_response = client.chat.completions.create(
    model=client.models.list().data[0].id,
    messages=[{
        "role": "user",
        "content": [{"type": "text", "text": "Who are you?"}],
    }],
    # An empty guided_choice list is invalid; the server should reject it
    # with HTTP 400 rather than crashing.
    extra_body={
        "guided_choice": []
    },
)

Test Result

Before (the server would crash):

INFO 06-27 17:12:40 [async_llm.py:270] Added request chatcmpl-5e577b9ff48c43b391a6a74fc22d15da.
ERROR 06-27 17:12:40 [backend_xgrammar.py:113] Validation should have already occurred. Please file an issue.
ERROR 06-27 17:12:41 [core.py:521] EngineCore encountered a fatal error.
ERROR 06-27 17:12:41 [core.py:521] Traceback (most recent call last):
ERROR 06-27 17:12:41 [core.py:521]   File "/home/mgoin/code/vllm/vllm/v1/engine/core.py", line 512, in run_engine_core
ERROR 06-27 17:12:41 [core.py:521]     engine_core.run_busy_loop()
ERROR 06-27 17:12:41 [core.py:521]   File "/home/mgoin/code/vllm/vllm/v1/engine/core.py", line 539, in run_busy_loop
ERROR 06-27 17:12:41 [core.py:521]     self._process_engine_step()
ERROR 06-27 17:12:41 [core.py:521]   File "/home/mgoin/code/vllm/vllm/v1/engine/core.py", line 564, in _process_engine_step
ERROR 06-27 17:12:41 [core.py:521]     outputs, model_executed = self.step_fn()
ERROR 06-27 17:12:41 [core.py:521]                               ^^^^^^^^^^^^^^
ERROR 06-27 17:12:41 [core.py:521]   File "/home/mgoin/code/vllm/vllm/v1/engine/core.py", line 234, in step
ERROR 06-27 17:12:41 [core.py:521]     scheduler_output = self.scheduler.schedule()
ERROR 06-27 17:12:41 [core.py:521]                        ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-27 17:12:41 [core.py:521]   File "/home/mgoin/code/vllm/vllm/v1/core/sched/scheduler.py", line 361, in schedule
ERROR 06-27 17:12:41 [core.py:521]     if structured_output_req and structured_output_req.grammar:
ERROR 06-27 17:12:41 [core.py:521]                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-27 17:12:41 [core.py:521]   File "/home/mgoin/code/vllm/vllm/v1/structured_output/request.py", line 45, in grammar
ERROR 06-27 17:12:41 [core.py:521]     completed = self._check_grammar_completion()
ERROR 06-27 17:12:41 [core.py:521]                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-27 17:12:41 [core.py:521]   File "/home/mgoin/code/vllm/vllm/v1/structured_output/request.py", line 33, in _check_grammar_completion
ERROR 06-27 17:12:41 [core.py:521]     self._grammar = self._grammar.result(timeout=0.0001)
ERROR 06-27 17:12:41 [core.py:521]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-27 17:12:41 [core.py:521]   File "/home/mgoin/.local/share/uv/python/cpython-3.12.4-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 456, in result
ERROR 06-27 17:12:41 [core.py:521]     return self.__get_result()
ERROR 06-27 17:12:41 [core.py:521]            ^^^^^^^^^^^^^^^^^^^
ERROR 06-27 17:12:41 [core.py:521]   File "/home/mgoin/.local/share/uv/python/cpython-3.12.4-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
ERROR 06-27 17:12:41 [core.py:521]     raise self._exception
ERROR 06-27 17:12:41 [core.py:521]   File "/home/mgoin/.local/share/uv/python/cpython-3.12.4-linux-x86_64-gnu/lib/python3.12/concurrent/futures/thread.py", line 58, in run
ERROR 06-27 17:12:41 [core.py:521]     result = self.fn(*self.args, **self.kwargs)
ERROR 06-27 17:12:41 [core.py:521]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-27 17:12:41 [core.py:521]   File "/home/mgoin/code/vllm/vllm/v1/structured_output/__init__.py", line 109, in _async_create_grammar
ERROR 06-27 17:12:41 [core.py:521]     return self.backend.compile_grammar(request_type, grammar_spec)
ERROR 06-27 17:12:41 [core.py:521]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-27 17:12:41 [core.py:521]   File "/home/mgoin/code/vllm/vllm/v1/structured_output/backend_xgrammar.py", line 116, in compile_grammar
ERROR 06-27 17:12:41 [core.py:521]     raise ValueError(
ERROR 06-27 17:12:41 [core.py:521] ValueError: grammar is not of valid supported types. (StructuredOutputOptions.CHOICE)
ERROR 06-27 17:12:41 [async_llm.py:419] AsyncLLM output_handler failed.
ERROR 06-27 17:12:41 [async_llm.py:419] Traceback (most recent call last):
ERROR 06-27 17:12:41 [async_llm.py:419]   File "/home/mgoin/code/vllm/vllm/v1/engine/async_llm.py", line 378, in output_handler
ERROR 06-27 17:12:41 [async_llm.py:419]     outputs = await engine_core.get_output_async()
ERROR 06-27 17:12:41 [async_llm.py:419]               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-27 17:12:41 [async_llm.py:419]   File "/home/mgoin/code/vllm/vllm/v1/engine/core_client.py", line 809, in get_output_async
ERROR 06-27 17:12:41 [async_llm.py:419]     raise self._format_exception(outputs) from None
ERROR 06-27 17:12:41 [async_llm.py:419] vllm.v1.engine.exceptions.EngineDeadError: EngineCore encountered an issue. See stack trace (above) for the root cause.
INFO 06-27 17:12:41 [async_llm.py:345] Request chatcmpl-5e577b9ff48c43b391a6a74fc22d15da failed (engine dead).
INFO:     127.0.0.1:55444 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
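
In other words, the empty choice list slipped past request validation ("Validation should have already occurred") and only failed during grammar compilation inside the engine core's busy loop, so the ValueError took down the entire engine rather than just the offending request.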

After:

Client:
Traceback (most recent call last):
  File "/home/mgoin/code/t.py", line 9, in <module>
    chat_response = client.chat.completions.create(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mgoin/venvs/vllm/lib/python3.12/site-packages/openai/_utils/_utils.py", line 287, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/mgoin/venvs/vllm/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 925, in create
    return self._post(
           ^^^^^^^^^^^
  File "/home/mgoin/venvs/vllm/lib/python3.12/site-packages/openai/_base_client.py", line 1239, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mgoin/venvs/vllm/lib/python3.12/site-packages/openai/_base_client.py", line 1034, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': "Choice '[]' cannot be an empty list", 'type': 'BadRequestError', 'param': None, 'code': 400}

Server:
INFO:     127.0.0.1:40160 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
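
For reference, a client can catch the new failure mode explicitly. A minimal sketch using the openai SDK (the exception type matches the BadRequestError in the client trace above; the status_code and message attributes are the SDK's, used here on the assumption they apply):

import openai
from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")

try:
    client.chat.completions.create(
        model=client.models.list().data[0].id,
        messages=[{"role": "user", "content": "Who are you?"}],
        extra_body={"guided_choice": []},
    )
except openai.BadRequestError as e:
    # The request now fails fast with HTTP 400 and a descriptive message,
    # while the server stays alive for subsequent requests.
    print(e.status_code, e.message)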

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, a small but essential subset of tests meant to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀


@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @mgoin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug where providing an empty list for the guided_choice parameter in structured output requests caused the server to crash. The change introduces a specific validation check that gracefully rejects such invalid inputs, returning a clear error to the client instead of an internal server error, which significantly improves stability and the user experience.

Highlights

  • Robust Input Validation: Implemented a new validation check within _validate_structured_output to explicitly reject requests where guided_decoding.choice is an empty list, preventing server crashes.
  • Improved Error Handling: Requests with an empty guided_choice list will now result in a BadRequestError (HTTP 400) with a clear message, rather than causing an internal server error and engine crash.

@mergify mergify bot added the v1 label Jun 27, 2025
@mgoin mgoin requested a review from russellb June 27, 2025 19:12

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively addresses a critical bug where an empty guided_choice list would crash the server. The fix introduces a validation check early in the request processing pipeline, which is the correct approach. My feedback focuses on improving the clarity of the error message for a better user experience.

@mgoin mgoin added the bug Something isn't working label Jun 27, 2025

@russellb russellb left a comment


thank you!

@russellb russellb enabled auto-merge (squash) June 27, 2025 19:16
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Jun 27, 2025
@vllm-bot vllm-bot merged commit f719772 into vllm-project:main Jun 28, 2025
80 of 82 checks passed
@mgoin mgoin deleted the fix-empty-choice-crash branch June 28, 2025 09:54
CSWYF3634076 pushed a commit to CSWYF3634076/vllm that referenced this pull request Jul 2, 2025
avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025
googlercolin pushed a commit to googlercolin/vllm that referenced this pull request Aug 29, 2025