
Conversation

Contributor

@reidliu41 reidliu41 commented Jul 3, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting a before/after results comparison or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

  • bench <bench_type> also has too many arguments, so this PR extends the --help=<keyword> search to it (a minimal sketch of how such filtering can work follows the sample output below):
Tip: Use `vllm [serve|run-batch|bench <bench_type>] --help=<keyword>` to explore arguments from help.
   - To view an argument group:    --help=ModelConfig
   - To view a single argument:    --help=max-num-seqs
   - To search by keyword:         --help=max
   - To list all groups:           --help=listgroup
  
$ vllm bench latency --help=options
INFO 07-03 17:42:07 [__init__.py:244] Automatically detected platform cpu.
options:
  --batch-size BATCH_SIZE
  --disable-detokenize  Do not detokenize responses (i.e. do not include detokenization time in the
                        latency measurement) (default: False)
  --disable-log-stats   Disable logging statistics. (default: False)
  --input-len INPUT_LEN
  --n N                 Number of generated sequences per prompt. (default: 1)
  --num-iters NUM_ITERS
                        Number of iterations to run. (default: 30)
  --num-iters-warmup NUM_ITERS_WARMUP
                        Number of iterations to run for warmup. (default: 10)
  --output-json OUTPUT_JSON
                        Path to save the latency results in JSON format. (default: None)
  --output-len OUTPUT_LEN
  --profile             profile the generation process of a single batch (default: False)
  --use-beam-search
  --use-v2-block-manager
                        [DEPRECATED] block manager v1 has been removed and SelfAttnBlockSpaceManager
                        (i.e. block manager v2) is now the default. Setting this flag to True or False
                        has no effect on vLLM behavior. (default: True)
  -h, --help            show this help message and exit
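
For readers unfamiliar with the feature, here is a minimal, self-contained sketch of how keyword-filtered help over argparse groups can work. It is illustrative only: the function name is hypothetical, it leans on private argparse internals (_action_groups, _group_actions), and it is not vLLM's actual implementation (the real logic lives in show_filtered_argument_or_group_from_help, discussed in the review below).

    import argparse

    def print_filtered_help(parser: argparse.ArgumentParser, keyword: str) -> None:
        """Print only the help entries matching `keyword` (illustrative sketch)."""
        if keyword == "listgroup":
            # --help=listgroup: list every argument group title.
            for group in parser._action_groups:
                print(group.title)
            return
        for group in parser._action_groups:
            if group.title and keyword.lower() == group.title.lower():
                # --help=ModelConfig: print one whole group.
                formatter = parser._get_formatter()
                formatter.start_section(group.title)
                formatter.add_arguments(group._group_actions)
                formatter.end_section()
                print(formatter.format_help())
                return
        for action in parser._actions:
            # --help=max: match individual flags by substring.
            if any(keyword in opt for opt in action.option_strings):
                print(", ".join(action.option_strings), "-", action.help or "")

With a parser whose groups mirror vLLM's, print_filtered_help(parser, "max") would print just the flags containing "max", matching the behavior advertised in the tip above.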

Test Plan

Test Result

(Optional) Documentation Update

@reidliu41 reidliu41 requested a review from aarnphm as a code owner July 3, 2025 09:42
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @reidliu41, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the user experience for vLLM's benchmark command-line interface by enhancing its help system. It enables a more granular and searchable help display for bench <bench_type> commands, making it much easier for users to discover and understand the numerous configuration options available. The changes involve both integrating the new help mechanism and refining the underlying parsing logic to support multi-part commands.

Highlights

  • Enhanced CLI Help: Integrated the filtered help functionality (--help=<keyword>) into the vllm bench <bench_type> subcommands, allowing users to search for specific arguments or argument groups.
  • Updated Help Epilog: Modified the general CLI help epilog to explicitly mention vllm bench <bench_type> as a command that supports the advanced --help=<keyword> search feature.
  • Improved Subcommand Parsing Logic: Refined the internal logic for show_filtered_argument_or_group_from_help to correctly identify and apply the filtered help to multi-part subcommands (like bench latency), ensuring accurate help display for complex command structures.

@mergify mergify bot added the frontend label Jul 3, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The code changes introduce the --help=<keyword> feature to the vllm bench subcommands. A potential issue exists in the subcommand detection logic, where argument values might incorrectly trigger the help function. A more robust approach would involve verifying the subcommand's position in sys.argv.

Comment on lines 188 to 200
    if " " in subcommand_name:
        # For "bench serve" - check if both parts are in sys.argv
        parts = subcommand_name.split()
        if not all(part in sys.argv for part in parts):
            return
    else:
        # For simple commands like "serve"
        if subcommand_name not in sys.argv:
            return
        # If this is "serve" but "bench" is also present,
        # let "bench serve" handle it
        if subcommand_name == "serve" and "bench" in sys.argv:
            return
Contributor

Severity: high

The current logic to check if the subcommand is active is not robust. It checks for the presence of subcommand parts anywhere in sys.argv, which can lead to incorrect behavior. For example, if a user runs vllm bench throughput --some-option latency --help=foo, this function would be incorrectly triggered for the bench latency subcommand because both "bench" and "latency" are present in sys.argv.

A more robust approach is to check if the subcommand parts appear in the correct order at the beginning of the command-line arguments (right after the script name). This can be done by slicing sys.argv. This change also simplifies the code by removing the if/else logic for single vs. multi-word subcommands and makes it more maintainable.

Suggested change

    -if " " in subcommand_name:
    -    # For "bench serve" - check if both parts are in sys.argv
    -    parts = subcommand_name.split()
    -    if not all(part in sys.argv for part in parts):
    -        return
    -else:
    -    # For simple commands like "serve"
    -    if subcommand_name not in sys.argv:
    -        return
    -    # If this is "serve" but "bench" is also present,
    -    # let "bench serve" handle it
    -    if subcommand_name == "serve" and "bench" in sys.argv:
    -        return
    +parts = subcommand_name.split()
    +# sys.argv[0] is the program name. The subcommand follows.
    +# e.g., for `vllm bench latency`, sys.argv is `['vllm', 'bench', 'latency', ...]`
    +# and subcommand_name is "bench latency".
    +if len(sys.argv) <= len(parts) or sys.argv[1:1 + len(parts)] != parts:
    +    return
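
To see the difference concretely, here is a small standalone comparison of the two checks (the helper names are hypothetical; the problematic argv is the one from the review comment above):

    def naive_match(argv: list[str], subcommand: str) -> bool:
        # Original approach: each part may appear anywhere in argv.
        return all(part in argv for part in subcommand.split())

    def prefix_match(argv: list[str], subcommand: str) -> bool:
        # Suggested approach: parts must directly follow the program name, in order.
        parts = subcommand.split()
        return len(argv) > len(parts) and argv[1:1 + len(parts)] == parts

    argv = ["vllm", "bench", "throughput", "--some-option", "latency", "--help=foo"]
    print(naive_match(argv, "bench latency"))     # True  -> false positive
    print(prefix_match(argv, "bench latency"))    # False -> correctly rejected
    print(prefix_match(argv, "bench throughput"))  # True  -> the actual subcommand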


github-actions bot commented Jul 3, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Signed-off-by: reidliu41 <reid201711@gmail.com>
@reidliu41 reidliu41 force-pushed the improve-bench-help branch from cb1cb11 to 75d4bab Compare July 3, 2025 09:58
@reidliu41
Contributor Author

cc @DarkLight1337

Signed-off-by: reidliu41 <reid201711@gmail.com>
@reidliu41
Contributor Author

updated

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) July 3, 2025 12:26
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Jul 3, 2025
@DarkLight1337 DarkLight1337 merged commit 9854dc9 into vllm-project:main Jul 3, 2025
74 of 80 checks passed
sfeng33 pushed a commit to sfeng33/vllm that referenced this pull request Jul 6, 2025
huydhn pushed a commit to huydhn/vllm that referenced this pull request Jul 8, 2025
Chen-zexi pushed a commit to Chen-zexi/vllm that referenced this pull request Jul 13, 2025
LyrisZhong pushed a commit to LyrisZhong/vllm that referenced this pull request Jul 23, 2025
avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
googlercolin pushed a commit to googlercolin/vllm that referenced this pull request Aug 29, 2025