Conversation

@reidliu41 (Contributor) commented Jun 13, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing the test command(s).
  • The test results, such as pasting a before/after results comparison or e2e results.
  • (Optional) Any necessary documentation updates, such as updating supported_models.md and examples for a new model.

Purpose

  • Modularize CLI Argument Parsing in Benchmark Scripts to clean up entry point

Test Plan

Test Result

(Optional) Documentation Update

Signed-off-by: reidliu41 <reid201711@gmail.com>


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@houseroad (Collaborator) left a comment


Better to run the benchmark scripts with the changes to ensure correctness.

@reidliu41 (Contributor, Author):

Thanks for the reminder, I think it should work:

$ tail benchmark_long_document_qa_throughput.py

    parser = EngineArgs.add_cli_args(parser)

    return parser


if __name__ == "__main__":
    parser = create_argument_parser()
    args = parser.parse_args()
    main(args)
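
The other benchmark scripts follow the same pattern: a `create_argument_parser()` helper builds the parser, and the `__main__` block shrinks to parse-and-dispatch. A minimal, self-contained sketch of the pattern, using plain `argparse` — the flag names and defaults here are illustrative assumptions, and the real scripts additionally append engine flags via `EngineArgs.add_cli_args(parser)`:

```python
import argparse


def create_argument_parser() -> argparse.ArgumentParser:
    """Build the CLI parser for the benchmark script."""
    parser = argparse.ArgumentParser(
        description="Benchmark long-document QA throughput.")
    # Illustrative flags only; the actual scripts define their own.
    parser.add_argument("--document-length", type=int, default=2048)
    parser.add_argument("--num-documents", type=int, default=8)
    parser.add_argument("--repeat-count", type=int, default=2)
    # In the real scripts, engine flags are appended here, e.g.:
    # parser = EngineArgs.add_cli_args(parser)
    return parser


def main(args: argparse.Namespace) -> None:
    # Placeholder for the benchmark body; the real scripts run the engine here.
    print(f"num_documents={args.num_documents} "
          f"document_length={args.document_length}")


if __name__ == "__main__":
    main(create_argument_parser().parse_args())
```

Keeping parser construction in its own function leaves the entry point trivial and lets tooling (docs generation, tests) import `create_argument_parser()` without executing the benchmark.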


python benchmark_long_document_qa_throughput.py \
  --model meta-llama/Llama-2-7b-chat-hf \
  --document-length 2048 \
  --num-documents 8 \
  --repeat-count 2 \
  --enable-prefix-caching \
  --max-model-len 4096

Capturing CUDA graphs: 100%|████████████████████████████████████████| 67/67 [00:13<00:00,  4.98it/s]
INFO 06-13 12:41:48 [gpu_model_runner.py:2058] Graph capturing finished in 13 secs, took 0.47 GiB
INFO 06-13 12:41:48 [core.py:171] init engine (profile, create kv cache, warmup model) took 20.25 seconds
------warm up------
Adding requests: 100%|███████████████████████████████████████████████| 8/8 [00:00<00:00, 739.17it/s]
Processed prompts: 100%|█| 8/8 [00:02<00:00,  3.38it/s, est. speed input: 6951.96 toks/s, output: 33
Time to execute all requests: 2.3780 secs
------start generating------
Adding requests: 100%|█████████████████████████████████████████████| 16/16 [00:00<00:00, 897.37it/s]
Processed prompts: 100%|█| 16/16 [00:02<00:00,  6.61it/s, est. speed input: 13563.17 toks/s, output:
Time to execute all requests: 2.4378 secs


python3 benchmark_serving.py \
  --backend vllm \
  --model NousResearch/Hermes-3-Llama-3.1-8B \
  --endpoint /v1/completions \
  --dataset-name sharegpt \
  --dataset-path /tmp/ShareGPT_V3_unfiltered_cleaned_split.json \
  --num-prompts 10

Maximum request concurrency: None
100%|███████████████████████████████████████████████████████████████| 10/10 [00:17<00:00,  1.80s/it]
============ Serving Benchmark Result ============
Successful requests:                     10
Benchmark duration (s):                  17.95
Total input tokens:                      1369
Total generated tokens:                  2278
Request throughput (req/s):              0.56
Output token throughput (tok/s):         126.87
Total Token throughput (tok/s):          203.12
---------------Time to First Token----------------
Mean TTFT (ms):                          197.14
Median TTFT (ms):                        215.81
P99 TTFT (ms):                           216.36
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          23.65
Median TPOT (ms):                        23.40
P99 TPOT (ms):                           24.70
---------------Inter-token Latency----------------
Mean ITL (ms):                           23.23
Median ITL (ms):                         23.09
P99 ITL (ms):                            24.39
==================================================


python3 benchmark_throughput.py \
  --model NousResearch/Hermes-3-Llama-3.1-8B \
  --dataset-name sonnet \
  --dataset-path sonnet.txt \
  --num-prompts 10
Adding requests: 100%|████████████████████████████████████████████| 10/10 [00:00<00:00, 1414.65it/s]
Processed prompts: 100%|█| 10/10 [00:04<00:00,  2.41it/s, est. speed input: 1210.23 toks/s, output:
Throughput: 2.41 requests/s, 1569.16 total tokens/s, 361.33 output tokens/s
Total num prompt tokens:  5014
Total num output tokens:  1500


python3 benchmark_latency.py \
  --model NousResearch/Hermes-3-Llama-3.1-8B \
  --batch-size 2
Warmup iterations: 100%|████████████████████████████████████████████| 10/10 [00:29<00:00,  2.94s/it]
Profiling iterations: 100%|█████████████████████████████████████████| 30/30 [01:28<00:00,  2.96s/it]
Avg latency: 2.95804654553552 seconds
10% percentile latency: 2.9449643844825912 seconds
25% percentile latency: 2.952844722523878 seconds
50% percentile latency: 2.955235666508088 seconds
75% percentile latency: 2.969451513257809 seconds
90% percentile latency: 2.9752879458043027 seconds
99% percentile latency: 2.976981793149898 seconds

python benchmark_prefix_caching.py \
    --model meta-llama/Llama-2-7b-chat-hf \
    --enable-prefix-caching \
    --num-prompts 1 \
    --repeat-count 100 \
    --input-length-range 128:256
Testing filtered requests
------start generating------
Adding requests: 100%|██████████████████████████████████████████| 100/100 [00:00<00:00, 3572.20it/s]
Processed prompts: 100%|█| 100/100 [00:00<00:00, 196.37it/s, est. speed input: 43204.25 toks/s, outp
cost time 0.5386934280395508


python benchmark_serving_structured_output.py \
  --backend vllm \
    --model NousResearch/Hermes-3-Llama-3.1-8B \
    --dataset json \
    --structured-output-ratio 1.0 \
    --request-rate 10 \
    --num-prompts 1000
Maximum request concurrency: None
100%|███████████████████████████████████████████████████████████| 1000/1000 [01:43<00:00,  9.70it/s]
============ Serving Benchmark Result ============
Successful requests:                     1000
Benchmark duration (s):                  103.12
Total input tokens:                      123000
Total generated tokens:                  82976
Request throughput (req/s):              9.70
Output token throughput (tok/s):         804.62
Total Token throughput (tok/s):          1997.35
---------------Time to First Token----------------
Mean TTFT (ms):                          57.73
Median TTFT (ms):                        46.48
P99 TTFT (ms):                           516.09
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          26.73
Median TPOT (ms):                        26.59
P99 TPOT (ms):                           32.96
---------------Inter-token Latency----------------
Mean ITL (ms):                           26.41
Median ITL (ms):                         25.86
P99 ITL (ms):                            29.82
==================================================
correct_rate(%) 100.0


python benchmark_prioritization.py \
  --model meta-llama/Llama-2-7b-chat-hf \
  --input-len 128 \
  --output-len 64 \
  --num-prompts 100 \
  --scheduling-policy priority
Capturing CUDA graph shapes: 100%|██████████████████████████████████| 35/35 [00:09<00:00,  3.61it/s]
INFO 06-13 13:08:48 [model_runner.py:1671] Graph capturing finished in 10 secs, took 0.24 GiB
INFO 06-13 13:08:48 [llm_engine.py:428] init engine (profile, create kv cache, warmup model) took 11.47 seconds
Adding requests: 100%|█████████████████████████████████████████| 100/100 [00:00<00:00, 10060.70it/s]
Processed prompts: 100%|█| 100/100 [00:04<00:00, 24.62it/s, est. speed input: 3151.16 toks/s, output
Throughput: 24.56 requests/s, 4714.81 tokens/s
[rank0]:[W613 13:08:52.958746604 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())

@reidliu41 (Contributor, Author):

Hi team, @DarkLight1337 could you help set the ready label if there are no problems? Thanks!

@ywang96 ywang96 added the ready ONLY add when PR is ready to merge/full CI is needed label Jun 14, 2025
@ywang96 (Member) commented Jun 14, 2025

hi team, @DarkLight1337 can you help to set ready if no problem? thanks

@reidliu41 Sorry I forgot to put on the ready label - added it just now!

@reidliu41 (Contributor, Author):

It seems all checks have passed; can this be merged?

@houseroad houseroad merged commit 6fa718a into vllm-project:main Jun 14, 2025
56 checks passed
yeqcharlotte pushed a commit to yeqcharlotte/vllm that referenced this pull request Jun 22, 2025
…ject#19593)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
minpeter pushed a commit to minpeter/vllm that referenced this pull request Jun 24, 2025
…ject#19593)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Signed-off-by: minpeter <kali2005611@gmail.com>
yangw-dev pushed a commit to yangw-dev/vllm that referenced this pull request Jun 24, 2025
…ject#19593)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Signed-off-by: Yang Wang <elainewy@meta.com>
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jun 30, 2025
…ject#19593)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
wseaton pushed a commit to wseaton/vllm that referenced this pull request Jun 30, 2025
…ject#19593)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025
…ject#19593)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Signed-off-by: avigny <47987522+avigny@users.noreply.github.com>
googlercolin pushed a commit to googlercolin/vllm that referenced this pull request Aug 29, 2025
…ject#19593)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Labels
ready ONLY add when PR is ready to merge/full CI is needed structured-output
Projects
Status: Done
3 participants