Conversation

WoosukKwon
Collaborator

Fixes a bug in spec decoding introduced in #20291, where `token_ids_cpu` wasn't properly updated because the draft token IDs were incorrectly gated by `if not last_rank`.

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
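
Purely for illustration, here is a minimal, self-contained sketch of the gating pattern the description above refers to. The function `append_draft_tokens`, the array shapes, and the `is_last_rank` flag are assumptions made for this example and do not reproduce the actual `gpu_model_runner.py` diff:

```python
# A hedged, self-contained sketch of the bug pattern (not the actual vLLM code).
import numpy as np

MAX_TOKENS = 16

def append_draft_tokens(token_ids_cpu: np.ndarray,
                        num_tokens: np.ndarray,
                        req_idx: int,
                        draft_token_ids: list[int],
                        is_last_rank: bool) -> None:
    """Append draft (speculative) token IDs for one request."""
    start = int(num_tokens[req_idx])
    end = start + len(draft_token_ids)

    # Buggy variant (what the PR fixes): the CPU-side update was gated by
    # `if not is_last_rank`, so on the last pipeline-parallel rank (or a
    # single-GPU run) the draft tokens were never written into token_ids_cpu
    # and num_tokens was never advanced.
    #
    # if not is_last_rank:
    #     token_ids_cpu[req_idx, start:end] = draft_token_ids
    #     num_tokens[req_idx] = end

    # Fixed variant: the bookkeeping runs unconditionally, regardless of rank.
    token_ids_cpu[req_idx, start:end] = draft_token_ids
    num_tokens[req_idx] = end


# Toy usage: one request that already holds 4 tokens, then 2 draft tokens.
token_ids_cpu = np.zeros((1, MAX_TOKENS), dtype=np.int64)
num_tokens = np.array([4])
token_ids_cpu[0, :4] = [11, 12, 13, 14]
append_draft_tokens(token_ids_cpu, num_tokens, req_idx=0,
                    draft_token_ids=[21, 22], is_last_rank=True)
assert num_tokens[0] == 6 and token_ids_cpu[0, 4:6].tolist() == [21, 22]
```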

github-actions bot commented Jul 6, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @WoosukKwon, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug in the speculative decoding mechanism within the GPU model runner. It refactors the token ID update logic to ensure that speculative tokens are consistently added to the token_ids_cpu array and correctly accounted for in the num_tokens count, thereby resolving an issue that could lead to incorrect model state during speculative inference.

Highlights

  • Bug Fix: Speculative Decoding Token IDs: Corrected a bug in `gpu_model_runner.py` where `token_ids_cpu` was not being properly updated with speculative tokens. The issue arose because the logic for adding draft token IDs was incorrectly gated by a conditional, leaving an incomplete representation of the token sequence.
  • State Management Correction: Ensured that the `num_tokens` count for each request accurately reflects the total number of tokens, including those added via speculative decoding. Previously, `num_tokens` might not have accounted for speculative tokens, leading to inconsistencies in the model's internal state (see the sketch below).
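
To make the second highlight concrete, here is a small, hedged sketch of the invariant it describes; `RequestState` and its fields are illustrative assumptions for this example, not the actual vLLM request-state class:

```python
# Illustrative only: num_tokens must count speculative (draft) tokens too.
from dataclasses import dataclass, field

@dataclass
class RequestState:
    prompt_token_ids: list[int]
    output_token_ids: list[int] = field(default_factory=list)
    spec_token_ids: list[int] = field(default_factory=list)

    @property
    def num_tokens(self) -> int:
        # Leaving spec_token_ids out of this sum is exactly the kind of
        # inconsistency the fix removes.
        return (len(self.prompt_token_ids)
                + len(self.output_token_ids)
                + len(self.spec_token_ids))

req = RequestState(prompt_token_ids=[1, 2, 3], output_token_ids=[4],
                   spec_token_ids=[5, 6])
assert req.num_tokens == 6
```
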
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@mergify mergify bot added the v1 label Jul 6, 2025
@WoosukKwon WoosukKwon added this to the v0.9.2 milestone Jul 6, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly fixes a bug in speculative decoding where token_ids_cpu was not updated with speculative tokens on the last rank of a pipeline parallel group or in a single-GPU setup. The fix involves moving the token update logic out of a conditional block. The implementation of the fix is sound. I've added one suggestion to improve code clarity by renaming a reused variable to prevent potential confusion during future maintenance.

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) July 6, 2025 17:12
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Jul 6, 2025
@DarkLight1337 DarkLight1337 merged commit 9528e3a into main Jul 6, 2025
73 of 75 checks passed
@DarkLight1337 DarkLight1337 deleted the woosuk/fix-spec-decode branch July 6, 2025 19:44
huydhn pushed a commit to huydhn/vllm that referenced this pull request Jul 8, 2025
[BugFix][Spec Decode] Fix spec token ids in model runner (vllm-project#20530)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Chen-zexi pushed a commit to Chen-zexi/vllm that referenced this pull request Jul 13, 2025
[BugFix][Spec Decode] Fix spec token ids in model runner (vllm-project#20530)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
vaibhavjainwiz pushed a commit to red-hat-data-services/vllm that referenced this pull request Jul 15, 2025
Sync to v0.9.2 + remove libsodium + [fix cachetokenizer](neuralmagic/nm-vllm-ent@1423512)

git log:
```
commit 7b94527 (HEAD -> sync-v0.9.2, nm-fork/sync-v0.9.2)
Merge: 1423512 d07be8a
Author: Selbi Nuryyeva <selbi@redhat.com>
Date:   Fri Jul 11 07:03:51 2025 -0400

    Merge remote-tracking branch 'nm-fork/main' into sync-v0.9.2

commit 1423512
Author: Isotr0py <mozf@mail2.sysu.edu.cn>
Date:   Mon Jun 30 18:16:16 2025 +0800

    disable using CacheTokenizer for transformers >= 4.53.0
    
    fixes vllm-project#20224
    
    addendum to vllm-project#20244

commit d07be8a (nm-fork/main, nm-fork/HEAD)
Merge: bbccdbe 02152ad
Author: Daniele <36171005+dtrifiro@users.noreply.github.com>
Date:   Wed Jul 9 15:18:56 2025 +0200

    Dockerfile*.ubi: remove libsodium (opendatahub-io#245)
    
    It's not needed anymore
    
    https://issues.redhat.com/browse/INFERENG-848

commit 7dd12da
Merge: bbccdbe a5dd03c
Author: Selbi Nuryyeva <selbi@redhat.com>
Date:   Tue Jul 8 10:08:37 2025 -0400

    Merge branch 'v0.9.2-upstream' into sync-v0.9.2

commit a5dd03c (tag: v0.9.2rc2, tag: v0.9.2, upstream/releases/v0.9.2, v0.9.2-upstream, upstream-v0.9.2)
Author: simon-mo <simon.mo@hey.com>
Date:   Sun Jul 6 14:02:36 2025 -0700

    Revert "[V0 deprecation] Remove V0 CPU/XPU/TPU backends (vllm-project#20412)"
    
    This reverts commit e202dd2.

commit c18b3b8
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Mon Jul 7 05:01:48 2025 +0800

    [Bugfix] Add `use_cross_encoder` flag to use correct activation in `ClassifierPooler` (vllm-project#20527)
    
    Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

commit 9528e3a
Author: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Date:   Sun Jul 6 12:44:52 2025 -0700

    [BugFix][Spec Decode] Fix spec token ids in model runner (vllm-project#20530)
    
    Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

commit 9fb52e5
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Mon Jul 7 00:54:36 2025 +0800

    [V1] Support any head size for FlexAttention backend (vllm-project#20467)
    
    Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
```

Test:
CUDA: https://github.com/neuralmagic/nm-cicd/actions/runs/16218517666
ROCM: https://github.com/neuralmagic/nm-cicd/actions/runs/16218578391
patrickvonplaten pushed a commit to patrickvonplaten/vllm that referenced this pull request Jul 15, 2025
[BugFix][Spec Decode] Fix spec token ids in model runner (vllm-project#20530)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>
LyrisZhong pushed a commit to LyrisZhong/vllm that referenced this pull request Jul 23, 2025
[BugFix][Spec Decode] Fix spec token ids in model runner (vllm-project#20530)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025
[BugFix][Spec Decode] Fix spec token ids in model runner (vllm-project#20530)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: avigny <47987522+avigny@users.noreply.github.com>
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
[BugFix][Spec Decode] Fix spec token ids in model runner (vllm-project#20530)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
[BugFix][Spec Decode] Fix spec token ids in model runner (vllm-project#20530)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
[BugFix][Spec Decode] Fix spec token ids in model runner (vllm-project#20530)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
googlercolin pushed a commit to googlercolin/vllm that referenced this pull request Aug 29, 2025
[BugFix][Spec Decode] Fix spec token ids in model runner (vllm-project#20530)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>