[BugFix][Spec Decode] Fix spec token ids in model runner #20530
Conversation
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Summary of Changes
Hello @WoosukKwon, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a critical bug in the speculative decoding mechanism within the GPU model runner. It refactors the token ID update logic to ensure that speculative tokens are consistently added to the token_ids_cpu
array and correctly accounted for in the num_tokens
count, thereby resolving an issue that could lead to incorrect model state during speculative inference.
Highlights
- Bug Fix: Speculative Decoding Token IDs: Corrected a bug in gpu_model_runner.py where token_ids_cpu was not being properly updated with speculative tokens. This issue arose because the logic for adding draft token IDs was incorrectly gated by a conditional, leading to an incomplete representation of the token sequence.
- State Management Correction: Ensured that the num_tokens count for each request accurately reflects the total number of tokens, including those added via speculative decoding. Previously, num_tokens might not have accounted for speculative tokens, leading to inconsistencies in the model's internal state. A small sketch of this bookkeeping follows the list.
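To make the bookkeeping described in the summary concrete, here is a minimal, hypothetical sketch (the class and method names are invented for illustration and are not the actual vLLM input-batch code) of appending draft token IDs to the CPU-side token buffer while keeping the per-request token count in sync:

```python
import numpy as np

# Hypothetical, simplified stand-in for the runner's CPU-side batch state;
# the real vLLM structures differ, this only illustrates the bookkeeping.
class InputBatchState:
    def __init__(self, max_num_reqs: int, max_model_len: int):
        self.token_ids_cpu = np.zeros((max_num_reqs, max_model_len), dtype=np.int32)
        self.num_tokens = np.zeros(max_num_reqs, dtype=np.int32)

    def append_spec_tokens(self, req_idx: int, spec_token_ids: list[int]) -> None:
        """Append draft token ids for one request and keep num_tokens in sync."""
        start = self.num_tokens[req_idx]
        end = start + len(spec_token_ids)
        self.token_ids_cpu[req_idx, start:end] = spec_token_ids
        self.num_tokens[req_idx] = end


# Usage: the CPU bookkeeping is updated together with the token count.
batch = InputBatchState(max_num_reqs=8, max_model_len=4096)
batch.num_tokens[0] = 10                      # request 0 already holds 10 tokens
batch.append_spec_tokens(0, [101, 102, 103])  # record 3 draft token ids
assert batch.num_tokens[0] == 13
assert batch.token_ids_cpu[0, 10:13].tolist() == [101, 102, 103]
```

The essential point is that token_ids_cpu and num_tokens are updated together, so anything that later reads num_tokens also sees the speculative tokens.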
Code Review
This pull request correctly fixes a bug in speculative decoding where token_ids_cpu
was not updated with speculative tokens on the last rank of a pipeline parallel group or in a single-GPU setup. The fix involves moving the token update logic out of a conditional block. The implementation of the fix is sound. I've added one suggestion to improve code clarity by renaming a reused variable to prevent potential confusion during future maintenance.
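The rename suggestion itself is not quoted in this thread; the snippet below is only a hypothetical illustration of the general concern about reusing one variable name for two different values (none of these names come from the PR):

```python
# Hypothetical illustration of the readability concern; these names are not
# taken from the actual diff.
start_idx = 10
num_sampled_tokens = 1
num_spec_tokens = 3

# Reusing a single name for two different boundaries is easy to misread later:
end_idx = start_idx + num_sampled_tokens   # end after the sampled token
end_idx = end_idx + num_spec_tokens        # same name, new meaning

# Giving each boundary its own name documents what it represents:
end_sampled_idx = start_idx + num_sampled_tokens
end_spec_idx = end_sampled_idx + num_spec_tokens
print(end_idx, end_spec_idx)  # 14 14 -- same values, clearer intent
```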
Sync to v0.9.2 + remove libsodium + [fix cachetokenizer](neuralmagic/nm-vllm-ent@1423512)

git log:
```
commit 7b94527 (HEAD -> sync-v0.9.2, nm-fork/sync-v0.9.2)
Merge: 1423512 d07be8a
Author: Selbi Nuryyeva <selbi@redhat.com>
Date:   Fri Jul 11 07:03:51 2025 -0400

    Merge remote-tracking branch 'nm-fork/main' into sync-v0.9.2

commit 1423512
Author: Isotr0py <mozf@mail2.sysu.edu.cn>
Date:   Mon Jun 30 18:16:16 2025 +0800

    disable using CacheTokenizer for transformers >= 4.53.0

    fixes vllm-project#20224
    addendum to vllm-project#20244

commit d07be8a (nm-fork/main, nm-fork/HEAD)
Merge: bbccdbe 02152ad
Author: Daniele <36171005+dtrifiro@users.noreply.github.com>
Date:   Wed Jul 9 15:18:56 2025 +0200

    Dockerfile*.ubi: remove libsodium (opendatahub-io#245)

    It's not needed anymore
    https://issues.redhat.com/browse/INFERENG-848

commit 7dd12da
Merge: bbccdbe a5dd03c
Author: Selbi Nuryyeva <selbi@redhat.com>
Date:   Tue Jul 8 10:08:37 2025 -0400

    Merge branch 'v0.9.2-upstream' into sync-v0.9.2

commit a5dd03c (tag: v0.9.2rc2, tag: v0.9.2, upstream/releases/v0.9.2, v0.9.2-upstream, upstream-v0.9.2)
Author: simon-mo <simon.mo@hey.com>
Date:   Sun Jul 6 14:02:36 2025 -0700

    Revert "[V0 deprecation] Remove V0 CPU/XPU/TPU backends (vllm-project#20412)"

    This reverts commit e202dd2.

commit c18b3b8
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Mon Jul 7 05:01:48 2025 +0800

    [Bugfix] Add `use_cross_encoder` flag to use correct activation in `ClassifierPooler` (vllm-project#20527)

    Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

commit 9528e3a
Author: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Date:   Sun Jul 6 12:44:52 2025 -0700

    [BugFix][Spec Decode] Fix spec token ids in model runner (vllm-project#20530)

    Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

commit 9fb52e5
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Mon Jul 7 00:54:36 2025 +0800

    [V1] Support any head size for FlexAttention backend (vllm-project#20467)

    Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
```

Test:
CUDA: https://github.com/neuralmagic/nm-cicd/actions/runs/16218517666
ROCM: https://github.com/neuralmagic/nm-cicd/actions/runs/16218578391
Fixes a bug in spec decoding introduced in #20291, where
token_ids_cpu
wasn't properly updated because draft token IDs were incorrectly gated by if not last_rank.
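As a rough, hypothetical before/after sketch of that gating (simplified names; not the actual gpu_model_runner.py code), the fix amounts to running the CPU-side bookkeeping on every rank, including the last pipeline-parallel rank and the single-GPU case:

```python
# Hypothetical before/after sketch; `token_ids` stands in for the request's
# CPU-side token buffer and `last_rank` for the pipeline-parallel flag.

def update_spec_state_buggy(token_ids: list[int], spec_token_ids: list[int],
                            last_rank: bool) -> None:
    # Before: the update was nested under `if not last_rank`, so the last
    # rank (and a single-GPU run, which is always the last rank) never
    # recorded the draft token ids.
    if not last_rank:
        token_ids.extend(spec_token_ids)


def update_spec_state_fixed(token_ids: list[int], spec_token_ids: list[int],
                            last_rank: bool) -> None:
    # After: the bookkeeping runs unconditionally; only genuinely
    # rank-specific work stays inside the conditional.
    if not last_rank:
        ...  # rank-specific work only
    token_ids.extend(spec_token_ids)


tokens = [7, 8, 9]
update_spec_state_fixed(tokens, [101, 102], last_rank=True)
assert tokens == [7, 8, 9, 101, 102]
```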