
Conversation

WoosukKwon
Collaborator

This PR removes block-sparse attention and the support for phi3-small, which relies on that attention implementation.

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
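
Since phi3-small was the only in-tree user of block-sparse attention, one quick way for downstream users to confirm the removal is to check vLLM's model registry. A minimal sketch, assuming the public `ModelRegistry.get_supported_archs()` helper is available in the installed vLLM build and that the build includes this PR:

```python
from vllm import ModelRegistry

# After this PR, the Phi3-Small architecture should no longer be registered,
# while the other Phi-3 variants (which use dense attention) remain available.
supported = ModelRegistry.get_supported_archs()
print("Phi3SmallForCausalLM registered:", "Phi3SmallForCausalLM" in supported)
```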

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the documentation, ci/build, new-model, rocm, v1, and tpu labels Jul 19, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request deprecates and removes the BlockSparse Attention feature and the Phi3-Small model, which relied on it. The changes are extensive, touching many files across the attention backends, model registry, and testing infrastructure. My review confirms that the removal is clean and consistent. All references to blocksparse_params, the block-sparse attention implementation, and the Phi3SmallForCausalLM model have been correctly eliminated. The related tests and documentation have also been updated accordingly. The changes look good to me.

@WoosukKwon WoosukKwon added the ready label and removed the documentation, new-model, rocm, tpu, ci/build, and v1 labels Jul 19, 2025
@mergify mergify bot added the documentation, ci/build, and new-model labels Jul 19, 2025
@mergify mergify bot added the rocm, v1, and tpu labels Jul 19, 2025
@WoosukKwon WoosukKwon enabled auto-merge (squash) July 19, 2025 05:52
@DarkLight1337
Member

The Kernels test failure is related to this PR.

@hmellor hmellor moved this to In Progress in V0 Deprecation Jul 19, 2025
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
@WoosukKwon WoosukKwon disabled auto-merge July 19, 2025 20:53
@WoosukKwon WoosukKwon merged commit 752c6ad into main Jul 19, 2025
63 of 66 checks passed
@WoosukKwon WoosukKwon deleted the woosuk/remove-v0-blocksparse-attn branch July 19, 2025 20:53
@github-project-automation github-project-automation bot moved this from In Progress to Done in V0 Deprecation Jul 19, 2025
kzawora-intel added a commit to vllm-project/vllm-gaudi that referenced this pull request Jul 21, 2025
Upstream PR vllm-project/vllm#21217 changed
attention APIs. This PR adjusts our attention implementation to the new
API.

---------

Signed-off-by: Konrad Zawora <kzawora@habana.ai>
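
For out-of-tree integrations like vllm-gaudi, the adjustment is mostly mechanical: stop passing (and stop accepting) the removed `blocksparse_params` argument when constructing attention layers. A rough sketch of what such a downstream layer might look like after the change; the layer name here is hypothetical, and the exact `Attention` constructor arguments may differ between vLLM versions:

```python
import torch.nn as nn
from vllm.attention import Attention  # vLLM's attention layer wrapper


class ToyDecoderAttention(nn.Module):
    """Hypothetical downstream block, shown only to illustrate the API change."""

    def __init__(self, num_heads: int = 32, head_size: int = 128) -> None:
        super().__init__()
        # Before this PR (assumed): a block-sparse backend could be requested via
        #   Attention(..., blocksparse_params={"local_blocks": ..., "vert_stride": ...})
        # After this PR the keyword is gone, so the layer is constructed without it
        # and a dense attention backend is selected as usual.
        self.attn = Attention(
            num_heads=num_heads,
            head_size=head_size,
            scale=head_size**-0.5,
        )
```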
LyrisZhong pushed a commit to LyrisZhong/vllm that referenced this pull request Jul 23, 2025
avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025
x22x22 pushed a commit to x22x22/vllm that referenced this pull request Aug 5, 2025
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
taneem-ibrahim pushed a commit to taneem-ibrahim/vllm that referenced this pull request Aug 14, 2025
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
attafosu pushed a commit to attafosu/vllm-gaudi that referenced this pull request Aug 16, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 27, 2025
googlercolin pushed a commit to googlercolin/vllm that referenced this pull request Aug 29, 2025
Labels
ci/build, documentation, new-model, ready, rocm, tpu, v1
Projects
Status: Done

4 participants