TRTLLM Gen MLA Decode Kernel Integration (same as #7938) #8632
Conversation
Signed-off-by: Faraz Khoubsirat <58580514+farazkh80@users.noreply.github.com>
Summary of Changes
Hello @farazkh80, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I've implemented and integrated the TRTLLM Gen MLA Decode Kernel, a new attention backend specifically designed to optimize decode operations for NVIDIA's Blackwell architecture. This enhancement aims to improve performance on the latest GPU hardware. While optimized for decoding, prefill operations will seamlessly fall back to the FlashInfer MLA backend. I've also updated the relevant documentation and added comprehensive unit tests to ensure the stability and correctness of this new feature.
Highlights
- New TRTLLM MLA Attention Backend: I've integrated the TRTLLM Gen MLA Decode Kernel, which is specifically optimized for NVIDIA's Blackwell architecture (e.g., B200 GPUs). This will significantly enhance the performance of decode operations on these newer hardware platforms.
- Decode-Only Optimization with Prefill Fallback: The new TRTLLM MLA backend currently focuses solely on decode operations. For prefill operations, including those with multimodal inputs, the system will gracefully fall back to the existing FlashInfer MLA backend to ensure compatibility and functionality.
- Comprehensive Documentation Updates: I've updated the documentation to reflect the addition of the TRTLLM MLA backend, providing clear instructions on its usage and detailing its specific characteristics, such as its decode-only nature and Blackwell optimization.
- Configuration and Compatibility Constraints: The new backend enforces specific `page_size` requirements (32 or 64) and will issue a warning or adjust the size if an unsupported value is provided. Additionally, it does not currently support speculative decoding, and an error will be raised if attempted (see the constraint-check sketch after this list).
- New Backend Implementation File: I've added a new Python file, `python/sglang/srt/layers/attention/trtllm_mla_backend.py`, which encapsulates the core logic for the TRTLLM MLA backend, including memory management and interaction with FlashInfer's TRTLLM kernels (the dispatch sketch after this list illustrates the decode/prefill split).
- Dedicated Unit Tests: To ensure the robustness and correctness of the new backend, I've included a dedicated test file, `python/sglang/test/attention/test_trtllm_mla_backend.py`, with various unit tests covering basic functionality, output matching against reference implementations, and metadata handling (an illustrative test shape follows the list).
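To make the decode-only design concrete, here is a minimal dispatch sketch. All class, method, and field names are hypothetical illustrations, not the PR's actual API; the real logic lives in `python/sglang/srt/layers/attention/trtllm_mla_backend.py`.

```python
# Hypothetical sketch of the decode-only dispatch: the TRTLLM Gen MLA kernel
# serves decode, and everything else falls back to FlashInfer MLA.
from dataclasses import dataclass


@dataclass
class ForwardBatch:
    is_decode: bool  # hypothetical field name


class TRTLLMMLABackend:
    def __init__(self, prefill_backend):
        # Keep a FlashInfer MLA backend around for the prefill fallback.
        self._prefill_backend = prefill_backend

    def forward(self, q, kv_cache, batch: ForwardBatch):
        if batch.is_decode:
            # Decode path: Blackwell-optimized TRTLLM Gen MLA kernel.
            return self._decode_trtllm(q, kv_cache, batch)
        # Prefill (including multimodal) delegates to FlashInfer MLA.
        return self._prefill_backend.forward(q, kv_cache, batch)

    def _decode_trtllm(self, q, kv_cache, batch):
        # Placeholder for the call into FlashInfer's TRTLLM kernels.
        raise NotImplementedError
```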
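A similarly hedged sketch of the configuration constraints: the supported sizes (32 and 64) and the speculative-decoding error come from the description above, while the function names and the choice of fallback size are assumptions.

```python
# Sketch of the constraint checks described above; function names and the
# snap-to-64 fallback are assumptions, not the PR's actual code.
import logging

logger = logging.getLogger(__name__)

SUPPORTED_PAGE_SIZES = (32, 64)  # per the PR description


def resolve_page_size(requested: int) -> int:
    """Return a supported page size, warning when the request is adjusted."""
    if requested in SUPPORTED_PAGE_SIZES:
        return requested
    adjusted = 64  # assumption: snap to the larger supported size
    logger.warning(
        "TRTLLM MLA supports page_size in %s; adjusting %d -> %d",
        SUPPORTED_PAGE_SIZES, requested, adjusted,
    )
    return adjusted


def check_speculative_decoding(speculative_algorithm) -> None:
    """The backend does not support speculative decoding; fail fast if enabled."""
    if speculative_algorithm is not None:
        raise ValueError("TRTLLM MLA backend does not support speculative decoding")
```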
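Finally, an illustrative shape for the output-matching tests. The sketch is self-contained so it runs on its own: the reference path is plain scaled-dot-product attention, and the backend call is a stand-in; the real tests in `python/sglang/test/attention/test_trtllm_mla_backend.py` exercise the actual kernel and metadata handling.

```python
# Illustrative test shape only: run a reference path and the backend under
# test on identical inputs and compare within a tolerance.
import torch


def reference_decode(q, k, v):
    # Plain scaled-dot-product attention as the reference path.
    scale = q.shape[-1] ** -0.5
    attn = torch.softmax((q @ k.transpose(-1, -2)) * scale, dim=-1)
    return attn @ v


def backend_decode(q, k, v):
    # Stand-in for the TRTLLM MLA decode call being tested.
    return reference_decode(q, k, v)


def test_decode_matches_reference():
    torch.manual_seed(0)
    q = torch.randn(1, 16, 1, 64)    # hypothetical [batch, heads, q_len, head_dim]
    k = torch.randn(1, 16, 128, 64)  # hypothetical cached keys
    v = torch.randn(1, 16, 128, 64)  # hypothetical cached values
    torch.testing.assert_close(backend_decode(q, k, v), reference_decode(q, k, v))
```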
Code Review
This pull request integrates the TRTLLM MLA decode kernel. The changes include the new backend implementation, integration into the model runner and server arguments, comprehensive unit tests, and documentation updates. My review focuses on the correctness and consistency of these changes. I've found some inconsistencies in the documentation regarding speculative decoding and multimodal support for the new backend, which contradict the implementation. I've also suggested an improvement to the new unit tests to make them more comprehensive. The core implementation of the new backend appears solid.
Addressing Gemini comments right now.
Creating a new PR since I wasn't able to merge #7938 due to an outdated unresolved conversation. The commit history is the same; nothing new.