[WA] fix output data is nan in CI test "test_moe_eval_accuracy_large.py" #7021
Conversation
Hello @kkHuang-amd, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team,
Gemini here to provide a summary of this pull request. This PR, authored by kkHuang-amd, aims to address an issue encountered in the CI test `test_moe_eval_accuracy_large.py`, where the output data was resulting in NaN values. The changes are described as a workaround (WA), and the problem is suspected to be related to dirty data from CUDA graph capture being used during the `mha_batch_prefill` operation.
The core modification changes how the `kv_indices` tensor is initialized within the attention backend. Instead of using `torch.empty`, which leaves the tensor with uninitialized (potentially dirty) data, the tensor is now created with `torch.full` and filled with a calculated `pool_size` value. This ensures the tensor starts in a known state, mitigating the NaN issue.
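For illustration only (this is not code from the PR; the buffer size and fill value below are assumed numbers), a minimal sketch of the difference between the two initialization strategies:

```python
import torch

num_slots = 4096   # assumed size of the kv_indices buffer
pool_size = 8192   # assumed fill value derived from the KV pool size

# torch.empty returns whatever bytes happen to be in the allocation; under
# CUDA graph capture/replay, unused tail entries can hold stale ("dirty") indices.
kv_indices_dirty = torch.empty(num_slots, dtype=torch.int32)

# torch.full initializes every entry to a known value, so padding slots never
# point at arbitrary locations in the KV cache.
kv_indices_clean = torch.full((num_slots,), pool_size, dtype=torch.int32)
```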
Highlights
- Attention Backend Modification: Modifies the initialization logic for the `kv_indices` tensor within the `aiter_backend.py` attention implementation.
- KV Index Initialization: Changes the initialization of `kv_indices` from `torch.empty` (uninitialized) to `torch.full`, filling it with a calculated `pool_size` value to prevent the use of dirty data.
- Workaround for CUDA Graph Capture: Implements a workaround targeting potential issues with dirty data when using CUDA graph capture in the attention prefill kernel.
Changelog
- `python/sglang/srt/layers/attention/aiter_backend.py`
  - Added calculation of `self.pool_size` based on the token-to-KV pool size and page size, subtracting 10.
  - Modified the `update_single_wrapper` method to initialize `kv_indices` using `torch.full` with the calculated `self.pool_size` (a sketch of these changes follows this list).
  - Reduced the buffer size added to `paged_kernel_lens_sum` for the `kv_indices` allocation from 256 to 128.
  - Added a comment explaining the workaround for the NaN issue in the specific CI test.
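The sketch below is reconstructed from the changelog items above for orientation only: the class name, constructor arguments, and the exact `pool_size` formula are assumptions rather than the actual diff in `aiter_backend.py`; only `self.pool_size`, `update_single_wrapper`, `paged_kernel_lens_sum`, and the `torch.full` / 128-element padding come from the changelog.

```python
import torch


class AiterAttnBackendSketch:
    """Illustrative mirror of the changelog items, not the real aiter_backend.py."""

    def __init__(self, token_to_kv_pool_size: int, page_size: int, device: str = "cpu"):
        self.device = device
        # Changelog item 1: pool_size derived from the token-to-KV pool size and
        # the page size, minus 10 (the magic number questioned in the review below).
        # The exact formula in the PR may differ.
        self.pool_size = token_to_kv_pool_size // page_size - 10

    def update_single_wrapper(self, paged_kernel_lens_sum: int) -> torch.Tensor:
        # Changelog items 2 and 3: allocate kv_indices with torch.full (known fill
        # value) instead of torch.empty, padded by 128 elements instead of 256.
        return torch.full(
            (paged_kernel_lens_sum + 128,),
            self.pool_size,
            dtype=torch.int32,
            device=self.device,
        )
```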
Code Review
This pull request aims to fix a NaN output issue in a CI test by changing how `kv_indices` is initialized in the AITer backend. The use of `torch.full` instead of `torch.empty` is a sensible approach to avoid problems with uninitialized memory.
However, a few points regarding the specific implementation of the workaround would benefit from further clarification to ensure robustness and maintainability. The main concerns revolve around the magic numbers used and the implications of the fill value chosen for `kv_indices`.
Summary of Findings
- Magic Number in `pool_size` Calculation: A magic number `10` is used in the calculation of `self.pool_size`. This should be clarified with a comment or a named constant for better maintainability (see the sketch after this list).
- Padding Reduction for `kv_indices`: The padding for `kv_indices` was reduced from `+ 256` to `+ 128`. The rationale for this change and confirmation of its sufficiency would be helpful.
- Fill Value for `kv_indices`: `kv_indices` is now filled with `self.pool_size`, which appears to be an out-of-bounds index. While this is a workaround for a NaN issue related to CUDA graph capture and `mha_batch_prefill`, using an OOB index as a fill value needs careful consideration regarding its interaction with the kernel and potential side effects. Clarification on how this fixes the NaN issue and whether it is the safest approach is needed.
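To make the first finding concrete, here is one illustrative way the magic number could be replaced with a named constant; the constant name, the integer-division formula, and the helper function are assumptions, not code from the PR:

```python
# Hypothetical named constant replacing the bare "10"; the real reason for the
# offset (e.g. reserved slots at the end of the KV pool) would belong in this comment.
KV_POOL_RESERVED_SLOTS = 10


def compute_pool_size(token_to_kv_pool_size: int, page_size: int) -> int:
    """Illustrative helper deriving the kv_indices fill value from the pool size."""
    return token_to_kv_pool_size // page_size - KV_POOL_RESERVED_SLOTS
```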
Merge Readiness
The pull request addresses a NaN issue in a CI test, which is important. However, the workarounds introduced, particularly the choice of fill value for `kv_indices` and the use of magic numbers, raise some concerns about potential side effects, correctness under all conditions, and maintainability.
Given the high-severity concern regarding the fill value, I recommend that these points be thoroughly discussed and clarified before merging. It's crucial to understand not just that the workaround fixes the symptom, but also how it fixes it and whether it introduces any new risks or masks a deeper issue.
I am not authorized to approve pull requests. Please ensure further review and approval from authorized maintainers after addressing the feedback.
Pick the last index of the pool as the padding value
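A minimal sketch of that suggestion, assuming `pool_size` counts the valid slots so that `pool_size - 1` is the last in-bounds index; the function name is hypothetical and the 128-element padding mirrors the changelog, but neither is taken from the actual diff:

```python
import torch


def alloc_kv_indices(paged_kernel_lens_sum: int, pool_size: int) -> torch.Tensor:
    # Fill padding slots with the last valid pool index rather than an
    # out-of-bounds value, so graph-captured reads stay inside the KV pool.
    last_valid_index = pool_size - 1
    return torch.full(
        (paged_kernel_lens_sum + 128,),
        last_valid_index,
        dtype=torch.int32,
    )
```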
LG, nit - let me do a minor comment.
…py" (sgl-project#7021) Co-authored-by: wunhuang <wunhuang@amd.com> Co-authored-by: HAI <hixiao@gmail.com>
…py" (sgl-project#7021) Co-authored-by: wunhuang <wunhuang@amd.com> Co-authored-by: HAI <hixiao@gmail.com>
…py" (sgl-project#7021) Co-authored-by: wunhuang <wunhuang@amd.com> Co-authored-by: HAI <hixiao@gmail.com>
Motivation
Fix the NaN output when running `test_moe_eval_accuracy_large.py`.
Modifications
Checklist