Conversation

kkHuang-amd
Contributor

Motivation

Fix the NaN output when running test_moe_eval_accuracy_large.py.

Modifications

Work around the issue in python/sglang/srt/layers/attention/aiter_backend.py by initializing the kv_indices tensor with torch.full (filled with a computed pool_size sentinel) instead of torch.empty, so the attention prefill kernel never consumes uninitialized data during CUDA graph capture.

Checklist

  • [✓] Format your code according to the Code Formatting with Pre-Commit.
  • [✓] Add unit tests as outlined in the Running Unit Tests.
  • [✓] Update documentation / docstrings / example tutorials as needed, according to Writing Documentation.
  • [✓] Provide throughput / latency benchmark results and accuracy evaluation results as needed, according to Benchmark and Profiling and Accuracy Results.
  • [✓] For reviewers: If you haven't made any contributions to this PR and are only assisting with merging the main branch, please remove yourself as a co-author when merging the PR.
  • [✓] Please feel free to join our Slack channel at https://slack.sglang.ai to discuss your PR.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Hello @kkHuang-amd, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello team,

Gemini here to provide a summary of this pull request. This PR, authored by kkHuang-amd, addresses an issue in the CI test test_moe_eval_accuracy_large.py where the output contained NaN values. The change is described as a workaround (WA), with the root cause suspected to be the mha_batch_prefill operation consuming dirty data left over from CUDA graph capture.

The core modification changes how the kv_indices tensor is initialized in the attention backend. Instead of torch.empty, which leaves the tensor with uninitialized (potentially dirty) data, the tensor is now created with torch.full and filled with a calculated pool_size value. This ensures the tensor starts in a known state, mitigating the NaN issue.

Highlights

  • Attention Backend Modification: Modifies the initialization logic for the kv_indices tensor within the aiter_backend.py attention implementation.
  • KV Index Initialization: Changes the initialization of kv_indices from torch.empty (uninitialized) to torch.full, filling it with a calculated pool_size value to prevent the use of dirty data.
  • Workaround for CUDA Graph Capture: Implements a workaround targeting potential issues with dirty data when using CUDA graph capture in the attention prefill kernel (see the sketch below).
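
To make the "dirty data" failure mode concrete, here is a minimal PyTorch sketch (illustrative only, not sglang code; it assumes a CUDA device is available): a captured CUDA graph replays against the same static buffers it captured, so any region of a buffer that is never rewritten keeps whatever stale contents it happened to hold.

```python
import torch

# Static buffers: a captured CUDA graph reads and writes these exact
# allocations on every replay.
static_in = torch.empty(8, device="cuda")  # uninitialized ("dirty") memory

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_out = static_in * 2.0

# Only the first half is refreshed before replay; the tail of static_in
# still holds whatever bytes torch.empty left behind.
static_in[:4] = torch.ones(4, device="cuda")
g.replay()
print(static_out)  # elements 4..7 derive from stale memory
```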

Changelog

  • python/sglang/srt/layers/attention/aiter_backend.py
    • Added calculation of self.pool_size based on the token-to-KV pool size and page size, subtracting 10.
    • Modified the update_single_wrapper method to initialize kv_indices using torch.full with the calculated self.pool_size (sketched below, after this list).
    • Reduced the buffer size added to paged_kernel_lens_sum for kv_indices allocation from 256 to 128.
    • Added a comment explaining the workaround for the NaN issue in the specific CI test.
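
For readers without the diff at hand, a hypothetical reconstruction of the change described above might look like the following; the variable names and sizes are placeholders inferred from this changelog, not the actual aiter_backend.py code.

```python
import torch

# Placeholder values standing in for the real pool/request state.
token_to_kv_pool_size = 65536   # total tokens the KV pool can hold
page_size = 1
paged_kernel_lens_sum = 4096    # sum of per-request KV lengths

# Pool size minus a margin of 10 (the "magic number" flagged in the review).
pool_size = token_to_kv_pool_size // page_size - 10

# Before: torch.empty left kv_indices uninitialized, so stale indices
# could survive into CUDA graph replay.
# kv_indices = torch.empty(paged_kernel_lens_sum + 256, dtype=torch.int32, device="cuda")

# After: every slot, including the reduced +128 padding, starts at a
# known sentinel value.
kv_indices = torch.full(
    (paged_kernel_lens_sum + 128,),
    pool_size,
    dtype=torch.int32,
    device="cuda",
)
```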

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to fix a NaN output issue in a CI test by changing how kv_indices is initialized in the AITer backend. The use of torch.full instead of torch.empty is a sensible approach to avoid problems with uninitialized memory.

However, there are a few points regarding the specific implementation of the workaround that would benefit from further clarification to ensure robustness and maintainability. The main concerns revolve around the magic numbers used and the implications of the fill value chosen for kv_indices.

Summary of Findings

  • Magic Number in pool_size Calculation: A magic number 10 is used in the calculation of self.pool_size. This should be clarified with a comment or a named constant for better maintainability.
  • Padding Reduction for kv_indices: The padding for kv_indices was reduced from + 256 to + 128. The rationale for this change and confirmation of sufficiency would be helpful.
  • Fill Value for kv_indices: kv_indices is now filled with self.pool_size, which appears to be an out-of-bounds index (illustrated after this list). While this works around a NaN issue tied to CUDA graph capture and mha_batch_prefill, using an OOB index as a fill value needs careful consideration of how it interacts with the kernel and what side effects it may have. Clarification on how this fixes the NaN issue, and whether it is the safest approach, is needed.
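
As a toy illustration of the last point (illustrative only, not sglang code): a pool with pool_size entries has valid indices 0 through pool_size - 1, so using pool_size itself as the fill value means any kernel that dereferences a padding slot reads past the end of the pool.

```python
import torch

pool_size = 16
kv_pool = torch.zeros(pool_size, 4)        # toy KV page pool
kv_indices = torch.full((8,), pool_size)   # sentinel fill from this PR

# kv_pool[kv_indices] would raise IndexError here on CPU; inside a custom
# CUDA kernel the same out-of-bounds read is undefined behavior.
```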

Merge Readiness

The pull request addresses a NaN issue in a CI test, which is important. However, the workarounds introduced, particularly the choice of fill value for kv_indices and the use of magic numbers, raise some concerns about potential side effects, correctness under all conditions, and maintainability.

Given the high-severity concern regarding the fill value, I recommend that these points be thoroughly discussed and clarified before merging. It's crucial to understand not just that the workaround fixes the symptom, but also how it fixes it and whether it introduces any new risks or masks a deeper issue.

I am not authorized to approve pull requests. Please ensure further review and approval from authorized maintainers after addressing the feedback.

Collaborator

@HaiShaw HaiShaw left a comment


LG, nit - let me add a minor comment.

@HaiShaw HaiShaw enabled auto-merge (squash) June 10, 2025 15:55
@HaiShaw HaiShaw merged commit 8ea7df6 into sgl-project:main Jun 10, 2025
4 of 49 checks passed
almaslof pushed a commit to mpashkovskii/sglang that referenced this pull request Jun 11, 2025
…py" (sgl-project#7021)

Co-authored-by: wunhuang <wunhuang@amd.com>
Co-authored-by: HAI <hixiao@gmail.com>
jianan-gu pushed a commit to jianan-gu/sglang that referenced this pull request Jun 12, 2025
…py" (sgl-project#7021)

Co-authored-by: wunhuang <wunhuang@amd.com>
Co-authored-by: HAI <hixiao@gmail.com>
Yuechguo pushed a commit to Yuechguo/sglang that referenced this pull request Aug 17, 2025
…py" (sgl-project#7021)

Co-authored-by: wunhuang <wunhuang@amd.com>
Co-authored-by: HAI <hixiao@gmail.com>