
Conversation

@fzyzcjy (Collaborator) commented Jun 14, 2025

Motivation

Modifications

Checklist

@gemini-code-assist bot (Contributor) left a comment


Summary of Changes

Hello @fzyzcjy, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a potential issue in the DeepSeek V2 model implementation related to weight requantization. It makes the requantization process for shared experts more robust by ensuring that the shared_experts attribute exists on a layer's MLP before attempting to access and requantize its weights. This prevents potential errors or crashes in configurations where shared experts might not be present in certain layers.
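A minimal sketch of that check, assuming the attribute names described in the review below; the helper and its requant_fn argument are hypothetical stand-ins for the actual requantization code:

```python
def requant_shared_experts(layer, requant_fn):
    """Requantize a layer's shared experts only if the module exists (hypothetical helper)."""
    shared_experts = getattr(layer.mlp, "shared_experts", None)
    if shared_experts is not None:  # layers without shared experts are skipped safely
        requant_fn(shared_experts)
```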

Highlights

  • Weight Requantization Fix: Added a check to safely handle layers that might not have shared_experts during the weight requantization process for DeepSeek V2 models.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command              | Description
---------------------|----------------------|------------------------------------------------------------------------------------
Code Review          | /gemini review       | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary      | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist  | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help         | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request addresses a potential runtime error in the _weight_requant_ue8m0 method within python/sglang/srt/models/deepseek_v2.py. The change modifies how shared_experts are accessed during the weight requantization process for Mixture of Experts (MoE) layers.

Previously, shared_experts was accessed directly via layer.mlp.shared_experts, which could raise an AttributeError if layer.mlp did not have this attribute, for instance in model configurations without shared experts.

The fix takes a more robust approach: getattr(layer.mlp, "shared_experts", None) followed by an if shared_experts is not None: check. This ensures the code gracefully handles cases where shared_experts is absent, preventing potential crashes and improving the stability of model loading and quantization. The change is clear, follows Python best practices for attribute access, and directly resolves the potential issue.
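A sketch of the pattern described above, in the shape of the requantization loop the review refers to. Only the getattr guard and the layer.mlp.shared_experts attribute names come from the PR; the surrounding loop, is_moe_layer, and requant_fn are illustrative stand-ins:

```python
def weight_requant_shared_experts_sketch(model, is_moe_layer, requant_fn):
    """Illustrative loop shaped like the method described above; names are assumptions."""
    for layer in model.layers:
        if not is_moe_layer(layer):
            continue
        # Before the fix (raises AttributeError when the module is absent):
        #   shared_experts = layer.mlp.shared_experts
        shared_experts = getattr(layer.mlp, "shared_experts", None)
        if shared_experts is not None:
            requant_fn(shared_experts)  # stand-in for the actual requantization step
```

The key point is that getattr with a default turns a missing attribute into an explicit None check instead of an exception.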

For further robustness, it may be worth confirming that layer.mlp always exists on layers identified as MoE layers. If a layer lacked an mlp attribute entirely, the expression layer.mlp would raise an AttributeError before getattr's default could apply (whereas if layer.mlp were merely None, getattr would simply return the default). While this is outside the direct scope of the current changes, it is a related area that could benefit from a similar defensive check if layer.mlp is not guaranteed to exist.
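If that extra check were wanted, the same getattr pattern could be applied one level up; this is a hypothetical sketch, not part of this PR:

```python
def requant_shared_experts_defensive(layer, requant_fn):
    """Hypothetical hardening beyond this PR: also tolerate a missing layer.mlp."""
    mlp = getattr(layer, "mlp", None)
    # getattr on None with a default simply returns the default, so this line
    # is safe even when the mlp attribute is absent or set to None.
    shared_experts = getattr(mlp, "shared_experts", None)
    if shared_experts is not None:
        requant_fn(shared_experts)  # stand-in name for the requantization step
```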

@zhyncs merged commit b57d87c into sgl-project:main on Jun 14, 2025
64 of 71 checks passed