Conversation

@yewentao256 (Collaborator) commented Jun 27, 2025

Purpose

Fixes #20133

Signed-off-by: yewentao256 <zhyanwentao@126.com>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Contributor

@gemini-code-assist bot left a comment

Summary of Changes

Hello @yewentao256, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the way optional kernel availability is checked within the vLLM project. It introduces a cached utility function to avoid redundant calls to importlib.util.find_spec, improving performance and code readability. The changes affect multiple files related to fused MoE layers and quantization, ensuring consistent and efficient kernel detection.

Highlights

  • Refactor: Created a utility function _has_module with caching to check for the availability of optional kernel packages (pplx_kernels, deep_ep, deep_gemm).
  • Code Improvement: Replaced direct calls to importlib.util.find_spec with the new cached utility functions has_pplx, has_deepep, and has_deep_gemm in multiple files.
  • Dependency Management: Removed unnecessary importlib.util imports from several files.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize your Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request is a nice refactoring that centralizes and caches the checks for optional dependencies (deep_gemm, deepep, pplx_kernels), which should improve performance by avoiding repeated calls to importlib.util.find_spec.

However, the current implementation introduces a critical circular import dependency between vllm/model_executor/layers/fused_moe/utils.py and vllm/model_executor/layers/quantization/utils/fp8_utils.py. I've left a comment with details on how to resolve this by relocating the new utility functions to a lower-level module. Once that's addressed, this will be a solid improvement to the codebase.
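The circular-import hazard the review points to can be reproduced in miniature. The sketch below uses throwaway modules written to a temp directory (the names `moe_utils`, `fp8_utils`, and `low_level` are illustrative stand-ins, not vLLM's actual layout) to show why relocating the shared helper into a lower-level module resolves the cycle:

```python
import importlib
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
sys.path.insert(0, tmp)

def write(name: str, body: str) -> None:
    with open(os.path.join(tmp, name + ".py"), "w") as f:
        f.write(body)

# Broken layout: each utils module imports a name from the other,
# so whichever is imported first fails mid-initialization.
write("moe_utils", "from fp8_utils import quant\ndef moe():\n    return quant()\n")
write("fp8_utils", "from moe_utils import moe\ndef quant():\n    return 'q'\n")

try:
    import moe_utils
except ImportError as e:
    print("circular import:", e)

# Fix: relocate the shared helper into a lower-level module that
# neither utils module depends on, and import it from there.
for mod in ("moe_utils", "fp8_utils"):
    sys.modules.pop(mod, None)
write("low_level", "def quant():\n    return 'q'\n")
write("moe_utils", "from low_level import quant\ndef moe():\n    return quant()\n")
write("fp8_utils", "from low_level import quant\n")
importlib.invalidate_caches()

import moe_utils  # imports cleanly now
print(moe_utils.moe())
```

The same principle applies here: moving `_has_module` and friends out of the fused_moe/quantization layers into a module with no dependencies on either breaks the cycle.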

Signed-off-by: yewentao256 <zhyanwentao@126.com>
@yewentao256 requested a review from WoosukKwon as a code owner June 27, 2025 18:59
Member

@mgoin left a comment

LGTM, nice work. Just a nit on deep_ep and future structure.

vllm/utils.py Outdated
Comment on lines 2934 to 2959
@cache
def _has_module(module_name: str) -> bool:
"""Return True if *module_name* can be found in the current environment.

The result is cached so that subsequent queries for the same module incur
no additional overhead.
"""
return importlib.util.find_spec(module_name) is not None


def has_pplx() -> bool:
"""Whether the optional `pplx_kernels` package is available."""

return _has_module("pplx_kernels")


def has_deepep() -> bool:
"""Whether the optional `deep_ep` package is available."""

return _has_module("deep_ep")


def has_deep_gemm() -> bool:
"""Whether the optional `deep_gemm` package is available."""

return _has_module("deep_gemm")
Member

Nit/future work: with these new import checks and the torch version check above, it would be nice to pull these into a separate import_utils.py file, as Transformers does, so it is clear where checks for new libraries should be added: https://github.com/huggingface/transformers/blob/main/src/transformers/utils/import_utils.py#L386

Collaborator Author

Sounds like a great idea; we can move these import utils together. I can work on this later, including is_in_ray_actor and all the other util functions we have.
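A rough sketch of what such a consolidated module might look like (the module name, registry tuple, and report helper are hypothetical, modeled on the suggestion above rather than on any actual follow-up PR):

```python
# import_utils.py (hypothetical): one place to declare optional dependencies,
# so supporting a new library is a one-line change to the registry.
import importlib.util
from functools import cache

_OPTIONAL_MODULES = ("pplx_kernels", "deep_ep", "deep_gemm")

@cache
def has_module(module_name: str) -> bool:
    """Cached check for whether *module_name* is importable."""
    return importlib.util.find_spec(module_name) is not None

def optional_dependency_report() -> dict:
    """Availability of every registered optional dependency."""
    return {name: has_module(name) for name in _OPTIONAL_MODULES}
```

Grouping the checks behind a registry also makes it easy to log which optional kernels were detected at startup.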

Signed-off-by: yewentao256 <zhyanwentao@126.com>
Member

@mgoin left a comment

LGTM!

@mgoin added the ready label (ONLY add when PR is ready to merge/full CI is needed) Jun 28, 2025
@mgoin enabled auto-merge (squash) June 28, 2025 18:44
@mgoin merged commit 4d36693 into vllm-project:main Jun 28, 2025
84 checks passed
@yewentao256 deleted the wye-has-module-refactor branch June 30, 2025 16:03
CSWYF3634076 pushed a commit to CSWYF3634076/vllm that referenced this pull request Jul 2, 2025
…gemm`, `has_deepep`, `has_pplx` (vllm-project#20187)

Signed-off-by: yewentao256 <zhyanwentao@126.com>
avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025
…gemm`, `has_deepep`, `has_pplx` (vllm-project#20187)

Signed-off-by: yewentao256 <zhyanwentao@126.com>
Signed-off-by: avigny <47987522+avigny@users.noreply.github.com>
googlercolin pushed a commit to googlercolin/vllm that referenced this pull request Aug 29, 2025
…gemm`, `has_deepep`, `has_pplx` (vllm-project#20187)

Signed-off-by: yewentao256 <zhyanwentao@126.com>
Labels
ready ONLY add when PR is ready to merge/full CI is needed
Development

Successfully merging this pull request may close these issues.

Create a function util and cache the results for has_deepgemm, has_deepep, has_pplx
2 participants