Conversation

@kzawora-intel (Contributor) commented Jun 12, 2025

Platforms can have the triton package installed by external dependencies (e.g. xgrammar) even when Triton is unsupported on that platform and not listed in vLLM's requirements. This patch adds an additional Triton check and prevents non-GPU platforms from autotuning Triton flash-attention kernels when the package is installed but incompatible. Example error solved by this PR (importing MLACommonImpl in an out-of-tree attention backend incorrectly attempted to use Triton flash attention, because the triton package had been installed by an external dependency):

ERROR 06-12 17:46:38 [core.py:515] EngineCore failed to start.
ERROR 06-12 17:46:38 [core.py:515] Traceback (most recent call last):
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm/vllm/v1/engine/core.py", line 506, in run_engine_core
ERROR 06-12 17:46:38 [core.py:515]     engine_core = EngineCoreProc(*args, **kwargs)
ERROR 06-12 17:46:38 [core.py:515]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm/vllm/v1/engine/core.py", line 390, in __init__
ERROR 06-12 17:46:38 [core.py:515]     super().__init__(vllm_config, executor_class, log_stats,
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm/vllm/v1/engine/core.py", line 76, in __init__
ERROR 06-12 17:46:38 [core.py:515]     self.model_executor = executor_class(vllm_config)
ERROR 06-12 17:46:38 [core.py:515]                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm/vllm/executor/executor_base.py", line 53, in __init__
ERROR 06-12 17:46:38 [core.py:515]     self._init_executor()
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm/vllm/executor/uniproc_executor.py", line 46, in _init_executor
ERROR 06-12 17:46:38 [core.py:515]     self.collective_rpc("init_worker", args=([kwargs], ))
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm/vllm/executor/uniproc_executor.py", line 57, in collective_rpc
ERROR 06-12 17:46:38 [core.py:515]     answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 06-12 17:46:38 [core.py:515]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm/vllm/utils.py", line 2680, in run_method
ERROR 06-12 17:46:38 [core.py:515]     return func(*args, **kwargs)
ERROR 06-12 17:46:38 [core.py:515]            ^^^^^^^^^^^^^^^^^^^^^
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm/vllm/worker/worker_base.py", line 559, in init_worker
ERROR 06-12 17:46:38 [core.py:515]     worker_class = resolve_obj_by_qualname(
ERROR 06-12 17:46:38 [core.py:515]                    ^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm/vllm/utils.py", line 2248, in resolve_obj_by_qualname
ERROR 06-12 17:46:38 [core.py:515]     module = importlib.import_module(module_name)
ERROR 06-12 17:46:38 [core.py:515]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-12 17:46:38 [core.py:515]   File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
ERROR 06-12 17:46:38 [core.py:515]     return _bootstrap._gcd_import(name[level:], package, level)
ERROR 06-12 17:46:38 [core.py:515]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-12 17:46:38 [core.py:515]   File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
ERROR 06-12 17:46:38 [core.py:515]   File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
ERROR 06-12 17:46:38 [core.py:515]   File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
ERROR 06-12 17:46:38 [core.py:515]   File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
ERROR 06-12 17:46:38 [core.py:515]   File "<frozen importlib._bootstrap_external>", line 995, in exec_module
ERROR 06-12 17:46:38 [core.py:515]   File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm-hpu/vllm_hpu/v1/worker/hpu_worker.py", line 26, in <module>
ERROR 06-12 17:46:38 [core.py:515]     from vllm_hpu.v1.worker.hpu_model_runner import HPUModelRunner, bool_helper
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm-hpu/vllm_hpu/v1/worker/hpu_model_runner.py", line 39, in <module>
ERROR 06-12 17:46:38 [core.py:515]     from vllm_hpu.v1.attention.backends.hpu_attn import HPUAttentionMetadataV1
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm-hpu/vllm_hpu/v1/attention/backends/hpu_attn.py", line 13, in <module>
ERROR 06-12 17:46:38 [core.py:515]     from vllm_hpu.attention.backends.hpu_attn import (HPUAttentionBackend,
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm-hpu/vllm_hpu/attention/backends/hpu_attn.py", line 22, in <module>
ERROR 06-12 17:46:38 [core.py:515]     from vllm.attention.backends.mla.common import MLACommonImpl
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm/vllm/attention/backends/mla/common.py", line 221, in <module>
ERROR 06-12 17:46:38 [core.py:515]     from vllm.attention.ops.triton_flash_attention import triton_attention
ERROR 06-12 17:46:38 [core.py:515]   File "/software/users/kzawora/vllm-plugin-dev/vllm/vllm/attention/ops/triton_flash_attention.py", line 403, in <module>
ERROR 06-12 17:46:38 [core.py:515]     @triton.autotune(
ERROR 06-12 17:46:38 [core.py:515]      ^^^^^^^^^^^^^^^^
ERROR 06-12 17:46:38 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/triton-3.3.1-py3.12-linux-x86_64.egg/triton/runtime/autotuner.py", line 378, in decorator
ERROR 06-12 17:46:38 [core.py:515]     return Autotuner(fn, fn.arg_names, configs, key, reset_to_zero, restore_value, pre_hook=pre_hook,
ERROR 06-12 17:46:38 [core.py:515]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-12 17:46:38 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/triton-3.3.1-py3.12-linux-x86_64.egg/triton/runtime/autotuner.py", line 130, in __init__
ERROR 06-12 17:46:38 [core.py:515]     self.do_bench = driver.active.get_benchmarker()
ERROR 06-12 17:46:38 [core.py:515]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-12 17:46:38 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/triton-3.3.1-py3.12-linux-x86_64.egg/triton/runtime/driver.py", line 23, in __getattr__
ERROR 06-12 17:46:38 [core.py:515]     self._initialize_obj()
ERROR 06-12 17:46:38 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/triton-3.3.1-py3.12-linux-x86_64.egg/triton/runtime/driver.py", line 20, in _initialize_obj
ERROR 06-12 17:46:38 [core.py:515]     self._obj = self._init_fn()
ERROR 06-12 17:46:38 [core.py:515]                 ^^^^^^^^^^^^^^^
ERROR 06-12 17:46:38 [core.py:515]   File "/usr/local/lib/python3.12/dist-packages/triton-3.3.1-py3.12-linux-x86_64.egg/triton/runtime/driver.py", line 8, in _create_driver
ERROR 06-12 17:46:38 [core.py:515]     raise RuntimeError(f"{len(actives)} active drivers ({actives}). There should only be one.")
ERROR 06-12 17:46:38 [core.py:515] RuntimeError: 0 active drivers ([]). There should only be one.

CC @simon-mo @youkaichao @xuechendi
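
For context, the kind of guard described above can be sketched as follows. This is a minimal illustration, not the verbatim patch: the `HAS_TRITON` flag name and its placement are assumptions, but the driver probe mirrors the check in triton/runtime/driver.py that raised in the traceback.

```python
from importlib.util import find_spec

# Illustrative guard (names assumed): treat Triton as usable only when the
# package is importable AND exactly one of its backend drivers is active on
# this machine, mirroring triton/runtime/driver.py's own check.
HAS_TRITON = find_spec("triton") is not None

if HAS_TRITON:
    from triton.backends import backends

    active_drivers = [
        backend.driver for backend in backends.values()
        if backend.driver.is_active()
    ]
    # On a non-GPU platform, triton may be installed (pulled in by an
    # external dependency such as xgrammar) yet have zero active drivers;
    # disable it here instead of failing later at kernel-autotune time.
    if len(active_drivers) != 1:
        HAS_TRITON = False
```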


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Signed-off-by: Konrad Zawora <kzawora@habana.ai>
@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @kzawora-intel, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request fixes a bug where vLLM would attempt to initialize and use Triton kernels even on systems lacking a compatible GPU driver, leading to runtime errors. The change introduces a more robust check for Triton availability by verifying the presence of an active driver, ensuring Triton is only enabled when the underlying hardware supports it.

Highlights

  • Triton Compatibility Check: Added a check to explicitly disable Triton usage within vLLM if the Triton package is installed but no active Triton driver is detected. This prevents errors on platforms without compatible GPU hardware.
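
A quick way to reproduce the underlying failure on an affected machine, based on the call chain in the traceback above (this probe is for illustration only; it is not part of the patch):

```python
# Accessing the active driver forces Triton's driver discovery to run,
# which is the step that failed in the traceback.
from triton.runtime.driver import driver

try:
    driver.active.get_benchmarker()
    print("Triton has a usable active driver")
except RuntimeError as exc:
    # On an unsupported platform this raises, e.g.:
    # "0 active drivers ([]). There should only be one."
    print(f"Triton is installed but unusable: {exc}")
```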

@kzawora-intel force-pushed the private/kzawora/triton_fix branch from 58b97a8 to 7230f14 on June 12, 2025 at 15:15
@gemini-code-assist (bot) left a comment


Code Review

This pull request correctly addresses an issue where vLLM could attempt to use an installed Triton package even if no compatible GPU driver is active, leading to runtime errors. The fix involves checking for the number of active Triton drivers and disabling Triton if a unique active driver is not found. My review suggests enhancing this new check by adding comprehensive error handling (using try-except blocks) to gracefully manage potential issues with the Triton installation itself, and by incorporating more specific logging to inform users why Triton might be disabled by this new logic. These changes would improve the robustness and diagnosability of the Triton usability check.
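
A sketch of what the suggested hardening might look like, assuming a module-level HAS_TRITON flag (the flag name and log wording are illustrative, not taken from the actual change):

```python
import logging
from importlib.util import find_spec

logger = logging.getLogger(__name__)

HAS_TRITON = find_spec("triton") is not None

if HAS_TRITON:
    try:
        from triton.backends import backends

        active_drivers = [
            backend.driver for backend in backends.values()
            if backend.driver.is_active()
        ]
        if len(active_drivers) != 1:
            # Tell the user why Triton is being disabled, per the review.
            logger.info(
                "Triton is installed but %d active driver(s) found "
                "(expected 1); disabling Triton.", len(active_drivers))
            HAS_TRITON = False
    except Exception as exc:
        # A broken Triton installation should disable Triton gracefully
        # rather than crash engine startup.
        logger.warning(
            "Failed to probe Triton drivers (%s); disabling Triton.", exc)
        HAS_TRITON = False
```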

Signed-off-by: Konrad Zawora <kzawora@habana.ai>
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
@kzawora-intel force-pushed the private/kzawora/triton_fix branch from 56dbd8a to 2d9be44 on June 12, 2025 at 15:44
@xuechendi (Contributor) commented
@simon-mo @mgoin @WoosukKwon @robertgshaw2-redhat, please help review

@aarnphm (Collaborator) left a comment


LGTM. Thanks

@aarnphm added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Jun 12, 2025
@simon-mo merged commit 861a0a0 into vllm-project:main on Jun 14, 2025 (72 checks passed)
minpeter pushed a commit to minpeter/vllm that referenced this pull request Jun 24, 2025
yangw-dev pushed a commit to yangw-dev/vllm that referenced this pull request Jun 24, 2025
avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025