No module named 'triton.ops' under Windows with ComfyUI #65

@mykeehu

Description

Describe the bug

I use ComfyUI, where several custom modules have Triton support. When I install Triton and replace the necessary files in the python_embeded folder, I get the error below. The PyTorch version is correct: 2.6.0 with CUDA 12.4 and Python 3.12.7.
Here is the system data:

Total VRAM 24576 MB, total RAM 65289 MB
pytorch version: 2.6.0+cu124
xformers version: 0.0.29.post3
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Using xformers attention
ComfyUI version: 0.3.18
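
For completeness, the versions above can be double-checked inside the embedded environment with a short script (a sketch; it assumes the packages were installed into python_embeded):

    # check_versions.py -- run with I:\ComfyUI_windows_portable\python_embeded\python.exe
    from importlib.metadata import version, PackageNotFoundError

    for pkg in ("torch", "xformers", "diffusers", "triton"):
        try:
            print(pkg, version(pkg))
        except PackageNotFoundError:
            print(pkg, "is not installed")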

And here is the relevant part of the log:

Traceback (most recent call last):
  File "I:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 2147, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 995, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "I:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_EchoMimic\__init__.py", line 2, in <module>
    from .EchoMimic_node import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
  File "I:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_EchoMimic\EchoMimic_node.py", line 11, in <module>
    from diffusers import AutoencoderKL, DDIMScheduler
  File "<frozen importlib._bootstrap>", line 1412, in _handle_fromlist
  File "I:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\import_utils.py", line 934, in __getattr__
    value = getattr(module, name)
            ^^^^^^^^^^^^^^^^^^^^^
  File "I:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\import_utils.py", line 933, in __getattr__
    module = self._get_module(self._class_to_module[name])
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\import_utils.py", line 945, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import diffusers.models.autoencoders.autoencoder_kl because of the following error (look up to see its traceback):
No module named 'triton.ops'
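
As far as I can tell, triton.ops existed in Triton 2.x but was removed in the 3.x line, so the error presumably comes from some package in the environment that still imports it (older bitsandbytes builds did, for example). A quick way to confirm where the import breaks (a sketch, nothing ComfyUI-specific):

    # check_triton.py -- run with the embedded interpreter
    import importlib

    for name in ("triton", "triton.ops"):
        try:
            mod = importlib.import_module(name)
            print(name, "imports OK", getattr(mod, "__version__", ""))
        except ModuleNotFoundError as exc:
            print(name, "FAILED:", exc)

On a Triton 3.x install, the second import should fail with the same message as in the log above.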

Environment details

Triton: 3.2.0 (latest)
GPU: RTX 3090
CPU: i9-13900K
Python: 3.12.7
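
For reference, the failure should be reproducible without ComfyUI at all, since the traceback starts at a plain diffusers import (a minimal sketch; the interpreter path is the one from the log above):

    # repro.py -- run as: I:\ComfyUI_windows_portable\python_embeded\python.exe repro.py
    # This triggers the same lazy import chain that fails inside ComfyUI_EchoMimic.
    from diffusers import AutoencoderKL, DDIMScheduler

    print("import OK:", AutoencoderKL.__name__, DDIMScheduler.__name__)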
