Describe the bug
Cannot load LoRAs into quanto-quantized Flux.
Reproduction
import torch
from diffusers import FluxTransformer2DModel, FluxPipeline
from huggingface_hub import hf_hub_download
from optimum.quanto import qfloat8, quantize, freeze
from transformers import T5EncoderModel

bfl_repo = "black-forest-labs/FLUX.1-dev"
dtype = torch.bfloat16

# Load the transformer from a single-file checkpoint, quantize it to qfloat8
# with optimum-quanto, and freeze the quantized weights.
transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors", torch_dtype=dtype
)
quantize(transformer, weights=qfloat8)
freeze(transformer)

# Quantize the T5 text encoder the same way.
text_encoder_2 = T5EncoderModel.from_pretrained(bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype)
quantize(text_encoder_2, weights=qfloat8)
freeze(text_encoder_2)

# Build the pipeline without those components, then attach the quantized ones.
pipe = FluxPipeline.from_pretrained(bfl_repo, transformer=None, text_encoder_2=None, torch_dtype=dtype)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2

# This call raises the KeyError shown in the logs below.
pipe.load_lora_weights(
    hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"), adapter_name="hyper-sd"
)
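The failure presumably happens because freeze() replaces the plain tensors, so the transformer's state dict no longer contains bare ".weight" keys for LoRA expansion to look up. A minimal check, added for illustration (the "._data"/"._scale" suffixes are my assumption about how quanto serializes its QTensor weights, not something verified in this report):

# Diagnostic sketch (assumption: after freeze(), quanto stores each weight as
# serialized QTensor components rather than a plain "...weight" key, which
# would explain the KeyError in the logs below).
sd = transformer.state_dict()
key = "single_transformer_blocks.0.attn.to_k.weight"
print(key in sd)  # expected: False after freeze()
print([k for k in sd if k.startswith(key)])  # e.g. keys ending in "._data" / "._scale"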
Logs
ERROR:
Traceback (most recent call last):
  File "/home/user/genAI/test.py", line 56, in <module>
    pipe.load_lora_weights(
  File "/home/user/miniconda3/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py", line 1867, in load_lora_weights
    transformer_lora_state_dict = self._maybe_expand_lora_state_dict(
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/miniconda3/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py", line 2490, in _maybe_expand_lora_state_dict
    base_weight_param = transformer_state_dict[base_param_name]
                        ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
KeyError: 'single_transformer_blocks.0.attn.to_k.weight'
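For what it's worth, a possible workaround sketch (an untested assumption on my part, not a fix for the bug itself): apply the LoRA while the transformer still holds plain bf16 tensors, fuse it, and only then quantize, so load_lora_weights() never has to look up keys on a quanto state dict.

# Workaround sketch (assumption, not verified): fuse the LoRA before quantizing.
pipe = FluxPipeline.from_pretrained(bfl_repo, torch_dtype=dtype)
pipe.load_lora_weights(
    hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"),
    adapter_name="hyper-sd",
)
pipe.fuse_lora()            # bake the LoRA into the base weights
pipe.unload_lora_weights()  # drop the adapter bookkeeping
quantize(pipe.transformer, weights=qfloat8)
freeze(pipe.transformer)
quantize(pipe.text_encoder_2, weights=qfloat8)
freeze(pipe.text_encoder_2)

The downside is that fusing gives up runtime adapter switching, so loading LoRAs into an already-quantized model is still needed.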
System Info
Python 3.12
diffusers 0.32.0 (also tested 0.32.1 and an install from git)
Who can help?
C00reNUT and J4BEZ