[Bug] Crash when running the quantized model DeepSeek-R1-AWQ with the parameter moe_wna16 #7728

@pandengyao

Description

Checklist

  • 1. I have searched related issues but could not find the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English; otherwise, the issue will be closed.

Describe the bug

[2025-07-03 10:05:08 TP1] Scheduler hit an exception: Traceback (most recent call last):
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/managers/scheduler.py", line 2716, in run_scheduler_process
    scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, pp_rank, dp_rank)
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/managers/scheduler.py", line 335, in __init__
    self.tp_worker = TpWorkerClass(
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/managers/tp_worker_overlap_thread.py", line 66, in __init__
    self.worker = TpModelWorker(
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/managers/tp_worker.py", line 81, in __init__
    self.model_runner = ModelRunner(
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/model_executor/model_runner.py", line 222, in __init__
    self.initialize(min_per_gpu_memory)
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/model_executor/model_runner.py", line 264, in initialize
    self.load_model()
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/model_executor/model_runner.py", line 586, in load_model
    self.model = get_model(
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/model_loader/__init__.py", line 22, in get_model
    return loader.load_model(
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/model_loader/loader.py", line 421, in load_model
    model = _initialize_model(
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/model_loader/loader.py", line 163, in _initialize_model
    return model_class(
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/models/deepseek_v2.py", line 2001, in __init__
    self.model = DeepseekV2Model(
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/models/deepseek_v2.py", line 1916, in __init__
    [
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/models/deepseek_v2.py", line 1917, in <listcomp>
    DeepseekV2DecoderLayer(
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/models/deepseek_v2.py", line 1715, in __init__
    self.self_attn = DeepseekV2AttentionMLA(
  File "/cds/users/yaopandeng/work/sglang/python/sglang/srt/models/deepseek_v2.py", line 899, in __init__
    and self.fused_qkv_a_proj_with_mqa.weight.dtype == torch.bfloat16
  File "/root/miniconda3/envs/sglang_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1940, in __getattr__
    raise AttributeError(
AttributeError: 'ReplicatedLinear' object has no attribute 'weight'. Did you mean: 'qweight'?
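
A minimal sketch of a possible guard, assuming the crash comes from the unconditional `.weight` access at deepseek_v2.py:899: AWQ-quantized layers store packed int32 weights as `qweight` and never register a plain `weight`, so reading `layer.weight.dtype` raises AttributeError. Reading the attribute via `getattr` with a default would sidestep the crash (class names below are stand-ins, not the real sglang classes):

```python
class BF16Linear:
    """Stand-in for an unquantized layer that has a plain bf16 weight."""
    class _W:
        dtype = "bfloat16"  # stand-in for torch.bfloat16
    weight = _W()

class AWQLinear:
    """Stand-in for an AWQ ReplicatedLinear: only packed `qweight` exists."""
    class _QW:
        dtype = "int32"
    qweight = _QW()

def weight_is_bf16(layer) -> bool:
    # Crashing form: layer.weight.dtype == torch.bfloat16
    # Guarded form: a layer with no `weight` attribute counts as "not bf16".
    w = getattr(layer, "weight", None)
    return w is not None and w.dtype == "bfloat16"

print(weight_is_bf16(BF16Linear()))  # True
print(weight_is_bf16(AWQLinear()))   # False, instead of AttributeError
```

Whether "not bf16" is the right fallback for a quantized layer is a judgment call for the maintainers; the sketch only shows how to avoid the hard crash during model construction.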

Reproduction

python3 -m sglang.launch_server --model cognitivecomputations/DeepSeek-R1-AWQ --tp 8 --trust-remote-code --quantization moe_wna16

Environment

Python: 3.10.18 (main, Jun 5 2025, 13:14:17) [GCC 11.2.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: H100
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 535.183.06
PyTorch: 2.7.0+cu126
sglang: 0.4.8.post1
sgl_kernel: 0.2.1
flashinfer_python: 0.2.6.post1
triton: 3.3.0
transformers: 4.52.3
torchao: 0.9.0
numpy: 2.2.6
aiohttp: 3.12.13
fastapi: 0.115.14
hf_transfer: 0.1.9
huggingface_hub: 0.33.1
interegular: 0.3.3
modelscope: 1.27.1
orjson: 3.10.18
outlines: 0.1.11
packaging: 25.0
psutil: 7.0.0
pydantic: 2.11.7
python-multipart: 0.0.20
pyzmq: 27.0.0
uvicorn: 0.35.0
uvloop: 0.21.0
vllm: 0.9.1
xgrammar: 0.1.19
openai: 1.93.0
tiktoken: 0.9.0
anthropic: 0.56.0
litellm: 1.73.6
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 NIC9 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 PIX NODE NODE NODE NODE NODE SYS SYS SYS SYS 0-47,96-143 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 NODE NODE NODE PIX NODE NODE SYS SYS SYS SYS 0-47,96-143 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 NODE NODE NODE NODE PIX NODE SYS SYS SYS SYS 0-47,96-143 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 NODE NODE NODE NODE NODE PIX SYS SYS SYS SYS 0-47,96-143 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS SYS SYS SYS SYS PIX NODE NODE NODE 48-95,144-191 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS SYS SYS SYS SYS NODE PIX NODE NODE 48-95,144-191 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS SYS SYS SYS SYS NODE NODE PIX NODE 48-95,144-191 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS SYS SYS SYS SYS NODE NODE NODE PIX 48-95,144-191 1 N/A
NIC0 PIX NODE NODE NODE SYS SYS SYS SYS X NODE NODE NODE NODE NODE SYS SYS SYS SYS
NIC1 NODE NODE NODE NODE SYS SYS SYS SYS NODE X PIX NODE NODE NODE SYS SYS SYS SYS
NIC2 NODE NODE NODE NODE SYS SYS SYS SYS NODE PIX X NODE NODE NODE SYS SYS SYS SYS
NIC3 NODE PIX NODE NODE SYS SYS SYS SYS NODE NODE NODE X NODE NODE SYS SYS SYS SYS
NIC4 NODE NODE PIX NODE SYS SYS SYS SYS NODE NODE NODE NODE X NODE SYS SYS SYS SYS
NIC5 NODE NODE NODE PIX SYS SYS SYS SYS NODE NODE NODE NODE NODE X SYS SYS SYS SYS
NIC6 SYS SYS SYS SYS PIX NODE NODE NODE SYS SYS SYS SYS SYS SYS X NODE NODE NODE
NIC7 SYS SYS SYS SYS NODE PIX NODE NODE SYS SYS SYS SYS SYS SYS NODE X NODE NODE
NIC8 SYS SYS SYS SYS NODE NODE PIX NODE SYS SYS SYS SYS SYS SYS NODE NODE X NODE
NIC9 SYS SYS SYS SYS NODE NODE NODE PIX SYS SYS SYS SYS SYS SYS NODE NODE NODE X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
NIC6: mlx5_6
NIC7: mlx5_7
NIC8: mlx5_8
NIC9: mlx5_9

ulimit soft: 1048576
