Checklist
- 1. I have searched related issues but cannot get the expected help.
- 2. The bug has not been fixed in the latest version.
- 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
- 5. Please use English, otherwise it will be closed.
Describe the bug
```
[2025-03-28 05:54:02 TP0] Scheduler hit an exception: Traceback (most recent call last):
File "/workspaces/sglang/python/sglang/srt/managers/scheduler.py", line 1992, in run_scheduler_process
scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
File "/workspaces/sglang/python/sglang/srt/managers/scheduler.py", line 249, in __init__
self.tp_worker = TpWorkerClass(
File "/workspaces/sglang/python/sglang/srt/managers/tp_worker_overlap_thread.py", line 63, in __init__
self.worker = TpModelWorker(server_args, gpu_id, tp_rank, dp_rank, nccl_port)
File "/workspaces/sglang/python/sglang/srt/managers/tp_worker.py", line 76, in __init__
self.model_runner = ModelRunner(
File "/workspaces/sglang/python/sglang/srt/model_executor/model_runner.py", line 169, in __init__
self.initialize(min_per_gpu_memory)
File "/workspaces/sglang/python/sglang/srt/model_executor/model_runner.py", line 179, in initialize
self.load_model()
File "/workspaces/sglang/python/sglang/srt/model_executor/model_runner.py", line 388, in load_model
self.model = get_model(
File "/workspaces/sglang/python/sglang/srt/model_loader/__init__.py", line 22, in get_model
return loader.load_model(
File "/workspaces/sglang/python/sglang/srt/model_loader/loader.py", line 365, in load_model
model = _initialize_model(
File "/workspaces/sglang/python/sglang/srt/model_loader/loader.py", line 146, in _initialize_model
return model_class(
File "/workspaces/sglang/python/sglang/srt/models/gemma3_causal.py", line 601, in __init__
self.model = Gemma3TextModel(
File "/workspaces/sglang/python/sglang/srt/models/gemma3_causal.py", line 493, in __init__
self.layers = make_layers(
File "/workspaces/sglang/python/sglang/srt/utils.py", line 399, in make_layers
[
File "/workspaces/sglang/python/sglang/srt/utils.py", line 400, in <listcomp>
maybe_offload_to_cpu(layer_fn(idx=idx, prefix=add_prefix(idx, prefix)))
File "/workspaces/sglang/python/sglang/srt/models/gemma3_causal.py", line 495, in <lambda>
lambda idx, prefix: Gemma3DecoderLayer(
File "/workspaces/sglang/python/sglang/srt/models/gemma3_causal.py", line 293, in __init__
self.self_attn = Gemma3Attention(
File "/workspaces/sglang/python/sglang/srt/models/gemma3_causal.py", line 152, in __init__
self.qkv_proj = QKVParallelLinear(
File "/workspaces/sglang/python/sglang/srt/layers/linear.py", line 808, in __init__
super().__init__(
File "/workspaces/sglang/python/sglang/srt/layers/linear.py", line 336, in __init__
super().__init__(
File "/workspaces/sglang/python/sglang/srt/layers/linear.py", line 209, in __init__
self.quant_method = quant_config.get_quant_method(self, prefix=prefix)
File "/workspaces/sglang/python/sglang/srt/layers/quantization/compressed_tensors/compressed_tensors.py", line 121, in get_quant_method
scheme = self.get_scheme(layer=layer, layer_name=prefix)
File "/workspaces/sglang/python/sglang/srt/layers/quantization/compressed_tensors/compressed_tensors.py", line 508, in get_scheme
scheme = self._get_scheme_from_parts( # type: ignore
File "/workspaces/sglang/python/sglang/srt/layers/quantization/compressed_tensors/compressed_tensors.py", line 391, in _get_scheme_from_parts
return CompressedTensorsW8A16Fp8(
NameError: name 'CompressedTensorsW8A16Fp8' is not defined. Did you mean: 'CompressedTensorsW8A8Fp8'?
```
It seems only `CompressedTensorsW8A8Fp8` was ported, but the code is trying to instantiate `CompressedTensorsW8A16Fp8`. `CompressedTensorsW8A16Fp8` is required even for a W8A8 quantized checkpoint when the GPU's compute capability is below 8.9 (sm_89), because scheme selection falls back to it on such hardware.
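This is roughly what happens in `_get_scheme_from_parts`, reduced to a self-contained toy; the class body and helper names below are stand-ins for illustration, not sglang's real implementation:

```python
import torch


class CompressedTensorsW8A8Fp8:
    """Stand-in for sglang's real W8A8-FP8 scheme class (assumption for illustration)."""

    @classmethod
    def get_min_capability(cls) -> int:
        # FP8 W8A8 kernels require sm_89 (Ada) or newer.
        return 89


def check_scheme_supported(min_capability: int) -> bool:
    # Compute capability is encoded as major * 10 + minor (e.g. 8.6 -> 86).
    major, minor = torch.cuda.get_device_capability()
    return major * 10 + minor >= min_capability


def get_scheme():
    if check_scheme_supported(CompressedTensorsW8A8Fp8.get_min_capability()):
        return CompressedTensorsW8A8Fp8()
    # On an RTX 3090 (sm_86: 8 * 10 + 6 = 86 < 89) this branch is taken,
    # and CompressedTensorsW8A16Fp8 was never ported to sglang, so Python
    # raises: NameError: name 'CompressedTensorsW8A16Fp8' is not defined
    return CompressedTensorsW8A16Fp8()  # noqa: F821
```

Either porting the fallback class or raising an explicit unsupported-capability error would avoid the bare NameError.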
cc: @BBuf
Reproduction
- Download google/gemma-3-4b-it.
- Quantize it to FP8-Dynamic (W8A8-FP8) following https://github.com/vllm-project/llm-compressor/blob/main/examples/quantization_w8a8_fp8/README.md (a sketch is included after this list).
- Prepare a machine with CUDA compute capability below 8.9 (e.g. an RTX 3090, sm_86).
- Launch the server:

```
python3 -m sglang.launch_server --model-path ~/.cache/huggingface/local/gemma-3-4b-it-FP8-Dynamic --context-length 16384
```
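For the quantization step, this is a minimal sketch of what the linked README does; the recipe and import paths come from the llm-compressor example and may differ by version, and applying it to gemma-3 unchanged is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot

MODEL_ID = "google/gemma-3-4b-it"
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8 weights with dynamic FP8 activations for all Linear layers except lm_head.
recipe = QuantizationModifier(
    targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)
oneshot(model=model, recipe=recipe)

SAVE_DIR = "gemma-3-4b-it-FP8-Dynamic"
model.save_pretrained(SAVE_DIR)
tokenizer.save_pretrained(SAVE_DIR)
```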
Environment
```
SGLang version: 265e756
Python: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0]
CUDA available: True
GPU 0: NVIDIA GeForce RTX 3090
GPU 0 Compute Capability: 8.6
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 572.83
PyTorch: 2.5.1+cu124
sglang: 0.4.4.post2
sgl_kernel: 0.0.5.post3
flashinfer: Module Not Found
triton: 3.1.0
transformers: 4.50.0
torchao: 0.9.0
numpy: 2.2.4
aiohttp: 3.11.14
fastapi: 0.115.12
hf_transfer: 0.1.9
huggingface_hub: 0.29.3
interegular: 0.3.3
modelscope: 1.24.0
orjson: 3.10.16
outlines: 0.1.11
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.6
multipart: Module Not Found
zmq: Module Not Found
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.16
openai: 1.68.2
tiktoken: 0.9.0
anthropic: 0.49.0
litellm: 1.64.1
decord: 0.6.0
NVIDIA Topology:
GPU0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
Hypervisor vendor: Microsoft
ulimit soft: 1048576
```