Checklist
- 1. I have searched related issues but cannot get the expected help.
- 2. The bug has not been fixed in the latest version.
- 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
- 5. Please use English, otherwise it will be closed.
Describe the bug
There seems to be a mismatch between the fields of `ModelWorkerBatch` and `ForwardBatch` regarding `_gpu` suffixes. This causes `--enable-dp-attention` to not work with `--speculative-algorithm EAGLE --speculative-draft-model-path lmsys/DeepSeek-V3-NextN`.
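To make the mismatch concrete, here is a minimal standalone sketch (the attribute names mirror the traceback below; the classes and surrounding logic are simplified stand-ins, not the actual sglang code). The check in `ForwardBatch.init_new` reads the plain attribute name, but under `--enable-dp-attention` the object it receives appears to carry only the `_gpu`-suffixed variant:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PlainBatch:
    # Field name that ForwardBatch.init_new expects (per the traceback).
    extend_input_logprob_token_ids: Optional[List[int]] = None


@dataclass
class DpPaddedBatch:
    # Under --enable-dp-attention, the batch reaching init_new appears to
    # expose only the `_gpu`-suffixed variant of the same field.
    extend_input_logprob_token_ids_gpu: Optional[List[int]] = None


def init_new(batch):
    # Simplified stand-in for the failing check at forward_batch_info.py:234.
    if batch.extend_input_logprob_token_ids is not None:
        return "has extend logprob token ids"
    return "no extend logprob token ids"


print(init_new(PlainBatch()))     # works
print(init_new(DpPaddedBatch()))  # raises AttributeError, as in the stacktrace
```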
Reproduction
```
Commit: 31dfff7da7ade6703303a67bfe6ef52ead97640a
Author: Yineng Zhang <me@zhyncs.com>
Date:   2025-03-27 19:09:58 -0700
```
Command:
```
SGL_ENABLE_JIT_DEEPGEMM=0 python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 --speculative-algorithm EAGLE --speculative-draft-model-path lmsys/DeepSeek-V3-NextN --speculative-num-steps 1 --speculative-eagle-topk 1 --speculative-num-draft-tokens 2 --trust-remote-code --tp 8 --enable-dp-attention --dp 8
```
Stacktrace:
```
[2025-03-28 02:40:40 DP2 TP2] Scheduler hit an exception: Traceback (most recent call last):
  File "/home/user/sglang/python/sglang/srt/managers/scheduler.py", line 2007, in run_scheduler_process
    scheduler.event_loop_normal()
  File "/home/user/sglang/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/sglang/python/sglang/srt/managers/scheduler.py", line 598, in event_loop_normal
    result = self.run_batch(batch)
  File "/home/user/sglang/python/sglang/srt/managers/scheduler.py", line 1395, in run_batch
    ) = self.draft_worker.forward_batch_speculative_generation(batch)
  File "/home/user/sglang/python/sglang/srt/speculative/eagle_worker.py", line 258, in forward_batch_speculative_generation
    self.target_worker.forward_batch_generation(
  File "/home/user/sglang/python/sglang/srt/managers/tp_worker.py", line 171, in forward_batch_generation
    forward_batch = ForwardBatch.init_new(model_worker_batch, self.model_runner)
  File "/home/user/sglang/python/sglang/srt/model_executor/forward_batch_info.py", line 234, in init_new
    if batch.extend_input_logprob_token_ids is not None:
AttributeError: 'ForwardBatch' object has no attribute 'extend_input_logprob_token_ids'. Did you mean: 'extend_input_logprob_token_ids_gpu'?
```
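For what it's worth, one direction a fix could take (a sketch only, under my reading of the traceback; `get_batch_field` is a hypothetical helper, not part of sglang) is to resolve batch fields with a fallback to the `_gpu`-suffixed name, so the DP-attention path and the plain path read the same logical field:

```python
def get_batch_field(batch, name):
    """Fetch `name` from `batch`, falling back to the `_gpu`-suffixed
    variant seen on the DP-attention path. Hypothetical helper for
    illustration; not the actual sglang fix."""
    sentinel = object()
    value = getattr(batch, name, sentinel)
    if value is not sentinel:
        return value
    return getattr(batch, f"{name}_gpu", None)


# The failing check at forward_batch_info.py:234 would then become:
#     if get_batch_field(batch, "extend_input_logprob_token_ids") is not None:
#         ...
```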
Environment
Python: 3.10.14 (main, Mar 12 2025, 23:05:22) [GCC 9.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H200
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.2, V12.2.140
CUDA Driver Version: 550.144.03
PyTorch: 2.5.1+cu124
sglang: 0.4.4.post2
sgl_kernel: 0.0.5.post3
flashinfer: Module Not Found
triton: 3.1.0
transformers: 4.50.0
torchao: 0.9.0
numpy: 1.26.4
aiohttp: 3.11.14
fastapi: 0.115.12
hf_transfer: 0.1.9
huggingface_hub: 0.29.3
interegular: 0.3.3
modelscope: 1.24.0
orjson: 3.10.16
outlines: 0.1.11
packaging: 24.2
psutil: 7.0.0
pydantic: 2.11.0
multipart: Module Not Found
zmq: Module Not Found
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
xgrammar: 0.1.17
openai: 1.69.0
tiktoken: 0.9.0
anthropic: 0.49.0
litellm: 1.64.1
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 NODE PHB PHB PHB PHB SYS SYS SYS SYS 0-87 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 NODE PHB PHB PHB PHB SYS SYS SYS SYS 0-87 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 NODE PHB PHB PHB PHB SYS SYS SYS SYS 0-87 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 NODE PHB PHB PHB PHB SYS SYS SYS SYS 0-87 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS SYS SYS SYS PHB PHB PHB PHB 88-175 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS SYS SYS SYS PHB PHB PHB PHB 88-175 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS SYS SYS SYS PHB PHB PHB PHB 88-175 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS SYS SYS SYS PHB PHB PHB PHB 88-175 1 N/A
NIC0 NODE NODE NODE NODE SYS SYS SYS SYS X NODE NODE NODE NODE SYS SYS SYS SYS
NIC1 PHB PHB PHB PHB SYS SYS SYS SYS NODE X PHB PHB PHB SYS SYS SYS SYS
NIC2 PHB PHB PHB PHB SYS SYS SYS SYS NODE PHB X PHB PHB SYS SYS SYS SYS
NIC3 PHB PHB PHB PHB SYS SYS SYS SYS NODE PHB PHB X PHB SYS SYS SYS SYS
NIC4 PHB PHB PHB PHB SYS SYS SYS SYS NODE PHB PHB PHB X SYS SYS SYS SYS
NIC5 SYS SYS SYS SYS PHB PHB PHB PHB SYS SYS SYS SYS SYS X PHB PHB PHB
NIC6 SYS SYS SYS SYS PHB PHB PHB PHB SYS SYS SYS SYS SYS PHB X PHB PHB
NIC7 SYS SYS SYS SYS PHB PHB PHB PHB SYS SYS SYS SYS SYS PHB PHB X PHB
NIC8 SYS SYS SYS SYS PHB PHB PHB PHB SYS SYS SYS SYS SYS PHB PHB PHB X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
NIC6: mlx5_6
NIC7: mlx5_7
NIC8: mlx5_8
Hypervisor vendor: KVM
ulimit soft: 1024