[Bug] bench_speculative.py got error #4536

@Lzhang-hub

Description

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

Running the bench_speculative.py script:

python3 bench_speculative.py --model-path DeepSeek-R1 --speculative-draft-model-path deepseek-r1-nextn --tp-size 8 --trust-remote-code --batch-size 16 --steps 2 --topk 1  --num_draft_tokens 2 4 8 --context-len 2048 --mem-fraction-static 0.9 --enable-flashinfer-mla

produces the following error:

[2025-03-18 03:38:23 TP1] Scheduler hit an exception: Traceback (most recent call last):
  File "/kesgl-workspace/latest/sglang-serving/python/sglang/srt/managers/scheduler.py", line 1819, in run_scheduler_process
    scheduler.event_loop_normal()
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/kesgl-workspace/latest/sglang-serving/python/sglang/srt/managers/scheduler.py", line 502, in event_loop_normal
    result = self.run_batch(batch)
  File "/kesgl-workspace/latest/sglang-serving/python/sglang/srt/managers/scheduler.py", line 1225, in run_batch
    ) = self.draft_worker.forward_batch_speculative_generation(batch)
  File "/kesgl-workspace/latest/sglang-serving/python/sglang/srt/speculative/eagle_worker.py", line 216, in forward_batch_speculative_generation
    spec_info, to_free_cache_loc = self.draft(batch)
  File "/kesgl-workspace/latest/sglang-serving/python/sglang/srt/speculative/eagle_worker.py", line 313, in draft
    score_list, token_list, parents_list = self.cuda_graph_runner.replay(
  File "/kesgl-workspace/latest/sglang-serving/python/sglang/srt/speculative/eagle_draft_cuda_graph_runner.py", line 213, in replay
    self.model_runner.draft_attn_backend.init_forward_metadata_replay_cuda_graph(
  File "/kesgl-workspace/latest/sglang-serving/python/sglang/srt/layers/attention/flashinfer_mla_backend.py", line 803, in init_forward_metadata_replay_cuda_graph
    self.common_template(forward_batch, self.cuda_graph_kv_indices, call_fn)
  File "/kesgl-workspace/latest/sglang-serving/python/sglang/srt/layers/attention/flashinfer_mla_backend.py", line 737, in common_template
    call_fn(i, forward_batch)
  File "/kesgl-workspace/latest/sglang-serving/python/sglang/srt/layers/attention/flashinfer_mla_backend.py", line 792, in call_fn
    self.attn_backends[i].init_forward_metadata_replay_cuda_graph(
  File "/kesgl-workspace/latest/sglang-serving/python/sglang/srt/layers/attention/flashinfer_mla_backend.py", line 293, in init_forward_metadata_replay_cuda_graph
    self.cuda_graph_kv_indptr_cpu[1 : bs + 1] = torch.cumsum(
RuntimeError: The expanded size of the tensor (15) must match the existing size (14) at non-singleton dimension 0.  Target sizes: [15].  Tensor sizes: [14]

It may be related to CUDA graph replay: during replay, the padded batch size does not match the raw batch size (bs != raw_bs).
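A minimal sketch of the suspected mismatch (the variable names `bs`, `raw_bs`, `kv_indptr`, and `seq_lens` here are illustrative, not the actual sglang internals): if the CUDA graph was captured with a padded batch size `bs` but the sequence-length tensor still has the raw size `raw_bs`, the slice assignment in init_forward_metadata_replay_cuda_graph fails exactly as in the traceback.

```python
import torch
import torch.nn.functional as F

bs = 15       # padded batch size used by the captured CUDA graph (hypothetical)
raw_bs = 14   # actual number of requests in the batch (hypothetical)

kv_indptr = torch.zeros(bs + 1, dtype=torch.int32)
seq_lens = torch.ones(raw_bs, dtype=torch.int32)  # only raw_bs entries

try:
    # Mirrors kv_indptr[1 : bs + 1] = torch.cumsum(...): the left-hand slice
    # expects bs elements, but the cumsum result only has raw_bs.
    kv_indptr[1 : bs + 1] = torch.cumsum(seq_lens, dim=0)
except RuntimeError as e:
    print(e)  # "The expanded size of the tensor (15) must match the existing size (14) ..."

# Padding seq_lens up to the graph batch size would avoid the mismatch:
padded = F.pad(seq_lens, (0, bs - raw_bs))
kv_indptr[1 : bs + 1] = torch.cumsum(padded, dim=0)
```

This only demonstrates the shape error; whether the real fix is padding seq_lens or passing the raw batch size through to the replay path is for the maintainers to decide.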

Reproduction

python3 bench_speculative.py --model-path DeepSeek-R1 --speculative-draft-model-path deepseek-r1-nextn --tp-size 8 --trust-remote-code --batch-size 16 --steps 2 --topk 1  --num_draft_tokens 2 4 8 --context-len 2048 --mem-fraction-static 0.9 --enable-flashinfer-mla

Environment

/opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:2059: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(
[2025-03-18 04:27:52] INFO _client.py:1025: HTTP Request: GET https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json "HTTP/1.1 200 OK"
Python: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H20
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 535.161.07
PyTorch: 2.6.0+cu124
sglang: 0.4.4.post1
sgl_kernel: 0.0.5.post2
flashinfer: 0.2.3+cu124torch2.5
triton: 3.2.0
transformers: 4.48.3
torchao: 0.9.0
numpy: 1.26.4
aiohttp: 3.11.13
fastapi: 0.115.11
hf_transfer: 0.1.9
huggingface_hub: 0.29.3
interegular: 0.3.3
modelscope: 1.23.2
orjson: 3.10.15
packaging: 23.1
psutil: 7.0.0
pydantic: 2.10.6
multipart: 0.0.20
zmq: 26.2.1
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.66.2
tiktoken: 0.9.0
anthropic: 0.49.0
decord: 0.6.0
NVIDIA Topology:
	GPU0	GPU1	GPU2	GPU3	GPU4	GPU5	GPU6	GPU7	NIC0	NIC1	NIC2	NIC3	NIC4	NIC5	NIC6	NIC7	CPU Affinity	NUMA Affinity	GPU NUMA ID
GPU0	 X 	NV18	NV18	NV18	NV18	NV18	NV18	NV18	PIX	NODE	NODE	NODE	SYS	SYS	SYS	SYS	0-95,192-287	0		N/A
GPU1	NV18	 X 	NV18	NV18	NV18	NV18	NV18	NV18	NODE	PIX	PHB	NODE	SYS	SYS	SYS	SYS	0-95,192-287	0		N/A
GPU2	NV18	NV18	 X 	NV18	NV18	NV18	NV18	NV18	NODE	PHB	PIX	NODE	SYS	SYS	SYS	SYS	0-95,192-287	0		N/A
GPU3	NV18	NV18	NV18	 X 	NV18	NV18	NV18	NV18	NODE	NODE	NODE	PIX	SYS	SYS	SYS	SYS	0-95,192-287	0		N/A
GPU4	NV18	NV18	NV18	NV18	 X 	NV18	NV18	NV18	SYS	SYS	SYS	SYS	PIX	NODE	NODE	NODE	96-191,288-383	1		N/A
GPU5	NV18	NV18	NV18	NV18	NV18	 X 	NV18	NV18	SYS	SYS	SYS	SYS	NODE	PIX	NODE	NODE	96-191,288-383	1		N/A
GPU6	NV18	NV18	NV18	NV18	NV18	NV18	 X 	NV18	SYS	SYS	SYS	SYS	NODE	NODE	PIX	PHB	96-191,288-383	1		N/A
GPU7	NV18	NV18	NV18	NV18	NV18	NV18	NV18	 X 	SYS	SYS	SYS	SYS	NODE	NODE	PHB	PIX	96-191,288-383	1		N/A
NIC0	PIX	NODE	NODE	NODE	SYS	SYS	SYS	SYS	 X 	NODE	NODE	NODE	SYS	SYS	SYS	SYS
NIC1	NODE	PIX	PHB	NODE	SYS	SYS	SYS	SYS	NODE	 X 	PHB	NODE	SYS	SYS	SYS	SYS
NIC2	NODE	PHB	PIX	NODE	SYS	SYS	SYS	SYS	NODE	PHB	 X 	NODE	SYS	SYS	SYS	SYS
NIC3	NODE	NODE	NODE	PIX	SYS	SYS	SYS	SYS	NODE	NODE	NODE	 X 	SYS	SYS	SYS	SYS
NIC4	SYS	SYS	SYS	SYS	PIX	NODE	NODE	NODE	SYS	SYS	SYS	SYS	 X 	NODE	NODE	NODE
NIC5	SYS	SYS	SYS	SYS	NODE	PIX	NODE	NODE	SYS	SYS	SYS	SYS	NODE	 X 	NODE	NODE
NIC6	SYS	SYS	SYS	SYS	NODE	NODE	PIX	PHB	SYS	SYS	SYS	SYS	NODE	NODE	 X 	PHB
NIC7	SYS	SYS	SYS	SYS	NODE	NODE	PHB	PIX	SYS	SYS	SYS	SYS	NODE	NODE	PHB	 X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_bond_0
  NIC1: mlx5_bond_1
  NIC2: mlx5_bond_2
  NIC3: mlx5_bond_3
  NIC4: mlx5_bond_4
  NIC5: mlx5_bond_5
  NIC6: mlx5_bond_6
  NIC7: mlx5_bond_7


ulimit soft: 1048576
