Description
Checklist
- 1. I have searched related issues but cannot get the expected help.
- 2. The bug has not been fixed in the latest version.
- 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- 4. If the issue you raised is not a bug but a question, please open a discussion at https://github.com/sgl-project/sglang/discussions/new/choose instead; otherwise, it will be closed.
- 5. Please use English, otherwise it will be closed.
Describe the bug
When I use the latest release (0.4.8) or the main branch, two-batch-overlap crashes in the CUDA graph capture routine.
The call stack shows that batch.gathered_buffer is None:
```
[2025-06-24 22:26:45 DP15 TP15] Scheduler hit an exception: Traceback (most recent call last):
File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 2632, in run_scheduler_process
scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, pp_rank, dp_rank)
File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 315, in __init__
self.tp_worker = TpWorkerClass(
File "/sgl-workspace/sglang/python/sglang/srt/managers/tp_worker.py", line 79, in __init__
self.model_runner = ModelRunner(
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 220, in __init__
self.initialize(min_per_gpu_memory)
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 300, in initialize
self.init_cuda_graphs()
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 1154, in init_cuda_graphs
self.cuda_graph_runner = CudaGraphRunner(self)
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/cuda_graph_runner.py", line 336, in __init__
self.capture()
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/cuda_graph_runner.py", line 436, in capture
) = self.capture_one_batch_size(bs, forward)
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/cuda_graph_runner.py", line 545, in capture_one_batch_size
self.tbo_plugin.capture_one_batch_size(forward_batch, num_tokens=num_tokens)
File "/sgl-workspace/sglang/python/sglang/srt/two_batch_overlap.py", line 126, in capture_one_batch_size
TboForwardBatchPreparer.prepare_raw(
File "/sgl-workspace/sglang/python/sglang/srt/two_batch_overlap.py", line 255, in prepare_raw
child_a = cls.filter_batch(
File "/sgl-workspace/sglang/python/sglang/srt/two_batch_overlap.py", line 352, in filter_batch
(sum_len, batch.gathered_buffer.shape[1]),
AttributeError: 'NoneType' object has no attribute 'shape'
```
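For context, the failing line slices on `batch.gathered_buffer.shape[1]`, which assumes the buffer is always allocated before `filter_batch` runs. Below is a minimal sketch of the failure mode and a defensive guard; the types and helper here are hypothetical stand-ins, not the actual sglang implementation, and whether the right fix is a guard or allocating the buffer earlier in the TBO capture path is for the maintainers to decide.

```python
# Minimal sketch of the failure mode (hypothetical types, not sglang's code).
# During CUDA graph capture with two-batch-overlap, the forward batch can
# reach filter_batch with gathered_buffer still unset, so touching .shape
# raises AttributeError on None.
from dataclasses import dataclass
from typing import Optional

import torch


@dataclass
class FakeBatch:  # hypothetical stand-in for sglang's ForwardBatch
    gathered_buffer: Optional[torch.Tensor] = None


def filter_batch(batch: FakeBatch, sum_len: int) -> Optional[torch.Tensor]:
    if batch.gathered_buffer is None:
        # Guarding here avoids the AttributeError seen in the traceback above.
        return None
    return torch.empty(
        (sum_len, batch.gathered_buffer.shape[1]),
        dtype=batch.gathered_buffer.dtype,
        device=batch.gathered_buffer.device,
    )


filter_batch(FakeBatch(), sum_len=8)  # returns None instead of crashing
```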
Reproduction
Version 0.4.7.post1 works fine; the crash appears only on 0.4.8 and the main branch. (Note that the environment dump below was collected on the 0.4.7.post1 install.)
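The report does not include the exact launch command. Judging from the DP15/TP15 ranks in the log, a 16-way TP/DP launch with DP attention and two-batch-overlap enabled is the likely trigger; the model path and sizes below are assumptions, not the reporter's actual command, and the multi-node flags (`--nnodes`, `--node-rank`, `--dist-init-addr`) are omitted:

```bash
# Hypothetical repro command; model path and tp/dp sizes are assumptions.
python3 -m sglang.launch_server \
    --model-path deepseek-ai/DeepSeek-V3 \
    --tp-size 16 --dp-size 16 \
    --enable-dp-attention \
    --enable-two-batch-overlap \
    --trust-remote-code
```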
Environment
```
Python: 3.12.3 (main, May 26 2025, 18:50:19) [GCC 13.3.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H100 80GB HBM3
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.6, V12.6.85
CUDA Driver Version: 570.148.08
PyTorch: 2.7.1+cu126
sglang: 0.4.7.post1
sgl_kernel: 0.1.9
flashinfer_python: 0.2.6.post1
triton: 3.3.1
transformers: 4.52.3
torchao: 0.9.0
numpy: 2.3.0
aiohttp: 3.12.13
fastapi: 0.115.12
hf_transfer: 0.1.9
huggingface_hub: 0.33.0
interegular: 0.3.3
modelscope: 1.27.0
orjson: 3.10.18
outlines: 0.1.11
packaging: 25.0
psutil: 7.0.0
pydantic: 2.11.7
python-multipart: 0.0.20
pyzmq: 27.0.0
uvicorn: 0.34.3
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.19
openai: 1.87.0
tiktoken: 0.9.0
anthropic: 0.54.0
litellm: 1.72.6
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 NIC9 NIC10 NIC11 NIC12 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 PIX PXB NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS 0-55,112-167 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 PXB PIX NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS 0-55,112-167 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 NODE NODE PIX PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS 0-55,112-167 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 NODE NODE PXB PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS 0-55,112-167 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS SYS SYS PIX PXB NODE NODE NODE NODE NODE NODE NODE 56-111,168-223 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS SYS SYS PXB PIX NODE NODE NODE NODE NODE NODE NODE 56-111,168-223 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS SYS SYS NODE NODE PIX PXB NODE NODE NODE NODE NODE 56-111,168-223 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS SYS SYS NODE NODE PXB PIX NODE NODE NODE NODE NODE 56-111,168-223 1 N/A
NIC0 PIX PXB NODE NODE SYS SYS SYS SYS X PXB NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC1 PXB PIX NODE NODE SYS SYS SYS SYS PXB X NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC2 NODE NODE PIX PXB SYS SYS SYS SYS NODE NODE X PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC3 NODE NODE PXB PIX SYS SYS SYS SYS NODE NODE PXB X SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC4 SYS SYS SYS SYS PIX PXB NODE NODE SYS SYS SYS SYS X PXB NODE NODE NODE NODE NODE NODE NODE
NIC5 SYS SYS SYS SYS PXB PIX NODE NODE SYS SYS SYS SYS PXB X NODE NODE NODE NODE NODE NODE NODE
NIC6 SYS SYS SYS SYS NODE NODE PIX PXB SYS SYS SYS SYS NODE NODE X PXB NODE NODE NODE NODE NODE
NIC7 SYS SYS SYS SYS NODE NODE PXB PIX SYS SYS SYS SYS NODE NODE PXB X NODE NODE NODE NODE NODE
NIC8 SYS SYS SYS SYS NODE NODE NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE X PIX PIX PIX PIX
NIC9 SYS SYS SYS SYS NODE NODE NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE PIX X PIX PIX PIX
NIC10 SYS SYS SYS SYS NODE NODE NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE PIX PIX X PIX PIX
NIC11 SYS SYS SYS SYS NODE NODE NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE PIX PIX PIX X PIX
NIC12 SYS SYS SYS SYS NODE NODE NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE PIX PIX PIX PIX X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_ib0
NIC1: mlx5_ib1
NIC2: mlx5_ib2
NIC3: mlx5_ib3
NIC4: mlx5_ib4
NIC5: mlx5_ib5
NIC6: mlx5_ib6
NIC7: mlx5_ib7
NIC8: mlx5_eth0
NIC9: mlx5_eth1
NIC10: mlx5_eth2
NIC11: mlx5_eth3
NIC12: mlx5_eth4
ulimit soft: 65536
```