Description
Checklist
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
5. Please use English, otherwise it will be closed.
Describe the bug
In the 0.4.6.post1 and 0.4.6.post2 Docker images, the nvidia-nccl-cu12 package is manually upgraded to 2.26.2. This change was introduced in https://github.com/sgl-project/sglang/pull/5894/files
I noticed a significant performance drop with the latest 0.4.6.post2 image in my 2-node H100x8, 400G IB environment, compared to the 0.4.6 image I tested last weekend. Also, upgrading manually via git pull && pip install -e python[all] from the 0.4.6 image does not hit this issue.
While looking for the root cause, I noticed that pip warns the installed version is incompatible:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torch 2.6.0+cu124 requires nvidia-nccl-cu12==2.21.5; platform_system == "Linux" and platform_machine == "x86_64", but you have nvidia-nccl-cu12 2.26.2.post1 which is incompatible.
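For reference, this is roughly the check and downgrade I ran inside the container (a minimal sketch; 2.21.5 is simply the version that torch 2.6.0+cu124 declares in the pip error above):
# check which NCCL wheel the image currently ships
pip show nvidia-nccl-cu12
# pin it back to the version torch 2.6.0+cu124 expects
pip install nvidia-nccl-cu12==2.21.5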
Reproduction
I'm using LWS (LeaderWorkerSet) in Kubernetes; the leader container spec is:
containers:
  - name: sglang-leader
    image: lmsysorg/sglang:v0.4.6.post2-cu124
    env:
      - name: LWS_WORKER_INDEX
        valueFrom:
          fieldRef:
            fieldPath: metadata.labels['leaderworkerset.sigs.k8s.io/worker-index']
    command:
      - /bin/sh
      - -c
      - |
        # pip install nvidia-nccl-cu12==2.21.5
        python3 -m sglang.launch_server --host 0.0.0.0 --port 8888 --trust-remote-code --show-time-cost \
          --model-path /tmp/scratch-space/DeepSeek-V3-0324 --tp 16 \
          --dist-init-addr $(LWS_LEADER_ADDRESS):5000 --nnodes $(LWS_GROUP_SIZE) --node-rank $(LWS_WORKER_INDEX) \
          --speculative-algo EAGLE --speculative-eagle-topk 1 --speculative-num-steps 3 --speculative-num-draft-tokens 4 \
          --cuda-graph-max-bs 1
I started two LWS deployments whose only difference is whether the pip install nvidia-nccl-cu12==2.21.5 line is uncommented.
Alternatively, starting from the 0.4.6 image and upgrading via git pull && pip install -e python[all] also gives normal performance; a sketch of that upgrade path is shown below.
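For completeness, the from-source upgrade path looks roughly like this (a sketch; the /sgl-workspace/sglang checkout path is my assumption about the image layout):
# inside the lmsysorg/sglang:v0.4.6-cu124 container
cd /sgl-workspace/sglang       # assumed location of the sglang checkout in the image
git pull                       # update sglang to the latest source
pip install -e python[all]     # reinstall sglang; the image's nvidia-nccl-cu12 2.21.5 is left untouched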
Benchmark with: python3 -m sglang.bench_serving --backend sglang --num-prompts 10 --dataset-name random --random-input 1024 --random-output 4096 --max-concurrency 1 --port 8888
With nccl 2.21.5:
============ Serving Benchmark Result ============
Backend: sglang
Traffic request rate: inf
Max reqeuest concurrency: 1
Successful requests: 10
Benchmark duration (s): 194.49
Total input tokens: 6101
Total generated tokens: 20087
Total generated tokens (retokenized): 19972
Request throughput (req/s): 0.05
Input token throughput (tok/s): 31.37
Output token throughput (tok/s): 103.28
Total token throughput (tok/s): 134.65
Concurrency: 1.00
Accept length: 2.83
----------------End-to-End Latency----------------
Mean E2E Latency (ms): 19445.69
Median E2E Latency (ms): 24543.22
---------------Time to First Token----------------
Mean TTFT (ms): 276.51
Median TTFT (ms): 263.68
P99 TTFT (ms): 586.01
---------------Inter-Token Latency----------------
Mean ITL (ms): 9.55
Median ITL (ms): 8.42
P95 ITL (ms): 14.76
P99 ITL (ms): 26.79
Max ITL (ms): 34.22
==================================================
With nccl 2.26.2.post1:
============ Serving Benchmark Result ============
Backend: sglang
Traffic request rate: inf
Max reqeuest concurrency: 1
Successful requests: 10
Benchmark duration (s): 357.28
Total input tokens: 6101
Total generated tokens: 20087
Total generated tokens (retokenized): 19986
Request throughput (req/s): 0.03
Input token throughput (tok/s): 17.08
Output token throughput (tok/s): 56.22
Total token throughput (tok/s): 73.30
Concurrency: 1.00
Accept length: 2.90
----------------End-to-End Latency----------------
Mean E2E Latency (ms): 35724.27
Median E2E Latency (ms): 44331.85
---------------Time to First Token----------------
Mean TTFT (ms): 409.48
Median TTFT (ms): 359.05
P99 TTFT (ms): 755.03
---------------Inter-Token Latency----------------
Mean ITL (ms): 17.59
Median ITL (ms): 16.63
P95 ITL (ms): 26.11
P99 ITL (ms): 50.43
Max ITL (ms): 61.33
==================================================
Environment
Python: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H100 80GB HBM3
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 535.161.07
PyTorch: 2.6.0+cu124
sglang: 0.4.6.post2
sgl_kernel: 0.1.1
flashinfer_python: 0.2.5+cu124torch2.6
triton: 3.2.0
transformers: 4.51.1
torchao: 0.10.0
numpy: 2.2.5
aiohttp: 3.11.18
fastapi: 0.115.12
hf_transfer: 0.1.9
huggingface_hub: 0.30.2
interegular: 0.3.3
modelscope: 1.25.0
orjson: 3.10.18
outlines: 0.1.11
packaging: 25.0
psutil: 7.0.0
pydantic: 2.11.4
python-multipart: 0.0.20
pyzmq: 26.4.0
uvicorn: 0.34.2
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.17
openai: 1.76.2
tiktoken: 0.9.0
anthropic: 0.50.0
litellm: 1.67.5
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 NIC9 NIC10 NIC11 NIC12 NIC13 NIC14 NIC15 NIC16 NIC17 NIC18 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 PIX NODE NODE NODE NODE NODE SYS SYS SYS PIX NODE SYS NODE NODE SYS SYS SYS SYS SYS 0-55,112-167 0N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 NODE PIX NODE NODE NODE NODE SYS SYS SYS NODE PIX SYS NODE NODE SYS SYS SYS SYS SYS 0-55,112-167 0N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 NODE NODE PIX NODE NODE NODE SYS SYS SYS NODE NODE SYS PIX NODE SYS SYS SYS SYS SYS 0-55,112-167 0N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 NODE NODE NODE NODE NODE PIX SYS SYS SYS NODE NODE SYS NODE PIX SYS SYS SYS SYS SYS 0-55,112-167 0N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS SYS SYS SYS SYS PIX NODE NODE SYS SYS NODE SYS SYS PIX NODE NODE NODE NODE 56-111,168-223 1N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS SYS SYS SYS SYS NODE PIX NODE SYS SYS NODE SYS SYS NODE PIX NODE NODE NODE 56-111,168-223 1N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS SYS SYS SYS SYS NODE NODE PIX SYS SYS NODE SYS SYS NODE NODE PIX NODE NODE 56-111,168-223 1N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS SYS SYS SYS SYS NODE NODE NODE SYS SYS PIX SYS SYS NODE NODE NODE PIX NODE 56-111,168-223 1N/A
NIC0 PIX NODE NODE NODE SYS SYS SYS SYS X NODE NODE NODE NODE NODE SYS SYS SYS PIX NODE SYS NODE NODE SYS SYS SYS SYS SYS
NIC1 NODE PIX NODE NODE SYS SYS SYS SYS NODE X NODE NODE NODE NODE SYS SYS SYS NODE PIX SYS NODE NODE SYS SYS SYS SYS SYS
NIC2 NODE NODE PIX NODE SYS SYS SYS SYS NODE NODE X NODE NODE NODE SYS SYS SYS NODE NODE SYS PIX NODE SYS SYS SYS SYS SYS
NIC3 NODE NODE NODE NODE SYS SYS SYS SYS NODE NODE NODE X PIX NODE SYS SYS SYS NODE NODE SYS NODE NODE SYS SYS SYS SYS SYS
NIC4 NODE NODE NODE NODE SYS SYS SYS SYS NODE NODE NODE PIX X NODE SYS SYS SYS NODE NODE SYS NODE NODE SYS SYS SYS SYS SYS
NIC5 NODE NODE NODE PIX SYS SYS SYS SYS NODE NODE NODE NODE NODE X SYS SYS SYS NODE NODE SYS NODE PIX SYS SYS SYS SYS SYS
NIC6 SYS SYS SYS SYS PIX NODE NODE NODE SYS SYS SYS SYS SYS SYS X NODE NODE SYS SYS NODE SYS SYS PIX NODE NODE NODE NODE
NIC7 SYS SYS SYS SYS NODE PIX NODE NODE SYS SYS SYS SYS SYS SYS NODE X NODE SYS SYS NODE SYS SYS NODE PIX NODE NODE NODE
NIC8 SYS SYS SYS SYS NODE NODE PIX NODE SYS SYS SYS SYS SYS SYS NODE NODE X SYS SYS NODE SYS SYS NODE NODE PIX NODE NODE
NIC9 PIX NODE NODE NODE SYS SYS SYS SYS PIX NODE NODE NODE NODE NODE SYS SYS SYS X NODE SYS NODE NODE SYS SYS SYS SYS SYS
NIC10 NODE PIX NODE NODE SYS SYS SYS SYS NODE PIX NODE NODE NODE NODE SYS SYS SYS NODE X SYS NODE NODE SYS SYS SYS SYS SYS
NIC11 SYS SYS SYS SYS NODE NODE NODE PIX SYS SYS SYS SYS SYS SYS NODE NODE NODE SYS SYS X SYS SYS NODE NODE NODE PIX NODE
NIC12 NODE NODE PIX NODE SYS SYS SYS SYS NODE NODE PIX NODE NODE NODE SYS SYS SYS NODE NODE SYS X NODE SYS SYS SYS SYS SYS
NIC13 NODE NODE NODE PIX SYS SYS SYS SYS NODE NODE NODE NODE NODE PIX SYS SYS SYS NODE NODE SYS NODE X SYS SYS SYS SYS SYS
NIC14 SYS SYS SYS SYS PIX NODE NODE NODE SYS SYS SYS SYS SYS SYS PIX NODE NODE SYS SYS NODE SYS SYS X NODE NODE NODE NODE
NIC15 SYS SYS SYS SYS NODE PIX NODE NODE SYS SYS SYS SYS SYS SYS NODE PIX NODE SYS SYS NODE SYS SYS NODE X NODE NODE NODE
NIC16 SYS SYS SYS SYS NODE NODE PIX NODE SYS SYS SYS SYS SYS SYS NODE NODE PIX SYS SYS NODE SYS SYS NODE NODE X NODE NODE
NIC17 SYS SYS SYS SYS NODE NODE NODE PIX SYS SYS SYS SYS SYS SYS NODE NODE NODE SYS SYS PIX SYS SYS NODE NODE NODE X NODE
NIC18 SYS SYS SYS SYS NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS NODE NODE NODE SYS SYS NODE SYS SYS NODE NODE NODE NODE X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
NIC6: mlx5_6
NIC7: mlx5_7
NIC8: mlx5_8
NIC9: mlx5_9
NIC10: mlx5_10
NIC11: mlx5_11
NIC12: mlx5_12
NIC13: mlx5_13
NIC14: mlx5_14
NIC15: mlx5_15
NIC16: mlx5_16
NIC17: mlx5_17
NIC18: mlx5_bond_0
ulimit soft: 1048576