Checklist
- 1. I have searched related issues but cannot get the expected help.
- 2. The bug has not been fixed in the latest version.
- 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
- 5. Please use English, otherwise it will be closed.
Describe the bug
When I start sglang on 4090 GPUs with --tensor-parallel-size=2, it crashes with RuntimeError: CUDART error: peer access is not supported between these two devices. Consumer RTX 4090 cards do not support CUDA peer-to-peer (P2P) access, so we must disable custom all-reduce on these devices.
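For reference, a minimal sketch to confirm the missing peer-access support, assuming PyTorch is installed; GPU indices 0 and 1 are placeholders for whichever pair the tensor-parallel ranks actually land on:

import torch

# sglang's custom all-reduce kernel relies on CUDA peer (P2P) access between
# the tensor-parallel GPUs; consumer RTX 4090 boards typically report False here.
print(torch.cuda.can_device_access_peer(0, 1))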
Reproduction
Run
python -m sglang.launch_server --served-model-name=qwen --trust-remote-code --model=./Qwen2.5-0.5B-Instruct --tensor-parallel-size=2
on an 8×4090 node.
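As a workaround, the crash goes away when custom all-reduce is turned off so that communication falls back to NCCL. This is a sketch assuming the --disable-custom-all-reduce server flag is available in this sglang version:
python -m sglang.launch_server --served-model-name=qwen --trust-remote-code --model=./Qwen2.5-0.5B-Instruct --tensor-parallel-size=2 --disable-custom-all-reduce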
Environment
Python: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA GeForce RTX 4090
GPU 0,1,2,3,4,5,6,7 Compute Capability: 8.9
CUDA_HOME: /usr/local/cuda-12.4
NVCC: Cuda compilation tools, release 12.4, V12.4.99
CUDA Driver Version: 550.54.14
PyTorch: 2.5.1+cu124
sglang: 0.4.4.post1
sgl_kernel: 0.0.5.post3
flashinfer: Module Not Found
triton: 3.1.0
transformers: 4.50.0
torchao: 0.9.0
numpy: 1.26.4
aiohttp: 3.11.14
fastapi: 0.115.12
hf_transfer: 0.1.9
huggingface_hub: 0.29.3
interegular: 0.3.3
modelscope: 1.24.0
orjson: 3.10.16
outlines: 0.1.11
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.6
multipart: Module Not Found
zmq: Module Not Found
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
xgrammar: 0.1.16
openai: 1.68.2
tiktoken: 0.9.0
anthropic: 0.49.0
litellm: 1.63.14
decord: 0.6.0
NVIDIA Topology:
       GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  NIC0  NIC1  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0    X    PIX   PXB   PXB   SYS   SYS   SYS   SYS   PXB   PXB   0-19,40-59    0              N/A
GPU1   PIX    X    PXB   PXB   SYS   SYS   SYS   SYS   PXB   PXB   0-19,40-59    0              N/A
GPU2   PXB   PXB    X    PXB   SYS   SYS   SYS   SYS   PIX   PIX   0-19,40-59    0              N/A
GPU3   PXB   PXB   PXB    X    SYS   SYS   SYS   SYS   PXB   PXB   0-19,40-59    0              N/A
GPU4   SYS   SYS   SYS   SYS    X    PIX   PXB   PXB   SYS   SYS   20-39,60-79   1              N/A
GPU5   SYS   SYS   SYS   SYS   PIX    X    PXB   PXB   SYS   SYS   20-39,60-79   1              N/A
GPU6   SYS   SYS   SYS   SYS   PXB   PXB    X    PXB   SYS   SYS   20-39,60-79   1              N/A
GPU7   SYS   SYS   SYS   SYS   PXB   PXB   PXB    X    SYS   SYS   20-39,60-79   1              N/A
NIC0   PXB   PXB   PIX   PXB   SYS   SYS   SYS   SYS    X    PIX
NIC1   PXB   PXB   PIX   PXB   SYS   SYS   SYS   SYS   PIX    X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
ulimit soft: 1048576