
Conversation

ishandhanani
Collaborator

This Dockerfile enables FP4 disaggregation with DeepSeek-R1 (DSR1). The commands are as follows:

decode

```bash

NCCL_MNNVL_ENABLE=1 \
NCCL_CUMEM_ENABLE=1 \
SGLANG_USE_MESSAGE_QUEUE_BROADCASTER=0 \
PYTHONUNBUFFERED=1 \
python3 -m sglang.launch_server \
--disaggregation-transfer-backend nixl \
--disaggregation-mode decode \
--host 0.0.0.0 \
--decode-log-interval 1 \
--max-running-requests 1536 \
--context-length 4224 \
--max-total-tokens=2048 \
--disable-radix-cache \
--disable-shared-experts-fusion \
--attention-backend cutlass_mla \
--watchdog-timeout 1000000 \
--model-path /model/ \
--served-model-name nvidia/DeepSeek-R1-0528-FP4 \
--trust-remote-code \
--tp-size 4 \
--dp-size 4 \
--enable-dp-attention \
--cuda-graph-bs 64 \
--port 30000 \
--quantization modelopt_fp4 \
--enable-flashinfer-cutlass-moe \
--enable-ep-moe
```

prefill

```bash

NCCL_MNNVL_ENABLE=1 \
NCCL_CUMEM_ENABLE=1 \
SGLANG_USE_MESSAGE_QUEUE_BROADCASTER=0 \
SGL_DISABLE_TP_MEMORY_INBALANCE_CHECK=1 \
PYTHONUNBUFFERED=1 \
python3 -m sglang.launch_server \
--disaggregation-transfer-backend nixl \
--disaggregation-mode prefill \
--host 0.0.0.0 \
--decode-log-interval 1 \
--max-running-requests 1536 \
--context-length 4224 \
--disable-radix-cache \
--disable-shared-experts-fusion \
--attention-backend cutlass_mla \
--watchdog-timeout 1000000 \
--model-path /model/ \
--served-model-name nvidia/DeepSeek-R1-0528-FP4 \
--trust-remote-code \
--tp-size 4 \
--dp-size 4 \
--enable-dp-attention \
--disable-cuda-graph \
--chunked-prefill-size 32768 \
--max-total-tokens 131072 \
--port 30000 \
--max-prefill-tokens 32768 \
--quantization modelopt_fp4 \
--enable-flashinfer-cutlass-moe \
--enable-ep-moe
```
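
Once both servers are up, traffic in a disaggregated deployment normally goes through a proxy or load balancer that pairs the prefill and decode instances, rather than to either server directly. As a minimal sketch (the router address and port here are assumptions, not part of this PR; the payload follows SGLang's standard `/generate` API), a request might look like:

```bash
# Hypothetical: assumes a PD router/load balancer fronting the two
# servers at localhost:8000; substitute your actual endpoint.
curl -s http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{
        "text": "Explain FP4 quantization in one sentence.",
        "sampling_params": {"max_new_tokens": 64, "temperature": 0.6}
      }'
```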

Contributor

gemini-code-assist bot left a comment

Summary of Changes

Hello @ishandhanani, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the Dockerfile.gb200 to enable FP4 disaggregation, specifically for the DeepSeek-R1-0528-FP4 model on GB200 hardware. The changes involve crucial dependency management, including the addition of the nixl package, and an update to how the SGLang project is sourced, now building from its latest upstream version. These modifications are essential for supporting advanced quantization and memory disaggregation features within the Dockerized environment.

Highlights

  • Enable FP4 Disaggregation: The Dockerfile now includes the installation of the nixl Python package, which is explicitly noted as enabling FP4 disaggregation. This is a core change to support the PR's stated intent.
  • SGLang Version Update: The SGLang project is now cloned using --depth 1 for a shallow clone, and the explicit git checkout to a specific commit (a167fd0bcb9ef4b0f4331a109e40c8cdc770b026) has been removed. This means the Dockerfile will now build with the latest HEAD of the SGLang project, incorporating recent updates and features.
  • Dependency Name Correction: Corrected the Python package name mooncake_transfer_engine to mooncake-transfer-engine to align with standard Python package naming conventions (using hyphens instead of underscores).
  • Explicit CuDNN Installation: Added explicit pip install commands for nvidia-cudnn-cu12 and nvidia-cudnn-frontend. A comment indicates these are temporary additions until future flashinfer versions (after 0.2.9rc2) automatically handle their installation.

zhyncs merged commit b27b119 into main on Aug 2, 2025 (14 of 15 checks passed).
zhyncs deleted the ishan/gb200 branch on Aug 2, 2025 at 01:58.
Contributor

gemini-code-assist bot left a comment

Code Review

This pull request updates the Dockerfile.gb200 to support FP4 disaggregation. The changes include updating the sglang dependency to the latest version, fixing a package name for mooncake-transfer-engine, and adding new dependencies required for the disaggregation feature. My review focuses on improving the stability of the Docker build and optimizing the Dockerfile structure.

```diff
 WORKDIR /sgl-workspace
 RUN python3 -m pip install --no-cache-dir --upgrade pip setuptools wheel html5lib six \
-    && git clone https://github.com/sgl-project/sglang.git \
+    && git clone --depth 1 https://github.com/sgl-project/sglang.git \
```
Contributor

Severity: high

The git checkout to a specific commit has been removed. While using --depth 1 is a good optimization for reducing the clone size, removing the pinned commit means this Docker build will now pull the HEAD of the sglang repository's default branch. This can lead to non-reproducible builds and may introduce breaking changes unexpectedly.

For production stability and reproducible builds, it is highly recommended to pin dependencies to a specific git commit hash or tag.
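
As a minimal sketch of one way to keep the shallow clone while restoring reproducibility (the hash below is the commit this Dockerfile previously checked out; any known-good commit or tag works):

```dockerfile
# Sketch: fetch a single pinned commit instead of cloning HEAD,
# keeping the download as small as a --depth 1 clone.
RUN git init sglang \
    && cd sglang \
    && git remote add origin https://github.com/sgl-project/sglang.git \
    && git fetch --depth 1 origin a167fd0bcb9ef4b0f4331a109e40c8cdc770b026 \
    && git checkout FETCH_HEAD
```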

Comment on lines +104 to +112
```dockerfile
# These will be automatically installed by future versions of flashinfer after 0.2.9rc2
RUN python3 -m pip install --no-cache-dir \
    nvidia-cudnn-cu12 \
    nvidia-cudnn-frontend

# Allows for FP4 disaggregation
RUN python3 -m pip install --no-cache-dir \
    nixl
```

Contributor

Severity: medium

To optimize the Docker image size and improve build performance, it's a best practice to combine consecutive RUN commands into a single layer. The two new pip install commands can be merged into one.

For further optimization, you could also merge these new packages into the preceding RUN pip install command on lines 92-102.

```dockerfile
# These will be automatically installed by future versions of flashinfer after 0.2.9rc2
# Allows for FP4 disaggregation
RUN python3 -m pip install --no-cache-dir \
    nvidia-cudnn-cu12 \
    nvidia-cudnn-frontend \
    nixl
```
