Conversation

@fzyzcjy (Collaborator) commented on May 29, 2025

Motivation

Copied and modified from the DeepEP unit tests.

Modifications

Checklist

@gemini-code-assist bot (Contributor) left a comment

Hello @fzyzcjy, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello! Gemini here, providing a summary of this pull request. This PR introduces a new benchmarking script aimed at tuning the performance configurations for the DeepEP library's Buffer class, specifically focusing on the dispatch and combine operations. The script generates synthetic data, performs correctness checks for the data movement operations, and then iteratively tests different chunk sizes for NVLink and RDMA transfers to identify optimal settings for performance. The best-performing configurations are saved to a JSON file.

Highlights

  • New Benchmarking Script: Adds a new Python script (tuning_deepep.py) dedicated to benchmarking and tuning DeepEP's distributed data movement kernels.
  • DeepEP Configuration Tuning: Implements logic to test various deep_ep.Config parameters, particularly num_max_nvl_chunked_send_tokens and num_max_rdma_chunked_send_tokens, to find optimal values for performance.
  • Correctness Checks: Includes assertions to verify the correctness of the data after dispatch and combine operations, ensuring the tuning process is based on functional kernels.
  • Utility Functions: Introduces a utility file (deepep_utils.py) containing helper functions for distributed setup, tensor manipulation (including FP8 casting), and general benchmarking copied from the DeepEP project's test suite.
  • Configuration Output: Saves the identified best configurations for dispatch and combine operations to a JSON file for later use.

Changelog

  • benchmark/kernels/deepep/deepep_utils.py
    • New file added, containing utility functions for distributed initialization, tensor comparison, FP8 casting, unique element handling, score grouping, and benchmarking/profiling.
    • Includes context managers for suppressing stdout/stderr during profiling.
  • benchmark/kernels/deepep/tuning_deepep.py
    • New file added, implementing the main tuning script.
    • Sets up distributed environment and DeepEP buffer.
    • Generates synthetic data simulating MoE token dispatch/combine scenarios.
    • Performs correctness tests for dispatch and combine with various settings (FP8/BF16, with/without top-k, async/sync).
    • Includes tuning loops to iterate through different NVLink and RDMA chunk sizes for dispatch and combine (a minimal sketch of this loop pattern follows after this changelog).
    • Measures and reports performance (GB/s) for different configurations.
    • Saves the best-performing configurations to a JSON file.
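
To make the tuning flow above concrete, here is a minimal, self-contained sketch of the chunk-size search the changelog describes. The names tune_chunk_size, fake_bench, and best_config.json are hypothetical illustrations rather than the script's actual API; the real script times the DeepEP dispatch/combine kernels across ranks and tunes both NVLink and RDMA chunk sizes, while num_max_nvl_chunked_send_tokens is the deep_ep.Config parameter named in the highlights.

import json
from typing import Callable, Iterable


def tune_chunk_size(bench: Callable[[int], float], candidates: Iterable[int]) -> int:
    # Return the candidate chunk size with the highest measured bandwidth (GB/s).
    # `bench` stands in for timing the real DeepEP dispatch/combine kernel.
    best_gbps, best_chunk = float("-inf"), None
    for chunk in candidates:
        gbps = bench(chunk)
        if gbps > best_gbps:
            best_gbps, best_chunk = gbps, chunk
    return best_chunk


if __name__ == "__main__":
    # Toy stand-in benchmark so the sketch runs without GPUs or DeepEP installed.
    fake_bench = lambda chunk: 100.0 - abs(chunk - 16)
    best = tune_chunk_size(fake_bench, range(4, 33, 4))
    with open("best_config.json", "w") as f:
        json.dump({"num_max_nvl_chunked_send_tokens": best}, f)
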
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


Tuning kernels tight,
Find the fastest chunk size,
Data flies just right.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces two new Python scripts, deepep_utils.py (copied from an external source) and tuning_deepep.py, aimed at supporting the tuning of DeepEP configurations. The initiative to add tuning capabilities is valuable.

The deepep_utils.py file, being a copy, brings in utility functions for distributed setup, benchmarking, and tensor manipulations. The tuning_deepep.py script utilizes these utilities to perform comprehensive tests and tune performance parameters for dispatch and combine operations in DeepEP, under various configurations (FP8/BF16, top-k, async modes).

While the core logic for benchmarking and tuning appears functional, there are several areas for improvement regarding robustness, clarity, and configurability, as detailed in the specific comments. Additionally, the pull request description is currently empty and the checklist is not filled out, which should be addressed to provide context and track progress.

Given the medium-severity issues identified, I recommend addressing them to enhance the quality and usability of these scripts before merging.

Summary of Findings

  • Missing Type Hints: The args parameter in deepep_utils.py:init_dist and tuning_deepep.py:test_main lacks type hinting (e.g., argparse.Namespace). Adding this would improve code clarity and help with static analysis. (Severity: low, not commented due to settings)
  • Magic Numbers: Several magic numbers are present (e.g., 1e-4, 448.0 in per_token_cast_to_fp8, L2 cache size in bench, buffer sizes in tuning_deepep.py). Defining these as named constants could enhance readability and maintainability (see the sketch after this list). (Severity: low, not commented due to settings)
  • Implicit FP8 Type Check: In tuning_deepep.py, isinstance(current_x, tuple) is used to determine if current_x is FP8. This is a bit implicit. A more explicit method (e.g., a dedicated flag or type) could improve readability. (Severity: low, not commented due to settings)
  • Unconventional No-op: The use of else () for a no-op in conditional statements (e.g., event.current_stream_wait() if async_mode else ()) is slightly unconventional. A simple if async_mode: event.current_stream_wait() is more common. (Severity: low, not commented due to settings)
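
As a hedged illustration of the named-constants point above, here is a sketch in the spirit of the FP8 per-token cast utility. It is not a verbatim copy of the function in deepep_utils.py; the shape of the computation and the constant values (448.0 as the float8_e4m3fn maximum, 1e-4 as an amax clamp floor, 128-wide groups) are assumptions drawn from the findings above.

import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in torch.float8_e4m3fn
AMAX_EPS = 1e-4       # lower bound on the per-group amax to avoid huge scales


def per_token_cast_to_fp8(x: torch.Tensor, group: int = 128):
    # Scale each `group`-wide slice of a row so its absolute max maps to 448,
    # cast to FP8 E4M3, and return the per-group scales for dequantization.
    m, n = x.shape
    assert n % group == 0
    x_view = x.view(m, -1, group)
    amax = x_view.abs().float().amax(dim=2).clamp(min=AMAX_EPS)
    x_fp8 = (x_view * (FP8_E4M3_MAX / amax.unsqueeze(2))).to(torch.float8_e4m3fn)
    return x_fp8.view(m, n), amax / FP8_E4M3_MAX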

Merge Readiness

This pull request adds valuable tuning scripts for DeepEP. However, there are several medium-severity issues related to robustness, clarity, and configurability that should be addressed. Specifically, improvements to parsing logic in deepep_utils.py and handling of configurations, loop structures, and hardcoded values in tuning_deepep.py would enhance the quality of these scripts.

Additionally, the PR description should be filled out, and the checklist completed to provide better context for reviewers.

I recommend that these changes be made before merging. As an AI, I am not authorized to approve pull requests; please ensure further review and approval from team members.

Comment on lines +123 to +155
class suppress_stdout_stderr:
    def __enter__(self):
        self.outnull_file = open(os.devnull, "w")
        self.errnull_file = open(os.devnull, "w")

        self.old_stdout_fileno_undup = sys.stdout.fileno()
        self.old_stderr_fileno_undup = sys.stderr.fileno()

        self.old_stdout_fileno = os.dup(sys.stdout.fileno())
        self.old_stderr_fileno = os.dup(sys.stderr.fileno())

        self.old_stdout = sys.stdout
        self.old_stderr = sys.stderr

        os.dup2(self.outnull_file.fileno(), self.old_stdout_fileno_undup)
        os.dup2(self.errnull_file.fileno(), self.old_stderr_fileno_undup)

        sys.stdout = self.outnull_file
        sys.stderr = self.errnull_file
        return self

    def __exit__(self, *_):
        sys.stdout = self.old_stdout
        sys.stderr = self.old_stderr

        os.dup2(self.old_stdout_fileno, self.old_stdout_fileno_undup)
        os.dup2(self.old_stderr_fileno, self.old_stderr_fileno_undup)

        os.close(self.old_stdout_fileno)
        os.close(self.old_stderr_fileno)

        self.outnull_file.close()
        self.errnull_file.close()
@gemini-code-assist bot (Contributor) commented (severity: medium):

The suppress_stdout_stderr class uses low-level file descriptor manipulations (os.dup, os.dup2) to suppress output. While this is effective for C-level library outputs, have you considered if contextlib.redirect_stdout and contextlib.redirect_stderr (available in Python 3.4+) could offer a simpler, standard library-based solution if only Python-level output needs suppression?

If C-level output suppression is a firm requirement (e.g., from underlying CUDA libraries or C extensions), the current approach is understandable. However, if not, using contextlib could improve readability and reduce complexity. What are your thoughts on this trade-off?
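
A minimal sketch of the contextlib alternative mentioned above. It silences Python-level writes only; output issued directly to the C-level file descriptors (for example by CUDA/NCCL libraries) would still appear, which is why the fd-duplication approach above may remain necessary.

import contextlib
import os

with open(os.devnull, "w") as devnull, \
        contextlib.redirect_stdout(devnull), \
        contextlib.redirect_stderr(devnull):
    print("suppressed: this goes to /dev/null")       # Python-level write
    # os.write(1, b"raw fd writes bypass sys.stdout")  # would NOT be suppressed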

Comment on lines +195 to +197
assert (
    sum([name in line for line in prof_lines]) == 1
), f"Errors of the kernel {name} in the profiling table"
@gemini-code-assist bot (Contributor) commented (severity: medium):

The assertion sum([name in line for line in prof_lines]) == 1 checks if each kernel name appears exactly once in the profiler output. This could be fragile if a kernel name is a substring of another kernel name or appears in descriptive text within the profiler output lines.

Could this lead to false positives or negatives? Perhaps a more robust check, like ensuring the name is a whole word or matches a more specific pattern in the line, would be safer?
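
A hedged sketch of the "whole word" idea: count lines where the kernel name appears as a standalone token rather than as any substring. The helper name count_kernel_lines is illustrative, and the word-boundary check assumes kernel names contain only word characters.

import re

def count_kernel_lines(name: str, prof_lines: list[str]) -> int:
    # Match `name` only as a whole word within each profiler line.
    pattern = re.compile(rf"\b{re.escape(name)}\b")
    return sum(1 for line in prof_lines if pattern.search(line))

# Usage, mirroring the original assertion:
# assert count_kernel_lines(name, prof_lines) == 1, f"Errors of the kernel {name} in the profiling table"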

Comment on lines +209 to +213
time_str = line.split()[-2]
for unit, scale in units.items():
    if unit in time_str:
        kernel_times.append(float(time_str.replace(unit, "")) / scale)
        break
@gemini-code-assist bot (Contributor) commented (severity: medium):

Parsing the time_str using line.split()[-2] assumes a fixed format for the profiler output table. If the Kineto profiler's table format changes in future PyTorch versions (e.g., more columns added, different spacing), this parsing logic might break.

Would it be more resilient to parse based on column headers or use regular expressions if the format is somewhat stable but allows for minor variations?
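
A hedged sketch of the regex alternative: pull a "<number><unit>" token out of the line instead of trusting a fixed column index. The unit set and the scaling to seconds mirror the original snippet; the exact Kineto table format is an assumption, and parse_kernel_time is an illustrative name.

import re

TIME_RE = re.compile(r"(\d+(?:\.\d+)?)(us|ms|s)\b")
UNIT_SCALE = {"us": 1e6, "ms": 1e3, "s": 1.0}

def parse_kernel_time(line: str):
    # Return the kernel time in seconds, or None if no time token is found.
    match = TIME_RE.search(line)
    if match is None:
        return None
    value, unit = match.groups()
    return float(value) / UNIT_SCALE[unit]

# Example: parse_kernel_time("my_kernel  12.5%  1.234ms  ...") -> 0.001234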

output_data = {}

# Tune dispatch performance
best_dispatch_results = None
@gemini-code-assist bot (Contributor) commented (severity: medium):

The variable best_dispatch_results is initialized to None here and is later used to construct dispatch_config (around line 357). While the current loop structure (line 296, processing x_e4m3 which is a tuple) ensures best_dispatch_results is updated (around line 344-356), this reliance on loop order and specific data types for initialization can be a bit fragile.

If, for instance, the isinstance(current_x, tuple) check (line 342) didn't behave as expected or the loop order changed, best_dispatch_results might remain None or not be the expected list of three integers, potentially leading to errors when best_dispatch_results[0], [1], [2] are accessed.

Consider initializing best_dispatch_results to a default valid structure (e.g., a list of default SMs/chunk sizes) or adding an explicit check before its use to ensure it has been properly populated. This would make the logic more robust to future changes. What are your thoughts on this?
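
A hedged sketch of the suggested guard: fail loudly if the tuning loop never populated best_dispatch_results instead of indexing into None later. The helper name is illustrative, and the three-element layout ([num_sms, nvl_chunk_size, rdma_chunk_size]) follows the reviewer's description above.

def require_dispatch_results(best_dispatch_results):
    # Validate the tuning output before building the dispatch config from it.
    if best_dispatch_results is None or len(best_dispatch_results) != 3:
        raise RuntimeError(
            "Dispatch tuning produced no usable result; expected "
            f"[num_sms, nvl_chunk_size, rdma_chunk_size], got {best_dispatch_results!r}"
        )
    return tuple(best_dispatch_results)

# Usage: best_sms, best_nvl_chunk, best_rdma_chunk = require_dispatch_results(best_dispatch_results)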

assert num_local_ranks == 8 and num_ranks > 8
torch.manual_seed(rank)

for i in (num_sms,):
@gemini-code-assist bot (Contributor) commented (severity: medium):

This loop for i in (num_sms,): iterates only once with i taking the value of num_sms.

If the intention is to run test_main for a single num_sms value, the loop is redundant and could be simplified to a direct call. If the plan was to iterate over multiple num_sms values, the tuple should contain those values (e.g., for i in (num_sms_val1, num_sms_val2):).

Could you clarify the intent here? Simplifying this would improve code readability.

    # If only one num_sms value is intended:
    # test_main(
    #     num_sms, local_rank, num_local_ranks, num_ranks, num_nodes, rank, buffer, group, args
    # )
    # if local_rank == 0:
    #     print("", flush=True)
    # Or, if multiple values were intended, define them in the iterable:
    # for sms_value in [24, 48]: # Example values
    #     test_main(
    #         sms_value, local_rank, num_local_ranks, num_ranks, num_nodes, rank, buffer, group, args
    #     )
    #     if local_rank == 0:
    #         print("", flush=True)

    # Current code:
    for i in (num_sms,):
        test_main(
            i, local_rank, num_local_ranks, num_ranks, num_nodes, rank, buffer, group, args
        )
        if local_rank == 0:
            print("", flush=True)

args = parser.parse_args()
print(f"Start system with {args=}")

num_processes = 8
@gemini-code-assist bot (Contributor) commented (severity: medium):

The num_processes variable is hardcoded to 8. This implies the script is primarily designed for execution on a single node with 8 GPUs.

For broader usability and testing on different hardware configurations, would it be beneficial to make num_processes configurable, perhaps via a command-line argument similar to other parameters like --nnodes or --num-sms? This would make the tuning script more flexible.

Suggested change
- num_processes = 8
+ num_processes = args.num_gpus_per_node  # Example: if added to argparse
+ # Or, if keeping it simple for now, ensure this assumption is documented.
+ # num_processes = 8  # Assumes 8 GPUs per node
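
A hedged sketch of how the suggested flag could be wired up. The --num-gpus-per-node flag and the placeholder defaults are illustrative; they are not arguments the merged script is guaranteed to define.

import argparse

parser = argparse.ArgumentParser(description="DeepEP tuning (sketch)")
parser.add_argument("--nnodes", type=int, default=1)
parser.add_argument("--num-sms", type=int, default=24)
parser.add_argument(
    "--num-gpus-per-node",
    type=int,
    default=8,
    help="GPUs (and spawned tuning processes) per node",
)
args = parser.parse_args()
print(f"Start system with {args=}")

num_processes = args.num_gpus_per_node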

@zhyncs merged commit 6df81e8 into sgl-project:main on May 29, 2025 (1 check passed)
Layssy pushed a commit to Layssy/sglang-iaas that referenced this pull request Jun 9, 2025
xwu-intel pushed a commit to xwu-intel/sglang that referenced this pull request Jun 17, 2025
walker-ai pushed a commit to walker-ai/sglang that referenced this pull request Jul 8, 2025
Merge branch 'sgl_20250610_sync_tag047 of git@code.alipay.com:Theta/SGLang.git into main

https://code.alipay.com/Theta/SGLang/pull_requests/52


Reviewed-by: 剑川 <jianchuan.gys@antgroup.com>


* [Bugfix] Fix slice operation when chunk size mismatch (sgl-project#6697)
* [Bugfix] Fix ChatCompletion endpoint of mini_lb when stream is set (sgl-project#6703)
* [CI] Fix setup of disaggregation with different tp (sgl-project#6706)
* [PD] Remove Unnecessary Exception Handling for FastQueue.get() (sgl-project#6712)
* Fuse routed_scaling_factor in DeepSeek (sgl-project#6710)
* Overlap two kernels in DeepSeek with communication (sgl-project#6711)
* Minor refactor two-batch overlap (sgl-project#6682)
* Speed up when having padding tokens two-batch overlap (sgl-project#6668)
* [Feature] Support Flashinfer fp8 blockwise GEMM kernel on Blackwell (sgl-project#6479)
* Fix LoRA bench (sgl-project#6719)
* temp
* Fix PP for Qwen3 MoE (sgl-project#6709)
* [feat] triton kernel for get_last_loc (sgl-project#6676)
* [fix] more mem for draft_extend cuda_graph (sgl-project#6726)
* [PD] bug fix:  Update status if nixl receiver send a a dummy req. (sgl-project#6720)
* Tune memory arguments on B200 (sgl-project#6718)
* Add DeepSeek-R1-0528 function call chat template (sgl-project#6725)
* refactor(tool call): Fix BaseFormatDetector tool_index issue and refactor `parse_streaming_increment` (sgl-project#6715)
* Add draft extend CUDA graph for Triton backend (sgl-project#6705)
* refactor apply_w8a8_block_fp8_linear in fp (sgl-project#6545)
* [PD] Support completion endpoint (sgl-project#6729)
* PD Rust LB (PO2) (sgl-project#6437)
* Super tiny enable sole usage of expert distribution metrics and update doc (sgl-project#6680)
* Support picking variants of EPLB algorithms (sgl-project#6728)
* Support tuning DeepEP configs (sgl-project#6742)
* [test] add ut and bm for get_last_loc (sgl-project#6746)
* Fix mem_fraction_static for AMD CI (sgl-project#6748)
* [fix][RL] Fix DeepSeekV3ForCausalLM.post_load_weights for multiple update weight (sgl-project#6265)
* Improve EPLB logical to physical dispatch map (sgl-project#6727)
* Update DeepSeek-R1-0528 function call chat template (sgl-project#6765)
* [PD] Optimize time out logic and add env var doc for mooncake (sgl-project#6761)
* Fix aiohttp 'Chunk too big' in bench_serving (sgl-project#6737)
* Support sliding window in triton backend (sgl-project#6509)
* Fix shared experts fusion error (sgl-project#6289)
* Fix one bug in the grouped-gemm triton kernel (sgl-project#6772)
* update llama4 chat template and pythonic parser (sgl-project#6679)
* feat(tool call): Enhance Llama32Detector for improved JSON parsing in non-stream (sgl-project#6784)
* Support token-level quantization for EP MoE (sgl-project#6782)
* Temporarily lower mmlu threshold for triton sliding window backend (sgl-project#6785)
* ci: relax test_function_call_required (sgl-project#6786)
* Add intel_amx backend for Radix Attention for CPU (sgl-project#6408)
* Fix incorrect LoRA weight loading for fused gate_up_proj (sgl-project#6734)
* fix(PD-disaggregation): Can not get local ip (sgl-project#6792)
* [FIX] mmmu bench serving result display error (sgl-project#6525) (sgl-project#6791)
* Bump torch to 2.7.0 (sgl-project#6788)
* chore: bump sgl-kernel v0.1.5 (sgl-project#6794)
* Improve profiler and integrate profiler in bench_one_batch_server (sgl-project#6787)
* chore: upgrade sgl-kernel v0.1.5 (sgl-project#6795)
* [Minor] Always append newline after image token when parsing chat message (sgl-project#6797)
* Update CI tests for Llama4 models (sgl-project#6421)
* [Feat] Enable PDL automatically on Hopper architecture (sgl-project#5981)
* chore: update blackwell docker (sgl-project#6800)
* misc: cache is_hopper_arch (sgl-project#6799)
* Remove contiguous before Flashinfer groupwise fp8 gemm (sgl-project#6804)
* Correctly abort the failed grammar requests & Improve the handling of abort (sgl-project#6803)
* [EP] Add cuda kernel for moe_ep_pre_reorder (sgl-project#6699)
* Add draft extend CUDA graph for flashinfer backend  (sgl-project#6805)
* Refactor CustomOp to avoid confusing bugs (sgl-project#5382)
* Tiny log prefill time (sgl-project#6780)
* Tiny fix EPLB assertion about rebalancing period and recorder window size (sgl-project#6813)
* Add simple utility to dump tensors for debugging (sgl-project#6815)
* Fix profiles do not have consistent names (sgl-project#6811)
* Speed up rebalancing when using non-static dispatch algorithms (sgl-project#6812)
* [1/2] Add Kernel support for Cutlass based Fused FP4 MoE (sgl-project#6093)
* [Router] Fix k8s Service Discovery (sgl-project#6766)
* Add CPU optimized kernels for topk and rope fusions  (sgl-project#6456)
* fix new_page_count_next_decode (sgl-project#6671)
* Fix wrong weight reference in dynamic EPLB (sgl-project#6818)
* Minor add metrics to expert location updater (sgl-project#6816)
* [Refactor] Rename `n_share_experts_fusion` as `num_fused_shared_experts` (sgl-project#6735)
* [FEAT] Add transformers backend support  (sgl-project#5929)
* [fix] recover auto-dispatch for rmsnorm and rope (sgl-project#6745)
* fix ep_moe_reorder kernel bugs (sgl-project#6858)
* [Refactor] Multimodal data processing for VLM (sgl-project#6659)
* Decoder-only Scoring API (sgl-project#6460)
* feat: add dp-rank to KV events (sgl-project#6852)
* Set `num_fused_shared_experts` as `num_shared_experts` when shared_experts fusion is not disabled (sgl-project#6736)
* Fix one missing arg in DeepEP (sgl-project#6878)
* Support LoRA in TestOpenAIVisionServer and fix fused kv_proj loading bug. (sgl-project#6861)
* support 1 shot allreduce  in 1-node and 2-node using mscclpp (sgl-project#6277)
* Fix Qwen3MoE missing token padding optimization (sgl-project#6820)
* Tiny update error hints (sgl-project#6846)
* Support layerwise rebalancing experts (sgl-project#6851)
* Tiny allow profiler API to auto create directory (sgl-project#6865)
* Support Blackwell DeepEP docker images (sgl-project#6868)
* [EP] Add cuda kernel for moe_ep_post_reorder (sgl-project#6837)
* [theta]merge 0605
* oai: fix openAI client error with single request via batch api (sgl-project#6170)
* [PD] Fix potential perf spike caused by tracker gc and optimize doc (sgl-project#6764)
* Use deepgemm instead of triton for fused_qkv_a_proj_with_mqa (sgl-project#6890)
* [CUTLASS-FP4-MOE]  Introduce CutlassMoEParams class for easy initialization of Cutlass Grouped Gems Metadata (sgl-project#6887)
* bugfix(OAI): Fix image_data processing for jinja chat templates (sgl-project#6877)
* [CPU] enable CI for PRs, add Dockerfile and auto build task (sgl-project#6458)
* AITER backend extension and workload optimizations (sgl-project#6838)
* [theta]merge
* [theta]merge
* [Feature] Support Flashinfer fmha on Blackwell (sgl-project#6930)
* Fix a bug in abort & Improve docstrings for abort (sgl-project#6931)
* Tiny support customize DeepEP max dispatch tokens per rank (sgl-project#6934)
* Sync the changes on cuda graph runners (sgl-project#6932)
* [PD] Optimize transfer queue forward logic for dummy rank (sgl-project#6922)
* [Refactor] image data process in bench_serving (sgl-project#6879)
* [fix] logical_to_all_physical_map index 256 is out of bounds in EP parallel. (sgl-project#6767)
* Add triton fused moe kernel config for E=257 on B200 (sgl-project#6939)
* [sgl-kernel] update deepgemm (sgl-project#6942)
* chore: bump sgl-kernel v0.1.6 (sgl-project#6943)
* Minor compile fused topk (sgl-project#6944)
* [Bugfix] pipeline parallelism and Eagle Qwen2 (sgl-project#6910)
* Tiny re-introduce profile id logging (sgl-project#6912)
* Add triton version as a fused_moe_triton config search key to avoid performace decrease in different Triton version (sgl-project#5955)
* reduce torch.zeros overhead in moe align block size kernel (sgl-project#6369)
* chore: upgrade sgl-kernel v0.1.6 (sgl-project#6945)
* add fbgemm moe grouped gemm kernel benchmark (sgl-project#6924)
* [Docker] Add docker file for SGL Router (sgl-project#6915)
* Disabling mixed chunked prefill when eagle is enabled (sgl-project#6874)
* Add canary for EPLB rebalancing (sgl-project#6895)
* Refactor global_server_args_dict (sgl-project#6866)
* Fuse routed scaling factor in topk_reduce kernel (sgl-project#6220)
* Update server timeout time in AMD CI. (sgl-project#6953)
* [misc] add is_cpu() (sgl-project#6950)
* Add H20 fused MoE kernel tuning configs for DeepSeek-R1/V3 (sgl-project#6885)
* Add a CUDA kernel for fusing mapping and weighted sum for MoE. (sgl-project#6916)
* chore: bump sgl-kernel v0.1.6.post1 (sgl-project#6955)
* chore: upgrade sgl-kernel v0.1.6.post1 (sgl-project#6957)
* [DeepseekR1-FP4] Add Support for nvidia/DeepSeekR1-FP4 model (sgl-project#6853)
* Revert "Fuse routed scaling factor in topk_reduce kernel (sgl-project#6220)" (sgl-project#6968)
* [AMD] Add more tests to per-commit-amd (sgl-project#6926)
* chore: bump sgl-kernel v0.1.7 (sgl-project#6963)
* Slightly improve the sampler to skip unnecessary steps (sgl-project#6956)
* rebase h20 fused_moe config (sgl-project#6966)
* Fix CI and triton moe Configs (sgl-project#6974)
* Remove unnecessary kernels of num_token_non_padded (sgl-project#6965)
* Extend cuda graph capture bs for B200 (sgl-project#6937)
* Fuse routed scaling factor in deepseek (sgl-project#6970)
* Sync cuda graph runners (sgl-project#6976)
* Fix draft extend ut stability with flush cache (sgl-project#6979)
* Fix triton sliding window test case (sgl-project#6981)
* Fix expert distribution dumping causes OOM (sgl-project#6967)
* Minor remove one kernel for DeepSeek (sgl-project#6977)
* [perf][sgl-kernel] extend cutlass_mla_decode to support num_head < 128 (sgl-project#6929)
* Enable more unit tests for AMD CI. (sgl-project#6983)
* Use torch.compile to fuse flash attention decode metadata preparation (sgl-project#6973)
* Eliminate stream sync to speed up LoRA batch init  (sgl-project#6960)
* support qwen3 emebedding (sgl-project#6990)
* Fix torch profiler bugs for bench_offline_throughput.py (sgl-project#6557)
* chore: upgrade flashinfer v0.2.6.post1 jit (sgl-project#6958)
* cleanup tmp dir (sgl-project#7007)
* chore: update pr test xeon (sgl-project#7008)
* Fix cutlass MLA gets almost zero accuracy (sgl-project#6998)
* Update amd nightly models CI. (sgl-project#6992)
* feat: add direct routing strategy to DP worker (sgl-project#6884)
* Fallback to lower triton version for unfound fused moe configs (sgl-project#7013)
* Fix torchvision version for Blackwell (sgl-project#7015)
* Simplify prepare_extend_after_decode (sgl-project#6987)
* Migrate to assertEqual (sgl-project#6741)
* Fix torch version in blackwell dockerfile (sgl-project#7017)
* chore: update pr test xeon (sgl-project#7018)
* Update default settings for blackwell (sgl-project#7023)
* Support both approximate and exact expert distribution collection (sgl-project#6964)
* Add decode req pool (sgl-project#6980)
* [theta]merge 0610
* [theta]merge 0610
* [CI] Add CI workflow for sgl-router docker build (sgl-project#7027)
* Fix fused_moe triton configs (sgl-project#7029)
* CPU: map changes from developing branch in sgl-kernel (sgl-project#6833)
* chore: bump v0.4.7 (sgl-project#7038)
* Update README.md (sgl-project#7040)
@liweiqing1997

I was tuning on a 4-node H20 3E machine and got an error:

File "/usr/local/lib/python3.10/dist-packages/deep_ep-1.1.0+e6d61fc-py3.10-linux-x86_64.egg/deep_ep/buffer.py", line 430, in internode_dispatch
recv_x, recv_x_scales, _, _, _, _, _, _, _, _, _, _, _, _, event = self.runtime.internode_dispatch(
RuntimeError: Failed: CUDA error /nvme3/lwq/deepep_install/DeepEP/csrc/kernels/internode.cu:1029 'unspecified launch failure'

The root cause is:
internode.cu:1029: SWITCH_RDMA_RANKS(DISPATCH_LAUNCH_CASE);

After checking, I am not sure whether it is because the DeepEP main branch does not support EP 32 across 4 nodes, similar to this issue: deepseek-ai/DeepEP#255

What should I do? Thank you.

@fzyzcjy (Collaborator, Author) commented on Jul 10, 2025

I wonder whether it is because the tuning config causes some issues.

@Xerxes-cn

Can you explain each command-line parameter? I am confused about what num-sms means: does it set the total number of SMs or the number of SMs used?

@fzyzcjy (Collaborator, Author) commented on Jul 24, 2025

The number of SMs used.
