Conversation

Contributor

@yuan-luo yuan-luo commented May 28, 2025

Motivation

moe_pre_reorder is one of the important kernels in EP MoE.
Currently, moe_pre_reorder uses a Triton kernel. This PR introduces a CUDA implementation of this kernel.
The new kernel gains a 10-20% performance improvement.

Modifications

Checklist

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Hello @yuan-luo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello! I, gemini-code-assist, have summarized this pull request for you. This PR, titled "WIP: Add cuda kernel for moe pre reorder", introduces a new CUDA kernel implementation for the Mixture-of-Experts (MoE) pre-reorder step. The existing Triton kernel for this operation is commented out and replaced with a call to the new CUDA-based kernel. The necessary C++ and Python bindings are added to integrate this new kernel into the sgl-kernel library and the SGLang runtime.

Highlights

  • New CUDA Kernel for MoE Pre-Reorder: A new CUDA kernel (pre_reorder_cuda_kernel) is implemented in sgl-kernel/csrc/moe/moe_reorder_kernel.cu to handle the pre-reordering of input data for MoE layers. This kernel performs the reordering and applies scaling based on expert IDs and a source-to-destination mapping.
  • Replaces Triton Kernel: The existing call to the pre_reorder_triton_kernel in python/sglang/srt/layers/moe/ep_moe/layer.py is commented out and replaced with a call to the newly added CUDA kernel function (sgl_moe_pre_reorder). A TODO comment indicates that there might be a future switch to choose between Triton and CUDA kernels.
  • C++/Python Bindings: The new CUDA kernel function (moe_pre_reorder) is exposed to Python by adding its definition and implementation binding in sgl-kernel/csrc/common_extension.cc and declaring it in sgl-kernel/include/sgl_kernel_ops.h. A Python wrapper function is also added in sgl-kernel/python/sgl_kernel/moe.py, although the current usage in layer.py seems to directly import the bound function.
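
For orientation, the registration described above typically looks roughly like the following sketch. This is a hedged approximation rather than the PR's actual code: the TORCH_LIBRARY_FRAGMENT namespace name, the exact schema string, and the argument list (the early revision also plumbs num_blocks and block_size) may differ from what common_extension.cc really contains.

#include <ATen/ATen.h>
#include <torch/library.h>

// Illustrative forward declaration of the C++ launcher implemented in
// moe_reorder_kernel.cu; the argument list is approximated from the kernel
// parameters, not copied from the PR.
void moe_pre_reorder(
    at::Tensor input,
    at::Tensor gateup_input,
    at::Tensor src2dst,
    at::Tensor topk_ids,
    at::Tensor a1_scales,
    int64_t start_expert_id,
    int64_t end_expert_id,
    int64_t topk);

// Hypothetical registration: declare the TorchScript schema and bind the CUDA
// implementation so Python can call it as torch.ops.sgl_kernel.moe_pre_reorder.
TORCH_LIBRARY_FRAGMENT(sgl_kernel, m) {
  m.def(
      "moe_pre_reorder(Tensor input, Tensor gateup_input, Tensor src2dst, "
      "Tensor topk_ids, Tensor a1_scales, int start_expert_id, "
      "int end_expert_id, int topk) -> ()");
  m.impl("moe_pre_reorder", torch::kCUDA, &moe_pre_reorder);
}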

Changelog

  • python/sglang/srt/layers/moe/ep_moe/layer.py
    • Added import for sgl_kernel.moe_pre_reorder for CUDA/HIP.
    • Commented out the call to pre_reorder_triton_kernel.
    • Added a call to the new sgl_moe_pre_reorder function, passing the required tensors and parameters, including num_blocks and block_size.
  • sgl-kernel/csrc/common_extension.cc
    • Added TorchScript definition and CUDA implementation binding for the new moe_pre_reorder function.
  • sgl-kernel/csrc/moe/moe_reorder_kernel.cu
    • Added a new file containing the pre_reorder_cuda_kernel implementation.
    • Added a C++ wrapper function moe_pre_reorder to launch the CUDA kernel from PyTorch.
  • sgl-kernel/include/sgl_kernel_ops.h
    • Added the C++ function declaration for moe_pre_reorder.
  • sgl-kernel/python/sgl_kernel/moe.py
    • Added a Python wrapper function moe_pre_reorder that calls the bound C++ function.

CUDA kernel new,
Reorders data, fast path,
Triton takes a rest.


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a CUDA kernel for MoE (Mixture of Experts) pre-reordering, aiming to replace the existing Triton kernel, likely for performance improvements. The initiative to optimize this critical path with a dedicated CUDA kernel is a good step.

The core CUDA kernel logic for reordering and scaling seems plausible. However, there are a few critical issues in the integration and parameter passing that need to be addressed. Additionally, there's a minor typo and a point about platform consistency.

Since this is a Work-In-Progress (WIP) PR, the incomplete PR description and checklist are understandable. Please ensure these are filled out before final merging.

Summary of Findings

  • Incorrect variable in CUDA kernel launch: In sgl-kernel/csrc/moe/moe_reorder_kernel.cu, num_tokens is used to set total_blocks for the kernel grid, but num_tokens is undefined. It should use the num_blocks parameter (see the sketch after this list for one way the grid could instead be derived from the input tensor).
  • Missing parameters in Python C++ op wrapper: The Python function sgl_kernel.moe.moe_pre_reorder does not accept or pass num_blocks and block_size to the underlying C++ Torch op, which expects them.
  • Typo in import alias: In python/sglang/srt/layers/moe/ep_moe/layer.py, moe_pre_reoder should be moe_pre_reorder.
  • Platform consistency for import condition: The Python import for the CUDA kernel uses _is_cuda or _is_hip, but the C++ op is only registered for CUDA. This might need alignment if HIP support for this specific kernel is not planned.
  • Kernel selection mechanism: The code includes a TODO for switching between Triton and CUDA kernels, but currently hardcodes the CUDA kernel. This is acceptable for WIP but should be addressed for flexibility.
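
As a point of reference for the first finding, here is a hedged sketch of deriving the launch configuration directly from the input tensor instead of passing num_blocks/block_size from Python; the helper name, block size, and overall approach are illustrative, and the PR may resolve the issue differently.

#include <algorithm>
#include <utility>
#include <cuda_runtime.h>
#include <torch/extension.h>

// Hypothetical helper: compute the grid/block for a one-block-per-token kernel
// from the [num_tokens, hidden_size] input, so nothing about the launch shape
// needs to be plumbed through the Python wrapper.
inline std::pair<dim3, dim3> make_pre_reorder_launch_config(const at::Tensor& input) {
  const int num_tokens = static_cast<int>(input.size(0));        // grid.x: one block per token
  const int hidden_size = static_cast<int>(input.size(1));
  const int threads = std::max(32, std::min(hidden_size, 512));  // threads stride over hidden_size
  return {dim3(num_tokens), dim3(threads)};
}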

Merge Readiness

This pull request is a work-in-progress and introduces a promising CUDA kernel for MoE pre-reordering. However, there are critical issues related to parameter passing and variable usage that must be addressed before this PR can be considered for merging. Specifically, the CUDA kernel launch configuration uses an undefined variable, and the Python wrapper for the C++ operation is missing necessary parameters.

Once these critical issues are resolved, and the medium severity items (typo, platform consistency, TODO clarification) are considered, the PR will be in a much better state. As a reviewer, I am not authorized to approve pull requests. Please ensure these changes are made and further reviewed before merging.

@yuan-luo yuan-luo marked this pull request as draft May 28, 2025 10:36
@yuan-luo yuan-luo force-pushed the moe_pre_reorder_cuda branch from f18532f to 55aac1e on May 28, 2025 10:42
@yuan-luo
Contributor Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new CUDA kernel for MoE (Mixture of Experts) pre-reordering, along with its integration into the Python and C++ codebase. The CUDA kernel itself (pre_reorder_cuda_kernel) is well-implemented, clear, and uses efficient patterns like the grid-stride loop.

The integration points, including the Python wrapper in sgl_kernel/moe.py, the C++ extension registration in common_extension.cc, and the header declaration in sgl_kernel_ops.h, all appear correct and follow standard practices.

The PR is marked as WIP, and the TODO comment in python/sglang/srt/layers/moe/ep_moe/layer.py rightly points out the need for a mechanism to choose between the new CUDA kernel and the existing Triton kernel. Implementing this selection logic will be crucial for completing this feature.

Overall, this is a good step towards providing an optimized CUDA implementation for this operation. The checklist items in the PR description (tests, documentation, benchmarks) will also be important to address as the work progresses.

Summary of Findings

  • TorchScript Signature Precision: In sgl-kernel/csrc/common_extension.cc, the moe_pre_reorder operator is defined with a return type of (Tensor[]) in its TorchScript signature. However, the corresponding C++ implementation moe_pre_reorder (in moe_reorder_kernel.cu) is void. Using -> () in the signature might offer better precision (see the small schema illustration after this list).
  • Kernel Selection Mechanism (WIP): The TODO in python/sglang/srt/layers/moe/ep_moe/layer.py highlights the need to implement a mechanism to choose between the newly added CUDA kernel and the existing Triton kernel. This is a key part of completing the feature.
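
A hedged, self-contained illustration of the two schema styles that the first finding contrasts; the operator names and single-Tensor argument list here are placeholders, not the PR's actual schema.

#include <torch/library.h>

TORCH_LIBRARY_FRAGMENT(sgl_kernel_schema_demo, m) {
  // Style described in the finding: the op is declared to return a tensor list.
  m.def("moe_pre_reorder_returns_list(Tensor input) -> Tensor[]");
  // Suggested alternative: a void return type, matching a void C++ implementation.
  m.def("moe_pre_reorder_returns_none(Tensor input) -> ()");
}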

Merge Readiness

This pull request is a work-in-progress and makes good progress by adding a CUDA kernel for MoE pre-reordering. The kernel implementation and its initial integration are well done.

However, as indicated by the 'WIP' status and the TODO comment, the mechanism for selecting between this new CUDA kernel and the existing Triton kernel is not yet implemented. This selection logic is crucial for the feature's completeness. Additionally, as this is a WIP, aspects like unit tests, documentation, and benchmarks (as per the PR checklist) will need to be addressed before this can be considered ready for merging.

I am unable to approve pull requests. Based on the review, I recommend that these WIP items, particularly the kernel selection mechanism, be completed and further reviewed before merging.

@yuan-luo yuan-luo force-pushed the moe_pre_reorder_cuda branch 2 times, most recently from a617da5 to 9a457d2 on May 28, 2025 11:13
@yuan-luo
Contributor Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new CUDA kernel for moe_pre_reorder, aiming to replace the existing Triton implementation for a 5x performance improvement. The changes involve adding the CUDA kernel, its C++ bindings, Python wrappers, and integrating it into the MoE layer.

Overall, the core logic for the CUDA kernel and its integration appears sound. The claimed performance improvement is significant and welcome. However, as a "Work In Progress" (WIP) PR, there are a few key areas to address before this can be merged:

Summary of Findings

  • Missing Unit Tests: The PR introduces a new CUDA kernel which is performance-critical. It's essential to add unit tests to verify its correctness against the existing Triton kernel, especially covering different input configurations and edge cases. This was noted as 'high' severity but not commented inline as it's a general project structure concern rather than a specific line of code.
  • Configuration Flag for Kernel Choice: A TODO comment mentions adding a configuration flag to switch between the Triton and CUDA kernels. This is important for benchmarking, debugging, and providing a fallback. This was commented on with 'medium' severity.
  • Documentation for New Components: The new Python wrapper and the CUDA kernel itself would benefit from docstrings/comments to improve maintainability and understanding. This was commented on with 'medium' severity.

Merge Readiness

This PR is a promising step towards improving MoE performance. However, given its WIP status and the critical need for unit tests for the new CUDA kernel, along with the planned configuration flag and necessary documentation, I recommend that these changes be addressed before merging. I am unable to approve the PR directly; further review and approval from other maintainers will be necessary once these points are resolved.

@yuan-luo yuan-luo force-pushed the moe_pre_reorder_cuda branch 5 times, most recently from d968538 to 2a6667b on May 29, 2025 02:28
@strgrb strgrb marked this pull request as ready for review May 29, 2025 02:30
@zhyncs zhyncs changed the title WIP: Add cuda kernel for moe pre reorder Add cuda kernel for moe pre reorder May 29, 2025
@yuan-luo yuan-luo force-pushed the moe_pre_reorder_cuda branch from 7341223 to 93c8fe6 on May 29, 2025 03:43
@yuan-luo yuan-luo changed the title Add cuda kernel for moe pre reorder WIP:Add cuda kernel for moe pre reorder May 29, 2025
@yuan-luo yuan-luo force-pushed the moe_pre_reorder_cuda branch 2 times, most recently from 1442c80 to 00d112b on May 30, 2025 16:14
@yuan-luo
Contributor Author

yuan-luo commented May 30, 2025

Finally, the kernel benchmark script also needs to be updated to utilize the Triton Benchmark tool to compare performance differences in various scenarios, and place the benchmark script in the benchmark folder of sgl-kernel.

As discussed, I'll do it in this PR.

With a large batch_size (4096) and a large hidden_size (4096/8192), the CUDA kernel shows better performance than the Triton kernel,
while for a small hidden_size the Triton kernel performs better.

hidden_size=8192
[root@aa2d6f61f1ae pre_reorder]# python bench_moe_ep_pre_reorder.py 
INFO 05-31 00:15:37 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:15:39 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:15:39 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:15:39 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:15:39 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:15:39 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:15:40 [__init__.py:243] Automatically detected platform cuda.
ep-moe-pre-reorder-performance:
   batch_size  CUDA Kernel  Triton Kernel
0        64.0    36.256000      37.439998
1       128.0    50.783999      52.255999
2       256.0    82.800001      84.063999
3       512.0   140.000001     136.800006
4       640.0   167.040005     168.288007
5       768.0   194.640011     203.232005
6      1024.0   253.632009     283.583999
7      2048.0   484.655976     533.248007
8      4096.0   946.655989    1073.887944

hidden_size=4096
[root@aa2d6f61f1ae pre_reorder]# python bench_moe_ep_pre_reorder.py 
INFO 05-31 00:21:43 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:21:45 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:21:45 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:21:45 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:21:45 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:21:45 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:21:45 [__init__.py:243] Automatically detected platform cuda.
ep-moe-pre-reorder-performance:
   batch_size  CUDA Kernel  Triton Kernel
0        64.0    30.208001      30.880000
1       128.0    34.976002      34.336001
2       256.0    48.287999      47.648001
3       512.0    76.895997      72.672002
4       640.0    91.631994      85.376002
5       768.0   101.760000      95.615998
6      1024.0   130.640000     126.368001
7      2048.0   241.375998     256.736010
8      4096.0   461.871982     510.208011

hidden_size=1024
[root@aa2d6f61f1ae pre_reorder]# python bench_moe_ep_pre_reorder.py 
INFO 05-31 00:22:42 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:22:44 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:22:44 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:22:44 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:22:44 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:22:44 [__init__.py:243] Automatically detected platform cuda.
INFO 05-31 00:22:44 [__init__.py:243] Automatically detected platform cuda.
ep-moe-pre-reorder-performance:
   batch_size  CUDA Kernel  Triton Kernel
0        64.0    24.256000      24.544001
1       128.0    26.272001      26.112000
2       256.0    30.208001      28.352000
3       512.0    37.120000      32.960001
4       640.0    42.864002      37.664000
5       768.0    45.600001      39.808001
6      1024.0    53.535998      45.088001
7      2048.0    86.943999      69.888003
8      4096.0   149.664000     117.472000

@yuan-luo yuan-luo changed the title Add cuda kernel for moe pre reorder [EP] Add cuda kernel for moe_ep_pre_reorder May 30, 2025
@yuan-luo yuan-luo force-pushed the moe_pre_reorder_cuda branch 2 times, most recently from 80db60b to 043903f on May 31, 2025 12:40
@yuan-luo yuan-luo force-pushed the moe_pre_reorder_cuda branch 2 times, most recently from 7c4bbaf to b7226c0 on May 31, 2025 14:15
Collaborator

@BBuf BBuf left a comment


LGTM!

int dst_idx = token_src2dst[k];
float* dst_ptr = gateup_input_ptr + dst_idx * hidden_size;

for (int i = tid; i < hidden_size; i += blockDim.x) {
Collaborator


Could you please modify this core loop to use vectorized loads/stores (ld/st) to increase the effective memory bandwidth? It is quite useful XD. Thanks!
refer to:

  1. https://github.com/sgl-project/sglang/blob/main/sgl-kernel/csrc/elementwise/activation.cu#L43
  2. https://github.com/flashinfer-ai/flashinfer/blob/main/include/flashinfer/activation.cuh#L41
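
For context, "vectorized ld/st" means issuing one 128-bit load/store for several floats at a time (e.g. via float4, or flashinfer's vec_t as in the links above) instead of one scalar per instruction. Below is a minimal, self-contained illustration of the idea; it is not the PR's code and assumes hidden_size is a multiple of 4 with 16-byte-aligned pointers.

__global__ void scaled_copy_vec4(
    const float* __restrict__ src,
    float* __restrict__ dst,
    float scale,
    int hidden_size) {
  // Reinterpret the row as float4 so each thread moves 16 bytes per iteration.
  const float4* src4 = reinterpret_cast<const float4*>(src);
  float4* dst4 = reinterpret_cast<float4*>(dst);
  for (int i = threadIdx.x; i < hidden_size / 4; i += blockDim.x) {
    float4 v = src4[i];  // one 128-bit load
    v.x *= scale;
    v.y *= scale;
    v.z *= scale;
    v.w *= scale;
    dst4[i] = v;  // one 128-bit store
  }
}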

Contributor Author


WIP.

Contributor Author

@yuan-luo yuan-luo Jun 1, 2025


Changed to this; build and test are in progress. Will push the commit once testing is done.

__global__ void ep_pre_reorder_cuda_kernel(
    const float* __restrict__ input_ptr,
    float* __restrict__ gateup_input_ptr,
    const int* __restrict__ src2dst_ptr,
    const int* __restrict__ topk_ids_ptr,
    const float* __restrict__ a1_scales_ptr,
    int start_expert_id,
    int end_expert_id,
    int topk,
    int hidden_size) {
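  // One CUDA block per token (token_idx = blockIdx.x); threads within the
  // block cooperate on the hidden dimension via a strided, vectorized loop.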
  int token_idx = blockIdx.x;
  int tid = threadIdx.x;

  const float* src_ptr = input_ptr + int64_t(token_idx) * hidden_size;
  const int* token_src2dst = src2dst_ptr + token_idx * topk;
  const int* token_topk_ids = topk_ids_ptr + token_idx * topk;

  for (int k = 0; k < topk; ++k) {
    int expert_id = token_topk_ids[k];
    if (expert_id < start_expert_id || expert_id > end_expert_id) continue;

    float scale = 1.0f;
    if (a1_scales_ptr != nullptr) {
      scale = 1.0f / a1_scales_ptr[expert_id - start_expert_id];
    }

    int dst_idx = token_src2dst[k];
    float* dst_ptr = gateup_input_ptr + int64_t(dst_idx) * hidden_size;

    constexpr uint32_t vec_size = 16 / sizeof(float);
    using vec_t = flashinfer::vec_t<float, vec_size>;
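    // 16-byte (4 x float) vectorized loads/stores; note that the loop below
    // only covers hidden_size rounded down to a multiple of vec_size.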

    for (int idx = tid; idx < hidden_size / vec_size; idx += blockDim.x) {
      vec_t input_vec, output_vec;
      input_vec.cast_load(src_ptr + idx * vec_size);
#pragma unroll
      for (uint32_t i = 0; i < vec_size; ++i) {
        float val = static_cast<float>(input_vec[i]);
        output_vec[i] = val * scale;
      }
      output_vec.cast_store(dst_ptr + idx * vec_size);
    }
  }
}

Contributor Author


The new kernel achieves better performance:

[root@aa2d6f61f1ae pre_reorder]# python bench_moe_ep_pre_reorder.py
INFO 06-01 18:28:10 [__init__.py:243] Automatically detected platform cuda.
INFO 06-01 18:28:11 [__init__.py:243] Automatically detected platform cuda.
INFO 06-01 18:28:11 [__init__.py:243] Automatically detected platform cuda.
INFO 06-01 18:28:11 [__init__.py:243] Automatically detected platform cuda.
INFO 06-01 18:28:11 [__init__.py:243] Automatically detected platform cuda.
INFO 06-01 18:28:11 [__init__.py:243] Automatically detected platform cuda.
INFO 06-01 18:28:12 [__init__.py:243] Automatically detected platform cuda.
/opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:2059: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. 
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(
ep-moe-pre-reorder-performance:
   batch_size  CUDA Kernel  Triton Kernel
0        64.0    28.928000      30.624000
1       128.0    33.376001      34.704000
2       256.0    46.080001      48.448000
3       512.0    70.560001      73.311999
4       640.0    82.719997      86.015999
5       768.0    93.503997      96.415997
6      1024.0   118.351996     127.263993
7      2048.0   217.680007     257.615983
8      4096.0   419.856012     510.464013

Contributor Author

@yuan-luo yuan-luo Jun 1, 2025


Since @ch-wan added use_per_token_if_dynamic to the pre_reorder Triton kernel in #6782, I need to update the code accordingly. Please wait for the internal build and test; after that, I'll push the new code.

Contributor Author


@Alcanderian @ch-wan Addressed all the comments and updated the CUDA kernel based on the latest signature. Please help to review.

@yuan-luo yuan-luo force-pushed the moe_pre_reorder_cuda branch 2 times, most recently from 3a999e0 to c8ed76d on June 1, 2025 11:26
@Alcanderian
Collaborator

🚀 LGTM

@yuan-luo yuan-luo force-pushed the moe_pre_reorder_cuda branch from c8ed76d to f6aeaf1 on June 2, 2025 01:20
@zhyncs zhyncs merged commit 55444ed into sgl-project:main Jun 2, 2025
42 of 44 checks passed
Layssy pushed a commit to Layssy/sglang-iaas that referenced this pull request Jun 9, 2025
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
xwu-intel pushed a commit to xwu-intel/sglang that referenced this pull request Jun 17, 2025
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
walker-ai pushed a commit to walker-ai/sglang that referenced this pull request Jul 8, 2025
Merge branch 'sgl_20250610_sync_tag047 of git@code.alipay.com:Theta/SGLang.git into main

https://code.alipay.com/Theta/SGLang/pull_requests/52


Reviewed-by: 剑川 <jianchuan.gys@antgroup.com>


* [Bugfix] Fix slice operation when chunk size mismatch (sgl-project#6697)
* [Bugfix] Fix ChatCompletion endpoint of mini_lb when stream is set (sgl-project#6703)
* [CI] Fix setup of disaggregation with different tp (sgl-project#6706)
* [PD] Remove Unnecessary Exception Handling for FastQueue.get() (sgl-project#6712)
* Fuse routed_scaling_factor in DeepSeek (sgl-project#6710)
* Overlap two kernels in DeepSeek with communication (sgl-project#6711)
* Minor refactor two-batch overlap (sgl-project#6682)
* Speed up when having padding tokens two-batch overlap (sgl-project#6668)
* [Feature] Support Flashinfer fp8 blockwise GEMM kernel on Blackwell (sgl-project#6479)
* Fix LoRA bench (sgl-project#6719)
* temp
* Fix PP for Qwen3 MoE (sgl-project#6709)
* [feat] triton kernel for get_last_loc (sgl-project#6676)
* [fix] more mem for draft_extend cuda_graph (sgl-project#6726)
* [PD] bug fix:  Update status if nixl receiver send a a dummy req. (sgl-project#6720)
* Tune memory arguments on B200 (sgl-project#6718)
* Add DeepSeek-R1-0528 function call chat template (sgl-project#6725)
* refactor(tool call): Fix BaseFormatDetector tool_index issue and refactor `parse_streaming_increment` (sgl-project#6715)
* Add draft extend CUDA graph for Triton backend (sgl-project#6705)
* refactor apply_w8a8_block_fp8_linear in fp (sgl-project#6545)
* [PD] Support completion endpoint (sgl-project#6729)
* PD Rust LB (PO2) (sgl-project#6437)
* Super tiny enable sole usage of expert distribution metrics and update doc (sgl-project#6680)
* Support picking variants of EPLB algorithms (sgl-project#6728)
* Support tuning DeepEP configs (sgl-project#6742)
* [test] add ut and bm for get_last_loc (sgl-project#6746)
* Fix mem_fraction_static for AMD CI (sgl-project#6748)
* [fix][RL] Fix DeepSeekV3ForCausalLM.post_load_weights for multiple update weight (sgl-project#6265)
* Improve EPLB logical to physical dispatch map (sgl-project#6727)
* Update DeepSeek-R1-0528 function call chat template (sgl-project#6765)
* [PD] Optimize time out logic and add env var doc for mooncake (sgl-project#6761)
* Fix aiohttp 'Chunk too big' in bench_serving (sgl-project#6737)
* Support sliding window in triton backend (sgl-project#6509)
* Fix shared experts fusion error (sgl-project#6289)
* Fix one bug in the grouped-gemm triton kernel (sgl-project#6772)
* update llama4 chat template and pythonic parser (sgl-project#6679)
* feat(tool call): Enhance Llama32Detector for improved JSON parsing in non-stream (sgl-project#6784)
* Support token-level quantization for EP MoE (sgl-project#6782)
* Temporarily lower mmlu threshold for triton sliding window backend (sgl-project#6785)
* ci: relax test_function_call_required (sgl-project#6786)
* Add intel_amx backend for Radix Attention for CPU (sgl-project#6408)
* Fix incorrect LoRA weight loading for fused gate_up_proj (sgl-project#6734)
* fix(PD-disaggregation): Can not get local ip (sgl-project#6792)
* [FIX] mmmu bench serving result display error (sgl-project#6525) (sgl-project#6791)
* Bump torch to 2.7.0 (sgl-project#6788)
* chore: bump sgl-kernel v0.1.5 (sgl-project#6794)
* Improve profiler and integrate profiler in bench_one_batch_server (sgl-project#6787)
* chore: upgrade sgl-kernel v0.1.5 (sgl-project#6795)
* [Minor] Always append newline after image token when parsing chat message (sgl-project#6797)
* Update CI tests for Llama4 models (sgl-project#6421)
* [Feat] Enable PDL automatically on Hopper architecture (sgl-project#5981)
* chore: update blackwell docker (sgl-project#6800)
* misc: cache is_hopper_arch (sgl-project#6799)
* Remove contiguous before Flashinfer groupwise fp8 gemm (sgl-project#6804)
* Correctly abort the failed grammar requests & Improve the handling of abort (sgl-project#6803)
* [EP] Add cuda kernel for moe_ep_pre_reorder (sgl-project#6699)
* Add draft extend CUDA graph for flashinfer backend  (sgl-project#6805)
* Refactor CustomOp to avoid confusing bugs (sgl-project#5382)
* Tiny log prefill time (sgl-project#6780)
* Tiny fix EPLB assertion about rebalancing period and recorder window size (sgl-project#6813)
* Add simple utility to dump tensors for debugging (sgl-project#6815)
* Fix profiles do not have consistent names (sgl-project#6811)
* Speed up rebalancing when using non-static dispatch algorithms (sgl-project#6812)
* [1/2] Add Kernel support for Cutlass based Fused FP4 MoE (sgl-project#6093)
* [Router] Fix k8s Service Discovery (sgl-project#6766)
* Add CPU optimized kernels for topk and rope fusions  (sgl-project#6456)
* fix new_page_count_next_decode (sgl-project#6671)
* Fix wrong weight reference in dynamic EPLB (sgl-project#6818)
* Minor add metrics to expert location updater (sgl-project#6816)
* [Refactor] Rename `n_share_experts_fusion` as `num_fused_shared_experts` (sgl-project#6735)
* [FEAT] Add transformers backend support  (sgl-project#5929)
* [fix] recover auto-dispatch for rmsnorm and rope (sgl-project#6745)
* fix ep_moe_reorder kernel bugs (sgl-project#6858)
* [Refactor] Multimodal data processing for VLM (sgl-project#6659)
* Decoder-only Scoring API (sgl-project#6460)
* feat: add dp-rank to KV events (sgl-project#6852)
* Set `num_fused_shared_experts` as `num_shared_experts` when shared_experts fusion is not disabled (sgl-project#6736)
* Fix one missing arg in DeepEP (sgl-project#6878)
* Support LoRA in TestOpenAIVisionServer and fix fused kv_proj loading bug. (sgl-project#6861)
* support 1 shot allreduce  in 1-node and 2-node using mscclpp (sgl-project#6277)
* Fix Qwen3MoE missing token padding optimization (sgl-project#6820)
* Tiny update error hints (sgl-project#6846)
* Support layerwise rebalancing experts (sgl-project#6851)
* Tiny allow profiler API to auto create directory (sgl-project#6865)
* Support Blackwell DeepEP docker images (sgl-project#6868)
* [EP] Add cuda kernel for moe_ep_post_reorder (sgl-project#6837)
* [theta]merge 0605
* oai: fix openAI client error with single request via batch api (sgl-project#6170)
* [PD] Fix potential perf spike caused by tracker gc and optimize doc (sgl-project#6764)
* Use deepgemm instead of triton for fused_qkv_a_proj_with_mqa (sgl-project#6890)
* [CUTLASS-FP4-MOE]  Introduce CutlassMoEParams class for easy initialization of Cutlass Grouped Gems Metadata (sgl-project#6887)
* bugfix(OAI): Fix image_data processing for jinja chat templates (sgl-project#6877)
* [CPU] enable CI for PRs, add Dockerfile and auto build task (sgl-project#6458)
* AITER backend extension and workload optimizations (sgl-project#6838)
* [theta]merge
* [theta]merge
* [Feature] Support Flashinfer fmha on Blackwell (sgl-project#6930)
* Fix a bug in abort & Improve docstrings for abort (sgl-project#6931)
* Tiny support customize DeepEP max dispatch tokens per rank (sgl-project#6934)
* Sync the changes on cuda graph runners (sgl-project#6932)
* [PD] Optimize transfer queue forward logic for dummy rank (sgl-project#6922)
* [Refactor] image data process in bench_serving (sgl-project#6879)
* [fix] logical_to_all_physical_map index 256 is out of bounds in EP parallel. (sgl-project#6767)
* Add triton fused moe kernel config for E=257 on B200 (sgl-project#6939)
* [sgl-kernel] update deepgemm (sgl-project#6942)
* chore: bump sgl-kernel v0.1.6 (sgl-project#6943)
* Minor compile fused topk (sgl-project#6944)
* [Bugfix] pipeline parallelism and Eagle Qwen2 (sgl-project#6910)
* Tiny re-introduce profile id logging (sgl-project#6912)
* Add triton version as a fused_moe_triton config search key to avoid performace decrease in different Triton version (sgl-project#5955)
* reduce torch.zeros overhead in moe align block size kernel (sgl-project#6369)
* chore: upgrade sgl-kernel v0.1.6 (sgl-project#6945)
* add fbgemm moe grouped gemm kernel benchmark (sgl-project#6924)
* [Docker] Add docker file for SGL Router (sgl-project#6915)
* Disabling mixed chunked prefill when eagle is enabled (sgl-project#6874)
* Add canary for EPLB rebalancing (sgl-project#6895)
* Refactor global_server_args_dict (sgl-project#6866)
* Fuse routed scaling factor in topk_reduce kernel (sgl-project#6220)
* Update server timeout time in AMD CI. (sgl-project#6953)
* [misc] add is_cpu() (sgl-project#6950)
* Add H20 fused MoE kernel tuning configs for DeepSeek-R1/V3 (sgl-project#6885)
* Add a CUDA kernel for fusing mapping and weighted sum for MoE. (sgl-project#6916)
* chore: bump sgl-kernel v0.1.6.post1 (sgl-project#6955)
* chore: upgrade sgl-kernel v0.1.6.post1 (sgl-project#6957)
* [DeepseekR1-FP4] Add Support for nvidia/DeepSeekR1-FP4 model (sgl-project#6853)
* Revert "Fuse routed scaling factor in topk_reduce kernel (sgl-project#6220)" (sgl-project#6968)
* [AMD] Add more tests to per-commit-amd (sgl-project#6926)
* chore: bump sgl-kernel v0.1.7 (sgl-project#6963)
* Slightly improve the sampler to skip unnecessary steps (sgl-project#6956)
* rebase h20 fused_moe config (sgl-project#6966)
* Fix CI and triton moe Configs (sgl-project#6974)
* Remove unnecessary kernels of num_token_non_padded (sgl-project#6965)
* Extend cuda graph capture bs for B200 (sgl-project#6937)
* Fuse routed scaling factor in deepseek (sgl-project#6970)
* Sync cuda graph runners (sgl-project#6976)
* Fix draft extend ut stability with flush cache (sgl-project#6979)
* Fix triton sliding window test case (sgl-project#6981)
* Fix expert distribution dumping causes OOM (sgl-project#6967)
* Minor remove one kernel for DeepSeek (sgl-project#6977)
* [perf][sgl-kernel] extend cutlass_mla_decode to support num_head < 128 (sgl-project#6929)
* Enable more unit tests for AMD CI. (sgl-project#6983)
* Use torch.compile to fuse flash attention decode metadata preparation (sgl-project#6973)
* Eliminate stream sync to speed up LoRA batch init  (sgl-project#6960)
* support qwen3 emebedding (sgl-project#6990)
* Fix torch profiler bugs for bench_offline_throughput.py (sgl-project#6557)
* chore: upgrade flashinfer v0.2.6.post1 jit (sgl-project#6958)
* cleanup tmp dir (sgl-project#7007)
* chore: update pr test xeon (sgl-project#7008)
* Fix cutlass MLA gets almost zero accuracy (sgl-project#6998)
* Update amd nightly models CI. (sgl-project#6992)
* feat: add direct routing strategy to DP worker (sgl-project#6884)
* Fallback to lower triton version for unfound fused moe configs (sgl-project#7013)
* Fix torchvision version for Blackwell (sgl-project#7015)
* Simplify prepare_extend_after_decode (sgl-project#6987)
* Migrate to assertEqual (sgl-project#6741)
* Fix torch version in blackwell dockerfile (sgl-project#7017)
* chore: update pr test xeon (sgl-project#7018)
* Update default settings for blackwell (sgl-project#7023)
* Support both approximate and exact expert distribution collection (sgl-project#6964)
* Add decode req pool (sgl-project#6980)
* [theta]merge 0610
* [theta]merge 0610
* [CI] Add CI workflow for sgl-router docker build (sgl-project#7027)
* Fix fused_moe triton configs (sgl-project#7029)
* CPU: map changes from developing branch in sgl-kernel (sgl-project#6833)
* chore: bump v0.4.7 (sgl-project#7038)
* Update README.md (sgl-project#7040)