CPU: map changes from developing branch in sgl-kernel #6833
Conversation
Hello @yanbing-j, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello! Gemini here, providing a summary of this pull request. The primary goal of this PR, as stated in the description, is to optimize the performance of the decode operation by incorporating algorithms inspired by FlashMLA. The changes focus on the CPU kernel implementations for attention, particularly for decode and extend phases. This involves introducing VNNI packing for KV caches and restructuring the attention kernels to leverage these packed formats and improve data locality and computation efficiency. The PR also includes some minor optimizations to utility functions and refactors parts of the decode attention logic into smaller, more focused functions.
Highlights

- Performance Optimization: Implements performance optimizations for the decode and extend attention kernels on CPU by adopting techniques from FlashMLA, specifically VNNI packing.
- VNNI Packing: Adds new utility functions (`pack_vnni_Nx32`, `pack_vnni_Kx32`) and updates `pack_vnni` and `pack_vnni2` to convert KV cache data into the VNNI format, which benefits vectorized and matrix-multiplication operations on modern CPUs, such as those with AVX512 VNNI support (a scalar reference sketch follows this list).
- MLA Kernel: Introduces a dedicated kernel (`decode_attention_mla_kernel_impl`) optimized for the Multi-head Latent Attention (MLA) case (where `num_heads_kv == 1`), using the new VNNI packing and a different attention calculation flow.
- Code Refactoring: Refactors the main decode attention logic by extracting common steps, such as KV buffer updates (`decode_set_kv_buffer`) and accumulation across KV splits (`decode_accumulate_kv_splits`), into separate functions, improving modularity.
- AVX512 Utilities: Adds AVX512 transpose utility functions (`transpose_16x16_32bit`, `transpose_2x32_16bit`) in `vec.h` to support the VNNI packing implementations.
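To make the packing concrete, here is a minimal scalar sketch of bf16 VNNI repacking; the function name, shapes, and the raw-bits `bf16` alias are illustrative assumptions, not the PR's actual `pack_vnni` signature:

```cpp
#include <cstdint>

// Illustrative scalar reference, not the PR's implementation: repack a
// row-major [K, N] bf16 matrix into [K/2, N, 2] so that each pair of K-rows
// is interleaved. This is the B-operand layout expected by AVX512-BF16
// dot-product instructions such as _mm512_dpbf16_ps. Assumes K is even.
using bf16 = uint16_t;  // raw bf16 bits; real kernels would use at::BFloat16

void pack_vnni_reference(bf16* dst, const bf16* src, int K, int N) {
  for (int k = 0; k < K; k += 2) {
    for (int n = 0; n < N; ++n) {
      dst[(k / 2) * N * 2 + n * 2 + 0] = src[(k + 0) * N + n];
      dst[(k / 2) * N * 2 + n * 2 + 1] = src[(k + 1) * N + n];
    }
  }
}
```

With this layout, a single 32-bit load fetches two consecutive K-elements of the same column, which is exactly what the fused multiply-add consumes per lane.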
Changelog

- sgl-kernel/csrc/cpu/decode.cpp
  - Added include for `gemm.h`.
  - Implemented AVX512-specific VNNI packing functions (`pack_vnni_Nx32`, `pack_vnni`).
  - Refactored `fill_stub` and `copy_stub` with `constexpr int kVecSize` and `#pragma GCC unroll 4` (see the sketch after this changelog).
  - Added a new `copy_stub` template for copying from `float` to `scalar_t` with a block size.
  - Extracted KV buffer update logic into `decode_set_kv_buffer`.
  - Extracted accumulation across KV splits into `decode_accumulate_kv_splits`.
  - Introduced `decode_attention_mla_kernel_impl` for MLA optimization.
  - Modified `decode_attention_kernel_impl` (MHA) to use the new refactored functions.
  - Modified `decode_attention_grouped_kernel_impl` (GQA/MQA) to use the new refactored functions and adjusted logit cap loop bounds.
  - Updated `decode_attention_cpu` to call `decode_set_kv_buffer`, check for MLA, and dispatch to the appropriate kernel (`decode_attention_mla_kernel_impl` or `decode_attention_grouped_kernel_impl`).
  - Allocated a temporary buffer for the MLA kernel.
- sgl-kernel/csrc/cpu/extend.cpp
  - Implemented AVX512-specific VNNI packing functions (`pack_vnni_Nx32`, `pack_vnni_Kx32`).
  - Updated `pack_vnni` and `pack_vnni2` to use the new AVX512 packing functions when available.
  - Refactored `fill_stub` and `copy_stub` with `constexpr int kVecSize` and `#pragma GCC unroll 4`.
- sgl-kernel/csrc/cpu/vec.h
  - Added AVX512 transpose utility functions (`transpose_16x16_32bit`, `transpose_2x32_16bit`).
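As a rough illustration of the `fill_stub`/`copy_stub` refactor listed above, a vectorized fill in the style of ATen's CPU vectorization typically looks like the following; this is a sketch of the pattern under assumed signatures, not the PR's exact code:

```cpp
#include <ATen/cpu/vec/vec.h>

// Pattern sketch: write kVecSize elements per vector store and ask GCC to
// unroll the main loop 4x; a scalar loop handles the tail.
template <typename scalar_t>
inline void fill_stub(scalar_t* __restrict__ out, scalar_t val, int64_t size) {
  using Vec = at::vec::Vectorized<scalar_t>;
  constexpr int kVecSize = Vec::size();
  const Vec val_vec(val);
  int64_t d = 0;
#pragma GCC unroll 4
  for (; d <= size - kVecSize; d += kVecSize) {
    val_vec.store(out + d);
  }
  for (; d < size; ++d) {  // scalar tail
    out[d] = val;
  }
}
```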
Code Review
This pull request introduces significant optimizations for decode performance, particularly targeting AVX512-capable CPUs, by mapping algorithms inspired by FlashMLA. The changes involve substantial refactoring of the attention kernels, the introduction of VNNI packing for keys and values, and a new Multi-head Latent Attention (MLA) kernel.
The refactoring separates KV cache updates, attention computation, and cross-KV-split accumulation, which generally improves modularity. The use of AVX512 intrinsics and manual loop unrolling in the `fill_stub` and `copy_stub` functions is good performance practice.
However, a few areas need attention: primarily a potential correctness issue related to `logit_cap` application in the GQA/MQA kernel, plus some assumptions that should be clarified or asserted for robustness. It would also be beneficial to see benchmark results quantifying the performance improvements, as indicated in the PR checklist.
Summary of Findings

- Potential correctness issue with `logit_cap` in the GQA/MQA kernel: In sgl-kernel/csrc/cpu/decode.cpp, the application of `logit_cap` in `decode_attention_grouped_kernel_impl` (line 1305) processes `BLOCK_H * BLOCK_N` elements, while the valid computed scores are `BLOCK_H * n_size`. If `n_size < BLOCK_N` and `logit_cap != 0`, this could produce incorrect results by processing stale data (see the sketch after this list). Flagged as high severity.
- Assumptions on head-size divisibility for AVX512 paths: Several AVX512-specific packing functions (e.g., `pack_vnni` in decode.cpp and extend.cpp) assume that head sizes (`K`, `Kv`, or `N` in `pack_vnni2`) are multiples of 32, with comments like "no remainder". If these assumptions are violated, the result is incorrect behavior; adding assertions or ensuring upstream checks would improve robustness. Flagged as medium severity.
- Clarity of comments: A comment in decode.cpp (line 39, `pack_vnni_Nx32`), "skip 64 elems for deepseek", is unclear and would benefit from more explanation. Flagged as medium severity.
- MLA kernel `logit_cap` limitation: The new MLA kernel in decode.cpp (line 1042) explicitly checks that `logit_cap == 0.f`. This limitation should be noted, as it might affect applicability where `logit_cap` is needed. Flagged as medium severity.
- Missing benchmark results: The PR aims to optimize performance, but the checklist item for providing benchmark results is unchecked. These results are important for validating the impact of the changes. (Not commented in-line, as this is a general PR concern.)
Merge Readiness

This pull request introduces substantial and complex optimizations for CPU decoding performance. While the overall direction and many of the low-level optimizations (like VNNI packing and loop unrolling) are promising, there is a high-severity potential correctness issue regarding `logit_cap` application in the GQA/MQA kernel that must be addressed before merging. Additionally, several medium-severity concerns regarding assumptions about data dimensions and comment clarity should be reviewed.
Given the nature and extent of these changes, thorough testing is crucial, especially for edge cases and configurations where `logit_cap != 0`. It would also be very helpful to see benchmark results demonstrating the performance gains.
I am unable to approve pull requests. Based on this review, I recommend that the identified high-severity issue be resolved, and the medium-severity points be considered, before this PR is merged. Further review by others would also be beneficial.
I think we can add all the diffs from our developing branch to this one.
@yanbing-j, to simplify upstreaming efforts, put all the diffs from https://github.com/mingfeima/sglang/tree/cpu_opt_ww11 in sgl-kernel into this one.
Force-pushed from ba5dbda to 4edb7b0.
sgl-kernel/csrc/cpu/decode.cpp (outdated)

```cpp
// check if we have MLA here
void* k_buffer_data = k_buffer.data_ptr();
void* v_buffer_data = v_buffer.data_ptr();
const bool is_mla = (k_buffer_data == v_buffer_data) && (num_heads_kv == 1);
```

Let's put a tighter restriction here:

```diff
-const bool is_mla = (k_buffer_data == v_buffer_data) && (num_heads_kv == 1);
+const bool is_mla = (k_buffer_data == v_buffer_data) && (num_heads_kv == 1) && (head_size == head_size_v + 64);
```
fix gemini code review
Thank you for the suggestion, @mingfeima. Adding the condition `(head_size == head_size_v + 64)` provides a more specific check for the MLA case: in DeepSeek-style MLA, `head_size` is `kv_lora_rank + qk_rope_head_dim` while `head_size_v` is `kv_lora_rank`, so the difference is exactly the rope head dimension of 64. This tighter restriction helps ensure the correct kernel is selected for this head-size configuration, which aligns with the PR's goal of optimizing decode performance by mapping algorithms like FlashMLA. This seems like a valuable refinement.
Almost done, just some minor issues to address.
sgl-kernel/csrc/cpu/qkv_proj.cpp (outdated)

```cpp
  return std::make_tuple(q_input, k_input, v_input);
}

std::tuple<at::Tensor, at::Tensor, at::Tensor> fused_qkv_proj_with_rope(
```
Not a very good naming...

```diff
-std::tuple<at::Tensor, at::Tensor, at::Tensor> fused_qkv_proj_with_rope(
+std::tuple<at::Tensor, at::Tensor, at::Tensor> qkv_proj_with_rope_fused_weight(
```
sgl-kernel/csrc/cpu/qkv_proj.cpp
Outdated
CHECK_EQ(qkv_a_proj_weight.size(0), q_lora_rank + kv_lora_rank + qk_rope_head_dim); | ||
CHECK_EQ(qkv_a_proj_weight.size(1), get_row_size(hidden_size, use_int8_w8a8)); | ||
|
||
at::Tensor q_a_proj_weight = qkv_a_proj_weight.narrow(0, 0, q_lora_rank).contiguous(); |
Use `torch.split`. And since we are splitting dim 0, you don't need `.contiguous()`: the result tensors should be contiguous anyway, but you may want to double-check that.
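A minimal sketch of the suggested change, using ATen's `split_with_sizes` in the context of the snippet above; the two trailing output names are illustrative, not taken from the PR:

```cpp
// Splitting dim 0 of a contiguous [rows, cols] tensor yields views with
// strides [cols, 1], which are already contiguous, so no .contiguous() copy
// is needed.
auto parts = qkv_a_proj_weight.split_with_sizes(
    {q_lora_rank, kv_lora_rank, qk_rope_head_dim}, /*dim=*/0);
at::Tensor q_a_proj_weight = parts[0];
at::Tensor kv_a_proj_weight = parts[1];        // illustrative name
at::Tensor k_rope_proj_weight = parts[2];      // illustrative name
TORCH_CHECK(q_a_proj_weight.is_contiguous());  // cheap sanity check
```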
```cpp
// declaration alongside qkv_proj_with_rope(...)
    bool is_vnni,
    std::optional<std::vector<int64_t>> block_size);

std::tuple<at::Tensor, at::Tensor, at::Tensor> fused_qkv_proj_with_rope(
```
change the func name.
test/srt/cpu/test_norm.py (outdated)

```diff
@@ -37,7 +37,8 @@ def _forward_native(

 def _norm_test(self, m, n, dtype):
-    x = torch.randn([m, n], dtype=dtype)
+    x = torch.randn([m, 2 * n], dtype=dtype)
+    x = x[..., :n]  # Make x non-contiguous
```
Write something like the following in utils.py (slicing the last dimension of a wider tensor yields a non-contiguous view):

```python
def make_non_contiguous(x: torch.Tensor) -> torch.Tensor:
    return torch.cat([x, x], dim=-1)[..., : x.size(-1)]
```
Force-pushed from dfbaf22 to f0e0030.
Force-pushed from 4ea8722 to 2eb21aa.
decode.cpp: update the parallel scheme for DeepSeek MLA; remove the logit cap for the MLA kernel; use brgemm for MLA and vectorized packing into VNNI format.
The bug is that `Btmp1` is allocated with `at::empty`, so it may contain NaN, and `Btmp1` is padded when passed to brgemm. We therefore need to initialize `Btmp1` to an arbitrary value once per thread.
1. brgemm impl: move brgemm out of the inner loop
2. avx512 impl: move scaling out of the inner loop
3. fp8_scaled_mm: change BLOCK_M to 128 to reduce access to B
4. cvt_fp8_bf16: ignore NaN handling

Before:

```
Comparing: True max_diff = 0.01562, asum = 10.562, bsum = 10.375
gemm_bf16(native): 89.812 us, gemm_fp8(opt): 124.585 us
Comparing: True max_diff = 0.01562, asum = -32.500, bsum = -32.750
gemm_bf16(native): 83.805 us, gemm_fp8(opt): 125.586 us
Comparing: True max_diff = 0.01562, asum = -35.750, bsum = -36.500
gemm_bf16(native): 89.579 us, gemm_fp8(opt): 151.284 us
Comparing: True max_diff = 0.03125, asum = 4512.000, bsum = 4512.000
gemm_bf16(native): 262.104 us, gemm_fp8(opt): 615.823 us
```

After:

```
Comparing: True max_diff = 0.01562, asum = 10.562, bsum = 10.375
gemm_bf16(native): 86.403 us, gemm_fp8(opt): 95.792 us
Comparing: True max_diff = 0.01562, asum = -32.500, bsum = -32.750
gemm_bf16(native): 84.178 us, gemm_fp8(opt): 100.573 us
Comparing: True max_diff = 0.01562, asum = -35.750, bsum = -36.500
gemm_bf16(native): 90.365 us, gemm_fp8(opt): 114.198 us
Comparing: True max_diff = 0.03125, asum = 4512.000, bsum = 4512.000
gemm_bf16(native): 267.053 us, gemm_fp8(opt): 404.231 us
```
```
gemm_bf16(native): 255.233 us, gemm_fp8(opt): 140.526 us, gemm_int8(opt): 99.790 us, gemm_bf16(opt): 188.522 us
gemm_bf16(native): 254.283 us, gemm_fp8(opt): 125.441 us, gemm_int8(opt): 99.161 us, gemm_bf16(opt): 189.518 us
```
Co-authored-by: mingfeima <mingfei.ma@intel.com>
Co-authored-by: mingfeima <mingfei.ma@intel.com>
Merge branch 'sgl_20250610_sync_tag047' of git@code.alipay.com:Theta/SGLang.git into main (https://code.alipay.com/Theta/SGLang/pull_requests/52)
Reviewed-by: 剑川 <jianchuan.gys@antgroup.com>

* [Bugfix] Fix slice operation when chunk size mismatch (sgl-project#6697)
* [Bugfix] Fix ChatCompletion endpoint of mini_lb when stream is set (sgl-project#6703)
* [CI] Fix setup of disaggregation with different tp (sgl-project#6706)
* [PD] Remove Unnecessary Exception Handling for FastQueue.get() (sgl-project#6712)
* Fuse routed_scaling_factor in DeepSeek (sgl-project#6710)
* Overlap two kernels in DeepSeek with communication (sgl-project#6711)
* Minor refactor two-batch overlap (sgl-project#6682)
* Speed up when having padding tokens two-batch overlap (sgl-project#6668)
* [Feature] Support Flashinfer fp8 blockwise GEMM kernel on Blackwell (sgl-project#6479)
* Fix LoRA bench (sgl-project#6719)
* temp
* Fix PP for Qwen3 MoE (sgl-project#6709)
* [feat] triton kernel for get_last_loc (sgl-project#6676)
* [fix] more mem for draft_extend cuda_graph (sgl-project#6726)
* [PD] bug fix: Update status if nixl receiver send a a dummy req. (sgl-project#6720)
* Tune memory arguments on B200 (sgl-project#6718)
* Add DeepSeek-R1-0528 function call chat template (sgl-project#6725)
* refactor(tool call): Fix BaseFormatDetector tool_index issue and refactor `parse_streaming_increment` (sgl-project#6715)
* Add draft extend CUDA graph for Triton backend (sgl-project#6705)
* refactor apply_w8a8_block_fp8_linear in fp (sgl-project#6545)
* [PD] Support completion endpoint (sgl-project#6729)
* PD Rust LB (PO2) (sgl-project#6437)
* Super tiny enable sole usage of expert distribution metrics and update doc (sgl-project#6680)
* Support picking variants of EPLB algorithms (sgl-project#6728)
* Support tuning DeepEP configs (sgl-project#6742)
* [test] add ut and bm for get_last_loc (sgl-project#6746)
* Fix mem_fraction_static for AMD CI (sgl-project#6748)
* [fix][RL] Fix DeepSeekV3ForCausalLM.post_load_weights for multiple update weight (sgl-project#6265)
* Improve EPLB logical to physical dispatch map (sgl-project#6727)
* Update DeepSeek-R1-0528 function call chat template (sgl-project#6765)
* [PD] Optimize time out logic and add env var doc for mooncake (sgl-project#6761)
* Fix aiohttp 'Chunk too big' in bench_serving (sgl-project#6737)
* Support sliding window in triton backend (sgl-project#6509)
* Fix shared experts fusion error (sgl-project#6289)
* Fix one bug in the grouped-gemm triton kernel (sgl-project#6772)
* update llama4 chat template and pythonic parser (sgl-project#6679)
* feat(tool call): Enhance Llama32Detector for improved JSON parsing in non-stream (sgl-project#6784)
* Support token-level quantization for EP MoE (sgl-project#6782)
* Temporarily lower mmlu threshold for triton sliding window backend (sgl-project#6785)
* ci: relax test_function_call_required (sgl-project#6786)
* Add intel_amx backend for Radix Attention for CPU (sgl-project#6408)
* Fix incorrect LoRA weight loading for fused gate_up_proj (sgl-project#6734)
* fix(PD-disaggregation): Can not get local ip (sgl-project#6792)
* [FIX] mmmu bench serving result display error (sgl-project#6525) (sgl-project#6791)
* Bump torch to 2.7.0 (sgl-project#6788)
* chore: bump sgl-kernel v0.1.5 (sgl-project#6794)
* Improve profiler and integrate profiler in bench_one_batch_server (sgl-project#6787)
* chore: upgrade sgl-kernel v0.1.5 (sgl-project#6795)
* [Minor] Always append newline after image token when parsing chat message (sgl-project#6797)
* Update CI tests for Llama4 models (sgl-project#6421)
* [Feat] Enable PDL automatically on Hopper architecture (sgl-project#5981)
* chore: update blackwell docker (sgl-project#6800)
* misc: cache is_hopper_arch (sgl-project#6799)
* Remove contiguous before Flashinfer groupwise fp8 gemm (sgl-project#6804)
* Correctly abort the failed grammar requests & Improve the handling of abort (sgl-project#6803)
* [EP] Add cuda kernel for moe_ep_pre_reorder (sgl-project#6699)
* Add draft extend CUDA graph for flashinfer backend (sgl-project#6805)
* Refactor CustomOp to avoid confusing bugs (sgl-project#5382)
* Tiny log prefill time (sgl-project#6780)
* Tiny fix EPLB assertion about rebalancing period and recorder window size (sgl-project#6813)
* Add simple utility to dump tensors for debugging (sgl-project#6815)
* Fix profiles do not have consistent names (sgl-project#6811)
* Speed up rebalancing when using non-static dispatch algorithms (sgl-project#6812)
* [1/2] Add Kernel support for Cutlass based Fused FP4 MoE (sgl-project#6093)
* [Router] Fix k8s Service Discovery (sgl-project#6766)
* Add CPU optimized kernels for topk and rope fusions (sgl-project#6456)
* fix new_page_count_next_decode (sgl-project#6671)
* Fix wrong weight reference in dynamic EPLB (sgl-project#6818)
* Minor add metrics to expert location updater (sgl-project#6816)
* [Refactor] Rename `n_share_experts_fusion` as `num_fused_shared_experts` (sgl-project#6735)
* [FEAT] Add transformers backend support (sgl-project#5929)
* [fix] recover auto-dispatch for rmsnorm and rope (sgl-project#6745)
* fix ep_moe_reorder kernel bugs (sgl-project#6858)
* [Refactor] Multimodal data processing for VLM (sgl-project#6659)
* Decoder-only Scoring API (sgl-project#6460)
* feat: add dp-rank to KV events (sgl-project#6852)
* Set `num_fused_shared_experts` as `num_shared_experts` when shared_experts fusion is not disabled (sgl-project#6736)
* Fix one missing arg in DeepEP (sgl-project#6878)
* Support LoRA in TestOpenAIVisionServer and fix fused kv_proj loading bug. (sgl-project#6861)
* support 1 shot allreduce in 1-node and 2-node using mscclpp (sgl-project#6277)
* Fix Qwen3MoE missing token padding optimization (sgl-project#6820)
* Tiny update error hints (sgl-project#6846)
* Support layerwise rebalancing experts (sgl-project#6851)
* Tiny allow profiler API to auto create directory (sgl-project#6865)
* Support Blackwell DeepEP docker images (sgl-project#6868)
* [EP] Add cuda kernel for moe_ep_post_reorder (sgl-project#6837)
* [theta]merge 0605
* oai: fix openAI client error with single request via batch api (sgl-project#6170)
* [PD] Fix potential perf spike caused by tracker gc and optimize doc (sgl-project#6764)
* Use deepgemm instead of triton for fused_qkv_a_proj_with_mqa (sgl-project#6890)
* [CUTLASS-FP4-MOE] Introduce CutlassMoEParams class for easy initialization of Cutlass Grouped Gems Metadata (sgl-project#6887)
* bugfix(OAI): Fix image_data processing for jinja chat templates (sgl-project#6877)
* [CPU] enable CI for PRs, add Dockerfile and auto build task (sgl-project#6458)
* AITER backend extension and workload optimizations (sgl-project#6838)
* [theta]merge
* [theta]merge
* [Feature] Support Flashinfer fmha on Blackwell (sgl-project#6930)
* Fix a bug in abort & Improve docstrings for abort (sgl-project#6931)
* Tiny support customize DeepEP max dispatch tokens per rank (sgl-project#6934)
* Sync the changes on cuda graph runners (sgl-project#6932)
* [PD] Optimize transfer queue forward logic for dummy rank (sgl-project#6922)
* [Refactor] image data process in bench_serving (sgl-project#6879)
* [fix] logical_to_all_physical_map index 256 is out of bounds in EP parallel. (sgl-project#6767)
* Add triton fused moe kernel config for E=257 on B200 (sgl-project#6939)
* [sgl-kernel] update deepgemm (sgl-project#6942)
* chore: bump sgl-kernel v0.1.6 (sgl-project#6943)
* Minor compile fused topk (sgl-project#6944)
* [Bugfix] pipeline parallelism and Eagle Qwen2 (sgl-project#6910)
* Tiny re-introduce profile id logging (sgl-project#6912)
* Add triton version as a fused_moe_triton config search key to avoid performace decrease in different Triton version (sgl-project#5955)
* reduce torch.zeros overhead in moe align block size kernel (sgl-project#6369)
* chore: upgrade sgl-kernel v0.1.6 (sgl-project#6945)
* add fbgemm moe grouped gemm kernel benchmark (sgl-project#6924)
* [Docker] Add docker file for SGL Router (sgl-project#6915)
* Disabling mixed chunked prefill when eagle is enabled (sgl-project#6874)
* Add canary for EPLB rebalancing (sgl-project#6895)
* Refactor global_server_args_dict (sgl-project#6866)
* Fuse routed scaling factor in topk_reduce kernel (sgl-project#6220)
* Update server timeout time in AMD CI. (sgl-project#6953)
* [misc] add is_cpu() (sgl-project#6950)
* Add H20 fused MoE kernel tuning configs for DeepSeek-R1/V3 (sgl-project#6885)
* Add a CUDA kernel for fusing mapping and weighted sum for MoE. (sgl-project#6916)
* chore: bump sgl-kernel v0.1.6.post1 (sgl-project#6955)
* chore: upgrade sgl-kernel v0.1.6.post1 (sgl-project#6957)
* [DeepseekR1-FP4] Add Support for nvidia/DeepSeekR1-FP4 model (sgl-project#6853)
* Revert "Fuse routed scaling factor in topk_reduce kernel (sgl-project#6220)" (sgl-project#6968)
* [AMD] Add more tests to per-commit-amd (sgl-project#6926)
* chore: bump sgl-kernel v0.1.7 (sgl-project#6963)
* Slightly improve the sampler to skip unnecessary steps (sgl-project#6956)
* rebase h20 fused_moe config (sgl-project#6966)
* Fix CI and triton moe Configs (sgl-project#6974)
* Remove unnecessary kernels of num_token_non_padded (sgl-project#6965)
* Extend cuda graph capture bs for B200 (sgl-project#6937)
* Fuse routed scaling factor in deepseek (sgl-project#6970)
* Sync cuda graph runners (sgl-project#6976)
* Fix draft extend ut stability with flush cache (sgl-project#6979)
* Fix triton sliding window test case (sgl-project#6981)
* Fix expert distribution dumping causes OOM (sgl-project#6967)
* Minor remove one kernel for DeepSeek (sgl-project#6977)
* [perf][sgl-kernel] extend cutlass_mla_decode to support num_head < 128 (sgl-project#6929)
* Enable more unit tests for AMD CI. (sgl-project#6983)
* Use torch.compile to fuse flash attention decode metadata preparation (sgl-project#6973)
* Eliminate stream sync to speed up LoRA batch init (sgl-project#6960)
* support qwen3 emebedding (sgl-project#6990)
* Fix torch profiler bugs for bench_offline_throughput.py (sgl-project#6557)
* chore: upgrade flashinfer v0.2.6.post1 jit (sgl-project#6958)
* cleanup tmp dir (sgl-project#7007)
* chore: update pr test xeon (sgl-project#7008)
* Fix cutlass MLA gets almost zero accuracy (sgl-project#6998)
* Update amd nightly models CI. (sgl-project#6992)
* feat: add direct routing strategy to DP worker (sgl-project#6884)
* Fallback to lower triton version for unfound fused moe configs (sgl-project#7013)
* Fix torchvision version for Blackwell (sgl-project#7015)
* Simplify prepare_extend_after_decode (sgl-project#6987)
* Migrate to assertEqual (sgl-project#6741)
* Fix torch version in blackwell dockerfile (sgl-project#7017)
* chore: update pr test xeon (sgl-project#7018)
* Update default settings for blackwell (sgl-project#7023)
* Support both approximate and exact expert distribution collection (sgl-project#6964)
* Add decode req pool (sgl-project#6980)
* [theta]merge 0610
* [theta]merge 0610
* [CI] Add CI workflow for sgl-router docker build (sgl-project#7027)
* Fix fused_moe triton configs (sgl-project#7029)
* CPU: map changes from developing branch in sgl-kernel (sgl-project#6833)
* chore: bump v0.4.7 (sgl-project#7038)
* Update README.md (sgl-project#7040)
Motivation
This PR maps changes from the developing branch into sgl-kernel, including:
Modifications
Checklist