
Conversation

Contributor

@u4lr451 commented Jun 17, 2025

Motivation

Currently, #6081 does not enable CUDA graphs when some DP ranks have idle (empty) input.
The goal of this pull request is to support CUDA graph execution even when some DP ranks are idle, thereby improving performance.

Modifications

Checklist

Performance

Server launch commands

All runs use --disable-radix with tp-size=16 and dp-size=16.

  1. With MTP
# 2 H20 nodes
 python3 -m sglang.launch_server --model-path /sgl-workspace//DeepSeek-V3-0324 --disable-overlap-schedule --disable-radix --tp 16 --dist-init-addr 29.226.50.190:20000 --host 0.0.0.0 --port 30000 --nnodes 2 --node-rank  ${RANK}  --trust-remote-code --enable-dp-attention --dp-size 16 --disable-custom-all-reduce --mem-fraction-static 0.60 --cuda-graph-max-bs 256 --enable-metrics --speculative-algo NEXTN --speculative-draft /sgl-workspace/SGLang/DeepSeek-V3-0324-NextN --max-running-requests 256 --chunked-prefill-size 16384 --speculative-num-steps 2 --speculative-eagle-topk 4 --speculative-num-draft-tokens 4 --watchdog-timeout 120
  2. Without MTP
# 2 H20 nodes
 python3 -m sglang.launch_server --model-path /sgl-workspace//DeepSeek-V3-0324 --disable-overlap-schedule --disable-radix --tp 16 --dist-init-addr 29.226.50.190:20000 --host 0.0.0.0 --port 30000 --nnodes 2 --node-rank ${RANK}  --trust-remote-code --enable-dp-attention --dp-size 16 --disable-custom-all-reduce --mem-fraction-static 0.60 --cuda-graph-max-bs 256 --enable-metrics  --max-running-requests 256 --chunked-prefill-size 16384  --watchdog-timeout 120

Test method

  1. Use the GSM8K dataset.
  2. Testing steps: first call flush_cache, then run sglang.test.few_shot_gsm8k.run_eval to evaluate performance with num_shots=5 and max_new_tokens=512 (a runnable sketch follows).
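
A minimal sketch of this procedure (the /flush_cache endpoint and sglang.test.few_shot_gsm8k.run_eval exist in sglang, but the exact argument fields below are assumed from this description and may differ across versions):

# Hedged sketch of the test steps above; argument field names are assumptions.
from types import SimpleNamespace
import requests
from sglang.test.few_shot_gsm8k import run_eval

requests.post("http://127.0.0.1:30000/flush_cache")  # step 1: clear the cache

args = SimpleNamespace(
    host="http://127.0.0.1",
    port=30000,
    data_path=None,       # let the helper fetch GSM8K itself
    num_questions=500,    # both 500 and 200 were measured below
    num_shots=5,
    max_new_tokens=512,
    parallel=128,         # assumed request concurrency
)
print(run_eval(args))     # step 2: reports accuracy and throughput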

Results

  • num_questions = 500 (results attached as an image)
  • num_questions = 200 (results attached as an image)

Contributor

@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @u4lr451, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the performance of the system by enabling CUDA graph execution even when some Data Parallel (DP) ranks are idle. It achieves this by removing previous code that explicitly prevented CUDA graphs in such situations and by making crucial adjustments to metadata handling and graph replay logic to ensure correctness and robustness across varying batch states. The changes aim to maximize the utilization of CUDA graphs for enhanced inference throughput.

Highlights

  • CUDA Graph Enablement for Idle Ranks: Removed the explicit condition that previously disabled CUDA graphs when Data Parallel (DP) ranks were idle, allowing performance benefits in these scenarios. This directly addresses the PR's core motivation.
  • CUDA Graph Replay Consistency: Modified the CUDA graph replay mechanism to consistently use the forward_mode that was active during graph capture, rather than the current batch's forward_mode. This ensures correct execution for static CUDA graphs, especially when batch characteristics change (see the sketch after this list).
  • Attention Metadata Robustness: Enhanced the attention backend's metadata initialization to correctly handle cases where spec_info.accept_length_cpu might be empty, preventing potential errors when processing idle inputs.
  • Idle Input Initialization: Ensured that accept_length and accept_length_cpu are properly initialized for idle inputs within eagle_utils, providing complete metadata for all batch types.
  • Relaxed Draft Extend Graph Condition: Removed the strict requirement that all ranks must be active for the draft extend CUDA graph to run. The decision to use the CUDA graph is now solely based on the cuda_graph_runner's can_run method, allowing more flexible graph utilization.
  • Buffer Allocation Adjustment: Updated the cuda_graph_runner's gathered_buffer allocation to use self.max_num_token, which likely provides a more accurate and efficient sizing for the buffer.
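
To make the replay-consistency highlight concrete, here is a hedged sketch of the idea; the attribute and method names follow this summary (capture_forward_mode, init_forward_metadata_replay_cuda_graph), while the exact argument list in cuda_graph_runner.py is assumed and differs in detail:

# Hedged sketch: replay must use the forward_mode captured with the graph,
# not the current batch's mode (an idle batch reports an idle forward mode
# even though the graph was captured under a non-idle mode).
def replay_prepare(self, forward_batch):
    self.model_runner.attn_backend.init_forward_metadata_replay_cuda_graph(
        bs=self.bs,
        req_pool_indices=self.req_pool_indices,
        seq_lens=self.seq_lens,
        seq_lens_sum=forward_batch.seq_lens_sum,
        encoder_lens=None,
        forward_mode=self.capture_forward_mode,  # key change: captured mode
        spec_info=forward_batch.spec_info,
    )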

Contributor

@gemini-code-assist (bot) left a comment


Code Review

This pull request successfully enables CUDA graph execution for DP idle batches in the speculative decoding path, addressing a performance limitation. The changes involve removing previous checks that disabled CUDA graphs for idle batches and include necessary correctness fixes to handle edge cases like empty accept_length_cpu lists and ensure proper metadata initialization during CUDA graph replay. The refactoring of CUDA graph eligibility checks into the forward_draft_extend_after_decode function also improves code organization.

@u4lr451 force-pushed the enable_cuda_graph_for_dp_idle_batch branch from 770ff80 to 54dacbf on June 18, 2025 06:11
Contributor

MiterV1 commented Jun 18, 2025

When using this patch with PD + DeepEP + DP attention + MTP, an error occurred:

/pytorch/aten/src/ATen/native/cuda/Indexing.cu:1553: indexSelectLargeIndex: block: [292,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
(the same assertion repeats for threads [49,0,0] through [63,0,0])
[2025-06-18 09:21:39 DP2 TP2] Scheduler hit an exception: Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/sglang/srt/managers/scheduler.py", line 2598, in run_scheduler_process
    scheduler.event_loop_normal_disagg_decode()
  File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sglang/srt/disaggregation/decode.py", line 665, in event_loop_normal_disagg_decode
    result = self.run_batch(batch)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sglang/srt/managers/scheduler.py", line 1662, in run_batch
    ) = self.draft_worker.forward_batch_speculative_generation(batch)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sglang/srt/speculative/eagle_worker.py", line 323, in forward_batch_speculative_generation
    self.verify(batch, spec_info)
  File "/usr/local/lib/python3.12/dist-packages/sglang/srt/speculative/eagle_worker.py", line 685, in verify
    res: EagleVerifyOutput = spec_info.verify(
                             ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sglang/srt/speculative/eagle_utils.py", line 479, in verify
    accept_index_cpu = accept_index.tolist()
                       ^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


[2025-06-18 09:21:39] Child process unexpectedly failed with exitcode=131. pid=101132

Contributor Author

u4lr451 commented Jun 19, 2025

@MiterV1 Please provide the reproduction commands and environment details.

@ch-wan self-assigned this Jun 19, 2025
@u4lr451 force-pushed the enable_cuda_graph_for_dp_idle_batch branch 2 times, most recently from 4af3eed to 87f35cb on June 23, 2025 08:32
@u4lr451 force-pushed the enable_cuda_graph_for_dp_idle_batch branch from 87f35cb to 5556691 on June 23, 2025 08:43
@u4lr451 force-pushed the enable_cuda_graph_for_dp_idle_batch branch from b286098 to b6262ed on June 23, 2025 09:28
Collaborator

ch-wan commented Jun 23, 2025

/gemini summary

Collaborator

ch-wan commented Jun 23, 2025

/gemini review

Contributor

Summary of Changes

This pull request primarily focuses on enhancing performance by enabling CUDA graph execution for Data Parallel (DP) idle batches, a scenario previously unsupported. It involves critical adjustments to how CUDA graphs are captured and replayed, ensuring that metadata aligns correctly even when some DP ranks are not actively processing requests. The changes also improve the compatibility of idle batch speculative inputs with the CUDA graph pipeline and optimize buffer allocations, leading to more efficient model inference.

Highlights

  • CUDA Graph Enablement: Removed previous restrictions that prevented CUDA graphs from being utilized when Data Parallel (DP) ranks are idle, allowing for significant performance improvements in such scenarios.
  • CUDA Graph Replay Correctness: Ensured that CUDA graph replay mechanisms correctly use the capture_forward_mode for initializing attention backend metadata. This is crucial for accurate graph execution, especially when the current batch's mode might differ from the mode used during graph capture (e.g., an idle batch).
  • Idle Batch Speculative Input Compatibility: Enhanced the creation of EagleDraftInput for idle batches by adding necessary accept_length and accept_length_cpu fields, and ensuring a custom_mask is properly applied. This makes these inputs fully compatible with CUDA graph-enabled draft extend operations.
  • Buffer Allocation Optimization: Corrected and optimized the size calculation for the gathered_buffer within the CUDA graph runner, improving memory efficiency.
  • Dynamic Hidden Size for Eagle3: Implemented conditional logic to set the correct hidden_size for EagleDraftInput when creating idle inputs, specifically accounting for the Eagle3 speculative algorithm.
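
The Eagle3 item can be made concrete with a small hedged sketch; the factor of 3 reflects EAGLE3 concatenating hidden states from three target layers, and the identifier names below are assumptions rather than copies of eagle_worker.py:

# Hedged sketch: choosing hidden_size for an idle EagleDraftInput.
# EAGLE3 feeds the draft model hidden states concatenated from three target
# layers, so the idle placeholder must be three times wider (assumption based
# on the EAGLE3 design; the exact condition in the real code may differ).
hidden_size = (
    model_config.hidden_size * 3
    if speculative_algorithm.is_eagle3()
    else model_config.hidden_size
)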
Changelog
  • python/sglang/srt/layers/attention/flashattention_backend.py
    • Modified init_forward_metadata_replay_cuda_graph (lines 1767-1773) to safely calculate metadata.max_seq_len_q when spec_info.accept_length_cpu is an empty list, preventing a ValueError during CUDA graph replay for idle batches (a sketch of the guard follows this changelog).
    • Minor whitespace adjustments (lines 1704-1715, 1728-1742, 1747-1752, 1758-1761).
  • python/sglang/srt/managers/scheduler.py
    • Removed the explicit code block (lines 1824-1827) that previously prevented CUDA graphs from being enabled when a batch was idle or None, thereby allowing CUDA graphs for idle DP ranks.
  • python/sglang/srt/model_executor/cuda_graph_runner.py
    • Adjusted the initialization of self.gathered_buffer (lines 309-329) to use self.max_num_token, optimizing its size and simplifying conditional allocation.
    • Initialized a new self.custom_mask (lines 324-330) for speculative decoding.
    • Ensured that self.tbo_plugin.replay_prepare and self.model_runner.attn_backend.init_forward_metadata_replay_cuda_graph (lines 677-691) are called with self.capture_forward_mode for correct CUDA graph replay, aligning metadata with the captured graph.
    • Added logic (lines 683-684) to assign self.custom_mask to forward_batch.spec_info for idle batches when speculative info is present.
    • Updated get_spec_info (lines 736-746) to reuse the pre-allocated self.custom_mask for EagleVerifyInput instead of creating a new one.
  • python/sglang/srt/speculative/eagle_utils.py
    • Added accept_length and accept_length_cpu fields (lines 102-103) to the EagleDraftInput created by create_idle_input, ensuring compatibility with subsequent draft extend operations under CUDA graphs.
  • python/sglang/srt/speculative/eagle_worker.py
    • Refactored forward_batch_speculative_generation (lines 322-332), check_forward_draft_extend_after_decode (lines 344-363), and forward_draft_extend_after_decode (lines 819-864) to streamline CUDA graph eligibility checks and remove the redundant can_run_draft_extend_cuda_graph parameter.
    • Updated forward_draft_extend_after_decode (lines 831-839) to dynamically set the hidden_size for idle EagleDraftInput based on whether Eagle3 is used.
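
For the flashattention_backend.py item above, the guard can be as simple as defaulting when the list is empty; a hedged sketch (the exact formula in the real code may differ):

# max() over an empty list raises ValueError; idle DP ranks produce
# accept_length_cpu == [], so fall back to a safe default. The +1 is assumed
# to account for the verified token appended after the accepted drafts.
def compute_max_seq_len_q(accept_length_cpu):
    return (max(accept_length_cpu) + 1) if accept_length_cpu else 1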
Activity
  • On 2025-06-18, MiterV1 reported a RuntimeError: CUDA error: device-side assert triggered when using the patch with PD+DEEPEP+DP Attention+MTP, providing a detailed stack trace.
  • On 2025-06-19, u4lr451 (the author) requested reproduction commands and environment details from MiterV1 to investigate the reported error.
  • On 2025-06-17, gemini-code-assist[bot] provided several high-priority review comments, highlighting crucial correctness fixes: handling empty accept_length_cpu in flashattention_backend.py, optimizing gathered_buffer sizing in cuda_graph_runner.py, ensuring correct capture_forward_mode usage for CUDA graph replay in cuda_graph_runner.py, and the necessity of adding accept_length/accept_length_cpu to idle inputs in eagle_utils.py.
  • On 2025-06-20, ch-wan raised a question about potentially refactoring spec_info for idle batches to achieve natural compatibility with all attention backends, aiming to reduce future development complexity.

Contributor

@gemini-code-assist (bot) left a comment


Code Review

This pull request significantly enhances the performance of the system by enabling CUDA graph execution for idle Data Parallel (DP) batches, a crucial improvement for distributed inference. The changes involve refactoring the CUDA graph management, particularly in cuda_graph_runner.py and eagle_worker.py, to ensure that forward_mode consistency is maintained during graph replay and that idle batches are properly handled with custom_mask initialization. The removal of explicit can_run_draft_extend_cuda_graph parameters streamlines the API, centralizing CUDA graph decision logic within the respective runners. Additionally, the update correctly initializes EagleDraftInput for idle scenarios, including dynamic hidden_size calculation for eagle3 models. Overall, the changes are well-implemented and directly address the stated performance objectives.

Comment on lines 104 to 105

Contributor


Priority: medium

Adding accept_length and accept_length_cpu to the idle EagleDraftInput is necessary for compatibility with the updated init_forward_metadata_replay_cuda_graph function in flashattention_backend.py. This ensures that idle inputs are properly structured for CUDA graph replay.

            accept_length=torch.empty((0,), device=device, dtype=torch.int32),
            accept_length_cpu=[],
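
Putting the suggestion in context, a hedged sketch of what create_idle_input constructs with these fields added; only accept_length and accept_length_cpu are verbatim from the suggestion, while the signature and the remaining zero-sized fields are assumptions for illustration:

import torch

# Hedged sketch of EagleDraftInput.create_idle_input (written as a plain
# function here; in sglang it is a classmethod on EagleDraftInput).
def create_idle_input(cls, device, hidden_size, dtype, topk, capture_hidden_mode):
    return cls(
        verified_id=torch.empty((0,), device=device, dtype=torch.int64),
        hidden_states=torch.empty((0, hidden_size), device=device, dtype=dtype),
        topk_p=torch.empty((0, topk), device=device, dtype=torch.float32),
        topk_index=torch.empty((0, topk), device=device, dtype=torch.int64),
        capture_hidden_mode=capture_hidden_mode,
        accept_length=torch.empty((0,), device=device, dtype=torch.int32),  # suggested
        accept_length_cpu=[],                                               # suggested
    )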

@zhyncs merged commit ed0a0b6 into sgl-project:main Jun 24, 2025
227 of 282 checks passed
yilian49 pushed a commit to yilian49/sglang that referenced this pull request Jun 24, 2025
Co-authored-by: austindeng <austindeng@tencent.com>
Co-authored-by: Cheng Wan <54331508+ch-wan@users.noreply.github.com>
Co-authored-by: ch-wan <cwan39@gatech.edu>
whybeyoung pushed a commit to whybeyoung/sglang that referenced this pull request Jun 24, 2025
chenxijun1029 pushed a commit to chenxijun1029/sglang that referenced this pull request Jul 17, 2025
pi314ever pushed a commit to pi314ever/sglang that referenced this pull request Jul 17, 2025
(merge commit body omitted: it bundles several hundred unrelated upstream SGLang commits, including "Perormance: Enable cuda graph for dp idle batch (sgl-project#7269)", along with their sign-off and co-author trailers)
shuaills pushed a commit to shuaills/sglang that referenced this pull request Jul 21, 2025