
@CatherineSue (Collaborator) commented May 28, 2025

Motivation

  1. Fix [Bug] incorrect tool_calls index when there are multiple tool_calls in stream mode #6310
  2. The non-stream function call response from OpenAI does not include an index field, so we should remove it. See: https://platform.openai.com/docs/api-reference/chat/object
  3. Refactor parse_streaming_increment in BaseFormatDetector

The current parse_streaming_increment in BaseFormatDetector has some issues:

The Original Problem

  1. For a single tool call in streaming, the first tool_index should be 0, but it was assigned the index of the function in the tools list passed in the user's request.

  2. When there were multiple tool calls, the tool_index was always 0 instead of being sequential (0, 1, 2, etc.), as illustrated in the sketch after this list. This was because:

  • self.current_tool_id = len(tool_call_arr) - 1 jumped directly to the last tool in the current tool_call_arr, which is reinitialized to [] on every call to the function. But current_tool_id should track the index of the tool currently being processed in the overall tool_calls list
  • Complex array-handling logic caused state inconsistencies throughout the function
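
To make the failure concrete, here is a minimal sketch (hypothetical class and names, not the actual detector code) of why rebuilding the array on every increment pins the index at 0:

# Minimal sketch (not the real detector) of the old behavior: the array is
# rebuilt from scratch on every increment, so len(tool_call_arr) - 1 is always
# 0 once a single partial tool object has been parsed.
class OldStyleDetector:
    def __init__(self):
        self.current_tool_id = -1

    def parse_streaming_increment(self, new_text: str) -> int:
        tool_call_arr = []            # reinitialized on every call
        obj = {"name": "demo"}        # pretend one partial tool object was parsed
        tool_call_arr.append(obj)
        # Always evaluates to 0, regardless of how many tools already finished:
        self.current_tool_id = len(tool_call_arr) - 1
        return self.current_tool_id


detector = OldStyleDetector()
print([detector.parse_streaming_increment(t) for t in ("<tool1>", "<tool2>")])  # [0, 0]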

Refactor Details

  1. Removed tool_call_arr and is_complete arrays

Why:

  • The array tool_call_arr was being rebuilt from scratch on every streaming increment, but self.current_tool_id was updated based on len(tool_call_arr).
  • This caused state inconsistencies because the previous state was lost
  • The logic was overly complex and hard to maintain
  • They seem to have been designed for a different use case (a single JSON array containing multiple tools). However, the following code then assumes obj is never a list but a dictionary:
    # Handle parameters/arguments consistency
    if "parameters" in obj:
        assert (
            "arguments" not in obj
        ), "model generated both parameters and arguments"
        obj["arguments"] = obj["parameters"]
    tool_call_arr.append(obj)

Why it won't affect us:

  • We're focusing on the common case where obj is always a single tool object (dict)
  • Each tool call is processed independently
  • We don't need to track multiple tools in a single parsing call
  2. Removed the entire "Case 1: Handle new tool discovered in array" section

Why we removed Case 1:
Case 1 was unreachable in the original code, because

  • tool_call_arr = [obj] always has length 1 (a single tool object)
  • So 1 > current_tool_id + 1 is only true when current_tool_id = -1 (initialization), and that initialization is easily handled in Case 2, as paraphrased in the sketch below.
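
A rough paraphrase of the removed guard (names assumed from this description, not the exact original code):

# Rough paraphrase of the removed "Case 1" guard, assuming tool_call_arr
# always holds exactly one parsed object per increment.
def case1_is_reachable(current_tool_id: int) -> bool:
    obj = {"name": "demo", "arguments": "{}"}
    tool_call_arr = [obj]                      # always exactly one parsed object
    # 1 > current_tool_id + 1 holds only when current_tool_id == -1, i.e. before
    # the first tool has started; Case 2 can perform that initialization instead.
    return len(tool_call_arr) > current_tool_id + 1


print(case1_is_reachable(-1), case1_is_reachable(0), case1_is_reachable(1))  # True False False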

Why it won't affect us:

  • Tool switching now happens naturally when each tool completes in Case 2
  • Each tool call is processed in separate streaming sessions
  • We don't need complex logic to detect "new tools in an array"
  3. Proper State Management
  • Ensured streamed_args_for_tool is properly sized for each new tool, as it is used in adapter.py (the reason is not yet clear)
  • Added bounds checking for prev_tool_call_arr access
  • Correct tool_index assignment, using the right ID for completing vs. ongoing tools
  4. The Flow Now Works Like This (see the sketch after this list)
  • Tool 1 starts: current_tool_id = 0, sends tool name with tool_index=0
  • Tool 1 completes: current_tool_id increments to 1, buffer clears
  • Tool 2 starts: uses current_tool_id = 1, sends tool name with tool_index=1
  • Tool 2 completes: current_tool_id increments to 2, and so on...
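
A condensed, hypothetical sketch of that bookkeeping (names mirror this description; it is not the real parse_streaming_increment body):

# Condensed sketch of the state transitions described above (hypothetical class).
class StreamingToolState:
    def __init__(self):
        self.current_tool_id = -1
        self.streamed_args_for_tool: list[str] = []
        self._buffer = ""

    def on_tool_name(self, name: str) -> dict:
        if self.current_tool_id == -1:
            self.current_tool_id = 0                    # first tool starts at index 0
        while len(self.streamed_args_for_tool) <= self.current_tool_id:
            self.streamed_args_for_tool.append("")      # keep the per-tool arg buffer sized
        return {"tool_index": self.current_tool_id, "name": name}

    def on_tool_complete(self) -> int:
        finished = self.current_tool_id                 # remaining args flush with this index
        self.current_tool_id += 1                       # the next tool gets the next index
        self._buffer = ""                               # clear the buffer for the next tool
        return finished


state = StreamingToolState()
print(state.on_tool_name("get_weather"))                # tool_index 0
print(state.on_tool_complete())                         # 0, current_tool_id becomes 1
print(state.on_tool_name("get_tourist_attraction"))     # tool_index 1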

Modifications

  • Simplify parsing logic by removing the tool_call_arr and is_complete arrays
  • Add a proper current_tool_id increment when a tool completes (original "Case 3: Handle streaming arguments" with is_complete=True)
  • Ensure correct tool_index assignment before incrementing current_tool_id
  • Remove index from non-stream tool calls in adapter.py


@gemini-code-assist bot left a comment (Contributor)

Hello @CatherineSue, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello! Gemini here, providing a summary of this pull request authored by CatherineSue.

The primary goal of this PR is to address a bug where the tool_index was incorrectly reported as 0 for multiple streaming tool calls and to refactor the parse_streaming_increment method in BaseFormatDetector for improved clarity and correctness. Additionally, it removes the index field for non-streaming tool calls in the OpenAI adapter to align with the API specification.

The refactor of parse_streaming_increment involves simplifying the logic by removing complex array handling (tool_call_arr, is_complete) that was causing state inconsistencies. The new approach processes each streaming increment as a potential part of the current tool call, managing state (current_tool_id, streamed_args_for_tool) more directly. This ensures that tool_index is correctly incremented as each tool call completes during a streaming session.

Highlights

  • Refactor parse_streaming_increment: The core parsing logic for streaming tool calls in BaseFormatDetector has been significantly refactored. This includes removing the tool_call_arr and is_complete arrays, simplifying the parsing loop, and improving state management to handle sequential tool calls more reliably.
  • Remove index for non-stream OpenAI calls: The index field has been removed from the ToolCall object generated for non-streaming responses in the OpenAI API adapter (adapter.py), aligning with the official OpenAI API specification for non-streaming tool calls.

Changelog

Click here to see the changelog
  • python/sglang/srt/function_call/base_format_detector.py
    • In parse_base_json, the assigned tool_index for non-streaming calls is changed to -1, indicating the caller should determine the correct index (line 53); see the sketch after this changelog.
    • The parse_streaming_increment method is heavily refactored:
      • Removed tool_call_arr and is_complete arrays, simplifying state.
      • Removed the old 'Handle new tool discovered in array' logic (Case 1).
      • Updated the logic for handling tool name streaming (now Case 1) to correctly initialize and use self.current_tool_id (lines 178-198).
      • Updated the logic for handling streaming arguments (now Case 2) to use the current tool call object directly, add bounds checking for prev_tool_call_arr, correctly assign tool_index based on completion status, and increment self.current_tool_id upon tool completion (lines 205-265).
      • Updated how self.prev_tool_call_arr is updated (lines 268-272).
  • python/sglang/srt/function_call/qwen25_detector.py
    • Removed a TODO comment in the structure_info method (line 107).
  • python/sglang/srt/function_call/utils.py
    • Added a detailed docstring to the _partial_json_loads function explaining its purpose, arguments, and return value (lines 21-37).
  • python/sglang/srt/openai_api/adapter.py
    • Removed the index=call_info.tool_index argument when creating ToolCall objects in v1_chat_generate_response for non-streaming responses (line 1330).
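
As a side note on the -1 sentinel mentioned above, here is a minimal, hypothetical sketch (made-up names, not the adapter's actual code) of how a non-streaming caller could derive positional indices itself if it ever needs them:

# Hypothetical caller-side sketch: parse_base_json now reports tool_index == -1,
# so a non-streaming caller can simply enumerate the parsed calls if it needs
# positional indices for its own bookkeeping.
from dataclasses import dataclass

@dataclass
class ParsedToolCall:          # stand-in for the detector's parsed tool-call item
    name: str
    parameters: str
    tool_index: int = -1

parsed_calls = [
    ParsedToolCall("get_weather", '{"city": "Paris"}'),
    ParsedToolCall("get_tourist_attraction", '{"city": "Paris"}'),
]
for position, call in enumerate(parsed_calls):
    call.tool_index = position   # internal bookkeeping only; not emitted in the response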

Streaming JSON,
A partial, growing sight,
Index now counts right.


@zhyncs self-assigned this May 28, 2025
@gemini-code-assist bot left a comment (Contributor)

Code Review

This pull request significantly refactors the parse_streaming_increment method in BaseFormatDetector to address issues with tool_index assignment and improve overall logic clarity. The changes simplify state management by assuming a single tool object is processed per parsing increment and correctly handle sequential tool_index generation.

Key improvements:

  • Correct tool_index Handling: The tool_index is now correctly incremented and assigned sequentially for multiple tool calls in streaming mode.
  • Simplified Logic: Removal of tool_call_arr and is_complete arrays, along with unreachable code paths, makes the parsing logic easier to follow and maintain.
  • OpenAI API Compliance: The index field has been removed from non-streaming tool call responses in adapter.py to align with the OpenAI API specification.
  • Improved State Management: State variables like streamed_args_for_tool and prev_tool_call_arr are now managed more robustly.

The detailed pull request description was very helpful in understanding the rationale behind these significant changes. The refactor appears sound and addresses the stated motivations effectively.

Summary of Findings

  • Tool Indexing Logic Refactor: The core logic in BaseFormatDetector.parse_streaming_increment has been substantially refactored. This resolves an issue where tool_index was not correctly assigned for sequential tool calls and simplifies state management, making the code more robust and maintainable.
  • OpenAI API Compliance: The index field was removed from the ToolCall object in non-streaming responses within adapter.py. This change ensures better compliance with the official OpenAI API specification, which does not define an index for tool calls in the non-streaming chat completion response.
  • Code Clarity and Simplification: The removal of intermediate arrays (tool_call_arr, is_complete) and associated complex logic in parse_streaming_increment has greatly improved the clarity of the tool call parsing mechanism.
  • Documentation: A helpful docstring was added to _partial_json_loads in utils.py, improving code understanding. (Severity: low, not commented directly due to review settings)
  • Comment Cleanup: A resolved TODO comment was removed in qwen25_detector.py. (Severity: low, not commented directly due to review settings)

Merge Readiness

The pull request introduces significant improvements to tool call parsing and OpenAI API compliance. The refactoring is well-explained and appears robust. The one comment made is more of an affirmation and a request for confirmation regarding the impact of the API compliance change.

Based on this review, the changes look good and significantly improve the codebase. I am not authorized to approve pull requests, so please ensure it undergoes further review and approval as per your team's process before merging.

@@ -1327,7 +1327,6 @@ def v1_chat_generate_response(
         tool_calls = [
             ToolCall(
                 id=f"call_{base64.urlsafe_b64encode(uuid.uuid4().bytes).rstrip(b'=').decode()}",
-                index=call_info.tool_index,
@gemini-code-assist bot commented (Contributor)

Severity: medium

Good catch removing the index field here for non-streaming tool calls! This aligns the response with the OpenAI API specification, which does not include an index field for tool_calls items in the chat completion object. This enhances API compliance.

Could you confirm if this index was indeed unused or potentially misleading for consumers expecting strict OpenAI compatibility for non-streaming responses?
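
For reference, an abbreviated illustration (made-up values, field subset only) of the two shapes described in the linked OpenAI docs: non-streaming tool_calls entries carry no index, while streaming delta entries do:

# Illustrative only (abbreviated fields, made-up values), based on the OpenAI
# chat completions docs linked above.

# Non-streaming: message.tool_calls entries have no "index" field.
non_stream_tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
}

# Streaming: delta.tool_calls entries carry "index" so chunks of the same
# call can be stitched together across deltas.
stream_delta_tool_call = {
    "index": 1,
    "id": "call_def456",
    "type": "function",
    "function": {"name": "get_tourist_attraction", "arguments": ""},
}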

@CatherineSue (Collaborator, Author) replied:

It is unused. See: https://platform.openai.com/docs/api-reference/chat/object
There is no index field there.
[Screenshot 2025-05-28 at 11:24:09 AM]

@CatherineSue (Collaborator, Author) commented May 28, 2025

Test: Qwen/Qwen2.5-7B-Instruct

Before:
The index for get_tourist_attraction is 1 at first, then 0
[Screenshot 2025-05-28 at 11:20:06 AM]

After:
The index for get_tourist_attraction is always 1, meaning it is the 2nd function call in tool_calls, as expected
[Screenshot 2025-05-28 at 11:19:56 AM]

Before:
The first function call's index is 2, which is wrong
[Screenshot 2025-05-28 at 11:21:57 AM]

After:
The first function call's index is 0
[Screenshot 2025-05-28 at 11:22:07 AM]
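
For context, a hedged sketch of the kind of request used for this check (assumed local endpoint and simplified tool schemas; the actual prompts in the screenshots differ):

# Sketch of the verification flow (assumed endpoint and minimal tool definitions).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
tools = [
    {"type": "function", "function": {"name": "get_weather", "parameters": {"type": "object", "properties": {}}}},
    {"type": "function", "function": {"name": "get_tourist_attraction", "parameters": {"type": "object", "properties": {}}}},
]
stream = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",
    messages=[{"role": "user", "content": "Check the weather in Paris, then suggest a tourist attraction there."}],
    tools=tools,
    stream=True,
)
for chunk in stream:
    for call in chunk.choices[0].delta.tool_calls or []:
        # After this PR: deltas for the first call carry index 0, for the second call index 1.
        print(call.index, call.function.name, call.function.arguments)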

@CatherineSue mentioned this pull request May 28, 2025
zhyncs and others added 2 commits May 28, 2025 12:12
@CatherineSue (Collaborator, Author) commented:

Found a potential bug, fixing

- When self.current_tool_id is greater than 0, the bot_token or the text can start with `self.tool_call_separator + "{"`
- This helps correctly detect the tool calls that follow the first one (sketched below)
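
A hypothetical sketch of that rule (helper name made up; the real change lives in the detector's streaming path):

# Hypothetical sketch of the rule from the commit message: after the first tool
# call (current_tool_id > 0), a new call may begin with the separator + "{"
# rather than with the bot_token itself.
def looks_like_next_tool_call(current_tool_id: int, bot_token: str, separator: str, text: str) -> bool:
    if current_tool_id > 0:
        return text.startswith(bot_token) or text.startswith(separator + "{")
    return text.startswith(bot_token)


print(looks_like_next_tool_call(1, "<tool_call>", ";", ';{"name": "get_weather"}'))  # True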
@CatherineSue force-pushed the chang/base-tool-index branch from d5f2b69 to 3c7b2a6 on May 29, 2025 02:41
@CatherineSue (Collaborator, Author) commented May 29, 2025

Test: mistralai/Devstral-Small-2505

Before:
Some of the text could not be extracted as a function call and was put into content instead
[Screenshot 2025-05-28 at 10:22:43 PM]

After:
[Screenshot 2025-05-28 at 7:44:10 PM]

Add more doc for the separator in Llama32Detector
@zhyncs merged commit c673727 into main May 29, 2025
37 of 41 checks passed
@zhyncs deleted the chang/base-tool-index branch May 29, 2025 07:08
Layssy pushed a commit to Layssy/sglang-iaas that referenced this pull request Jun 9, 2025
xwu-intel pushed a commit to xwu-intel/sglang that referenced this pull request Jun 17, 2025
walker-ai pushed a commit to walker-ai/sglang that referenced this pull request Jul 8, 2025