
[Bug] Streaming Response Missing index in tool_calls with LLaMA-3.3-70B using sglang #5661

@Arunachalamkalimuthu

Description

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

When using the meta/llama-3.3-70b model served via sglang, the streaming response chunks do not include the required index property within the tool_calls array. Clients need index to merge the fragments of each tool call, so its absence results in Zod validation errors or incorrect downstream processing wherever the tool-call index is essential.

Expected behavior
Each tool_call object in a streamed chunk should include an index property, for example:

{
  "tool_calls": [
    {
      "index": 0,
      "id": "0",
      "type": "function",
      "function": {
        "name": "webSearch",
        "arguments": "{ \"query\": \"weather today\" }"
      }
    }
  ]
}

This behavior is correctly observed with providers such as:

  • OpenAI
  • DeepInfra
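
For context on why the field matters: streaming tool-call output arrives as fragments, with the function name and argument string split across many chunks, and OpenAI-compatible clients merge those fragments keyed by index. Below is a minimal sketch of that accumulation logic (the helper name accumulate_tool_calls is mine, for illustration; the chunk shapes follow the OpenAI chat-completions streaming format):

# Minimal sketch of how a client merges streamed tool_call deltas,
# keyed by the required `index` field.
def accumulate_tool_calls(deltas_per_chunk):
    """deltas_per_chunk: iterable of `delta.tool_calls` lists, one per chunk."""
    calls = {}  # index -> partially assembled tool call
    for tool_calls in deltas_per_chunk:
        for delta in tool_calls or []:
            i = delta["index"]  # this lookup is exactly what the bug breaks
            call = calls.setdefault(
                i,
                {"id": "", "type": "function",
                 "function": {"name": "", "arguments": ""}},
            )
            if delta.get("id"):
                call["id"] = delta["id"]
            fn = delta.get("function") or {}
            if fn.get("name"):
                call["function"]["name"] = fn["name"]
            if fn.get("arguments"):
                # argument text accumulates fragment by fragment
                call["function"]["arguments"] += fn["arguments"]
    return [calls[i] for i in sorted(calls)]

Once the model emits more than one tool call in parallel, there is no reliable way to assign a fragment to a call without index.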

Actual behavior
The response lacks the index property:

{
  "tool_calls": [
    {
      "id": "0",
      "type": "function",
      "function": {
        "name": null,
        "arguments": " in"
      }
    }
  ]
}

This causes failures in tools that rely on proper tool_calls indexing (e.g., Zod-based validators and function-call dispatchers).

Model
meta/llama-3.3-70b

Framework
sglang

Steps to reproduce

1. Stream a tool-calling-enabled prompt through sglang's OpenAI-compatible endpoint (a minimal client sketch follows this list).

2. Observe the response structure in tool_calls.

3. Compare it with the response structure when using OpenAI or DeepInfra for the same prompt.
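
A minimal client sketch using the openai Python package (the base_url, port, and API key here are assumptions for a local sglang deployment; adjust them to your setup):

# Hypothetical reproduction client for a locally served sglang instance.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="meta/llama-3.3-70b",
    stream=True,
    messages=[{"role": "user",
               "content": "What is the current weather in Dubai?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "webSearch",
            "description": "Search the web for up-to-date information",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string",
                              "description": "The search query"}
                },
                "required": ["query"],
            },
        },
    }],
    tool_choice={"type": "function", "function": {"name": "webSearch"}},
)

for chunk in stream:
    if not chunk.choices:
        continue
    for tc in chunk.choices[0].delta.tool_calls or []:
        # Against sglang this prints index=None; OpenAI and DeepInfra
        # return 0, 1, ... for the same request.
        print("index =", tc.index, "| fragment:", tc.function)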

Additional context
This issue affects all applications that expect the standard OpenAI-compatible tool-calling response shape and breaks compatibility with validation schemas such as Zod.
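
Until the server emits index, one possible client-side stopgap (a sketch under assumptions, not an official fix) is to backfill index by keying fragments on id. The actual-behavior example above shows that sglang does include id on mid-stream fragments; the shim below assumes every fragment of the same call repeats that id:

# Hypothetical shim: backfill a missing `index` on tool-call deltas by
# assigning synthetic indices in order of first-seen `id`.
def make_index_backfiller():
    id_to_index = {}  # tool-call id -> synthetic index

    def backfill(tool_call_deltas):
        for delta in tool_call_deltas or []:
            if delta.get("index") is None:
                key = delta.get("id")
                if key not in id_to_index:
                    id_to_index[key] = len(id_to_index)
                delta["index"] = id_to_index[key]
        return tool_call_deltas

    return backfill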

Attachment: Screen.Recording.2025-04-22.at.2.30.16.PM-VEED.mp4 (screen recording of the failure)

Reproduction

{
    "model": "meta/llama-3.3-70b",
    "stream": true,
    "messages": [
        {
            "role": "user",
            "content": "What is the current weather in  Dubai?"
        }
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "webSearch",
                "description": "Search the web for up-to-date information",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {
                            "type": "string",
                            "description": "The search query"
                        }
                    },
                    "required": [
                        "query"
                    ]
                }
            }
        }
    ],
    "tool_choice": {
        "type": "function",
        "function": {
            "name": "webSearch"
        }
    }
}

Environment

Python: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H20
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 550.127.08
PyTorch: 2.5.1+cu124
sgl_kernel: 0.0.5
flashinfer: 0.2.3+cu124torch2.5
triton: 3.1.0
transformers: 4.48.3
torchao: 0.9.0
numpy: 1.26.4
aiohttp: 3.11.13
fastapi: 0.115.11
hf_transfer: 0.1.9
huggingface_hub: 0.29.3
interegular: 0.3.3
modelscope: 1.23.2
orjson: 3.10.15
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.6
multipart: 0.0.20
zmq: 26.3.0
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.66.3
tiktoken: 0.9.0
anthropic: Module Not Found
litellm: Module Not Found
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 0-47,96-143 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 0-47,96-143 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 0-47,96-143 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 0-47,96-143 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 48-95,144-191 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 48-95,144-191 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 48-95,144-191 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X 48-95,144-191 1 N/A

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
