
Conversation

Contributor

GuanLuo commented Jul 17, 2025

Overview:

Details:

Where should the reviewer start?

This branch is built on top of the krish/multimodal/v1 branch. The additional files are Llama 4-related files and README.md. Code changes to existing files are listed below (a sketch of the resulting worker logic follows the list):

  • encode_worker.py: the image-loading code is extracted into image_loader.py so it can be reused by VllmPDWorker when the prefill worker is also responsible for encoding. VllmEncodeWorker clears the request's image_url as a hint that the prefill worker should consume the image embedding.
  • worker.py: VllmPDWorker loads either the image from the URL or the image embedding from NIXL, depending on whether image_url is set in the request.
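
A minimal sketch of that branching, for illustration only; the class skeleton, helper names (image_loader, nixl_reader, resolve_multimodal_input), and request fields are hypothetical stand-ins rather than the actual worker code:

from typing import Any


class VllmPDWorker:
    """Sketch of the image_url branching; all names below are illustrative."""

    def __init__(self, image_loader: Any, nixl_reader: Any) -> None:
        self.image_loader = image_loader  # shared helper from utils/image_loader.py
        self.nixl_reader = nixl_reader    # stand-in for the NIXL/RDMA read path

    async def resolve_multimodal_input(self, request: Any) -> Any:
        if getattr(request, "image_url", None):
            # image_url still set: this worker also handles encoding,
            # so it loads the raw image itself.
            return await self.image_loader.load_image(request.image_url)
        # image_url cleared by VllmEncodeWorker: consume the precomputed
        # image embedding over NIXL instead of re-encoding.
        return await self.nixl_reader.read_embeddings(request)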

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

  • closes GitHub issue: #xxx

Summary by CodeRabbit

  • New Features

    • Introduced a comprehensive suite of multimodal model serving examples, including support for aggregated and disaggregated workflows with text and image inputs.
    • Added ready-to-use configuration files for popular models (LLaVA, Qwen, Phi3V, Llama 4).
    • Provided example FastAPI-based web frontends and detailed deployment instructions for both local and Kubernetes environments.
    • Enabled high-performance data transfer between distributed workers using RDMA and NATS queue integration.
  • Documentation

    • Added extensive README files detailing deployment scenarios, workflows, and usage instructions for multimodal model serving.
    • Included protocol specifications and utility documentation for advanced users.
  • Utilities

    • Added image loading, argument parsing, logging, and model handling utilities to streamline multimodal deployment and inference.
    • Provided a Python RDMA connector library for efficient GPU data transfer in distributed setups.

GuanLuo requested review from nnshah1, whoisj and a team as code owners July 17, 2025 19:28

copy-pr-bot bot commented Jul 17, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Contributor

coderabbitai bot commented Jul 17, 2025

Walkthrough

This change introduces a comprehensive multimodal example under examples/multimodal_v1, including detailed documentation, configuration files for various model setups, Python modules for component services (workers, processor, frontend), utility modules for image loading, RDMA data transfer, protocol definitions, and deployment graphs. The example supports both aggregated and disaggregated multimodal serving using the Dynamo framework and vLLM, with text and image input, streaming responses, and GPU-accelerated distributed inference.

Changes

File(s) and change summary:

  • examples/multimodal_v1/README.md, backup_README.md, connect/README.md: Added comprehensive documentation for multimodal deployment, the RDMA library, and Kubernetes usage.
  • examples/multimodal_v1/configs/*.yaml: Added configuration files for different multimodal models and serving setups (LLaVA, Qwen, Phi3V, Llama).
  • examples/multimodal_v1/graphs/agg.py, graphs/disagg.py: Defined component-linking pipelines for aggregated and disaggregated serving graphs.
  • examples/multimodal_v1/components/encode_worker.py, components/worker.py: Introduced GPU-enabled encode, decode, and prefill worker service classes for multimodal inference.
  • examples/multimodal_v1/components/processor.py, components/direct_processor.py: Added processor service classes for pre/post-processing and orchestration of multimodal requests.
  • examples/multimodal_v1/components/web.py: Added FastAPI-based frontend service for handling multimodal chat completion requests.
  • examples/multimodal_v1/connect/__init__.py: Implemented the RDMA Connector library for zero-copy data transfer between distributed workers.
  • examples/multimodal_v1/utils/args.py: Added utility for parsing vLLM engine arguments with multimodal-specific options.
  • examples/multimodal_v1/utils/chat_processor.py: Added utilities for chat/completion request preprocessing and streaming response postprocessing.
  • examples/multimodal_v1/utils/image_loader.py: Introduced async image loader with caching for HTTP and data URLs.
  • examples/multimodal_v1/utils/logging.py: Added async utility to monitor and log worker readiness.
  • examples/multimodal_v1/utils/model.py: Added utilities for loading vision models and constructing multimodal data.
  • examples/multimodal_v1/utils/nats_queue.py: Added async singleton wrapper for NATS-based task queues.
  • examples/multimodal_v1/utils/nixl.py: Added utilities for handling and storing NixlMetadata in distributed environments.
  • examples/multimodal_v1/utils/protocol.py: Defined Pydantic models and protocol for multimodal request/response serialization and validation.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Frontend
    participant Processor
    participant EncodeWorker
    participant PDWorker
    participant DecodeWorker

    Client->>Frontend: POST /v1/chat/completions (with text + image URL)
    Frontend->>Processor: generate(MultiModalRequest)
    Processor->>EncodeWorker: encode(vLLMMultimodalRequest)
    EncodeWorker->>PDWorker: generate(vLLMMultimodalRequest with embeddings)
    alt Disaggregated
        PDWorker->>DecodeWorker: generate(prefilled request)
        DecodeWorker-->>PDWorker: stream generation responses
        PDWorker-->>Processor: stream responses
    else Aggregated
        PDWorker-->>Processor: stream generation responses
    end
    Processor-->>Frontend: stream JSON responses
    Frontend-->>Client: stream JSON responses

Possibly related PRs

  • ai-dynamo/dynamo#1366: Adds prompt-template settings in multimodal config files and enhances deployment docs, directly relating to configuration and documentation aspects of the multimodal example.
  • ai-dynamo/dynamo#1882: Fixes a missing "av" package dependency in the container for multimodal examples, related due to deployment support for multimodal models.
  • ai-dynamo/dynamo#1388: Generalizes vision-language model embedding extraction and adds support for multiple models, overlapping with the new multimodal example in model handling utilities.

Poem

In the land of code, where models abound,
A rabbit hopped in and looked all around.
With images and text, it built a new show,
Streaming responses in a multimodal flow.
Workers and graphs, all linked with great care—
Now inference is faster, with RDMA flair!
🐰✨


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor Author

GuanLuo commented Jul 17, 2025

@krishung5 for you and others to review. This branch is built on top of the krish/multimodal/v1 branch. The additional files are Llama 4-related files and README.md. Code changes to existing files are:

  • encode_worker.py: the image-loading code is extracted into image_loader.py so it can be reused by VllmPDWorker when the prefill worker is also responsible for encoding. VllmEncodeWorker clears the request's image_url as a hint that the prefill worker should consume the image embedding.
  • worker.py: VllmPDWorker loads either the image from the URL or the image embedding from NIXL, depending on whether image_url is set in the request.

Contributor

coderabbitai bot left a comment


Actionable comments posted: 36

🔭 Outside diff range comments (1)
examples/multimodal_v1/utils/protocol.py (1)

177-177: Add newline at end of file.

The file should end with a single newline character to comply with Black formatting requirements.

Add a newline after line 177.

🧹 Nitpick comments (27)
examples/multimodal_v1/utils/logging.py (1)

44-44: Use consistent logging approach.

The function uses both logger.info() and print() for output. Consider using the logger consistently for better log management and formatting.

-    print(f"Workers ready: {worker_ids}")
+    logger.info(f"Workers ready: {worker_ids}")
examples/multimodal_v1/utils/image_loader.py (1)

1-88: Apply Black formatting

The pipeline indicates Black formatting issues. Please run black on this file to fix formatting.

examples/multimodal_v1/utils/args.py (2)

16-17: Address TODO comment about naming ambiguity

The TODO comment indicates potential naming conflicts with the vllm package. Consider renaming this module to something more specific like multimodal_args.py or vllm_config_parser.py.

Would you like me to open an issue to track this renaming task?


23-23: Add type hints for function parameters

The function parameters lack type annotations for better code clarity and type safety.

-def parse_vllm_args(service_name, prefix) -> AsyncEngineArgs:
+def parse_vllm_args(service_name: str, prefix: str) -> AsyncEngineArgs:
examples/multimodal_v1/components/web.py (2)

27-27: Remove unused logger

The logger is defined but never used in this module.

Either remove the unused logger or add appropriate logging statements for debugging and monitoring.


62-62: Consider proper SSE formatting for streaming responses

The response uses the text/event-stream media type but doesn't format the output as Server-Sent Events (SSE). For proper SSE streaming, each message should be prefixed with "data: " and terminated by a blank line.

-                    yield s
+                    yield f"data: {json.dumps(s)}\n\n"
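
For reference, a complete SSE-framed streaming endpoint looks roughly like the sketch below; this is illustrative rather than the example's actual frontend code, and the "data: [DONE]" terminator follows the OpenAI streaming convention:

import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()


@app.get("/stream")
async def stream() -> StreamingResponse:
    async def event_source():
        # Each SSE event is a "data: " line terminated by a blank line.
        for chunk in ({"delta": "Hello"}, {"delta": " world"}):
            yield f"data: {json.dumps(chunk)}\n\n"
        # OpenAI-style streams signal completion with a [DONE] sentinel.
        yield "data: [DONE]\n\n"

    return StreamingResponse(event_source(), media_type="text/event-stream")
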
examples/multimodal_v1/configs/agg-phi3v.yaml (1)

33-34: Document the mm-processor-kwargs configuration

The mm-processor-kwargs with num_crops: 16 is model-specific configuration. Consider adding a comment explaining what this parameter controls for better maintainability.

   mm-processor-kwargs:
+    # Number of image crops for Phi3V's visual processing
     num_crops: 16
examples/multimodal_v1/utils/nats_queue.py (2)

81-81: Consider using more descriptive method names

The TODO raises a good point. Since this is a general-purpose queue, consider renaming to enqueue_message/dequeue_message or put/get to be more generic and align with common queue interfaces.


91-103: Add configurable timeout and consider async iteration

The hardcoded 0.1s timeout might not be suitable for all use cases. Also, the polling approach could be improved.

Consider this implementation:

-async def clear_queue(self) -> int:
+async def clear_queue(self, dequeue_timeout: float = 0.1) -> int:
     try:
         cleared_count = 0
         # Continue until we can't dequeue any more messages
         while True:
-            # use a small timeout
-            message = await self.dequeue_task(timeout=0.1)
+            message = await self.dequeue_task(timeout=dequeue_timeout)
             if message is None:
                 break
             cleared_count += 1
         return cleared_count
     except Exception as e:
         raise RuntimeError(f"Failed to clear queue: {e}")

For future enhancement, consider adding an async iterator interface for more Pythonic queue consumption.
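
Such an iterator could take roughly the following shape; dequeue_task mirrors the signature used above, while the class name and stop condition are illustrative assumptions:

from typing import Any, Optional


class QueueDrainIterator:
    """Async-iterator sketch: yields messages until the queue is drained."""

    def __init__(self, queue: Any, dequeue_timeout: float = 0.1) -> None:
        self.queue = queue
        self.dequeue_timeout = dequeue_timeout

    def __aiter__(self) -> "QueueDrainIterator":
        return self

    async def __anext__(self) -> Any:
        message: Optional[Any] = await self.queue.dequeue_task(
            timeout=self.dequeue_timeout
        )
        if message is None:
            # No message within the timeout: treat the queue as drained.
            raise StopAsyncIteration
        return message


# Usage: async for message in QueueDrainIterator(queue): ...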

examples/multimodal_v1/components/direct_processor.py (2)

171-173: Implement completion request handling

The completion request type is defined in the enum but not implemented. This could cause runtime errors if completion requests are sent to this processor.

Would you like me to help implement the completion request handling or create an issue to track this TODO?


99-103: Uncomment or remove etcd_kv_cache initialization

The etcd cache initialization is commented out. If it's not needed, remove it entirely. If it's planned for future use, add a TODO comment explaining when it will be enabled.

examples/multimodal_v1/utils/model.py (1)

51-58: Clean up or document commented code

The commented code shows alternative model loading methods but provides no context. Either remove it or add a comment explaining why these alternatives exist.

Consider adding a comment like:

# Alternative loading methods for specific model types:
# - Direct transformers loading (for debugging or when vLLM loading fails)
# model = LlavaForConditionalGeneration.from_pretrained(...)
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(...)
examples/multimodal_v1/connect/README.md (3)

26-44: Fix list indentation for markdown consistency

The unordered lists have 2-space indentation but markdown linters expect 0-space indentation.

-  - **Register local readable memory**:
-
-    Register local memory buffer(s) with the RDMA subsystem to enable a remote worker to read from.
-
-  - **Register local writable memory**:
-
-    Register local memory buffer(s) with the RDMA subsystem to enable a remote worker to write to.
-
-  - **Read from registered, remote memory**:
-
-    Read remote memory buffer(s), registered by a remote worker to be readable, into local memory buffer(s).
-
-  - **Write to registered, remote memory**:
-
-    Write local memory buffer(s) to remote memory buffer(s) registered by a remote worker to writable.
+- **Register local readable memory**:
+
+  Register local memory buffer(s) with the RDMA subsystem to enable a remote worker to read from.
+
+- **Register local writable memory**:
+
+  Register local memory buffer(s) with the RDMA subsystem to enable a remote worker to write to.
+
+- **Read from registered, remote memory**:
+
+  Read remote memory buffer(s), registered by a remote worker to be readable, into local memory buffer(s).
+
+- **Write to registered, remote memory**:
+
+  Write local memory buffer(s) to remote memory buffer(s) registered by a remote worker to writable.

52-52: Fix typo in "requested"

-Local then sends metadata about the requuested RDMA operation to Remote.
+Local then sends metadata about the requested RDMA operation to Remote.

163-163: Consider replacing "prior to" with "before"

For more concise writing, consider replacing "prior to" with "before" throughout the document.

Also applies to: 176-176, 189-189, 202-202, 245-245, 269-269, 293-293, 317-317

examples/multimodal_v1/components/encode_worker.py (2)

68-68: Clean up commented code blocks

Large sections of commented code make the file harder to read. Consider:

  1. Remove if no longer needed
  2. Add explanatory comments if kept for reference
  3. Move to documentation if showing alternative approaches

The commented code appears to show:

  • Alternative vision model loading (line 68)
  • Different embedding extraction approach (lines 98-133)

Either remove or add a comment explaining why these alternatives exist.

Also applies to: 98-133


69-71: Consider using the load_vision_model utility

You have a commented line using load_vision_model from the utils module, but instead directly load the model. Consider using the utility function for consistency across the codebase.

-# self.vision_model = load_vision_model(self.MODEL_ID)
-self.vision_model = LlavaForConditionalGeneration.from_pretrained(
-    self.MODEL_ID, device_map="auto", torch_dtype=torch.float16
-).eval()
+from utils.model import load_vision_model
+self.vision_model = load_vision_model(self.MODEL_ID)
examples/multimodal_v1/connect/__init__.py (3)

80-86: Simplify redundant conditional assignment.

The notification_key is guaranteed to be non-None after line 80, making the conditional assignment on line 86 redundant.

-        notification_key = str(uuid.uuid4()) if notification_key is None else notification_key
-        if not isinstance(notification_key, str):
-            raise TypeError("Argument `notification_key` must be `str` or `None`.")
-        if len(notification_key) == 0:
-            raise ValueError("Argument `notification_key` must not be an empty string.")
-
-        self._notification_key: str = "" if notification_key is None else notification_key
+        notification_key = str(uuid.uuid4()) if notification_key is None else notification_key
+        if not isinstance(notification_key, str):
+            raise TypeError("Argument `notification_key` must be `str` or `None`.")
+        if len(notification_key) == 0:
+            raise ValueError("Argument `notification_key` must not be an empty string.")
+
+        self._notification_key: str = notification_key

860-860: Remove empty line with indentation.

Line 860 contains an empty line with indentation that should be completely empty.

Remove the indentation from the empty line 860.


727-727: Add space around comparison operator for consistency.

-            self._data_ref = data[3] if len(data) >=4 else None
+            self._data_ref = data[3] if len(data) >= 4 else None
examples/multimodal_v1/utils/protocol.py (2)

61-68: Document potential risks of monkey-patching.

While the monkey-patching is necessary for Pydantic compatibility, it could break if vLLM changes its internal structure. Consider adding a more detailed comment about this risk and potential version compatibility issues.

 # Monkey-patch the SamplingParams type to add a dummy core schema so pydantic can validate it
 # SamplingParams is a msgspec struct
 # SamplingParams is defined here: https://github.com/vllm-project/vllm/blob/a4c402a756fa3213caf9d2cde0e4ceb2d57727f2/vllm/sampling_params.py#L88
+# WARNING: This monkey-patch may break if vLLM changes the SamplingParams implementation.
+# Consider pinning the vLLM version or implementing a wrapper class instead.
 
 SamplingParams.__get_pydantic_core_schema__ = classmethod(
     lambda cls, source, handler: core_schema.any_schema()
 )
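
If the wrapper-class route were taken instead, one possible shape is the following sketch; it assumes Pydantic v2 and an installed vLLM, and the model name is hypothetical, not a drop-in replacement:

from pydantic import BaseModel, ConfigDict
from vllm import SamplingParams


class GenerateRequest(BaseModel):
    # Let Pydantic accept the msgspec-based SamplingParams via a plain
    # isinstance check instead of monkey-patching a core schema onto it.
    model_config = ConfigDict(arbitrary_types_allowed=True)

    prompt: str
    sampling_params: SamplingParams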

150-150: Track the TODO for removing image_features.

The comment indicates that image_features field will be removed once NIXL support is added. This should be tracked properly.

Would you like me to create an issue to track the removal of the image_features field once NIXL support is fully implemented?

examples/multimodal_v1/backup_README.md (1)

89-89: Improve sentence variety to enhance readability

The static analysis correctly identifies three consecutive sentences starting with "If serving...". Consider rephrasing for better flow.

-If serving the example Qwen model, replace `"llava-hf/llava-1.5-7b-hf"` in the `"model"` field with `"Qwen/Qwen2.5-VL-7B-Instruct"`. If serving the example Phi3V model, replace `"llava-hf/llava-1.5-7b-hf"` in the `"model"` field with `"microsoft/Phi-3.5-vision-instruct"`.
+For the Qwen model example, replace `"llava-hf/llava-1.5-7b-hf"` in the `"model"` field with `"Qwen/Qwen2.5-VL-7B-Instruct"`. When using the Phi3V model example, replace it with `"microsoft/Phi-3.5-vision-instruct"`.
examples/multimodal_v1/components/processor.py (2)

97-101: Remove commented code

The commented etcd_kv_cache initialization should be removed if not needed, or implemented if required.

-        # self.etcd_kv_cache = await EtcdKvCache.create(
-        #     runtime.etcd_client(),
-        #     "/dynamo/processor/",
-        #     {"router": self.engine_args.router},
-        # )

198-206: Improve image URL extraction logic

The current implementation only extracts the first image URL found. Consider handling multiple images or validating the message structure more thoroughly.

         image_url = None
-
         for message in raw_request.messages:
             for item in message.content:
                 if item.type == "image_url":
                     image_url = item.image_url.url
+                    break  # Exit inner loop once found
+            if image_url:
+                break  # Exit outer loop once found
+        
         if image_url is None:
             raise ValueError("Image URL is required")
examples/multimodal_v1/utils/nixl.py (2)

25-25: Consider making metadata directory configurable

The hardcoded /tmp/nixl directory could cause issues in containerized environments or when running multiple instances.

-METADATA_DIR = "/tmp/nixl"
+METADATA_DIR = os.environ.get("NIXL_METADATA_DIR", "/tmp/nixl")

84-92: Optimize etcd lookup by using single-key get instead of prefix scan

Since you already know the exact key ({self._key_prefix}/{engine_id}), replace the prefix scan and loop with a direct kv_get call:

             key = "/".join([self._key_prefix, engine_id])
-            key_values = await self._client.kv_get_prefix(key)
-            deserialized_metadata = None
-
-            for item in key_values:
-                deserialized_metadata = msgspec.msgpack.decode(
-                    item["value"], type=NixlMetadata
-                )
-                break
+            # If the client supports direct key lookup, prefer this for performance
+            value = await self._client.kv_get(key)
+            deserialized_metadata = None
+
+            if value:
+                deserialized_metadata = msgspec.msgpack.decode(
+                    value, type=NixlMetadata
+                )

• Please verify that the etcd client returned by runtime.etcd_client() exposes a kv_get(key) method that returns the raw bytes (not a key/value list).
• If kv_get isn’t available or returns a different structure, continue using kv_get_prefix as-is.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1eadc01 and d4c6518.

📒 Files selected for processing (24)
  • examples/multimodal_v1/README.md (1 hunks)
  • examples/multimodal_v1/backup_README.md (1 hunks)
  • examples/multimodal_v1/components/direct_processor.py (1 hunks)
  • examples/multimodal_v1/components/encode_worker.py (1 hunks)
  • examples/multimodal_v1/components/processor.py (1 hunks)
  • examples/multimodal_v1/components/web.py (1 hunks)
  • examples/multimodal_v1/components/worker.py (1 hunks)
  • examples/multimodal_v1/configs/agg-llava.yaml (1 hunks)
  • examples/multimodal_v1/configs/agg-phi3v.yaml (1 hunks)
  • examples/multimodal_v1/configs/agg-qwen.yaml (1 hunks)
  • examples/multimodal_v1/configs/disagg.yaml (1 hunks)
  • examples/multimodal_v1/configs/llama.yaml (1 hunks)
  • examples/multimodal_v1/connect/README.md (1 hunks)
  • examples/multimodal_v1/connect/__init__.py (1 hunks)
  • examples/multimodal_v1/graphs/agg.py (1 hunks)
  • examples/multimodal_v1/graphs/disagg.py (1 hunks)
  • examples/multimodal_v1/utils/args.py (1 hunks)
  • examples/multimodal_v1/utils/chat_processor.py (1 hunks)
  • examples/multimodal_v1/utils/image_loader.py (1 hunks)
  • examples/multimodal_v1/utils/logging.py (1 hunks)
  • examples/multimodal_v1/utils/model.py (1 hunks)
  • examples/multimodal_v1/utils/nats_queue.py (1 hunks)
  • examples/multimodal_v1/utils/nixl.py (1 hunks)
  • examples/multimodal_v1/utils/protocol.py (1 hunks)
🧰 Additional context used
🧠 Learnings (10)
examples/multimodal_v1/configs/agg-qwen.yaml (2)
Learnt from: krishung5
PR: ai-dynamo/dynamo#1388
File: examples/multimodal/utils/model.py:47-53
Timestamp: 2025-06-09T17:52:06.761Z
Learning: The current get_vision_embeddings_size() function in examples/multimodal/utils/model.py uses a hardcoded fallback of 4096 for hidden_size, which assumes all VLMs follow the LLaVA architecture pattern. This is problematic because different VLMs like Qwen2-VL, MiniCPM-V, and others have different hidden dimensions, making the fallback unreliable and potentially causing tensor shape mismatches.
Learnt from: krishung5
PR: ai-dynamo/dynamo#1388
File: examples/multimodal/utils/model.py:47-53
Timestamp: 2025-06-09T17:52:06.761Z
Learning: Different Vision-Language Models (VLMs) have different hidden dimensions, so using a single hardcoded fallback value of 4096 in get_vision_embeddings_size() is problematic and can cause tensor shape mismatches. The current code uses getattr(config, "hidden_size", 4096) where 4096 is a fallback default based on LLaVA models, but other VLMs like Qwen2-VL may have different dimensions.
examples/multimodal_v1/configs/agg-phi3v.yaml (2)
Learnt from: krishung5
PR: ai-dynamo/dynamo#1388
File: examples/multimodal/utils/model.py:47-53
Timestamp: 2025-06-09T17:52:06.761Z
Learning: The current get_vision_embeddings_size() function in examples/multimodal/utils/model.py uses a hardcoded fallback of 4096 for hidden_size, which assumes all VLMs follow the LLaVA architecture pattern. This is problematic because different VLMs like Qwen2-VL, MiniCPM-V, and others have different hidden dimensions, making the fallback unreliable and potentially causing tensor shape mismatches.
Learnt from: krishung5
PR: ai-dynamo/dynamo#1388
File: examples/multimodal/utils/model.py:47-53
Timestamp: 2025-06-09T17:52:06.761Z
Learning: Different Vision-Language Models (VLMs) have different hidden dimensions, so using a single hardcoded fallback value of 4096 in get_vision_embeddings_size() is problematic and can cause tensor shape mismatches. The current code uses getattr(config, "hidden_size", 4096) where 4096 is a fallback default based on LLaVA models, but other VLMs like Qwen2-VL may have different dimensions.
examples/multimodal_v1/README.md (1)
Learnt from: krishung5
PR: ai-dynamo/dynamo#1388
File: examples/multimodal/utils/model.py:47-53
Timestamp: 2025-06-09T17:52:06.761Z
Learning: The current get_vision_embeddings_size() function in examples/multimodal/utils/model.py uses a hardcoded fallback of 4096 for hidden_size, which assumes all VLMs follow the LLaVA architecture pattern. This is problematic because different VLMs like Qwen2-VL, MiniCPM-V, and others have different hidden dimensions, making the fallback unreliable and potentially causing tensor shape mismatches.
examples/multimodal_v1/configs/agg-llava.yaml (4)
Learnt from: nnshah1
PR: ai-dynamo/dynamo#1444
File: tests/fault_tolerance/configs/agg_tp_1_dp_8.yaml:31-38
Timestamp: 2025-07-01T15:33:53.262Z
Learning: In fault tolerance test configurations, the `resources` section under `ServiceArgs` specifies resources per individual worker, not total resources for all workers. So `workers: 8` with `gpu: '1'` means 8 workers × 1 GPU each = 8 GPUs total.
Learnt from: fsaady
PR: ai-dynamo/dynamo#1730
File: examples/sglang/slurm_jobs/job_script_template.j2:59-59
Timestamp: 2025-07-02T13:20:28.800Z
Learning: In the SLURM job script template at examples/sglang/slurm_jobs/job_script_template.j2, the `--total_nodes` parameter represents the total nodes per worker type (prefill or decode), not the total nodes in the entire cluster. Each worker type needs to know its own group size for distributed coordination.
Learnt from: biswapanda
PR: ai-dynamo/dynamo#1890
File: examples/vllm/deploy/agg.yaml:63-70
Timestamp: 2025-07-14T23:01:16.218Z
Learning: In vLLM worker deployments, grep-based log checks for "VllmWorker.*has been initialized" are appropriate for readiness probes to verify worker startup, but should not be used for liveness probes which need to detect ongoing worker health.
Learnt from: krishung5
PR: ai-dynamo/dynamo#1388
File: examples/multimodal/utils/model.py:47-53
Timestamp: 2025-06-09T17:52:06.761Z
Learning: The current get_vision_embeddings_size() function in examples/multimodal/utils/model.py uses a hardcoded fallback of 4096 for hidden_size, which assumes all VLMs follow the LLaVA architecture pattern. This is problematic because different VLMs like Qwen2-VL, MiniCPM-V, and others have different hidden dimensions, making the fallback unreliable and potentially causing tensor shape mismatches.
examples/multimodal_v1/configs/disagg.yaml (5)
Learnt from: nnshah1
PR: ai-dynamo/dynamo#1444
File: tests/fault_tolerance/configs/agg_tp_1_dp_8.yaml:31-38
Timestamp: 2025-07-01T15:33:53.262Z
Learning: In fault tolerance test configurations, the `resources` section under `ServiceArgs` specifies resources per individual worker, not total resources for all workers. So `workers: 8` with `gpu: '1'` means 8 workers × 1 GPU each = 8 GPUs total.
Learnt from: GuanLuo
PR: ai-dynamo/dynamo#1371
File: examples/llm/benchmarks/vllm_multinode_setup.sh:18-25
Timestamp: 2025-06-05T01:46:15.509Z
Learning: In multi-node setups with head/worker architecture, the head node typically doesn't need environment variables pointing to its own services (like NATS_SERVER, ETCD_ENDPOINTS) because local processes can access them via localhost. Only worker nodes need these environment variables to connect to the head node's external IP address.
Learnt from: fsaady
PR: ai-dynamo/dynamo#1730
File: examples/sglang/slurm_jobs/job_script_template.j2:59-59
Timestamp: 2025-07-02T13:20:28.800Z
Learning: In the SLURM job script template at examples/sglang/slurm_jobs/job_script_template.j2, the `--total_nodes` parameter represents the total nodes per worker type (prefill or decode), not the total nodes in the entire cluster. Each worker type needs to know its own group size for distributed coordination.
Learnt from: fsaady
PR: ai-dynamo/dynamo#1730
File: examples/sglang/slurm_jobs/scripts/worker_setup.py:113-116
Timestamp: 2025-07-03T09:44:41.470Z
Learning: In examples/sglang/slurm_jobs/scripts/worker_setup.py, logging the full configuration file content is acceptable because the config file is public, contains only placeholder replacements (no sensitive data), and provides debugging benefits for users who may want to create configurations based on the logged output.
Learnt from: krishung5
PR: ai-dynamo/dynamo#1388
File: examples/multimodal/utils/model.py:47-53
Timestamp: 2025-06-09T17:52:06.761Z
Learning: Different Vision-Language Models (VLMs) have different hidden dimensions, so using a single hardcoded fallback value of 4096 in get_vision_embeddings_size() is problematic and can cause tensor shape mismatches. The current code uses getattr(config, "hidden_size", 4096) where 4096 is a fallback default based on LLaVA models, but other VLMs like Qwen2-VL may have different dimensions.
examples/multimodal_v1/configs/llama.yaml (2)
Learnt from: nnshah1
PR: ai-dynamo/dynamo#1444
File: tests/fault_tolerance/configs/agg_tp_1_dp_8.yaml:31-38
Timestamp: 2025-07-01T15:33:53.262Z
Learning: In fault tolerance test configurations, the `resources` section under `ServiceArgs` specifies resources per individual worker, not total resources for all workers. So `workers: 8` with `gpu: '1'` means 8 workers × 1 GPU each = 8 GPUs total.
Learnt from: biswapanda
PR: ai-dynamo/dynamo#1890
File: examples/vllm/deploy/agg.yaml:63-70
Timestamp: 2025-07-14T23:01:16.218Z
Learning: In vLLM worker deployments, grep-based log checks for "VllmWorker.*has been initialized" are appropriate for readiness probes to verify worker startup, but should not be used for liveness probes which need to detect ongoing worker health.
examples/multimodal_v1/utils/nats_queue.py (1)
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:07:24.843Z
Learning: The codebase uses async-nats version 0.40, not the older nats crate. Error handling should use async_nats::error::Error variants, not nats::Error variants.
examples/multimodal_v1/components/worker.py (3)
Learnt from: PeaBrane
PR: ai-dynamo/dynamo#1392
File: launch/dynamo-run/src/subprocess/vllm_v1_inc.py:71-71
Timestamp: 2025-06-05T01:04:24.775Z
Learning: The `create_endpoint` method in `WorkerMetricsPublisher` has backward compatibility maintained through pyo3 signature annotation `#[pyo3(signature = (component, dp_rank = None))]`, making the `dp_rank` parameter optional with a default value of `None`.
Learnt from: biswapanda
PR: ai-dynamo/dynamo#1890
File: examples/vllm/deploy/agg.yaml:63-70
Timestamp: 2025-07-14T23:01:16.218Z
Learning: In vLLM worker deployments, grep-based log checks for "VllmWorker.*has been initialized" are appropriate for readiness probes to verify worker startup, but should not be used for liveness probes which need to detect ongoing worker health.
Learnt from: fsaady
PR: ai-dynamo/dynamo#1730
File: examples/sglang/slurm_jobs/scripts/worker_setup.py:230-244
Timestamp: 2025-07-03T10:14:30.570Z
Learning: In examples/sglang/slurm_jobs/scripts/worker_setup.py, background processes (like nats-server, etcd) are intentionally left running even if later processes fail. This design choice allows users to manually connect to nodes and debug issues without having to restart the entire SLURM job from scratch, providing operational flexibility for troubleshooting in cluster environments.
examples/multimodal_v1/utils/model.py (2)
Learnt from: krishung5
PR: ai-dynamo/dynamo#1388
File: examples/multimodal/utils/model.py:47-53
Timestamp: 2025-06-09T17:52:06.761Z
Learning: The current get_vision_embeddings_size() function in examples/multimodal/utils/model.py uses a hardcoded fallback of 4096 for hidden_size, which assumes all VLMs follow the LLaVA architecture pattern. This is problematic because different VLMs like Qwen2-VL, MiniCPM-V, and others have different hidden dimensions, making the fallback unreliable and potentially causing tensor shape mismatches.
Learnt from: krishung5
PR: ai-dynamo/dynamo#1388
File: examples/multimodal/utils/model.py:47-53
Timestamp: 2025-06-09T17:52:06.761Z
Learning: Different Vision-Language Models (VLMs) have different hidden dimensions, so using a single hardcoded fallback value of 4096 in get_vision_embeddings_size() is problematic and can cause tensor shape mismatches. The current code uses getattr(config, "hidden_size", 4096) where 4096 is a fallback default based on LLaVA models, but other VLMs like Qwen2-VL may have different dimensions.
examples/multimodal_v1/utils/chat_processor.py (1)
Learnt from: ishandhanani
PR: ai-dynamo/dynamo#1626
File: lib/llm/src/preprocessor.rs:238-239
Timestamp: 2025-06-24T20:59:35.725Z
Learning: In lib/llm/src/preprocessor.rs, the `sampling_options` call in the `preprocess_request` method is placed in the common section after the match statement on `request.prompt_input_type()`, meaning it applies to both `PromptInput::Tokens` and `PromptInput::Text` request types.
🧬 Code Graph Analysis (1)
examples/multimodal_v1/utils/protocol.py (2)
examples/multimodal_v1/components/encode_worker.py (1)
  • encode (76-170)
examples/multimodal_v1/connect/__init__.py (1)
  • SerializedRequest (1316-1344)
🪛 GitHub Actions: Copyright Checks
examples/multimodal_v1/utils/image_loader.py

[error] 1-1: Invalid or missing copyright header detected. Please add the required SPDX copyright and license header.

🪛 GitHub Actions: Pre Merge Validation of (ai-dynamo/dynamo/refs/pull/1990/merge) by GuanLuo.
examples/multimodal_v1/utils/image_loader.py

[error] Black formatting check failed. Files were reformatted by the black hook.

examples/multimodal_v1/components/worker.py

[error] Black formatting check failed. Files were reformatted by the black hook.

examples/multimodal_v1/components/direct_processor.py

[error] 153-153: Ruff: Local variable prompt_idx is assigned to but never used. (F841)


[error] isort formatting check failed. Files were modified by the isort hook.


[error] Black formatting check failed. Files were reformatted by the black hook.

examples/multimodal_v1/utils/protocol.py

[error] Black formatting check failed. Files were reformatted by the black hook.

🪛 markdownlint-cli2 (0.17.2)
examples/multimodal_v1/README.md

264-264: Multiple headings with the same content

(MD024, no-duplicate-heading)

examples/multimodal_v1/connect/README.md

26-26: Unordered list indentation
Expected: 0; Actual: 2

(MD007, ul-indent)


30-30: Unordered list indentation
Expected: 0; Actual: 2

(MD007, ul-indent)


34-34: Unordered list indentation
Expected: 0; Actual: 2

(MD007, ul-indent)


38-38: Unordered list indentation
Expected: 0; Actual: 2

(MD007, ul-indent)


138-138: Unordered list indentation
Expected: 0; Actual: 2

(MD007, ul-indent)


140-140: Unordered list indentation
Expected: 0; Actual: 2

(MD007, ul-indent)


339-339: Unordered list indentation
Expected: 0; Actual: 2

(MD007, ul-indent)


340-340: Unordered list indentation
Expected: 0; Actual: 2

(MD007, ul-indent)


341-341: Unordered list indentation
Expected: 0; Actual: 2

(MD007, ul-indent)


342-342: Unordered list indentation
Expected: 0; Actual: 2

(MD007, ul-indent)

🪛 LanguageTool
examples/multimodal_v1/connect/README.md

[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...le a remote worker to read from. - Register local writable memory: Register ...

(ENGLISH_WORD_REPEAT_BEGINNING_RULE)


[style] ~145-~145: ‘prior to’ might be wordy. Consider a shorter alternative.
Context: ... been completed or cancel the operation prior to completion. The RDMA metadata must be ...

(EN_WORDINESS_PREMIUM_PRIOR_TO)


[style] ~163-~163: ‘prior to’ might be wordy. Consider a shorter alternative.
Context: ...e awaited until complete or and deleted prior to completion when cancellation is intende...

(EN_WORDINESS_PREMIUM_PRIOR_TO)


[style] ~176-~176: ‘prior to’ might be wordy. Consider a shorter alternative.
Context: ...e awaited until complete or and deleted prior to completion when cancellation is intende...

(EN_WORDINESS_PREMIUM_PRIOR_TO)


[style] ~189-~189: ‘prior to’ might be wordy. Consider a shorter alternative.
Context: ...e awaited until complete or and deleted prior to completion when cancellation is intende...

(EN_WORDINESS_PREMIUM_PRIOR_TO)


[style] ~202-~202: ‘prior to’ might be wordy. Consider a shorter alternative.
Context: ...e awaited until complete or and deleted prior to completion when cancellation is intende...

(EN_WORDINESS_PREMIUM_PRIOR_TO)


[style] ~245-~245: ‘prior to’ might be wordy. Consider a shorter alternative.
Context: ...e awaited until complete or and deleted prior to completion when cancellation is intende...

(EN_WORDINESS_PREMIUM_PRIOR_TO)


[style] ~269-~269: ‘prior to’ might be wordy. Consider a shorter alternative.
Context: ...e awaited until complete or and deleted prior to completion when cancellation is intende...

(EN_WORDINESS_PREMIUM_PRIOR_TO)


[style] ~293-~293: ‘prior to’ might be wordy. Consider a shorter alternative.
Context: ...e awaited until complete or and deleted prior to completion when cancellation is intende...

(EN_WORDINESS_PREMIUM_PRIOR_TO)


[style] ~317-~317: ‘prior to’ might be wordy. Consider a shorter alternative.
Context: ...e awaited until complete or and deleted prior to completion when cancellation is intende...

(EN_WORDINESS_PREMIUM_PRIOR_TO)

examples/multimodal_v1/backup_README.md

[style] ~89-~89: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...d with "Qwen/Qwen2.5-VL-7B-Instruct". If serving the example Phi3V model, replac...

(ENGLISH_WORD_REPEAT_BEGINNING_RULE)

🪛 Ruff (0.12.2)
examples/multimodal_v1/components/worker.py

22-22: typing.AsyncGenerator imported but unused

Remove unused import: typing.AsyncGenerator

(F401)


26-26: utils.model.get_vision_embeddings_info imported but unused

Remove unused import: utils.model.get_vision_embeddings_info

(F401)


28-28: utils.protocol.vLLMGenerateRequest imported but unused

Remove unused import: utils.protocol.vLLMGenerateRequest

(F401)


205-205: f-string without any placeholders

Remove extraneous f prefix

(F541)

examples/multimodal_v1/utils/model.py

24-24: transformers.AutoImageProcessor imported but unused

Remove unused import

(F401)


24-24: transformers.LlavaForConditionalGeneration imported but unused

Remove unused import

(F401)


25-25: transformers.Qwen2_5_VLForConditionalGeneration imported but unused

Remove unused import

(F401)


25-25: transformers.AutoTokenizer imported but unused

Remove unused import

(F401)


25-25: transformers.AutoProcessor imported but unused

Remove unused import

(F401)

examples/multimodal_v1/components/direct_processor.py

21-21: torch imported but unused

Remove unused import: torch

(F401)


27-27: utils.protocol.EncodeRequest imported but unused

Remove unused import

(F401)


27-27: utils.protocol.EncodeResponse imported but unused

Remove unused import

(F401)


34-34: dynamo.runtime.EtcdKvCache imported but unused

Remove unused import: dynamo.runtime.EtcdKvCache

(F401)


149-149: Local variable prompt_idx is assigned to but never used

Remove assignment to unused variable prompt_idx

(F841)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build and Test - vllm
🔇 Additional comments (18)
examples/multimodal_v1/graphs/agg.py (2)

16-19: LGTM! Clean component imports.

The imports are well-organized and follow a logical pattern for the multimodal serving pipeline components.


21-21: LGTM! Clear aggregated pipeline definition.

The component linking establishes a logical flow: Frontend → Processor → VllmEncodeWorker → VllmPDWorker, which aligns with the aggregated serving architecture described in the AI summary.

examples/multimodal_v1/graphs/disagg.py (2)

16-19: LGTM! Consistent component imports for disaggregated architecture.

The imports correctly include both VllmPDWorker and VllmDecodeWorker from the components.worker module, supporting the disaggregated serving pattern.


21-23: LGTM! Well-formatted disaggregated pipeline definition.

The multi-line component linking clearly shows the disaggregated flow with the additional VllmDecodeWorker stage, improving readability compared to a single-line chain.

examples/multimodal_v1/configs/agg-llava.yaml (4)

15-21: LGTM! Well-structured common configuration.

The common parameters are appropriately defined for the LLaVA model with reasonable defaults for block size, max model length, and image processing parameters.


25-28: LGTM! Appropriate processor configuration.

The round-robin router and prompt template correctly integrate image and text inputs for the LLaVA model format.


30-37: LGTM! Consistent resource allocation pattern.

The single GPU allocation per worker aligns with the tensor-parallel-size of 1, following the established pattern from other Dynamo examples.


39-50: LGTM! Comprehensive VllmPDWorker configuration.

The configuration includes appropriate settings for batching, caching, and disaggregation control, with correct common-configs references for the LLaVA model parameters.

examples/multimodal_v1/configs/agg-qwen.yaml (4)

15-18: LGTM! Appropriate Qwen model configuration.

The common parameters are correctly configured for the Qwen2.5-VL-7B-Instruct model with standard block size and max model length settings.


23-26: LGTM! Qwen-specific prompt template.

The prompt template correctly uses Qwen's chat format with system, user, vision, and assistant tokens, properly integrating image and text inputs.


47-54: LGTM! Standard encode worker configuration.

The VllmEncodeWorker configuration follows the established pattern with appropriate GPU allocation and tensor parallelism settings.


28-45: Ignore worker type mismatch for agg-qwen.yaml
The naming and use of VllmDecodeWorker in agg-qwen.yaml is consistent with the other aggregated config (agg-phi3v.yaml)—aggregated multimodal configs don’t all use VllmPDWorker. No changes required.

Likely an incorrect or invalid review comment.

examples/multimodal_v1/configs/agg-phi3v.yaml (1)

36-37: Verify image-token-id and num-patches against Phi-3.5-vision-instruct preprocessing
The values in examples/multimodal_v1/configs/agg-phi3v.yaml are currently hard-coded. Please confirm:

  • num-patches (757): Does this match what calc_num_image_tokens_from_image_size returns for the default image size (e.g., 336×336) in the HF microsoft/Phi-3.5-vision-instruct repo’s processing_phi3_v.py?
  • image-token-id (32000): Run tokenizer.convert_tokens_to_ids("<|image_1|>") (or the appropriate image index) against the model’s tokenizer to ensure it resolves to 32000.
examples/multimodal_v1/configs/llama.yaml (1)

15-56: Configuration aligns with large model requirements

The resource allocation correctly configures 8 GPUs per worker for VllmDecodeWorker and VllmPDWorker, which is appropriate for the Llama-4-Maverick-17B-128E-Instruct-FP8 model as mentioned in the README.

examples/multimodal_v1/utils/nats_queue.py (1)

45-47: The lock is still recommended despite the GIL

While Python's GIL prevents true parallel execution of Python bytecode, the lock is still valuable for async code. Between await points, other coroutines can run and potentially create race conditions in the singleton initialization. Keep the lock for safety.
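
Concretely, the pattern being defended is double-checked initialization around the await points, roughly as in this illustrative sketch (not the module's exact code):

import asyncio
from typing import Optional


class TaskQueueSingleton:
    """Sketch of a lock-guarded async singleton."""

    _instance: Optional["TaskQueueSingleton"] = None
    _lock: asyncio.Lock = asyncio.Lock()

    @classmethod
    async def get_instance(cls) -> "TaskQueueSingleton":
        if cls._instance is None:
            async with cls._lock:
                # Re-check: another coroutine may have finished initializing
                # while this one was suspended waiting for the lock.
                if cls._instance is None:
                    instance = cls()
                    await instance._connect()  # await = suspension point
                    cls._instance = instance
        return cls._instance

    async def _connect(self) -> None:
        await asyncio.sleep(0)  # placeholder for the real NATS connection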

examples/multimodal_v1/connect/__init__.py (1)

26-38: Well-structured GPU acceleration fallback mechanism.

The module properly handles the availability of cupy for GPU acceleration with a graceful fallback to numpy. The logging messages clearly indicate which acceleration mode is being used.
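
The pattern in question is the usual optional-dependency import, roughly as below; this is a sketch, and the module's actual log wording and flag names may differ:

import logging

logger = logging.getLogger(__name__)

try:
    import cupy as xp  # GPU-backed arrays when available

    logger.info("cupy detected; GPU acceleration enabled.")
    GPU_ENABLED = True
except ImportError:
    import numpy as xp  # CPU fallback with a largely compatible API

    logger.info("cupy not available; falling back to numpy.")
    GPU_ENABLED = False

# Downstream code uses xp uniformly, e.g. xp.zeros(1024, dtype=xp.uint8).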

examples/multimodal_v1/utils/chat_processor.py (1)

182-211: Well-implemented streaming delta extraction.

The logic for extracting only new content from the full accumulated text is correctly implemented. This ensures proper incremental updates in streaming responses.
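
The core of that delta extraction reduces to slicing off the previously emitted prefix, as in this minimal sketch (illustrative, not the file's exact code):

def extract_delta(previous_text: str, full_text: str) -> str:
    """Return only the newly generated suffix of the accumulated text."""
    return full_text[len(previous_text):]


# Each streamed step carries the full text so far; only the delta is emitted.
previous = ""
for full_text in ("Hello", "Hello wor", "Hello world"):
    print(repr(extract_delta(previous, full_text)))  # 'Hello', ' wor', 'ld'
    previous = full_text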

examples/multimodal_v1/components/processor.py (1)

61-61: Sampling parameter retrieval is correct

The get_diff_sampling_param() method is the intended vLLM engine-config API for extracting user-specified sampling parameter diffs and is consistently used across the CLI and all example processors. No changes are required here.
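
For context, the call pattern under discussion looks roughly like the sketch below; this assumes a recent vLLM AsyncLLMEngine and may need adjusting across vLLM versions:

from vllm import SamplingParams


async def default_sampling_params(engine) -> SamplingParams:
    # get_diff_sampling_param() returns only the sampling settings that the
    # model's generation config overrides relative to vLLM's defaults.
    model_config = await engine.get_model_config()
    overrides = model_config.get_diff_sampling_param()
    return SamplingParams(**overrides)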

GuanLuo and others added 2 commits July 17, 2025 16:42
Signed-off-by: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
GuanLuo requested a review from krishung5 July 18, 2025 18:23
Collaborator

whoisj left a comment


LGTM - shame that connect is getting copied again. @nnshah1 we really need to merge that lib code.

Contributor

krishung5 left a comment


LGTM, thanks!

Contributor

rmccorm4 left a comment


LGTM other than open comments. Updates to move away from dynamo serve to new UX will be a separate PR.

GuanLuo merged commit 353146e into main Jul 18, 2025
7 checks passed
GuanLuo deleted the gluo/maverick branch July 18, 2025 20:33
ZichengMa added a commit that referenced this pull request Jul 18, 2025
commit e330d96
Author: Yan Ru Pei <yanrpei@gmail.com>
Date:   Fri Jul 18 13:40:54 2025 -0700

    feat: enable / disable chunked prefill for mockers (#2015)

    Signed-off-by: Yan Ru Pei <yanrpei@gmail.com>
    Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

commit 353146e
Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
Date:   Fri Jul 18 13:33:36 2025 -0700

    feat: add vLLM v1 multi-modal example. Add llama4 Maverick example (#1990)

    Signed-off-by: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
    Co-authored-by: krishung5 <krish@nvidia.com>

commit 1f07dab
Author: Jacky <18255193+kthui@users.noreply.github.com>
Date:   Fri Jul 18 13:04:20 2025 -0700

    feat: Add migration to LLM requests (#1930)

commit 5f17918
Author: Tanmay Verma <tanmayv@nvidia.com>
Date:   Fri Jul 18 12:59:34 2025 -0700

    refactor: Migrate to new UX2 for python launch (#2003)

commit fc12436
Author: Graham King <grahamk@nvidia.com>
Date:   Fri Jul 18 14:52:57 2025 -0400

    feat(frontend): router-mode settings (#2001)

commit dc75cf1
Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com>
Date:   Fri Jul 18 18:47:28 2025 +0200

    chore: Move NIXL repo clone to Dockerfiles (#2009)

commit f6f392c
Author: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>
Date:   Thu Jul 17 18:44:17 2025 -0700

    Remove link to the fix for disagg + eagle3 for TRT-LLM example (#2006)

    Signed-off-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>

commit cc90ca6
Author: atchernych <atchernych@nvidia.com>
Date:   Thu Jul 17 18:34:40 2025 -0700

    feat: Create a convenience script to uninstall Dynamo Deploy CRDs (#1933)

commit 267b422
Author: Greg Clark <grclark@nvidia.com>
Date:   Thu Jul 17 20:44:21 2025 -0400

    chore: loosed python requirement versions (#1998)

    Signed-off-by: Greg Clark <grclark@nvidia.com>

commit b8474e5
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Thu Jul 17 16:35:05 2025 -0700

    chore: update cmake and gap installation and sgl in wideep container (#1991)

commit 157a3b0
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 15:38:12 2025 -0700

    fix: incorrect helm upgrade command (#2000)

commit 0dfca2c
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 15:33:33 2025 -0700

    ci: Update trtllm gitlab triggers for new components directory and test script (#1992)

commit f3fb09e
Author: Kris Hung <krish@nvidia.com>
Date:   Thu Jul 17 14:59:59 2025 -0700

    fix: Fix syntax for tokio-console (#1997)

commit dacffb8
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 14:57:10 2025 -0700

    fix: use non-dev golang image for operator (#1993)

commit 2b29a0a
Author: zaristei <zaristei@berkeley.edu>
Date:   Thu Jul 17 13:10:42 2025 -0700

    fix: Working Arm Build Dockerfile for Vllm_v1 (#1844)

commit 2430d89
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 12:57:46 2025 -0700

    test: Add trtllm kv router tests (#1988)

commit 1eadc01
Author: Graham King <grahamk@nvidia.com>
Date:   Thu Jul 17 15:07:41 2025 -0400

    feat(runtime): Support tokio-console (#1986)

commit b62e633
Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
Date:   Thu Jul 17 11:16:28 2025 -0700

    feat: support separate chat_template.jinja file (#1853)

commit 8ae3719
Author: Hongkuan Zhou <tedzhouhk@gmail.com>
Date:   Thu Jul 17 11:12:35 2025 -0700

    chore: add some details to dynamo deploy quickstart and fix deploy.sh (#1978)

    Signed-off-by: Hongkuan Zhou <tedzhouhk@gmail.com>
    Co-authored-by: julienmancuso <161955438+julienmancuso@users.noreply.github.com>

commit 08891ff
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 10:57:42 2025 -0700

    fix: Update trtllm tests to use new scripts instead of dynamo serve (#1979)

commit 49b7a0d
Author: Ryan Olson <ryanolson@users.noreply.github.com>
Date:   Thu Jul 17 08:35:04 2025 -0600

    feat: record + analyze logprobs (#1957)

commit 6d2be14
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 00:17:58 2025 -0700

    refactor: replace vllm with vllm_v1 container (#1953)

    Co-authored-by: alec-flowers <aflowers@nvidia.com>

commit 4d2a31a
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Wed Jul 16 18:04:09 2025 -0700

    chore: add port reservation to utils (#1980)

commit 1e3e4a0
Author: Alec <35311602+alec-flowers@users.noreply.github.com>
Date:   Wed Jul 16 15:54:04 2025 -0700

    fix: port race condition through deterministic ports (#1937)

commit 4ad281f
Author: Tanmay Verma <tanmayv@nvidia.com>
Date:   Wed Jul 16 14:33:51 2025 -0700

    refactor: Move TRTLLM example to the component/backends (#1976)

commit 57d24a1
Author: Misha Chornyi <99709299+mc-nv@users.noreply.github.com>
Date:   Wed Jul 16 14:10:24 2025 -0700

    build: Removing shell configuration violations. It's bad practice to hardcod… (#1973)

commit 182d3b5
Author: Graham King <grahamk@nvidia.com>
Date:   Wed Jul 16 16:12:40 2025 -0400

    chore(bindings): Remove mistralrs / llama.cpp (#1970)

commit def6eaa
Author: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com>
Date:   Wed Jul 16 15:50:23 2025 -0400

    feat: attributions for debian deps of sglang, trtllm, vllm runtime containers (#1971)

commit f31732a
Author: Yan Ru Pei <yanrpei@gmail.com>
Date:   Wed Jul 16 11:22:15 2025 -0700

    feat: integrate mocker with dynamo-run and python cli (#1927)

commit aba6099
Author: Graham King <grahamk@nvidia.com>
Date:   Wed Jul 16 12:26:32 2025 -0400

    perf(router): Remove lock from router hot path (#1963)

commit b212103
Author: Hongkuan Zhou <tedzhouhk@gmail.com>
Date:   Wed Jul 16 08:55:33 2025 -0700

    docs: add notes in docs to deprecate local connector (#1959)

commit 7b325ee
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Tue Jul 15 18:52:00 2025 -0700

    fix: vllm router examples (#1942)

commit a50be1a
Author: hhzhang16 <54051230+hhzhang16@users.noreply.github.com>
Date:   Tue Jul 15 17:58:01 2025 -0700

    feat: update CODEOWNERS (#1926)

commit e260fdf
Author: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com>
Date:   Tue Jul 15 18:49:21 2025 -0400

    feat: add bitnami helm chart attribution (#1943)

    Signed-off-by: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com>
    Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

commit 1c03404
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Tue Jul 15 14:26:24 2025 -0700

    fix: update inference gateway deployment instructions (#1940)

commit 5ca570f
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 16:54:03 2025 -0400

    chore: Rename dynamo.ingress to dynamo.frontend (#1944)

commit 7b9182f
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 16:33:07 2025 -0400

    chore: Move examples/cli to lib/bindings/examples/cli (#1952)

commit 40d40dd
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 16:02:19 2025 -0400

    chore(multi-modal): Rename frontend.py to web.py (#1951)

commit a9e0891
Author: Ryan Olson <ryanolson@users.noreply.github.com>
Date:   Tue Jul 15 12:30:30 2025 -0600

    feat: adding http clients and recorded response stream (#1919)

commit 4128d58
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Tue Jul 15 10:30:47 2025 -0700

    feat: allow helm upgrade using deploy script (#1936)

commit 4da078b
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 12:57:38 2025 -0400

    fix: Remove OpenSSL dependency, use Rust TLS (#1945)

commit fc004d4
Author: jthomson04 <jwillthomson19@gmail.com>
Date:   Tue Jul 15 08:45:42 2025 -0700

    fix: Fix TRT-LLM container build when using a custom pip wheel (#1825)

commit 3c6fc6f
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Mon Jul 14 22:35:20 2025 -0700

    chore: fix typo (#1938)

commit de7fe38
Author: Alec <35311602+alec-flowers@users.noreply.github.com>
Date:   Mon Jul 14 21:47:12 2025 -0700

    feat: add vllm e2e integration tests (#1935)

commit 860f3f7
Author: Keiven C <213854356+keivenchang@users.noreply.github.com>
Date:   Mon Jul 14 21:44:19 2025 -0700

    chore: metrics endpoint variables renamed from HTTP_SERVER->SYSTEM (#1934)

    Co-authored-by: Keiven Chang <keivenchang@users.noreply.github.com>

commit fc402a3
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Mon Jul 14 21:21:20 2025 -0700

    feat: configurable namespace for vllm v1 example (#1909)

commit df40d2c
Author: ZichengMa <zichengma1225@gmail.com>
Date:   Mon Jul 14 21:11:29 2025 -0700

    docs: fix typo and add mount-workspace to vllm doc (#1931)

    Signed-off-by: ZichengMa <zichengma1225@gmail.com>
    Co-authored-by: Alec <35311602+alec-flowers@users.noreply.github.com>

commit 901715b
Author: Tanmay Verma <tanmayv@nvidia.com>
Date:   Mon Jul 14 20:14:51 2025 -0700

    refactor:  Refactor the TRTLLM examples remove dynamo SDK (#1884)

commit 5bf23d5
Author: hhzhang16 <54051230+hhzhang16@users.noreply.github.com>
Date:   Mon Jul 14 18:29:19 2025 -0700

    feat: update DynamoGraphDeployments for vllm_v1 (#1890)

    Co-authored-by: mohammedabdulwahhab <furkhan324@berkeley.edu>

commit 9e76590
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Mon Jul 14 17:29:56 2025 -0700

    docs: organize sglang readme (#1910)

commit ef59ac8
Author: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com>
Date:   Mon Jul 14 16:16:44 2025 -0700

    docs: TRTLLM Example of Llama4+Eagle3 (Speculative Decoding) (#1828)

    Signed-off-by: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com>
    Co-authored-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>

commit 053041e
Author: Jorge António <matroid@outlook.com>
Date:   Tue Jul 15 00:06:38 2025 +0100

    fix: resolve incorrect finish reason propagation (#1857)

commit 3733f58
Author: Graham King <grahamk@nvidia.com>
Date:   Mon Jul 14 19:04:22 2025 -0400

    feat(backends): Python llama.cpp engine (#1925)

commit 6a1350c
Author: Tushar Sharma <tusharma@nvidia.com>
Date:   Mon Jul 14 14:56:36 2025 -0700

    build: minor improvements to sglang dockerfile (#1917)

commit e2a619b
Author: Neelay Shah <neelays@nvidia.com>
Date:   Mon Jul 14 14:52:53 2025 -0700

    fix: remove environment variable passing (#1911)

    Signed-off-by: Neelay Shah <neelays@nvidia.com>
    Co-authored-by: Neelay Shah <neelays@a4u8g-0057.ipp2u2.colossus.nvidia.com>

commit 3d17a49
Author: Schwinn Saereesitthipitak <17022745+galletas1712@users.noreply.github.com>
Date:   Mon Jul 14 14:41:56 2025 -0700

    refactor: remove dynamo build (#1778)

    Signed-off-by: Schwinn Saereesitthipitak <17022745+galletas1712@users.noreply.github.com>

commit 3e0cb07
Author: Anant Sharma <anants@nvidia.com>
Date:   Mon Jul 14 15:43:48 2025 -0400

    fix: copy attributions and license to trtllm runtime container (#1916)

commit fc36bf5
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Mon Jul 14 12:31:49 2025 -0700

    feat: receive kvmetrics from sglang scheduler (#1789)

    Co-authored-by: zixuanzhang226 <zixuanzhang@bytedance.com>

commit df91fce
Author: Yan Ru Pei <yanrpei@gmail.com>
Date:   Mon Jul 14 12:24:04 2025 -0700

    feat: prefill aware routing (#1895)

commit ad8ad66
Author: Graham King <grahamk@nvidia.com>
Date:   Mon Jul 14 15:20:35 2025 -0400

    feat: Shrink the ai-dynamo wheel by 35 MiB (#1918)

    Remove http and llmctl binaries. They have been unused for a while.

commit 480b41d
Author: Graham King <grahamk@nvidia.com>
Date:   Mon Jul 14 15:06:45 2025 -0400

    feat: Python frontend / ingress node (#1912)
alec-flowers added a commit that referenced this pull request Jul 18, 2025
@GuanLuo GuanLuo restored the gluo/maverick branch July 19, 2025 00:35
ZichengMa added a commit that referenced this pull request Jul 21, 2025
commit cb6de94
Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com>
Date:   Sun Jul 20 22:34:50 2025 +0200

    chore: Install vLLM and WideEP kernels in vLLM runtime container (#2010)

    Signed-off-by: Alec <35311602+alec-flowers@users.noreply.github.com>
    Co-authored-by: Alec <35311602+alec-flowers@users.noreply.github.com>
    Co-authored-by: alec-flowers <aflowers@nvidia.com>

commit fe63c17
Author: Alec <35311602+alec-flowers@users.noreply.github.com>
Date:   Fri Jul 18 17:45:08 2025 -0700

    fix: Revert "feat: add vLLM v1 multi-modal example. Add llama4 Maverick ex… (#2017)

commit bf1998f
Author: jthomson04 <jwillthomson19@gmail.com>
Date:   Fri Jul 18 17:23:50 2025 -0700

    fix: Don't detokenize twice in TRT-LLM examples (#1955)

commit 343a481
Author: Ryan Olson <ryanolson@users.noreply.github.com>
Date:   Fri Jul 18 16:22:43 2025 -0600

    feat: http disconnects (#2014)

commit e330d96
Author: Yan Ru Pei <yanrpei@gmail.com>
Date:   Fri Jul 18 13:40:54 2025 -0700

    feat: enable / disable chunked prefill for mockers (#2015)

    Signed-off-by: Yan Ru Pei <yanrpei@gmail.com>
    Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

commit 353146e
Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
Date:   Fri Jul 18 13:33:36 2025 -0700

    feat: add vLLM v1 multi-modal example. Add llama4 Maverick example (#1990)

    Signed-off-by: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
    Co-authored-by: krishung5 <krish@nvidia.com>

commit 1f07dab
Author: Jacky <18255193+kthui@users.noreply.github.com>
Date:   Fri Jul 18 13:04:20 2025 -0700

    feat: Add migration to LLM requests (#1930)

commit 5f17918
Author: Tanmay Verma <tanmayv@nvidia.com>
Date:   Fri Jul 18 12:59:34 2025 -0700

    refactor: Migrate to new UX2 for python launch (#2003)

commit fc12436
Author: Graham King <grahamk@nvidia.com>
Date:   Fri Jul 18 14:52:57 2025 -0400

    feat(frontend): router-mode settings (#2001)

commit dc75cf1
Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com>
Date:   Fri Jul 18 18:47:28 2025 +0200

    chore: Move NIXL repo clone to Dockerfiles (#2009)

commit f6f392c
Author: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>
Date:   Thu Jul 17 18:44:17 2025 -0700

    Remove link to the fix for disagg + eagle3 for TRT-LLM example (#2006)

    Signed-off-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>

commit cc90ca6
Author: atchernych <atchernych@nvidia.com>
Date:   Thu Jul 17 18:34:40 2025 -0700

    feat: Create a convenience script to uninstall Dynamo Deploy CRDs (#1933)

commit 267b422
Author: Greg Clark <grclark@nvidia.com>
Date:   Thu Jul 17 20:44:21 2025 -0400

    chore: loosed python requirement versions (#1998)

    Signed-off-by: Greg Clark <grclark@nvidia.com>

commit b8474e5
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Thu Jul 17 16:35:05 2025 -0700

    chore: update cmake and gap installation and sgl in wideep container (#1991)

commit 157a3b0
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 15:38:12 2025 -0700

    fix: incorrect helm upgrade command (#2000)

commit 0dfca2c
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 15:33:33 2025 -0700

    ci: Update trtllm gitlab triggers for new components directory and test script (#1992)

commit f3fb09e
Author: Kris Hung <krish@nvidia.com>
Date:   Thu Jul 17 14:59:59 2025 -0700

    fix: Fix syntax for tokio-console (#1997)

commit dacffb8
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 14:57:10 2025 -0700

    fix: use non-dev golang image for operator (#1993)

commit 2b29a0a
Author: zaristei <zaristei@berkeley.edu>
Date:   Thu Jul 17 13:10:42 2025 -0700

    fix: Working Arm Build Dockerfile for Vllm_v1 (#1844)

commit 2430d89
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 12:57:46 2025 -0700

    test: Add trtllm kv router tests (#1988)

commit 1eadc01
Author: Graham King <grahamk@nvidia.com>
Date:   Thu Jul 17 15:07:41 2025 -0400

    feat(runtime): Support tokio-console (#1986)

commit b62e633
Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
Date:   Thu Jul 17 11:16:28 2025 -0700

    feat: support separate chat_template.jinja file (#1853)

commit 8ae3719
Author: Hongkuan Zhou <tedzhouhk@gmail.com>
Date:   Thu Jul 17 11:12:35 2025 -0700

    chore: add some details to dynamo deploy quickstart and fix deploy.sh (#1978)

    Signed-off-by: Hongkuan Zhou <tedzhouhk@gmail.com>
    Co-authored-by: julienmancuso <161955438+julienmancuso@users.noreply.github.com>

commit 08891ff
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 10:57:42 2025 -0700

    fix: Update trtllm tests to use new scripts instead of dynamo serve (#1979)

ZichengMa added a commit that referenced this pull request Jul 21, 2025
commit d4b5414
Author: atchernych <atchernych@nvidia.com>
Date:   Mon Jul 21 13:10:24 2025 -0700

    fix: mypy error (#2029)

commit 79337c7
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Mon Jul 21 12:12:16 2025 -0700

    build: support custom TRTLLM build for commits not on main branch (#2021)

commit 95dd942
Author: atchernych <atchernych@nvidia.com>
Date:   Mon Jul 21 12:09:33 2025 -0700

    docs: Post-Merge cleanup of the deploy documentation (#1922)
