
Conversation

ryanolson (Contributor) commented Jul 18, 2025

Summary by CodeRabbit

  • New Features
    • Improved detection and handling of client disconnects for both unary and streaming HTTP requests, with prompt cancellation of long-running operations when the client goes away.
    • Request IDs are now consistently extracted or generated and included in responses, enhancing traceability.
  • Improvements
    • Streaming and aggregation functions now accept a wider range of stream types, increasing flexibility and compatibility.
    • Additional accessors for context identifiers and contents are available.
    • Unified and refactored error handling for better consistency.
  • Bug Fixes
    • Enhanced error handling and response consistency for invalid requests.
  • Tests
    • Added comprehensive tests for client disconnect cancellation and request ID propagation in responses.

…le; added request_id annotation and setting the request id from either request.user, x-dynamo-request-id header or server generate

copy-pr-bot bot commented Jul 18, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

coderabbitai bot (Contributor) commented Jul 18, 2025

Walkthrough

This update introduces a new disconnect module for robust client disconnect handling in HTTP services, refactors error handling and request ID propagation, generalizes stream types across protocol aggregators, and adds comprehensive tests for disconnect and annotation features. Public API surfaces are updated to use improved error and context handling throughout.

Changes

| File(s) | Change Summary |
| --- | --- |
| lib/llm/src/http/service.rs<br>lib/llm/src/http/service/disconnect.rs | Added disconnect module for monitoring client disconnects; exposes connection status and handle abstractions for both unary and streaming HTTP flows. |
| lib/llm/src/http/service/openai.rs | Refactored error handling (`ErrorResponse` → `ErrorMessage`), unified error return types, added request ID extraction, restructured handlers for context and disconnect monitoring, and updated router/test usage. |
| lib/llm/src/protocols.rs<br>lib/llm/src/protocols/openai/chat_completions/aggregator.rs<br>lib/llm/src/protocols/openai/completions/aggregator.rs<br>lib/llm/src/protocols/openai/embeddings/aggregator.rs | Generalized stream input types in protocol aggregator functions from concrete type aliases to generic `impl Stream` trait bounds (see the sketch after this table). |
| lib/llm/tests/http-service.rs | Added LongRunningEngine for disconnect testing; introduced async tests for unary/streaming disconnect cancellation and request ID annotation propagation. |
| lib/runtime/src/pipeline/context.rs | Added accessor methods `id()` and `content()` to `Context<T>` for identifier and content retrieval. |
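
To make the aggregator change concrete, here is a minimal sketch of the before/after signature shape. The `Chunk` item type and the `apply` body are hypothetical stand-ins, not the crate's actual types:

```rust
use futures::{pin_mut, Stream, StreamExt};

// Hypothetical item type standing in for the annotated response chunks.
struct Chunk(String);

// Before (sketch): only the concrete alias was accepted, e.g.
//   async fn apply(stream: DataStream<Chunk>) -> ...
// After (sketch): any stream with the right item type works, with static
// dispatch and no boxing required at the call site.
async fn apply(stream: impl Stream<Item = Chunk>) -> Vec<Chunk> {
    pin_mut!(stream); // pin on the stack so the stream can be polled
    let mut out = Vec::new();
    while let Some(chunk) = stream.next().await {
        out.push(chunk);
    }
    out
}

#[tokio::main]
async fn main() {
    let chunks = futures::stream::iter([Chunk("a".into()), Chunk("b".into())]);
    assert_eq!(apply(chunks).await.len(), 2);
}
```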

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant HTTPService
    participant DisconnectMonitor
    participant EngineContext

    Client->>HTTPService: Sends request (unary or streaming)
    HTTPService->>DisconnectMonitor: Create connection handles
    HTTPService->>EngineContext: Start processing/generation
    Note over HTTPService,DisconnectMonitor: Monitor client connection state
    alt Client disconnects unexpectedly
        DisconnectMonitor->>EngineContext: kill() (cancel processing)
    else Client completes normally
        DisconnectMonitor->>EngineContext: No action (graceful close)
    end
    HTTPService-->>Client: Sends response or ends stream
```
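
A minimal sketch of the monitoring pattern the diagram describes, using `tokio::select!` with `tokio_util`'s `CancellationToken` as a stand-in. All names here are hypothetical and do not mirror the actual `disconnect.rs` API:

```rust
use std::sync::Arc;
use tokio_util::sync::CancellationToken;

// Hypothetical stand-in for the real engine context.
struct EngineContext {
    cancel: CancellationToken,
}

impl EngineContext {
    // Cancels the in-flight generation.
    fn kill(&self) {
        self.cancel.cancel();
    }
}

// Spawned once per request: races client disconnect against completion.
async fn monitor(
    engine: Arc<EngineContext>,
    client_closed: CancellationToken,
    finished: CancellationToken,
) {
    tokio::select! {
        // Client went away before the response finished: cancel the engine.
        _ = client_closed.cancelled() => engine.kill(),
        // Response (or stream) completed normally: nothing to cancel.
        _ = finished.cancelled() => {}
    }
}
```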

Poem

🐇
New streams now flow, with context in tow,
If clients depart, the engines will know.
Errors renamed, IDs in the stream,
Disconnects handled, as smooth as a dream.
With tests that assert, and accessors neat—
This rabbit’s code hop is quite the feat!



coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (2)
lib/llm/src/http/service/openai.rs (2)

210-210: Remove unused request_id variables

These request_id variables are generated but immediately overwritten or unused:

  • Line 210: Overwritten by context.id()
  • Line 310: Never used in embeddings handler
  • Line 634: Context already has the ID
-    // todo - extract distributed tracing id and context id from headers
-    let request_id = uuid::Uuid::new_v4().to_string();

     // todo - decide on default

Also applies to: 310-310, 634-634


608-618: Inconsistent validation function return types

The validate_response_unsupported_fields and validate_response_input_is_text_only functions return Option<impl IntoResponse> while other validation functions return Result<(), ErrorResponse>. This inconsistency makes error handling awkward.

Update these functions to return Result<(), ErrorResponse> for consistency:

-    if let Some(resp) = validate_response_unsupported_fields(&request) {
-        return Ok(resp.into_response());
-    }
+    validate_response_unsupported_fields(&request)?;

-pub fn validate_response_unsupported_fields(
-    request: &NvCreateResponse,
-) -> Option<impl IntoResponse> {
+pub fn validate_response_unsupported_fields(
+    request: &NvCreateResponse,
+) -> Result<(), ErrorResponse> {
     let inner = &request.inner;

     if inner.background == Some(true) {
-        return Some(ErrorMessage::not_implemented_error(
+        return Err(ErrorMessage::not_implemented_error(
             "`background: true` is not supported.",
         ));
     }
     // ... similar changes for other fields ...
-    None
+    Ok(())
}

Also applies to: 708-811

🧹 Nitpick comments (3)
lib/llm/src/http/service/disconnect.rs (1)

182-184: Track the TODO for dynamo sentinel event handling

The TODO comment indicates a potential ordering issue where a dynamo sentinel event might need to be yielded before the [DONE] event to prevent the async-openai client from "chomping" it.

Would you like me to create an issue to track this TODO item for proper sentinel event ordering?

lib/llm/tests/http-service.rs (2)

91-91: Document the reason for using deprecated max_tokens

The #[allow(deprecated)] attribute suppresses warnings but doesn't explain why max_tokens is still being used.

Add a comment explaining why the deprecated field is still necessary:

-        // ALLOW: max_tokens is deprecated in favor of completion_usage_tokens
+        // ALLOW: max_tokens is deprecated in favor of max_completion_tokens
+        // but we still use it for backward compatibility in tests
         #[allow(deprecated)]

1134-1135: Technical debt: Improve test fixtures

The TODO comment correctly identifies that the test setup is repetitive. Each test creates its own service, engines, and client configuration.

Would you like me to help refactor the test setup into reusable fixtures that could reduce duplication across these integration tests?

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fc12436 and d5a9897.

📒 Files selected for processing (9)
  • lib/llm/src/http/service.rs (1 hunks)
  • lib/llm/src/http/service/disconnect.rs (1 hunks)
  • lib/llm/src/http/service/openai.rs (29 hunks)
  • lib/llm/src/protocols.rs (3 hunks)
  • lib/llm/src/protocols/openai/chat_completions/aggregator.rs (3 hunks)
  • lib/llm/src/protocols/openai/completions/aggregator.rs (3 hunks)
  • lib/llm/src/protocols/openai/embeddings/aggregator.rs (3 hunks)
  • lib/llm/tests/http-service.rs (5 hunks)
  • lib/runtime/src/pipeline/context.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (8)
lib/llm/src/protocols.rs (2)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:07:24.843Z
Learning: The codebase uses async-nats version 0.40, not the older nats crate. Error handling should use async_nats::error::Error variants, not nats::Error variants.
lib/llm/src/protocols/openai/embeddings/aggregator.rs (2)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.
Learnt from: t-ob
PR: ai-dynamo/dynamo#1290
File: launch/dynamo-run/src/subprocess/sglang_inc.py:80-110
Timestamp: 2025-06-03T10:17:51.711Z
Learning: The sglang `async_encode` method does not support streaming options, so collecting all embeddings before yielding is the correct approach for embedding requests.
lib/llm/src/protocols/openai/completions/aggregator.rs (2)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:07:24.843Z
Learning: The codebase uses async-nats version 0.40, not the older nats crate. Error handling should use async_nats::error::Error variants, not nats::Error variants.
lib/llm/src/protocols/openai/chat_completions/aggregator.rs (2)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:07:24.843Z
Learning: The codebase uses async-nats version 0.40, not the older nats crate. Error handling should use async_nats::error::Error variants, not nats::Error variants.
lib/llm/tests/http-service.rs (2)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.
Learnt from: oandreeva-nv
PR: ai-dynamo/dynamo#1195
File: lib/llm/tests/block_manager.rs:150-152
Timestamp: 2025-06-02T19:37:27.666Z
Learning: In Rust/Tokio applications, when background tasks use channels for communication, dropping the sender automatically signals task termination when the receiver gets `None`. The `start_batching_publisher` function in `lib/llm/tests/block_manager.rs` demonstrates this pattern: when the `KVBMDynamoRuntimeComponent` is dropped, its `batch_tx` sender is dropped, causing `rx.recv()` to return `None`, which triggers cleanup and task termination.
lib/runtime/src/pipeline/context.rs (1)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.
lib/llm/src/http/service/disconnect.rs (2)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1093
File: lib/llm/src/block_manager/block/registry.rs:98-122
Timestamp: 2025-05-29T06:20:12.901Z
Learning: In lib/llm/src/block_manager/block/registry.rs, the background task spawned for handling unregister notifications uses detached concurrency by design. The JoinHandle is intentionally not stored as this represents a reasonable architectural tradeoff for a long-running cleanup task.
Learnt from: oandreeva-nv
PR: ai-dynamo/dynamo#1195
File: lib/llm/tests/block_manager.rs:150-152
Timestamp: 2025-06-02T19:37:27.666Z
Learning: In Rust/Tokio applications, when background tasks use channels for communication, dropping the sender automatically signals task termination when the receiver gets `None`. The `start_batching_publisher` function in `lib/llm/tests/block_manager.rs` demonstrates this pattern: when the `KVBMDynamoRuntimeComponent` is dropped, its `batch_tx` sender is dropped, causing `rx.recv()` to return `None`, which triggers cleanup and task termination.
lib/llm/src/http/service/openai.rs (8)
Learnt from: PeaBrane
PR: ai-dynamo/dynamo#1392
File: lib/llm/src/kv_router/scoring.rs:35-46
Timestamp: 2025-06-05T01:02:15.318Z
Learning: In lib/llm/src/kv_router/scoring.rs, PeaBrane prefers panic-based early failure over Result-based error handling for the worker_id() method to catch invalid data early during development.
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:07:24.843Z
Learning: The codebase uses async-nats version 0.40, not the older nats crate. Error handling should use async_nats::error::Error variants, not nats::Error variants.
Learnt from: t-ob
PR: ai-dynamo/dynamo#1290
File: launch/dynamo-run/src/subprocess/sglang_inc.py:80-110
Timestamp: 2025-06-03T10:17:51.711Z
Learning: The sglang `async_encode` method does not support streaming options, so collecting all embeddings before yielding is the correct approach for embedding requests.
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:32:05.022Z
Learning: In async-nats, the "no responders" error is represented as async_nats::error::RequestErrorKind::NoResponders. Use err.downcast_ref::<async_nats::error::RequestError>() and then check req_err.kind() against RequestErrorKind::NoResponders to handle this error properly.
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:32:05.022Z
Learning: In async-nats, the "no responders" error is represented as async_nats::client::RequestErrorKind::NoResponders, not async_nats::Error::NoResponders. Use err.downcast_ref::<async_nats::client::RequestError>() and then check request_err.kind() against RequestErrorKind::NoResponders.
Learnt from: ishandhanani
PR: ai-dynamo/dynamo#1626
File: lib/llm/src/preprocessor.rs:238-239
Timestamp: 2025-06-24T20:59:35.725Z
Learning: In lib/llm/src/preprocessor.rs, the `sampling_options` call in the `preprocess_request` method is placed in the common section after the match statement on `request.prompt_input_type()`, meaning it applies to both `PromptInput::Tokens` and `PromptInput::Text` request types.
Learnt from: oandreeva-nv
PR: ai-dynamo/dynamo#1195
File: lib/llm/tests/block_manager.rs:150-152
Timestamp: 2025-06-02T19:37:27.666Z
Learning: In Rust/Tokio applications, when background tasks use channels for communication, dropping the sender automatically signals task termination when the receiver gets `None`. The `start_batching_publisher` function in `lib/llm/tests/block_manager.rs` demonstrates this pattern: when the `KVBMDynamoRuntimeComponent` is dropped, its `batch_tx` sender is dropped, causing `rx.recv()` to return `None`, which triggers cleanup and task termination.
🧬 Code Graph Analysis (1)
lib/runtime/src/pipeline/context.rs (3)
lib/runtime/src/service.rs (1)
  • id (66-74)
lib/llm/src/protocols.rs (1)
  • content (47-47)
lib/llm/src/protocols/openai/completions.rs (1)
  • content (51-53)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: pre-merge-rust (.)
  • GitHub Check: pre-merge-rust (lib/bindings/python)
  • GitHub Check: pre-merge-rust (lib/runtime/examples)
  • GitHub Check: Build and Test - vllm
🔇 Additional comments (18)
lib/llm/src/http/service.rs (1)

23-23: LGTM! Clean module addition.

The public disconnect module addition follows the established pattern and integrates well with the existing module structure.

lib/runtime/src/pipeline/context.rs (1)

78-86: LGTM! Well-designed accessor methods.

The new id() and content() methods provide clean, explicit access to the context's identifier and data. The implementation correctly delegates to the controller for the ID and returns a reference to the current data, following standard Rust conventions.
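
For illustration, accessors of this shape might look like the following; the field names are assumed, not the actual struct layout:

```rust
// Field names here are assumed for illustration only.
struct Controller {
    id: String,
}

pub struct Context<T> {
    controller: Controller,
    current: T,
}

impl<T> Context<T> {
    /// The context's identifier, delegated to the controller.
    pub fn id(&self) -> &str {
        &self.controller.id
    }

    /// A shared reference to the wrapped data.
    pub fn content(&self) -> &T {
        &self.current
    }
}
```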

lib/llm/src/protocols.rs (2)

22-22: LGTM! Proper import addition.

Adding the Stream trait import alongside StreamExt supports the function signature generalization.


52-66: LGTM! Excellent generalization of stream handling.

The function signature has been properly generalized from a concrete DataStream to a generic impl Stream trait bound, which increases flexibility while maintaining the same functionality. The implementation correctly maps over the stream items and handles both success and error cases appropriately.

lib/llm/src/protocols/openai/embeddings/aggregator.rs (3)

23-23: LGTM! Consistent import update.

Adding the Stream trait import follows the established pattern across the codebase.


61-61: LGTM! Proper stream generalization.

The method signature has been correctly generalized to accept any stream implementing the required trait bounds, improving flexibility while maintaining type safety.


136-136: LGTM! Consistent API generalization.

The from_annotated_stream method signature follows the same generalization pattern, maintaining consistency across the aggregator interface.

lib/llm/src/protocols/openai/completions/aggregator.rs (3)

19-19: LGTM! Consistent import pattern.

The Stream trait import addition aligns with the generalization changes throughout the codebase.


67-67: LGTM! Proper method signature generalization.

The apply method signature has been correctly generalized to work with any stream implementing the required trait, maintaining consistency with other aggregators.


186-186: LGTM! Consistent API design.

The from_annotated_stream method signature follows the established generalization pattern, completing the uniform interface across all protocol aggregators.

lib/llm/src/protocols/openai/chat_completions/aggregator.rs (1)

16-16: Good API generalization with improved flexibility

The change from concrete DataStream to generic impl Stream trait bounds is a solid improvement that:

  • Reduces coupling to specific stream implementations
  • Maintains backward compatibility (DataStream implements Stream)
  • Uses zero-cost abstraction with static dispatch
  • Aligns with Rust's principle of accepting the most general interface

Also applies to: 97-97, 262-262

lib/llm/src/http/service/disconnect.rs (1)

1-196: Well-designed disconnect monitoring implementation

The module provides a robust solution for client disconnect detection with:

  • Clear separation of concerns between connection and stream lifecycle
  • Proper resource cleanup via Drop trait
  • Efficient concurrent monitoring using tokio::select!
  • Good documentation explaining the two-phase approach

The detached task pattern (line 110) aligns with the approved architectural approach from previous learnings.

lib/llm/tests/http-service.rs (2)

57-160: Well-designed test engine for cancellation testing

The LongRunningEngine implementation effectively simulates long-running operations with proper cancellation detection using atomic flags and tokio::select!. The pattern of initially setting the flag to true and only clearing it on successful completion is a good defensive approach.
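
A sketch of that defensive pattern, assuming a `CancellationToken`-style context for brevity (the real test wires this through the engine's own context rather than a raw token):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;
use tokio_util::sync::CancellationToken;

// Defensive pattern: assume we were cancelled unless the work provably
// ran to completion. All names are illustrative.
async fn generate(cancelled: Arc<AtomicBool>, ctx: CancellationToken) {
    cancelled.store(true, Ordering::SeqCst);
    tokio::select! {
        // Simulated long-running generation.
        _ = tokio::time::sleep(Duration::from_secs(30)) => {
            // Only a full, uninterrupted run clears the flag.
            cancelled.store(false, Ordering::SeqCst);
        }
        // The context was killed (e.g. client disconnect): flag stays true.
        _ = ctx.cancelled() => {}
    }
}
```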


955-1261: Comprehensive test coverage for disconnect handling

The three new tests provide excellent coverage for:

  • Unary request cancellation on client disconnect
  • Streaming request cancellation on client disconnect
  • Request ID annotation propagation in SSE streams

The tests properly verify timing constraints and use appropriate assertions. Good use of unique ports to avoid conflicts.

lib/llm/src/http/service/openai.rs (4)

12-28: Excellent error handling refactoring

The restructuring of error handling with ErrorMessage struct and ErrorResponse type alias provides:

  • Cleaner, more consistent error responses
  • Proper HTTP status code mapping
  • Good separation between client errors (4xx) and server errors (5xx)
  • Proper downcast handling for HttpError in from_anyhow

Also applies to: 48-131
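
As a rough sketch of the shape being described, with definitions assumed rather than taken from the module's exact code:

```rust
use axum::{http::StatusCode, Json};

// Sketch only: the real definitions in openai.rs may differ.
#[derive(serde::Serialize)]
struct ErrorMessage {
    error: String,
}

// Pairing the body with a status code makes it usable as an axum response.
type ErrorResponse = (StatusCode, Json<ErrorMessage>);

impl ErrorMessage {
    // Server-side failure (5xx).
    fn internal_server_error(msg: &str) -> ErrorResponse {
        (
            StatusCode::INTERNAL_SERVER_ERROR,
            Json(ErrorMessage { error: msg.to_string() }),
        )
    }

    // Convert an anyhow error; the real code also downcasts a known
    // HttpError to preserve its explicit status code.
    fn from_anyhow(err: anyhow::Error, context: &str) -> ErrorResponse {
        Self::internal_server_error(&format!("{context}: {err}"))
    }
}
```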


133-156: Well-designed request ID extraction with proper fallbacks

The get_or_create_request_id function implements a robust three-tier fallback:

  1. Primary source (e.g., user field)
  2. Request header
  3. Generate new UUID

Good UUID validation and error handling throughout.
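
A sketch of that fallback chain; the signature is assumed for illustration and does not mirror the real function exactly:

```rust
use uuid::Uuid;

// Three-tier fallback (sketch): prefer a primary source such as
// `request.user`, then the `x-dynamo-request-id` header, then a
// server-generated UUID. Invalid UUIDs also fall through to generation.
fn get_or_create_request_id(primary: Option<&str>, header: Option<&str>) -> String {
    primary
        .or(header)
        .and_then(|s| Uuid::parse_str(s).ok())
        .map(|id| id.to_string())
        .unwrap_or_else(|| Uuid::new_v4().to_string())
}
```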


166-198: Excellent handler refactoring with connection monitoring

The pattern of splitting handlers into public entry points and private async implementations provides:

  • Clean separation of concerns
  • Proper connection lifecycle management
  • Consistent error handling with task spawning
  • Good integration with the disconnect monitoring module

Also applies to: 358-388, 564-594


649-652: Redundant context creation

The code creates a new context wrapper around the request, but the original context already exists and should be reused.

-    let request = context.map(|mut _req| {
-        request.inner.stream = Some(false);
-        request
-    });
+    let request = context.map(|mut req| {
+        req.inner.stream = Some(false);
+        req
+    });
⛔ Skipped due to learnings
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.

@ryanolson (Contributor, Author) commented:

@coderabbitai full review

coderabbitai bot (Contributor) commented Jul 18, 2025

✅ Actions performed

Full review triggered.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (2)
lib/llm/src/http/service/openai.rs (2)

209-211: Remove dead code.

The request_id variable is generated but never used, as request ID handling has been moved to the handler function.

-    // todo - extract distributed tracing id and context id from headers
-    let request_id = uuid::Uuid::new_v4().to_string();

309-311: Remove unused request_id variable.

Similar to the completions function, this request_id is generated but never used.

-    // todo - extract distributed tracing id and context id from headers
-    let request_id = uuid::Uuid::new_v4().to_string();
🧹 Nitpick comments (3)
lib/llm/tests/http-service.rs (1)

1131-1261: Thorough request ID annotation test!

The test comprehensively validates request ID propagation through SSE annotations. The TODO comment about fixtures is valid - consider creating test helper functions to reduce setup boilerplate.

lib/llm/src/http/service/disconnect.rs (1)

1-30: Fix typo in documentation.

Line 4: "monitoring and responding" should be "monitor and respond".

-//! The `disconnect` module provides a mechanism for our axum http services to monitoring and responding
+//! The `disconnect` module provides a mechanism for our axum http services to monitor and respond
lib/llm/src/http/service/openai.rs (1)

133-156: Robust request ID resolution!

The function implements a good fallback chain: primary source → header → generated UUID. Consider adding debug logging when falling back to a generated UUID to help with troubleshooting.

     let uuid = match request_id_opt {
         Some(request_id) => {
-            uuid::Uuid::parse_str(request_id).unwrap_or_else(|_| uuid::Uuid::new_v4())
+            uuid::Uuid::parse_str(request_id).unwrap_or_else(|_| {
+                tracing::debug!("Invalid UUID in header, generating new one");
+                uuid::Uuid::new_v4()
+            })
         }
-        None => uuid::Uuid::new_v4(),
+        None => {
+            tracing::debug!("No request ID provided, generating new one");
+            uuid::Uuid::new_v4()
+        },
     };
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fc12436 and d5a9897.

📒 Files selected for processing (9)
  • lib/llm/src/http/service.rs (1 hunks)
  • lib/llm/src/http/service/disconnect.rs (1 hunks)
  • lib/llm/src/http/service/openai.rs (29 hunks)
  • lib/llm/src/protocols.rs (3 hunks)
  • lib/llm/src/protocols/openai/chat_completions/aggregator.rs (3 hunks)
  • lib/llm/src/protocols/openai/completions/aggregator.rs (3 hunks)
  • lib/llm/src/protocols/openai/embeddings/aggregator.rs (3 hunks)
  • lib/llm/tests/http-service.rs (5 hunks)
  • lib/runtime/src/pipeline/context.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (8)
lib/runtime/src/pipeline/context.rs (1)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.
lib/llm/src/protocols/openai/embeddings/aggregator.rs (1)
Learnt from: t-ob
PR: ai-dynamo/dynamo#1290
File: launch/dynamo-run/src/subprocess/sglang_inc.py:80-110
Timestamp: 2025-06-03T10:17:51.711Z
Learning: The sglang `async_encode` method does not support streaming options, so collecting all embeddings before yielding is the correct approach for embedding requests.
lib/llm/src/protocols/openai/completions/aggregator.rs (2)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:07:24.843Z
Learning: The codebase uses async-nats version 0.40, not the older nats crate. Error handling should use async_nats::error::Error variants, not nats::Error variants.
lib/llm/src/protocols.rs (2)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:07:24.843Z
Learning: The codebase uses async-nats version 0.40, not the older nats crate. Error handling should use async_nats::error::Error variants, not nats::Error variants.
lib/llm/src/protocols/openai/chat_completions/aggregator.rs (2)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:07:24.843Z
Learning: The codebase uses async-nats version 0.40, not the older nats crate. Error handling should use async_nats::error::Error variants, not nats::Error variants.
lib/llm/src/http/service/disconnect.rs (2)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1093
File: lib/llm/src/block_manager/block/registry.rs:98-122
Timestamp: 2025-05-29T06:20:12.901Z
Learning: In lib/llm/src/block_manager/block/registry.rs, the background task spawned for handling unregister notifications uses detached concurrency by design. The JoinHandle is intentionally not stored as this represents a reasonable architectural tradeoff for a long-running cleanup task.
Learnt from: oandreeva-nv
PR: ai-dynamo/dynamo#1195
File: lib/llm/tests/block_manager.rs:150-152
Timestamp: 2025-06-02T19:37:27.666Z
Learning: In Rust/Tokio applications, when background tasks use channels for communication, dropping the sender automatically signals task termination when the receiver gets `None`. The `start_batching_publisher` function in `lib/llm/tests/block_manager.rs` demonstrates this pattern: when the `KVBMDynamoRuntimeComponent` is dropped, its `batch_tx` sender is dropped, causing `rx.recv()` to return `None`, which triggers cleanup and task termination.
lib/llm/tests/http-service.rs (2)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.
Learnt from: oandreeva-nv
PR: ai-dynamo/dynamo#1195
File: lib/llm/tests/block_manager.rs:150-152
Timestamp: 2025-06-02T19:37:27.666Z
Learning: In Rust/Tokio applications, when background tasks use channels for communication, dropping the sender automatically signals task termination when the receiver gets `None`. The `start_batching_publisher` function in `lib/llm/tests/block_manager.rs` demonstrates this pattern: when the `KVBMDynamoRuntimeComponent` is dropped, its `batch_tx` sender is dropped, causing `rx.recv()` to return `None`, which triggers cleanup and task termination.
lib/llm/src/http/service/openai.rs (8)
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:07:24.843Z
Learning: The codebase uses async-nats version 0.40, not the older nats crate. Error handling should use async_nats::error::Error variants, not nats::Error variants.
Learnt from: PeaBrane
PR: ai-dynamo/dynamo#1285
File: lib/llm/src/kv_router/scoring.rs:58-63
Timestamp: 2025-05-30T06:38:09.630Z
Learning: In lib/llm/src/kv_router/scoring.rs, the user prefers to keep the panic behavior when calculating load_avg and variance with empty endpoints rather than adding guards for division by zero. They want the code to fail fast on this error condition.
Learnt from: t-ob
PR: ai-dynamo/dynamo#1290
File: launch/dynamo-run/src/subprocess/sglang_inc.py:80-110
Timestamp: 2025-06-03T10:17:51.711Z
Learning: The sglang `async_encode` method does not support streaming options, so collecting all embeddings before yielding is the correct approach for embedding requests.
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:32:05.022Z
Learning: In async-nats, the "no responders" error is represented as async_nats::error::RequestErrorKind::NoResponders. Use err.downcast_ref::<async_nats::error::RequestError>() and then check req_err.kind() against RequestErrorKind::NoResponders to handle this error properly.
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:32:05.022Z
Learning: In async-nats, the "no responders" error is represented as async_nats::client::RequestErrorKind::NoResponders, not async_nats::Error::NoResponders. Use err.downcast_ref::<async_nats::client::RequestError>() and then check request_err.kind() against RequestErrorKind::NoResponders.
Learnt from: ishandhanani
PR: ai-dynamo/dynamo#1626
File: lib/llm/src/preprocessor.rs:238-239
Timestamp: 2025-06-24T20:59:35.725Z
Learning: In lib/llm/src/preprocessor.rs, the `sampling_options` call in the `preprocess_request` method is placed in the common section after the match statement on `request.prompt_input_type()`, meaning it applies to both `PromptInput::Tokens` and `PromptInput::Text` request types.
Learnt from: oandreeva-nv
PR: ai-dynamo/dynamo#1195
File: lib/llm/tests/block_manager.rs:150-152
Timestamp: 2025-06-02T19:37:27.666Z
Learning: In Rust/Tokio applications, when background tasks use channels for communication, dropping the sender automatically signals task termination when the receiver gets `None`. The `start_batching_publisher` function in `lib/llm/tests/block_manager.rs` demonstrates this pattern: when the `KVBMDynamoRuntimeComponent` is dropped, its `batch_tx` sender is dropped, causing `rx.recv()` to return `None`, which triggers cleanup and task termination.
🧬 Code Graph Analysis (2)
lib/runtime/src/pipeline/context.rs (3)
lib/runtime/src/service.rs (1)
  • id (66-74)
lib/llm/src/protocols.rs (1)
  • content (47-47)
lib/llm/src/protocols/openai/completions.rs (1)
  • content (51-53)
lib/llm/src/http/service/disconnect.rs (2)
lib/runtime/src/pipeline/context.rs (8)
  • id (79-81)
  • id (272-274)
  • id (342-344)
  • id (357-359)
  • context (226-228)
  • context (306-308)
  • stopped (296-298)
  • stopped (369-375)
lib/llm/src/http/service/metrics.rs (1)
  • mark_ok (326-328)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: pre-merge-rust (.)
  • GitHub Check: pre-merge-rust (lib/bindings/python)
  • GitHub Check: Build and Test - vllm
🔇 Additional comments (27)
lib/llm/src/http/service.rs (1)

23-23: LGTM: Clean module integration

The public module declaration properly exposes the disconnect handling functionality to the HTTP service layer, enabling connection monitoring and cancellation features.

lib/runtime/src/pipeline/context.rs (1)

78-86: LGTM: Clean accessor methods enhance context usability

The new id() and content() accessor methods provide explicit, well-named access to the context's identifier and wrapped data. These methods support the disconnect handling functionality by facilitating request ID extraction and data access in HTTP handlers.

lib/llm/src/protocols.rs (2)

22-22: LGTM: Stream trait import for generalization

Adding the Stream trait import supports the generalization of stream handling functions.


52-66: LGTM: Excellent stream type generalization

The function signature generalization from concrete DataStream to impl Stream improves flexibility while maintaining the same functionality. The direct return of the mapped stream without boxing is more efficient and cleaner.

lib/llm/src/protocols/openai/completions/aggregator.rs (3)

19-19: LGTM: Stream trait import for consistency

Adding the Stream trait import maintains consistency with the generalization pattern across protocol aggregators.


67-67: LGTM: Stream type generalization enhances flexibility

Generalizing the apply method parameter from concrete DataStream to impl Stream improves flexibility while maintaining the same aggregation logic.


186-186: LGTM: Consistent stream generalization

The from_annotated_stream method generalization aligns with the pattern established in other protocol aggregators, maintaining consistency across the codebase.

lib/llm/src/protocols/openai/embeddings/aggregator.rs (3)

23-23: LGTM: Stream trait import completes generalization pattern

Adding the Stream trait import maintains consistency with the generalization pattern across all protocol aggregators.


61-61: LGTM: Stream type generalization for embeddings

Generalizing the apply method parameter from concrete DataStream to impl Stream maintains consistency with other protocol aggregators while improving flexibility.


136-136: LGTM: Consistent stream interface generalization

The from_annotated_stream method generalization completes the consistent pattern across all protocol aggregators, enhancing flexibility for stream handling.

lib/llm/src/protocols/openai/chat_completions/aggregator.rs (3)

16-16: LGTM!

The import addition is correct and necessary for the generic Stream trait bounds used in the updated function signatures.


96-98: Good API improvement!

Generalizing the stream parameter to accept any impl Stream makes the function more flexible and follows best practices by accepting trait bounds rather than concrete types.


261-263: Consistent API improvement!

The generalization to impl Stream maintains consistency with the apply method and improves API flexibility.

lib/llm/tests/http-service.rs (5)

31-32: LGTM!

All new imports are properly used in the test implementations.

Also applies to: 50-52


57-73: Well-designed test utility!

The LongRunningEngine provides a good way to test cancellation behavior with proper atomic operations for thread-safe cancellation tracking.


114-160: Clever cancellation detection design!

The implementation correctly handles cancellation by defaulting to cancelled=true and only setting it to false on successful completion. The tokio::select! properly monitors both the delay and context cancellation.


955-1037: Comprehensive disconnect test for unary requests!

The test properly simulates client disconnect using timeout and verifies that cancellation propagates to the engine within the expected timeframe.


1039-1129: Well-designed streaming disconnect test!

The test correctly simulates streaming client disconnect by dropping the stream after reading one chunk and verifies timely cancellation propagation.

lib/llm/src/http/service/disconnect.rs (4)

38-92: Well-designed connection handle abstraction!

The ConnectionHandle provides a clean API for monitoring connection state with proper drop semantics. The multiple factory methods offer good ergonomics for different use cases.


94-121: Good connection monitoring setup!

The function correctly creates two handles with appropriate initial states and spawns the monitoring task. The detached concurrency pattern is appropriate here as the task naturally terminates when both handles are dropped.


123-151: Robust connection monitoring logic!

The function correctly handles all connection states and appropriately triggers cancellation via engine_context.kill() for unexpected closures.


153-196: Excellent stream monitoring implementation!

The function properly handles all cases: normal completion, errors, and context cancellation. The use of tokio::select! ensures responsive cancellation detection.

Note the TODO about yielding a dynamo sentinel event - this might be needed for proper client compatibility.

lib/llm/src/http/service/openai.rs (5)

12-47: Clean imports and well-defined constants!

The new imports support the refactored functionality, and the constants for request ID handling are properly defined.


48-131: Excellent error handling refactoring!

The separation of ErrorMessage struct from ErrorResponse type alias improves code clarity and makes error handling more consistent throughout the module.


166-198: Excellent handler refactoring with disconnect monitoring!

The split between the public handler and private async function improves code organization. The connection monitoring is properly integrated with correct handle lifecycle management.


238-269: Well-implemented annotation handling!

The code correctly filters for request ID annotations and prepends them to the response stream using a clean functional approach.
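
A sketch of that prepend pattern using `futures::stream::once` plus `chain`; the event formatting is illustrative, not the service's actual SSE encoding:

```rust
use futures::{stream, Stream, StreamExt};

// Sketch (names assumed): emit the request-id annotation as the first
// event, then forward the engine's events unchanged.
fn with_request_id_annotation(
    request_id: String,
    events: impl Stream<Item = String>,
) -> impl Stream<Item = String> {
    let annotation = format!("event: request_id\ndata: {request_id}\n");
    stream::once(async move { annotation }).chain(events)
}
```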


358-388: Consistent handler pattern across endpoints!

The refactoring pattern is consistently applied to chat_completions and responses handlers, maintaining good code organization and connection monitoring throughout.

Also applies to: 564-594

@ryanolson ryanolson requested a review from kthui July 18, 2025 19:14
@ryanolson ryanolson enabled auto-merge (squash) July 18, 2025 21:07
@ryanolson ryanolson merged commit 343a481 into main Jul 18, 2025
12 checks passed
@ryanolson ryanolson deleted the ryan/http-disconnect branch July 18, 2025 22:22
ZichengMa added a commit that referenced this pull request Jul 21, 2025
commit cb6de94
Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com>
Date:   Sun Jul 20 22:34:50 2025 +0200

    chore: Install vLLM and WideEP kernels in vLLM runtime container (#2010)

    Signed-off-by: Alec <35311602+alec-flowers@users.noreply.github.com>
    Co-authored-by: Alec <35311602+alec-flowers@users.noreply.github.com>
    Co-authored-by: alec-flowers <aflowers@nvidia.com>

commit fe63c17
Author: Alec <35311602+alec-flowers@users.noreply.github.com>
Date:   Fri Jul 18 17:45:08 2025 -0700

    fix: Revert "feat: add vLLM v1 multi-modal example. Add llama4 Maverick ex… (#2017)

commit bf1998f
Author: jthomson04 <jwillthomson19@gmail.com>
Date:   Fri Jul 18 17:23:50 2025 -0700

    fix: Don't detokenize twice in TRT-LLM examples (#1955)

commit 343a481
Author: Ryan Olson <ryanolson@users.noreply.github.com>
Date:   Fri Jul 18 16:22:43 2025 -0600

    feat: http disconnects (#2014)

commit e330d96
Author: Yan Ru Pei <yanrpei@gmail.com>
Date:   Fri Jul 18 13:40:54 2025 -0700

    feat: enable / disable chunked prefill for mockers (#2015)

    Signed-off-by: Yan Ru Pei <yanrpei@gmail.com>
    Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

commit 353146e
Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
Date:   Fri Jul 18 13:33:36 2025 -0700

    feat: add vLLM v1 multi-modal example. Add llama4 Maverick example (#1990)

    Signed-off-by: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
    Co-authored-by: krishung5 <krish@nvidia.com>

commit 1f07dab
Author: Jacky <18255193+kthui@users.noreply.github.com>
Date:   Fri Jul 18 13:04:20 2025 -0700

    feat: Add migration to LLM requests (#1930)

commit 5f17918
Author: Tanmay Verma <tanmayv@nvidia.com>
Date:   Fri Jul 18 12:59:34 2025 -0700

    refactor: Migrate to new UX2 for python launch (#2003)

commit fc12436
Author: Graham King <grahamk@nvidia.com>
Date:   Fri Jul 18 14:52:57 2025 -0400

    feat(frontend): router-mode settings (#2001)

commit dc75cf1
Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com>
Date:   Fri Jul 18 18:47:28 2025 +0200

    chore: Move NIXL repo clone to Dockerfiles (#2009)

commit f6f392c
Author: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>
Date:   Thu Jul 17 18:44:17 2025 -0700

    Remove link to the fix for disagg + eagle3 for TRT-LLM example (#2006)

    Signed-off-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>

commit cc90ca6
Author: atchernych <atchernych@nvidia.com>
Date:   Thu Jul 17 18:34:40 2025 -0700

    feat: Create a convenience script to uninstall Dynamo Deploy CRDs (#1933)

commit 267b422
Author: Greg Clark <grclark@nvidia.com>
Date:   Thu Jul 17 20:44:21 2025 -0400

    chore: loosed python requirement versions (#1998)

    Signed-off-by: Greg Clark <grclark@nvidia.com>

commit b8474e5
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Thu Jul 17 16:35:05 2025 -0700

    chore: update cmake and gap installation and sgl in wideep container (#1991)

commit 157a3b0
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 15:38:12 2025 -0700

    fix: incorrect helm upgrade command (#2000)

commit 0dfca2c
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 15:33:33 2025 -0700

    ci: Update trtllm gitlab triggers for new components directory and test script (#1992)

commit f3fb09e
Author: Kris Hung <krish@nvidia.com>
Date:   Thu Jul 17 14:59:59 2025 -0700

    fix: Fix syntax for tokio-console (#1997)

commit dacffb8
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 14:57:10 2025 -0700

    fix: use non-dev golang image for operator (#1993)

commit 2b29a0a
Author: zaristei <zaristei@berkeley.edu>
Date:   Thu Jul 17 13:10:42 2025 -0700

    fix: Working Arm Build Dockerfile for Vllm_v1 (#1844)

commit 2430d89
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 12:57:46 2025 -0700

    test: Add trtllm kv router tests (#1988)

commit 1eadc01
Author: Graham King <grahamk@nvidia.com>
Date:   Thu Jul 17 15:07:41 2025 -0400

    feat(runtime): Support tokio-console (#1986)

commit b62e633
Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
Date:   Thu Jul 17 11:16:28 2025 -0700

    feat: support separate chat_template.jinja file (#1853)

commit 8ae3719
Author: Hongkuan Zhou <tedzhouhk@gmail.com>
Date:   Thu Jul 17 11:12:35 2025 -0700

    chore: add some details to dynamo deploy quickstart and fix deploy.sh (#1978)

    Signed-off-by: Hongkuan Zhou <tedzhouhk@gmail.com>
    Co-authored-by: julienmancuso <161955438+julienmancuso@users.noreply.github.com>

commit 08891ff
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 10:57:42 2025 -0700

    fix: Update trtllm tests to use new scripts instead of dynamo serve (#1979)

commit 49b7a0d
Author: Ryan Olson <ryanolson@users.noreply.github.com>
Date:   Thu Jul 17 08:35:04 2025 -0600

    feat: record + analyze logprobs (#1957)

commit 6d2be14
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 00:17:58 2025 -0700

    refactor: replace vllm with vllm_v1 container (#1953)

    Co-authored-by: alec-flowers <aflowers@nvidia.com>

commit 4d2a31a
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Wed Jul 16 18:04:09 2025 -0700

    chore: add port reservation to utils (#1980)

commit 1e3e4a0
Author: Alec <35311602+alec-flowers@users.noreply.github.com>
Date:   Wed Jul 16 15:54:04 2025 -0700

    fix: port race condition through deterministic ports (#1937)

commit 4ad281f
Author: Tanmay Verma <tanmayv@nvidia.com>
Date:   Wed Jul 16 14:33:51 2025 -0700

    refactor: Move TRTLLM example to the component/backends (#1976)

commit 57d24a1
Author: Misha Chornyi <99709299+mc-nv@users.noreply.github.com>
Date:   Wed Jul 16 14:10:24 2025 -0700

    build: Removing shell configuration violations. It's bad practice to hardcod… (#1973)

commit 182d3b5
Author: Graham King <grahamk@nvidia.com>
Date:   Wed Jul 16 16:12:40 2025 -0400

    chore(bindings): Remove mistralrs / llama.cpp (#1970)

commit def6eaa
Author: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com>
Date:   Wed Jul 16 15:50:23 2025 -0400

    feat: attributions for debian deps of sglang, trtllm, vllm runtime containers (#1971)

commit f31732a
Author: Yan Ru Pei <yanrpei@gmail.com>
Date:   Wed Jul 16 11:22:15 2025 -0700

    feat: integrate mocker with dynamo-run and python cli (#1927)

commit aba6099
Author: Graham King <grahamk@nvidia.com>
Date:   Wed Jul 16 12:26:32 2025 -0400

    perf(router): Remove lock from router hot path (#1963)

commit b212103
Author: Hongkuan Zhou <tedzhouhk@gmail.com>
Date:   Wed Jul 16 08:55:33 2025 -0700

    docs: add notes in docs to deprecate local connector (#1959)

commit 7b325ee
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Tue Jul 15 18:52:00 2025 -0700

    fix: vllm router examples (#1942)

commit a50be1a
Author: hhzhang16 <54051230+hhzhang16@users.noreply.github.com>
Date:   Tue Jul 15 17:58:01 2025 -0700

    feat: update CODEOWNERS (#1926)

commit e260fdf
Author: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com>
Date:   Tue Jul 15 18:49:21 2025 -0400

    feat: add bitnami helm chart attribution (#1943)

    Signed-off-by: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com>
    Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

commit 1c03404
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Tue Jul 15 14:26:24 2025 -0700

    fix: update inference gateway deployment instructions (#1940)

commit 5ca570f
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 16:54:03 2025 -0400

    chore: Rename dynamo.ingress to dynamo.frontend (#1944)

commit 7b9182f
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 16:33:07 2025 -0400

    chore: Move examples/cli to lib/bindings/examples/cli (#1952)

commit 40d40dd
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 16:02:19 2025 -0400

    chore(multi-modal): Rename frontend.py to web.py (#1951)

commit a9e0891
Author: Ryan Olson <ryanolson@users.noreply.github.com>
Date:   Tue Jul 15 12:30:30 2025 -0600

    feat: adding http clients and recorded response stream (#1919)

commit 4128d58
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Tue Jul 15 10:30:47 2025 -0700

    feat: allow helm upgrade using deploy script (#1936)

commit 4da078b
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 12:57:38 2025 -0400

    fix: Remove OpenSSL dependency, use Rust TLS (#1945)

commit fc004d4
Author: jthomson04 <jwillthomson19@gmail.com>
Date:   Tue Jul 15 08:45:42 2025 -0700

    fix: Fix TRT-LLM container build when using a custom pip wheel (#1825)

commit 3c6fc6f
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Mon Jul 14 22:35:20 2025 -0700

    chore: fix typo (#1938)

commit de7fe38
Author: Alec <35311602+alec-flowers@users.noreply.github.com>
Date:   Mon Jul 14 21:47:12 2025 -0700

    feat: add vllm e2e integration tests (#1935)

commit 860f3f7
Author: Keiven C <213854356+keivenchang@users.noreply.github.com>
Date:   Mon Jul 14 21:44:19 2025 -0700

    chore: metrics endpoint variables renamed from HTTP_SERVER->SYSTEM (#1934)

    Co-authored-by: Keiven Chang <keivenchang@users.noreply.github.com>

commit fc402a3
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Mon Jul 14 21:21:20 2025 -0700

    feat: configurable namespace for vllm v1 example (#1909)

commit df40d2c
Author: ZichengMa <zichengma1225@gmail.com>
Date:   Mon Jul 14 21:11:29 2025 -0700

    docs: fix typo and add mount-workspace to vllm doc (#1931)

    Signed-off-by: ZichengMa <zichengma1225@gmail.com>
    Co-authored-by: Alec <35311602+alec-flowers@users.noreply.github.com>

commit 901715b
Author: Tanmay Verma <tanmayv@nvidia.com>
Date:   Mon Jul 14 20:14:51 2025 -0700

    refactor:  Refactor the TRTLLM examples remove dynamo SDK (#1884)

commit 5bf23d5
Author: hhzhang16 <54051230+hhzhang16@users.noreply.github.com>
Date:   Mon Jul 14 18:29:19 2025 -0700

    feat: update DynamoGraphDeployments for vllm_v1 (#1890)

    Co-authored-by: mohammedabdulwahhab <furkhan324@berkeley.edu>

commit 9e76590
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Mon Jul 14 17:29:56 2025 -0700

    docs: organize sglang readme (#1910)

commit ef59ac8
Author: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com>
Date:   Mon Jul 14 16:16:44 2025 -0700

    docs: TRTLLM Example of Llama4+Eagle3 (Speculative Decoding) (#1828)

    Signed-off-by: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com>
    Co-authored-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>

commit 053041e
Author: Jorge António <matroid@outlook.com>
Date:   Tue Jul 15 00:06:38 2025 +0100

    fix: resolve incorrect finish reason propagation (#1857)

commit 3733f58
Author: Graham King <grahamk@nvidia.com>
Date:   Mon Jul 14 19:04:22 2025 -0400

    feat(backends): Python llama.cpp engine (#1925)

commit 6a1350c
Author: Tushar Sharma <tusharma@nvidia.com>
Date:   Mon Jul 14 14:56:36 2025 -0700

    build: minor improvements to sglang dockerfile (#1917)

commit e2a619b
Author: Neelay Shah <neelays@nvidia.com>
Date:   Mon Jul 14 14:52:53 2025 -0700

    fix: remove environment variable passing (#1911)

    Signed-off-by: Neelay Shah <neelays@nvidia.com>
    Co-authored-by: Neelay Shah <neelays@a4u8g-0057.ipp2u2.colossus.nvidia.com>

commit 3d17a49
Author: Schwinn Saereesitthipitak <17022745+galletas1712@users.noreply.github.com>
Date:   Mon Jul 14 14:41:56 2025 -0700

    refactor: remove dynamo build (#1778)

    Signed-off-by: Schwinn Saereesitthipitak <17022745+galletas1712@users.noreply.github.com>

commit 3e0cb07
Author: Anant Sharma <anants@nvidia.com>
Date:   Mon Jul 14 15:43:48 2025 -0400

    fix: copy attributions and license to trtllm runtime container (#1916)

commit fc36bf5
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Mon Jul 14 12:31:49 2025 -0700

    feat: receive kvmetrics from sglang scheduler (#1789)

    Co-authored-by: zixuanzhang226 <zixuanzhang@bytedance.com>

commit df91fce
Author: Yan Ru Pei <yanrpei@gmail.com>
Date:   Mon Jul 14 12:24:04 2025 -0700

    feat: prefill aware routing (#1895)

commit ad8ad66
Author: Graham King <grahamk@nvidia.com>
Date:   Mon Jul 14 15:20:35 2025 -0400

    feat: Shrink the ai-dynamo wheel by 35 MiB (#1918)

    Remove http and llmctl binaries. They have been unused for a while.

commit 480b41d
Author: Graham King <grahamk@nvidia.com>
Date:   Mon Jul 14 15:06:45 2025 -0400

    feat: Python frontend / ingress node (#1912)
ZichengMa added a commit that referenced this pull request Jul 21, 2025
commit d4b5414
Author: atchernych <atchernych@nvidia.com>
Date:   Mon Jul 21 13:10:24 2025 -0700

    fix: mypy error (#2029)

commit 79337c7
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Mon Jul 21 12:12:16 2025 -0700

    build: support custom TRTLLM build for commits not on main branch (#2021)

commit 95dd942
Author: atchernych <atchernych@nvidia.com>
Date:   Mon Jul 21 12:09:33 2025 -0700

    docs: Post-Merge cleanup of the deploy documentation (#1922)

commit cb6de94
Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com>
Date:   Sun Jul 20 22:34:50 2025 +0200

    chore: Install vLLM and WideEP kernels in vLLM runtime container (#2010)

    Signed-off-by: Alec <35311602+alec-flowers@users.noreply.github.com>
    Co-authored-by: Alec <35311602+alec-flowers@users.noreply.github.com>
    Co-authored-by: alec-flowers <aflowers@nvidia.com>

commit fe63c17
Author: Alec <35311602+alec-flowers@users.noreply.github.com>
Date:   Fri Jul 18 17:45:08 2025 -0700

    fix: Revert "feat: add vLLM v1 multi-modal example. Add llama4 Maverick ex… (#2017)

commit bf1998f
Author: jthomson04 <jwillthomson19@gmail.com>
Date:   Fri Jul 18 17:23:50 2025 -0700

    fix: Don't detokenize twice in TRT-LLM examples (#1955)

commit 343a481
Author: Ryan Olson <ryanolson@users.noreply.github.com>
Date:   Fri Jul 18 16:22:43 2025 -0600

    feat: http disconnects (#2014)

commit e330d96
Author: Yan Ru Pei <yanrpei@gmail.com>
Date:   Fri Jul 18 13:40:54 2025 -0700

    feat: enable / disable chunked prefill for mockers (#2015)

    Signed-off-by: Yan Ru Pei <yanrpei@gmail.com>
    Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

commit 353146e
Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
Date:   Fri Jul 18 13:33:36 2025 -0700

    feat: add vLLM v1 multi-modal example. Add llama4 Maverick example (#1990)

    Signed-off-by: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
    Co-authored-by: krishung5 <krish@nvidia.com>

commit 1f07dab
Author: Jacky <18255193+kthui@users.noreply.github.com>
Date:   Fri Jul 18 13:04:20 2025 -0700

    feat: Add migration to LLM requests (#1930)

commit 5f17918
Author: Tanmay Verma <tanmayv@nvidia.com>
Date:   Fri Jul 18 12:59:34 2025 -0700

    refactor: Migrate to new UX2 for python launch (#2003)

commit fc12436
Author: Graham King <grahamk@nvidia.com>
Date:   Fri Jul 18 14:52:57 2025 -0400

    feat(frontend): router-mode settings (#2001)

commit dc75cf1
Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com>
Date:   Fri Jul 18 18:47:28 2025 +0200

    chore: Move NIXL repo clone to Dockerfiles (#2009)

commit f6f392c
Author: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>
Date:   Thu Jul 17 18:44:17 2025 -0700

    Remove link to the fix for disagg + eagle3 for TRT-LLM example (#2006)

    Signed-off-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>

commit cc90ca6
Author: atchernych <atchernych@nvidia.com>
Date:   Thu Jul 17 18:34:40 2025 -0700

    feat: Create a convenience script to uninstall Dynamo Deploy CRDs (#1933)

commit 267b422
Author: Greg Clark <grclark@nvidia.com>
Date:   Thu Jul 17 20:44:21 2025 -0400

    chore: loosed python requirement versions (#1998)

    Signed-off-by: Greg Clark <grclark@nvidia.com>

commit b8474e5
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Thu Jul 17 16:35:05 2025 -0700

    chore: update cmake and gap installation and sgl in wideep container (#1991)

commit 157a3b0
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 15:38:12 2025 -0700

    fix: incorrect helm upgrade command (#2000)

commit 0dfca2c
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 15:33:33 2025 -0700

    ci: Update trtllm gitlab triggers for new components directory and test script (#1992)

commit f3fb09e
Author: Kris Hung <krish@nvidia.com>
Date:   Thu Jul 17 14:59:59 2025 -0700

    fix: Fix syntax for tokio-console (#1997)

commit dacffb8
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 14:57:10 2025 -0700

    fix: use non-dev golang image for operator (#1993)

commit 2b29a0a
Author: zaristei <zaristei@berkeley.edu>
Date:   Thu Jul 17 13:10:42 2025 -0700

    fix: Working Arm Build Dockerfile for Vllm_v1 (#1844)

commit 2430d89
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 12:57:46 2025 -0700

    test: Add trtllm kv router tests (#1988)

commit 1eadc01
Author: Graham King <grahamk@nvidia.com>
Date:   Thu Jul 17 15:07:41 2025 -0400

    feat(runtime): Support tokio-console (#1986)

commit b62e633
Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
Date:   Thu Jul 17 11:16:28 2025 -0700

    feat: support separate chat_template.jinja file (#1853)

commit 8ae3719
Author: Hongkuan Zhou <tedzhouhk@gmail.com>
Date:   Thu Jul 17 11:12:35 2025 -0700

    chore: add some details to dynamo deploy quickstart and fix deploy.sh (#1978)

    Signed-off-by: Hongkuan Zhou <tedzhouhk@gmail.com>
    Co-authored-by: julienmancuso <161955438+julienmancuso@users.noreply.github.com>

commit 08891ff
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 10:57:42 2025 -0700

    fix: Update trtllm tests to use new scripts instead of dynamo serve (#1979)

commit 49b7a0d
Author: Ryan Olson <ryanolson@users.noreply.github.com>
Date:   Thu Jul 17 08:35:04 2025 -0600

    feat: record + analyze logprobs (#1957)

commit 6d2be14
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 00:17:58 2025 -0700

    refactor: replace vllm with vllm_v1 container (#1953)

    Co-authored-by: alec-flowers <aflowers@nvidia.com>

commit 4d2a31a
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Wed Jul 16 18:04:09 2025 -0700

    chore: add port reservation to utils (#1980)

commit 1e3e4a0
Author: Alec <35311602+alec-flowers@users.noreply.github.com>
Date:   Wed Jul 16 15:54:04 2025 -0700

    fix: port race condition through deterministic ports (#1937)

commit 4ad281f
Author: Tanmay Verma <tanmayv@nvidia.com>
Date:   Wed Jul 16 14:33:51 2025 -0700

    refactor: Move TRTLLM example to the component/backends (#1976)

commit 57d24a1
Author: Misha Chornyi <99709299+mc-nv@users.noreply.github.com>
Date:   Wed Jul 16 14:10:24 2025 -0700

    build: Removing shell configuration violations. It's bad practice to hardcod… (#1973)

commit 182d3b5
Author: Graham King <grahamk@nvidia.com>
Date:   Wed Jul 16 16:12:40 2025 -0400

    chore(bindings): Remove mistralrs / llama.cpp (#1970)

commit def6eaa
Author: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com>
Date:   Wed Jul 16 15:50:23 2025 -0400

    feat: attributions for debian deps of sglang, trtllm, vllm runtime containers (#1971)

commit f31732a
Author: Yan Ru Pei <yanrpei@gmail.com>
Date:   Wed Jul 16 11:22:15 2025 -0700

    feat: integrate mocker with dynamo-run and python cli (#1927)

commit aba6099
Author: Graham King <grahamk@nvidia.com>
Date:   Wed Jul 16 12:26:32 2025 -0400

    perf(router): Remove lock from router hot path (#1963)

commit b212103
Author: Hongkuan Zhou <tedzhouhk@gmail.com>
Date:   Wed Jul 16 08:55:33 2025 -0700

    docs: add notes in docs to deprecate local connector (#1959)

commit 7b325ee
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Tue Jul 15 18:52:00 2025 -0700

    fix: vllm router examples (#1942)

commit a50be1a
Author: hhzhang16 <54051230+hhzhang16@users.noreply.github.com>
Date:   Tue Jul 15 17:58:01 2025 -0700

    feat: update CODEOWNERS (#1926)

commit e260fdf
Author: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com>
Date:   Tue Jul 15 18:49:21 2025 -0400

    feat: add bitnami helm chart attribution (#1943)

    Signed-off-by: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com>
    Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

commit 1c03404
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Tue Jul 15 14:26:24 2025 -0700

    fix: update inference gateway deployment instructions (#1940)

commit 5ca570f
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 16:54:03 2025 -0400

    chore: Rename dynamo.ingress to dynamo.frontend (#1944)

commit 7b9182f
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 16:33:07 2025 -0400

    chore: Move examples/cli to lib/bindings/examples/cli (#1952)

commit 40d40dd
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 16:02:19 2025 -0400

    chore(multi-modal): Rename frontend.py to web.py (#1951)

commit a9e0891
Author: Ryan Olson <ryanolson@users.noreply.github.com>
Date:   Tue Jul 15 12:30:30 2025 -0600

    feat: adding http clients and recorded response stream (#1919)
