
Conversation

Contributor

@ptarasiewiczNV ptarasiewiczNV commented Jul 18, 2025

Overview:

Currently the runtime container still installs ai_dynamo_vllm from a wheel. That package is deprecated; we need to install vLLM and the WideEP kernels instead. Since we currently build them from source, I have added a script that is run in both the dev and runtime containers. The dev containers install vLLM in editable mode.

Will switch to main after #2009 is merged.
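
For context, a minimal sketch of how the new script might be invoked in the two containers. The flag names are taken from the Dockerfile snippet quoted in the review below (`--editable`, `--vllm-ref`, `--max-jobs`, `--arch`); the non-editable runtime invocation and any defaults are assumptions, not confirmed behavior:

```bash
# Hypothetical invocations of container/deps/vllm/install_vllm.sh; flag names come
# from the review below, everything else here is an assumption.

# Dev container: editable install, so local changes to the vLLM checkout take effect
# without reinstalling.
bash container/deps/vllm/install_vllm.sh --editable \
    --vllm-ref "$VLLM_REF" --max-jobs "$MAX_JOBS" --arch "$ARCH"

# Runtime container: presumed non-editable install of vLLM plus the WideEP kernels.
bash container/deps/vllm/install_vllm.sh \
    --vllm-ref "$VLLM_REF" --max-jobs "$MAX_JOBS" --arch "$ARCH"
```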

Details:

Where should the reviewer start?

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

  • closes GitHub issue: #xxx

Summary by CodeRabbit

  • Refactor

    • Simplified and unified the installation process for vllm by centralizing logic into an external installation script.
    • Removed architecture-specific installation steps from the Dockerfile, streamlining build instructions.
    • Adjusted the runtime image to reuse the base Python environment and updated package installation options.
  • Chores

    • Introduced a new script to automate and customize vllm installation, supporting multiple architectures and additional dependencies.


copy-pr-bot bot commented Jul 18, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Contributor

coderabbitai bot commented Jul 18, 2025

Walkthrough

The Dockerfile for the vllm container was refactored to delegate the installation of vllm and its dependencies to a new external script, install_vllm.sh. This script centralizes architecture-specific logic and installation steps, replacing previous inline commands. The runtime image now reuses the base virtual environment and modifies package installation accordingly.

Changes

File(s) Change Summary
container/Dockerfile.vllm Refactored to use install_vllm.sh for vllm installation; unified logic; adjusted runtime image setup.
container/deps/vllm/install_vllm.sh Added new script to automate vllm installation with architecture and mode options, including dependencies.

Sequence Diagram(s)

sequenceDiagram
    participant Dockerfile
    participant install_vllm.sh
    participant vllm Repo
    participant DeepGEMM Repo
    participant ep_kernels

    Dockerfile->>install_vllm.sh: Invoke with args (editable, ref, jobs, arch)
    install_vllm.sh->>vllm Repo: Clone at specified ref
    install_vllm.sh->>install_vllm.sh: Install dependencies (pip, cuda-python, torch)
    install_vllm.sh->>install_vllm.sh: Install vllm (editable/non-editable, arch-specific)
    install_vllm.sh->>ep_kernels: Install Python libraries
    install_vllm.sh->>DeepGEMM Repo: Clone and update submodules
    install_vllm.sh->>DeepGEMM Repo: Run install script
    install_vllm.sh-->>Dockerfile: vllm and dependencies installed

Poem

In Docker’s warren, scripts now hop,
Installing vllm with a single stop.
No more tangled shell commands to chase—
Just one neat script, in a tidy place!
🐇✨
“With every build, I thump with glee:
Unified installs, dependency-free!”



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (5)
container/deps/vllm/install_vllm.sh (3)

66-67: Re-installing pip is redundant and may mask upstream pinning

uv pip install pip cuda-python downgrades/overwrites the pip bundled with uv’s venv and provides no benefit. Safer:

-uv pip install pip cuda-python
+uv pip install --upgrade cuda-python

68-74: Clone shallowly to cut image size & build time

Pulling the full history (~100 MB) is unnecessary in CI images.

-mkdir -p /opt/vllm
-cd /opt/vllm
-git clone https://github.com/vllm-project/vllm.git
+mkdir -p /opt/vllm && cd /opt/vllm
+git clone --depth 1 --branch "$VLLM_REF" https://github.com/vllm-project/vllm.git
 cd vllm
-git checkout $VLLM_REF

102-106: cat install.sh looks like leftover debug noise

Dumping the entire script into the build log adds ~1,000 lines and no functional value.

-cat install.sh
 ./install.sh
container/Dockerfile.vllm (2)

187-192: Avoid copying then executing – just invoke the script directly

RUN --mount=type=bind … cp … && chmod +x … && /tmp/install_vllm.sh …
The extra cp inflates a layer; you can execute the mounted file in-place:

-    cp /tmp/deps/vllm/install_vllm.sh /tmp/install_vllm.sh && \
-    chmod +x /tmp/install_vllm.sh && \
-    /tmp/install_vllm.sh --editable --vllm-ref $VLLM_REF --max-jobs $MAX_JOBS --arch $ARCH
+    bash /tmp/deps/vllm/install_vllm.sh --editable --vllm-ref $VLLM_REF --max-jobs $MAX_JOBS --arch $ARCH

468-483: vllm is built twice (editable in base, non-editable in runtime) – bloats image by ~1 GiB

The runtime stage copies the pre-built venv (already containing editable vllm) and then recompiles vllm again. Consider:

  1. Drop the second invocation and rely on the copied wheel, or
  2. Skip installing vllm in the base stage and only build once for runtime.

This shaves several minutes off build time and significantly reduces final size.
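
A rough sketch of option 1, assuming the base stage's virtual environment already contains the built vLLM and lives at a path such as /opt/dynamo/venv (both the stage name and the path are hypothetical, not taken from the Dockerfile):

```bash
# Hypothetical runtime-stage steps (written as shell; in the Dockerfile these would be
# a COPY --from=base plus a RUN). Reuse the venv built in the base stage instead of
# invoking install_vllm.sh a second time:
#
#   COPY --from=base /opt/dynamo/venv /opt/dynamo/venv
#
# Then only sanity-check the copied environment rather than recompiling vLLM:
/opt/dynamo/venv/bin/python -c "import vllm; print(vllm.__version__)"
```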

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f6f392c and cf30952.

📒 Files selected for processing (2)
  • container/Dockerfile.vllm (4 hunks)
  • container/deps/vllm/install_vllm.sh (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: grahamking
PR: ai-dynamo/dynamo#1177
File: container/Dockerfile.vllm:102-105
Timestamp: 2025-05-28T22:54:46.875Z
Learning: In Dockerfiles, when appending to environment variables that may not exist in the base image, Docker validation will fail if you reference undefined variables with ${VARIABLE} syntax. In such cases, setting the environment variable directly (e.g., ENV CPATH=/usr/include) rather than appending is the appropriate approach.
container/Dockerfile.vllm (1)
Learnt from: grahamking
PR: ai-dynamo/dynamo#1177
File: container/Dockerfile.vllm:102-105
Timestamp: 2025-05-28T22:54:46.875Z
Learning: In Dockerfiles, when appending to environment variables that may not exist in the base image, Docker validation will fail if you reference undefined variables with ${VARIABLE} syntax. In such cases, setting the environment variable directly (e.g., ENV CPATH=/usr/include) rather than appending is the appropriate approach.
🔇 Additional comments (1)
container/deps/vllm/install_vllm.sh (1)

75-85: Nightly Torch wheels are unstable – pin an explicit version or add retry logic

--pre torch … nightly/cu128 can change or disappear daily, breaking reproducible builds.
Please confirm a specific commit hash / version range or add a fallback.
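
One possible mitigation, sketched under stated assumptions: pin an explicit nightly build and retry transient index failures. The version string below is a placeholder rather than a validated pin, and the retry policy is an assumption; only the nightly/cu128 index comes from the script under review.

```bash
# Placeholder version: replace with a nightly build that has actually been validated.
TORCH_NIGHTLY_VERSION="${TORCH_NIGHTLY_VERSION:-2.9.0.dev20250718+cu128}"

# Retry a few times in case the nightly index is temporarily unavailable.
for attempt in 1 2 3; do
    if uv pip install --pre "torch==${TORCH_NIGHTLY_VERSION}" \
        --extra-index-url https://download.pytorch.org/whl/nightly/cu128; then
        break
    fi
    echo "torch nightly install attempt ${attempt} failed, retrying..." >&2
    sleep 10
done
```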

@ptarasiewiczNV ptarasiewiczNV force-pushed the ptarasiewicz/install-vllm-in-runtime-container branch from 080c6ce to e07482c Compare July 18, 2025 11:40
@ptarasiewiczNV ptarasiewiczNV marked this pull request as draft July 18, 2025 11:41
@ptarasiewiczNV ptarasiewiczNV changed the base branch from main to ptarasiewicz/clone-nixl-in-dockerfile July 18, 2025 11:41
Base automatically changed from ptarasiewicz/clone-nixl-in-dockerfile to main July 18, 2025 16:47
@alec-flowers alec-flowers marked this pull request as ready for review July 18, 2025 16:48
@alec-flowers
Copy link
Contributor

/ok to test 16b9495

Signed-off-by: Alec <35311602+alec-flowers@users.noreply.github.com>
@alec-flowers
Copy link
Contributor

/ok to test 0dc5b74

@alec-flowers
Copy link
Contributor

/ok to test 996873b

@alec-flowers
Copy link
Contributor

/ok to test 378c81f

@alec-flowers alec-flowers merged commit cb6de94 into main Jul 20, 2025
9 of 10 checks passed
@alec-flowers alec-flowers deleted the ptarasiewicz/install-vllm-in-runtime-container branch July 20, 2025 20:34
ZichengMa added a commit that referenced this pull request Jul 21, 2025
commit cb6de94
Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com>
Date:   Sun Jul 20 22:34:50 2025 +0200

    chore: Install vLLM and WideEP kernels in vLLM runtime container (#2010)

    Signed-off-by: Alec <35311602+alec-flowers@users.noreply.github.com>
    Co-authored-by: Alec <35311602+alec-flowers@users.noreply.github.com>
    Co-authored-by: alec-flowers <aflowers@nvidia.com>

commit fe63c17
Author: Alec <35311602+alec-flowers@users.noreply.github.com>
Date:   Fri Jul 18 17:45:08 2025 -0700

    fix: Revert "feat: add vLLM v1 multi-modal example. Add llama4 Maverick ex… (#2017)

commit bf1998f
Author: jthomson04 <jwillthomson19@gmail.com>
Date:   Fri Jul 18 17:23:50 2025 -0700

    fix: Don't detokenize twice in TRT-LLM examples (#1955)

commit 343a481
Author: Ryan Olson <ryanolson@users.noreply.github.com>
Date:   Fri Jul 18 16:22:43 2025 -0600

    feat: http disconnects (#2014)

commit e330d96
Author: Yan Ru Pei <yanrpei@gmail.com>
Date:   Fri Jul 18 13:40:54 2025 -0700

    feat: enable / disable chunked prefill for mockers (#2015)

    Signed-off-by: Yan Ru Pei <yanrpei@gmail.com>
    Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

commit 353146e
Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
Date:   Fri Jul 18 13:33:36 2025 -0700

    feat: add vLLM v1 multi-modal example. Add llama4 Maverick example (#1990)

    Signed-off-by: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
    Co-authored-by: krishung5 <krish@nvidia.com>

commit 1f07dab
Author: Jacky <18255193+kthui@users.noreply.github.com>
Date:   Fri Jul 18 13:04:20 2025 -0700

    feat: Add migration to LLM requests (#1930)

commit 5f17918
Author: Tanmay Verma <tanmayv@nvidia.com>
Date:   Fri Jul 18 12:59:34 2025 -0700

    refactor: Migrate to new UX2 for python launch (#2003)

commit fc12436
Author: Graham King <grahamk@nvidia.com>
Date:   Fri Jul 18 14:52:57 2025 -0400

    feat(frontend): router-mode settings (#2001)

commit dc75cf1
Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com>
Date:   Fri Jul 18 18:47:28 2025 +0200

    chore: Move NIXL repo clone to Dockerfiles (#2009)

commit f6f392c
Author: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>
Date:   Thu Jul 17 18:44:17 2025 -0700

    Remove link to the fix for disagg + eagle3 for TRT-LLM example (#2006)

    Signed-off-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>

commit cc90ca6
Author: atchernych <atchernych@nvidia.com>
Date:   Thu Jul 17 18:34:40 2025 -0700

    feat: Create a convenience script to uninstall Dynamo Deploy CRDs (#1933)

commit 267b422
Author: Greg Clark <grclark@nvidia.com>
Date:   Thu Jul 17 20:44:21 2025 -0400

    chore: loosed python requirement versions (#1998)

    Signed-off-by: Greg Clark <grclark@nvidia.com>

commit b8474e5
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Thu Jul 17 16:35:05 2025 -0700

    chore: update cmake and gap installation and sgl in wideep container (#1991)

commit 157a3b0
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 15:38:12 2025 -0700

    fix: incorrect helm upgrade command (#2000)

commit 0dfca2c
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 15:33:33 2025 -0700

    ci: Update trtllm gitlab triggers for new components directory and test script (#1992)

commit f3fb09e
Author: Kris Hung <krish@nvidia.com>
Date:   Thu Jul 17 14:59:59 2025 -0700

    fix: Fix syntax for tokio-console (#1997)

commit dacffb8
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 14:57:10 2025 -0700

    fix: use non-dev golang image for operator (#1993)

commit 2b29a0a
Author: zaristei <zaristei@berkeley.edu>
Date:   Thu Jul 17 13:10:42 2025 -0700

    fix: Working Arm Build Dockerfile for Vllm_v1 (#1844)

commit 2430d89
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 12:57:46 2025 -0700

    test: Add trtllm kv router tests (#1988)

commit 1eadc01
Author: Graham King <grahamk@nvidia.com>
Date:   Thu Jul 17 15:07:41 2025 -0400

    feat(runtime): Support tokio-console (#1986)

commit b62e633
Author: GuanLuo <41310872+GuanLuo@users.noreply.github.com>
Date:   Thu Jul 17 11:16:28 2025 -0700

    feat: support separate chat_template.jinja file (#1853)

commit 8ae3719
Author: Hongkuan Zhou <tedzhouhk@gmail.com>
Date:   Thu Jul 17 11:12:35 2025 -0700

    chore: add some details to dynamo deploy quickstart and fix deploy.sh (#1978)

    Signed-off-by: Hongkuan Zhou <tedzhouhk@gmail.com>
    Co-authored-by: julienmancuso <161955438+julienmancuso@users.noreply.github.com>

commit 08891ff
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Thu Jul 17 10:57:42 2025 -0700

    fix: Update trtllm tests to use new scripts instead of dynamo serve (#1979)

commit 49b7a0d
Author: Ryan Olson <ryanolson@users.noreply.github.com>
Date:   Thu Jul 17 08:35:04 2025 -0600

    feat: record + analyze logprobs (#1957)

commit 6d2be14
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Thu Jul 17 00:17:58 2025 -0700

    refactor: replace vllm with vllm_v1 container (#1953)

    Co-authored-by: alec-flowers <aflowers@nvidia.com>

commit 4d2a31a
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Wed Jul 16 18:04:09 2025 -0700

    chore: add port reservation to utils (#1980)

commit 1e3e4a0
Author: Alec <35311602+alec-flowers@users.noreply.github.com>
Date:   Wed Jul 16 15:54:04 2025 -0700

    fix: port race condition through deterministic ports (#1937)

commit 4ad281f
Author: Tanmay Verma <tanmayv@nvidia.com>
Date:   Wed Jul 16 14:33:51 2025 -0700

    refactor: Move TRTLLM example to the component/backends (#1976)

commit 57d24a1
Author: Misha Chornyi <99709299+mc-nv@users.noreply.github.com>
Date:   Wed Jul 16 14:10:24 2025 -0700

    build: Removing shell configuration violations. It's bad practice to hardcod… (#1973)

commit 182d3b5
Author: Graham King <grahamk@nvidia.com>
Date:   Wed Jul 16 16:12:40 2025 -0400

    chore(bindings): Remove mistralrs / llama.cpp (#1970)

commit def6eaa
Author: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com>
Date:   Wed Jul 16 15:50:23 2025 -0400

    feat: attributions for debian deps of sglang, trtllm, vllm runtime containers (#1971)

commit f31732a
Author: Yan Ru Pei <yanrpei@gmail.com>
Date:   Wed Jul 16 11:22:15 2025 -0700

    feat: integrate mocker with dynamo-run and python cli (#1927)

commit aba6099
Author: Graham King <grahamk@nvidia.com>
Date:   Wed Jul 16 12:26:32 2025 -0400

    perf(router): Remove lock from router hot path (#1963)

commit b212103
Author: Hongkuan Zhou <tedzhouhk@gmail.com>
Date:   Wed Jul 16 08:55:33 2025 -0700

    docs: add notes in docs to deprecate local connector (#1959)

commit 7b325ee
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Tue Jul 15 18:52:00 2025 -0700

    fix: vllm router examples (#1942)

commit a50be1a
Author: hhzhang16 <54051230+hhzhang16@users.noreply.github.com>
Date:   Tue Jul 15 17:58:01 2025 -0700

    feat: update CODEOWNERS (#1926)

commit e260fdf
Author: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com>
Date:   Tue Jul 15 18:49:21 2025 -0400

    feat: add bitnami helm chart attribution (#1943)

    Signed-off-by: Harrison Saturley-Hall <454891+saturley-hall@users.noreply.github.com>
    Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

commit 1c03404
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Tue Jul 15 14:26:24 2025 -0700

    fix: update inference gateway deployment instructions (#1940)

commit 5ca570f
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 16:54:03 2025 -0400

    chore: Rename dynamo.ingress to dynamo.frontend (#1944)

commit 7b9182f
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 16:33:07 2025 -0400

    chore: Move examples/cli to lib/bindings/examples/cli (#1952)

commit 40d40dd
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 16:02:19 2025 -0400

    chore(multi-modal): Rename frontend.py to web.py (#1951)

commit a9e0891
Author: Ryan Olson <ryanolson@users.noreply.github.com>
Date:   Tue Jul 15 12:30:30 2025 -0600

    feat: adding http clients and recorded response stream (#1919)

commit 4128d58
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Tue Jul 15 10:30:47 2025 -0700

    feat: allow helm upgrade using deploy script (#1936)

commit 4da078b
Author: Graham King <grahamk@nvidia.com>
Date:   Tue Jul 15 12:57:38 2025 -0400

    fix: Remove OpenSSL dependency, use Rust TLS (#1945)

commit fc004d4
Author: jthomson04 <jwillthomson19@gmail.com>
Date:   Tue Jul 15 08:45:42 2025 -0700

    fix: Fix TRT-LLM container build when using a custom pip wheel (#1825)

commit 3c6fc6f
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Mon Jul 14 22:35:20 2025 -0700

    chore: fix typo (#1938)

commit de7fe38
Author: Alec <35311602+alec-flowers@users.noreply.github.com>
Date:   Mon Jul 14 21:47:12 2025 -0700

    feat: add vllm e2e integration tests (#1935)

commit 860f3f7
Author: Keiven C <213854356+keivenchang@users.noreply.github.com>
Date:   Mon Jul 14 21:44:19 2025 -0700

    chore: metrics endpoint variables renamed from HTTP_SERVER->SYSTEM (#1934)

    Co-authored-by: Keiven Chang <keivenchang@users.noreply.github.com>

commit fc402a3
Author: Biswa Panda <biswa.panda@gmail.com>
Date:   Mon Jul 14 21:21:20 2025 -0700

    feat: configurable namespace for vllm v1 example (#1909)

commit df40d2c
Author: ZichengMa <zichengma1225@gmail.com>
Date:   Mon Jul 14 21:11:29 2025 -0700

    docs: fix typo and add mount-workspace to vllm doc (#1931)

    Signed-off-by: ZichengMa <zichengma1225@gmail.com>
    Co-authored-by: Alec <35311602+alec-flowers@users.noreply.github.com>

commit 901715b
Author: Tanmay Verma <tanmayv@nvidia.com>
Date:   Mon Jul 14 20:14:51 2025 -0700

    refactor:  Refactor the TRTLLM examples remove dynamo SDK (#1884)

commit 5bf23d5
Author: hhzhang16 <54051230+hhzhang16@users.noreply.github.com>
Date:   Mon Jul 14 18:29:19 2025 -0700

    feat: update DynamoGraphDeployments for vllm_v1 (#1890)

    Co-authored-by: mohammedabdulwahhab <furkhan324@berkeley.edu>

commit 9e76590
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Mon Jul 14 17:29:56 2025 -0700

    docs: organize sglang readme (#1910)

commit ef59ac8
Author: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com>
Date:   Mon Jul 14 16:16:44 2025 -0700

    docs: TRTLLM Example of Llama4+Eagle3 (Speculative Decoding) (#1828)

    Signed-off-by: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com>
    Co-authored-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>

commit 053041e
Author: Jorge António <matroid@outlook.com>
Date:   Tue Jul 15 00:06:38 2025 +0100

    fix: resolve incorrect finish reason propagation (#1857)

commit 3733f58
Author: Graham King <grahamk@nvidia.com>
Date:   Mon Jul 14 19:04:22 2025 -0400

    feat(backends): Python llama.cpp engine (#1925)

commit 6a1350c
Author: Tushar Sharma <tusharma@nvidia.com>
Date:   Mon Jul 14 14:56:36 2025 -0700

    build: minor improvements to sglang dockerfile (#1917)

commit e2a619b
Author: Neelay Shah <neelays@nvidia.com>
Date:   Mon Jul 14 14:52:53 2025 -0700

    fix: remove environment variable passing (#1911)

    Signed-off-by: Neelay Shah <neelays@nvidia.com>
    Co-authored-by: Neelay Shah <neelays@a4u8g-0057.ipp2u2.colossus.nvidia.com>

commit 3d17a49
Author: Schwinn Saereesitthipitak <17022745+galletas1712@users.noreply.github.com>
Date:   Mon Jul 14 14:41:56 2025 -0700

    refactor: remove dynamo build (#1778)

    Signed-off-by: Schwinn Saereesitthipitak <17022745+galletas1712@users.noreply.github.com>

commit 3e0cb07
Author: Anant Sharma <anants@nvidia.com>
Date:   Mon Jul 14 15:43:48 2025 -0400

    fix: copy attributions and license to trtllm runtime container (#1916)

commit fc36bf5
Author: ishandhanani <82981111+ishandhanani@users.noreply.github.com>
Date:   Mon Jul 14 12:31:49 2025 -0700

    feat: receive kvmetrics from sglang scheduler (#1789)

    Co-authored-by: zixuanzhang226 <zixuanzhang@bytedance.com>

commit df91fce
Author: Yan Ru Pei <yanrpei@gmail.com>
Date:   Mon Jul 14 12:24:04 2025 -0700

    feat: prefill aware routing (#1895)

commit ad8ad66
Author: Graham King <grahamk@nvidia.com>
Date:   Mon Jul 14 15:20:35 2025 -0400

    feat: Shrink the ai-dynamo wheel by 35 MiB (#1918)

    Remove http and llmctl binaries. They have been unused for a while.

commit 480b41d
Author: Graham King <grahamk@nvidia.com>
Date:   Mon Jul 14 15:06:45 2025 -0400

    feat: Python frontend / ingress node (#1912)
ZichengMa added a commit that referenced this pull request Jul 21, 2025
commit d4b5414
Author: atchernych <atchernych@nvidia.com>
Date:   Mon Jul 21 13:10:24 2025 -0700

    fix: mypy error (#2029)

commit 79337c7
Author: Ryan McCormick <rmccormick@nvidia.com>
Date:   Mon Jul 21 12:12:16 2025 -0700

    build: support custom TRTLLM build for commits not on main branch (#2021)

commit 95dd942
Author: atchernych <atchernych@nvidia.com>
Date:   Mon Jul 21 12:09:33 2025 -0700

    docs: Post-Merge cleanup of the deploy documentation (#1922)

commit cb6de94
Author: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com>
Date:   Sun Jul 20 22:34:50 2025 +0200

    chore: Install vLLM and WideEP kernels in vLLM runtime container (#2010)

    Signed-off-by: Alec <35311602+alec-flowers@users.noreply.github.com>
    Co-authored-by: Alec <35311602+alec-flowers@users.noreply.github.com>
    Co-authored-by: alec-flowers <aflowers@nvidia.com>
