Conversation

Swipe4057 (Contributor)

The current Docker image uses nvcr.io/nvidia/tritonserver:24.04-py3-min (CUDA 12.4) as the base. When the sglang and torch packages are then installed, the CUDA runtime gets reinstalled at version 12.6 on top of it, which can cause errors, increases build time, and is redundant. This PR safely upgrades the base image to nvcr.io/nvidia/tritonserver:24.12-py3-min, which ships CUDA 12.6.

https://docs.nvidia.com/deeplearning/triton-inference-server/release-notes/rel-25-05.html#rel-25-05
[image: screenshot of the NVIDIA Triton release notes]


@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @Swipe4057, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue where installing packages like sglang and torch in the Docker image would cause a CUDA version mismatch and trigger a redundant reinstallation of CUDA 12.6 on top of the base image's CUDA 12.4. By upgrading the base Triton server image to version 24.12, which already includes CUDA 12.6, we eliminate this conflict, reduce build time, and improve stability.

Highlights

  • Base Image Upgrade: Upgraded the Docker base image for the Triton server from nvcr.io/nvidia/tritonserver:24.04-py3-min to nvcr.io/nvidia/tritonserver:24.12-py3-min.
  • CUDA Version Alignment: The upgrade aligns the base image's CUDA version (12.6 in 24.12) with the version required by subsequent package installations (like sglang and torch), preventing unnecessary CUDA reinstallation and potential errors.
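The alignment described above can be sketched with a small, hypothetical helper. The function names below are illustrative and not from the repository; only the version numbers and the PyTorch `cuXYZ` wheel-tag convention come from the PR:

```python
# Hypothetical sketch of the version-alignment check this PR is about:
# map a CUDA toolkit version to the PyTorch wheel-tag convention ("cu126")
# and compare it with the torch build being installed.

def cuda_wheel_tag(cuda_version: str) -> str:
    """Map a CUDA version such as '12.6.3' to a wheel tag such as 'cu126'."""
    major, minor = cuda_version.split(".")[:2]
    return f"cu{major}{minor}"

def base_matches_torch(base_cuda: str, torch_tag: str) -> bool:
    """True when the base image's CUDA matches the torch build's CUDA."""
    return cuda_wheel_tag(base_cuda) == torch_tag

# Before this PR: the 24.04 base ships CUDA 12.4, torch wants cu126 -> mismatch.
print(base_matches_torch("12.4.1", "cu126"))  # False
# After this PR: the 24.12 base ships CUDA 12.6 -> aligned, no reinstall needed.
print(base_matches_torch("12.6.3", "cu126"))  # True
```

When the tags agree, pip does not need to pull a second copy of the CUDA runtime libraries, which is where the build-time and stability gains come from.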
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@zhyncs zhyncs merged commit 27196d4 into sgl-project:main Jun 10, 2025
@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request updates the base Docker image in docker/Dockerfile from nvcr.io/nvidia/tritonserver:24.04-py3-min (CUDA 12.4) to nvcr.io/nvidia/tritonserver:24.12-py3-min (CUDA 12.6). This is a well-justified and positive change, as it aligns the CUDA version of the base image with the CUDA version of PyTorch (cu126) that is installed by subsequent commands in the Dockerfile. This alignment is expected to prevent potential CUDA version conflicts, reduce redundant installations, and improve build stability and time.

The change itself is correct and beneficial. A line comment has been added with a suggestion for future maintainability concerning the ARG CUDA_VERSION.

One minor note regarding the pull request description: the linked NVIDIA release notes and the embedded image pertain to release 25.05 (which uses CUDA 12.8). However, this pull request correctly implements an upgrade to version 24.12 (CUDA 12.6) as intended by the title and the core of the description. This is just a small clarification for anyone reviewing the PR details.

```diff
@@ -1,6 +1,6 @@
 ARG CUDA_VERSION=12.4.1

-FROM nvcr.io/nvidia/tritonserver:24.04-py3-min
+FROM nvcr.io/nvidia/tritonserver:24.12-py3-min
```

Severity: medium

Updating the base image to nvcr.io/nvidia/tritonserver:24.12-py3-min (which provides CUDA 12.6) is a good change, as it aligns the base environment's CUDA version with the PyTorch cu126 build installed by subsequent steps in this Dockerfile. This should improve build reliability and potentially reduce build times.

A point for future consideration related to this change: the ARG CUDA_VERSION (currently 12.4.1 on line 1) is used in the conditional logic (lines 28-44) to select the cu126 PyTorch build. Now that this base image inherently provides CUDA 12.6, the ARG CUDA_VERSION could potentially be updated to a 12.6.x value, and the conditional logic adjusted accordingly. This would make the Dockerfile's CUDA version handling more explicit and improve clarity. This is a suggestion for a potential follow-up, as it would involve changes to lines not modified in this pull request.
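A minimal sketch of that follow-up, assuming the conditional install pattern the comment describes; the `RUN` logic below is illustrative, and the real Dockerfile's lines 28-44 may differ:

```dockerfile
# Illustrative follow-up sketch only; the actual repository Dockerfile differs.
# Bump the build arg to match the CUDA 12.6 shipped by the 24.12 base image.
ARG CUDA_VERSION=12.6.3

FROM nvcr.io/nvidia/tritonserver:24.12-py3-min

# A build arg declared before FROM must be re-declared to be visible in a stage.
ARG CUDA_VERSION

# Hypothetical stand-in for the conditional logic the review references:
# pick the PyTorch wheel index that matches the base image's CUDA.
RUN if [ "${CUDA_VERSION%.*}" = "12.6" ]; then \
        pip install torch --index-url https://download.pytorch.org/whl/cu126; \
    else \
        echo "Unsupported CUDA_VERSION: ${CUDA_VERSION}" >&2 && exit 1; \
    fi
```

This keeps the build arg, the base image, and the installed torch wheel agreeing on a single CUDA version, which is the explicitness the comment suggests as a follow-up.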

@zhyncs zhyncs mentioned this pull request Jun 10, 2025
almaslof pushed a commit to mpashkovskii/sglang that referenced this pull request Jun 11, 2025
jianan-gu pushed a commit to jianan-gu/sglang that referenced this pull request Jun 12, 2025
@Swipe4057 Swipe4057 deleted the update_dockerfile branch August 17, 2025 19:30