
Conversation

@parthchadha parthchadha (Contributor) commented Apr 29, 2025

What does this PR do?

[Screenshot attached: 2025-05-12 at 10:08 AM]


Issues

Closes #259.
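Per the linked issue, vLLM defaults to bfloat16, which V100 GPUs (compute capability 7.0) do not support; the issue asks for a float16 fallback or an explicit setting in `vllm_cfg`. A user-facing workaround might look like the fragment below (the exact key names are illustrative, not verified against the repo's config schema):

```yaml
# Hypothetical vllm_cfg fragment: explicitly request float16 on
# pre-Ampere GPUs such as V100, since bfloat16 requires
# compute capability >= 8.0.
vllm_cfg:
  precision: "float16"  # key name is illustrative, not verified
```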

Usage

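The PR body leaves the usage snippet empty. As a minimal sketch of the fallback behavior the linked issue asks for (the function name and signature here are hypothetical, not taken from the PR), the dtype selection could be factored like this:

```python
def resolve_vllm_dtype(requested: str, capability: tuple[int, int]) -> str:
    """Pick a dtype vLLM can actually use on the given GPU.

    bfloat16 requires compute capability >= 8.0 (Ampere or newer);
    on older parts such as V100 (7.0), fall back to float16.
    The capability tuple would come from
    torch.cuda.get_device_capability() in practice.
    """
    if requested == "bfloat16" and capability < (8, 0):
        return "float16"
    return requested
```

For example, `resolve_vllm_dtype("bfloat16", (7, 0))` returns `"float16"`, while on an A100 (`(8, 0)`) the requested bfloat16 is kept.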

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

@parthchadha parthchadha added the CI:L0 Run doctests and unit tests label Apr 29, 2025
Signed-off-by: Parth Chadha <pchadha@nvidia.com>
@parthchadha parthchadha added CI:L0 Run doctests and unit tests and removed CI:L0 Run doctests and unit tests labels May 8, 2025
terrykong previously approved these changes May 9, 2025
@parthchadha parthchadha added this pull request to the merge queue May 12, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks May 12, 2025
@parthchadha parthchadha added this pull request to the merge queue May 12, 2025
Merged via the queue into main with commit 0a674bb May 12, 2025
13 checks passed
@parthchadha parthchadha deleted the pchadha/volta-fix branch May 12, 2025 22:01
terrykong pushed a commit that referenced this pull request May 15, 2025
Signed-off-by: Parth Chadha <pchadha@nvidia.com>
YzjiaoNvd pushed a commit to YzjiaoNvd/NeMo-RL that referenced this pull request Jun 10, 2025
Signed-off-by: Parth Chadha <pchadha@nvidia.com>
Labels
CI:L0 Run doctests and unit tests · r0.2.1
Development

Successfully merging this pull request may close these issues.

V100 (compute capability 7.0) fails with default bfloat16 in vLLM — should fall back to float16 or be configurable via vllm_cfg
3 participants