
Conversation

@tlrmchlsmth (Collaborator) commented Jul 1, 2025

Honestly, until 15 minutes ago I did not know that you could just run vllm serve and it would default to serving opt-125m.

This changes the default to Qwen3-0.6B since it's good and small.

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
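For context, the change itself is a one-line default swap. A minimal sketch of roughly what it looks like, assuming ModelConfig is the dataclass in vllm/config.py that holds the model field (the field layout here is illustrative, not the real class):

```python
from dataclasses import dataclass


@dataclass
class ModelConfig:
    """Illustrative stand-in for vLLM's ModelConfig; only the default value changes."""

    # Before this PR the default was "facebook/opt-125m".
    model: str = "Qwen/Qwen3-0.6B"
```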

github-actions bot commented Jul 1, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small but essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @tlrmchlsmth, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the default large language model configured for the vllm serve command. The change aims to provide a more modern and efficient default model for users, improving the out-of-the-box experience by switching from an older, smaller model to a more capable yet still compact alternative.

Highlights

  • Default Model Configuration: The default model used by vllm serve has been updated from facebook/opt-125m to Qwen/Qwen3-0.6B within the ModelConfig class. This change aims to provide a more modern and efficient default model for users.

@mergify bot added the qwen (Related to Qwen models) label on Jul 1, 2025
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request changes the default model for vllm serve from facebook/opt-125m to Qwen/Qwen3-0.6B. While using a more modern small model is a good goal, the chosen model introduces a couple of issues for a default setting. Most critically, it appears to require trust_remote_code=True, which would break the default vllm serve command and require users to enable a security-sensitive flag. Additionally, the new model is significantly larger, which could negatively impact the first-time user experience. I've provided detailed feedback on these points.
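For anyone double-checking the out-of-the-box behavior, one quick way to see which model a bare vllm serve instance actually loaded is to query the OpenAI-compatible /v1/models endpoint. A minimal sketch, assuming the server is running on the default localhost:8000 and the openai Python client is installed:

```python
from openai import OpenAI

# Point the client at a local vLLM server started with a bare `vllm serve`.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# List the served model IDs; after this PR this should report "Qwen/Qwen3-0.6B".
print([m.id for m in client.models.list().data])
```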

@tlrmchlsmth marked this pull request as ready for review July 1, 2025 20:50
@mgoin (Member) left a comment

+1, better to have a usable model by default

TBH I think there should be no model at all, but we've already signed up for this behavior

@tlrmchlsmth (Collaborator, Author) commented

> TBH I think there should be no model at all, but we've already signed up for this behavior

+1

@yewentao256 (Collaborator) left a comment

Yeah, this model could be better

@hmellor (Member) commented Jul 1, 2025

I agree no model at all would be better.

I left this alone during the great config refactor because I was already making lots of changes and didn't want too many design decisions that would slow down the maintainability-focused refactor.

@mgoin enabled auto-merge (squash) July 1, 2025 23:22
@github-actions bot added the ready (ONLY add when PR is ready to merge/full CI is needed) label on Jul 1, 2025
@DarkLight1337 (Member) commented

Actually, I think we can't do that yet because our transformers version requirement is still at 4.51.1

@mgoin disabled auto-merge July 3, 2025 22:16
@hmellor (Member) commented Jul 14, 2025

The transformers version has now been updated on main so we should be able to continue with this PR

@DarkLight1337 (Member) commented

Need to merge from main

@tlrmchlsmth enabled auto-merge (squash) July 14, 2025 13:56
@vrdn-23 (Contributor) commented Jul 14, 2025

> TBH I think there should be no model at all

+1
As a user, this can sometimes lead to cases where the model is not picked up correctly from, e.g., an environment variable, and the Qwen model is served without the user ever specifying it, which can be misleading or confusing in some scenarios.
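One hypothetical way this failure mode shows up: a launcher script builds the command from an environment variable, the variable happens to be unset, and the model argument silently disappears instead of failing loudly. A sketch (the MODEL_NAME variable and launcher are made up for illustration):

```python
import os
import shlex

# If MODEL_NAME is unset, this quietly becomes an empty string...
model = os.environ.get("MODEL_NAME", "")

# ...and the command collapses to a bare `vllm serve`, which now falls back to
# the built-in default model rather than raising an error.
cmd = shlex.split(f"vllm serve {model}")
print(cmd)  # ['vllm', 'serve'] -- no model argument left
```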

@houseroad (Collaborator) left a comment

I would be more concerned about CI, i.e. whether this change will cause more OOMs in CI. Maybe I'm worrying too much.

@tlrmchlsmth merged commit 5597562 into main Jul 14, 2025
68 checks passed
@tlrmchlsmth deleted the default_to_qwen branch July 14, 2025 16:54
@tlrmchlsmth (Collaborator, Author) commented

> As a user, this can sometimes lead to cases where the model is not picked up correctly from, e.g., an environment variable, and the Qwen model is served without the user ever specifying it, which can be misleading or confusing in some scenarios.

This is a good point -- perhaps we should consider deprecating this feature or at least adding a --no-default-model flag
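To be clear about the second idea: no such flag exists today. A purely hypothetical sketch of what a --no-default-model option could look like, using plain argparse rather than vLLM's real CLI plumbing:

```python
import argparse

parser = argparse.ArgumentParser(prog="vllm serve")
parser.add_argument("model", nargs="?", default=None)
parser.add_argument(
    "--no-default-model",
    action="store_true",
    help="error out instead of silently falling back to the built-in default",
)
args = parser.parse_args()

if args.model is None:
    if args.no_default_model:
        parser.error("no model was specified and --no-default-model is set")
    args.model = "Qwen/Qwen3-0.6B"  # the default introduced by this PR

print(f"serving {args.model}")
```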

@vrdn-23 (Contributor) commented Jul 14, 2025

> perhaps we should consider deprecating this feature or at least adding a --no-default-model flag

I think it makes more sense to deprecate the feature, because ideally I would not expect any users to be relying on this behavior to run their systems. And even if they did, the PR that just got merged is likely going to break what they were doing. To be pragmatic, I don't see any reason to introduce a new flag for a feature that should not be relied on in the first place.
cc @tlrmchlsmth

py-andy-c pushed a commit to py-andy-c/vllm that referenced this pull request Jul 14, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
py-andy-c pushed a commit to py-andy-c/vllm that referenced this pull request Jul 14, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
patrickvonplaten pushed a commit to patrickvonplaten/vllm that referenced this pull request Jul 15, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>
LyrisZhong pushed a commit to LyrisZhong/vllm that referenced this pull request Jul 23, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: avigny <47987522+avigny@users.noreply.github.com>
x22x22 pushed a commit to x22x22/vllm that referenced this pull request Aug 5, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: x22x22 <wadeking@qq.com>
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Paul Pak <paulpak58@gmail.com>
taneem-ibrahim pushed a commit to taneem-ibrahim/vllm that referenced this pull request Aug 14, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Diego-Castan <diego.castan@ibm.com>
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 27, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
googlercolin pushed a commit to googlercolin/vllm that referenced this pull request Aug 29, 2025
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Labels: qwen (Related to Qwen models), ready (ONLY add when PR is ready to merge/full CI is needed)
Projects: None yet
8 participants