[Misc] unify variable for LLM instance v2 #21356
Conversation
Signed-off-by: Andy Xie <andy.xning@gmail.com>

Force-pushed from 4b997b3 to 8c2bb04
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited set of checks runs. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can add 🚀
Code Review

This pull request continues the refactoring from a previous PR to unify the variable name for the LLM instance, changing `llm.model` to `llm.llm` in a test file. While the renaming is consistent, it appears to have propagated an incorrect attribute access path. I've pointed out a likely `AttributeError` and suggested a fix. It's important to ensure tests are run to catch such issues.
```diff
  lambda self: self.model_runner.model.language_model.model.
  normalizer.cpu().item())
- config = llm.model.llm_engine.model_config.hf_config.text_config
+ config = llm.llm.llm_engine.model_config.hf_config.text_config
```
It seems there's a potential `AttributeError` here. Based on the `LLMEngine` implementation, an `LLMEngine` instance has a `model_config` attribute directly, but not an `llm_engine` attribute.

Assuming `llm.llm` is an instance of `LLMEngine`, the path to the config should be `llm.llm.model_config.hf_config.text_config`. The extra `.llm_engine` seems incorrect and will likely cause the test to fail.

```suggestion
config = llm.llm.model_config.hf_config.text_config
```
```diff
  lambda self: self.model_runner.model.model.normalizer.cpu(
  ).item())
- config = llm.model.llm_engine.model_config.hf_config
+ config = llm.llm.llm_engine.model_config.hf_config
```
Similar to the case above, it appears there's an incorrect attribute access here. The `LLMEngine` class does not have an `llm_engine` attribute.

If `llm.llm` is an `LLMEngine` instance, the correct way to access the model config would be directly via `llm.llm.model_config`.

```suggestion
config = llm.llm.model_config.hf_config
```
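Both review comments hinge on which object actually holds `model_config`. The sketch below illustrates the reviewer's reasoning only — the classes are hypothetical stubs, not vLLM's real `LLM`/`LLMEngine` implementation: under the stated assumption that `llm.llm` is already the engine, inserting an extra `.llm_engine` hop raises `AttributeError`.

```python
# Hypothetical stub classes illustrating the review comment's assumption;
# these are NOT vLLM's real classes.

class ModelConfig:
    def __init__(self):
        # hf_config mimics a transformers config object carrying a text_config
        self.hf_config = type("HfConfig", (), {"text_config": {"hidden_size": 4096}})()

class LLMEngine:
    def __init__(self):
        # The engine exposes model_config directly...
        self.model_config = ModelConfig()
        # ...and, under the reviewer's assumption, has no `llm_engine` attribute.

class LLMFixture:
    """Stand-in for the test fixture whose `.llm` attribute holds the engine."""
    def __init__(self):
        self.llm = LLMEngine()

llm = LLMFixture()

# Correct path under the assumption that llm.llm is the engine itself:
config = llm.llm.model_config.hf_config.text_config
assert config["hidden_size"] == 4096

# The extra `.llm_engine` hop fails, since the stub engine has no such attribute:
try:
    llm.llm.llm_engine
except AttributeError as exc:
    print(f"AttributeError as expected: {exc}")
```

Note the discussion below resolves this the other way — the tree's real `llm.llm` does expose `llm_engine`, so the original change was kept — but the sketch shows why the bot flagged the path.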
Can you check the latest nightly run which runs all tests (https://buildkite.com/vllm/ci/builds?query=Nightly) and fix any issues resulting from the naming change?
Will try this later. Let's first fix this problem.
I prefer fixing them all at once, since CI is broken until then anyway.
OK, will do this later today.
Seems that there is only one error related to this.
The current changes LGTM.
@andyxning you can check the latest nightly build (https://buildkite.com/vllm/ci/builds?branch=main&query=nightly) to find any other failing tests related to the renaming.
Thanks for checking this! LGTM then.
Yes. I have checked this. For now, there is only one.
@DarkLight1337 @hmellor Seems the CI failure is unrelated to this PR. Can you help verify this? Seems flaky.
Retrying
The failure is unrelated, so I'll just merge this.
Essential Elements of an Effective PR Description Checklist

Update supported_models.md and examples for a new model.

Fix #20996 (comment)
Purpose
This is a follow-up to PR #20996, fixing the comment: #20996 (comment)
Test Plan
NA
Test Result
NA
(Optional) Documentation Update