Conversation
@NickLucche (Contributor) commented Jul 2, 2025

@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @NickLucche, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue with a specific test that has been identified as flaky on the continuous integration system. By marking this test to be skipped, the PR aims to improve the stability and reliability of the CI pipeline, ensuring that other changes can be validated without being blocked by an unreliable test.

Highlights

  • Test Stability: The test_streaming_response test in tests/entrypoints/openai/test_transcription_validation.py has been marked to be skipped. This is due to its reported flakiness on the upstream CI, aiming to prevent intermittent build failures.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (bot) left a comment


Code Review

This pull request disables a flaky test, test_streaming_response, to stabilize the CI pipeline. My review suggests creating a follow-up issue to investigate and fix the underlying cause of the flakiness to ensure test coverage is restored in the future.

```diff
@@ -137,6 +137,7 @@ async def test_completion_endpoints():
     assert res.message == "The model does not support Completions API"


+@pytest.mark.skip(reason="Flaky test on upstream CI")
```
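For context, the line added by this PR is the standard pytest skip marker. A minimal sketch (with a hypothetical stand-in test body) of what it does:

```python
import pytest

# Minimal reproduction of the change: the skip marker keeps pytest from
# executing the test at all, while leaving its body in place for later
# re-enablement. The test body here is a hypothetical placeholder.
@pytest.mark.skip(reason="Flaky test on upstream CI")
def test_streaming_response():
    pass  # real assertions stay untouched until the flakiness is fixed

# The decorator records a Mark object on the function; pytest reads it
# at collection time and reports the test as skipped with this reason.
marker = test_streaming_response.pytestmark[0]
print(marker.name, marker.kwargs["reason"])
```

Skipped tests show up as `s` in the pytest summary, so the reduced coverage remains visible in every CI run.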
@gemini-code-assist (bot) commented on this line:

Severity: medium

While skipping a flaky test is a reasonable temporary measure to unblock the CI pipeline, it's important that this doesn't become a permanent solution as it reduces our test coverage.

To ensure this is addressed, could you please create a follow-up issue to investigate the root cause of the flakiness in test_streaming_response? This will help us track the problem and work towards re-enabling the test, preventing potential regressions in the streaming response functionality.
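One possible middle ground, not part of this PR but worth noting as a sketch: pytest's xfail marker with `strict=False` keeps a flaky test running on every build without failing the pipeline, so some coverage signal is retained while the root cause is investigated:

```python
import pytest

# Hypothetical alternative to a hard skip: xfail(strict=False) still runs
# the test on each CI build. A flaky failure is reported as XFAIL and an
# incidental success as XPASS; neither fails the pipeline.
@pytest.mark.xfail(reason="Flaky on upstream CI", strict=False)
def test_streaming_response():
    pass  # placeholder body for illustration

# As with skip, the mark is recorded on the function for pytest to read.
marker = test_streaming_response.pytestmark[0]
print(marker.name, marker.kwargs)
```

The trade-off is noisier reports versus a silent coverage gap; either way a tracking issue is needed to restore a strict test.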

@houseroad (Collaborator) left a comment


I am a bit concerned about just flagging the test as flaky. I'm wondering whether we understand the root cause of the flakiness.

github-actions (bot) commented Jul 2, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@njhill (Member) commented Jul 2, 2025

Yes, this seems to be failing pretty consistently now, but it only just started AFAIK... so we should try to pinpoint the cause.

@NickLucche (Contributor, Author)

I was just testing how CI would behave. I can't reproduce the issue locally just yet.

Signed-off-by: NickLucche <nlucches@redhat.com>
@NickLucche (Contributor, Author)

There's some issue with CI right now, so I can't test... at least I have a clue about what the root cause of this might be.

@NickLucche NickLucche changed the title Flaky test_streaming_response test [Bugfix] Fix flaky test_streaming_response test Jul 2, 2025
@aarnphm aarnphm added the ready ONLY add when PR is ready to merge/full CI is needed label Jul 2, 2025
@NickLucche (Contributor, Author) commented Jul 2, 2025

@aarnphm let's hold until we can get the CI to actually build, once the network issues are gone.

@orozery (Contributor) commented Jul 2, 2025

What about entrypoints/openai/test_translation_validation.py::test_streaming_response?

Signed-off-by: NickLucche <nlucches@redhat.com>
@mergify mergify bot added the ci/build label Jul 3, 2025
@aarnphm aarnphm self-requested a review July 3, 2025 11:11
@NickLucche (Contributor, Author)

I have investigated the issue, and I don't think it is caused by anything on the vLLM side, as I went through the commits around the time we started experiencing the failure.
The tedious part is my inability to reproduce the issue locally on the same hardware, even using the build images where the failure was reported.

My impression is that the failure might be caused by a change in the test environment; in particular, the transcription client was recently upgraded on the openai-python side to enable streaming (which was not the case when I first wrote the tests).
This is at least consistent with the fact that a different openai version appears around the time the first failures were reported.
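A small diagnostic sketch along those lines (the helper is hypothetical, not from this PR): recording the installed openai-python version in the test session makes it easy to correlate CI failures with client upgrades when bisecting by environment rather than by commit:

```python
from importlib.metadata import PackageNotFoundError, version


def installed_version(pkg: str):
    """Return the installed version string for pkg, or None if it is absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None


# For example, printed from a session-scoped conftest.py fixture so that
# every CI log records which client version the tests ran against.
print("openai-python version under test:", installed_version("openai"))
```

With the version in every log, the first failing build can be matched against the first build that picked up the upgraded client.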

cc @DarkLight1337 @aarnphm

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) July 3, 2025 13:22
@DarkLight1337 (Member)

As discussed offline, let's merge this to unblock CI first.

@DarkLight1337 DarkLight1337 merged commit d1b689c into vllm-project:main Jul 3, 2025
105 checks passed
sfeng33 pushed a commit to sfeng33/vllm that referenced this pull request Jul 6, 2025
Signed-off-by: NickLucche <nlucches@redhat.com>
huydhn pushed a commit to huydhn/vllm that referenced this pull request Jul 8, 2025
Signed-off-by: NickLucche <nlucches@redhat.com>
Chen-zexi pushed a commit to Chen-zexi/vllm that referenced this pull request Jul 13, 2025
Signed-off-by: NickLucche <nlucches@redhat.com>
LyrisZhong pushed a commit to LyrisZhong/vllm that referenced this pull request Jul 23, 2025
Signed-off-by: NickLucche <nlucches@redhat.com>
avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: avigny <47987522+avigny@users.noreply.github.com>
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
Signed-off-by: NickLucche <nlucches@redhat.com>
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
Signed-off-by: NickLucche <nlucches@redhat.com>
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
googlercolin pushed a commit to googlercolin/vllm that referenced this pull request Aug 29, 2025
Signed-off-by: NickLucche <nlucches@redhat.com>
Labels
ci/build ready ONLY add when PR is ready to merge/full CI is needed
Development

Successfully merging this pull request may close these issues.

[CI Failure]: entrypoints-test: test_streaming_response
6 participants