Add test environment approval step for CI #13297
Conversation
Just some suggestions for better communication. In general, I think this is a great WAR (workaround) against GitHub's design.
```python
if result:
    total_workflows += 1
else:
    print(f"Failed to approve deployment {deployment['id']}")
```
What are the reasons for a failed deployment? What else could we do in that situation? Can we just retry? Or would it be better to send a team alert? I guess in this situation we have a free slot but we're not making use of it.
Well, a retry is probably just something that happens in 5 minutes anyway. We can probably send an alert to the channel; I'll see if I can add it.
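For reference, a minimal sketch of that kind of channel alert, assuming a Slack incoming webhook stored as a secret; the variable name and message wording are assumptions, not the PR's implementation.

```python
# Hypothetical Slack alert for a failed approval; SLACK_WEBHOOK_URL is an
# assumed secret and the message format is illustrative.
import os

import requests


def notify_failure(deployment_id: int, run_url: str) -> None:
    """Post a short warning to the automation team's Slack channel."""
    requests.post(
        os.environ["SLACK_WEBHOOK_URL"],
        json={
            "text": (
                f":warning: Test-queue bot failed to approve deployment "
                f"{deployment_id}. Details: {run_url}"
            )
        },
        timeout=10,
    )
```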
* Add test environment approval step for CI
* Add cron job to approve test workflows in the queue
* Run Approve Test Queue on push for testing
* Debug test queue script
* Rewrite and debug test approval in Python
* Fix script to approve workflows
* Revert "Run Approve Test Queue on push for testing" (reverts commit 889d48a)
* Update .github/workflows/cicd-approve-test-queue.yml (three commits, co-authored by oliver könig)
* Notify automation team via slack if test queue approval bot failed

Signed-off-by: Charlie Truong <chtruong@nvidia.com>
Co-authored-by: oliver könig <okoenig@nvidia.com>
Signed-off-by: Yuanzhe Dong <yudong@nvidia.com>
What does this PR do?
Add test environment approval step for CI
Our current CI runs unit tests first and then e2e tests across different groups. However, if many PRs are opened at the same time, GHA will round-robin across all the PRs rather than running a single PR to completion, which can cause the queue to grow over time. This PR funnels all CI tests through a deployment approval first. A cron job then runs every 5 minutes to check whether more jobs can be approved, given a MAX_CONCURRENCY variable.
I ran this manually by pushing to this branch, and it completed successfully:
https://github.com/NVIDIA/NeMo/actions/runs/14696800163/job/41239549859?pr=13297
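As a rough illustration of the cron job's logic (not the PR's actual script), each pass could count the CI runs currently in progress and approve only as many waiting runs as MAX_CONCURRENCY allows; the function names and API queries below are assumptions.

```python
# Illustrative outline of one approval pass; repository name, env vars, and
# API usage are assumptions made for this sketch.
import os

import requests

GITHUB_API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
REPO = "NVIDIA/NeMo"
MAX_CONCURRENCY = int(os.environ.get("MAX_CONCURRENCY", "1"))


def list_runs(status: str) -> list:
    """List workflow runs with the given status ('in_progress', 'waiting', ...)."""
    resp = requests.get(
        f"{GITHUB_API}/repos/{REPO}/actions/runs",
        headers=HEADERS,
        params={"status": status, "per_page": 100},
    )
    resp.raise_for_status()
    return resp.json()["workflow_runs"]


def approval_pass() -> None:
    """Approve the oldest waiting runs, up to the number of free slots."""
    free_slots = max(MAX_CONCURRENCY - len(list_runs("in_progress")), 0)
    waiting = sorted(list_runs("waiting"), key=lambda run: run["created_at"])
    for run in waiting[:free_slots]:
        print(f"Would approve pending deployment for run {run['id']}")
```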
MAX_CONCURRENCY is set to 1 for now in the main environment. We could even pause CI tests now by setting it to 0, and we could also allow other jobs to jump the queue by manually approving them ourselves if necessary.

Collection: [Note which collection this PR will affect]
Changelog
Usage
# Add a code snippet demonstrating how to use this
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.
Additional Information