Conversation

@mkoeppe (Contributor) commented Oct 31, 2023

As proposed in #36610 (comment)

📝 Checklist

  • The title is concise, informative, and self-explanatory.
  • The description explains in detail what this PR is about.
  • I have linked a relevant issue or discussion.
  • I have created tests covering the changes.
  • I have updated the documentation accordingly.

⌛ Dependencies

@kwankyu (Collaborator) left a comment

LGTM.

@mkoeppe (Contributor Author) commented Nov 1, 2023

Thanks!

@tobiasdiez (Contributor)

The problem is not that the default image takes so long (that adds maybe 30 mins to the build & test workflow, see https://github.com/sagemath/sage/actions/runs/6703096803/job/18216119546). The problem is that the ci linux workflow already spawns too many runs.

Just extract the "default" steps to a new workflow.

@mkoeppe (Contributor Author) commented Nov 1, 2023

No, Tobias, I explained it correctly in #36610 (comment), and, no, extracting the "default" step to another workflow will not change anything.

@kwankyu (Collaborator) commented Nov 1, 2023

As I don't know how github schedules runners, I cannot be sure. But decreasing "max-parallel" for "minimal", "maximal", "experimental" may allow room for other PR workflow runs. We have 7 days before "release". That would be enough time for the "minimal" and other runs to finish.

@kwankyu (Collaborator) commented Nov 1, 2023

That is part of what Tobias' PR does.

@tobiasdiez (Contributor)

Currently, because "default" is queued after the "standard-pre" jobs, it has to wait until a runner becomes available.

Yes, I agree that the problem is that default is queued after the standard-pre jobs. This is not changed in this PR, but would be fixed by moving the default job to its own independent workflow. Together with a reasonable limit on the max jobs of the ci-linux workflow, you ensure that "default" is run more or less immediately after "default-pre" (it might get delayed by other workflows, but the same problem also happens with the current setup of default after standard-pre).

@kwankyu (Collaborator) commented Nov 1, 2023

Currently, because "default" is queued after the "standard-pre" jobs, it has to wait until a runner becomes available.

Yes, I agree that the problem is that default is queued after the standard-pre jobs. This is not changed in this PR, but would be fixed by moving the default job to its own independent workflow.

As I understand it, since "default-pre" is a job separate from "standard-pre", it is already queued separately from "standard-pre" even though they are in the same workflow.

I guess that "same file or not" doesn't matter in how jobs get runners.

Together with a reasonable limit on the max jobs of the ci-linux workflow, you ensure that "default" is run more or less immediately after "default-pre"

"default" is already run after "default-pre" by needs: [default-pre].

@tobiasdiez (Contributor) commented Nov 1, 2023

You might be right, maybe you don't need to move it to a separate file (I couldn't find good documentation about how github queues jobs). I still maintain that decreasing the max-parallel is the way to go because of how other jobs are added to the queue in the meantime.

With max-parallel = 1 for standard-pre (1 for illustration, not as an actual proposal): default-pre runs, and 1 standard-pre. Once default-pre is finished, default is added to the queue. At that point the queue contains the other 44 standard-pre jobs and maybe a couple of other jobs (from, say, PRs). Now every available runner first picks up the other jobs and finally default; it cannot pick up one of the standard-pre jobs due to the max-parallel limit. (If the running standard-pre job finishes in the meantime, then another one might be picked up before default.) Essentially, there are 59 runners not running standard-pre jobs, so the "other jobs" queue is minimal when default-pre is finished. So in this situation, only one (or say 2) standard-pre jobs are executed before default is started.

With max-parallel = 45 for standard-pre (again for illustration, as the other extreme): default-pre runs, and all standard-pre jobs. Since there are only 60 - 46 = 14 runners not busy with ci-linux, a huge queue of jobs piles up during the default-pre run. So once that job finishes and default is added to the queue, it takes some time until the queue is drained and default is picked up. Note that some other jobs might get priority over ci-linux, so that not even all 46 standard-pre jobs are executed at the beginning. In that case, an available runner first picks up a remaining standard-pre job before default. So in this situation, essentially every standard-pre job has to be started before default is executed.

And based on looking at the logs, we are already in the latter situation: default is only started once every standard-pre job is executed.
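
To make the two scenarios concrete, here is a toy queue simulation; every number in it (60 runners, 45 standard-pre jobs, the job durations, the PR arrival rate) is an assumption for illustration, not a measurement:

```python
# Toy model of the two scenarios above. All numbers are invented:
# 60 hosted runners, 45 "standard-pre" jobs with staggered durations,
# and unrelated PR jobs arriving faster than the runners left over
# by ci-linux can absorb.
RUNNERS = 60
N_STD = 45

def simulate(max_parallel):
    # Each queue entry is (kind, duration in minutes); durations are made up.
    queue = [("default-pre", 120)]
    queue += [("std-pre", 120 + 2 * i) for i in range(N_STD)]
    running = []                           # (finish_time, kind)
    std_before_default = 0
    for t in range(100_000):               # one tick = one minute
        if t % 2 == 0:
            queue.append(("pr", 60))       # assumed PR load: a 60-min job every 2 min
        done = [job for job in running if job[0] <= t]
        running = [job for job in running if job[0] > t]
        if any(kind == "default-pre" for _, kind in done):
            queue.append(("default", 0))   # 'needs: [default-pre]' now satisfied
        deferred = []
        while queue and len(running) < RUNNERS:    # free runners pick up FIFO
            kind, dur = queue.pop(0)
            if (kind == "std-pre"
                    and sum(k == "std-pre" for _, k in running) >= max_parallel):
                deferred.append((kind, dur))       # held back by max-parallel
                continue
            if kind == "default":
                return t, std_before_default       # "default" is picked up here
            if kind == "std-pre":
                std_before_default += 1
            running.append((t + dur, kind))
        queue = deferred + queue

for mp in (1, 45):
    start, n_std = simulate(mp)
    print(f"max-parallel={mp:2}: 'default' starts at t={start} min, "
          f"after {n_std} standard-pre jobs have started")
```

With these invented numbers the model shows the claimed behavior: at max-parallel 1, default starts the minute default-pre ends, with only about 2 standard-pre jobs having run; at max-parallel 45, default waits behind the piled-up PR backlog and starts only after all 45 standard-pre jobs have started.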

@mkoeppe (Contributor Author) commented Nov 1, 2023

I guess that "same file or not" doesn't matter in how jobs get runners.

That's correct.

@mkoeppe (Contributor Author) commented Nov 1, 2023

Since there are only 60 - 46 = 14 runners not busy with the ci-linux, there will be a huge queue of jobs that pile up during the default-pre run.

I've already explained in #36610 (comment) that this is a feature, not a bug. These jobs should really be waiting until the "default" job has pushed the new image.

@mkoeppe (Contributor Author) commented Nov 1, 2023

based on looking at the logs,

I would strongly suggest including links to what you actually looked at, so that we don't have to guess.

we are already in the latter situation: default is only started once every standard-pre job is executed.

No, that's incorrect, as is easily verified.

From "default", https://github.com/sagemath/sage/actions/runs/6700323366/job/18235514573 ("View Raw Logs"):

2023-10-31T01:59:40.9489593Z Requested labels: ubuntu-latest
2023-10-31T01:59:40.9489902Z Job defined at: sagemath/sage/.github/workflows/docker.yml@refs/tags/10.2.beta9
2023-10-31T01:59:40.9490118Z Reusable workflow chain:
2023-10-31T01:59:40.9490252Z sagemath/sage/.github/workflows/ci-linux.yml@refs/tags/10.2.beta9 (eb8417b6107049f287bb8e77844732659bbe86eb)
2023-10-31T01:59:40.9490390Z -> sagemath/sage/.github/workflows/docker.yml@refs/tags/10.2.beta9 (eb8417b6107049f287bb8e77844732659bbe86eb)
2023-10-31T01:59:40.9490518Z Waiting for a runner to pick up this job...
2023-10-31T03:51:55.0386427Z Job is waiting for a hosted runner to come online.
2023-10-31T03:51:59.3104217Z Job is about to start running on the hosted runner: GitHub Actions 57 (hosted)
2023-10-31T03:52:02.6522405Z Current runner version: '2.311.0'

From "opensuse-tumbleweed", standard-pre:

2023-10-31T02:18:41.9831445Z Requested labels: ubuntu-latest
2023-10-31T02:18:41.9831723Z Job defined at: sagemath/sage/.github/workflows/docker.yml@refs/tags/10.2.beta9
2023-10-31T02:18:41.9831983Z Reusable workflow chain:
2023-10-31T02:18:41.9832108Z sagemath/sage/.github/workflows/ci-linux.yml@refs/tags/10.2.beta9 (eb8417b6107049f287bb8e77844732659bbe86eb)
2023-10-31T02:18:41.9832253Z -> sagemath/sage/.github/workflows/docker.yml@refs/tags/10.2.beta9 (eb8417b6107049f287bb8e77844732659bbe86eb)
2023-10-31T02:18:41.9832392Z Waiting for a runner to pick up this job...
2023-10-31T04:14:26.2739100Z Job is waiting for a hosted runner to come online.
2023-10-31T04:14:30.1397059Z Job is about to start running on the hosted runner: GitHub Actions 53 (hosted)
2023-10-31T04:14:33.1014367Z Current runner version: '2.311.0'
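
Reading the waits off these timestamps (a quick check; both timestamps are copied verbatim from the raw logs quoted above):

```python
# Queue waits implied by the two log excerpts above: "default" was
# picked up at 03:51, the opensuse-tumbleweed "standard-pre" job
# only at 04:14, i.e., "default" started first.
from datetime import datetime

def wait(queued, picked_up):
    return datetime.fromisoformat(picked_up) - datetime.fromisoformat(queued)

print(wait("2023-10-31T01:59:40", "2023-10-31T03:51:55"))  # default:      1:52:15
print(wait("2023-10-31T02:18:41", "2023-10-31T04:14:26"))  # standard-pre: 1:55:45
```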

@mkoeppe (Contributor Author) commented Nov 1, 2023

But decreasing "max-parallel" for "minimal", "maximal", "experimental" may allow room for other PR workflow runs. We have 7 days before "release". That would be enough time for the "minimal" and other runs to finish.

I have now pushed a small reduction to correct for standard-sitepackages, as discussed in #36610 (comment)

github-actions bot commented Nov 1, 2023

Documentation preview for this PR (built with commit 35910e2; changes) is ready! 🎉

@kwankyu (Collaborator) left a comment

This looks reasonable to me.

@tobiasdiez (Contributor)

Since there are only 60 - 46 = 14 runners not busy with the ci-linux, there will be a huge queue of jobs that pile up during the default-pre run.

I've already explained in #36610 (comment) that this is a feature, not a bug. These jobs should really be waiting until the "default" job has pushed the new image.

But all of them are still executed before default because they are added to the queue before default-pre is finished.

@mkoeppe (Contributor Author) commented Nov 2, 2023

Also false, as the data I posted above show.

@tobiasdiez (Contributor)

Also false, as the data I posted above show.

This week's data is hard to interpret since the workflow was canceled and then restarted. But I think you agree that at least most standard-pre jobs are started before default is started.

@mkoeppe (Contributor Author) commented Nov 2, 2023

"At least most" is a meaningless phrase.

@kwankyu (Collaborator) commented Nov 2, 2023

I've already explained in #36610 (comment) that this is a feature, not a bug. These jobs should really be waiting until the "default" job has pushed the new image.

But all of them are still executed before default because they are added to the queue before default-pre is finished.

I guess Matthias's "These jobs" refers to PR jobs. Tobias, by "all of them" do you refer to those PR jobs or 46 "standard-pre" jobs? And do you distinguish "execution" from "queuing"? I am trying to understand what you mean.

By the way, frankly, I am sick of this discussion and labeling war (repeated similarly on PRs on our CI infrastructure). Both of you proposed solutions to the same problem. None of us is completely sure that a solution will solve the problem. We should experiment and observe to be sure. I expect that Matthias' solution will work well, as far as I understand the situation. We could confirm that after a mere few days. Even if it fails, it is not a big deal. We are not dealing with a rocket. If it fails, we can just prepare a new solution. Then why are we spending time and mental energy trying to prove (to ourselves) that a solution will work? Let's be practical and constructive.

@mkoeppe (Contributor Author) commented Nov 5, 2023

This is working pretty well (in combination with #36627, which was merged at the same time).

A few jobs slipped through because there was not enough clogging in the last 30 minutes or so of the "default" job.

There were ~45 workflows in the queue at the time that "default" completed; after an hour, this had reduced to ~30 workflows, and 25 workflows were showing "in progress". Some pushes to PRs (likely merging the updated develop branch) temporarily brought the queue up to ~50 again, then it went down to 10.

At the time that the long tail of "standard-pre" finished (= after 6 hours, because of the unexplained timeouts of ubuntu-trusty-toolchain-gcc_9 and ubuntu-mantic), the queue had gone down to 4 workflows, all from the push to the release tag, not from PRs. 16 workflows were showing as "in progress", one of which was the CI Linux workflow with 40 running jobs from "minimal-pre", "standard" and "standard-sitepackages".

We could do some fine-tuning here by:

  • increasing "minimal-pre" by ~5 jobs
  • to balance, decreasing "standard" + "standard-sitepackages" by 5–10 jobs combined

That's done in #36660.

@kwankyu (Collaborator) commented Nov 5, 2023

In addition to that, I still think "minimal" and the others should be put into the queue in smaller chunks, with max-parallel 10 or so (instead of the current default of 30), to allow room for PRs.

@mkoeppe (Contributor Author) commented Nov 5, 2023

--> #36660

@tobiasdiez (Contributor) commented Nov 7, 2023

This is working pretty well (in combination with #36627, which was merged at the same time).

I don't know what you base your evaluation on, but here are the average total execution times (wait + actual run) of the lint workflow runs in the 48h after each release:
2023-11-05 14:03:35+00:00 0:27:38.435484
2023-10-30 23:08:32+00:00 1:21:56.908163
2023-10-21 15:07:11+00:00 0:05:46.085366
2023-10-14 23:09:15+00:00 0:05:46.052632
2023-10-08 13:36:24+00:00 0:40:14.670732
2023-09-27 22:06:42+00:00 0:45:05
2023-09-24 20:23:49+00:00 0:08:51.106667
2023-09-16 14:19:08+00:00 0:11:33.983333
2023-09-10 22:26:52+00:00 0:06:06.437500
2023-09-01 20:02:22+00:00 0:46:29.742424
2023-08-27 19:52:50+00:00 1:26:55.392857
2023-08-20 11:27:14+00:00 0:07:27.966667
2023-08-13 11:44:02+00:00 0:04:06.026316
2023-08-05 23:16:49+00:00 0:04:02.300000
2023-07-30 13:31:38+00:00 0:03:42.760870
2023-07-20 22:46:03+00:00 0:33:37.571429

As you see, this week it's quite bad.

So can we please merge #36610 next, to see how it compares?

@kwankyu (Collaborator) commented Nov 7, 2023

How should I read this

2023-11-05 14:03:35+00:00 0:27:38.435484

Is this about 14 hours? Does this mean that the lint workflow took 14 hours to finish after it started waiting in the queue? Did it start waiting in the queue within 48 hours of the release?

@tobiasdiez (Contributor)

The first column is the datetime of the release/tag; the second is the average execution time of all runs in the 48h after that date.

@kwankyu (Collaborator) commented Nov 7, 2023

Ah, thanks. So the 11-05 record is pretty good compared with the previous 10-30 record. But as the 11-05 record is in 10th place among 16 records, you may say it is still pretty bad.

To improve the situation further, we may experiment with the idea of decreasing max-parallel of "minimal" and others. Would you rebase your PR and only do this, as Matthias is fine-tuning "standard-" things in #36660?

@kwankyu (Collaborator) commented Nov 7, 2023

Anyway, as we are in release-candidate state, I am not sure if we can compare the next record with others fairly...

@mkoeppe (Contributor Author) commented Nov 7, 2023

I agree that the time to completion of short-running jobs like the linter is a meaningful metric. The 0:27:38 time for the latest release seems consistent/plausible based on my observations. Perhaps Tobias could share the script with which he obtained this metric, so that we can continue to monitor it.
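
For reference, a sketch of how such a metric could be computed from the public GitHub REST API (this is an assumed reconstruction, not Tobias's actual script; pagination is omitted for brevity):

```python
# Assumed reconstruction of the metric above: average total execution
# time (queue wait + run) of "lint" workflow runs created within 48h
# of a release, via GitHub's workflow-runs endpoint. The API exposes
# only creation and last-update times, so this is wait plus run time,
# as in the table above.
from datetime import datetime, timedelta
import requests

RUNS_URL = "https://api.github.com/repos/sagemath/sage/actions/workflows/lint.yml/runs"

def average_total_time(release, window=timedelta(hours=48)):
    params = {"created": f">={release.date()}", "per_page": 100}
    runs = requests.get(RUNS_URL, params=params).json()["workflow_runs"]
    durations = []
    for run in runs:
        created = datetime.fromisoformat(run["created_at"].rstrip("Z"))
        if run["status"] == "completed" and release <= created <= release + window:
            updated = datetime.fromisoformat(run["updated_at"].rstrip("Z"))
            durations.append(updated - created)
    return sum(durations, timedelta()) / len(durations) if durations else None

print(average_total_time(datetime(2023, 11, 5, 14, 3, 35)))  # the 11-05 release
```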

@mkoeppe (Contributor Author) commented Nov 7, 2023

I'll also note that the specter raised in #36627 (comment) of about "20 hours" of wait time didn't materialize.

@tobiasdiez (Contributor) commented Nov 7, 2023

Ah, thanks. So the 11-05 record is pretty good compared with the previous 10-30 record. But as the 11-05 record is in 10th place among 16 records, you may say it is still pretty bad.

The comparison with 10-30 is misleading, since back then the ci-linux workflow was restarted, which completely overwhelmed the github runners. But you do see from that day that the more time you give to ci-linux early on, the worse the other workflows behave. Better to compare it with 10-14 or 10-21.

To improve the situation further, we may experiment with the idea of decreasing max-parallel of "minimal" and others. Would you rebase your PR and only do this, as Matthias is fine-tuning "standard-" things in #36660?

The problem is the standard runs, not minimal. You can see this by looking at this week's runs: the first ones after a release are delayed by 30 min (even after default is finished).

@kwankyu (Collaborator) commented Nov 7, 2023

You can see this by looking at this week's runs: the first ones after a release are delayed by 30 min (even after default is finished).

Where can I see the start time (of the first one)? I see the duration, but cannot find the time when it started...

@tobiasdiez (Contributor)

In the raw logs for example.

@kwankyu (Collaborator) commented Nov 7, 2023

OK. So this is when "default" started

2023-11-05T14:24:50.3061975Z Requested labels: ubuntu-latest
2023-11-05T14:24:50.3062295Z Job defined at: sagemath/sage/.github/workflows/docker.yml@refs/tags/10.2.rc0

and this

2023-11-05T14:30:56.7308939Z Requested labels: ubuntu-latest
2023-11-05T14:30:56.7309238Z Job defined at: sagemath/sage/.github/workflows/lint.yml@refs/pull/36504/merge
2023-11-05T14:30:56.7309349Z Waiting for a runner to pick up this job...
2023-11-05T16:16:17.6214605Z Job is waiting for a hosted runner to come online.
2023-11-05T16:16:20.7801235Z Job is about to start running on the hosted runner: GitHub Actions 17 (hosted)

seems to be the first linter workflow started after it. It waited in the queue for about 2 hours before starting, so this one was delayed by roughly 2 hours.

And do you claim that, including the one above, the first ones (for 48 hours after the release) were delayed by 30 minutes on average?

@kwankyu (Collaborator) commented Nov 7, 2023

Matthias thinks that PRs can wait (be delayed) for ~4 hours after a release, since they should be checked by B&T after "default" is built. So we are allowing a ~4 hour delay for PRs after a release. During those ~4 hours, we aim at processing most of the "standard-" jobs. I think this strategy is reasonable.

@tobiasdiez (Contributor) commented Nov 8, 2023

And do you claim that, including the one above, the first ones (for 48 hours after the release) were delayed by 30 minutes on average?

Yes, they are delayed by about 25 mins (30 mins of overall execution time minus about 5 mins of actual runtime). It's not possible to get the delay itself from the github api, only the start and end times.

Matthias thinks that PRs can wait (be delayed) for ~4 hours after a release, since they should be checked by B&T after "default" is built. So we are allowing a ~4 hour delay for PRs after a release. During those ~4 hours, we aim at processing most of the "standard-" jobs. I think this strategy is reasonable.

And I don't agree that this delay helps reduce the average execution time of the PR workflows, for the reasons explained in #36627 (comment). Again, compare the following options:

  • Option a): Clog everything with ci-linux runs. Assume for simplicity that we are in the perfectly fine-tuned scenario where all clogging runs end at exactly the same moment as the default run (say after exactly 4h), no other workflows are run, and no unstarted clogging runs are left after ci-linux. So after 4h you start working on the queue.
  • Option b): No clogging (i.e. only a minimal number of ci-linux runs). In this case, "Build & Test" runs submitted in the first 4h (the time it takes to finish default) have some additional build time coming from the incremental run; say 2h per run.

In both cases, PRs submitted after 4h use the fresh docker image, so they have the same run time. The most unfortunate situation is a PR submitted directly at 3:59. With option a) it would be delayed by 1 min and then finishes after, say, 4h of run time, i.e. exactly 8h after the new release. With option b) it is started immediately and finishes after 4+2 = 6h of run time, i.e. 10h after the release. A PR submitted 2h after a release finishes in both cases at 8h, while a PR submitted directly at the release finishes after 8h and 6h, respectively (see the worked check below). So with option a) one would only get a quicker drain of the queue between 8h and 10h after a release. But from the observation at #36616 (comment)

There were ~45 workflows in the queue at the time that "default" completed; after an hour, this had reduced to ~30 workflows, and 25 workflows were showing "in progress". Some pushes to PRs (likely merging the updated develop branch) temporarily brought the queue up to ~50 again, then it went down to 10.

there was no shortage of runners after 4h, so the quicker drain at the 8h mark brought no advantage. I would even go so far as to conjecture that with 10 (instead of 50) standard ci runs, not a single workflow run would have been delayed this week.
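
A worked check of the finish-time arithmetic above, using the assumed figures from the text (fresh image ready at T+4h, clean B&T run 4h, incremental run 4+2 = 6h):

```python
# Finish times for a PR submitted `sub` hours after a release, under
# the two options compared above. All numbers are the assumed ones
# from the text, not measurements.
DEFAULT_READY = 4.0          # hours after the release
CLEAN_RUN = 4.0
INCREMENTAL_RUN = CLEAN_RUN + 2.0

def finish_a(sub):           # option a): clogging; wait for the fresh image
    return max(sub, DEFAULT_READY) + CLEAN_RUN

def finish_b(sub):           # option b): no clogging; start immediately
    run = CLEAN_RUN if sub >= DEFAULT_READY else INCREMENTAL_RUN
    return sub + run

for sub in (0.0, 2.0, 3.98):
    print(f"PR at T+{sub}h: a) done at T+{finish_a(sub)}h, "
          f"b) done at T+{finish_b(sub)}h")
# PR at T+0.0h:  a) done at T+8.0h, b) done at T+6.0h
# PR at T+2.0h:  a) done at T+8.0h, b) done at T+8.0h
# PR at T+3.98h: a) done at T+8.0h, b) done at T+9.98h
```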

But as you said in #36616 (comment), there is no harm in simply trying out my alternative solution (after the next release, to still test out #36660) and then choosing the one that works best.

@kwankyu (Collaborator) commented Nov 8, 2023

And I don't agree that this delay helps reduce the average execution time of the PR workflows, for the reasons explained in #36627 (comment).

It seems that you and Matthias have different priorities. After a release, Matthias uses the delay (~4 hours long) to finish as many of the "standard-" jobs as possible, so that the results can be used to fix things with enough time before the next release. But you are concerned with PR jobs getting run without much interruption after a release.

I am inclined toward Matthias' strategy, but want the "standard-" jobs not to slip past the ~4 hour slot and hinder PR jobs started after the delay. It also matters that the ~4 hour delay usually falls on the weekend, when few people work on sage.

@tobiasdiez (Contributor)

It's not like I'm proposing to disable the ci-linux workflow. With 10 parallel runs, the standard runs still complete in 16h. That should give plenty of time to discover issues.

@mkoeppe (Contributor Author) commented Nov 10, 2023

Sharing my observations (similar to #36616 (comment)) for the current run of 10.2.rc1 (started at T = Fri, 10 Nov 2023 16:14:50 GMT; note that the fine-tuning #36660 has not been merged yet):

  • again, a small number of jobs slipped through the clogging
  • at T + 2.5 hours, "default" completed
  • at T + 3 hours, in-progress workflows had ramped up to 15 workflows (55 queued)
  • at T + 3.5 hours, in_progress had ramped up to 23 workflows (32 queued)
  • at T + 6 hours, "standard-pre" timed out (maxima build hangs on ubuntu-mantic, #36672), so "standard" and "standard-sitepackages" started
  • at T + 48 hours (Sun Nov 12 2023 15:55:26 GMT), 10.2.rc2 was pushed. At this time, the CI Linux run was still busy with the penultimate block of jobs "experimental-0-o".

vbraun pushed a commit to vbraun/sage that referenced this pull request Nov 14, 2023

sagemathgh-36660: .github/workflows/ci-linux.yml: Fine-tune max-parallel

... for constructive clogging, sagemath#36616 (comment)

URL: sagemath#36660
Reported by: Matthias Köppe
Reviewer(s): Kwankyu Lee
@mkoeppe (Contributor Author) commented Nov 14, 2023

The run for 10.2.rc2 (https://github.com/sagemath/sage/actions/runs/6842945144) was still busy with the last block ("experimental-p-z") of jobs when 10.2.rc3 was pushed. I've canceled it now.

@mkoeppe (Contributor Author) commented Nov 14, 2023

Run for 10.2.rc3 (https://github.com/sagemath/sage/actions/runs/6870325951) started at T = Tue, 14 Nov 2023 22:33:32 GMT; empty queue

@tobiasdiez (Contributor)

Around 12 workflows had slipped through the clogging.

So which B&T workflows actually profited from the clogging (i.e. got delayed long enough to run after the default workflow finished)?

@tobiasdiez (Contributor) commented Nov 21, 2023

Here are the most recent runs:

2023-11-17 22:22:26+00:00 0:17:49.867925 53
2023-11-14 22:13:59+00:00 0:14:07.058824 68
2023-11-12 20:06:12+00:00 0:34:31.106667 75
2023-11-10 15:41:57+00:00 0:30:56.926829 82
2023-11-05 14:03:35+00:00 0:24:32.520548 73
2023-10-21 15:07:11+00:00 0:05:46.085366 82

The last column shows the number of runs (as an indication of how busy it was). Although each of these periods saw fewer runs than 10-21, the execution times are a factor of 3 to 7 higher than before this PR was merged.

vbraun pushed a commit to vbraun/sage that referenced this pull request Dec 22, 2023

sagemathgh-36941: Reduce the number of systems tested with the minimal config

Alternative solution for sagemath#36726. Since the purpose of the "minimal" ci runs is to check the documentation of the minimal build requirements, it is overkill to run it for all systems. Thus, we only run it for "ubuntu-focal", which is also the default system for PRs.

This also hugely reduces the load on the ci, somewhat alleviating the issues introduced by sagemath#36616 (see e.g. sagemath#36616 (comment)).

URL: sagemath#36941
Reported by: Tobias Diez
Reviewer(s): Dima Pasechnik
vbraun pushed a commit to vbraun/sage that referenced this pull request Feb 19, 2024

sagemathgh-36694: CI conda: On `pull_request`, only run 1 macOS job and 1 Linux job

As discussed in sagemath#36678 (comment), sagemath#36616 (comment).

On pushes to a tag, all 3 Linux and 3 macOS jobs are still run, as demonstrated in https://github.com/mkoeppe/sage/actions/runs/6829619271

After merging, the branch protection rule https://github.com/sagemath/sage/settings/branch_protection_rules/33965567 will need to be updated. It controls which workflows show as "Required" in the PR Checks.

URL: sagemath#36694
Reported by: Matthias Köppe
Reviewer(s): Kwankyu Lee, Tobias Diez
Labels: c: scripts; disputed (PR is waiting for community vote, see https://groups.google.com/g/sage-devel/c/IgBYUJl33SQ)