Enable compile_and_test_for_board to skip if nothing changed #19546
Conversation
Maybe @Teufelchen1 or @kaspar030 would be interested in this. Some open questions are:
Follow-up ideas (not for this PR) would be:
Force-pushed from f17daf3 to 940dac4
Looks fine and works. Please add the documentation.
Force-pushed from 940dac4 to 38d8cd4
I added a bit more info in a comment; I don't want to clutter the help string.
Force-pushed from 38d8cd4 to 0465b6d
I also thought about that but couldn't come up with an ideal solution. Have you tried a status code?
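(For reference, a minimal sketch of what a status-code approach could look like, assuming the hash check is exposed as a make target whose exit status says whether the inputs changed; the exact semantics here are assumptions, not what this PR implements:)

```sh
# Sketch only: skip the flash/test steps when the hash check reports "unchanged".
# Assumes the target exits 0 when the test inputs changed since the last run.
if make -C tests/shell BOARD=native test-input-hash-changed; then
    make -C tests/shell BOARD=native flash test
else
    echo "test inputs unchanged, skipping"
fi
```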
👍
Currently the hashes are stored in the following format:
The results are stored in a different format:
By default, the script will clean the results if they exist by removing the existing results directory. My solution for this was to namespace the hashes directory so it does not get removed between runs (which would defeat the point). The first question is whether namespacing the hashes directory like this is a good way to preserve the hashes. The second question is whether to add a command (arg flag) to clean the hashes, since they will persist until they are either manually removed or the results directory changes.
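For illustration, here is a sketch of how the namespaced hashes could sit alongside the results, based on the `<results_base>/<board>/hashes/<app_dir>/test-input-hash.sha1` path used by this PR; everything else about the results layout shown below is assumed:

```
<results_base>/
└── native/                          # <board>
    ├── hashes/                      # namespaced; survives cleaning of the results
    │   └── tests/
    │       └── shell/               # <app_dir>
    │           └── test-input-hash.sha1
    └── ...                          # per-application results (layout assumed)
```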
Force-pushed from 0465b6d to c0c0623
ping
ping
Force-pushed from 1d8eb0b to a439442
I think namespacing is fine. I would rather have a different folder than messing with clean. I would be confused when I clean something and some arbitrary hashes remain.
Hm, voting against that. If you don't care about the hashes, don't pass the flag. On a side note, this tool suffers from the common RIOT issue of not being exposed to users. Ideally, this PR would add a change to a handbook somewhere to explain its abilities to future RIOT developers...
True, maybe a fairly obvious entrypoint in the docs for all the tools; why solve this problem when you can try to tackle all the problems.
bors merge
🕐 Waiting for PR status (GitHub check) to be set, probably by CI. Bors will automatically try to run when all required PR statuses are set.
bors merge
19546: Enable compile_and_test_for_board to skip if nothing changed r=benpicco a=MrKevinWeiss
20022: pkg/lwip: add support for slipdev r=benpicco a=benpicco
20025: tests/drivers/at: fix device table overflow r=benpicco a=krzysztof-cabaj
Co-authored-by: MrKevinWeiss <weiss.kevin604@gmail.com>
Co-authored-by: Benjamin Valentin <benjamin.valentin@ml-pa.com>
Co-authored-by: krzysztof-cabaj <kcabaj@gmail.com>
Build failed (retrying...):
19546: Enable compile_and_test_for_board to skip if nothing changed r=benpicco a=MrKevinWeiss
Co-authored-by: MrKevinWeiss <weiss.kevin604@gmail.com>
Build failed:
Well that seems unrelated...
bors merge
19546: Enable compile_and_test_for_board to skip if nothing changed r=MrKevinWeiss a=MrKevinWeiss
Co-authored-by: MrKevinWeiss <weiss.kevin604@gmail.com>
Build failed:
Force-pushed from a439442 to 650354c
I am starting to think that it is me...
bors merge
19546: Enable compile_and_test_for_board to skip if nothing changed r=benpicco a=MrKevinWeiss
19944: cpu/esp32: add low-level LCD parallel interface using LCD peripheral r=benpicco a=gschorcht
Co-authored-by: MrKevinWeiss <weiss.kevin604@gmail.com>
Co-authored-by: Gunar Schorcht <gunar@schorcht.net>
Build failed (retrying...):
19546: Enable compile_and_test_for_board to skip if nothing changed r=benpicco a=MrKevinWeiss
Co-authored-by: MrKevinWeiss <weiss.kevin604@gmail.com>
Build failed:
Force-pushed from 650354c to 328487c
It was just a copy-paste error.
RIOT_TEST_HASH_DIR represents the directory in which to generate the test-input-hash.sha1 file used to check whether a test has changed; it uses BINDIR by default. Since it defaults to BINDIR, it should have no effect on running systems. This allows a bit more fine-grained control, especially when using the compile_and_test_for_board.py script.
This target allows one to check whether a test-input-hash exists and whether it differs from a newly computed one, adding basic support for skipping tests when nothing needs to be run.
This will use the make test-input-hash-changed feature to save the test hashes with the results and optionally skip running the test if nothing has changed. Murdock already has this feature, but it is not easily accessible. This should prevent unneeded flash cycles as well as speed up repeated rerunning of tests for boards.
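A rough usage sketch combining the two pieces above (the /tmp path and invocation details are illustrative assumptions; only `RIOT_TEST_HASH_DIR` and the `test-input-hash-changed` target come from the changes described here):

```sh
# Store the hash outside BINDIR so cleaning the build does not discard it,
# then ask make whether the test inputs changed since the last run.
RIOT_TEST_HASH_DIR=/tmp/riot-test-hashes/native/tests/shell \
    make -C tests/shell BOARD=native test-input-hash-changed
```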
Force-pushed from 328487c to ceec795
rebased
Thanks!
Contribution description
The overall goal of this PR is to be able to keep running the `compile_and_test_for_board.py` script without destroying one's boards. Useful if you are a board owner that wants to keep testing on master (say with nightlies, say in a CI). For example, I could now just run and rerun the following:
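```
./dist/tools/compile_and_test_for_board/compile_and_test_for_board.py . <MY_BOARD> -c
```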
and monitor the results, even after updating a branch.
This is because the hashes now get stored with the results (thanks to the `RIOT_TEST_HASH_DIR` var), which means cleaning will not wipe them out. Specifically in `<results_base>/<board>/hashes/<app_dir>/test-input-hash.sha1`.
I tried to do as much as I could with make and only alter the Python script when needed.
Testing procedure
Clear the results folder and run:
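```
./dist/tools/compile_and_test_for_board/compile_and_test_for_board.py . native --applications tests/shell -c
```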
It should execute the test...
Then the same command again should skip it.
Try changing something (say `tests/shell/01-run.py`) and rerun; it should trigger the test again.
...finally, running without the changes check should run the test as usual, even if nothing changes:
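```
./dist/tools/compile_and_test_for_board/compile_and_test_for_board.py . native --applications tests/shell
```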
Issues/PRs references
Might help with the surprises we got in #19469