Feature 3109 benchmarking grid diag #3155
Conversation
The last "benchmarking" PR, #3065, included changes to ensemble_stat.cc like this:
#ifdef WITH_PROFILER
#include "ctrack.hpp"
#endif
...
// Save the CTRACK metrics
#ifdef WITH_PROFILER
ctrack::result_print();
#endif
I was expecting this PR to include similar changes in the Grid-Diag source code, but none are included.
Is this intentional or do you have uncommitted changes on your feature branch that you meant to include in this PR?
I didn't want to include the updates to the Grid-Diag code since the changes to benchmark.py were for supporting running MET multiple times. I can include them for reference.
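For reference, the analogous change in grid_diag.cc would presumably mirror the ensemble_stat.cc hunks quoted above, along these lines (a sketch, not the actual uncommitted code):
#ifdef WITH_PROFILER
#include "ctrack.hpp"
#endif
...
// Save the CTRACK metrics before Grid-Diag exits
#ifdef WITH_PROFILER
ctrack::result_print();
#endif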
@bikegeek I approve of these changes. Thanks for your work on this.
FYI, I tested on seneca with the following:
runas met_test
cd /d1/projects/MET/MET_pull_requests/met-12.1.0/rc1/MET-feature_3109_benchmarking_grid_diag/internal/scripts/benchmark
conda activate /d1/personal/mwin/miniconda3/envs/mp_py312
# Edit benchmark.yaml with my command
python benchmark.py
And it ran as expected.
I also checked that running from any directory other than benchmark does produce a failure.
I approve of these changes. I built and ran it. The output files were generated at $BENCHMARK_OUTPUT_BASE.
Expected Differences
Do these changes introduce new tools, command line arguments, or configuration file options? [No]
If yes, please describe:
Do these changes modify the structure of existing or add new output data types (e.g. statistic line types or NetCDF variables)? [No]
If yes, please describe:
Pull Request Testing
Describe testing already performed for these changes:
Instrumented the Grid-Diag code
Ran the benchmarking tool for multiple runs
Verified that the output was in the expected location and contained the correct information
Recommend testing for the reviewer(s) to perform, including the location of input datasets, and any additional instructions:
Refer to the ReadtheDocs Contributor's Guide:
https://metplus.readthedocs.io/projects/met/en/feature_3109_benchmarking_grid_diag/Contributors_Guide/code_profiling.html
Experiment with the instrumented code on 'seneca':
/d1/projects/GRID_DIAG_OPTIMIZATION/latest/MET/internal/scripts/benchmark
Run in bash.
Change num_runs in the benchmark.yaml config file and generate consolidated reports in the output directory specified in benchmark.yaml.
The Python code (to run the MET commands and create the final reports) is located on 'seneca': /d1/projects/GRID_DIAG_OPTIMIZATION/latest/MET/internal/scripts/benchmark
Source /d1/projects/GRID_DIAG_OPTIMIZATION/latest/setup_latest.bash to set up the environment.
Use a Python 3.12 environment.
Modify the benchmark.yaml config file.
Run the Python script:
python benchmark.py
Verify that output is generated in the specified output directory.
Instrument any other functions (or remove instrumentation), rebuild MET, and re-run the benchmarking tool (benchmark.py); a sketch of per-function instrumentation follows this list.
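For the instrumentation step, here is a minimal sketch of how an individual Grid-Diag function might be tracked. The CTRACK macro placement and the function name process_grid are illustrative assumptions, not the actual committed changes; see the ctrack documentation and the Contributor's Guide page linked above for the real usage:
static void process_grid() {
   // Assumed ctrack usage: the CTRACK macro is expected to record this
   // function's runtime when MET is built with WITH_PROFILER defined.
   #ifdef WITH_PROFILER
   CTRACK;
   #endif
   // ... existing Grid-Diag processing ...
}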
Do these changes include sufficient documentation updates, ensuring that no errors or warnings exist in the build of the documentation? [Yes]
Do these changes include sufficient testing updates? [No]
Will this PR result in changes to the MET test suite? [No]
If yes, describe the new output and/or changes to the existing output:
Will this PR result in changes to existing METplus Use Cases? [No]
If yes, create a new Update Truth METplus issue to describe them.
Do these changes introduce new SonarQube findings? [Yes or No]
If yes, please describe:
Please complete this pull request review in time for the RC1 release.
Pull Request Checklist
See the METplus Workflow for details.
Select: Reviewer(s) and Development issue
Select: Milestone as the version that will include these changes
Select: METplus-X.Y Support project for bugfix releases or MET-X.Y Development project for the next coordinated release