@bikegeek (Contributor) commented May 18, 2025

Expected Differences

  • Do these changes introduce new tools, command line arguments, or configuration file options? [No]

    If yes, please describe:

  • Do these changes modify the structure of existing or add new output data types (e.g. statistic line types or NetCDF variables)? [No]

    If yes, please describe:

Pull Request Testing

  • Describe testing already performed for these changes:

    Instrumented the Grid-Diag code.
    Ran the benchmarking tool for multiple runs.
    Verified that the output was in the expected location and contained the correct information.

  • Recommend testing for the reviewer(s) to perform, including the location of input datasets, and any additional instructions:

    Refer to the ReadtheDocs Contributor's Guide:
    https://metplus.readthedocs.io/projects/met/en/feature_3109_benchmarking_grid_diag/Contributors_Guide/code_profiling.html

    Experiment with the instrumented code on 'seneca'. The Python code (which runs the MET commands and creates the final reports) is located at /d1/projects/GRID_DIAG_OPTIMIZATION/latest/MET/internal/scripts/benchmark. Run the following steps in bash; a consolidated sketch of the full session appears after this list.

  • Source setup_latest.bash to set up the environment variables:

      cd /d1/projects/GRID_DIAG_OPTIMIZATION
      source ./setup_latest.bash

  • Activate a Python 3.12 environment:

      conda activate /d1/personal/mwin/miniconda3/envs/mp_py312

  • Change to the benchmark directory (benchmark.py must be run from this directory):

      cd /d1/projects/GRID_DIAG_OPTIMIZATION/latest/MET/internal/scripts/benchmark

  • Modify the benchmark.yaml config file: change num_runs and set the output directory where the consolidated reports will be generated.

  • Run the Python script:

      python benchmark.py

  • Verify that output is generated in the output directory specified in benchmark.yaml.

  • To benchmark other code paths, instrument additional functions (or remove instrumentation), rebuild MET, and re-run the benchmarking tool (benchmark.py).
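For reference, here is the same sequence as a single bash session. This is a sketch assembled from the steps above, not part of the PR itself; the num_runs key name is taken from this description and $BENCHMARK_OUTPUT_BASE from a review comment below, so confirm both against benchmark.yaml and setup_latest.bash before relying on them.

cd /d1/projects/GRID_DIAG_OPTIMIZATION
source ./setup_latest.bash          # set up environment variables
conda activate /d1/personal/mwin/miniconda3/envs/mp_py312    # Python 3.12 environment
cd /d1/projects/GRID_DIAG_OPTIMIZATION/latest/MET/internal/scripts/benchmark
# Edit benchmark.yaml before running: set num_runs and the report output directory.
python benchmark.py                 # runs the MET commands and creates the consolidated reports
ls "$BENCHMARK_OUTPUT_BASE"         # assumed output location; confirm the reports were written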

  • Do these changes include sufficient documentation updates, ensuring that no errors or warnings exist in the build of the documentation? [Yes]

  • Do these changes include sufficient testing updates? [No]

  • Will this PR result in changes to the MET test suite? [No]

    If yes, describe the new output and/or changes to the existing output:

  • Will this PR result in changes to existing METplus Use Cases? [No]

    If yes, create a new Update Truth METplus issue to describe them.

  • Do these changes introduce new SonarQube findings? [Yes or No]

    If yes, please describe:

  • Please complete this pull request review in time for the RC1 release.

Pull Request Checklist

See the METplus Workflow for details.

  • Review the source issue metadata (required labels, projects, and milestone).
  • Complete the PR definition above.
  • Ensure the PR title matches the feature or bugfix branch name.
  • Define the PR metadata, as permissions allow.
    Select: Reviewer(s) and Development issue
    Select: Milestone as the version that will include these changes
    Select: METplus-X.Y Support project for bugfix releases or MET-X.Y Development project for the next coordinated release
  • After submitting the PR, select the ⚙️ icon in the Development section of the right hand sidebar. Search for the issue that this PR will close and select it, if it is not already selected.
  • After the PR is approved, merge your changes. If permissions do not allow this, request that the reviewer do the merge.
  • Close the linked issue and delete your feature or bugfix branch from GitHub.

@bikegeek moved this from 🩺 Needs Triage to 🔎 In review in METplus-6.1 Development May 20, 2025
@JohnHalleyGotway (Collaborator) left a comment

The last "benchmarking" PR, #3065, included changes to ensemble_stat.cc like this:

#ifdef WITH_PROFILER
#include "ctrack.hpp"
#endif
...
  // Save the CTRACK metrics 
  #ifdef WITH_PROFILER
  ctrack::result_print();
  #endif

I was expecting this PR to include similar changes in the Grid-Diag source code, but none are included.

Is this intentional or do you have uncommitted changes on your feature branch that you meant to include in this PR?

@bikegeek (Contributor) commented May 20, 2025 via email

@JohnHalleyGotway (Collaborator) left a comment

@bikegeek I approve of these changes. Thanks for your work on this.

FYI, I tested on seneca with the following:

runas met_test
cd /d1/projects/MET/MET_pull_requests/met-12.1.0/rc1/MET-feature_3109_benchmarking_grid_diag/internal/scripts/benchmark
conda activate /d1/personal/mwin/miniconda3/envs/mp_py312
# Edit benchmark.yaml with my command
python benchmark.py

And it ran as expected.

I also checked that running from any directory other than benchmark does produce a failure.

@hsoh-u (Collaborator) left a comment

I approve of these changes. I built and ran it. The output files were generated at $BENCHMARK_OUTPUT_BASE.

@bikegeek merged commit 8a3327f into develop May 21, 2025
41 checks passed
@github-project-automation bot moved this from 🔎 In review to 🏁 Done in METplus-6.1 Development May 21, 2025
@bikegeek deleted the feature_3109_benchmarking_grid_diag branch May 21, 2025 16:33
JohnHalleyGotway added a commit that referenced this pull request May 21, 2025
* Per #3155, update the version and release notes for version 12.1.0-rc1.

* Per #3155, add upgrade instructions.

* Per #3157, tweak bugfix wording.

* Per #3157, improve formatting of configuration options.

* Per #3157, remove empty METbaseimage testing environment section from RC1 release notes.
Development

Successfully merging this pull request may close these issues.

Enhance the benchmarking logic to support multiple runs and instrument the Grid-Diag tool to report metrics