Feature #2999 improve diffs for met tests #3019
Conversation
incorporating latest changes to develop (mostly so this branch will pass tests)
@georgemccabe, I hope you're OK with me asking you to review this? If not, please let me know who would be best. Thanks!
These changes look good. It is helpful to have a count of files compared for future enhancements to gather metrics on use case test runs. I added a few unit tests to exercise the new value diff logic (e.g. significant figures, close to zero) and the new file types to skip (e.g. logs and files with no extension).
I see that a use case output was changed so that the output data no longer includes the now-skipped .log extension. The truth data for that use case group will have to be updated.
Thanks for making these updates! It is exciting that we are closer to using the same diff utility for the MET and METplus tests!
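For context, here is a minimal sketch of the kind of value comparison described above (significant figures plus a close-to-zero absolute tolerance). The function name, parameters, and tolerances (`values_match`, `sig_figs`, `zero_tol`) are hypothetical and for illustration only, not the actual METplus implementation:

```python
import math

def values_match(expected, actual, sig_figs=6, zero_tol=1e-6):
    """Compare two floats to a number of significant figures,
    treating values near zero with an absolute tolerance."""
    # Near zero, a relative (significant-figure) comparison is
    # unstable, so fall back to an absolute tolerance instead.
    if abs(expected) < zero_tol and abs(actual) < zero_tol:
        return True

    def round_sig(value):
        # Round to sig_figs significant figures.
        if value == 0:
            return 0.0
        exponent = math.floor(math.log10(abs(value)))
        return round(value, sig_figs - 1 - exponent)

    return round_sig(expected) == round_sig(actual)
```

For example, `values_match(3.14159265, 3.14159270)` returns True at the default six significant figures, and `values_match(1e-7, -2e-7)` returns True because both values fall inside the close-to-zero tolerance.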
Pull Request Testing
Describe testing already performed for these changes:
I've run the full (including all use case tests) Testing workflow in GHA on this branch.
I've also run the full (including all unit tests) Testing workflow in GHA on MET branch feature_2718_use-python-diff, which is currently cloning this METplus branch for running diff tests.
I've also run the tests on MET locally on seneca.
When making changes to handle specific cases that arose in MET, I ran the METplus tests with some debugging checks (e.g. flagging certain file types, which is how I identified a .log file in use case output) to make sure my changes would not negatively impact what was being compared in the METplus workflow. The METplus unit tests proved helpful in this way, too.
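A minimal sketch of a file-type skip check like the one described above (log files and files with no extension are ignored). The helper name and extension list (`should_skip`, `SKIP_EXTENSIONS`) are assumptions for illustration, not the actual code in this PR:

```python
import os

# Extensions to exclude from comparison (illustrative list).
SKIP_EXTENSIONS = ('.log',)

def should_skip(filename):
    """Return True for files the diff logic should ignore:
    log files and files with no extension."""
    _, ext = os.path.splitext(os.path.basename(filename))
    # Files without an extension are skipped, as are known
    # non-data extensions such as .log.
    return ext == '' or ext.lower() in SKIP_EXTENSIONS
```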
Recommend testing for the reviewer(s) to perform, including the location of input datasets, and any additional instructions:
Manually trigger the METplus Testing workflow in GHA on this branch. NOTE: Use Case Test (marine_and_cryosphere:6) will fail, because I changed an output file name, and the truth data has not been updated yet.
Manually trigger the MET Testing workflow in GHA on branch feature_2718_use-python-diff.
Do these changes include sufficient documentation updates, ensuring that no errors or warnings exist in the build of the documentation? [Yes or No]
Use case documentation was updated to reflect the change in the filename.
Do these changes include sufficient testing updates? [Yes or No]
I did not add any new unit tests.
Will this PR result in changes to the test suite? [Yes]
If yes, describe the new output and/or changes to the existing output:
Output content shouldn't change, aside from the renamed output file noted above.
Do these changes introduce new SonarQube findings? [No]
If yes, please describe:
Please complete this pull request review by 6/20/25.
NOTE: See comments in #2999 for reasoning behind some decisions made.
Pull Request Checklist
See the METplus Workflow for details.
Select: Reviewer(s) and Development issue
Select: Milestone as the version that will include these changes
Select: Coordinated METplus-X.Y Support project for bugfix releases or METplus-Wrappers-X.Y.Z Development project for official releases