
Conversation


@natalieb-noaa commented Jun 16, 2025

Pull Request Testing

  • Describe testing already performed for these changes:

    I've run the full Testing workflow (including all use case tests) in GHA on this branch.
    I've also run the full Testing workflow (including all unit tests) in GHA on the MET branch feature_2718_use-python-diff, which currently clones this METplus branch to run the diff tests.
    I've also run the MET tests locally on seneca.
    When making changes to handle specific cases that arose in MET, I ran the METplus tests with some debugging checks (e.g., flagging certain file types, which is how I identified a .log file in use case output) to make sure I wasn't making changes that would negatively impact what is compared in the METplus workflow; a sketch of this file-type skipping appears after the note below. (The METplus unit tests proved helpful in this way, too.)

  • Recommend testing for the reviewer(s) to perform, including the location of input datasets, and any additional instructions:

    Manually trigger the METplus Testing workflow in GHA on this branch. NOTE: Use Case Test (marine_and_cryosphere:6) will fail because I changed an output file name and the truth data has not been updated yet.
    Manually trigger the MET Testing workflow in GHA on the branch feature_2718_use-python-diff.

  • Do these changes include sufficient documentation updates, ensuring that no errors or warnings exist in the build of the documentation? [Yes or No]
    Use case documentation was updated to reflect the change in the filename.

  • Do these changes include sufficient testing updates? [Yes or No]
    I did not add any new unit tests.

  • Will this PR result in changes to the test suite? [Yes]

    If yes, describe the new output and/or changes to the existing output:

    Output shouldn't change.

  • Do these changes introduce new SonarQube findings? [No]

    If yes, please describe:

  • Please complete this pull request review by [6/20/25].

NOTE: See comments in #2999 for reasoning behind some decisions made.
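As an illustration of the file-type skipping discussed in the testing notes above, here is a minimal sketch in Python. It assumes the diff utility walks output directories and compares files pairwise; the function name, the skip list, and the no-extension rule are written for illustration and are not necessarily the names or values used in the METplus code.

```python
# Minimal sketch of extension-based skipping, assuming the diff utility walks
# output directories and compares files pairwise. The function name and the
# exact skip list are hypothetical, not the names used in the METplus code.
import os

SKIP_EXTENSIONS = {'.log'}  # assumed: log files are not meaningful to compare

def should_skip(filepath):
    """Return True if a file should be excluded from the diff comparison."""
    _, ext = os.path.splitext(filepath)
    if not ext:
        # assumed: files without an extension are skipped as well
        return True
    return ext.lower() in SKIP_EXTENSIONS

# Example usage
for name in ('out/point_stat.txt', 'out/run.log', 'out/README'):
    print(name, '-> skip' if should_skip(name) else '-> compare')
```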

Pull Request Checklist

See the METplus Workflow for details.

  • Add any new Python packages to the METplus Components Python Requirements table.
  • For any new datasets, add an entry to the METplus Verification Datasets Guide.
  • Review the source issue metadata (required labels, projects, and milestone).
  • Complete the PR definition above.
  • Ensure the PR title matches the feature or bugfix branch name.
  • Define the PR metadata, as permissions allow.
    Select: Reviewer(s) and Development issue
    Select: Milestone as the version that will include these changes
    Select: Coordinated METplus-X.Y Support project for bugfix releases or METplus-Wrappers-X.Y.Z Development project for official releases
  • After submitting the PR, select the ⚙️ icon in the Development section of the right hand sidebar. Search for the issue that this PR will close and select it, if it is not already selected.
  • After the PR is approved, merge your changes. If permissions do not allow this, request that the reviewer do the merge.
  • Close the linked issue and delete your feature or bugfix branch from GitHub.

@natalieb-noaa (Author)

@georgemccabe, I hope you're OK with my requesting you to review this. If not, please let me know who would be best. Thanks!

@natalieb-noaa added this to the METplus-6.2.0 milestone Jun 16, 2025
@natalieb-noaa moved this from 🩺 Needs Triage to 🔎 In review in METplus-6.2 Development Jun 16, 2025
@georgemccabe (Collaborator) left a comment

These changes look good. It is helpful to have a count of files compared, which will be useful for future enhancements that gather metrics from use case test runs. I added a few unit tests to exercise the new value diff logic (e.g., significant figures, values close to zero) and the new file types to skip (e.g., logs and files with no extension).

I see that a use case output was changed so that the output data does not include the now-skipped .log extension. The truth data for that use case group will have to be updated.

Thanks for making these updates! It is exciting that we are closer to using the same diff utility for the MET and METplus tests!
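A minimal sketch of the kind of tolerant value comparison described in the review above: compare numbers to a fixed number of significant figures and treat values very close to zero as equal. The names, thresholds, and significant-figure count here are assumptions for illustration, not the actual METplus diff utility API.

```python
# Minimal sketch of tolerant numeric comparison: values are compared to a
# fixed number of significant figures, and values very close to zero are
# treated as equal. Names and thresholds are illustrative assumptions.
import math

SIG_FIGS = 6          # assumed number of significant figures to compare
NEAR_ZERO = 1e-10     # assumed threshold below which values count as zero

def round_sig(value, sig=SIG_FIGS):
    """Round a value to the given number of significant figures."""
    if value == 0:
        return 0.0
    return round(value, -int(math.floor(math.log10(abs(value)))) + (sig - 1))

def values_match(a, b):
    """Return True if two floats should be considered equal by the diff."""
    # Treat values that are both effectively zero as equal
    if abs(a) < NEAR_ZERO and abs(b) < NEAR_ZERO:
        return True
    return round_sig(a) == round_sig(b)

print(values_match(1.2345678e-3, 1.2345679e-3))  # True: same to 6 sig figs
print(values_match(5e-12, -3e-12))               # True: both near zero
print(values_match(1.0, 1.001))                  # False: differ within 6 sig figs
```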

@georgemccabe merged commit bff0393 into develop Jun 18, 2025
80 of 81 checks passed
@github-project-automation bot moved this from 🔎 In review to 🏁 Done in METplus-6.2 Development Jun 18, 2025
@georgemccabe deleted the feature_2999_improve_diffs_for_met_tests branch June 18, 2025 16:28
@georgemccabe changed the title from "Feature 2999 improve diffs for met tests" to "Feature #2999 improve diffs for met tests" Jun 18, 2025
georgemccabe pushed a commit that referenced this pull request Jun 18, 2025
Successfully merging this pull request may close these issues.

Enhancement: Improve differencing logic so that it does not falsely flag differences when used for MET unit test output