Feature #1626 TCDiag updates to wrapper and basic use case #2248
Conversation
…ccurring, ci-run-diff, ci-skip-unit-tests
…ci-run-diff, ci-skip-unit-tests
…f LOOP_ORDER is still used and warn that it should be removed. Add debug log message to alert user that only the first init/valid time is being processed and how to change that if desired
…operly check for null/NaN values
…n, call that function in NC value diff to prevent differences that are smaller than 5 decimal places
…-DIAG in the METplus wrappers page, as the initial description seems overly terse.
…d to each relevant use case
…orm (Bret 2023).
… using a TECH of GFSO.
… to reflect the init time and lead times for this case. Also updated the expected output.
…ot split off from the main part.
…gConfig_wrapped to 11.1.0. Is this correct?
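The tolerant comparison described in the commits above (checking for null/NaN values, and ignoring differences smaller than 5 decimal places in the NC value diff) can be sketched as follows. The function and variable names are illustrative only, not the actual METplus diff-utility code:

```python
import math

def values_match(a, b, decimals=5):
    """Return True if two values are equal after rounding to `decimals` places.

    Treats None and NaN as null: two nulls match; a null and a number do not.
    """
    a_null = a is None or (isinstance(a, float) and math.isnan(a))
    b_null = b is None or (isinstance(b, float) and math.isnan(b))
    if a_null or b_null:
        return a_null and b_null
    return round(a, decimals) == round(b, decimals)

# Differences beyond 5 decimal places are ignored
print(values_match(1.0000001, 1.0000002))        # True
print(values_match(1.00002, 1.00001))            # False
print(values_match(float("nan"), float("nan")))  # True
print(values_match(None, 0.0))                   # False
```

Rounding both values before comparing avoids flagging spurious differences that come from floating-point representation rather than real changes in the output.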
I have tested in my own environment on Seneca and it works. Would you like me to test this again by pulling down the tarball of test data and re-running the use case? I have reviewed the documentation and found some discrepancies from the old use case (Matthew), which I have fixed. I note that the version of the wrapped MET config file was 11.0.0. I changed this to 11.1.0, but did not check that it matches an actual MET 11.1.0 config file. Please let me know if a deeper check is needed (e.g., cross-checking against the MET v11.1.0-rc1 version). Do we want @KathrynNewman to do a final review? If not, I approve this pull request.
…eatures of the use case broke the documentation build. Trying this mod.
@jvigh, I don't think it's necessary to pull the data you added for the tests and rerun. If you'd like, you can download the output data from the GitHub Actions workflow run and review it, but I think it is unlikely that the output will differ from what you already ran for this use case.

Please be mindful that each push to the branch in an open pull request will re-trigger the full suite of tests, so it is best either to wait to push until you are satisfied with all of your commits, or to temporarily close the pull request and reopen it when you are ready. You can also create a branch off of this branch to test in, then merge it back into this branch when you are ready. If the changes are related to documentation formatting, you can build the documentation locally and view it in a web browser.

The version number in the wrapped MET config file is commented out and has no effect. In other files it is either removed completely or left commented out to show roughly when it was added/updated. We don't have a strict policy beyond not setting it to any value, which prevents issues when upgrading to a newer version.

It's up to you who else you'd like to review these changes. Feel free to reassign it to someone else if that makes sense to you.
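The "branch off of the branch" workflow suggested above can be sketched with plain git. The branch names below are hypothetical, and the demo runs in a throwaway repository so it can be tried safely; in practice you would start from the existing PR branch:

```shell
# Demo in a throwaway repo; in practice, start from the actual PR branch.
cd "$(mktemp -d)" && git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
git checkout -qb feature_1626_tcdiag       # stands in for the open PR branch
git checkout -qb feature_1626_tcdiag_wip   # scratch branch: pushes here do not re-trigger the PR tests
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "experimental change"
git checkout -q feature_1626_tcdiag
git merge -q --no-edit feature_1626_tcdiag_wip  # fold the work back in when ready
git log --oneline                               # the experiment now sits on the PR branch
```

Only the final push of the PR branch to GitHub triggers the test suite, so any number of intermediate commits can be made on the scratch branch first.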
Thanks for the tips @georgemccabe. My dev situation is not as good as the METplus software engineers'. I'm on a slow internet connection and can't test locally (maybe there's a way on Windows, but I would need help getting set up to test locally), so pushing code through GitHub is the best solution for me. Editing remotely was previously torturously slow to well-nigh impossible due to internet issues, but that's mostly fixed now.

Sorry it was triggering so many tests. Perhaps the second option you mention, making a separate branch off of that branch, would be a good solution. Similarly, if you can point me to instructions on how to build the documentation locally, I'm happy to try it.

I don't see a need to test with the data tarball (since I'm the one who created it) or to compare outputs. I'm sure it's the same. Just check with @KathrynNewman if she plans to review it. If not, go ahead and merge.

Thanks,
Jonathan
Pull Request Testing
Confirmed TCDiag.conf basic use case runs successfully in automated tests after changes to input data
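For readers unfamiliar with the use case, a METplus use case configuration follows a common pattern across wrappers. The fragment below is an illustrative sketch only — the times are placeholders, and it is not the exact contents of the TCDiag.conf added in this PR:

```ini
[config]
# Run the TCDiag wrapper
PROCESS_LIST = TCDiag

# Times below are placeholders; the debug log noted in the commits above
# alerts the user that only the first init time is processed
LOOP_BY = INIT
INIT_TIME_FMT = %Y%m%d%H
INIT_BEG = 2023062012
INIT_END = 2023062012
LEAD_SEQ = 0, 6, 12

# Wrapper-specific settings (deck/input templates, domain info, output
# directory) are set through TC_DIAG_* variables; see the TCDiag wrapper
# documentation for the actual variable names and the shipped TCDiag.conf
# for the values used in this use case.
```

The shipped use case should be treated as the authoritative reference for variable names and input data paths.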
Do these changes include sufficient documentation updates, ensuring that no errors or warnings exist in the build of the documentation? [Yes]
Do these changes include sufficient testing updates? [Yes]
Will this PR result in changes to the test suite? [Yes]
The new use case will create new output so we will need to update the truth data after this PR is merged
Pull Request Checklist
See the METplus Workflow for details.
Select: Reviewer(s)
Select: Organization-level software support Project or Repository-level development cycle Project
Select: Milestone as the version that will include these changes