Test fixture #303
base: develop
Conversation
Codecov Report
Patch coverage has no change and project coverage change: -0.02%.

@@            Coverage Diff             @@
##           develop     #303      +/-   ##
===========================================
- Coverage    86.09%   86.07%    -0.02%
===========================================
  Files           23       23
  Lines         6069     6069
===========================================
- Hits          5225     5224        -1
- Misses         844      845        +1
Signed-off-by: Cristian Le <git@lecris.dev>
Is the goal of this to fully replace pytest, or to have at least a set of tests for every API function?
Yes, the goal is to replace them for the main C interface. There we can run more complete checks with specific compiler flags, while the other API tests should only cover API-specific concerns, including memory allocation/deallocation for the language-native objects.
If so, it will not be a small amount of work.
Well, no. These are all functional (regression) tests, so the only missing part is getting the input data and the reference data, which are already present in the Python test database. That is not so difficult; I just need to actually sit down and do it. However, these are not appropriate tests to rely on in the long run, since it is hard to pin down where things are breaking and they are vulnerable to numerical fluctuations. The difficult work is proper unit (+ integration) testing, which requires going into each component and writing minimal tests. The framework for that is already in place; it just takes time and work to go through each one. I am not tackling that for now, but if I have some time to do #301 (#262), that will be unit tested as it evolves.
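As a rough illustration of the kind of minimal, narrowly scoped test meant here, the sketch below checks a single hand-verifiable fact through the public API; the test names and expected values are illustrative assumptions, not part of this PR:

```cpp
// Sketch of a minimal per-component check (illustrative only).
// Hall number 1 corresponds to P1, whose only operation is the identity,
// so the expected output can be verified by hand.
#include <gtest/gtest.h>

extern "C" {
#include "spglib.h"
}

TEST(SymmetryDatabase, HallNumber1IsIdentityOnly) {
    int rotations[192][3][3];
    double translations[192][3];

    int const num_ops = spg_get_symmetry_from_database(rotations, translations, 1);
    ASSERT_EQ(num_ops, 1);
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            EXPECT_EQ(rotations[0][i][j], i == j ? 1 : 0);
    for (int j = 0; j < 3; ++j) EXPECT_DOUBLE_EQ(translations[0][j], 0.0);
}
```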
This is an example of using a googletest parameterized test fixture. With this it should be easier to get the test data from the test files that are used in Python.
See /test/functional/test_dataset_access.cpp for example usage. A minimal sketch of such a fixture is included after the TODO list below.
TODO:
- spg_get_dataset
- spg_get_layer_dataset
- spgat_get_dataset
- spg_get_dataset_with_hall_number
- spgat_get_dataset_with_hall_number
- from_file
- spg_get_magnetic_dataset
- spgms_get_magnetic_dataset
- from_file
- ?
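For illustration, here is a minimal sketch of a parameterized fixture along these lines. This is not the contents of test_dataset_access.cpp; the simple-cubic input, the tolerances, and the expected space-group number are example values chosen for the sketch.

```cpp
// Sketch of a googletest parameterized fixture around spg_get_dataset
// (illustrative only; see test/functional/test_dataset_access.cpp for the
// actual usage in this PR). The parameter here is just the symprec tolerance;
// in practice the parameters would carry the input and reference data taken
// from the same files the Python tests use.
#include <gtest/gtest.h>

extern "C" {
#include "spglib.h"
}

class DatasetAccess : public ::testing::TestWithParam<double> {};

TEST_P(DatasetAccess, SimpleCubic) {
    double lattice[3][3] = {{4.0, 0.0, 0.0}, {0.0, 4.0, 0.0}, {0.0, 0.0, 4.0}};
    double positions[][3] = {{0.0, 0.0, 0.0}};
    int types[] = {1};

    SpglibDataset *dataset =
        spg_get_dataset(lattice, positions, types, 1, GetParam());
    ASSERT_NE(dataset, nullptr);
    // One atom on a cubic lattice: space group Pm-3m (No. 221).
    EXPECT_EQ(dataset->spacegroup_number, 221);
    spg_free_dataset(dataset);
}

// Instantiate the fixture over several tolerances; each value produces its
// own test case.
INSTANTIATE_TEST_SUITE_P(Symprec, DatasetAccess,
                         ::testing::Values(1e-5, 1e-6, 1e-8));
```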