dist/pythonlibs: provide unittest TestCase wrapper for testrunner #10431

Conversation
e.g. this whole section in #10392 could be methods of a … (RIOT/tests/gnrc_rpl_srh/tests/01-run.py, lines 166 to 310 in dc7012b)
(also Codacy is being stressed out about all the …)
@MrKevinWeiss and @smlng are working on their comparison, so I would not want to assume the outcome. There is no question that the existing testrunner approach can be improved with the additional tools in pytest or Robot. I agree that it is worthwhile now to try writing tests using these frameworks so we make an informed decision, and even better if it helps with PRs you're working on right now! IMO the ability to easily port and improve existing tests is a good reason to use pytest. Not sure that I would take it farther than that at the moment. Copying @cladmi. He commented with some code for /tests, so he may have some feedback.
Just to be clear, I did not create this PR with the intention to sway the discussion regarding frameworks in one direction or another. It was just an idea I had while implementing a set of tests of mine, to reduce code duplication between tests, and only later did I realize that it might make the porting to a test framework easier.
(That's why I used the unittest module of Python: I assume most high-level test frameworks will have some way to integrate it.)
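That assumption holds at least for pytest, which collects plain `unittest.TestCase` subclasses without modification. A minimal sketch (the class and test names here are illustrative, not from the PR):

```python
import unittest

class ExampleShellTest(unittest.TestCase):
    # illustrative test only; a real RIOT test would drive a node instead
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# the stock unittest machinery runs it ...
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleShellTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
# ... and pytest collects the exact same class with no code changes:
#   $ pytest this_file.py
```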
I would get it differently; the thing I did not want to do in my … But my first step, if I allow myself to modify it, is to split the … In my … With this, you can implement the … Then your test implementation TestCase class would just inherit from it. I used this as a base class a long time ago: https://github.com/iot-lab/iot-lab-gateway/blob/344a3e166fff1af1c14fbf95c132ef1906251c7d/gateway_code/integration/test_integration_mock.py#L50-L89 Then running the test would only be to call the default … I could propose something along those lines if you want. And the second one would be to write, not even in … As there is an object, you can subclass it (intelligently of course, to be able to merge many classes together), and then have a …
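The base-class scheme described here can be sketched roughly as follows; all names are hypothetical, and the setup is a stub standing in for flashing the board and spawning a pexpect child:

```python
import unittest

class NodeTestCase(unittest.TestCase):
    # hypothetical base class: node setup/teardown lives here, not in tests
    COMMAND = None  # subclasses say how to start their node

    @classmethod
    def setUpClass(cls):
        # a real implementation would flash the board and spawn pexpect
        # on cls.COMMAND; this sketch only records the command
        cls.child = cls.COMMAND

    @classmethod
    def tearDownClass(cls):
        cls.child = None


class MyShellTest(NodeTestCase):
    # the concrete test only inherits and writes test_* methods
    COMMAND = "make term"

    def test_node_started(self):
        self.assertEqual(self.child, "make term")
```

Running such a test is then just invoking the default unittest runner on the subclass.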
Sounds a lot like #7258. ;)
+2,131 −22 line changes. No, that is definitely something more than a single PR. You do not need JSON to just write a Python class.
Argh, I never said "rewrite everything". I just wanted a way to deduplicate code in my PRs. Porting the tests to this approach could, however, have benefits.
That is a completely different direction from what I did here, but okay.
Don't really know how this is related, but okay (link leads to an empty diff btw).
I am not really convinced if either
Yeah, we might be able to remove that, if the
Most important thing for me was that it is still compatible with
Let me try fixing it myself first.
It would be a follow-up step to have module-specific mixins, as we have with the ReleaseSpec tests.
I'm talking about the concept. Node class, RiotNode class, ShellNodeClass. You'll end up with JSON (or some other configuration format) once you realize that at some point, configuration of tests doesn't fit so nicely in code anymore.
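The class hierarchy and config migration described here could be sketched like this; the class layout and the JSON keys are guesses for illustration, not anything specified in the thread:

```python
import json

class Node:
    # hypothetical hierarchy following the class names mentioned above
    def __init__(self, name):
        self.name = name

class RiotNode(Node):
    def __init__(self, name, board):
        super().__init__(name)
        self.board = board

class ShellNode(RiotNode):
    def __init__(self, name, board, prompt="> "):
        super().__init__(name, board)
        self.prompt = prompt

# once test configuration outgrows the code, it migrates to a format
# like JSON and is fed back into the classes:
CONFIG = json.loads('{"name": "node0", "board": "native", "prompt": "> "}')
node = ShellNode(**CONFIG)
```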
So how do we proceed with this? The decision on the framework should be unrelated to whether we have this or not, right?
@miri64 Agreed, I don't think that a framework will replace this anyway, meaning that it is still needed. Any improvements on this would be valuable. It is also not unreasonable to think the framework could eventually take advantage of this as well.
Ping @cladmi @MrKevinWeiss @smlng?
I would split it in two commits, the first one where … And as …
In the long term, I am in favor of providing these methods in a non-test class, so in your case it would result in a testcase method … Providing a base class for this is among my goals for this release.
Does this mean I should squash?
Will do.
Yes, maybe having some shell interaction class in-between would be a good idea!
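Such an in-between shell interaction class might look like the sketch below: a plain (non-test) helper owns the pexpect driving, and the TestCase only asserts. Every name here is hypothetical, and the helper is stubbed instead of actually talking to a node:

```python
import unittest

class ShellInteraction:
    # hypothetical non-test helper: all pexpect interaction would live here
    def __init__(self, child):
        self.child = child

    def ifconfig(self):
        # a real version would send "ifconfig" via pexpect and parse the
        # reply; this stub returns a canned interface list
        return ["if0"]

class ShellTestCase(unittest.TestCase):
    def setUp(self):
        # child=None stands in for a spawned pexpect child
        self.shell = ShellInteraction(child=None)

    def test_has_interface(self):
        # the testcase method only asserts; interaction lives in the helper
        self.assertIn("if0", self.shell.ifconfig())
```

The helper stays reusable outside unittest, which is the point of keeping it a non-test class.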
Force-pushed from cb2c45b to ac50244
Rebased to current master and squashed as a consequence to also address @cladmi's comment.
```python
except subprocess.CalledProcessError:
    # make reset yields error on some boards even if successful
    pass
child = setup_child(timeout, env=os.environ,
```
This one is not needed but was here before, so I can remove it later.
I guess the os.environ …?
Yeah, better safe than sorry ;-)
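The error-swallowing pattern in the excerpt above can be shown standalone; `reset_node` and its default command tuple are illustrative names, not the PR's actual helper:

```python
import subprocess

def reset_node(cmd=("make", "reset")):
    # `make reset` exits non-zero on some boards even when the reset
    # itself worked, so the error is swallowed on purpose
    try:
        subprocess.check_call(cmd)
    except subprocess.CalledProcessError:
        pass
```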
```python
class PexpectTestCase(unittest.TestCase):
    TIMEOUT = 10
    LOGFILE = None
```
Not sure if unittest will be able to capture test output without LOGFILE … to stdout or something, even on failures.
But I cannot really verify without using it.
Not sure either. I usually use the output mainly for debugging; that's why I deactivated it and didn't really think much about actual unittest integration. Let's tackle this when it actually becomes an issue.
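If the wrapper hands `LOGFILE` to `pexpect.spawn(..., logfile=...)` (a guess based on the attribute names in the diff), a subclass could opt back into visible output by overriding the class attribute, without touching the wrapper:

```python
import sys
import unittest

class PexpectTestCase(unittest.TestCase):
    # class attributes as shown in the diff; the wrapper presumably passes
    # LOGFILE on to pexpect.spawn(..., logfile=LOGFILE)
    TIMEOUT = 10
    LOGFILE = None

class DebugTestCase(PexpectTestCase):
    # hypothetical per-test override: mirror child output to stdout while
    # debugging, leaving the default (no log) untouched elsewhere
    LOGFILE = sys.stdout
```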
I tested tests with failing output as well, and the behavior is the same as before. Please squash and enable Murdock; I would do a final pass then.
Force-pushed from 7a04ca8 to 47992d6
Squashed
Found some potential issues.
Also one fundamental question: how is the PexpectTestCase supposed to be used? What is the benefit compared to the testrunner.run function?
Force-pushed from 47992d6 to acb2380
This allows for re-usability of those functions in other contexts.
Force-pushed from acb2380 to a6594c5
I had this idea when implementing RIOT-OS#10382 and RIOT-OS#10392 as I introduced a very similar structure to python's standard unittests in those and it could also reduce some code duplication between those two tests.
Force-pushed from a6594c5 to 4b5b5d9
And it is right: assert is a keyword and should not be used as a function (already said that today ;) ). Having …
I also used …
Also: wrong PR, I believe. I don't use …
I know it's off topic, just noticed it while reading the history of comments ;)
Anyway. Murdock is now happy.
ACK. I reviewed what I could for now; other unrelated changes will come later. I tested both test … with both normal and error cases, and I could reuse the pytest_child in my own context.
@aabadie, your comments were addressed. What is your verdict?
I'll give the CI another run and then I would merge. Having this ACK'd without a merge has been a long enough period for someone to complain, I believe ;-)
Contribution description
I had this idea when implementing #10382 and #10392 as I introduced a very similar structure to python's standard unittests in those and it could also reduce some code duplication between those two tests.
Regarding ongoing discussions about test frameworks
As far as I understand pytest so far, it would also be beneficial for porting our current tests with this approach. I don't know how this looks for RobotFramework.
Testing procedure
Currently no tests are ported and this is just an RFC. If the discussion steers into the direction of approval (or indecisiveness ;-)) I'll try to port one or two tests to this approach.
Issues/PRs references
#10382 and #10392 could greatly benefit from it; the discussions after #10241 (comment) could be related.