
Conversation

Grazfather (Collaborator)

We used `-k "not benchmark"` which would filter out ANY test that
contained the word 'benchmark'.

This change fixes that by using the `-m` argument to only filter on
marks.
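
As a hypothetical sketch of the change (the actual CI workflow file is not shown in this excerpt), the invocation would move from name-based to mark-based deselection:

```diff
-pytest -k "not benchmark" tests/
+pytest -m "not benchmark" tests/
```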

🤖 Coverage update for 4686b47 🔴

|        | Old      | New               |
|--------|----------|-------------------|
| Commit | ece5728  | 4686b47           |
| Score  | 71.5843% | 71.565% (-0.0193) |



Grazfather commented Jan 30, 2024

Demonstration of the issue:

```
root@pwnvm:~/code/gef# pytest --collect-only -k "benchmark" tests/
=========================================== test session starts ============================================
platform linux -- Python 3.10.12, pytest-7.4.0, pluggy-1.2.0
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /root/code/gef/tests
configfile: pytest.ini
plugins: xdist-3.5.0, cov-4.1.0, benchmark-4.0.0
collected 172 items / 168 deselected / 4 selected

<Package api>
  <Module deprecated.py>
    <UnitTestCase GefFuncDeprecatedApi>
      <TestCaseFunction test_benchmark>
<Package perf>
  <Module benchmark.py>
    <UnitTestCase BenchmarkBasicApi>
      <TestCaseFunction test_cmd_context>
      <TestCaseFunction test_elf_parsing>
      <TestCaseFunction test_gef_memory_maps>

============================= 4/172 tests collected (168 deselected) in 0.33s ==============================
```

We see that with `-k` we matched a test with the word 'benchmark' in its name (`test_benchmark` in `deprecated.py`), even though it's not marked as a benchmark.

```
root@pwnvm:~/code/gef# pytest --collect-only -m "benchmark" tests/
=========================================== test session starts ============================================
platform linux -- Python 3.10.12, pytest-7.4.0, pluggy-1.2.0
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /root/code/gef/tests
configfile: pytest.ini
plugins: xdist-3.5.0, cov-4.1.0, benchmark-4.0.0
collected 172 items / 169 deselected / 3 selected

<Package perf>
  <Module benchmark.py>
    <UnitTestCase BenchmarkBasicApi>
      <TestCaseFunction test_cmd_context>
      <TestCaseFunction test_elf_parsing>
      <TestCaseFunction test_gef_memory_maps>

============================= 3/172 tests collected (169 deselected) in 0.26s ==============================
```

Here, with `-m`, that test is correctly not selected.

In the workflow we use the opposite logic (`not benchmark`), so we would accidentally NOT run tests that merely have that word in their name.
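
The selection difference above can be sketched in plain Python. This is an illustrative model, not pytest internals: pytest's `-k` matches keyword expressions against names derived from the test (including marker names), while `-m` consults only marks.

```python
# Illustrative model of pytest's -k vs -m selection. Test names and marks
# mirror the collect-only output above; the helpers are NOT pytest internals.

TESTS = [
    # (test name, marker names)
    ("test_benchmark", set()),            # name contains the word, but no mark
    ("test_cmd_context", {"benchmark"}),
    ("test_elf_parsing", {"benchmark"}),
    ("test_gef_memory_maps", {"benchmark"}),
]

def select_k(word):
    """-k WORD: match if WORD appears in the test name or its keywords (marks)."""
    return [name for name, marks in TESTS if word in name or word in marks]

def select_m(mark):
    """-m MARK: match only tests that actually carry MARK."""
    return [name for name, marks in TESTS if mark in marks]

print(select_k("benchmark"))  # 4 tests, including the unmarked test_benchmark
print(select_m("benchmark"))  # only the 3 marked benchmark tests
```

Negating either selector (`not benchmark`) deselects the complement, which is why `-k "not benchmark"` wrongly skips `test_benchmark` while `-m "not benchmark"` keeps it.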

@hugsy hugsy merged commit e123b87 into main Jan 30, 2024
@hugsy hugsy deleted the fix_pytest_benchmark branch January 30, 2024 01:31
hugsy pushed a commit to hugsy/gef-extras that referenced this pull request Jan 30, 2024
> We used `-k "not benchmark"` which would filter out ANY test that
> contained the word 'benchmark'.
>
> This change fixes that by using the `-m` argument to only filter on
> marks.
>
> Same change as hugsy/gef#1064
@hugsy hugsy added this to the 2024.05 milestone Apr 10, 2024