
Conversation

mdhaber (Contributor) commented May 9, 2024

Reference issue

Towards gh-20544

What does this implement/fix?

Adds array API support to scipy.stats.entropy, scipy.special.entr, and scipy.special.rel_entr.

Additional information

torch doesn't have rel_entr, so SciPy falls back to converting to NumPy and back. Implementing rel_entr natively for torch would be a possible enhancement; a rough sketch follows.
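For reference, a minimal sketch of what a native torch rel_entr could look like, following the definition used by scipy.special.rel_entr (illustrative only, not part of this PR):

import torch

def rel_entr(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Elementwise relative entropy with scipy.special.rel_entr semantics
    # (assumes floating-point inputs):
    #   x * log(x / y)  where x > 0 and y > 0
    #   0               where x == 0 and y >= 0
    #   inf             otherwise
    x, y = torch.broadcast_tensors(x, y)
    res = torch.full_like(x, float("inf"))
    pos = (x > 0) & (y > 0)
    res[pos] = x[pos] * torch.log(x[pos] / y[pos])
    res[(x == 0) & (y >= 0)] = 0.0
    return res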

@mdhaber added the scipy.stats, enhancement (a new feature or improvement), scipy.special, and array types (items related to array API support and input array validation; see gh-18286) labels on May 9, 2024
@mdhaber requested review from person142 and steppi as code owners on May 9, 2024 01:25
# Test for PR-479
assert_almost_equal(stats.entropy([0, 1, 2]), 0.63651416829481278,
mdhaber (Contributor, Author) commented:
Every line is changed, so I suppose git can't align them, but it's a very straightforward conversion. Consider using a diff tool locally, as sketched below.
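For anyone doing that comparison, a minimal local sketch using Python's difflib (the base branch name "main" and the test-file path are assumptions):

import difflib
import subprocess

# Diff the base-branch file against the working tree, since GitHub's diff
# view can't align a file where every line changed.
old = subprocess.run(
    ["git", "show", "main:scipy/stats/tests/test_entropy.py"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()
with open("scipy/stats/tests/test_entropy.py") as fh:
    new = fh.read().splitlines()

for line in difflib.unified_diff(old, new, fromfile="main", tofile="PR", lineterm=""):
    print(line)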

mdhaber (Contributor, Author) commented May 9, 2024

The CI failure is just mypy complaints. LMK what else is needed and I'll silence them at the same time.

@mdhaber requested review from lucascolley and j-bowhay and removed the request for person142 on May 11, 2024 19:22
lucascolley (Member) left a comment:
LGTM, nice to have such a clean diff! Happy to merge once MyPy is...

@lucascolley added this to the 1.14.0 milestone on May 11, 2024
pk = xp.asarray([[0.1, 0.2], [0.6, 0.3], [0.3, 0.5]])
qk = xp.asarray([[0.2, 0.1], [0.3, 0.6], [0.5, 0.3]])
xp_assert_close(stats.entropy(pk, qk, axis=1),
                xp.asarray([0.23104906, 0.23104906, 0.12770641]))
Contributor commented:

I'm seeing new GPU CI failures locally, here and in a few other places. I'll just paste the output below for inspection since I'm about to head out (you can ignore the baseline FFT thread failures).

_______________________________________________________________________________ TestFFTThreadSafe.test_fft[torch] ________________________________________________________________________________
[gw6] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/fft/tests/test_basic.py:436: in test_fft
    self._test_mtsame(fft.fft, a, xp=xp)
        a          = tensor([[1.+0.j, 1.+0.j, 1.+0.j,  ..., 1.+0.j, 1.+0.j, 1.+0.j],
        [1.+0.j, 1.+0.j, 1.+0.j,  ..., 1.+0.j, 1.+0.j,...+0.j],
        [1.+0.j, 1.+0.j, 1.+0.j,  ..., 1.+0.j, 1.+0.j, 1.+0.j]],
       device='cuda:0', dtype=torch.complex128)
        self       = <scipy.fft.tests.test_basic.TestFFTThreadSafe object at 0x7fb5d616ea10>
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/fft/tests/test_basic.py:430: in _test_mtsame
    q.get(timeout=5), expected,
        args       = (tensor([[1.+0.j, 1.+0.j, 1.+0.j,  ..., 1.+0.j, 1.+0.j, 1.+0.j],
        [1.+0.j, 1.+0.j, 1.+0.j,  ..., 1.+0.j, 1.+0.j....j],
        [1.+0.j, 1.+0.j, 1.+0.j,  ..., 1.+0.j, 1.+0.j, 1.+0.j]],
       device='cuda:0', dtype=torch.complex128),)
        expected   = tensor([[200.+0.j,   0.+0.j,   0.+0.j,  ...,   0.+0.j,   0.+0.j,   0.+0.j],
        [200.+0.j,   0.+0.j,   0.+0.j,  .....   [200.+0.j,   0.+0.j,   0.+0.j,  ...,   0.+0.j,   0.+0.j,   0.+0.j]],
       device='cuda:0', dtype=torch.complex128)
        func       = <uarray multimethod 'fft'>
        i          = 15
        q          = <queue.Queue object at 0x7fb5c6baff50>
        self       = <scipy.fft.tests.test_basic.TestFFTThreadSafe object at 0x7fb5d616ea10>
        t          = [<Thread(Thread-33 (worker), stopped 140418285024832)>, <Thread(Thread-34 (worker), stopped 140418531137088)>, <Thread...8)>, <Thread(Thread-37 (worker), stopped 140418285024832)>, <Thread(Thread-38 (worker), stopped 140418531137088)>, ...]
        worker     = <function TestFFTThreadSafe._test_mtsame.<locals>.worker at 0x7fb5be284ae0>
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
../../../../../spack/opt/spack/linux-ubuntu22.04-skylake/gcc-11.3.0/python-3.11.2-4qncg2nqev5evxfmamde3e6rnb34b4ls/lib/python3.11/queue.py:179: in get
    raise Empty
E   _queue.Empty
        block      = True
        endtime    = 6496614.722814094
        remaining  = -0.00012497138231992722
        self       = <queue.Queue object at 0x7fb5c6baff50>
        timeout    = 5

During handling of the above exception, another exception occurred:
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/_pytest/runner.py:340: in from_call
    result: Optional[TResult] = func()
        cls        = <class '_pytest.runner.CallInfo'>
        duration   = 5.113782258704305
        excinfo    = <ExceptionInfo PytestUnhandledThreadExceptionWarning('Exception in thread Thread-33 (worker)\n\nTraceback (most recent... to set the primary context... (Triggered internally at ../aten/src/ATen/native/cuda/SpectralOps.cpp:313.)\n') tblen=9>
        func       = <function call_and_report.<locals>.<lambda> at 0x7fb5be284fe0>
        precise_start = 6496609.609832196
        precise_stop = 6496614.723614454
        reraise    = (<class '_pytest.outcomes.Exit'>, <class 'KeyboardInterrupt'>)
        result     = None
        start      = 1715465665.421142
        stop       = 1715465670.5349245
        when       = 'call'
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/_pytest/runner.py:240: in <lambda>
    lambda: runtest_hook(item=item, **kwds), when=when, reraise=reraise
        item       = <Function test_fft[torch]>
        kwds       = {}
        runtest_hook = <HookCaller 'pytest_runtest_call'>
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/pluggy/_hooks.py:501: in __call__
    return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
        firstresult = False
        kwargs     = {'item': <Function test_fft[torch]>}
        self       = <HookCaller 'pytest_runtest_call'>
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/pluggy/_manager.py:119: in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
        firstresult = False
        hook_name  = 'pytest_runtest_call'
        kwargs     = {'item': <Function test_fft[torch]>}
        methods    = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/...=None>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x7fb5e4e21dd0>>, ...]
        self       = <_pytest.config.PytestPluginManager object at 0x7fb6ec2d5510>
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/_pytest/threadexception.py:87: in pytest_runtest_call
    yield from thread_exception_runtest_hook()
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/_pytest/threadexception.py:77: in thread_exception_runtest_hook
    warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
E   pytest.PytestUnhandledThreadExceptionWarning: Exception in thread Thread-33 (worker)
E   
E   Traceback (most recent call last):
E     File "/home/treddy/github_projects/spack/opt/spack/linux-ubuntu22.04-skylake/gcc-11.3.0/python-3.11.2-4qncg2nqev5evxfmamde3e6rnb34b4ls/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
E       self.run()
E     File "/home/treddy/github_projects/spack/opt/spack/linux-ubuntu22.04-skylake/gcc-11.3.0/python-3.11.2-4qncg2nqev5evxfmamde3e6rnb34b4ls/lib/python3.11/threading.py", line 975, in run
E       self._target(*self._args, **self._kwargs)
E     File "/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/fft/tests/test_basic.py", line 415, in worker
E       q.put(func(*args))
E             ^^^^^^^^^^^
E     File "/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/fft/_backend.py", line 28, in __ua_function__
E       return fn(*args, **kwargs)
E              ^^^^^^^^^^^^^^^^^^^
E     File "/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/fft/_basic_backend.py", line 60, in fft
E       return _execute_1D('fft', _pocketfft.fft, x, n=n, axis=axis, norm=norm,
E              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E     File "/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/fft/_basic_backend.py", line 34, in _execute_1D
E       return xp_func(x, n=n, axis=axis, norm=norm)
E              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E   UserWarning: Attempting to run cuFFT, but there was no current CUDA context! Attempting to set the primary context... (Triggered internally at ../aten/src/ATen/native/cuda/SpectralOps.cpp:313.)
        cm         = <_pytest.threadexception.catch_threading_exception object at 0x7fb5d6589790>
        msg        = 'Exception in thread Thread-33 (worker)\n\nTraceback (most recent call last):\n  File "/home/treddy/github_projects/sp...Attempting to set the primary context... (Triggered internally at ../aten/src/ATen/native/cuda/SpectralOps.cpp:313.)\n'
        thread_name = 'Thread-33 (worker)'
________________________________________________________________________________ TestFFTThreadSafe.test_ifft[cupy] ________________________________________________________________________________
[gw6] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/fft/tests/test_basic.py:440: in test_ifft
    self._test_mtsame(fft.ifft, a, xp=xp)
        a          = array([[1.+0.j, 1.+0.j, 1.+0.j, ..., 1.+0.j, 1.+0.j, 1.+0.j],
       [1.+0.j, 1.+0.j, 1.+0.j, ..., 1.+0.j, 1.+0.j, 1.+...  [1.+0.j, 1.+0.j, 1.+0.j, ..., 1.+0.j, 1.+0.j, 1.+0.j],
       [1.+0.j, 1.+0.j, 1.+0.j, ..., 1.+0.j, 1.+0.j, 1.+0.j]])
        self       = <scipy.fft.tests.test_basic.TestFFTThreadSafe object at 0x7fb5d6174590>
        xp         = <module 'cupy' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/cupy/__init__.py'>
scipy/fft/tests/test_basic.py:429: in _test_mtsame
    xp_assert_equal(
        args       = (array([[1.+0.j, 1.+0.j, 1.+0.j, ..., 1.+0.j, 1.+0.j, 1.+0.j],
       [1.+0.j, 1.+0.j, 1.+0.j, ..., 1.+0.j, 1.+0.j, 1....[1.+0.j, 1.+0.j, 1.+0.j, ..., 1.+0.j, 1.+0.j, 1.+0.j],
       [1.+0.j, 1.+0.j, 1.+0.j, ..., 1.+0.j, 1.+0.j, 1.+0.j]]),)
        expected   = array([[1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j],
       [1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+...  [1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j],
       [1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j]])
        func       = <uarray multimethod 'ifft'>
        i          = 13
        q          = <queue.Queue object at 0x7fb5bdf1c490>
        self       = <scipy.fft.tests.test_basic.TestFFTThreadSafe object at 0x7fb5d6174590>
        t          = [<Thread(Thread-113 (worker), stopped 140418531137088)>, <Thread(Thread-114 (worker), stopped 140418293425728)>, <Thre...>, <Thread(Thread-117 (worker), stopped 140417737684544)>, <Thread(Thread-118 (worker), stopped 140417729291840)>, ...]
        worker     = <function TestFFTThreadSafe._test_mtsame.<locals>.worker at 0x7fb5bff2bf60>
        xp         = <module 'cupy' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/cupy/__init__.py'>
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/cupy/testing/_array.py:95: in assert_array_equal
    numpy.testing.assert_array_equal(
        err_msg    = 'Function returned wrong value in multithreaded context'
        kwargs     = {}
        strides_check = False
        verbose    = True
        x          = array([[0.005+0.j, 0.005+0.j, 0.005+0.j, ..., 0.005+0.j, 0.005+0.j,
        0.005+0.j],
       [0.005+0.j, 0.005+0.j, ...0.005+0.j,
        0.005+0.j],
       [0.005+0.j, 0.005+0.j, 0.005+0.j, ..., 0.005+0.j, 0.005+0.j,
        0.005+0.j]])
        y          = array([[1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j],
       [1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+...  [1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j],
       [1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j]])
../../../../../spack/opt/spack/linux-ubuntu22.04-skylake/gcc-11.3.0/python-3.11.2-4qncg2nqev5evxfmamde3e6rnb34b4ls/lib/python3.11/contextlib.py:81: in inner
    return func(*args, **kwds)
E   AssertionError: 
E   Arrays are not equal
E   Function returned wrong value in multithreaded context
E   Mismatched elements: 160000 / 160000 (100%)
E   Max absolute difference: 0.995
E   Max relative difference: 0.995
E    x: array([[0.005+0.j, 0.005+0.j, 0.005+0.j, ..., 0.005+0.j, 0.005+0.j,
E           0.005+0.j],
E          [0.005+0.j, 0.005+0.j, 0.005+0.j, ..., 0.005+0.j, 0.005+0.j,...
E    y: array([[1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j],
E          [1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j],
E          [1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j],...
        args       = (<built-in function eq>, array([[0.005+0.j, 0.005+0.j, 0.005+0.j, ..., 0.005+0.j, 0.005+0.j,
        0.005+0.j],
     ... [1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j],
       [1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j]]))
        func       = <function assert_array_compare at 0x7fb6e3f31f80>
        kwds       = {'err_msg': 'Function returned wrong value in multithreaded context', 'header': 'Arrays are not equal', 'strict': False, 'verbose': True}
        self       = <contextlib._GeneratorContextManager object at 0x7fb6e3f51c90>
________________________________________________________________________________ TestFFTThreadSafe.test_rfft[cupy] ________________________________________________________________________________
[gw6] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/fft/tests/test_basic.py:444: in test_rfft
    self._test_mtsame(fft.rfft, a, xp=xp)
        a          = array([[1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       ...,
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.]])
        self       = <scipy.fft.tests.test_basic.TestFFTThreadSafe object at 0x7fb5d6175d90>
        xp         = <module 'cupy' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/cupy/__init__.py'>
scipy/fft/tests/test_basic.py:429: in _test_mtsame
    xp_assert_equal(
        args       = (array([[1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
   ....,
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.]]),)
        expected   = array([[200.+0.j,   0.+0.j,   0.+0.j, ...,   0.+0.j,   0.+0.j,   0.+0.j],
       [200.+0.j,   0.+0.j,   0.+0.j, ...,  ... 0.+0.j, ...,   0.+0.j,   0.+0.j,   0.+0.j],
       [200.+0.j,   0.+0.j,   0.+0.j, ...,   0.+0.j,   0.+0.j,   0.+0.j]])
        func       = <uarray multimethod 'rfft'>
        i          = 0
        q          = <queue.Queue object at 0x7fb5c4832d10>
        self       = <scipy.fft.tests.test_basic.TestFFTThreadSafe object at 0x7fb5d6175d90>
        t          = [<Thread(Thread-177 (worker), stopped 140418531137088)>, <Thread(Thread-178 (worker), stopped 140418285033024)>, <Thre...>, <Thread(Thread-181 (worker), stopped 140418531137088)>, <Thread(Thread-182 (worker), stopped 140418285033024)>, ...]
        worker     = <function TestFFTThreadSafe._test_mtsame.<locals>.worker at 0x7fb5be0fbf60>
        xp         = <module 'cupy' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/cupy/__init__.py'>
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/cupy/testing/_array.py:95: in assert_array_equal
    numpy.testing.assert_array_equal(
        err_msg    = 'Function returned wrong value in multithreaded context'
        kwargs     = {}
        strides_check = False
        verbose    = True
        x          = array([[1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 1.+0.j, 0.+...  [0.+0.j, 0.+0.j, 1.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j]])
        y          = array([[200.+0.j,   0.+0.j,   0.+0.j, ...,   0.+0.j,   0.+0.j,   0.+0.j],
       [200.+0.j,   0.+0.j,   0.+0.j, ...,  ... 0.+0.j, ...,   0.+0.j,   0.+0.j,   0.+0.j],
       [200.+0.j,   0.+0.j,   0.+0.j, ...,   0.+0.j,   0.+0.j,   0.+0.j]])
../../../../../spack/opt/spack/linux-ubuntu22.04-skylake/gcc-11.3.0/python-3.11.2-4qncg2nqev5evxfmamde3e6rnb34b4ls/lib/python3.11/contextlib.py:81: in inner
    return func(*args, **kwds)
E   AssertionError: 
E   Arrays are not equal
E   Function returned wrong value in multithreaded context
E   Mismatched elements: 1200 / 80800 (1.49%)
E   Max absolute difference: 200.
E   Max relative difference: 1.
E    x: array([[1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j],
E          [0.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 1.+0.j, 0.+0.j],
E          [0.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j],...
E    y: array([[200.+0.j,   0.+0.j,   0.+0.j, ...,   0.+0.j,   0.+0.j,   0.+0.j],
E          [200.+0.j,   0.+0.j,   0.+0.j, ...,   0.+0.j,   0.+0.j,   0.+0.j],
E          [200.+0.j,   0.+0.j,   0.+0.j, ...,   0.+0.j,   0.+0.j,   0.+0.j],...
        args       = (<built-in function eq>, array([[1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j,...0.+0.j, ...,   0.+0.j,   0.+0.j,   0.+0.j],
       [200.+0.j,   0.+0.j,   0.+0.j, ...,   0.+0.j,   0.+0.j,   0.+0.j]]))
        func       = <function assert_array_compare at 0x7fb6e3f31f80>
        kwds       = {'err_msg': 'Function returned wrong value in multithreaded context', 'header': 'Arrays are not equal', 'strict': False, 'verbose': True}
        self       = <contextlib._GeneratorContextManager object at 0x7fb6e3f51c90>
_________________________________________________________________ TestEntropy.test_entropy_with_axis_0_is_equal_to_default[torch] _________________________________________________________________
[gw26] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/stats/tests/test_entropy.py:91: in test_entropy_with_axis_0_is_equal_to_default
    xp_assert_close(stats.entropy(pk, qk, axis=0),
        pk         = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
        qk         = tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0')
        self       = <scipy.stats.tests.test_entropy.TestEntropy object at 0x7f6034d7d710>
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/stats/_axis_nan_policy.py:407: in axis_nan_policy_wrapper
    return hypotest_fun_in(*args, **kwds)
        _no_deco   = False
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'), tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0'))
        default_axis = 0
        hypotest_fun_in = <function entropy at 0x7f6042f80ae0>
        is_too_small = <function _axis_nan_policy_factory.<locals>.is_too_small at 0x7f6042f809a0>
        kwd_samples = []
        kwds       = {'axis': 0}
        msg        = 'Use of `nan_policy` and `keepdims` is incompatible with non-NumPy arrays.'
        n_outputs  = 1
        n_samples  = <function <lambda> at 0x7f6042f2c7c0>
        override   = {'nan_propagation': True, 'vectorization': False}
        paired     = True
        result_to_tuple = <function <lambda> at 0x7f6042f80900>
        temp       = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
        tuple_to_result = <function <lambda> at 0x7f6042f2c5e0>
scipy/stats/_entropy.py:155: in entropy
    vec = special.rel_entr(pk, qk)
        axis       = 0
        base       = None
        pk         = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
        qk         = tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0')
        sum_kwargs = {'axis': 0, 'keepdims': True}
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:50: in wrapped
    return f(*args, **kwargs)
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'), tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0'))
        f          = <function get_array_special_func.<locals>.f at 0x7f6032c64e00>
        f_name     = 'rel_entr'
        kwargs     = {}
        n_array_args = 2
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in f
    array_args = [np.asarray(arg) for arg in array_args]
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'), tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0'))
        array_args = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'), tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0'))
        f_scipy    = <ufunc 'rel_entr'>
        kwargs     = {}
        n_array_args = 2
        other_args = ()
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in <listcomp>
    array_args = [np.asarray(arg) for arg in array_args]
        .0         = <tuple_iterator object at 0x7f6035634a60>
        arg        = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7f60592394d0>
        overloaded_args = [tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')]
        public_api = <function Tensor.__array__ at 0x7f6108994cc0>
        relevant_args = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'),)
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'),)
        func       = <function Tensor.__array__ at 0x7f6108994cc0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7f60592394d0>
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
____________________________________________________________________________ TestEntropy.test_entropy_positive[torch] _____________________________________________________________________________
[gw27] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/stats/tests/test_entropy.py:18: in test_entropy_positive
    eself = stats.entropy(pk, pk)
        pk         = tensor([0.5000, 0.2000, 0.3000], device='cuda:0')
        qk         = tensor([0.1000, 0.2500, 0.6500], device='cuda:0')
        self       = <scipy.stats.tests.test_entropy.TestEntropy object at 0x7f65d83d4490>
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/stats/_axis_nan_policy.py:407: in axis_nan_policy_wrapper
    return hypotest_fun_in(*args, **kwds)
        _no_deco   = False
        args       = (tensor([0.5000, 0.2000, 0.3000], device='cuda:0'), tensor([0.5000, 0.2000, 0.3000], device='cuda:0'))
        default_axis = 0
        hypotest_fun_in = <function entropy at 0x7f65e6c80ae0>
        is_too_small = <function _axis_nan_policy_factory.<locals>.is_too_small at 0x7f65e6c809a0>
        kwd_samples = []
        kwds       = {}
        msg        = 'Use of `nan_policy` and `keepdims` is incompatible with non-NumPy arrays.'
        n_outputs  = 1
        n_samples  = <function <lambda> at 0x7f65e6c2c7c0>
        override   = {'nan_propagation': True, 'vectorization': False}
        paired     = True
        result_to_tuple = <function <lambda> at 0x7f65e6c80900>
        temp       = tensor([0.5000, 0.2000, 0.3000], device='cuda:0')
        tuple_to_result = <function <lambda> at 0x7f65e6c2c5e0>
scipy/stats/_entropy.py:155: in entropy
    vec = special.rel_entr(pk, qk)
        axis       = 0
        base       = None
        pk         = tensor([0.5000, 0.2000, 0.3000], device='cuda:0')
        qk         = tensor([0.5000, 0.2000, 0.3000], device='cuda:0')
        sum_kwargs = {'axis': 0, 'keepdims': True}
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:50: in wrapped
    return f(*args, **kwargs)
        args       = (tensor([0.5000, 0.2000, 0.3000], device='cuda:0'), tensor([0.5000, 0.2000, 0.3000], device='cuda:0'))
        f          = <function get_array_special_func.<locals>.f at 0x7f65d6f71300>
        f_name     = 'rel_entr'
        kwargs     = {}
        n_array_args = 2
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in f
    array_args = [np.asarray(arg) for arg in array_args]
        args       = (tensor([0.5000, 0.2000, 0.3000], device='cuda:0'), tensor([0.5000, 0.2000, 0.3000], device='cuda:0'))
        array_args = (tensor([0.5000, 0.2000, 0.3000], device='cuda:0'), tensor([0.5000, 0.2000, 0.3000], device='cuda:0'))
        f_scipy    = <ufunc 'rel_entr'>
        kwargs     = {}
        n_array_args = 2
        other_args = ()
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in <listcomp>
    array_args = [np.asarray(arg) for arg in array_args]
        .0         = <tuple_iterator object at 0x7f65d643c5b0>
        arg        = tensor([0.5000, 0.2000, 0.3000], device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor([0.5000, 0.2000, 0.3000], device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor([0.5000, 0.2000, 0.3000], device='cuda:0'),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7f66fbf56b50>
        overloaded_args = [tensor([0.5000, 0.2000, 0.3000], device='cuda:0')]
        public_api = <function Tensor.__array__ at 0x7f66ac398cc0>
        relevant_args = (tensor([0.5000, 0.2000, 0.3000], device='cuda:0'),)
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor([0.5000, 0.2000, 0.3000], device='cuda:0'),)
        func       = <function Tensor.__array__ at 0x7f66ac398cc0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7f66fbf56b50>
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor([0.5000, 0.2000, 0.3000], device='cuda:0')
______________________________________________________________________________ TestEntropy.test_entropy_base[torch] _______________________________________________________________________________
[gw27] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/stats/tests/test_entropy.py:31: in test_entropy_base
    S = stats.entropy(pk, qk)
        S          = tensor(4., device='cuda:0')
        pk         = tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
       device='cuda:0')
        qk         = tensor([2., 2., 2., 2., 2., 2., 2., 2., 1., 1., 1., 1., 1., 1., 1., 1.],
       device='cuda:0')
        self       = <scipy.stats.tests.test_entropy.TestEntropy object at 0x7f65dbb2d350>
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/stats/_axis_nan_policy.py:407: in axis_nan_policy_wrapper
    return hypotest_fun_in(*args, **kwds)
        _no_deco   = False
        args       = (tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
       device='cuda:0'), tensor([2., 2., 2., 2., 2., 2., 2., 2., 1., 1., 1., 1., 1., 1., 1., 1.],
       device='cuda:0'))
        default_axis = 0
        hypotest_fun_in = <function entropy at 0x7f65e6c80ae0>
        is_too_small = <function _axis_nan_policy_factory.<locals>.is_too_small at 0x7f65e6c809a0>
        kwd_samples = []
        kwds       = {}
        msg        = 'Use of `nan_policy` and `keepdims` is incompatible with non-NumPy arrays.'
        n_outputs  = 1
        n_samples  = <function <lambda> at 0x7f65e6c2c7c0>
        override   = {'nan_propagation': True, 'vectorization': False}
        paired     = True
        result_to_tuple = <function <lambda> at 0x7f65e6c80900>
        temp       = tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
       device='cuda:0')
        tuple_to_result = <function <lambda> at 0x7f65e6c2c5e0>
scipy/stats/_entropy.py:155: in entropy
    vec = special.rel_entr(pk, qk)
        axis       = 0
        base       = None
        pk         = tensor([0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625,
        0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625],
       device='cuda:0')
        qk         = tensor([0.0833, 0.0833, 0.0833, 0.0833, 0.0833, 0.0833, 0.0833, 0.0833, 0.0417,
        0.0417, 0.0417, 0.0417, 0.0417, 0.0417, 0.0417, 0.0417],
       device='cuda:0')
        sum_kwargs = {'axis': 0, 'keepdims': True}
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:50: in wrapped
    return f(*args, **kwargs)
        args       = (tensor([0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625,
        0.0625, 0.0625, 0.0625, 0.062...0833, 0.0833, 0.0833, 0.0417,
        0.0417, 0.0417, 0.0417, 0.0417, 0.0417, 0.0417, 0.0417],
       device='cuda:0'))
        f          = <function get_array_special_func.<locals>.f at 0x7f65d7381bc0>
        f_name     = 'rel_entr'
        kwargs     = {}
        n_array_args = 2
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in f
    array_args = [np.asarray(arg) for arg in array_args]
        args       = (tensor([0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625,
        0.0625, 0.0625, 0.0625, 0.062...0833, 0.0833, 0.0833, 0.0417,
        0.0417, 0.0417, 0.0417, 0.0417, 0.0417, 0.0417, 0.0417],
       device='cuda:0'))
        array_args = (tensor([0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625,
        0.0625, 0.0625, 0.0625, 0.062...0833, 0.0833, 0.0833, 0.0417,
        0.0417, 0.0417, 0.0417, 0.0417, 0.0417, 0.0417, 0.0417],
       device='cuda:0'))
        f_scipy    = <ufunc 'rel_entr'>
        kwargs     = {}
        n_array_args = 2
        other_args = ()
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in <listcomp>
    array_args = [np.asarray(arg) for arg in array_args]
        .0         = <tuple_iterator object at 0x7f65d5ea2ce0>
        arg        = tensor([0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625,
        0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625],
       device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor([0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625,
        0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625],
       device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor([0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625,
        0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625],
       device='cuda:0'),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7f66fbf56b50>
        overloaded_args = [tensor([0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625,
        0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625],
       device='cuda:0')]
        public_api = <function Tensor.__array__ at 0x7f66ac398cc0>
        relevant_args = (tensor([0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625,
        0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625],
       device='cuda:0'),)
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor([0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625,
        0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625],
       device='cuda:0'),)
        func       = <function Tensor.__array__ at 0x7f66ac398cc0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7f66fbf56b50>
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor([0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625,
        0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625, 0.0625],
       device='cuda:0')
___________________________________________________________________________ TestEntropy.test_entropy_transposed[torch] ____________________________________________________________________________
[gw26] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/stats/tests/test_entropy.py:104: in test_entropy_transposed
    xp_assert_close(stats.entropy(pk.T, qk.T),
        pk         = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
        qk         = tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0')
        self       = <scipy.stats.tests.test_entropy.TestEntropy object at 0x7f6034d7fe10>
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/stats/_axis_nan_policy.py:407: in axis_nan_policy_wrapper
    return hypotest_fun_in(*args, **kwds)
        _no_deco   = False
        args       = (tensor([[0.1000, 0.6000, 0.3000],
        [0.2000, 0.3000, 0.5000]], device='cuda:0'), tensor([[0.2000, 0.3000, 0.5000],
        [0.1000, 0.6000, 0.3000]], device='cuda:0'))
        default_axis = 0
        hypotest_fun_in = <function entropy at 0x7f6042f80ae0>
        is_too_small = <function _axis_nan_policy_factory.<locals>.is_too_small at 0x7f6042f809a0>
        kwd_samples = []
        kwds       = {}
        msg        = 'Use of `nan_policy` and `keepdims` is incompatible with non-NumPy arrays.'
        n_outputs  = 1
        n_samples  = <function <lambda> at 0x7f6042f2c7c0>
        override   = {'nan_propagation': True, 'vectorization': False}
        paired     = True
        result_to_tuple = <function <lambda> at 0x7f6042f80900>
        temp       = tensor([[0.1000, 0.6000, 0.3000],
        [0.2000, 0.3000, 0.5000]], device='cuda:0')
        tuple_to_result = <function <lambda> at 0x7f6042f2c5e0>
scipy/stats/_entropy.py:155: in entropy
    vec = special.rel_entr(pk, qk)
        axis       = 0
        base       = None
        pk         = tensor([[0.3333, 0.6667, 0.3750],
        [0.6667, 0.3333, 0.6250]], device='cuda:0')
        qk         = tensor([[0.6667, 0.3333, 0.6250],
        [0.3333, 0.6667, 0.3750]], device='cuda:0')
        sum_kwargs = {'axis': 0, 'keepdims': True}
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:50: in wrapped
    return f(*args, **kwargs)
        args       = (tensor([[0.3333, 0.6667, 0.3750],
        [0.6667, 0.3333, 0.6250]], device='cuda:0'), tensor([[0.6667, 0.3333, 0.6250],
        [0.3333, 0.6667, 0.3750]], device='cuda:0'))
        f          = <function get_array_special_func.<locals>.f at 0x7f6032cf7100>
        f_name     = 'rel_entr'
        kwargs     = {}
        n_array_args = 2
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in f
    array_args = [np.asarray(arg) for arg in array_args]
        args       = (tensor([[0.3333, 0.6667, 0.3750],
        [0.6667, 0.3333, 0.6250]], device='cuda:0'), tensor([[0.6667, 0.3333, 0.6250],
        [0.3333, 0.6667, 0.3750]], device='cuda:0'))
        array_args = (tensor([[0.3333, 0.6667, 0.3750],
        [0.6667, 0.3333, 0.6250]], device='cuda:0'), tensor([[0.6667, 0.3333, 0.6250],
        [0.3333, 0.6667, 0.3750]], device='cuda:0'))
        f_scipy    = <ufunc 'rel_entr'>
        kwargs     = {}
        n_array_args = 2
        other_args = ()
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in <listcomp>
    array_args = [np.asarray(arg) for arg in array_args]
        .0         = <tuple_iterator object at 0x7f6032f05f60>
        arg        = tensor([[0.3333, 0.6667, 0.3750],
        [0.6667, 0.3333, 0.6250]], device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor([[0.3333, 0.6667, 0.3750],
        [0.6667, 0.3333, 0.6250]], device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor([[0.3333, 0.6667, 0.3750],
        [0.6667, 0.3333, 0.6250]], device='cuda:0'),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7f60592394d0>
        overloaded_args = [tensor([[0.3333, 0.6667, 0.3750],
        [0.6667, 0.3333, 0.6250]], device='cuda:0')]
        public_api = <function Tensor.__array__ at 0x7f6108994cc0>
        relevant_args = (tensor([[0.3333, 0.6667, 0.3750],
        [0.6667, 0.3333, 0.6250]], device='cuda:0'),)
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor([[0.3333, 0.6667, 0.3750],
        [0.6667, 0.3333, 0.6250]], device='cuda:0'),)
        func       = <function Tensor.__array__ at 0x7f6108994cc0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7f60592394d0>
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor([[0.3333, 0.6667, 0.3750],
        [0.6667, 0.3333, 0.6250]], device='cuda:0')
__________________________________________________________________________ TestEntropy.test_entropy_broadcasting[torch] ___________________________________________________________________________
[gw26] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/stats/tests/test_entropy.py:112: in test_entropy_broadcasting
    res = stats.entropy(x, y, axis=-1)
        rng        = Generator(PCG64) at 0x7F60334110E0
        self       = <scipy.stats.tests.test_entropy.TestEntropy object at 0x7f6034d88f90>
        x          = tensor([0.1155, 0.4569, 0.2974], device='cuda:0', dtype=torch.float64)
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
        y          = tensor([[0.4619],
        [0.4314]], device='cuda:0', dtype=torch.float64)
scipy/stats/_axis_nan_policy.py:407: in axis_nan_policy_wrapper
    return hypotest_fun_in(*args, **kwds)
        _no_deco   = False
        args       = (tensor([0.1155, 0.4569, 0.2974], device='cuda:0', dtype=torch.float64), tensor([[0.4619],
        [0.4314]], device='cuda:0', dtype=torch.float64))
        default_axis = 0
        hypotest_fun_in = <function entropy at 0x7f6042f80ae0>
        is_too_small = <function _axis_nan_policy_factory.<locals>.is_too_small at 0x7f6042f809a0>
        kwd_samples = []
        kwds       = {'axis': -1}
        msg        = 'Use of `nan_policy` and `keepdims` is incompatible with non-NumPy arrays.'
        n_outputs  = 1
        n_samples  = <function <lambda> at 0x7f6042f2c7c0>
        override   = {'nan_propagation': True, 'vectorization': False}
        paired     = True
        result_to_tuple = <function <lambda> at 0x7f6042f80900>
        temp       = tensor([0.1155, 0.4569, 0.2974], device='cuda:0', dtype=torch.float64)
        tuple_to_result = <function <lambda> at 0x7f6042f2c5e0>
scipy/stats/_entropy.py:155: in entropy
    vec = special.rel_entr(pk, qk)
        axis       = -1
        base       = None
        pk         = tensor([[0.1328, 0.5253, 0.3419],
        [0.1328, 0.5253, 0.3419]], device='cuda:0', dtype=torch.float64)
        qk         = tensor([[0.3333, 0.3333, 0.3333],
        [0.3333, 0.3333, 0.3333]], device='cuda:0', dtype=torch.float64)
        sum_kwargs = {'axis': -1, 'keepdims': True}
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:50: in wrapped
    return f(*args, **kwargs)
        args       = (tensor([[0.1328, 0.5253, 0.3419],
        [0.1328, 0.5253, 0.3419]], device='cuda:0', dtype=torch.float64), tensor([[0.3333, 0.3333, 0.3333],
        [0.3333, 0.3333, 0.3333]], device='cuda:0', dtype=torch.float64))
        f          = <function get_array_special_func.<locals>.f at 0x7f6032cf0400>
        f_name     = 'rel_entr'
        kwargs     = {}
        n_array_args = 2
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in f
    array_args = [np.asarray(arg) for arg in array_args]
        args       = (tensor([[0.1328, 0.5253, 0.3419],
        [0.1328, 0.5253, 0.3419]], device='cuda:0', dtype=torch.float64), tensor([[0.3333, 0.3333, 0.3333],
        [0.3333, 0.3333, 0.3333]], device='cuda:0', dtype=torch.float64))
        array_args = (tensor([[0.1328, 0.5253, 0.3419],
        [0.1328, 0.5253, 0.3419]], device='cuda:0', dtype=torch.float64), tensor([[0.3333, 0.3333, 0.3333],
        [0.3333, 0.3333, 0.3333]], device='cuda:0', dtype=torch.float64))
        f_scipy    = <ufunc 'rel_entr'>
        kwargs     = {}
        n_array_args = 2
        other_args = ()
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in <listcomp>
    array_args = [np.asarray(arg) for arg in array_args]
        .0         = <tuple_iterator object at 0x7f603529af80>
        arg        = tensor([[0.1328, 0.5253, 0.3419],
        [0.1328, 0.5253, 0.3419]], device='cuda:0', dtype=torch.float64)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor([[0.1328, 0.5253, 0.3419],
        [0.1328, 0.5253, 0.3419]], device='cuda:0', dtype=torch.float64)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor([[0.1328, 0.5253, 0.3419],
        [0.1328, 0.5253, 0.3419]], device='cuda:0', dtype=torch.float64),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7f60592394d0>
        overloaded_args = [tensor([[0.1328, 0.5253, 0.3419],
        [0.1328, 0.5253, 0.3419]], device='cuda:0', dtype=torch.float64)]
        public_api = <function Tensor.__array__ at 0x7f6108994cc0>
        relevant_args = (tensor([[0.1328, 0.5253, 0.3419],
        [0.1328, 0.5253, 0.3419]], device='cuda:0', dtype=torch.float64),)
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor([[0.1328, 0.5253, 0.3419],
        [0.1328, 0.5253, 0.3419]], device='cuda:0', dtype=torch.float64),)
        func       = <function Tensor.__array__ at 0x7f6108994cc0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7f60592394d0>
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor([[0.1328, 0.5253, 0.3419],
        [0.1328, 0.5253, 0.3419]], device='cuda:0', dtype=torch.float64)
_______________________________________________________________________________ TestEntropy.test_entropy_2d[torch] ________________________________________________________________________________
[gw27] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/stats/tests/test_entropy.py:46: in test_entropy_2d
    xp_assert_close(stats.entropy(pk, qk),
        pk         = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
        qk         = tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0')
        self       = <scipy.stats.tests.test_entropy.TestEntropy object at 0x7f65d8276450>
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/stats/_axis_nan_policy.py:407: in axis_nan_policy_wrapper
    return hypotest_fun_in(*args, **kwds)
        _no_deco   = False
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'), tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0'))
        default_axis = 0
        hypotest_fun_in = <function entropy at 0x7f65e6c80ae0>
        is_too_small = <function _axis_nan_policy_factory.<locals>.is_too_small at 0x7f65e6c809a0>
        kwd_samples = []
        kwds       = {}
        msg        = 'Use of `nan_policy` and `keepdims` is incompatible with non-NumPy arrays.'
        n_outputs  = 1
        n_samples  = <function <lambda> at 0x7f65e6c2c7c0>
        override   = {'nan_propagation': True, 'vectorization': False}
        paired     = True
        result_to_tuple = <function <lambda> at 0x7f65e6c80900>
        temp       = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
        tuple_to_result = <function <lambda> at 0x7f65e6c2c5e0>
scipy/stats/_entropy.py:155: in entropy
    vec = special.rel_entr(pk, qk)
        axis       = 0
        base       = None
        pk         = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
        qk         = tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0')
        sum_kwargs = {'axis': 0, 'keepdims': True}
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:50: in wrapped
    return f(*args, **kwargs)
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'), tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0'))
        f          = <function get_array_special_func.<locals>.f at 0x7f65d6ee6520>
        f_name     = 'rel_entr'
        kwargs     = {}
        n_array_args = 2
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in f
    array_args = [np.asarray(arg) for arg in array_args]
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'), tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0'))
        array_args = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'), tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0'))
        f_scipy    = <ufunc 'rel_entr'>
        kwargs     = {}
        n_array_args = 2
        other_args = ()
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in <listcomp>
    array_args = [np.asarray(arg) for arg in array_args]
        .0         = <tuple_iterator object at 0x7f65e7b244f0>
        arg        = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7f66fbf56b50>
        overloaded_args = [tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')]
        public_api = <function Tensor.__array__ at 0x7f66ac398cc0>
        relevant_args = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'),)
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'),)
        func       = <function Tensor.__array__ at 0x7f66ac398cc0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7f66fbf56b50>
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
_____________________________________________________________________________ TestEntropy.test_entropy_2d_zero[torch] _____________________________________________________________________________
[gw27] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/stats/tests/test_entropy.py:53: in test_entropy_2d_zero
    xp_assert_close(stats.entropy(pk, qk),
        pk         = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
        qk         = tensor([[0.0000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0')
        self       = <scipy.stats.tests.test_entropy.TestEntropy object at 0x7f65d8277710>
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/stats/_axis_nan_policy.py:407: in axis_nan_policy_wrapper
    return hypotest_fun_in(*args, **kwds)
        _no_deco   = False
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'), tensor([[0.0000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0'))
        default_axis = 0
        hypotest_fun_in = <function entropy at 0x7f65e6c80ae0>
        is_too_small = <function _axis_nan_policy_factory.<locals>.is_too_small at 0x7f65e6c809a0>
        kwd_samples = []
        kwds       = {}
        msg        = 'Use of `nan_policy` and `keepdims` is incompatible with non-NumPy arrays.'
        n_outputs  = 1
        n_samples  = <function <lambda> at 0x7f65e6c2c7c0>
        override   = {'nan_propagation': True, 'vectorization': False}
        paired     = True
        result_to_tuple = <function <lambda> at 0x7f65e6c80900>
        temp       = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
        tuple_to_result = <function <lambda> at 0x7f65e6c2c5e0>
scipy/stats/_entropy.py:155: in entropy
    vec = special.rel_entr(pk, qk)
        axis       = 0
        base       = None
        pk         = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
        qk         = tensor([[0.0000, 0.1000],
        [0.3750, 0.6000],
        [0.6250, 0.3000]], device='cuda:0')
        sum_kwargs = {'axis': 0, 'keepdims': True}
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:50: in wrapped
    return f(*args, **kwargs)
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'), tensor([[0.0000, 0.1000],
        [0.3750, 0.6000],
        [0.6250, 0.3000]], device='cuda:0'))
        f          = <function get_array_special_func.<locals>.f at 0x7f65d705a0c0>
        f_name     = 'rel_entr'
        kwargs     = {}
        n_array_args = 2
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in f
    array_args = [np.asarray(arg) for arg in array_args]
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'), tensor([[0.0000, 0.1000],
        [0.3750, 0.6000],
        [0.6250, 0.3000]], device='cuda:0'))
        array_args = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'), tensor([[0.0000, 0.1000],
        [0.3750, 0.6000],
        [0.6250, 0.3000]], device='cuda:0'))
        f_scipy    = <ufunc 'rel_entr'>
        kwargs     = {}
        n_array_args = 2
        other_args = ()
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in <listcomp>
    array_args = [np.asarray(arg) for arg in array_args]
        .0         = <tuple_iterator object at 0x7f65db861360>
        arg        = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7f66fbf56b50>
        overloaded_args = [tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')]
        public_api = <function Tensor.__array__ at 0x7f66ac398cc0>
        relevant_args = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'),)
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'),)
        func       = <function Tensor.__array__ at 0x7f66ac398cc0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7f66fbf56b50>
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
_______________________________________________________________________ TestEntropy.test_entropy_2d_nondefault_axis[torch] ________________________________________________________________________
[gw27] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/stats/tests/test_entropy.py:70: in test_entropy_2d_nondefault_axis
    xp_assert_close(stats.entropy(pk, qk, axis=1),
        pk         = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
        qk         = tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0')
        self       = <scipy.stats.tests.test_entropy.TestEntropy object at 0x7f65d82a8a10>
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/stats/_axis_nan_policy.py:407: in axis_nan_policy_wrapper
    return hypotest_fun_in(*args, **kwds)
        _no_deco   = False
        args       = (tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0'), tensor([[0.2000, 0.1000],
        [0.3000, 0.6000],
        [0.5000, 0.3000]], device='cuda:0'))
        default_axis = 0
        hypotest_fun_in = <function entropy at 0x7f65e6c80ae0>
        is_too_small = <function _axis_nan_policy_factory.<locals>.is_too_small at 0x7f65e6c809a0>
        kwd_samples = []
        kwds       = {'axis': 1}
        msg        = 'Use of `nan_policy` and `keepdims` is incompatible with non-NumPy arrays.'
        n_outputs  = 1
        n_samples  = <function <lambda> at 0x7f65e6c2c7c0>
        override   = {'nan_propagation': True, 'vectorization': False}
        paired     = True
        result_to_tuple = <function <lambda> at 0x7f65e6c80900>
        temp       = tensor([[0.1000, 0.2000],
        [0.6000, 0.3000],
        [0.3000, 0.5000]], device='cuda:0')
        tuple_to_result = <function <lambda> at 0x7f65e6c2c5e0>
scipy/stats/_entropy.py:155: in entropy
    vec = special.rel_entr(pk, qk)
        axis       = 1
        base       = None
        pk         = tensor([[0.3333, 0.6667],
        [0.6667, 0.3333],
        [0.3750, 0.6250]], device='cuda:0')
        qk         = tensor([[0.6667, 0.3333],
        [0.3333, 0.6667],
        [0.6250, 0.3750]], device='cuda:0')
        sum_kwargs = {'axis': 1, 'keepdims': True}
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:50: in wrapped
    return f(*args, **kwargs)
        args       = (tensor([[0.3333, 0.6667],
        [0.6667, 0.3333],
        [0.3750, 0.6250]], device='cuda:0'), tensor([[0.6667, 0.3333],
        [0.3333, 0.6667],
        [0.6250, 0.3750]], device='cuda:0'))
        f          = <function get_array_special_func.<locals>.f at 0x7f65d75ec900>
        f_name     = 'rel_entr'
        kwargs     = {}
        n_array_args = 2
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in f
    array_args = [np.asarray(arg) for arg in array_args]
        args       = (tensor([[0.3333, 0.6667],
        [0.6667, 0.3333],
        [0.3750, 0.6250]], device='cuda:0'), tensor([[0.6667, 0.3333],
        [0.3333, 0.6667],
        [0.6250, 0.3750]], device='cuda:0'))
        array_args = (tensor([[0.3333, 0.6667],
        [0.6667, 0.3333],
        [0.3750, 0.6250]], device='cuda:0'), tensor([[0.6667, 0.3333],
        [0.3333, 0.6667],
        [0.6250, 0.3750]], device='cuda:0'))
        f_scipy    = <ufunc 'rel_entr'>
        kwargs     = {}
        n_array_args = 2
        other_args = ()
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in <listcomp>
    array_args = [np.asarray(arg) for arg in array_args]
        .0         = <tuple_iterator object at 0x7f65e158b850>
        arg        = tensor([[0.3333, 0.6667],
        [0.6667, 0.3333],
        [0.3750, 0.6250]], device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor([[0.3333, 0.6667],
        [0.6667, 0.3333],
        [0.3750, 0.6250]], device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor([[0.3333, 0.6667],
        [0.6667, 0.3333],
        [0.3750, 0.6250]], device='cuda:0'),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7f66fbf56b50>
        overloaded_args = [tensor([[0.3333, 0.6667],
        [0.6667, 0.3333],
        [0.3750, 0.6250]], device='cuda:0')]
        public_api = <function Tensor.__array__ at 0x7f66ac398cc0>
        relevant_args = (tensor([[0.3333, 0.6667],
        [0.6667, 0.3333],
        [0.3750, 0.6250]], device='cuda:0'),)
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor([[0.3333, 0.6667],
        [0.6667, 0.3333],
        [0.3750, 0.6250]], device='cuda:0'),)
        func       = <function Tensor.__array__ at 0x7f66ac398cc0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7f66fbf56b50>
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor([[0.3333, 0.6667],
        [0.6667, 0.3333],
        [0.3750, 0.6250]], device='cuda:0')
_______________________________________________________________________________ TestFFTThreadSafe.test_ihfft[cupy] ________________________________________________________________________________
[gw6] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/fft/tests/test_basic.py:456: in test_ihfft
    self._test_mtsame(fft.ihfft, a, xp=xp)
        a          = array([[1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       ...,
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.]])
        self       = <scipy.fft.tests.test_basic.TestFFTThreadSafe object at 0x7fb5d617a5d0>
        xp         = <module 'cupy' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/cupy/__init__.py'>
scipy/fft/tests/test_basic.py:429: in _test_mtsame
    xp_assert_equal(
        args       = (array([[1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
   ....,
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.]]),)
        expected   = array([[1.-0.j, 0.-0.j, 0.-0.j, ..., 0.-0.j, 0.-0.j, 0.-0.j],
       [1.-0.j, 0.-0.j, 0.-0.j, ..., 0.-0.j, 0.-0.j, 0.-...  [1.-0.j, 0.-0.j, 0.-0.j, ..., 0.-0.j, 0.-0.j, 0.-0.j],
       [1.-0.j, 0.-0.j, 0.-0.j, ..., 0.-0.j, 0.-0.j, 0.-0.j]])
        func       = <uarray multimethod 'ihfft'>
        i          = 0
        q          = <queue.Queue object at 0x7fb5bf082e50>
        self       = <scipy.fft.tests.test_basic.TestFFTThreadSafe object at 0x7fb5d617a5d0>
        t          = [<Thread(Thread-369 (worker), stopped 140417647507008)>, <Thread(Thread-370 (worker), stopped 140417630721600)>, <Thre...>, <Thread(Thread-373 (worker), stopped 140418531137088)>, <Thread(Thread-374 (worker), stopped 140418293425728)>, ...]
        worker     = <function TestFFTThreadSafe._test_mtsame.<locals>.worker at 0x7fb5be0f9c60>
        xp         = <module 'cupy' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/cupy/__init__.py'>
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/cupy/testing/_array.py:95: in assert_array_equal
    numpy.testing.assert_array_equal(
        err_msg    = 'Function returned wrong value in multithreaded context'
        kwargs     = {}
        strides_check = False
        verbose    = True
        x          = array([[0.005-0.j, 0.   -0.j, 0.   -0.j, ..., 0.   -0.j, 0.   -0.j,
        0.   -0.j],
       [0.005-0.j, 0.   -0.j, ...0.   -0.j,
        0.   -0.j],
       [0.005-0.j, 0.   -0.j, 0.   -0.j, ..., 0.   -0.j, 0.   -0.j,
        0.   -0.j]])
        y          = array([[1.-0.j, 0.-0.j, 0.-0.j, ..., 0.-0.j, 0.-0.j, 0.-0.j],
       [1.-0.j, 0.-0.j, 0.-0.j, ..., 0.-0.j, 0.-0.j, 0.-...  [1.-0.j, 0.-0.j, 0.-0.j, ..., 0.-0.j, 0.-0.j, 0.-0.j],
       [1.-0.j, 0.-0.j, 0.-0.j, ..., 0.-0.j, 0.-0.j, 0.-0.j]])
../../../../../spack/opt/spack/linux-ubuntu22.04-skylake/gcc-11.3.0/python-3.11.2-4qncg2nqev5evxfmamde3e6rnb34b4ls/lib/python3.11/contextlib.py:81: in inner
    return func(*args, **kwds)
E   AssertionError: 
E   Arrays are not equal
E   Function returned wrong value in multithreaded context
E   Mismatched elements: 800 / 80800 (0.99%)
E   Max absolute difference: 0.995
E   Max relative difference: 0.995
E    x: array([[0.005-0.j, 0.   -0.j, 0.   -0.j, ..., 0.   -0.j, 0.   -0.j,
E           0.   -0.j],
E          [0.005-0.j, 0.   -0.j, 0.   -0.j, ..., 0.   -0.j, 0.   -0.j,...
E    y: array([[1.-0.j, 0.-0.j, 0.-0.j, ..., 0.-0.j, 0.-0.j, 0.-0.j],
E          [1.-0.j, 0.-0.j, 0.-0.j, ..., 0.-0.j, 0.-0.j, 0.-0.j],
E          [1.-0.j, 0.-0.j, 0.-0.j, ..., 0.-0.j, 0.-0.j, 0.-0.j],...
        args       = (<built-in function eq>, array([[0.005-0.j, 0.   -0.j, 0.   -0.j, ..., 0.   -0.j, 0.   -0.j,
        0.   -0.j],
     ... [1.-0.j, 0.-0.j, 0.-0.j, ..., 0.-0.j, 0.-0.j, 0.-0.j],
       [1.-0.j, 0.-0.j, 0.-0.j, ..., 0.-0.j, 0.-0.j, 0.-0.j]]))
        func       = <function assert_array_compare at 0x7fb6e3f31f80>
        kwds       = {'err_msg': 'Function returned wrong value in multithreaded context', 'header': 'Arrays are not equal', 'strict': False, 'verbose': True}
        self       = <contextlib._GeneratorContextManager object at 0x7fb6e3f51c90>
____________________________________________________________________ test_support_alternative_backends[f_name_n_args15-torch] _____________________________________________________________________
[gw5] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/special/tests/test_support_alternative_backends.py:31: in test_support_alternative_backends
    @given(data=strategies.data())
        f          = <function given.<locals>.run_test_as_given.<locals>.wrapped_test at 0x7fce7d977ba0>
        f_name_n_args = ('rel_entr', 2)
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/special/tests/test_support_alternative_backends.py:55: in test_support_alternative_backends
    res = f(*args_xp)
        args_np    = [array(0., dtype=float32), array(0., dtype=float32)]
        args_xp    = [tensor(0., device='cuda:0'), tensor(0., device='cuda:0')]
        data       = data(...)
        dtype      = 'float32'
        dtype_np   = <class 'numpy.float32'>
        dtype_xp   = torch.float32
        elements   = {'allow_subnormal': False, 'max_value': 10.0, 'min_value': -10.0}
        f          = <function support_alternative_backends.<locals>.wrapped at 0x7fce9e5d84a0>
        f_name     = 'rel_entr'
        f_name_n_args = ('rel_entr', 2)
        final_shape = ()
        mbs        = mutually_broadcastable_shapes(num_shapes=2)
        n_args     = 2
        ref        = array(0., dtype=float32)
        shapes     = ((), ())
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:50: in wrapped
    return f(*args, **kwargs)
        args       = (tensor(0., device='cuda:0'), tensor(0., device='cuda:0'))
        f          = <function get_array_special_func.<locals>.f at 0x7fce61506520>
        f_name     = 'rel_entr'
        kwargs     = {}
        n_array_args = 2
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in f
    array_args = [np.asarray(arg) for arg in array_args]
        args       = (tensor(0., device='cuda:0'), tensor(0., device='cuda:0'))
        array_args = (tensor(0., device='cuda:0'), tensor(0., device='cuda:0'))
        f_scipy    = <ufunc 'rel_entr'>
        kwargs     = {}
        n_array_args = 2
        other_args = ()
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:35: in <listcomp>
    array_args = [np.asarray(arg) for arg in array_args]
        .0         = <tuple_iterator object at 0x7fce693c3310>
        arg        = tensor(0., device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor(0., device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor(0., device='cuda:0'),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7fcea01310d0>
        overloaded_args = [tensor(0., device='cuda:0')]
        public_api = <function Tensor.__array__ at 0x7fcf4f984cc0>
        relevant_args = (tensor(0., device='cuda:0'),)
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor(0., device='cuda:0'),)
        func       = <function Tensor.__array__ at 0x7fcf4f984cc0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7fcea01310d0>
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
E   Falsifying example: test_support_alternative_backends(
E       xp=<module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>,
E       f_name_n_args=('rel_entr', 2),
E       data=data(...),
E   )
E   Draw 1: BroadcastableShapes(input_shapes=((), ()), result_shape=())
E   Draw 2: 'float32'
E   Draw 3: array(0., dtype=float32)
E   Draw 4: array(0., dtype=float32)
E   
E   You can reproduce this example by temporarily adding @reproduce_failure('6.82.0', b'AXicY2DABwAAHgAB') as a decorator on your test case
        dtype      = None
        self       = tensor(0., device='cuda:0')
===================================================================================== short test summary info =====================================================================================
FAILED scipy/fft/tests/test_basic.py::TestFFTThreadSafe::test_fft[torch] - pytest.PytestUnhandledThreadExceptionWarning: Exception in thread Thread-33 (worker)
FAILED scipy/fft/tests/test_basic.py::TestFFTThreadSafe::test_ifft[cupy] - AssertionError: 
FAILED scipy/fft/tests/test_basic.py::TestFFTThreadSafe::test_rfft[cupy] - AssertionError: 
FAILED scipy/stats/tests/test_entropy.py::TestEntropy::test_entropy_with_axis_0_is_equal_to_default[torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_entropy.py::TestEntropy::test_entropy_positive[torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_entropy.py::TestEntropy::test_entropy_base[torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_entropy.py::TestEntropy::test_entropy_transposed[torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_entropy.py::TestEntropy::test_entropy_broadcasting[torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_entropy.py::TestEntropy::test_entropy_2d[torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_entropy.py::TestEntropy::test_entropy_2d_zero[torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_entropy.py::TestEntropy::test_entropy_2d_nondefault_axis[torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/fft/tests/test_basic.py::TestFFTThreadSafe::test_ihfft[cupy] - AssertionError: 
FAILED scipy/special/tests/test_support_alternative_backends.py::test_support_alternative_backends[f_name_n_args15-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
============================================================ 13 failed, 51728 passed, 11250 skipped, 157 xfailed, 13 xpassed in 58.81s ============================================================

Copy link
Contributor

I can try to help iterate a bit though, if you ping me to check stuff, etc.

Copy link
Contributor Author

Huh, it looks like the _axis_nan_policy decorator is not getting skipped; it should always take an early exit for non-NumPy arrays. I tested locally with CuPy and didn't see this, which is surprising. I should be able to debug locally, though. Thanks!
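
(For context, the early exit in question looks roughly like the hypothetical simplification below. The names are taken from the traceback above; the actual logic in scipy/stats/_axis_nan_policy.py differs.)

import numpy as np

def axis_nan_policy_wrapper(hypotest_fun_in, *args, **kwds):
    if not all(isinstance(arg, np.ndarray) for arg in args):
        # Non-NumPy (array API) input: the nan_policy/keepdims machinery
        # below is NumPy-only, so hand off to the function immediately.
        if 'nan_policy' in kwds or 'keepdims' in kwds:
            raise NotImplementedError('Use of `nan_policy` and `keepdims` '
                                      'is incompatible with non-NumPy arrays.')
        return hypotest_fun_in(*args, **kwds)
    ...  # NumPy-specific axis/nan_policy handling continues here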

Copy link
Contributor Author

Actually, this is not unexpected; I misread before. torch doesn't have a rel_entr function, so I knew that on CPU it would evaluate that part with NumPy and convert the result back to torch. I think this is in the same category of tests that we expect to pass only on CPU. Until we can use from_dlpack to do device transfers, we are OK with such tests failing on GPU. Can we skip these for torch on GPU only?
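
For reference, the fallback has roughly this shape. This is a simplified sketch reconstructed from the traceback above, not the exact code in scipy/special/_support_alternative_backends.py:

import numpy as np

def get_array_special_func(f_scipy, xp, n_array_args):
    # Round-trip through NumPy when `xp` has no native implementation.
    # The np.asarray call is what raises the TypeError in the failures
    # above: CUDA tensors can't be converted without an explicit .cpu().
    def f(*args, **kwargs):
        array_args = [np.asarray(arg) for arg in args[:n_array_args]]
        other_args = args[n_array_args:]
        out = f_scipy(*array_args, *other_args, **kwargs)
        return xp.asarray(out)
    return f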

Copy link
Contributor Author

The alternative is to write a rel_entr function that relies only on array API calls. Usually we can't do that in any reasonable way for special functions, but in this case it would be pretty easy. I'd prefer for that to be a follow-up enhancement, though.
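
For the record, a minimal sketch of what that could look like using only array API calls (not necessarily the exact code that would land here), where xp is the array namespace:

def rel_entr(x, y, xp):
    # rel_entr(x, y) = x * log(x/y)  if x > 0 and y > 0
    #                = 0             if x == 0 and y >= 0
    #                = inf           otherwise
    # Computing log(x) - log(y) instead of log(x/y) also avoids
    # overflowing the ratio when y is tiny.
    res = x * (xp.log(x) - xp.log(y))
    res = xp.where((x > 0) & (y > 0), res, xp.asarray(xp.inf, dtype=res.dtype))
    res = xp.where((x == 0) & (y >= 0), xp.asarray(0., dtype=res.dtype), res)
    return res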

Copy link
Contributor Author

Never mind, I think I can add it here.

Copy link
Contributor

I don't have a strong opinion on it, apart from thinking we'd prefer to skip rather than carry the failures if it came to that.

Copy link
Contributor Author

Could you try out the latest commit? I added a generic implementation of rel_entr.

Copy link
Contributor Author

@mdhaber mdhaber left a comment

Only the else should change to a separate if.

@tylerjereddy
Copy link
Contributor

Hey Matt, I only see 2 failures left locally with the CUDA backend on x86_64 Linux. One of the two actually looks like it is on CPU.

_______________________________________________________________ test_support_alternative_backends[f_name_n_args15-array_api_strict] _______________________________________________________________
[gw13] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/special/tests/test_support_alternative_backends.py:51: in test_support_alternative_backends
    @given(data=strategies.data())
        f          = <function given.<locals>.run_test_as_given.<locals>.wrapped_test at 0x7f3ac095c900>
        f_name_n_args = ('rel_entr', 2)
        xp         = <module 'array_api_strict' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/array_api_strict/__init__.py'>
scipy/special/tests/test_support_alternative_backends.py:83: in test_support_alternative_backends
    xp_assert_close(res, ref, rtol=eps**0.2, atol=eps*10,
        args_np    = [array(4.), array(2.22507386e-308)]
        args_xp    = [Array(4., dtype=array_api_strict.float64), Array(2.22507386e-308, dtype=array_api_strict.float64)]
        data       = data(...)
        dtype      = 'float64'
        dtype_np   = <class 'numpy.float64'>
        dtype_xp   = array_api_strict.float64
        elements   = {'allow_subnormal': False, 'max_value': 10.0, 'min_value': -10.0}
        eps        = 2.220446049250313e-16
        f          = <function support_alternative_backends.<locals>.wrapped at 0x7f3ae15dc680>
        f_name     = 'rel_entr'
        f_name_n_args = ('rel_entr', 2)
        final_shape = ()
        mbs        = mutually_broadcastable_shapes(num_shapes=2)
        n_args     = 2
        ref        = Array(inf, dtype=array_api_strict.float64)
        res        = Array(2839.13085157, dtype=array_api_strict.float64)
        shapes     = ((), ())
        xp         = <module 'array_api_strict' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/array_api_strict/__init__.py'>
../../../../../spack/opt/spack/linux-ubuntu22.04-skylake/gcc-11.3.0/python-3.11.2-4qncg2nqev5evxfmamde3e6rnb34b4ls/lib/python3.11/contextlib.py:81: in inner
    return func(*args, **kwds)
E   AssertionError: 
E   Not equal to tolerance rtol=0.000740096, atol=2.22045e-15
E   
E   x and y +inf location mismatch:
E    x: array(2839.130852)
E    y: array(inf)
E   Falsifying example: test_support_alternative_backends(
E       xp=<module 'array_api_strict' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/array_api_strict/__init__.py'>,
E       f_name_n_args=('rel_entr', 2),
E       data=data(...),
E   )
E   Draw 1: BroadcastableShapes(input_shapes=((), ()), result_shape=())
E   Draw 2: 'float64'
E   Draw 3: array(4.)
E   Draw 4: array(2.22507386e-308)
E   
E   You can reproduce this example by temporarily adding @reproduce_failure('6.82.0', b'AXicY2AAAkZGBgRgAZP/L0B4AA41AdY=') as a decorator on your test case
        args       = (<function assert_allclose.<locals>.compare at 0x7f3a5f37e2a0>, array(2839.13085157), array(inf))
        func       = <function assert_array_compare at 0x7f3b93b01f80>
        kwds       = {'equal_nan': True, 'err_msg': '', 'header': 'Not equal to tolerance rtol=0.000740096, atol=2.22045e-15', 'verbose': True}
        self       = <contextlib._GeneratorContextManager object at 0x7f3b93b189d0>
____________________________________________________________________ test_support_alternative_backends[f_name_n_args15-torch] _____________________________________________________________________
[gw13] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/special/tests/test_support_alternative_backends.py:51: in test_support_alternative_backends
    @given(data=strategies.data())
        f          = <function given.<locals>.run_test_as_given.<locals>.wrapped_test at 0x7f3ac095c900>
        f_name_n_args = ('rel_entr', 2)
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/special/tests/test_support_alternative_backends.py:83: in test_support_alternative_backends
    xp_assert_close(res, ref, rtol=eps**0.2, atol=eps*10,
E   AssertionError: Scalars are not close!
E   
E   Expected inf but got 2839.130851573536.
E   Absolute difference: inf (up to 2.220446049250313e-15 allowed)
E   Relative difference: nan (up to 0.000740095979741405 allowed)
E   Falsifying example: test_support_alternative_backends(
E       xp=<module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>,
E       f_name_n_args=('rel_entr', 2),
E       data=data(...),
E   )
E   Draw 1: BroadcastableShapes(input_shapes=((), ()), result_shape=())
E   Draw 2: 'float64'
E   Draw 3: array(4.)
E   Draw 4: array(2.22507386e-308)
E   
E   You can reproduce this example by temporarily adding @reproduce_failure('6.82.0', b'AXicY2AAAkYGBGABEf8vQDgADhMB1Q==') as a decorator on your test case
        args_np    = [array(4.), array(2.22507386e-308)]
        args_xp    = [tensor(4., device='cuda:0', dtype=torch.float64), tensor(2.2251e-308, device='cuda:0', dtype=torch.float64)]
        data       = data(...)
        dtype      = 'float64'
        dtype_np   = <class 'numpy.float64'>
        dtype_xp   = torch.float64
        elements   = {'allow_subnormal': False, 'max_value': 10.0, 'min_value': -10.0}
        eps        = 2.220446049250313e-16
        f          = <function support_alternative_backends.<locals>.wrapped at 0x7f3ae15dc680>
        f_name     = 'rel_entr'
        f_name_n_args = ('rel_entr', 2)
        final_shape = ()
        mbs        = mutually_broadcastable_shapes(num_shapes=2)
        n_args     = 2
        ref        = tensor(inf, device='cuda:0', dtype=torch.float64)
        res        = tensor(2839.1309, device='cuda:0', dtype=torch.float64)
        shapes     = ((), ())
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>

@mdhaber
Copy link
Contributor Author

mdhaber commented May 14, 2024

That's a shortcoming of scipy.special.rel_entr that is fixed by the generic implementation.

import numpy as np
from scipy import special
x = np.asarray([2., 3., 4.])
y = np.asarray(np.finfo(np.float64).tiny)  # smallest normal float64, ~2.2e-308
# The ufunc overflows for x = 4: the ratio 4/tiny exceeds the largest float64.
special.rel_entr(x, y)  # array([1418.17913143, 2128.48509246,           inf])
# Taking the logs separately avoids the overflow.
x * (np.log(x) - np.log(y))  # array([1418.17913143, 2128.48509246, 2839.13085157])

It must be using the naive expression that takes the ratio x/y before taking the logarithm, presumably because one divide and one log is faster than two logs and a subtraction. Perhaps it would be inexpensive enough to check for overflow and, in that case, take the difference in logs.
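
Something along these lines could work; this is a rough NumPy sketch of the idea, not the actual ufunc implementation:

import numpy as np

def rel_entr_guarded(x, y):
    # Fast path: one divide and one log.
    with np.errstate(over='ignore', divide='ignore', invalid='ignore'):
        ratio = x / y
        res = x * np.log(ratio)
        # Where the ratio overflowed despite finite, positive inputs,
        # recompute with the slower, overflow-safe difference of logs.
        bad = np.isinf(ratio) & np.isfinite(x) & np.isfinite(y) & (x > 0) & (y > 0)
        res = np.where(bad, x * (np.log(x) - np.log(y)), res)
    return res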

Since it's not failing in CI, let's not take any action toward that here, but I can file an issue. Thanks @tylerjereddy!

@lucascolley I think this is ready for another look. Thank you!

Copy link
Member

@lucascolley lucascolley left a comment

thanks Matt, and thanks for catching the GPU things Tyler!

@lucascolley lucascolley changed the title from "ENH: stats.entropy / special.entr / special.rel_entr: add array API support" to "ENH: stats.entropy, special.{entr, rel_entr}: add array API support" on May 14, 2024
@lucascolley lucascolley merged commit 58e66fa into scipy:main May 14, 2024