
TST, MAINT: array API GPU test failures #20957

@tylerjereddy

Description


On the latest main at the time of writing (c109bb1), on x86_64 Linux with an NVIDIA device, I see the following failures:

<snip>
_____________________________________________________________________ test_support_alternative_backends[f_name_n_args2-torch] _____________________________________________________________________
[gw25] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/special/tests/test_support_alternative_backends.py:51: in test_support_alternative_backends
    @array_api_compatible
        f          = <function given.<locals>.run_test_as_given.<locals>.wrapped_test at 0x7f74302aea20>
        f_name_n_args = ('betainc', 3)
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/special/tests/test_support_alternative_backends.py:84: in test_support_alternative_backends
    res = f(*args_xp)
        args_np    = [array(0., dtype=float32), array(0., dtype=float32), array(0., dtype=float32)]
        args_xp    = [tensor(0., device='cuda:0'), tensor(0., device='cuda:0'), tensor(0., device='cuda:0')]
        data       = data(...)
        dtype      = 'float32'
        dtype_np   = <class 'numpy.float32'>
        dtype_xp   = torch.float32
        elements   = {'allow_subnormal': False, 'max_value': 10.0, 'min_value': -10.0}
        f          = <function support_alternative_backends.<locals>.wrapped at 0x7f7450711d00>
        f_name     = 'betainc'
        f_name_n_args = ('betainc', 3)
        final_shape = ()
        mbs        = mutually_broadcastable_shapes(num_shapes=3)
        n_args     = 3
        ref        = array(nan, dtype=float32)
        shapes     = ((), (), ())
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:168: in wrapped
    return f(*args, **kwargs)
        args       = (tensor(0., device='cuda:0'), tensor(0., device='cuda:0'), tensor(0., device='cuda:0'))
        f          = <function get_array_special_func.<locals>.f at 0x7f742d037d80>
        f_name     = 'betainc'
        kwargs     = {}
        n_array_args = 3
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:47: in f
    array_args = [np.asarray(arg) for arg in array_args]
        _f         = <ufunc 'betainc'>
        _xp        = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        args       = (tensor(0., device='cuda:0'), tensor(0., device='cuda:0'), tensor(0., device='cuda:0'))
        array_args = (tensor(0., device='cuda:0'), tensor(0., device='cuda:0'), tensor(0., device='cuda:0'))
        kwargs     = {}
        n_array_args = 3
        other_args = ()
scipy/special/_support_alternative_backends.py:47: in <listcomp>
    array_args = [np.asarray(arg) for arg in array_args]
        .0         = <tuple_iterator object at 0x7f742c654400>
        arg        = tensor(0., device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1085: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor(0., device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/overrides.py:1619: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor(0., device='cuda:0'),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7f7452c95890>
        overloaded_args = [tensor(0., device='cuda:0')]
        public_api = <function Tensor.__array__ at 0x7f7501da5b20>
        relevant_args = (tensor(0., device='cuda:0'),)
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/utils/_device.py:78: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor(0., device='cuda:0'),)
        func       = <function Tensor.__array__ at 0x7f7501da5b20>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7f7452c95890>
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1087: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
E   Falsifying example: test_support_alternative_backends(
E       xp=<module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>,
E       f_name_n_args=('betainc', 3),
E       data=data(...),
E   )
E   Draw 1: BroadcastableShapes(input_shapes=((), (), ()), result_shape=())
E   Draw 2: 'float32'
E   Draw 3: array(0., dtype=float32)
E   Draw 4: array(0., dtype=float32)
E   Draw 5: array(0., dtype=float32)
E   
E   You can reproduce this example by temporarily adding @reproduce_failure('6.82.0', b'AXicY2AgHgAAACwAAQ==') as a decorator on your test case
        dtype      = None
        self       = tensor(0., device='cuda:0')
_____________________________________________________________________ test_support_alternative_backends[f_name_n_args4-torch] _____________________________________________________________________
[gw25] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/special/tests/test_support_alternative_backends.py:51: in test_support_alternative_backends
    @array_api_compatible
        f          = <function given.<locals>.run_test_as_given.<locals>.wrapped_test at 0x7f74302aea20>
        f_name_n_args = ('chdtr', 2)
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/special/tests/test_support_alternative_backends.py:84: in test_support_alternative_backends
    res = f(*args_xp)
        args_np    = [array(0., dtype=float32), array(0., dtype=float32)]
        args_xp    = [tensor(0., device='cuda:0'), tensor(0., device='cuda:0')]
        data       = data(...)
        dtype      = 'float32'
        dtype_np   = <class 'numpy.float32'>
        dtype_xp   = torch.float32
        elements   = {'allow_subnormal': False, 'max_value': 10.0, 'min_value': -10.0}
        f          = <function support_alternative_backends.<locals>.wrapped at 0x7f7450711bc0>
        f_name     = 'chdtr'
        f_name_n_args = ('chdtr', 2)
        final_shape = ()
        mbs        = mutually_broadcastable_shapes(num_shapes=2)
        n_args     = 2
        ref        = array(nan, dtype=float32)
        shapes     = ((), ())
        xp         = <module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:168: in wrapped
    return f(*args, **kwargs)
        args       = (tensor(0., device='cuda:0'), tensor(0., device='cuda:0'))
        f          = <function get_array_special_func.<locals>.f at 0x7f74513be980>
        f_name     = 'chdtr'
        kwargs     = {}
        n_array_args = 2
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
scipy/special/_support_alternative_backends.py:47: in f
    array_args = [np.asarray(arg) for arg in array_args]
        _f         = <ufunc 'chdtr'>
        _xp        = <module 'scipy._lib.array_api_compat.torch' from '/home/treddy/github_projects/scipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        args       = (tensor(0., device='cuda:0'), tensor(0., device='cuda:0'))
        array_args = (tensor(0., device='cuda:0'), tensor(0., device='cuda:0'))
        kwargs     = {}
        n_array_args = 2
        other_args = ()
scipy/special/_support_alternative_backends.py:47: in <listcomp>
    array_args = [np.asarray(arg) for arg in array_args]
        .0         = <tuple_iterator object at 0x7f741c9f3a90>
        arg        = tensor(0., device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1085: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor(0., device='cuda:0')
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/overrides.py:1619: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor(0., device='cuda:0'),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7f7452c95890>
        overloaded_args = [tensor(0., device='cuda:0')]
        public_api = <function Tensor.__array__ at 0x7f7501da5b20>
        relevant_args = (tensor(0., device='cuda:0'),)
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/utils/_device.py:78: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor(0., device='cuda:0'),)
        func       = <function Tensor.__array__ at 0x7f7501da5b20>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7f7452c95890>
        types      = (<class 'torch.Tensor'>,)
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/_tensor.py:1087: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
E   Falsifying example: test_support_alternative_backends(
E       xp=<module 'torch' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/torch/__init__.py'>,
E       f_name_n_args=('chdtr', 2),
E       data=data(...),
E   )
E   Draw 1: BroadcastableShapes(input_shapes=((), ()), result_shape=())
E   Draw 2: 'float32'
E   Draw 3: array(0., dtype=float32)
E   Draw 4: array(0., dtype=float32)
E   
E   You can reproduce this example by temporarily adding @reproduce_failure('6.82.0', b'AXicY2DABwAAHgAB') as a decorator on your test case
        dtype      = None
        self       = tensor(0., device='cuda:0')
_____________________________________________________________________ test_support_alternative_backends[f_name_n_args6-cupy] ______________________________________________________________________
[gw25] linux -- Python 3.11.2 /home/treddy/python_venvs/py_311_scipy_dev/bin/python
scipy/special/tests/test_support_alternative_backends.py:51: in test_support_alternative_backends
    @array_api_compatible
        f          = <function given.<locals>.run_test_as_given.<locals>.wrapped_test at 0x7f74302aea20>
        f_name_n_args = ('rel_entr', 2)
        xp         = <module 'cupy' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/cupy/__init__.py'>
scipy/special/tests/test_support_alternative_backends.py:92: in test_support_alternative_backends
    xp_assert_close(res, ref, rtol=eps**0.2, atol=eps*20,
        args_np    = [array(4.), array(2.22507386e-308)]
        args_xp    = [array(4.), array(2.22507386e-308)]
        data       = data(...)
        dtype      = 'float64'
        dtype_np   = <class 'numpy.float64'>
        dtype_xp   = <class 'numpy.float64'>
        elements   = {'allow_subnormal': False, 'max_value': 10.0, 'min_value': -10.0}
        eps        = 2.220446049250313e-16
        f          = <function support_alternative_backends.<locals>.wrapped at 0x7f7450711a80>
        f_name     = 'rel_entr'
        f_name_n_args = ('rel_entr', 2)
        final_shape = ()
        mbs        = mutually_broadcastable_shapes(num_shapes=2)
        n_args     = 2
        ref        = array(2839.13085157)
        res        = array(inf)
        shapes     = ((), ())
        xp         = <module 'cupy' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/cupy/__init__.py'>
/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/cupy/testing/_array.py:24: in assert_allclose
    numpy.testing.assert_allclose(
        actual     = array(inf)
        atol       = 4.440892098500626e-15
        desired    = array(2839.13085157)
        err_msg    = ''
        rtol       = 0.000740095979741405
        verbose    = True
../../../../../spack/opt/spack/linux-ubuntu22.04-skylake/gcc-11.3.0/python-3.11.2-4qncg2nqev5evxfmamde3e6rnb34b4ls/lib/python3.11/contextlib.py:81: in inner
    return func(*args, **kwds)
E   AssertionError: 
E   Not equal to tolerance rtol=0.000740096, atol=4.44089e-15
E   
E   x and y +inf location mismatch:
E    x: array(inf)
E    y: array(2839.130852)
E   Falsifying example: test_support_alternative_backends(
E       xp=<module 'cupy' from '/home/treddy/python_venvs/py_311_scipy_dev/lib/python3.11/site-packages/cupy/__init__.py'>,
E       f_name_n_args=('rel_entr', 2),
E       data=data(...),
E   )
E   Draw 1: BroadcastableShapes(input_shapes=((), ()), result_shape=())
E   Draw 2: 'float64'
E   Draw 3: array(4.)
E   Draw 4: array(2.22507386e-308)
E   
E   You can reproduce this example by temporarily adding @reproduce_failure('6.82.0', b'AXicY2AAAkZGBgRgAZP/L0B4AA41AdY=') as a decorator on your test case
        args       = (<function assert_allclose.<locals>.compare at 0x7f741c22b1a0>, array(inf), array(2839.13085157))
        func       = <function assert_array_compare at 0x7f7502fbdb20>
        kwds       = {'equal_nan': True, 'err_msg': '', 'header': 'Not equal to tolerance rtol=0.000740096, atol=4.44089e-15', 'verbose': True}
        self       = <contextlib._GeneratorContextManager object at 0x7f7502fcd350>
===================================================================================== short test summary info =====================================================================================
FAILED scipy/special/tests/test_support_alternative_backends.py::test_support_alternative_backends[f_name_n_args0-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/special/tests/test_support_alternative_backends.py::test_support_alternative_backends[f_name_n_args0-cupy] - TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_pearson[torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_monotonicity[tippett-single-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_tippett[torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_monotonicity[mudholkar_george-single-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_result[mudholkar_george-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_result[tippett-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_result[pearson-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_monotonicity[tippett-all-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/special/tests/test_support_alternative_backends.py::test_support_alternative_backends[f_name_n_args1-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_monotonicity[pearson-single-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/special/tests/test_support_alternative_backends.py::test_support_alternative_backends[f_name_n_args1-cupy] - TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_mudholkar_george[cupy] - TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_monotonicity[mudholkar_george-all-cupy] - TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_monotonicity[pearson-random-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_result[mudholkar_george-cupy] - TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_mudholkar_george_equal_fisher_pearson_average[torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_monotonicity[pearson-all-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_monotonicity[mudholkar_george-single-cupy] - TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_mudholkar_george_equal_fisher_pearson_average[cupy] - TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_mudholkar_george[torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_monotonicity[mudholkar_george-all-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_monotonicity[mudholkar_george-random-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_monotonicity[tippett-random-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/stats/tests/test_stats.py::TestCombinePvalues::test_monotonicity[mudholkar_george-random-cupy] - TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly.
FAILED scipy/special/tests/test_support_alternative_backends.py::test_support_alternative_backends[f_name_n_args2-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/special/tests/test_support_alternative_backends.py::test_support_alternative_backends[f_name_n_args4-torch] - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED scipy/special/tests/test_support_alternative_backends.py::test_support_alternative_backends[f_name_n_args6-cupy] - AssertionError: 
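Regarding the `rel_entr` failure on CuPy: the inputs Hypothesis drew (x = 4, y = 2.22507386e-308, the smallest normal double) are exactly the kind that distinguish `x*log(x/y)` from `x*(log(x) - log(y))`. A quick sketch of the likely mechanism (my own reconstruction for illustration, not a claim about CuPy's actual kernel):

```python
import numpy as np

x = np.float64(4.0)
y = np.float64(2.2250738585072014e-308)  # smallest normal double

# x / y = 2**1024, which overflows the double range, so the
# ratio form produces inf even though the true result is finite.
with np.errstate(over="ignore"):
    naive = x * np.log(x / y)

# Taking logs separately avoids the overflow and matches the
# NumPy reference value from the traceback (~2839.1309).
stable = x * (np.log(x) - np.log(y))

print(naive, stable)  # -> inf 2839.130851...
```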

The output was too long to paste in its entirety, so I kept a few representative tracebacks above the short test summary.

Metadata


Assignees

No one assigned

    Labels

    array types: Items related to array API support and input array validation (see gh-18286)
    maintenance: Items related to regular maintenance tasks
