Labels: defect (a clear bug or issue that prevents SciPy from being installed or used as expected), scipy.optimize
Description
Hi everybody,
I am quite sure there is a bug with SLSQP. Here is a minimal example:
import numpy as np
import scipy.optimize as scopt

def loss(x, r):
    return -np.dot(x, r)

def opt(r):
    r = np.array(r)
    def cons_fun(x):
        return sum(x)
    cons = [{'type': 'eq', 'fun': cons_fun}]
    x_init = [0., 0.]
    x_opt = scopt.minimize(loss, x_init, method='SLSQP',
                           constraints=cons, args=(r,))
    return x_opt
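(For completeness, here is a sketch of the same call with explicit gradients, assuming the standard `jac` keyword of `minimize` and the `'jac'` key of the constraint dict; as mentioned further down, providing Jacobians does not change the behaviour.)

import numpy as np
import scipy.optimize as scopt

def loss(x, r):
    return -np.dot(x, r)

def loss_jac(x, r):
    # gradient of -x.r with respect to x
    return -np.asarray(r, dtype=float)

cons = [{'type': 'eq',
         'fun': lambda x: np.sum(x),
         'jac': lambda x: np.ones_like(x)}]  # gradient of sum(x)

res = scopt.minimize(loss, [0., 0.], jac=loss_jac, method='SLSQP',
                     constraints=cons, args=(np.array([0., 0.001]),))
print(res.status, res.message, res.x)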
Then for instance with r = [0., 0.01], opt(r) returns:
x: array([ -2.16178351e+08, 2.16178351e+08])
fun: -2161783.5125566297
message: 'Singular matrix C in LSQ subproblem'
njev: 17
nit: 17
nfev: 68
jac: array([ 0., 0., 0.])
status: 6
success: False
which is fine, because the solution is to have the first component go to minus infinity and the second one to plus infinity (the objective is unbounded on the feasible set). But with r = [0., 0.001], it returns:
x: array([ 0., 0.])
fun: -0
message: 'Optimization terminated successfully.'
njev: 1
nit: 1
nfev: 4
jac: array([ 0. , -0.001, 0. ])
status: 0
success: True
And it should have the same solution (or the same absence of a solution). Putting a minimum or a maximum on the components of x does not change anything (that is, the first case finds a good solution and the second one returns the initial vector). Providing Jacobians does not help either. It does look like a bug to me...
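My guess (an assumption on my part, not verified against the Fortran source) is that SLSQP's stopping test compares the norm of the search direction against sqrt(ftol), with the default ftol = 1e-6: starting from x_init = [0., 0.], the projected step for r = [0., 0.001] has norm about 7e-4 < 1e-3, so the very first iterate is accepted as converged, while for r = [0., 0.01] the step norm of about 7e-3 fails the test and the iterates diverge until the LSQ subproblem becomes singular. If that reading is right, tightening ftol should make the small-r case diverge too. A minimal sketch to check this, using the documented 'ftol' option of the SLSQP method:

import numpy as np
import scipy.optimize as scopt

def loss(x, r):
    return -np.dot(x, r)

cons = [{'type': 'eq', 'fun': lambda x: np.sum(x)}]

for r in ([0., 0.01], [0., 0.001]):
    for ftol in (1e-6, 1e-12):  # default vs. tightened tolerance
        res = scopt.minimize(loss, [0., 0.], method='SLSQP',
                             constraints=cons, args=(np.array(r),),
                             options={'ftol': ftol})
        print(r, ftol, res.status, res.message)

If the guess above holds, the r = [0., 0.001] run with ftol = 1e-12 should also end with 'Singular matrix C in LSQ subproblem' instead of reporting success.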