Fallback to fixed precision if prec change is infinite #792
Conversation
This will leave handling of special numbers to fewer functions, e.g. to hypercomb(). Closes mpmath#520 Closes mpmath#522
Not sure if this is worth it; it looks like a hack. And, clearly, it's a compatibility break. On the other hand, this allows passing special inputs through to low-level functions (like hypercomb()) to handle, instead of blowing up on statements like ctx.prec += M.
def _set_prec(ctx, n):
    if not ctx.isfinite(n):
        return
    ctx._prec = ctx._prec_rounding[0] = max(1, int(n))
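For illustration, a minimal sketch of the behaviour this buys, assuming the patched setter above (in mpmath, ctx.mag() of an infinite argument is +inf):

import mpmath

ctx = mpmath.mp
old_prec = ctx.prec
M = ctx.mag(mpmath.inf)       # +inf for an infinite argument
ctx.prec += M                 # a non-finite precision is now silently ignored
assert ctx.prec == old_prec   # instead of raising "cannot convert inf or nan to int"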
This seems like an odd fix: just ignore the precision if it is mpmath.inf?
Is it valid to set ctx.prec = mpmath.inf?
If it is valid then surely it should be used for something rather than ignored. If it is not valid then surely it should raise an error rather than being ignored. Either way, ignoring does not seem like the right thing to do.
I would have thought that the right way to fix this is here, by checking if M is infinite before doing ctx.prec += M:
mpmath/mpmath/functions/bessel.py, lines 159 to 172 in dcc591c:
M = ctx.mag(z)
if M < 1:
    # Represent as limit definition
    def h(n):
        r = (z/2)**2
        T1 = [z, 2], [-n, n-1], [n], [], [], [1-n], r
        T2 = [z, 2], [n, -n-1], [-n], [], [], [1+n], r
        return T1, T2
# We could use the limit definition always, but it leads
# to very bad cancellation (of exponentially large terms)
# for large real z
# Instead represent in terms of 2F0
else:
    ctx.prec += M
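A sketch of how that suggested guard might look at this call site (an illustration of the suggestion, not a patch from this PR):

M = ctx.mag(z)
if M < 1:
    # ... limit-definition branch unchanged ...
    pass
else:
    if ctx.isfinite(M):
        # z has finite magnitude: extra precision compensates for the
        # cancellation of exponentially large terms.
        ctx.prec += M
    # otherwise M is inf or nan; leave the precision alone and let
    # hypercomb() see the special value at the current precision.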
> This seems like an odd fix: just ignore the precision if it is mpmath.inf?

Yes, but see: #792 (comment)

> Is it valid to set ctx.prec = mpmath.inf?

Yes, that was legal before.

> I would have thought that the right way to fix this is here, by checking if M is infinite before doing ctx.prec += M:

That was my original fix. The problem is that this would be required in many functions that rely on hypercomb(). With this approach we instead go one level deeper and rely on that function to handle special values for both the mp and fp contexts.

BTW, to work around the SymPy failure we could use ctx.isinf instead.
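For reference, ctx.isfinite and ctx.isinf differ only for nan, so that swap changes which special precision values fall through to int(n). A quick illustration (not code from the PR):

import mpmath

ctx = mpmath.mp
# isfinite rejects both inf and nan; isinf rejects only inf, so with
# isinf a nan precision would still reach int(n) and raise as before.
print(ctx.isfinite(mpmath.inf), ctx.isinf(mpmath.inf))   # False True
print(ctx.isfinite(mpmath.nan), ctx.isinf(mpmath.nan))   # False False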
In mpmath 1.3.0 it gives an error:

>>> import mpmath
>>> mpmath.mp.prec = mpmath.inf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/enojb/work/dev/mpmath/mpmath/ctx_mp_python.py", line 613, in _set_prec
    ctx._prec = ctx._prec_rounding[0] = max(1, int(n))
                                               ^^^^^^
  File "/Users/enojb/work/dev/mpmath/mpmath/ctx_mp_python.py", line 143, in __int__
    def __int__(s): return int(to_int(s._mpf_))
                               ^^^^^^^^^^^^^^^
  File "/Users/enojb/work/dev/mpmath/mpmath/libmp/libmpf.py", line 351, in to_int
    raise ValueError("cannot convert inf or nan to int")
ValueError: cannot convert inf or nan to int

After the change here it silently does nothing:

>>> import mpmath
>>> mpmath.mp.prec = mpmath.inf
>>> mpmath.mp.prec
53
Oops, indeed. That was the reason why infinities were handled incorrectly.
Probably you are right. But I dislike the approach "let's add workarounds to every hypercomb-based function". Maybe there are other options? What if we make this legal? prec=0 is a special case for "exact" computations (without rounding). See the …
Maybe there could be some sort of helper function for calling hypercomb with precision increased by the magnitude of z:

def hypercomb_prec_magz(ctx, ..., z):
    magz = ctx.mag(z)
    if ctx.isfinite(magz):
        # for finite z we need magz extra precision in hypercomb
        with ctx.extraprec(magz):
            return ctx.hypercomb(..., z)
    else:
        # z is infinite so let hypercomb handle special values.
        # increased precision is not needed in this case.
        return ctx.hypercomb(..., z)

I don't know how many cases there are like that. Alternatively:

@contextmanager
def extraprec_inf(ctx, extra):
    """Increase precision if extra is finite."""
    if ctx.isfinite(extra):
        with ctx.extraprec(extra):
            yield
    else:
        # This branch is for handling special values where something is infinite.
        # No extra precision is needed for that case.
        yield
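For illustration, a hypothetical call site for the second helper, assuming extraprec_inf were added as a context method (besselk_like is an illustrative name, not mpmath's actual besselk):

def besselk_like(ctx, n, z):
    # Hypothetical hypercomb-based function; the terms are the
    # limit-definition ones from the bessel.py excerpt above.
    def h(n):
        r = (z/2)**2
        T1 = [z, 2], [-n, n-1], [n], [], [], [1-n], r
        T2 = [z, 2], [n, -n-1], [-n], [], [], [1+n], r
        return T1, T2
    # Extra precision is applied only when mag(z) is finite; for inf or
    # nan the block is a no-op and hypercomb sees the special value.
    with ctx.extraprec_inf(ctx.mag(z)):
        return ctx.hypercomb(h, [n])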
Temporary fix for SymPy, regression from #792
But I think the same story applies to other functions (hyper/hyp2f1/etc.).
Tests pass with:

diff --git a/mpmath/ctx_mp_python.py b/mpmath/ctx_mp_python.py
index 335cd58..9d4f120 100644
--- a/mpmath/ctx_mp_python.py
+++ b/mpmath/ctx_mp_python.py
@@ -774,8 +774,8 @@ def default(ctx):
         ctx.trap_complex = False

     def _set_prec(ctx, n):
-        if not ctx.isfinite(n):
-            return
+        if ctx.isinf(n):
+            n = 0
         ctx._prec = ctx._prec_rounding[0] = max(1, int(n))
         ctx._dps = prec_to_dps(n)
See #801