For arguments with X^2 + Y^2 close to 1, clog and clog10 avoid large
errors from log(hypot) by computing X^2 + Y^2 - 1 in a way that limits
cancellation error and then using log1p.
However, the thresholds for using that approach still result in log
being used on arguments as large as sqrt(13/16) > 0.9, leading to
significant errors, in some cases above the 9ulp maximum allowed in
glibc libm. This patch arranges for the approach using log1p to be
used in any cases where |X|, |Y| < 1 and X^2 + Y^2 >= 0.5 (with the
existing allowance for cases where one of X and Y is very small),
adjusting the __x2y2m1 functions to work with the wider range of
inputs. This way, log only gets used on arguments below sqrt(1/2) (or
substantially above 1), where the error involved is much less.
Tested for x86_64, x86, mips64 and powerpc. For the ulps regeneration
I removed the existing clog and clog10 ulps before regenerating to
allow any reduced ulps to appear. Tests added include those found by
random test generation to produce large ulps either before or after
the patch, and some found by trying inputs close to the (0.75, 0.5)
threshold where the potential errors from using log are largest.
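
A minimal sketch of the idea, assuming |X|, |Y| < 1 and
X^2 + Y^2 >= 0.5; the helper name x2y2m1_sketch is hypothetical, and
the real __x2y2m1 implementations obtain the exact product errors and
perform a more careful multi-term summation than the simple ordering
shown here:

  #include <math.h>

  /* Sketch only: compute x*x + y*y - 1 while limiting cancellation,
     treating -1 as an ordinary term of the sum.  fma recovers the
     rounding error of each product exactly.  */
  static double
  x2y2m1_sketch (double x, double y)
  {
    double x2 = x * x;
    double dx2 = fma (x, x, -x2);   /* error of x * x */
    double y2 = y * y;
    double dy2 = fma (y, y, -y2);   /* error of y * y */
    /* Combine the large, cancelling terms first, then fold in the
       tiny error terms.  */
    return ((x2 - 1.0) + y2) + (dx2 + dy2);
  }

The real part of clog for such arguments is then
log1p (x2y2m1 (X, Y)) / 2, so log itself is only needed for arguments
below sqrt(1/2) or substantially above 1, where its error
contribution is small.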
[BZ #19016]
* sysdeps/generic/math_private.h (__x2y2m1f): Update comment to
allow more cases with X^2 + Y^2 >= 0.5.
* sysdeps/ieee754/dbl-64/x2y2m1.c (__x2y2m1): Likewise. Add -1 as
normal element in sum instead of special-casing based on values of
arguments.
* sysdeps/ieee754/dbl-64/x2y2m1f.c (__x2y2m1f): Update comment.
* sysdeps/ieee754/ldbl-128/x2y2m1l.c (__x2y2m1l): Likewise. Add
-1 as normal element in sum instead of special-casing based on
values of arguments.
* sysdeps/ieee754/ldbl-128ibm/x2y2m1l.c (__x2y2m1l): Likewise.
* sysdeps/ieee754/ldbl-96/x2y2m1.c [FLT_EVAL_METHOD != 0]
(__x2y2m1): Update comment.
* sysdeps/ieee754/ldbl-96/x2y2m1l.c (__x2y2m1l): Likewise. Add -1
as normal element in sum instead of special-casing based on values
of arguments.
* math/s_clog.c (__clog): Handle more cases using log1p without
hypot.
* math/s_clog10.c (__clog10): Likewise.
* math/s_clog10f.c (__clog10f): Likewise.
* math/s_clog10l.c (__clog10l): Likewise.
* math/s_clogf.c (__clogf): Likewise.
* math/s_clogl.c (__clogl): Likewise.
* math/auto-libm-test-in: Add more tests of clog and clog10.
* math/auto-libm-test-out: Regenerated.
* sysdeps/i386/fpu/libm-test-ulps: Update.
* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
Various floating-point functions have code to force underflow
exceptions when a tiny result may have been computed in a way that
did not itself raise them, even though the result is inexact. This
typically uses math_force_eval to ensure that the underflowing
expression is evaluated, but some places use volatile instead.
This patch refactors such code to use three new macros
math_check_force_underflow, math_check_force_underflow_nonneg and
math_check_force_underflow_complex (which in turn use
math_force_eval). In the limited number of cases not suited to a
simple conversion to these macros, existing uses of volatile are
changed to use math_force_eval instead. The converted code does not
always execute exactly the same sequence of operations as the original
code, but the overall effects should be the same.
Tested for x86_64, x86, mips64 and powerpc.
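
As a rough, double-only sketch of the mechanism (the sketch names are
hypothetical; the actual macros are type-generic, using the fabs_tg
and min_of_type helpers listed below, and the real math_force_eval
lives in math_private.h):

  #include <float.h>
  #include <math.h>

  /* Compiler barrier in the spirit of math_force_eval: make sure EXPR
     is really evaluated for its floating-point side effects.  */
  #define force_eval_sketch(expr)                                     \
    do                                                                \
      {                                                               \
        __typeof (expr) fe_tmp = (expr);                              \
        __asm__ __volatile__ ("" : : "m" (fe_tmp));                   \
      }                                                               \
    while (0)

  /* If the computed value is tiny, evaluate an expression that
     underflows so the exception is raised even though the value
     itself may have been produced without one.  */
  #define check_force_underflow_sketch(x)                             \
    do                                                                \
      {                                                               \
        double cfu_tmp = (x);                                         \
        if (fabs (cfu_tmp) < DBL_MIN)                                 \
          force_eval_sketch (cfu_tmp * cfu_tmp);                      \
      }                                                               \
    while (0)

The _nonneg variant can omit the fabs because the result is known to
be non-negative, and the _complex variant applies the check to both
the real and imaginary parts.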
* sysdeps/generic/math_private.h (fabs_tg): New macro.
(min_of_type): Likewise.
(math_check_force_underflow): Likewise.
(math_check_force_underflow_nonneg): Likewise.
(math_check_force_underflow_complex): Likewise.
* math/e_exp2l.c (__ieee754_exp2l): Use
math_check_force_underflow_nonneg.
* math/k_casinh.c (__kernel_casinh): Likewise.
* math/k_casinhf.c (__kernel_casinhf): Likewise.
* math/k_casinhl.c (__kernel_casinhl): Likewise.
* math/s_catan.c (__catan): Use
math_check_force_underflow_complex.
* math/s_catanf.c (__catanf): Likewise.
* math/s_catanh.c (__catanh): Likewise.
* math/s_catanhf.c (__catanhf): Likewise.
* math/s_catanhl.c (__catanhl): Likewise.
* math/s_catanl.c (__catanl): Likewise.
* math/s_ccosh.c (__ccosh): Likewise.
* math/s_ccoshf.c (__ccoshf): Likewise.
* math/s_ccoshl.c (__ccoshl): Likewise.
* math/s_cexp.c (__cexp): Likewise.
* math/s_cexpf.c (__cexpf): Likewise.
* math/s_cexpl.c (__cexpl): Likewise.
* math/s_clog.c (__clog): Use math_check_force_underflow_nonneg.
* math/s_clog10.c (__clog10): Likewise.
* math/s_clog10f.c (__clog10f): Likewise.
* math/s_clog10l.c (__clog10l): Likewise.
* math/s_clogf.c (__clogf): Likewise.
* math/s_clogl.c (__clogl): Likewise.
* math/s_csin.c (__csin): Use math_check_force_underflow_complex.
* math/s_csinf.c (__csinf): Likewise.
* math/s_csinh.c (__csinh): Likewise.
* math/s_csinhf.c (__csinhf): Likewise.
* math/s_csinhl.c (__csinhl): Likewise.
* math/s_csinl.c (__csinl): Likewise.
* math/s_csqrt.c (__csqrt): Use math_check_force_underflow.
* math/s_csqrtf.c (__csqrtf): Likewise.
* math/s_csqrtl.c (__csqrtl): Likewise.
* math/s_ctan.c (__ctan): Use math_check_force_underflow_complex.
* math/s_ctanf.c (__ctanf): Likewise.
* math/s_ctanh.c (__ctanh): Likewise.
* math/s_ctanhf.c (__ctanhf): Likewise.
* math/s_ctanhl.c (__ctanhl): Likewise.
* math/s_ctanl.c (__ctanl): Likewise.
* stdlib/strtod_l.c (round_and_return): Use math_force_eval
instead of volatile.
* sysdeps/ieee754/dbl-64/e_asin.c (__ieee754_asin): Use
math_check_force_underflow.
* sysdeps/ieee754/dbl-64/e_atanh.c (__ieee754_atanh): Likewise.
* sysdeps/ieee754/dbl-64/e_exp.c (__ieee754_exp): Do not use
volatile when forcing underflow.
* sysdeps/ieee754/dbl-64/e_exp2.c (__ieee754_exp2): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/dbl-64/e_gamma_r.c (__ieee754_gamma_r):
Likewise.
* sysdeps/ieee754/dbl-64/e_j1.c (__ieee754_j1): Use
math_check_force_underflow.
* sysdeps/ieee754/dbl-64/e_jn.c (__ieee754_jn): Likewise.
* sysdeps/ieee754/dbl-64/e_sinh.c (__ieee754_sinh): Likewise.
* sysdeps/ieee754/dbl-64/s_asinh.c (__asinh): Likewise.
* sysdeps/ieee754/dbl-64/s_atan.c (atan): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/dbl-64/s_erf.c (__erf): Use
math_check_force_underflow.
* sysdeps/ieee754/dbl-64/s_expm1.c (__expm1): Likewise.
* sysdeps/ieee754/dbl-64/s_fma.c (__fma): Use math_force_eval
instead of volatile.
* sysdeps/ieee754/dbl-64/s_log1p.c (__log1p): Use
math_check_force_underflow.
* sysdeps/ieee754/dbl-64/s_sin.c (__sin): Likewise.
* sysdeps/ieee754/dbl-64/s_tan.c (tan): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/dbl-64/s_tanh.c (__tanh): Use
math_check_force_underflow.
* sysdeps/ieee754/flt-32/e_asinf.c (__ieee754_asinf): Likewise.
* sysdeps/ieee754/flt-32/e_atanhf.c (__ieee754_atanhf): Likewise.
* sysdeps/ieee754/flt-32/e_exp2f.c (__ieee754_exp2f): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/flt-32/e_gammaf_r.c (__ieee754_gammaf_r):
Likewise.
* sysdeps/ieee754/flt-32/e_j1f.c (__ieee754_j1f): Use
math_check_force_underflow.
* sysdeps/ieee754/flt-32/e_jnf.c (__ieee754_jnf): Likewise.
* sysdeps/ieee754/flt-32/e_sinhf.c (__ieee754_sinhf): Likewise.
* sysdeps/ieee754/flt-32/k_sinf.c (__kernel_sinf): Likewise.
* sysdeps/ieee754/flt-32/k_tanf.c (__kernel_tanf): Likewise.
* sysdeps/ieee754/flt-32/s_asinhf.c (__asinhf): Likewise.
* sysdeps/ieee754/flt-32/s_atanf.c (__atanf): Likewise.
* sysdeps/ieee754/flt-32/s_erff.c (__erff): Likewise.
* sysdeps/ieee754/flt-32/s_expm1f.c (__expm1f): Likewise.
* sysdeps/ieee754/flt-32/s_log1pf.c (__log1pf): Likewise.
* sysdeps/ieee754/flt-32/s_tanhf.c (__tanhf): Likewise.
* sysdeps/ieee754/ldbl-128/e_asinl.c (__ieee754_asinl): Likewise.
* sysdeps/ieee754/ldbl-128/e_atanhl.c (__ieee754_atanhl):
Likewise.
* sysdeps/ieee754/ldbl-128/e_expl.c (__ieee754_expl): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/ldbl-128/e_gammal_r.c (__ieee754_gammal_r):
Likewise.
* sysdeps/ieee754/ldbl-128/e_j1l.c (__ieee754_j1l): Use
math_check_force_underflow.
* sysdeps/ieee754/ldbl-128/e_jnl.c (__ieee754_jnl): Likewise.
* sysdeps/ieee754/ldbl-128/e_sinhl.c (__ieee754_sinhl): Likewise.
* sysdeps/ieee754/ldbl-128/k_sincosl.c (__kernel_sincosl):
Likewise.
* sysdeps/ieee754/ldbl-128/k_sinl.c (__kernel_sinl): Likewise.
* sysdeps/ieee754/ldbl-128/k_tanl.c (__kernel_tanl): Likewise.
* sysdeps/ieee754/ldbl-128/s_asinhl.c (__asinhl): Likewise.
* sysdeps/ieee754/ldbl-128/s_atanl.c (__atanl): Likewise.
* sysdeps/ieee754/ldbl-128/s_erfl.c (__erfl): Likewise.
* sysdeps/ieee754/ldbl-128/s_expm1l.c (__expm1l): Likewise.
* sysdeps/ieee754/ldbl-128/s_fmal.c (__fmal): Use math_force_eval
instead of volatile.
* sysdeps/ieee754/ldbl-128/s_log1pl.c (__log1pl): Use
math_check_force_underflow.
* sysdeps/ieee754/ldbl-128/s_tanhl.c (__tanhl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_asinl.c (__ieee754_asinl): Use
math_check_force_underflow.
* sysdeps/ieee754/ldbl-128ibm/e_atanhl.c (__ieee754_atanhl):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (__ieee754_gammal_r):
Use math_check_force_underflow_nonneg.
* sysdeps/ieee754/ldbl-128ibm/e_jnl.c (__ieee754_jnl): Use
math_check_force_underflow.
* sysdeps/ieee754/ldbl-128ibm/e_sinhl.c (__ieee754_sinhl):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/k_sincosl.c (__kernel_sincosl):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/k_sinl.c (__kernel_sinl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/k_tanl.c (__kernel_tanl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_asinhl.c (__asinhl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_atanl.c (__atanl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_erfl.c (__erfl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_tanhl.c (__tanhl): Likewise.
* sysdeps/ieee754/ldbl-96/e_asinl.c (__ieee754_asinl): Likewise.
* sysdeps/ieee754/ldbl-96/e_atanhl.c (__ieee754_atanhl): Likewise.
* sysdeps/ieee754/ldbl-96/e_gammal_r.c (__ieee754_gammal_r): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/ldbl-96/e_j1l.c (__ieee754_j1l): Use
math_check_force_underflow.
* sysdeps/ieee754/ldbl-96/e_jnl.c (__ieee754_jnl): Likewise.
* sysdeps/ieee754/ldbl-96/e_sinhl.c (__ieee754_sinhl): Likewise.
* sysdeps/ieee754/ldbl-96/k_sinl.c (__kernel_sinl): Likewise.
* sysdeps/ieee754/ldbl-96/k_tanl.c (__kernel_tanl): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/ldbl-96/s_asinhl.c (__asinhl): Use
math_check_force_underflow.
* sysdeps/ieee754/ldbl-96/s_erfl.c (__erfl): Likewise.
* sysdeps/ieee754/ldbl-96/s_fmal.c (__fmal): Use math_force_eval
instead of volatile.
* sysdeps/ieee754/ldbl-96/s_tanhl.c (__tanhl): Use
math_check_force_underflow.
This patch fixes bug 16789: an incorrectly signed zero real part in
the results of clog and clog10 in round-downward mode, arising from
that real part being computed as 0 - 0. To ensure that an underflow exception
occurred, the code used an underflowing value (the next term in the
series for log1p) in arithmetic computing the real part of the result,
yielding the problematic 0 - 0 computation in some cases even when the
mathematical result would be small but positive. The patch changes
this code to use the math_force_eval approach to ensuring that an
underflowing computation actually occurs. Tests of clog and clog10
are enabled in all rounding modes.
Tested for x86_64 and x86; ulps updated accordingly.
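
As an illustration of the pattern (hypothetical names, not the actual
s_clog.c code), consider a case where the real part is log1p (t) / 2
for a tiny t, so its series is t/2 - t*t/4 + ...:

  #ifndef math_force_eval         /* outside glibc: minimal stand-in */
  # define math_force_eval(x) \
    ({ __typeof (x) fe_x = (x); __asm__ __volatile__ ("" : : "m" (fe_x)); })
  #endif

  static double
  tiny_real_part_sketch (double t)
  {
    /* Old: subtract the underflowing next series term purely so that
       the underflow exception is raised; when both terms round to
       zero, FE_DOWNWARD computes 0 - 0 = -0 although the exact value
       is small and positive.  */
    /* return t / 2.0 - t * t / 4.0;  */

    /* New: force the underflowing computation separately and keep it
       out of the returned result.  */
    math_force_eval (t * t);
    return t / 2.0;
  }

Forcing the evaluation still raises underflow, while the returned
zero keeps the sign of the exact, small positive result.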
[BZ #16789]
* math/s_clog.c (__clog): Use math_force_eval to ensure underflow
instead of using underflowing value in computing result.
* math/s_clog10.c (__clog10): Likewise.
* math/s_clog10f.c (__clog10f): Likewise.
* math/s_clog10l.c (__clog10l): Likewise.
* math/s_clogf.c (__clogf): Likewise.
* math/s_clogl.c (__clogl): Likewise.
* math/libm-test.inc (clog_test): Use ALL_RM_TEST.
(clog10_test): Likewise.
* sysdeps/i386/fpu/libm-test-ulps: Update.
* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.