author     Joseph Myers <joseph@codesourcery.com>   2018-09-27 12:35:23 +0000
committer  Joseph Myers <joseph@codesourcery.com>   2018-09-27 12:35:23 +0000
commit     9755bc4686d8cd6a0e9539040b903e9e9291c319
tree       05d7d23577087ebb8c4d9b8122f604c06b1513d3   /sysdeps/x86/fpu
parent     f841c97e515a1673485a2b12b3c280073d737890
Use round functions not __round functions in glibc libm.
Continuing the move to use, within libm, public names for libm
functions that can be inlined as built-in functions on many
architectures, this patch changes calls to __round functions into
calls to the corresponding round names, with asm redirection to
__round when the calls are not inlined.
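The mechanism can be illustrated with a small standalone sketch (simplified; the real redirections in include/math.h are generated by the MATH_REDIRECT macro family, and the caller below is a hypothetical example, not glibc source):

/* Simplified sketch of the asm-redirection idea, not the literal
   include/math.h code.  Inside libm the public name is redeclared
   with an asm label naming the internal entry point, so any call the
   compiler does not inline binds directly to __round instead of
   going through the PLT.  */
#include <math.h>

#ifndef NO_MATH_REDIRECT
extern double round (double) __asm__ ("__round");
#endif

/* Hypothetical caller: on many targets GCC expands round as a
   built-in (a single rounding instruction); otherwise the emitted
   call targets the __round symbol declared above.  */
double
example_round_to_int (double x)
{
  return round (x);
}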
An additional complication arises in
sysdeps/ieee754/ldbl-128ibm/e_expl.c, where a call to roundl, with the
result converted to int, gets converted by the compiler to call
lroundl in the case of 32-bit long, resulting in localplt test
failures. It's logically correct to let the compiler make such an
optimization; an appropriate asm redirection of lroundl to __lroundl
is thus added to that file (it's not needed anywhere else).
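The fix follows the same asm-redirection pattern; a rough standalone sketch (the helper name here is hypothetical and stands in for the actual e_expl.c code):

/* Sketch of the e_expl.c complication, not the literal source.  When
   long is 32 bits, GCC may fold "(int) roundl (x)" into a call to
   lroundl, so that name must also bind to the internal symbol rather
   than going through the PLT.  */
#include <math.h>

long int lroundl (long double) __asm__ ("__lroundl");

/* Hypothetical helper standing in for the real computation.  */
int
example_integer_part (long double x)
{
  /* May be compiled as lroundl (x) when long is 32-bit.  */
  return (int) roundl (x);
}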
Tested for x86_64, and with build-many-glibcs.py.
* include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ &&
__FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (round): Redirect
using MATH_REDIRECT.
* sysdeps/aarch64/fpu/s_round.c: Define NO_MATH_REDIRECT before
header inclusion.
* sysdeps/aarch64/fpu/s_roundf.c: Likewise.
* sysdeps/ieee754/dbl-64/s_round.c: Likewise.
* sysdeps/ieee754/dbl-64/wordsize-64/s_round.c: Likewise.
* sysdeps/ieee754/float128/s_roundf128.c: Likewise.
* sysdeps/ieee754/flt-32/s_roundf.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_roundl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_roundl.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_round.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_roundf.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_round.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_roundf.c: Likewise.
* sysdeps/riscv/rv64/rvd/s_round.c: Likewise.
* sysdeps/riscv/rvf/s_roundf.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_roundl.c: Likewise.
(round): Redirect to __round.
(__roundl): Call round instead of __round.
* sysdeps/powerpc/fpu/math_private.h [_ARCH_PWR5X] (__round):
Remove macro.
[_ARCH_PWR5X] (__roundf): Likewise.
* sysdeps/ieee754/dbl-64/e_gamma_r.c (gamma_positive): Use round
functions instead of __round variants.
* sysdeps/ieee754/flt-32/e_gammaf_r.c (gammaf_positive): Likewise.
* sysdeps/ieee754/ldbl-128/e_gammal_r.c (gammal_positive):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (gammal_positive):
Likewise.
* sysdeps/ieee754/ldbl-96/e_gammal_r.c (gammal_positive):
Likewise.
* sysdeps/x86/fpu/powl_helper.c (__powl_helper): Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_expl.c (lroundl): Redirect to
__lroundl.
(__ieee754_expl): Call roundl instead of __roundl.
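Two recurring patterns drive the file-by-file changes above. The snippet below is a simplified, self-contained sketch of both; the generic round body and the caller are illustrative only, not the actual glibc implementations, which use bit manipulation or a single hardware instruction:

/* Pattern used by the s_round* implementation files: suppress the
   redirect before any header inclusion, since these files define the
   function themselves.  */
#define NO_MATH_REDIRECT
#include <math.h>

double
__round (double x)
{
  /* Illustrative generic implementation (round half away from zero).
     x - t is exact because the fractional part of a double is always
     exactly representable.  */
  double t = trunc (x);
  if (fabs (x - t) >= 0.5)
    t += copysign (1.0, x);
  return t;
}
/* glibc then exports the public name with
   libm_alias_double (__round, round).  */

/* Pattern used by callers such as gamma_positive: call the public
   name, which compilers can expand as a built-in, instead of the
   internal __round name.  */
double
example_caller (double x)
{
  return round (x);  /* previously __round (x) */
}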
Diffstat (limited to 'sysdeps/x86/fpu')
-rw-r--r--   sysdeps/x86/fpu/powl_helper.c   |   2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sysdeps/x86/fpu/powl_helper.c b/sysdeps/x86/fpu/powl_helper.c
index 651eedd792..469fd0fb18 100644
--- a/sysdeps/x86/fpu/powl_helper.c
+++ b/sysdeps/x86/fpu/powl_helper.c
@@ -216,7 +216,7 @@ __powl_helper (long double x, long double y)
 
   /* Split the base-2 logarithm of the result into integer and
      fractional parts.  */
-  long double log2_res_int = __roundl (log2_res_hi);
+  long double log2_res_int = roundl (log2_res_hi);
   long double log2_res_frac = log2_res_hi - log2_res_int + log2_res_lo;
   /* If the integer part is very large, the computed fractional part
      may be outside the valid range for f2xm1.  */