path: root/sysdeps
2017-11-15  Cleanup sigpause implementation  (Adhemerval Zanella)
This patch simplifies sigpause by removing the single-thread optimization, since it is already handled by the __sigsuspend call. Checked on x86_64-linux-gnu. * sysdeps/posix/sigpause.c (do_sigpause): Remove. (__sigpause): Rely on __sigsuspend to implement single thread optimization. Add LIBC_CANCEL_HANDLED for cancellation marking. Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> Reviewed-by: Zack Weinberg <zackw@panix.com>
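A minimal sketch of the resulting shape (illustrative only, not the actual glibc code, which also adds the cancellation marking mentioned above): sigpause unblocks the requested signal in a copy of the current mask and lets sigsuspend do the waiting.

    #include <signal.h>

    /* Hypothetical helper name; the real function is __sigpause in
       sysdeps/posix/sigpause.c and also handles the BSD vs. System V
       flavour and LIBC_CANCEL_HANDLED.  */
    int
    sigpause_sketch (int sig)
    {
      sigset_t set;
      /* Start from the current signal mask ...  */
      if (sigprocmask (SIG_BLOCK, NULL, &set) < 0)
        return -1;
      /* ... unblock SIG in the copy ...  */
      if (sigdelset (&set, sig) < 0)
        return -1;
      /* ... and let sigsuspend wait; the single-thread optimization now
         happens inside sigsuspend itself.  */
      return sigsuspend (&set);
    }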
2017-11-15  Check length of ifname before copying it into the ifreq structure.  (Steve Ellcey)
[BZ #22442] * sysdeps/unix/sysv/linux/if_index.c (__if_nametoindex): Check if ifname is too long.
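A hedged sketch of the kind of check described (public netdevice API names; not the verbatim glibc change, and the ENODEV choice is an assumption here): the interface name must fit the fixed-size ifr_name field of struct ifreq.

    #include <errno.h>
    #include <net/if.h>
    #include <string.h>

    /* Illustrative only: refuse names that would overflow the
       IFNAMSIZ-byte ifr_name buffer before copying.  */
    static int
    fill_ifreq_name (struct ifreq *ifr, const char *ifname)
    {
      if (strlen (ifname) >= IFNAMSIZ)
        {
          errno = ENODEV;   /* Assumed error code for this sketch.  */
          return -1;
        }
      strcpy (ifr->ifr_name, ifname);
      return 0;
    }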
2017-11-15  linux: Include <sysdep-cancel.h> for epoll_wait  (Luke Shumaker)
The epoll_wait wrapper uses the raw syscall if __NR_epoll_wait is defined, and falls back to calling epoll_pwait(..., NULL) if it isn't defined. However, it didn't include the appropriate headers for __NR_epoll_wait to be defined, so it was *always* falling back to calling epoll_pwait! This mistake was introduced in b62c3815912bc679a966134affdedd3f35ae8621, when epoll_wait changed from being in syscalls.list to always having a C wrapper. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
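A simplified sketch of the wrapper's shape (using the public syscall() interface rather than glibc's internal SYSCALL_CANCEL machinery): with the right headers included, __NR_epoll_wait selects the raw syscall, otherwise the epoll_pwait fallback is used.

    #include <stddef.h>
    #include <sys/epoll.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Illustrative wrapper: use the epoll_wait syscall when the kernel
       ABI provides one, otherwise fall back to epoll_pwait with a NULL
       sigmask, which has the same meaning.  */
    int
    epoll_wait_sketch (int epfd, struct epoll_event *events,
                       int maxevents, int timeout)
    {
    #ifdef __NR_epoll_wait
      return syscall (__NR_epoll_wait, epfd, events, maxevents, timeout);
    #else
      return epoll_pwait (epfd, events, maxevents, timeout, NULL);
    #endif
    }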
2017-11-13  ld.so: Add architecture specific fields  (H.J. Lu)
To support Intel Control-flow Enforcement Technology (CET) run-time control: 1. An architecture specific field in the writable ld.so namespace is needed to indicate if CET features are enabled at run-time. 2. An architecture specific field in struct link_map is needed if CET features are enabled in an ELF module. This patch adds dl-procruntime.c to the writable ld.so namespace and link_map.h to struct link_map. Tested with build-many-glibcs.py. * elf/dl-support.c: Include <dl-procruntime.c>. * include/link.h: Include <link_map.h>. * sysdeps/generic/dl-procruntime.c: New file. * sysdeps/generic/link_map.h: Likewise. * sysdeps/generic/ldsodefs.h: Include <dl-procruntime.c> in the writable ld.so namespace.
2017-11-11  Fix clog10_downward ulps on hppa.  (John David Anglin)
2017-11-11 John David Anglin <danglin@gcc.gnu.org> * sysdeps/hppa/fpu/libm-test-ulps: Update clog10_downward ulps.
2017-11-09  Add jmp_buf-macros.h  (H.J. Lu)
Verify that sizes, alignments and field offsets of jmp_buf as well as sigjmp_buf are unchanged regardless of how struct __jmp_buf_tag is defined. Since jmp_buf is target specific, jmp_buf-macros.h is added for each Linux target. A new target must provide its own jmp_buf-macros.h. TODO: Hurd needs to provide a jmp_buf-macros.h. Tested with build-many-glibcs.py. * include/setjmp.h [!_ISOMAC]: Include <stddef.h> and <jmp_buf-macros.h>. [!_ISOMAC] (STR_HELPER): New. [!_ISOMAC] (STR): Likewise. [!_ISOMAC] (TEST_SIZE): Likewise. [!_ISOMAC] (TEST_ALIGN): Likewise. [!_ISOMAC] (TEST_OFFSET): Likewise. [!_ISOMAC] Add _Static_assert to check sizes, alignments and field offsets of jmp_buf as well as sigjmp_buf. * sysdeps/unix/sysv/linux/aarch64/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/alpha/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/arm/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/hppa/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/i386/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/ia64/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/m68k/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/microblaze/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/mips/mips32/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/mips/mips64/n32/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/mips/mips64/n64/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/nios2/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/powerpc/powerpc32/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/powerpc/powerpc64/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/s390/s390-32/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/s390/s390-64/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/sh/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/sparc/sparc32/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/sparc/sparc64/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/tile/tilepro/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/x86_64/64/jmp_buf-macros.h: Likewise. * sysdeps/unix/sysv/linux/x86_64/x32/jmp_buf-macros.h: Likewise.
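A sketch of the mechanism (macro names follow the ChangeLog; in the real test the expected values are hard-coded per-target constants from <jmp_buf-macros.h>, so any unintended ABI change breaks the build; here they are computed, which only shows the shape of the check):

    #include <setjmp.h>
    #include <stddef.h>

    #define TEST_SIZE(type, size) \
      _Static_assert (sizeof (type) == (size), "size of " #type " changed")
    #define TEST_ALIGN(type, align) \
      _Static_assert (_Alignof (type) == (align), \
                      "alignment of " #type " changed")

    /* In glibc the right-hand sides are per-target constants.  */
    TEST_SIZE (jmp_buf, sizeof (jmp_buf));
    TEST_ALIGN (jmp_buf, _Alignof (jmp_buf));
    TEST_SIZE (sigjmp_buf, sizeof (sigjmp_buf));
    TEST_ALIGN (sigjmp_buf, _Alignof (sigjmp_buf));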
2017-11-07  nptl: Define __PTHREAD_MUTEX_{NUSERS_AFTER_KIND,USE_UNION}  (Adhemerval Zanella)
This patch adds two new internal defines to set the internal pthread_mutex_t layout required by the supported ABIs: 1. __PTHREAD_MUTEX_NUSERS_AFTER_KIND, which controls whether the __nusers field is defined before or after __kind. The preferred value is 0 for new ports, which places __nusers before __kind. 2. __PTHREAD_MUTEX_USE_UNION, which controls whether the internal __spins and __list members are placed inside a union for linuxthreads compatibility. The preferred value is 0 for new ports, which defines both fields without a union. It fixes the wrong offset of __kind on x86_64-linux-gnu-x32. Checked with a make check run-built-tests=no on all affected ABIs. [BZ #22298] * nptl/allocatestack.c (allocate_stack): Check if __PTHREAD_MUTEX_HAVE_PREV is non-zero, instead if __PTHREAD_MUTEX_HAVE_PREV is defined. * nptl/descr.h (pthread): Likewise. * nptl/nptl-init.c (__pthread_initialize_minimal_internal): Likewise. * nptl/pthread_create.c (START_THREAD_DEFN): Likewise. * sysdeps/nptl/fork.c (__libc_fork): Likewise. * sysdeps/nptl/pthread.h (PTHREAD_MUTEX_INITIALIZER): Likewise. * sysdeps/nptl/bits/thread-shared-types.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New defines. (__pthread_internal_list): Check __PTHREAD_MUTEX_USE_UNION instead of __WORDSIZE for internal layout. (__pthread_mutex_s): Check __PTHREAD_MUTEX_NUSERS_AFTER_KIND instead of __WORDSIZE for internal __nusers layout and __PTHREAD_MUTEX_USE_UNION instead of __WORDSIZE whether to use an union for __spins and __list fields. (__PTHREAD_MUTEX_HAVE_PREV): Define also for __PTHREAD_MUTEX_USE_UNION case. * sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New defines. * sysdeps/alpha/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/arm/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/hppa/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/ia64/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/m68k/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/mips/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/nios2/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/s390/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/sh/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/sparc/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/tile/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/x86/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
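A simplified illustration of what the first knob controls (member names mirror glibc's thread-shared-types.h, but padding and the remaining members are omitted, so this is not the real layout; MUTEX_NUSERS_AFTER_KIND stands in for __PTHREAD_MUTEX_NUSERS_AFTER_KIND):

    /* Incomplete sketch of struct __pthread_mutex_s.  */
    #ifndef MUTEX_NUSERS_AFTER_KIND
    # define MUTEX_NUSERS_AFTER_KIND 0   /* Preferred value for new ports.  */
    #endif

    struct pthread_mutex_sketch
    {
      int __lock;
      unsigned int __count;
      int __owner;
    #if !MUTEX_NUSERS_AFTER_KIND
      unsigned int __nusers;     /* New ports: __nusers before __kind.  */
    #endif
      int __kind;
    #if MUTEX_NUSERS_AFTER_KIND
      unsigned int __nusers;     /* Legacy ABIs: __nusers after __kind.  */
    #endif
      /* The second knob, __PTHREAD_MUTEX_USE_UNION, independently selects
         whether __spins and __list are plain members or live in a
         linuxthreads-compatible union (omitted here).  */
    };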
2017-11-07  nptl: Add tests for internal pthread_mutex_t offsets  (Adhemerval Zanella)
This patch adds a new build-time test to check the offsets of user-visible internal fields. Although currently the only field which is statically initialized to a non-zero value is pthread_mutex_t.__data.__kind, the tests also check the offsets of the __kind, __spins, __elision (if supported), and __list internal members. An internal header (pthread-offsets.h) is added to each major ABI with the reference values. Checked on x86_64-linux-gnu and with a build check for all affected ABIs (aarch64-linux-gnu, alpha-linux-gnu, arm-linux-gnueabihf, hppa-linux-gnu, i686-linux-gnu, ia64-linux-gnu, m68k-linux-gnu, microblaze-linux-gnu, mips64-linux-gnu, mips64-n32-linux-gnu, mips-linux-gnu, powerpc64le-linux-gnu, powerpc-linux-gnu, s390-linux-gnu, s390x-linux-gnu, sh4-linux-gnu, sparc64-linux-gnu, sparcv9-linux-gnu, tilegx-linux-gnu, tilegx-linux-gnu-x32, tilepro-linux-gnu, x86_64-linux-gnu, and x86_64-linux-x32). * nptl/pthreadP.h (ASSERT_PTHREAD_STRING, ASSERT_PTHREAD_INTERNAL_OFFSET): New macro. * nptl/pthread_mutex_init.c (__pthread_mutex_init): Add build time checks for internal pthread_mutex_t offsets. * sysdeps/aarch64/nptl/pthread-offsets.h (__PTHREAD_MUTEX_NUSERS_OFFSET, __PTHREAD_MUTEX_KIND_OFFSET, __PTHREAD_MUTEX_SPINS_OFFSET, __PTHREAD_MUTEX_ELISION_OFFSET, __PTHREAD_MUTEX_LIST_OFFSET): New macro. * sysdeps/alpha/nptl/pthread-offsets.h: Likewise. * sysdeps/arm/nptl/pthread-offsets.h: Likewise. * sysdeps/hppa/nptl/pthread-offsets.h: Likewise. * sysdeps/i386/nptl/pthread-offsets.h: Likewise. * sysdeps/ia64/nptl/pthread-offsets.h: Likewise. * sysdeps/m68k/nptl/pthread-offsets.h: Likewise. * sysdeps/microblaze/nptl/pthread-offsets.h: Likewise. * sysdeps/mips/nptl/pthread-offsets.h: Likewise. * sysdeps/nios2/nptl/pthread-offsets.h: Likewise. * sysdeps/powerpc/nptl/pthread-offsets.h: Likewise. * sysdeps/s390/nptl/pthread-offsets.h: Likewise. * sysdeps/sh/nptl/pthread-offsets.h: Likewise. * sysdeps/sparc/nptl/pthread-offsets.h: Likewise. * sysdeps/tile/nptl/pthread-offsets.h: Likewise. * sysdeps/x86_64/nptl/pthread-offsets.h: Likewise. Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
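A hedged sketch of such a build-time offset check (the real macro is ASSERT_PTHREAD_INTERNAL_OFFSET in pthreadP.h and compares against the per-ABI constants in pthread-offsets.h; here the expected value is computed, so only the shape is shown):

    #include <pthread.h>
    #include <stddef.h>

    /* Illustration only: in glibc the second argument is a hard-coded
       per-ABI constant such as __PTHREAD_MUTEX_KIND_OFFSET.  */
    #define ASSERT_MUTEX_OFFSET(member, expected)                  \
      _Static_assert (offsetof (pthread_mutex_t, __data.member)    \
                      == (expected),                               \
                      "internal offset of " #member " changed")

    ASSERT_MUTEX_OFFSET (__kind,
                         offsetof (pthread_mutex_t, __data.__kind));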
2017-11-07  Move <bits/mman-linux.h> to the Linux sysdeps directory  (Florian Weimer)
The header file is no longer used on anything but Linux.
2017-11-07  powerpc: Use latest optimization for internal function calls  (Rajalakshmi Srinivasaraghavan)
Update strcasestr-power8 to use power8 version of strnlen for calculating length. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>
2017-11-06  Cleanup Linux sigqueue implementation  (Adhemerval Zanella)
This patch simplifies the Linux sigqueue implementation by assuming the existence of __NR_rt_sigqueueinfo due to the minimum kernel requirement (it pre-dates the Linux git inclusion for Linux 2.6.12). Checked on x86_64-linux-gnu. * sysdeps/unix/sysv/linux/sigqueue.c (__sigqueue): Assume __NR_rt_sigqueueinfo. Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> Reviewed-by: Zack Weinberg <zackw@panix.com>
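A sketch of the call's shape under that assumption (public kernel interface names; the real __sigqueue uses glibc's internal syscall macros):

    #include <signal.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Illustrative only: queue SIG with VAL to PID via rt_sigqueueinfo,
       which every supported kernel provides.  */
    static int
    sigqueue_sketch (pid_t pid, int sig, const union sigval val)
    {
      siginfo_t info;
      memset (&info, 0, sizeof (info));
      info.si_signo = sig;
      info.si_code = SI_QUEUE;
      info.si_pid = getpid ();
      info.si_uid = getuid ();
      info.si_value = val;
      return syscall (__NR_rt_sigqueueinfo, pid, sig, &info);
    }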
2017-11-06  Simplify Linux sig{timed}wait{info} implementations  (Adhemerval Zanella)
This patch simplifies sig{timed}wait{info} by: - Assuming the existence of __NR_rt_sigtimedwait on all architectures due to the minimum kernel version requirement (it pre-dates the Linux git inclusion for Linux 2.6.12). - Calling __sigtimedwait in both sigwait and sigwaitinfo. - Adding an external private definition of __sigtimedwait for the libpthread.so call, now that sigwait is based on an internal sigtimedwait call and is present in both libc.so and libpthread.so. Checked on x86_64-linux-gnu. * sysdeps/unix/sysv/linux/Versions (libc) [GLIBC_PRIVATE]: Add __sigtimedwait. * sysdeps/unix/sysv/linux/sigtimedwait.c: Simplify includes and assume __NR_rt_sigtimedwait. * sysdeps/unix/sysv/linux/sigwait.c (__sigwait): Call __sigtimedwait and add LIBC_CANCEL_HANDLED for cancellation marking. * sysdeps/unix/sysv/linux/sigwaitinfo.c (__sigwaitinfo): Likewise. Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> Reviewed-by: Zack Weinberg <zackw@panix.com>
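A minimal sketch of sigwait on top of sigtimedwait (public POSIX interfaces; the glibc version calls the internal __sigtimedwait and adds the cancellation marking):

    #include <errno.h>
    #include <signal.h>
    #include <stddef.h>

    /* Illustrative only: a NULL timeout makes sigtimedwait block just
       like sigwaitinfo; sigwait merely reports the signal number and
       returns an error number instead of setting errno.  */
    static int
    sigwait_sketch (const sigset_t *set, int *sig)
    {
      siginfo_t si;
      int ret;
      do
        ret = sigtimedwait (set, &si, NULL);
      while (ret < 0 && errno == EINTR);
      if (ret < 0)
        return errno;
      *sig = si.si_signo;
      return 0;
    }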
2017-11-06  arm: Implement memchr ifunc selection in C  (Adhemerval Zanella)
This patch refactors the ARM memchr ifunc selector into a C implementation. No functional change is expected, including the ifunc resolution rules. It also reorganizes the ifunc options code: 1. memchr_impl.S is renamed to memchr_neon.S and the multiple compilation options (which route to the armv6t2 memchr) are removed. The code built when __ARM_NEON__ is defined is also simplified. 2. A memchr_noneon variant is added (which was built along the previous ifunc resolution) and includes the armv6t2 implementation directly. 3. The same is done for the loader object. Alongside the aforementioned changes, it also includes some cleanups: - The internal memchr definition (__GI_memchr) is now a hidden symbol. - There is no need to create hidden definitions for the ifunc variants. Checked on armv7-linux-gnueabihf and with a build for arm-linux-gnueabi and arm-linux-gnueabihf, with and without multiarch support, and with both GCC 7.1 and GCC mainline. * sysdeps/arm/armv7/multiarch/Makefile [$(subdir) = string] (sysdeps_routines): Add memchr_noneon. * sysdeps/arm/armv7/multiarch/ifunc-memchr.h: New file. * sysdeps/arm/armv7/multiarch/memchr_noneon.S: Likewise. * sysdeps/arm/armv7/multiarch/rtld-memchr.S: Likewise. * sysdeps/arm/armv7/multiarch/memchr.S: Remove file. * sysdeps/arm/armv7/multiarch/memchr.c: New file. * sysdeps/arm/armv7/multiarch/memchr_impl.S: Move to ... * sysdeps/arm/armv7/multiarch/memchr_neon.S: ... here. Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
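A hedged sketch of what an ifunc selector written in C looks like (generic GCC ifunc attribute form with placeholder implementations; glibc's arm-ifunc.h helpers, the resolver signature and the HWCAP_ARM_NEON value are assumptions here):

    #include <stddef.h>
    #include <string.h>

    /* Placeholder stand-ins for the NEON and non-NEON variants.  */
    static void *
    memchr_neon_sketch (const void *s, int c, size_t n)
    { return memchr (s, c, n); }

    static void *
    memchr_noneon_sketch (const void *s, int c, size_t n)
    { return memchr (s, c, n); }

    #define HWCAP_ARM_NEON 4096   /* Assumed hwcap bit for the sketch.  */

    typedef void *(*memchr_fn) (const void *, int, size_t);

    /* On ARM the dynamic loader passes AT_HWCAP to ifunc resolvers.  */
    static memchr_fn
    memchr_resolver (unsigned long hwcap)
    {
      return (hwcap & HWCAP_ARM_NEON)
             ? memchr_neon_sketch : memchr_noneon_sketch;
    }

    void *memchr_sketch (const void *s, int c, size_t n)
      __attribute__ ((ifunc ("memchr_resolver")));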
2017-11-06  arm: Implement memcpy ifunc selection in C  (Adhemerval Zanella)
This patch refactors the ARM memcpy ifunc selector into a C implementation. No functional change is expected, including the ifunc resolution rules. It also adds some cleanup: - The internal memcpy definition (__GI_memcpy) is now a hidden symbol. - There is no need to create hidden definitions for the ifunc variants. Checked on armv7-linux-gnueabihf and with a build for arm-linux-gnueabi and arm-linux-gnueabihf, with and without multiarch support, and with both GCC 7.1 and GCC mainline. I also checked the possible multiarch configurations that trigger different memcpy builds (__ARM_NEON__ && !__SOFT_FP__, !__ARM_NEON__ && !__SOFT_FP__, and !__ARM_NEON__ && __SOFT_FP__). * sysdeps/arm/arm-ifunc.h: New file. * sysdeps/arm/armv7/multiarch/ifunc-memcpy.h: Likewise. * sysdeps/arm/armv7/multiarch/memcpy.c: Likewise. * sysdeps/arm/armv7/multiarch/memcpy_arm.S: Likewise. * sysdeps/arm/armv7/multiarch/rtld-memcpy.S: Likewise. * sysdeps/arm/armv7/multiarch/memcpy_neon.S [!__ARM_NEON__] (__memcpy_neon): Avoid create hidden alias. * sysdeps/arm/armv7/multiarch/memcpy_vfp.S [!__ARM_NEON_] (__memcpy_vfp): Likewise. * sysdeps/arm/armv7/multiarch/Makefile [$(subdir) = string] (sysdep_routines): Add memcpy_arm. * sysdeps/arm/armv7/multiarch/memcpy.S: Remove file. Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2017-11-06  Do not declare _Float128 support for powerpc64le -mlong-double-64 (bug 22402).  (Joseph Myers)
The powerpc bits/floatn.h declares _Float128 support to be present when the compiler supports it for powerpc64le. However, in the case where -mlong-double-64 is used, __MATH_TG does not actually support _Float128; it only supports _Float128 in the distinct-long-double case. This shows up as a build failure when building glibc mainline with GCC mainline, given the recently added sanity check in math.h for configurations supported by __MATH_TG, as the compat code for -mlong-double-64 fails to build. However, the bug was logically present before that change (including in 2.26), just less visible. This patch fixes the build failure by declaring _Float128 to be unsupported in that case. (Of course this can't actually stop users calling the type-generic macros with _Float128 arguments with -mlong-double-64, just as they could be called with other unsupported types on other platforms, but perhaps makes it less likely by making all the type-specific _Float128 interfaces invisible in that case.) Tested compilation for powerpc64le with build-many-glibcs.py. [BZ #22402] * sysdeps/powerpc/bits/floatn.h: Include <bits/long-double.h>. [__NO_LONG_DOUBLE_MATH] (__HAVE_FLOAT128): Define to 0.
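Per the ChangeLog, the change amounts to roughly the following in the powerpc <bits/floatn.h> (a sketch, not the verbatim header):

    /* Sketch of the powerpc <bits/floatn.h> fix described above.  */
    #include <bits/long-double.h>

    #if defined __NO_LONG_DOUBLE_MATH
    /* With -mlong-double-64 the __MATH_TG machinery cannot handle
       _Float128, so do not advertise support for it.  */
    # undef __HAVE_FLOAT128
    # define __HAVE_FLOAT128 0
    #endif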
2017-11-03  aarch64: Guess L1 cache linesize for aarch64  (Richard Henderson)
Using the cache hierarchy linesize minimum in CTR_EL0. See the comment within the code for rationale. * sysdeps/unix/sysv/linux/aarch64/sysconf.c: New file.
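A sketch of how a line size can be derived from CTR_EL0 (a standard AArch64 register; DminLine in bits [19:16] is the log2 of the smallest data cache line, measured in 4-byte words):

    /* Illustrative only; the glibc sysconf.c code applies its own policy
       around this value.  */
    static inline unsigned long
    aarch64_dcache_linesize (void)
    {
      unsigned long ctr;
      asm ("mrs %0, ctr_el0" : "=r" (ctr));
      /* 4 bytes per word, shifted by DminLine (CTR_EL0[19:16]).  */
      return 4UL << ((ctr >> 16) & 0xf);
    }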
2017-11-03  aarch64: optimize _dl_tlsdesc_dynamic fast path  (Szabolcs Nagy)
Remove some load/store instructions from the dynamic tlsdesc resolver fast path. This gives around 20% faster tls access in dlopened shared libraries (assuming glibc ran out of static tls space). * sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_dynamic): Optimize.
2017-11-03  arm: Remove lazy tlsdesc initialization related code  (Szabolcs Nagy)
Lazy tlsdesc initialization is no longer used in the dynamic linker so all related code can be removed. * sysdeps/arm/dl-machine.h (elf_machine_runtime_setup): Remove DT_TLSDESC_GOT initialization. * sysdeps/arm/dl-tlsdesc.S (_dl_tlsdesc_lazy_resolver): Remove. (_dl_tlsdesc_resolve_hold): Likewise. * sysdeps/aarch64/dl-tlsdesc.h (_dl_tlsdesc_lazy_resolver): Remove. (_dl_tlsdesc_resolve_hold): Likewise. * sysdeps/aarch64/tlsdesc.c (_dl_tlsdesc_lazy_resolver_fixup): Remove. (_dl_tlsdesc_resolve_hold_fixup): Likewise.
2017-11-03  arm: Remove unnecessary volatile qualifier  (Szabolcs Nagy)
There is no reason to treat tlsdesc entries as volatile objects. * sysdeps/arm/dl-machine.h (elf_machine_rel): Remove volatile.
2017-11-03  [BZ #18572] arm: Disable lazy initialization of tlsdesc entries  (Szabolcs Nagy)
Follow up to https://sourceware.org/ml/libc-alpha/2015-11/msg00272.html Always do tls descriptor initialization at load time during relocation processing (as if DF_BIND_NOW were set for the binary) to avoid barriers at every tls access. This patch mimics bind-now semantics in the lazy relocation code of the arm target (elf_machine_lazy_rel). Ideally the static linker should be updated too to not emit tlsdesc relocs in DT_REL*, so elf_machine_lazy_rel is not called on them at all. [BZ #18572] * sysdeps/arm/dl-machine.h (elf_machine_lazy_rel): Do symbol binding non-lazily for R_ARM_TLS_DESC.
2017-11-03  [BZ #17078] arm: remove prelinker support for R_ARM_TLS_DESC  (Szabolcs Nagy)
This patch reverts commit 9c82da17b5794efebe005de2fd22d61a3ea4b58a Author: Maciej W. Rozycki <macro@codesourcery.com> Date: 2014-07-17 19:22:05 +0100 [BZ #17078] ARM: R_ARM_TLS_DESC prelinker support This only implemented support for the lazy binding case (and thus closed the bugzilla ticket prematurely), however tlsdesc on arm is not correct with lazy binding because there is a data race between the lazy initialization code and tlsdesc resolver functions. Lazy initialization of tlsdesc entries will be removed from arm to fix the data races and thus this half-finished prelinker support is no longer useful. [BZ #17078] * sysdeps/arm/dl-machine.h (elf_machine_rela): Remove the R_ARM_TLS_DESC case. (elf_machine_lazy_rel): Remove the prelink check.
2017-11-03  aarch64: Remove barriers from TLS descriptor functions  (Szabolcs Nagy)
Remove ldar synchronization and most lazy TLSDESC initialization related code. * sysdeps/aarch64/dl-machine.h (elf_machine_runtime_setup): Remove DT_TLSDESC_GOT initialization. * sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return_lazy): Remove. (_dl_tlsdesc_resolve_rela): Likewise. (_dl_tlsdesc_resolve_hold): Likewise. (_dl_tlsdesc_undefweak): Remove ldar. (_dl_tlsdesc_dynamic): Likewise. * sysdeps/aarch64/dl-tlsdesc.h (_dl_tlsdesc_return_lazy): Remove. (_dl_tlsdesc_resolve_rela): Likewise. (_dl_tlsdesc_resolve_hold): Likewise. * sysdeps/aarch64/tlsdesc.c (_dl_tlsdesc_resolve_rela_fixup): Remove. (_dl_tlsdesc_resolve_hold_fixup): Likewise. (_dl_tlsdesc_resolve_rela): Likewise. (_dl_tlsdesc_resolve_hold): Likewise.
2017-11-03  aarch64: Disable lazy symbol binding of TLSDESC  (Szabolcs Nagy)
Always do TLS descriptor initialization at load time during relocation processing to avoid barriers at every TLS access. In non-dlopened shared libraries the overhead of tls access vs static global access is > 3x bigger when lazy initialization is used (_dl_tlsdesc_return_lazy) compared to bind-now (_dl_tlsdesc_return) so the barriers dominate tls access performance. TLSDESC relocs are in DT_JMPREL which are processed at load time using elf_machine_lazy_rel which is only supposed to do lightweight initialization using the DT_TLSDESC_PLT trampoline (the trampoline code jumps to the entry point in DT_TLSDESC_GOT which does the lazy tlsdesc initialization at runtime). This patch changes elf_machine_lazy_rel in aarch64 to do the symbol binding and initialization as if DF_BIND_NOW was set, so the non-lazy code path of elf/do-rel.h was replicated. The static linker could be changed to emit TLSDESC relocs in DT_REL*, which are processed non-lazily, but the goal of this patch is to always guarantee bind-now semantics, even if the binary was produced with an old linker, so the barriers can be dropped in tls descriptor functions. After this change the synchronizing ldar instructions can be dropped as well as the lazy initialization machinery including the DT_TLSDESC_GOT setup. I believe this should be done on all targets, including ones where no barrier is needed for lazy initialization. There is very little gain in optimizing for a large number of symbolic tlsdesc relocations which is an extremely uncommon case. And currently the tlsdesc entries are only readonly protected with -z now and some hardenings against writable JUMPSLOT relocs don't work for TLSDESC so they are a security hazard. (But to fix that the static linker has to be changed.) * sysdeps/aarch64/dl-machine.h (elf_machine_lazy_rel): Do symbol binding and initialization non-lazily for R_AARCH64_TLSDESC.
2017-11-02  test-errno-linux: quotactl can fail with EPERM in containers  (Florian Weimer)
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2017-11-01  x86: Add sysdeps/x86/sysdep.h  (H.J. Lu)
Add a new header file, sysdeps/x86/sysdep.h, for common assembly code macros between i386 and x86-64. Tested on i686 and x86-64. There are no differences in outputs of "readelf -a" and "objdump -dw" on all glibc shared objects before and after the patch. * sysdeps/i386/sysdep.h: Include <sysdeps/x86/sysdep.h> instead of <sysdeps/generic/sysdep.h>. (ALIGNARG): Removed. (ASM_SIZE_DIRECTIVE): Likewise. (ENTRY): Likewise. (END): Likewise. (ENTRY_CHK): Likewise. (END_CHK): Likewise. (syscall_error): Likewise. (mcount): Likewise. (PSEUDO_END): Likewise. (L): Likewise. (atom_text_section): Likewise. * sysdeps/x86/sysdep.h: New file. * sysdeps/x86_64/sysdep.h: Include <sysdeps/x86/sysdep.h> instead of <sysdeps/generic/sysdep.h>. (ALIGNARG): Removed. (ASM_SIZE_DIRECTIVE): Likewise. (ENTRY): Likewise. (END): Likewise. (ENTRY_CHK): Likewise. (END_CHK): Likewise. (syscall_error): Likewise. (mcount): Likewise. (PSEUDO_END): Likewise. (L): Likewise. (atom_text_section): Likewise.
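For readers unfamiliar with these macros, a rough illustration of what an ENTRY/END pair in such a shared sysdep.h typically expands to (not the exact glibc definitions, which also handle CFI, profiling and more):

    /* Rough sketch of shared assembly helper macros for .S files.  */
    #define ASM_SIZE_DIRECTIVE(name) .size name, .-name

    #define ENTRY(name)          \
      .globl name;               \
      .type name, @function;     \
      .align 16;                 \
      name:

    #define END(name)            \
      ASM_SIZE_DIRECTIVE (name)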
2017-10-31  Remove useless #ifdefs from Linux sig*.c syscalls  (Yury Norov)
The sigprocmask.c, sigtimedwait.c, sigwait.c and sigwaitinfo.c files from sysdeps/unix/sysv/linux include nptl-signals.h via nptl/pthreadP.h, and so SIGCANCEL and SIGSETXID become defined unconditionally. But later in the code there are checks for whether the symbols are defined, which are useless. This patch removes those useless checks. Checked on x86_64-linux-gnu. * sysdeps/unix/sysv/linux/sigprocmask.c: Remove useless #ifdefs. * sysdeps/unix/sysv/linux/sigtimedwait.c: Likewise. * sysdeps/unix/sysv/linux/sigwait.c: Likewise. * sysdeps/unix/sysv/linux/sigwaitinfo.c: Likewise. Signed-off-by: Yury Norov <ynorov@caviumnetworks.com> Reviewed-by: Andreas Schwab <schwab@suse.de> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2017-10-31  Consolidate Linux sigpending() implementation  (Yury Norov)
ia64, s390-64, sparc64 and x86_64 host their own implementations of sigpending() in corresponding files, but they are identical to the generic Linux file apart from a few comments. This patch removes those files, so the implementation of sigpending() is taken from sysdeps/unix/sysv/linux for all ports. Build-tested on x86_64. * sysdeps/unix/sysv/linux/ia64/sigpending.c: Remove file. * sysdeps/unix/sysv/linux/s390/s390-64/sigpending.c: Likewise. * sysdeps/unix/sysv/linux/sparc/sparc64/sigpending.c: Likewise. * sysdeps/unix/sysv/linux/x86_64/sigpending.c: Likewise. Signed-off-by: Yury Norov <ynorov@caviumnetworks.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2017-10-31  [PowerPC64] sysdep.h doesn't need to be included in multiarch files  (Alan Modra)
When the .c/.S file neither uses nor modifies macros defined in sysdep.h there is no point to #include it. The same goes for math_ldbl_opt.h except that it includes shlib-compat.h, and if compat_symbol is redefined we need to include shlib-compat.h first. * sysdeps/powerpc/powerpc64/fpu/multiarch/e_expf-power8.S: Don't include sysdep.h. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceilf-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceilf-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_cosf-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_cosf-ppc64.c: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_finite-power7.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_finite-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_floor-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_floor-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_roundf-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_roundf-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_sinf-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_sinf-ppc64.c: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_truncf-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_truncf-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memchr-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memchr-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcmp-power4.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcmp-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcmp-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcpy-a2.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcpy-cell.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcpy-power4.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcpy-power6.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcpy-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcpy-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memmove-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/mempcpy-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memrchr-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memrchr-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memset-power4.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memset-power6.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memset-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memset-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/rawmemchr-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/stpcpy-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/stpncpy-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/stpncpy-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcasecmp-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcasecmp-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcasecmp_l-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcasestr-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strchr-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strchr-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strchr-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strchrnul-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strchrnul-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcmp-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcmp-power8.S: Likewise. 
* sysdeps/powerpc/powerpc64/multiarch/strcmp-power9.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcmp-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcpy-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcspn-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strlen-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strlen-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strlen-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncase-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncmp-power4.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncmp-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncmp-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncmp-power9.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncmp-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncpy-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncpy-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strnlen-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strnlen-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strrchr-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strrchr-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strspn-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strstr-power7.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_floorf-ppc64.S: Don't include sysdep.h and math_ldbl_opt.h. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceil-power5+.S: Don't include sysdep.h and math_ldbl_opt.h. Include shlib-compat.h. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceil-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_copysign-power6.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_copysign-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_floorf-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isinf-power7.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isinf-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-power5.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-power6.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-power6x.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-power7.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llrint-power6x.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llrint-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llrint-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llround-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llround-power6x.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llround-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llround-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llroundf-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_round-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_round-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_trunc-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_trunc-ppc64.S: Likewise.
2017-10-31  [PowerPC64] strncase_l-power7.c should use strncase_l.c  (Alan Modra)
This is another one where we'll be wanting the base symbols for powerpc64le rather than just a power7 variant. * sysdeps/powerpc/powerpc64/multiarch/strncase_l-power7.c: Include string/strncase_l.c, not string/strncase.c. (USE_IN_EXTENDED_LOCALE_MODEL): Don't define. (libc_hidden_def): Redefine.
2017-10-31  [PowerPC64] Tidy strcasecmp_l-power7.S symbols  (Alan Modra)
The routine being assembled here is strcasecmp_l, so ask for that via __STRCMP and STRCMP defines. That change means tweaking the power7 override. Needed for later powerpc64le changes where we want the base symbols, not just a power7 variant. * sysdeps/powerpc/powerpc64/multiarch/strcasecmp_l-power7.S: (__STRCMP, STRCMP, __strcasecmp_l): Define. (__strcasecmp): Don't define.
2017-10-31  [PowerPC64] Wrap str{,n}cmp-power{8,9}.S in IS_IN(libc)  (Alan Modra)
These functions aren't used in ld.so at the moment since we don't have strcmp or strncmp ifuncs for them there. Remove the ld.so bloat. * sysdeps/powerpc/powerpc64/multiarch/strcmp-power8.S: Wrap in IS_IN (libc). * sysdeps/powerpc/powerpc64/multiarch/strcmp-power9.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncmp-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncmp-power9.S: Likewise.
2017-10-31  [PowerPC64] Remove duplicate define in stpncpy-power8.S  (Alan Modra)
USE_AS_STPNCPY is defined by sysdeps/powerpc/powerpc64/power8/stpncpy.S, included by this file. * sysdeps/powerpc/powerpc64/multiarch/stpncpy-power8.S: Don't define USE_AS_STPNCPY.
2017-10-31  [PowerPC64] Don't define __GI_ variant of isnan for static lib  (Alan Modra)
It seems to me that libc.a should not contain any of the __GI_ symbols, and certainly --enable-multi-arch ought to not add to the list. At the end of this patch series we have the following in both --enable-multi-arch and --disable-multi-arch libc.a:

    0000000000000000 T __GI___readdir64
    0000000000000000 T __GI___fxstatat64
    0000000000000000 T __GI_getrlimit
    0000000000000000 T __GI___getrlimit

* sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-ppc64.S (hidden_def): Redefine only when SHARED.
2017-10-30  sysdeps/x86/libc-start.c: Add /* !SHARED */  (H.J. Lu)
* sysdeps/x86/libc-start.c: Add /* !SHARED */.
2017-10-30  Reformat sysdeps/x86/libc-start.c  (H.J. Lu)
* sysdeps/x86/libc-start.c: Reformat.
2017-10-30  i586: Use conditional branches in strcpy.S [BZ #22353]  (H.J. Lu)
i586 strcpy.S used a clever trick with LEA to implement a jump table:

        /* ECX has the last 2 bits of the address of source - 1.  */
        andl    $3, %ecx
        call    2f
    2:  popl    %edx
        /* 0xb is the distance between 2: and 1:.  */
        leal    0xb(%edx,%ecx,8), %ecx
        jmp     *%ecx

        .align  8
    1:  /* ECX == 0 */
        orb     (%esi), %al
        jz      L(end)
        stosb
        xorl    %eax, %eax
        incl    %esi

        /* ECX == 1 */
        orb     (%esi), %al
        jz      L(end)
        stosb
        xorl    %eax, %eax
        incl    %esi

        /* ECX == 2 */
        orb     (%esi), %al
        jz      L(end)
        stosb
        xorl    %eax, %eax
        incl    %esi

        /* ECX == 3 */
    L(1): movl  (%esi), %ecx
        leal    4(%esi),%esi

This fails if there are instruction length changes before L(1):. This patch replaces it with conditional branches:

        cmpb    $2, %cl
        je      L(Src2)
        ja      L(Src3)
        cmpb    $1, %cl
        je      L(Src1)
    L(Src0):

which have similar performance and work with any instruction lengths. Tested on i586 and i686 with and without --disable-multi-arch. [BZ #22353] * sysdeps/i386/i586/strcpy.S (STRCPY): Use conditional branches. (1): Renamed to ... (L(Src0)): This. (L(Src1)): New. (L(Src2)): Likewise. (L(1)): Renamed to ... (L(Src3)): This.
2017-10-27  i386: Regenerate libm-test-ulps for gcc 7  (H.J. Lu)
Regenerate libm-test-ulps for gcc 7 with "-m32 -O2 -march=i586". * sysdeps/i386/fpu/libm-test-ulps: Regenerated for GCC 7 with "-O2 -march=i586".
2017-10-25  powerpc: Replace lxvd2x/stxvd2x with lvx/stvx in P7's memcpy/memmove  (Rajalakshmi Srinivasaraghavan)
POWER9 DD2.1 and earlier have an issue where some cache-inhibited vector loads trap to the kernel, causing a performance degradation. To handle this in memcpy and memmove, lvx/stvx is used for aligned addresses instead of lxvd2x/stxvd2x. Reference: https://patchwork.ozlabs.org/patch/814059/ * sysdeps/powerpc/powerpc64/power7/memcpy.S: Replace lxvd2x/stxvd2x with lvx/stvx. * sysdeps/powerpc/powerpc64/power7/memmove.S: Likewise. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2017-10-25  Replace "if if " with "if " in comments  (H.J. Lu)
* include/alloc_buffer.h: Replace "if if " with "if " in comments. * sysdeps/mips/memcpy.S: Likewise. * sysdeps/mips/memset.S: Likewise. * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S: Likewise. * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S: Likewise. * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S: Likewise.
2017-10-24  Update x86 fix-fp-int-compare-invalid.h for GCC 8.  (Joseph Myers)
The glibc implementation of iseqsig relies on ordered comparison operators raising the "invalid" exception for quiet NaN operands, with a workaround on platforms where a GCC bug means that exception is not raised. For x86, that bug has now been fixed for GCC 8, so this patch disables the workaround in that case. If and when the corresponding bugs for powerpc and s390 are fixed, the headers for those platforms should of course be updated similarly. Tested for x86_64 and x86, including with GCC mainline. Note that other failures appear with GCC mainline because of spurious use of ordered comparison instructions for unordered operations <https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82692>. * sysdeps/x86/fpu/fix-fp-int-compare-invalid.h (FIX_COMPARE_INVALID): Define to 0 if [__GNUC_PREREQ (8, 0)].
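Per the ChangeLog, the change reduces to a conditional of roughly this shape (sketch, not the verbatim header):

    /* Sketch of the x86 fix-fp-int-compare-invalid.h logic: GCC 8 fixed
       the missing "invalid" exception, so the workaround is only needed
       for older compilers.  */
    #include <features.h>

    #if __GNUC_PREREQ (8, 0)
    # define FIX_COMPARE_INVALID 0
    #else
    # define FIX_COMPARE_INVALID 1
    #endif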
2017-10-23  posix: Do not use WNOHANG in waitpid call for Linux posix_spawn  (Adhemerval Zanella)
As shown in some buildbot issues on aarch64 and powerpc, calling clone (VFORK) and waitpid (WNOHANG) does not guarantee the child is ready to be collected. This patch changes the flag back to 0, as before the fe05e1cb6d64 fix. This change can lead to scenario 4.3 described in that commit, where the waitpid call can hang indefinitely. However, this is a very unlikely and also undefined situation, where the caller is both trying to terminate a pid before posix_spawn returns and the pid-reuse race is triggered. I don't see how to correctly handle this specific situation within posix_spawn. Checked on x86_64-linux-gnu, aarch64-linux-gnu and powerpc64-linux-gnu. * sysdeps/unix/sysv/linux/spawni.c (__spawnix): Use 0 instead of WNOHANG in waitpid call.
2017-10-23  aarch64: Add missing math Makefile for recent commit  (Szabolcs Nagy)
Without -fno-math-errno, the builtins just do a call instead of inlining a single instruction.
2017-10-23  aarch64: Implement math acceleration via builtins  (Michael Collison)
This patch converts asm statements into builtins for AArch64. As an example for the file sysdeps/aarch64/fpu/s_ceil.c, we convert the function from double __ceil (double x) { double result; asm ("frintp\t%d0, %d1" : "=w" (result) : "w" (x) ); return result; } into double __ceil (double x) { return __builtin_ceil (x); } Tested on aarch64-linux-gnu with gcc-4.9.4 and gcc-6. * sysdeps/aarch64/fpu/e_sqrt.c (ieee754_sqrt): Replace asm statements with __builtin_sqrt. * sysdeps/aarch64/fpu/e_sqrtf.c (ieee754_sqrtf): Replace asm statements with __builtin_sqrtf. * sysdeps/aarch64/fpu/s_ceil.c (__ceil): Replace asm statements with __builtin_ceil. * sysdeps/aarch64/fpu/s_ceilf.c (__ceilf): Replace asm statements with __builtin_ceilf. * sysdeps/aarch64/fpu/s_floor.c (__floor): Replace asm statements with __builtin_floor. * sysdeps/aarch64/fpu/s_floorf.c (__floorf): Replace asm statements with __builtin_floorf. * sysdeps/aarch64/fpu/s_fma.c (__fma): Replace asm statements with __builtin_fma. * sysdeps/aarch64/fpu/s_fmaf.c (__fmaf): Replace asm statements with __builtin_fmaf. * sysdeps/aarch64/fpu/s_fmax.c (__fmax): Replace asm statements with __builtin_fmax. * sysdeps/aarch64/fpu/s_fmaxf.c (__fmaxf): Replace asm statements with __builtin_fmaxf. * sysdeps/aarch64/fpu/s_fmin.c (__fmin): Replace asm statements with __builtin_fmin. * sysdeps/aarch64/fpu/s_fminf.c (__fminf): Replace asm statements with __builtin_fminf. * sysdeps/aarch64/fpu/s_frint.c: Delete file. * sysdeps/aarch64/fpu/s_frintf.c: Delete file. * sysdeps/aarch64/fpu/s_llrint.c (__llrint): Replace asm statements with builtin_rint and conversion to int. * sysdeps/aarch64/fpu/s_llrintf.c (__llrintf): Likewise. * sysdeps/aarch64/fpu/s_llround.c (__llround): Replace asm statements with builtin_llround. * sysdeps/aarch64/fpu/s_llroundf.c (__llroundf): Likewise. * sysdeps/aarch64/fpu/s_lrint.c (__lrint): Replace asm statements with builtin_rint and conversion to long int. * sysdeps/aarch64/fpu/s_lrintf.c (__lrintf): Likewise. * sysdeps/aarch64/fpu/s_lround.c (__lround): Replace asm statements with builtin_lround. * sysdeps/aarch64/fpu/s_lroundf.c (__lroundf): Replace asm statements with builtin_lroundf. * sysdeps/aarch64/fpu/s_nearbyint.c (__nearbyint): Replace asm statements with __builtin_nearbyint. * sysdeps/aarch64/fpu/s_nearbyintf.c (__nearbyintf): Replace asm statements with __builtin_nearbyintf. * sysdeps/aarch64/fpu/s_rint.c (__rint): Replace asm statements with __builtin_rint. * sysdeps/aarch64/fpu/s_rintf.c (__rintf): Replace asm statements with __builtin_rintf. * sysdeps/aarch64/fpu/s_round.c (__round): Replace asm statements with __builtin_round. * sysdeps/aarch64/fpu/s_roundf.c (__roundf): Replace asm statements with __builtin_roundf. * sysdeps/aarch64/fpu/s_trunc.c (__trunc): Replace asm statements with __builtin_trunc. * sysdeps/aarch64/fpu/s_truncf.c (__truncf): Replace asm statements with __builtin_truncf. * sysdeps/aarch64/fpu/Makefile: Build e_sqrt[f].c with -fno-math-errno.
2017-10-23  PowerPC64 power8 strncpy cfi fixes  (Alan Modra)
cfi info for a stack adjust needs to be on the insn doing the adjust. cfi describing register saves can be anywhere after the save insn but before the reg is altered. Fewer locations with cfi result in smaller cfi programs and possibly slightly faster exception handling. Thus the LR cfi_offset move. The idea behind adjusting sp after restoring regs is to break a register dependency chain, in this case not using r1 immediately after it is modified. The missing LR cfi_restore meant that code after the blr, at unaligned_lt_16 and other labels, would have cfi saying LR was at cfa+16, but that code is reached without LR being saved. * sysdeps/powerpc/powerpc64/power8/strncpy.S: Move LR cfi. Adjust stack after restoring regs. Add missing LR cfi_restore. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>
2017-10-23  PowerPC64 power7 strncpy stack handling and cfi  (Alan Modra)
This patch moves the frame setup and teardown to immediately around the single memset call, as has been done for power8. I've also decreased FRAMESIZE to that needed to save the two callee-saved registers used. Plus added cfi. * sysdeps/powerpc/powerpc64/power7/strncpy.S: Decrease FRAMESIZE. Move LR save and frame setup/teardown and LR restore to immediately around memset call. Provide cfi. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>
2017-10-22  i386: Replace assembly versions of e_powf with generic e_powf.c  (H.J. Lu)
This patch replaces i386 assembly versions of e_powf with generic e_powf.c. For workload-spec2017.wrf, on Nehalem, it improves performance by:

                               Before     After      Improvement
    reciprocal-throughput      230.855    78.3358    194%
    latency                    231.685    94.1259    146%

On Skylake, it improves performance by:

                               Before     After      Improvement
    reciprocal-throughput      239.858    47.4713    405%
    latency                    247.57     93.8798    163%

On IvyBridge with --disable-multi-arch, it improves performance by:

                               Before     After      Improvement
    reciprocal-throughput      269.078    63.3758    324%
    latency                    271.473    102.091    165%

* sysdeps/i386/fpu/e_powf.S: Removed. * sysdeps/i386/fpu/e_powf_log2_data.c: Likewise. * sysdeps/i386/fpu/w_powf.c: Likewise. * sysdeps/i386/fpu/libm-test-ulps: Updated for generic e_powf.c. * sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise. * sysdeps/i386/i686/fpu/multiarch/Makefile (libm-sysdep_routines): Add e_powf-sse2. (CFLAGS-e_powf-sse2.c): New. * sysdeps/i386/i686/fpu/multiarch/e_powf-sse2.c: New file. * sysdeps/i386/i686/fpu/multiarch/e_powf.c: Likewise.
2017-10-22  i386: Replace assembly versions of e_log2f with generic e_log2f.c  (H.J. Lu)
This patch replaces i386 assembly versions of e_log2f with generic e_log2f.c. For workload-spec2017.wrf, on Nehalem, it improves performance by:

                               Before     After      Improvement
    reciprocal-throughput      92.3845    30.8752    199%
    latency                    112.855    54.8645    105%

On Skylake, it improves performance by:

                               Before     After      Improvement
    reciprocal-throughput      98.7488    22.7507    334%
    latency                    118.01     51.6083    128%

On IvyBridge with --disable-multi-arch, it improves performance by:

                               Before     After      Improvement
    reciprocal-throughput      106.635    28.8596    269%
    latency                    129.888    56.9187    128%

* sysdeps/i386/fpu/e_log2f.S: Removed. * sysdeps/i386/fpu/e_log2f_data.c: Likewise. * sysdeps/i386/fpu/w_log2f.c: Likewise. * sysdeps/i386/fpu/libm-test-ulps: Updated for generic e_log2f.c. * sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise. * sysdeps/i386/i686/fpu/multiarch/Makefile (libm-sysdep_routines): Add e_log2f-sse2. (CFLAGS-e_log2f-sse2.c): New. * sysdeps/i386/i686/fpu/multiarch/e_log2f-sse2.c: New file. * sysdeps/i386/i686/fpu/multiarch/e_log2f.c: Likewise.
2017-10-22  x86-64: Add powf with FMA  (H.J. Lu)
For workload-spec2017.wrf, on Skylake, it improves performance by:

                               Before     After      Improvement
    reciprocal-throughput      35.4713    27.3842    29%
    latency                    82.4537    66.3175    24%

* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines): Add e_powf-fma. (CFLAGS-e_powf-fma.c): New. * sysdeps/x86_64/fpu/multiarch/e_powf-fma.c: New file. * sysdeps/x86_64/fpu/multiarch/e_powf.c: Likewise.
2017-10-22  x86-64: Add log2f with FMA  (H.J. Lu)
For workload-spec2017.wrf, on Skylake, it improves performance by:

                               Before     After      Improvement
    reciprocal-throughput      16.5937    14.0789    17%
    latency                    41.7755    35.3586    18%

* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines): Add e_log2f-fma. (CFLAGS-e_log2f-fma.c): New. * sysdeps/x86_64/fpu/multiarch/e_log2f-fma.c: New file. * sysdeps/x86_64/fpu/multiarch/e_log2f.c: Likewise.
2017-10-22  x86-64: Add logf with FMA  (H.J. Lu)
For workload-spec2017.wrf, on Skylake, it improves performance by:

                               Before     After      Improvement
    reciprocal-throughput      16.1534    13.8874    16%
    latency                    41.9642    34.3072    22%

* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines): Add e_logf-fma. (CFLAGS-e_logf-fma.c): New. * sysdeps/x86_64/fpu/multiarch/e_logf-fma.c: New file. * sysdeps/x86_64/fpu/multiarch/e_logf.c: Likewise.