From 13efa86ece61bf84daca50cab30db1b0902fe2db Mon Sep 17 00:00:00 2001
From: "H.J. Lu"
Date: Thu, 30 Jun 2016 07:57:07 -0700
Subject: Check Prefer_ERMS in memmove/memcpy/mempcpy/memset

Although the Enhanced REP MOVSB/STOSB (ERMS) implementations of memmove,
memcpy, mempcpy and memset aren't used by current processors, this patch
adds a Prefer_ERMS check in memmove, memcpy, mempcpy and memset so that
they can be used in the future.

	* sysdeps/x86/cpu-features.h (bit_arch_Prefer_ERMS): New.
	(index_arch_Prefer_ERMS): Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Return
	__memcpy_erms for Prefer_ERMS.
	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
	(__memmove_erms): Enabled for libc.a.
	* sysdeps/x86_64/multiarch/memmove.S (__libc_memmove): Return
	__memmove_erms for Prefer_ERMS.
	* sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Return
	__mempcpy_erms for Prefer_ERMS.
	* sysdeps/x86_64/multiarch/memset.S (memset): Return
	__memset_erms for Prefer_ERMS.
---
 sysdeps/x86_64/multiarch/memcpy.S | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/sysdeps/x86_64/multiarch/memcpy.S b/sysdeps/x86_64/multiarch/memcpy.S
index f6771a4696..df7fbacd8a 100644
--- a/sysdeps/x86_64/multiarch/memcpy.S
+++ b/sysdeps/x86_64/multiarch/memcpy.S
@@ -29,6 +29,9 @@ ENTRY(__new_memcpy)
 	.type	__new_memcpy, @gnu_indirect_function
 	LOAD_RTLD_GLOBAL_RO_RDX
+	lea	__memcpy_erms(%rip), %RAX_LP
+	HAS_ARCH_FEATURE (Prefer_ERMS)
+	jnz	2f
 # ifdef HAVE_AVX512_ASM_SUPPORT
 	HAS_ARCH_FEATURE (AVX512F_Usable)
 	jz	1f
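For readers decoding the three added lines in the __new_memcpy selector: they
load the address of __memcpy_erms into the return register, test the
Prefer_ERMS feature bit, and jump to the selector's return label when the bit
is set, so the ERMS variant is chosen before any of the AVX-512/AVX/SSE2
checks run.  The ERMS variant itself is essentially a single "rep movsb".
Below is a minimal, self-contained C sketch of that selection idea; it is an
illustration only, not the glibc implementation, and the names
cpu_has_prefer_erms, memcpy_erms and resolve_memcpy are hypothetical stand-ins
for HAS_ARCH_FEATURE (Prefer_ERMS), __memcpy_erms and the __new_memcpy IFUNC
resolver.

/* Minimal C sketch of Prefer_ERMS-based memcpy selection; not glibc code.
   x86-64 only, because of the inline "rep movsb".  */

#include <stddef.h>
#include <stdbool.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical stand-in for the Prefer_ERMS feature bit; no current
   processor sets it, so the default here is false.  */
static bool
cpu_has_prefer_erms (void)
{
  return false;
}

/* ERMS variant: the entire copy is one "rep movsb".  */
static void *
memcpy_erms (void *dst, const void *src, size_t n)
{
  void *ret = dst;
  __asm__ volatile ("rep movsb"
                    : "+D" (dst), "+S" (src), "+c" (n)
                    :
                    : "memory");
  return ret;
}

/* IFUNC-style resolver: if Prefer_ERMS is set, return the ERMS variant
   before any other selection; the fallback here is plain libc memcpy.  */
static void *(*resolve_memcpy (void)) (void *, const void *, size_t)
{
  return cpu_has_prefer_erms () ? memcpy_erms : memcpy;
}

int
main (void)
{
  char src[16] = "hello, erms";
  char dst[16] = { 0 };
  void *(*copy) (void *, const void *, size_t) = resolve_memcpy ();
  copy (dst, src, sizeof src);
  puts (dst);
  return 0;
}

Built with gcc -O2 on an x86-64 machine, the program prints the copied string
via the fallback path; changing cpu_has_prefer_erms to return true routes the
copy through the single "rep movsb" path, which is the behaviour this patch
prepares glibc to select once some processor sets Prefer_ERMS.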