author		H.J. Lu <hjl.tools@gmail.com>	2016-03-28 04:39:48 -0700
committer	H.J. Lu <hjl.tools@gmail.com>	2016-03-28 04:40:03 -0700
commit		e41b395523040fcb58c7d378475720c2836d280c (patch)
tree		7a4271638219c8d5b141039178105b0f1564ea16 /sysdeps/x86_64/multiarch/memcpy.S
parent		b66d837bb5398795c6b0f651bd5a5d66091d8577 (diff)
[x86] Add a feature bit: Fast_Unaligned_Copy
On AMD processors, memcpy optimized with unaligned SSE load is slower
than memcpy optimized with aligned SSSE3, while other string functions
are faster with unaligned SSE load.  A feature bit, Fast_Unaligned_Copy,
is added to select memcpy optimized with unaligned SSE load.

	[BZ #19583]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel
	processors.  Set Fast_Copy_Backward for AMD Excavator
	processors.
	* sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy): New.
	(index_arch_Fast_Unaligned_Copy): Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check
	Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
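To make the mechanism concrete, the following is a minimal C sketch of how
a feature bit such as Fast_Unaligned_Copy can be defined and populated at
CPU-detection time.  The bit positions, the arch_features word, and the
init_features_sketch helper are illustrative assumptions for this page,
not the verbatim glibc source.

    /* Sketch of a per-architecture feature word and the two bits
       involved in this patch.  Bit positions are illustrative.  */
    #include <stdint.h>

    static uint32_t arch_features;

    #define bit_arch_Fast_Unaligned_Load  (1u << 17)  /* illustrative */
    #define bit_arch_Fast_Unaligned_Copy  (1u << 18)  /* illustrative */

    #define HAS_ARCH_FEATURE(name) \
      ((arch_features & bit_arch_##name) != 0)

    /* Hypothetical detection-time helper.  Processors whose string
       functions benefit from unaligned SSE loads get
       Fast_Unaligned_Load, but only Intel also gets
       Fast_Unaligned_Copy, so AMD memcpy keeps the aligned SSSE3
       path (BZ #19583).  */
    static void
    init_features_sketch (int is_intel, int fast_unaligned)
    {
      if (fast_unaligned)
        {
          arch_features |= bit_arch_Fast_Unaligned_Load;
          if (is_intel)
            arch_features |= bit_arch_Fast_Unaligned_Copy;
        }
    }

Splitting the one property into two bits is the whole point of the patch:
it lets memcpy be steered independently of the other string functions.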
Diffstat (limited to 'sysdeps/x86_64/multiarch/memcpy.S')
-rw-r--r--	sysdeps/x86_64/multiarch/memcpy.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sysdeps/x86_64/multiarch/memcpy.S b/sysdeps/x86_64/multiarch/memcpy.S
index 8882590e51..5b045d7847 100644
--- a/sysdeps/x86_64/multiarch/memcpy.S
+++ b/sysdeps/x86_64/multiarch/memcpy.S
@@ -42,7 +42,7 @@ ENTRY(__new_memcpy)
HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
jnz 2f
lea __memcpy_sse2_unaligned(%rip), %RAX_LP
- HAS_ARCH_FEATURE (Fast_Unaligned_Load)
+ HAS_ARCH_FEATURE (Fast_Unaligned_Copy)
jnz 2f
lea __memcpy_sse2(%rip), %RAX_LP
HAS_CPU_FEATURE (SSSE3)
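For readers who prefer C to assembly, this is a hedged rendering of the
dispatch chain that __new_memcpy implements above.  The variant names come
from the hunk itself; the select_memcpy resolver name is hypothetical, and
the final SSSE3 fallback is an assumption, since the hunk is truncated
after the HAS_CPU_FEATURE (SSSE3) line.

    #include <stddef.h>

    /* Stand-ins so the sketch compiles on its own; real resolvers use
       glibc's internal <cpu-features.h> macros (a toy HAS_ARCH_FEATURE
       is sketched after the commit message above).  */
    #ifndef HAS_ARCH_FEATURE
    # define HAS_ARCH_FEATURE(name) 0
    # define HAS_CPU_FEATURE(name)  0
    #endif

    typedef void *(*memcpy_fn) (void *, const void *, size_t);

    /* Variant implementations live elsewhere in the multiarch tree.  */
    extern void *__memcpy_avx_unaligned (void *, const void *, size_t);
    extern void *__memcpy_sse2_unaligned (void *, const void *, size_t);
    extern void *__memcpy_sse2 (void *, const void *, size_t);
    extern void *__memcpy_ssse3 (void *, const void *, size_t);

    static memcpy_fn
    select_memcpy (void)   /* hypothetical resolver name */
    {
      if (HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load))
        return __memcpy_avx_unaligned;
      /* The patched line: memcpy now keys on Fast_Unaligned_Copy, so
         AMD parts that set Fast_Unaligned_Load for the other string
         functions still avoid the unaligned-SSE memcpy.  */
      if (HAS_ARCH_FEATURE (Fast_Unaligned_Copy))
        return __memcpy_sse2_unaligned;
      if (!HAS_CPU_FEATURE (SSSE3))
        return __memcpy_sse2;
      return __memcpy_ssse3;   /* assumed fallback beyond the hunk */
    }

Because only the feature bit tested on one line changes, the patch leaves
every other branch of the dispatch chain exactly as it was.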