author		H.J. Lu <hjl.tools@gmail.com>	2017-06-05 12:52:41 -0700
committer	H.J. Lu <hjl.tools@gmail.com>	2017-06-05 12:52:55 -0700
commit		935971ba6b4eaf67a34e4651434ba9b61e7355cc (patch)
tree		6daa389845c92adc076839582eb94871347b6779 /ChangeLog
parent		ef9c4cb6c7abb6340b52e19de31d2a56c8de5844 (diff)
download	glibc-935971ba6b4eaf67a34e4651434ba9b61e7355cc.tar
		glibc-935971ba6b4eaf67a34e4651434ba9b61e7355cc.tar.gz
		glibc-935971ba6b4eaf67a34e4651434ba9b61e7355cc.tar.bz2
		glibc-935971ba6b4eaf67a34e4651434ba9b61e7355cc.zip
x86-64: Optimize memcmp/wmemcmp with AVX2 and MOVBE
Optimize x86-64 memcmp/wmemcmp with AVX2.  It uses vector compares as
much as possible.  It is as fast as SSE4 memcmp for size <= 16 bytes
and up to 2X faster for size > 16 bytes on Haswell and Skylake.  Select
AVX2 memcmp/wmemcmp on AVX2 machines where vzeroupper is preferred and
AVX unaligned load is fast.

NB: It uses TZCNT instead of BSF since TZCNT produces the same result
as BSF for non-zero input.  TZCNT is faster than BSF and is executed
as BSF if the machine doesn't support TZCNT.

Key features:

1. For sizes from 2 to 7 bytes, load as big endian with movbe and bswap
   to avoid branches.
2. Use overlapping compares to avoid branches.
3. Use vector compares when size >= 4 bytes for memcmp or size >= 8
   bytes for wmemcmp.
4. If size is 8 * VEC_SIZE or less, unroll the loop.
5. Compare 4 * VEC_SIZE at a time with the aligned first memory area.
6. Use 2 vector compares when size is 2 * VEC_SIZE or less.
7. Use 4 vector compares when size is 4 * VEC_SIZE or less.
8. Use 8 vector compares when size is 8 * VEC_SIZE or less.

	* sysdeps/x86/cpu-features.h (index_cpu_MOVBE): New.
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memcmp-avx2 and wmemcmp-avx2.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Test __memcmp_avx2 and __wmemcmp_avx2.
	* sysdeps/x86_64/multiarch/memcmp-avx2.S: New file.
	* sysdeps/x86_64/multiarch/wmemcmp-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/memcmp.S: Use __memcmp_avx2 on AVX
	2 machines if AVX unaligned load is fast and vzeroupper is
	preferred.
	* sysdeps/x86_64/multiarch/wmemcmp.S: Use __wmemcmp_avx2 on AVX
	2 machines if AVX unaligned load is fast and vzeroupper is
	preferred.
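Features 1 and 2 above (big-endian loads plus overlapping compares) can
be sketched in C for a 4-to-7-byte compare.  This is an illustrative
model, not glibc's actual assembly: the helper names `load32_be` and
`memcmp_4to7` are hypothetical, and the byte-swapping load stands in for
what the .S files do with movbe/bswap.  Because the loads are
big-endian, unsigned integer order matches lexicographic byte order, so
no per-byte branching is needed; the tail load deliberately overlaps the
head load when n < 8, covering every length from 4 to 7 with exactly two
compares.

```c
#include <stddef.h>
#include <stdint.h>

/* Load 4 bytes as a big-endian 32-bit value, so that integer
   comparison of the results matches byte-wise lexicographic order
   (the role movbe/bswap plays in the assembly version).  */
static uint32_t load32_be(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Branch-reduced memcmp for 4 <= n <= 7: compare the first 4 bytes,
   then the last 4 bytes.  For n < 8 the two loads overlap, so the
   whole buffer is covered without a length-dependent loop.  */
static int memcmp_4to7(const void *s1, const void *s2, size_t n)
{
    const unsigned char *a = s1, *b = s2;

    uint32_t a_head = load32_be(a);
    uint32_t b_head = load32_be(b);
    if (a_head != b_head)
        return a_head < b_head ? -1 : 1;

    uint32_t a_tail = load32_be(a + n - 4);  /* overlaps head when n < 8 */
    uint32_t b_tail = load32_be(b + n - 4);
    if (a_tail != b_tail)
        return a_tail < b_tail ? -1 : 1;

    return 0;
}
```

The same overlapping-load trick scales up: the full implementation
applies it with 32-byte AVX2 vectors in place of 32-bit integer loads.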
Diffstat (limited to 'ChangeLog')
-rw-r--r--	ChangeLog	16
1 file changed, 16 insertions, 0 deletions
diff --git a/ChangeLog b/ChangeLog
index 55c708a12e..303e1892e4 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,5 +1,21 @@
 2017-06-05  H.J. Lu  <hongjiu.lu@intel.com>
 
+ * sysdeps/x86/cpu-features.h (index_cpu_MOVBE): New.
+ * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
+ memcmp-avx2 and wmemcmp-avx2.
+ * sysdeps/x86_64/multiarch/ifunc-impl-list.c
+ (__libc_ifunc_impl_list): Test __memcmp_avx2 and __wmemcmp_avx2.
+ * sysdeps/x86_64/multiarch/memcmp-avx2.S: New file.
+ * sysdeps/x86_64/multiarch/wmemcmp-avx2.S: Likewise.
+ * sysdeps/x86_64/multiarch/memcmp.S: Use __memcmp_avx2 on AVX
+ 2 machines if AVX unaligned load is fast and vzeroupper is
+ preferred.
+ * sysdeps/x86_64/multiarch/wmemcmp.S: Use __wmemcmp_avx2 on AVX
+ 2 machines if AVX unaligned load is fast and vzeroupper is
+ preferred.
+
+2017-06-05 H.J. Lu <hongjiu.lu@intel.com>
+
* include/wchar.h (__wmemset_chk): New.
* sysdeps/x86_64/memset.S (VDUP_TO_VEC0_AND_SET_RETURN): Renamed
to MEMSET_VDUP_TO_VEC0_AND_SET_RETURN.