author    H.J. Lu <hjl.tools@gmail.com>  2017-06-09 05:42:16 -0700
committer H.J. Lu <hjl.tools@gmail.com>  2017-06-09 05:42:29 -0700
commit    8fe57365bfb5a417d911ab715a5671b3b1d7b155
tree      9b6a729305aaa4f9b30e8a016469fbbd5a7bc664
parent    dc485ceb2ac596d27294cc1942adf3181f15e8bf
x86-64: Optimize strchr/strchrnul/wcschr with AVX2
Optimize strchr/strchrnul/wcschr with AVX2 to search 32 bytes at a time
with vector instructions.  It is as fast as the SSE2 versions for size
<= 16 bytes and up to 1X faster for size > 16 bytes on Haswell.  Select
the AVX2 version on AVX2 machines where vzeroupper is preferred and AVX
unaligned load is fast.
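The selection criteria above can be sketched as a small C model.  This is
not glibc's actual resolver; the flag names are illustrative stand-ins for
glibc's cpu-features bits, and the two implementations are trivial scalar
stand-ins for the real assembly routines:

```c
#include <stdbool.h>
#include <stddef.h>

typedef const char *(*strchr_impl) (const char *, int);

/* Stand-in: byte-at-a-time search (the real version is strchr-sse2.S).
   Returns a pointer to the match, or to the NUL if there is none.  */
static const char *
strchr_sse2_stub (const char *s, int c)
{
  while (*s != (char) c && *s != '\0')
    s++;
  return s;
}

/* Stand-in for the new AVX2 routine (strchr-avx2.S): same observable
   behavior, selected only on capable CPUs.  */
static const char *
strchr_avx2_stub (const char *s, int c)
{
  const char *p = s;
  while (*p != (char) c && *p != '\0')
    p++;
  return p;
}

/* Hypothetical model of the ifunc selector: pick AVX2 only when the CPU
   has AVX2, does not prefer avoiding vzeroupper, and has fast AVX
   unaligned loads -- mirroring the commit's selection criteria.  */
static strchr_impl
select_strchr (bool has_avx2, bool prefer_no_vzeroupper,
	       bool avx_fast_unaligned_load)
{
  if (has_avx2 && !prefer_no_vzeroupper && avx_fast_unaligned_load)
    return strchr_avx2_stub;
  return strchr_sse2_stub;
}
```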
NB: It uses TZCNT instead of BSF since TZCNT produces the same result
as BSF for non-zero input.  TZCNT is faster than BSF and is executed
as BSF on machines that don't support TZCNT.
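The 32-byte search can be modeled in portable C.  In the real assembly,
VPCMPEQB compares the block against the target character and against zero,
VPOR merges the results, VPMOVMSKB extracts one bit per byte, and TZCNT of
that mask gives the offset of the first hit.  Below, the mask is built byte
by byte and __builtin_ctz stands in for TZCNT; `model_strchrnul` is a
hypothetical name, and the real code also aligns the pointer so a 32-byte
load cannot cross a page boundary:

```c
#include <stdint.h>

/* Scalar model of the AVX2 strchrnul inner loop: each 32-byte block
   yields one mask bit per byte matching either the target character or
   the terminating NUL; the trailing-zero count of the mask is the offset
   of the first hit, exactly what TZCNT computes on the VPMOVMSKB result.  */
static const char *
model_strchrnul (const char *s, int c)
{
  for (;;)
    {
      uint32_t mask = 0;
      for (int i = 0; i < 32; i++)
	{
	  unsigned char b = (unsigned char) s[i];
	  if (b == (unsigned char) c || b == '\0')
	    mask |= (uint32_t) 1 << i;
	  if (b == '\0')
	    break;		/* Don't scan past the terminator.  */
	}
      if (mask != 0)
	return s + __builtin_ctz (mask);  /* TZCNT; mask is non-zero.  */
      s += 32;			/* No hit in this 32-byte block.  */
    }
}
```

Note why TZCNT (not BSF) is safe here: the mask is only consumed when it is
non-zero, the one case where the two instructions agree.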
* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
strchr-sse2, strchrnul-sse2, strchr-avx2, strchrnul-avx2,
wcschr-sse2 and wcschr-avx2.
* sysdeps/x86_64/multiarch/ifunc-impl-list.c
(__libc_ifunc_impl_list): Add tests for __strchr_avx2,
__strchrnul_avx2, __strchrnul_sse2, __wcschr_avx2 and
__wcschr_sse2.
* sysdeps/x86_64/multiarch/strchr-avx2.S: New file.
* sysdeps/x86_64/multiarch/strchr-sse2.S: Likewise.
* sysdeps/x86_64/multiarch/strchr.c: Likewise.
* sysdeps/x86_64/multiarch/strchrnul-avx2.S: Likewise.
* sysdeps/x86_64/multiarch/strchrnul-sse2.S: Likewise.
* sysdeps/x86_64/multiarch/strchrnul.c: Likewise.
* sysdeps/x86_64/multiarch/wcschr-avx2.S: Likewise.
* sysdeps/x86_64/multiarch/wcschr-sse2.S: Likewise.
* sysdeps/x86_64/multiarch/wcschr.c: Likewise.
* sysdeps/x86_64/multiarch/strchr.S: Removed.