author     Adhemerval Zanella <adhemerval.zanella@linaro.org>    2016-12-15 18:17:09 -0200
committer  Adhemerval Zanella <adhemerval.zanella@linaro.org>    2016-12-27 10:50:41 -0200
commit     3daef2c8ee4df29b9806e3bb2f407417c1222e9a (patch)
tree       b752089e0a3a443da08b6161e1ef6c626292e854 /sysdeps/x86_64
parent     cecbc7967f0bcac718b6f8f8942b58403c0e917c (diff)
Fix x86_64 memchr for large input sizes
The current optimized memchr for x86_64, for input pointers whose value modulo 64 is in the range [49,63], performs a pointer addition that can overflow when the search character is not found in the remainder of that 64-byte block:
  * sysdeps/x86_64/memchr.S

    77          .p2align 4
    78  L(unaligned_no_match):
    79          add     %rcx, %rdx      /* adds (uintptr_t) s % 16 to n in %rdx */
    80          sub     $16, %rdx
    81          jbe     L(return_null)
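As a concrete illustration (a hypothetical sketch, not taken from the patch), the following C snippet mimics the arithmetic the assembly performs when memchr is called with a length close to SIZE_MAX; the register values are assumptions chosen for the example:

    #include <stdint.h>
    #include <stdio.h>

    int
    main (void)
    {
      /* Assumed register contents at L(unaligned_no_match):
         %rcx holds (uintptr_t) s % 16, %rdx holds the length n.  */
      uint64_t rcx = 9;
      uint64_t rdx = UINT64_MAX;    /* e.g. memchr called with n == SIZE_MAX */

      uint64_t end = rcx + rdx;     /* add %rcx, %rdx -- wraps around to 8 */

      /* The following "sub $16, %rdx; jbe L(return_null)" then sees at
         most 8 remaining bytes, takes the branch and returns NULL, even
         though the search character may be present in the buffer.  */
      printf ("rcx + rdx = %llu (wrapped: %s)\n",
              (unsigned long long) end, end < rcx ? "yes" : "no");
      return 0;
    }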
This patch fixes the problem with saturated arithmetic: if the addition overflows, the result is clamped to the maximum pointer value (UINTPTR_MAX).
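In C terms, the saturating addition that the new add/sbb/or sequence performs could be sketched as follows (an illustrative equivalent, not code from the patch; the function name is made up):

    #include <stdint.h>

    /* Mirrors the new assembly:
         add %rcx, %rdx    -- rdx = rcx + rdx (may set the carry flag)
         sbb %rax, %rax    -- rax = all-ones if the add carried, else 0
         or  %rax, %rdx    -- clamp the sum to UINT64_MAX on overflow  */
    static uint64_t
    saturated_add (uint64_t rcx, uint64_t rdx)
    {
      uint64_t sum = rcx + rdx;
      uint64_t mask = -(uint64_t) (sum < rcx);  /* all-ones iff the addition wrapped */
      return sum | mask;
    }

With the sum clamped to UINTPTR_MAX, the subsequent "sub $16, %rdx; jbe L(return_null)" check can no longer fire spuriously for large n.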
Checked on x86_64-linux-gnu and powerpc64-linux-gnu.
[BZ #19387]
* sysdeps/x86_64/memchr.S (memchr): Avoid overflow in pointer
addition.
* string/test-memchr.c (do_test): Remove alignment limitation.
(test_main): Add test that triggers BZ #19387.
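The test added to string/test-memchr.c is not reproduced here; a regression check for this bug could look roughly like the following (a hypothetical, self-contained sketch; check_large_length and the concrete offsets are made up for illustration):

    #include <assert.h>
    #include <stdalign.h>
    #include <stdint.h>
    #include <string.h>

    /* Exercise the overflowing path: start the search at an address whose
       value modulo 64 is in [49, 63] and pass a huge length.  The match
       lies inside the buffer, so memchr must return it, not NULL.  */
    static void
    check_large_length (void)
    {
      alignas (64) char buf[128];
      memset (buf, 'a', sizeof buf);

      char *start = buf + 49;      /* (uintptr_t) start % 64 == 49 */
      buf[70] = 'x';               /* match 21 bytes after start */

      assert (memchr (start, 'x', SIZE_MAX) == &buf[70]);
    }

Passing SIZE_MAX as the length relies on memchr stopping at the first match; before the fix, the wrapped internal length made the search give up and return NULL instead.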
Diffstat (limited to 'sysdeps/x86_64')
-rw-r--r--   sysdeps/x86_64/memchr.S   6
1 file changed, 6 insertions, 0 deletions
diff --git a/sysdeps/x86_64/memchr.S b/sysdeps/x86_64/memchr.S
index 132eacba8f..1e34568039 100644
--- a/sysdeps/x86_64/memchr.S
+++ b/sysdeps/x86_64/memchr.S
@@ -76,7 +76,13 @@ L(crosscache):
 	.p2align 4
 L(unaligned_no_match):
+	/* Calculate the last acceptable address and check for possible
+	   addition overflow by using saturated math:
+	   rdx = rcx + rdx
+	   rdx |= -(rdx < rcx) */
 	add	%rcx, %rdx
+	sbb	%rax, %rax
+	or	%rax, %rdx
 	sub	$16, %rdx
 	jbe	L(return_null)
 	add	$16, %rdi