From 7a25d6a84df9fea56963569ceccaaf7c2a88f161 Mon Sep 17 00:00:00 2001
From: Nick Alcock
Date: Wed, 23 Mar 2016 13:40:14 +0100
Subject: x86, pthread_cond_*wait: Do not depend on %eax not being clobbered

The x86-specific versions of both pthread_cond_wait and
pthread_cond_timedwait have (in their fall-back-to-futex-wait slow
paths) calls to __pthread_mutex_cond_lock_adjust followed by
__pthread_mutex_unlock_usercnt, which load the parameters before the
first call but then assume that the first parameter, in %eax, will
survive unaffected.  This happens to have been true before now, but
%eax is a call-clobbered register, and this assumption is not safe: it
could change at any time, at GCC's whim, and indeed the stack-protector
canary checking code clobbers %eax while checking that the canary is
uncorrupted.

So reload %eax before calling __pthread_mutex_unlock_usercnt.  (Do this
unconditionally, even when stack-protection is not in use, because it's
the right thing to do, it's a slow path, and anything else is dicing
with death.)

	* sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S: Reload
	call-clobbered %eax on retry path.
	* sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S: Likewise.
---
 ChangeLog | 6 ++++++
 1 file changed, 6 insertions(+)

(limited to 'ChangeLog')

diff --git a/ChangeLog b/ChangeLog
index 54454a54be..b7574b06ea 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,9 @@
+2016-03-23  Nick Alcock
+
+	* sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S: Reload
+	call-clobbered %eax on retry path.
+	* sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S: Likewise.
+
 2016-03-22  H.J. Lu
 
 	* sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S (MEMCPY):
--
cgit v1.2.3-70-g09d2
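
Note: the .S hunks themselves are not shown in this ChangeLog-limited
diff.  The following is only a minimal sketch of the fixed call
sequence described in the commit message, in AT&T-syntax i386
assembly; the dep_mutex offset, the use of %ebx as the condvar
pointer, and the zeroed second argument are illustrative assumptions,
not the literal patch.

	/* First argument (the mutex) goes in %eax under the internal
	   i386 register-passing convention described above.  */
	movl	dep_mutex(%ebx), %eax	/* assumed: mutex pointer from condvar */
	call	__pthread_mutex_cond_lock_adjust
	/* %eax is call-clobbered, so it must not be assumed to survive
	   the call above: reload it before the next call.  */
	movl	dep_mutex(%ebx), %eax	/* reload first argument */
	xorl	%edx, %edx		/* second argument, illustrative only */
	call	__pthread_mutex_unlock_usercnt

The point of the fix is the second movl: before it, the code relied on
__pthread_mutex_cond_lock_adjust leaving %eax untouched, which nothing
in the ABI guarantees.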