commit     aff4519d380c863ed0f259a1387cb473d3f3bad2
author     Ulrich Drepper <drepper@redhat.com>  2003-01-12 10:11:16 +0000
committer  Ulrich Drepper <drepper@redhat.com>  2003-01-12 10:11:16 +0000
tree       825718efb1a8a4598d6de379d70a05ee034ab4f2 /linuxthreads/pthread.c
parent     26a676d0aae53e2e770581c133356d8c0e39268c
Update.
2003-01-11 Jim Meyering <jim@meyering.net>
* io/ftw.c [HAVE_CONFIG_H]: Include <config.h>.
[HAVE_SYS_PARAM_H || _LIBC]: Guard inclusion of <sys/param.h>.
Include <sys/stat.h>, not <include/sys/stat.h>, if !_LIBC.
[!_LIBC] (__chdir, __closedir, __fchdir, __getcwd, __opendir): Define.
[!_LIBC] (__readdir64, __tdestroy, __tfind, __tsearch): Define.
[!_LIBC] (internal_function, dirent64, MAX): Define.
(__set_errno): Define if not already defined.
(open_dir_stream): When FTW_CHDIR is enabled, invoke opendir on
the basename, not the entire file name.
(process_entry): When FTW_CHDIR is enabled, invoke XSTAT or LXSTAT on
the basename, not the entire file name.
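For context, the [!_LIBC] guards and fallback definitions listed in the io/ftw.c
entry above typically take the following shape when a source file is shared
between glibc and standalone (gnulib-style) builds. This is a sketch inferred
from the entry; the exact replacement names are assumptions, not the verbatim
file contents:

#ifdef HAVE_CONFIG_H
# include <config.h>
#endif

#if defined HAVE_SYS_PARAM_H || defined _LIBC
# include <sys/param.h>
#endif

#ifndef _LIBC
# include <sys/stat.h>
# define __chdir chdir
# define __closedir closedir
# define __fchdir fchdir
# define __getcwd getcwd
# define __opendir opendir
# define __readdir64 readdir
# define __tdestroy tdestroy
# define __tfind tfind
# define __tsearch tsearch
# define internal_function /* empty */
# define dirent64 dirent
# define MAX(a, b) ((a) > (b) ? (a) : (b))
#endif

#ifndef __set_errno
# define __set_errno(val) errno = (val)
#endif

The __set_errno fallback lets the same error-reporting code compile outside
glibc without further changes.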
2003-01-12 Ulrich Drepper <drepper@redhat.com>
* string/tester.c (test_strcpy): Disable last added strcpy until
it is fixed.
2003-01-11 Philip Blundell <philb@gnu.org>
* sysdeps/unix/sysv/linux/arm/socket.S: Add cancellation support.
2003-01-11 Andreas Schwab <schwab@suse.de>
* Makerules: Add vpath for %.dynsym and %.so so that the
implicit rule chaining for check-abi works.
2003-01-11 Kaz Kojima <kkojima@rr.iij4u.or.jp>
* sysdeps/unix/sysv/linux/sh/sysdep.h (SYSCALL_ERROR_HANDLER):
Add non-PIC case.
2003-01-11 Jakub Jelinek <jakub@redhat.com>
* elf/tls-macros.h [__ia64__] (__TLS_CALL_CLOBBERS): Define.
[__ia64__] (TLS_LE, TLS_IE): Fix typos. Add ;; at start of asm if
gp is used early.
[__ia64__] (TLS_LD, TLS_GD): Likewise. Use __TLS_CALL_CLOBBERS.
* elf/Makefile ($(objpfx)tst-tlsmod5.so, $(objpfx)tst-tlsmod6.so):
Ensure libc.so in DT_NEEDED.
* sysdeps/alpha/dl-machine.h (elf_machine_rela): Move
CHECK_STATIC_TLS before l_tls_offset use.
* sysdeps/i386/dl-machine.h (elf_machine_rel, elf_machine_rela):
Likewise.
* sysdeps/sh/dl-machine.h (elf_machine_rela): Likewise.
* sysdeps/generic/dl-tls.c (_dl_allocate_tls_storage) [TLS_DTV_AT_TP]:
Allocate TLS_PRE_TCB_SIZE bytes below result.
(_dl_deallocate_tls) [TLS_DTV_AT_TP]: Adjust before freeing.
* sysdeps/generic/libc-tls.c (__libc_setup_tls): If
TLS_INIT_TP_EXPENSIVE is not defined, allocate even if no PT_TLS
segment has been found. If TLS_DTV_AT_TP, allocate TLS_PRE_TCB_SIZE
bytes below result and add tcb_offset to memsz.
* sysdeps/ia64/dl-tls.h (__tls_get_addr): New prototype.
* sysdeps/ia64/dl-machine.h: Include tls.h.
(elf_machine_type_class): Return ELF_RTYPE_CLASS_PLT for TLS relocs
too.
(elf_machine_rela): Assume if sym_map != NULL sym is non-NULL too.
Handle R_IA64_DTPMOD*, R_IA64_DTPREL* and R_IA64_TPREL* relocations.
* sysdeps/ia64/libc-tls.c: New file.
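The dl-tls.c and libc-tls.c entries above both hinge on the TLS_DTV_AT_TP
layout, where the thread descriptor sits immediately below the tcbhead_t that
_dl_allocate_tls_storage returns. A minimal sketch of that allocation scheme,
assuming placeholder values for TLS_PRE_TCB_SIZE and the layout macro (the real
definitions live in each architecture's tls.h, not here):

#include <stdlib.h>

#ifndef TLS_PRE_TCB_SIZE
# define TLS_PRE_TCB_SIZE 512   /* placeholder; the real value comes from tls.h */
#endif
#ifndef TLS_DTV_AT_TP
# define TLS_DTV_AT_TP 1        /* assume the DTV-at-TP layout for this sketch */
#endif

void *
allocate_tls_storage_sketch (size_t size)
{
#if TLS_DTV_AT_TP
  /* Reserve TLS_PRE_TCB_SIZE bytes for the thread descriptor below the
     TCB and return a pointer to where the tcbhead_t will live.  */
  char *block = malloc (TLS_PRE_TCB_SIZE + size);
  return block == NULL ? NULL : (void *) (block + TLS_PRE_TCB_SIZE);
#else
  /* TLS_TCB_AT_TP: the TCB sits at the top of the block, no prefix needed.  */
  return malloc (size);
#endif
}

void
deallocate_tls_storage_sketch (void *tcb)
{
#if TLS_DTV_AT_TP
  /* Undo the prefix adjustment before handing the block back to free.  */
  free ((char *) tcb - TLS_PRE_TCB_SIZE);
#else
  free (tcb);
#endif
}

The deallocation side has to undo the same adjustment, which is what the
_dl_deallocate_tls change above refers to.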
2003-01-10 Steven Munroe <sjmunroe@us.ibm.com>
* sysdeps/powerpc/powerpc64/sysdep.h (PSEUDO_RET): Add branch hint.
* sysdeps/unix/sysv/linux/powerpc/bits/stat.h (STAT_VER_LINUX):
Fix type. Move definition out of #if.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/ftruncate64.c: New file.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/pread.c: New file.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/pread64.c: New file.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/pwrite.c: New file.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/pwrite64.c: New file.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/socket.S: Add cancellation
support.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/syscalls.list: Remove
ftruncate64, pread64, pwrite64, truncate64 entries.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/sysdep.h
(INLINE_SYSCALL): New version that supports function-call-like
syscalls. Add __builtin_expect.
(LOADARGS_n): Add argument size safety checks.
(INTERNAL_SYSCALL): New macro.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/truncate64.c: New file.
* sysdeps/unix/sysv/linux/powerpc/sys/procfs.h [__PPC_ELF_H]: Avoid
redefinition of elf_fpreg_t and elf_fpregset_t.
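To make the powerpc64 sysdep.h entry easier to follow: INTERNAL_SYSCALL performs
the raw call, and INLINE_SYSCALL layers the usual errno/-1 convention on top,
with the error branch marked unlikely via __builtin_expect. The sketch below is
architecture-neutral and built on syscall(2) purely for illustration; the
hypothetical *_SKETCH names are not the real macros, which expand directly to
the powerpc64 function-call-like syscall sequence and the LOADARGS_n checks:

#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>

/* "Internal" form: raw call, no errno handling (sketched via syscall(2)).  */
#define INTERNAL_SYSCALL_SKETCH(name, ...) \
  syscall (SYS_##name, __VA_ARGS__)

/* "Inline" form: same call, with the error branch predicted unlikely and
   folded into the errno/-1 convention the public wrappers expect.  */
#define INLINE_SYSCALL_SKETCH(name, ...)                               \
  ({ long int sc_ret_ = INTERNAL_SYSCALL_SKETCH (name, __VA_ARGS__);  \
     if (__builtin_expect (sc_ret_ == -1, 0))                          \
       sc_ret_ = -1L;   /* errno already set by syscall(2) here */     \
     sc_ret_; })

A call such as INLINE_SYSCALL_SKETCH (write, 1, "hi\n", 3) then behaves like the
public write wrapper, returning -1 with errno set on failure.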
2003-01-12 Ulrich Drepper <drepper@redhat.com>
* elf/dl-close.c (_dl_close): Add several asserts. Correct and
simplify test for unloading. If loader of a DSO is unloaded do not
use its scope anymore. Fall back to own scope and adjust opencounts.
Fix several comments.
* elf/dl-deps.c (_dl_map_object_deps): Always allocate memory for
the l_searchlist, not only for l_initfini.
* elf/dl-lookup.c (add_dependencies): Avoid creating relocation
dependencies if objects cannot be removed. Remove object with the
definition as not unloadable if necessary.
* elf/reldep6.c: Create relocation dependency before closing the first
module.
2003-01-10 Guido Günther <agx@sigxcpu.org>
* elf/Makefile: Add rules to build and run reldep9 test.
* elf/reldep9.c: New file.
* elf/reldep9mod1.c: New file.
* elf/reldep9mod2.c: New file.
* elf/reldep9mod3.c: New file.
2003-01-09 Jakub Jelinek <jakub@redhat.com>
* elf/Makefile: Add rules to build and run nodelete2 test.
* elf/nodelete2.c: New file.
* elf/nodel2mod1.c: New file.
* elf/nodel2mod2.c: New file.
* elf/nodel2mod3.c: New file.
Diffstat (limited to 'linuxthreads/pthread.c')
 -rw-r--r--  linuxthreads/pthread.c | 75
 1 file changed, 44 insertions, 31 deletions
diff --git a/linuxthreads/pthread.c b/linuxthreads/pthread.c
index 432336258c..f72e20ee09 100644
--- a/linuxthreads/pthread.c
+++ b/linuxthreads/pthread.c
@@ -307,6 +307,8 @@ __pthread_initialize_minimal(void)
 # elif !USE___THREAD
   if (__builtin_expect (GL(dl_tls_max_dtv_idx) == 0, 0))
     {
+      tcbhead_t *tcbp;
+
       /* There is no actual TLS being used, so the thread register
          was not initialized in the dynamic linker.  */
@@ -318,7 +320,7 @@ __pthread_initialize_minimal(void)
       __libc_malloc_pthread_startup (true);
       if (__builtin_expect (_dl_tls_setup (), 0)
-          || __builtin_expect ((self = _dl_allocate_tls (NULL)) == NULL, 0))
+          || __builtin_expect ((tcbp = _dl_allocate_tls (NULL)) == NULL, 0))
        {
          static const char msg[] = "\
cannot allocate TLS data structures for initial thread\n";
@@ -326,7 +328,7 @@ cannot allocate TLS data structures for initial thread\n";
                                            msg, sizeof msg - 1));
          abort ();
        }
-      const char *lossage = TLS_INIT_TP (self, 0);
+      const char *lossage = TLS_INIT_TP (tcbp, 0);
       if (__builtin_expect (lossage != NULL, 0))
        {
          static const char msg[] = "cannot set up thread-local storage: ";
@@ -343,7 +345,7 @@ cannot allocate TLS data structures for initial thread\n";
         the hooks might not work with that block from the plain malloc.
         So we record this block as unfreeable just as the dynamic linker
         does when it allocates the DTV before the libc malloc exists.  */
-      GL(dl_initial_dtv) = GET_DTV (self);
+      GL(dl_initial_dtv) = GET_DTV (tcbp);
       __libc_malloc_pthread_startup (false);
     }
@@ -558,7 +560,10 @@ int __pthread_initialize_manager(void)
   int pid;
   struct pthread_request request;
   int report_events;
-  pthread_descr tcb;
+  pthread_descr mgr;
+#ifdef USE_TLS
+  tcbhead_t *tcbp;
+#endif
   __pthread_multiple_threads = 1;
   __pthread_main_thread->p_header.data.multiple_threads = 1;
@@ -588,31 +593,39 @@ int __pthread_initialize_manager(void)
 #ifdef USE_TLS
   /* Allocate memory for the thread descriptor and the dtv.  */
-  __pthread_handles[1].h_descr = manager_thread = tcb
-    = _dl_allocate_tls (NULL);
-  if (tcb == NULL) {
+  tcbp = _dl_allocate_tls (NULL);
+  if (tcbp == NULL) {
     free(__pthread_manager_thread_bos);
     __libc_close(manager_pipe[0]);
     __libc_close(manager_pipe[1]);
     return -1;
   }
+# if TLS_TCB_AT_TP
+  mgr = (pthread_descr) tcbp;
+# elif TLS_DTV_AT_TP
+  /* pthread_descr is located right below tcbhead_t which _dl_allocate_tls
+     returns.  */
+  mgr = (pthread_descr) tcbp - 1;
+# endif
+  __pthread_handles[1].h_descr = manager_thread = mgr;
+
   /* Initialize the descriptor.  */
-  tcb->p_header.data.tcb = tcb;
-  tcb->p_header.data.self = tcb;
-  tcb->p_header.data.multiple_threads = 1;
-  tcb->p_lock = &__pthread_handles[1].h_lock;
+  mgr->p_header.data.tcb = tcbp;
+  mgr->p_header.data.self = mgr;
+  mgr->p_header.data.multiple_threads = 1;
+  mgr->p_lock = &__pthread_handles[1].h_lock;
 # ifndef HAVE___THREAD
-  tcb->p_errnop = &tcb->p_errno;
+  mgr->p_errnop = &mgr->p_errno;
 # endif
-  tcb->p_start_args = (struct pthread_start_args) PTHREAD_START_ARGS_INITIALIZER(__pthread_manager);
-  tcb->p_nr = 1;
+  mgr->p_start_args = (struct pthread_start_args) PTHREAD_START_ARGS_INITIALIZER(__pthread_manager);
+  mgr->p_nr = 1;
 # if __LT_SPINLOCK_INIT != 0
   self->p_resume_count = (struct pthread_atomic) __ATOMIC_INITIALIZER;
 # endif
-  tcb->p_alloca_cutoff = PTHREAD_STACK_MIN / 4;
+  mgr->p_alloca_cutoff = PTHREAD_STACK_MIN / 4;
 #else
-  tcb = &__pthread_manager_thread;
+  mgr = &__pthread_manager_thread;
 #endif
   __pthread_manager_request = manager_pipe[1]; /* writing end */
@@ -649,24 +662,24 @@ int __pthread_initialize_manager(void)
       if ((mask & (__pthread_threads_events.event_bits[idx] | event_bits))
          != 0)
        {
-         __pthread_lock(tcb->p_lock, NULL);
+         __pthread_lock(mgr->p_lock, NULL);
 #ifdef NEED_SEPARATE_REGISTER_STACK
          pid = __clone2(__pthread_manager_event,
                         (void **) __pthread_manager_thread_bos,
                         THREAD_MANAGER_STACK_SIZE,
                         CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND,
-                        tcb);
+                        mgr);
 #elif _STACK_GROWS_UP
          pid = __clone(__pthread_manager_event,
                        (void **) __pthread_manager_thread_bos,
                        CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND,
-                       tcb);
+                       mgr);
 #else
          pid = __clone(__pthread_manager_event,
                        (void **) __pthread_manager_thread_tos,
                        CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND,
-                       tcb);
+                       mgr);
 #endif
          if (pid != -1)
@@ -675,18 +688,18 @@ int __pthread_initialize_manager(void)
                 the newly created thread's data structure.  We cannot let
                 the new thread do this since we don't know whether it was
                 already scheduled when we send the event.  */
-             tcb->p_eventbuf.eventdata = tcb;
-             tcb->p_eventbuf.eventnum = TD_CREATE;
-             __pthread_last_event = tcb;
-             tcb->p_tid = 2* PTHREAD_THREADS_MAX + 1;
-             tcb->p_pid = pid;
+             mgr->p_eventbuf.eventdata = mgr;
+             mgr->p_eventbuf.eventnum = TD_CREATE;
+             __pthread_last_event = mgr;
+             mgr->p_tid = 2* PTHREAD_THREADS_MAX + 1;
+             mgr->p_pid = pid;
              /* Now call the function which signals the event.  */
              __linuxthreads_create_event ();
            }
          /* Now restart the thread.  */
-         __pthread_unlock(tcb->p_lock);
+         __pthread_unlock(mgr->p_lock);
        }
     }
@@ -695,13 +708,13 @@ int __pthread_initialize_manager(void)
 #ifdef NEED_SEPARATE_REGISTER_STACK
       pid = __clone2(__pthread_manager, (void **) __pthread_manager_thread_bos,
                      THREAD_MANAGER_STACK_SIZE,
-                     CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, tcb);
+                     CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, mgr);
 #elif _STACK_GROWS_UP
       pid = __clone(__pthread_manager, (void **) __pthread_manager_thread_bos,
-                    CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, tcb);
+                    CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, mgr);
 #else
       pid = __clone(__pthread_manager, (void **) __pthread_manager_thread_tos,
-                    CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, tcb);
+                    CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, mgr);
 #endif
     }
   if (__builtin_expect (pid, 0) == -1) {
@@ -710,8 +723,8 @@ int __pthread_initialize_manager(void)
     __libc_close(manager_pipe[1]);
     return -1;
   }
-  tcb->p_tid = 2* PTHREAD_THREADS_MAX + 1;
-  tcb->p_pid = pid;
+  mgr->p_tid = 2* PTHREAD_THREADS_MAX + 1;
+  mgr->p_pid = pid;
   /* Make gdb aware of new thread manager */
   if (__builtin_expect (__pthread_threads_debug, 0) && __pthread_sig_debug > 0) {
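The central change in the hunk starting at line 593 is that the manager thread's
descriptor is no longer simply what _dl_allocate_tls returns: with TLS_TCB_AT_TP
the descriptor and the TCB coincide, while with TLS_DTV_AT_TP the descriptor
sits immediately below the tcbhead_t, hence the mgr = (pthread_descr) tcbp - 1
adjustment. A small stand-alone illustration of that pointer arithmetic, using
toy struct definitions (the real types come from the per-architecture tls.h and
the linuxthreads internals, not from this sketch):

#include <stdio.h>

/* Toy stand-ins for the real types; sizes and fields are illustrative only.  */
typedef struct { void *dtv; void *self; } tcbhead_t_sketch;
typedef struct { void *tcb; void *self; int nr; } pthread_descr_sketch;

/* TLS_DTV_AT_TP case: the allocator placed the descriptor right below the
   tcbhead_t (the TLS_PRE_TCB_SIZE prefix), so stepping back one
   descriptor-sized slot recovers it, as the diff does with
   "mgr = (pthread_descr) tcbp - 1".  */
pthread_descr_sketch *
descr_from_tcb_dtv_at_tp (tcbhead_t_sketch *tcbp)
{
  return (pthread_descr_sketch *) tcbp - 1;
}

/* TLS_TCB_AT_TP case: the descriptor and the TCB are the same block.  */
pthread_descr_sketch *
descr_from_tcb_tcb_at_tp (tcbhead_t_sketch *tcbp)
{
  return (pthread_descr_sketch *) tcbp;
}

int
main (void)
{
  /* Fake a combined allocation: descriptor immediately below the TCB.  */
  static struct { pthread_descr_sketch d; tcbhead_t_sketch t; } block;
  printf ("descr below tcb: %p vs %p\n",
          (void *) descr_from_tcb_dtv_at_tp (&block.t), (void *) &block.d);
  return 0;
}

Compiled and run, both pointers printed by main match, mirroring how the
manager descriptor is recovered from the block _dl_allocate_tls hands back.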