|
atomic_exchange_acq() expected a pointer, but was receiving an integer.
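A minimal illustration of that kind of mismatch, using the analogous ISO C
atomic_exchange_explicit (glibc's internal atomic_exchange_acq likewise takes
a pointer to the memory location as its first argument):

  #include <stdatomic.h>

  int
  main (void)
  {
    _Atomic int lock = 0;
    /* Wrong: passing the value where a pointer to the object is expected
       (the kind of mismatch this patch fixes).  */
    /* atomic_exchange_explicit (lock, 1, memory_order_acquire); */
    /* Right: pass the address of the atomic object.  */
    int old = atomic_exchange_explicit (&lock, 1, memory_order_acquire);
    return old;
  }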
|
|
This patch rearranges cfi_offset() calls after the last store
so as to avoid extra DW_CFA_advance opcodes in unwind information.
|
|
|
|
A large number of the test-ldouble failures seen for ldbl-128ibm are
spurious "underflow" and "inexact" exceptions. These arise from such
exceptions in the underlying arithmetic; unlike other spurious
exceptions from that arithmetic, they do not in general relate to
cases where the returned result is also substantially inaccurate, are
not so readily avoidable by appropriately conditional libgcc patches,
and are widespread enough to be hard to handle through individual
XFAILing of the affected tests.
Thus, this patch documents relaxed accuracy goals for libm functions
for IBM long double and makes libm-test.inc reflect these spurious
exceptions in ldbl-128ibm arithmetic and always allow them in
ldbl-128ibm testing (while still not allowing these exceptions to be
missing where required to be present). Tested for powerpc.
* manual/math.texi (Errors in Math Functions): Document relaxed
accuracy goals for IBM long double.
* math/libm-test.inc (test_exceptions): Always allow spurious
"underflow" and "inexact" exceptions for IBM long double.
|
|
index_* and bit_* macros are used to access the cpuid and feature arrays of
struct cpu_features. It is very easy to use bits and indices of the cpuid
array on the feature array, especially in assembly code. For example,
sysdeps/i386/i686/multiarch/bcopy.S has
HAS_CPU_FEATURE (Fast_Rep_String)
which should be
HAS_ARCH_FEATURE (Fast_Rep_String)
We change index_* and bit_* to index_cpu_*/index_arch_* and
bit_cpu_*/bit_arch_* so that we can catch such errors at build time.
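A minimal standalone sketch of the idea (illustrative names and layout, not
glibc's actual definitions): giving the cpu and arch macros distinct prefixes
turns a mixed-up lookup into an undefined-identifier error at build time
instead of a silent lookup in the wrong array.

  #include <stdio.h>

  struct cpu_features_sketch
  {
    unsigned int cpuid[2];     /* stand-in for the cpuid array */
    unsigned int feature[1];   /* stand-in for the feature array */
  } cpu_features_sketch;

  #define index_cpu_SSE2              0
  #define bit_cpu_SSE2                (1u << 26)
  #define index_arch_Fast_Rep_String  0
  #define bit_arch_Fast_Rep_String    (1u << 0)

  #define HAS_CPU_FEATURE(name) \
    ((cpu_features_sketch.cpuid[index_cpu_##name] & bit_cpu_##name) != 0)
  #define HAS_ARCH_FEATURE(name) \
    ((cpu_features_sketch.feature[index_arch_##name] & bit_arch_##name) != 0)

  int
  main (void)
  {
    /* HAS_CPU_FEATURE (Fast_Rep_String) now fails to compile, because
       index_cpu_Fast_Rep_String and bit_cpu_Fast_Rep_String do not exist.  */
    printf ("%d %d\n", HAS_CPU_FEATURE (SSE2),
            HAS_ARCH_FEATURE (Fast_Rep_String));
    return 0;
  }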
[BZ #19762]
* sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h
(EXTRA_LD_ENVVARS): Add _arch_ to index_*/bit_*.
* sysdeps/x86/cpu-features.c (init_cpu_features): Likewise.
* sysdeps/x86/cpu-features.h (bit_*): Renamed to ...
(bit_arch_*): This for feature array.
(bit_*): Renamed to ...
(bit_cpu_*): This for cpu array.
(index_*): Renamed to ...
(index_arch_*): This for feature array.
(index_*): Renamed to ...
(index_cpu_*): This for cpu array.
[__ASSEMBLER__] (HAS_FEATURE): Add and use field.
[__ASSEMBLER__] (HAS_CPU_FEATURE): Pass cpu to HAS_FEATURE.
[__ASSEMBLER__] (HAS_ARCH_FEATURE): Pass arch to HAS_FEATURE.
[!__ASSEMBLER__] (HAS_CPU_FEATURE): Replace index_##name and
bit_##name with index_cpu_##name and bit_cpu_##name.
[!__ASSEMBLER__] (HAS_ARCH_FEATURE): Replace index_##name and
bit_##name with index_arch_##name and bit_arch_##name.
|
|
In makecontext the FDE needs to be terminated before the return
trampoline, otherwise backtrace called within a context created by
makecontext yields an infinite backtrace.
This bug has been present for a long time, but stdlib/tst-makecontext did
not fail until recent commit e535ce25. Tested on mips-linux-gnu and
mips64el-linux-gnuabi64, no regressions.
This fixes stdlib/tst-makecontext on MIPS.
Changelog:
[BZ #19792]
* sysdeps/unix/sysv/linux/mips/makecontext.S (__makecontext):
Terminate FDE before return label.
|
|
The ldbl-128ibm implementation of nearbyintl uses logic that only
works in round-to-nearest mode. This contrasts with rintl, which
works in all rounding modes.
Now, arguably nearbyintl could simply be aliased to rintl, given that
spurious "inexact" is generally allowed for ldbl-128ibm, even for the
underlying arithmetic operations. But given that the only point of
nearbyintl is to avoid "inexact", this patch follows the more
conservative approach of adding conditionals to the rintl
implementation to make it suitable for use to implement nearbyintl,
then builds it for nearbyintl with USE_AS_NEARBYINTL defined. The
test test-nearbyint-except-2 shows up issues when traps on "inexact"
are enabled, which turn out to be problems with the powerpc
fenv_private.h implementation (two functions that should disable
exception traps potentially failing to do so in some cases); this
patch duly fixes that as well (I don't see any other existing cases
where this would be user-visible; there isn't much use of *_NOEX,
*hold* etc. in libm that requires exceptions to be discarded and not
trapped on).
Tested for powerpc.
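The relationship between the two functions can be sketched in portable C:
nearbyint behaves like rint except that it must not raise "inexact", which
can be obtained by saving the floating-point environment around the rint
call and restoring it afterwards. This is only a simplified illustration of
what SET_RESTORE_ROUND_NOEX accomplishes, not the ldbl-128ibm code itself:

  #include <fenv.h>
  #include <math.h>
  #include <stdio.h>

  static double
  nearbyint_sketch (double x)
  {
    fenv_t env;
    feholdexcept (&env);   /* save environment, clear flags, disable traps */
    double r = rint (x);   /* may raise "inexact" */
    fesetenv (&env);       /* restore, discarding any newly raised flags */
    return r;
  }

  int
  main (void)
  {
    printf ("%g\n", nearbyint_sketch (2.5));
    return 0;
  }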
[BZ #19790]
* sysdeps/ieee754/ldbl-128ibm/s_rintl.c [USE_AS_NEARBYINTL]
(rintl): Define as macro.
[USE_AS_NEARBYINTL] (__rintl): Likewise.
(__rintl) [USE_AS_NEARBYINTL]: Use SET_RESTORE_ROUND_NOEX instead
of fesetround. Ensure results are evaluated before end of scope.
* sysdeps/ieee754/ldbl-128ibm/s_nearbyintl.c: Define
USE_AS_NEARBYINTL and include s_rintl.c.
* sysdeps/powerpc/fpu/fenv_private.h (libc_feholdsetround_ppc):
Disable exception traps in new environment.
(libc_feholdsetround_ppc_ctx): Likewise.
|
|
|
|
The GNU libc testsuite fails to build on powerpc/ppc64/ppc64le with the
following error:
../sysdeps/powerpc/test-get_hwcap.c:26:22: fatal error: sys/auxv.h: No such file or directory
This is because test-get_hwcap.c includes <sys/auxv.h>, but we don't
provide a wrapper in include/sys. This patch adds one.
Changelog:
* include/sys/auxv.h: New file.
|
|
Since x86 has an optimized mempcpy and GCC can inline mempcpy on x86,
define _HAVE_STRING_ARCH_mempcpy to 1 for x86.
[BZ #19759]
* sysdeps/x86/bits/string.h (_HAVE_STRING_ARCH_mempcpy): New.
|
|
The operand modifier %s on powerpc is an undocumented internal implementation
detail of GCC. Besides that, the GCC community wants to remove it. This patch
rewrites the expressions that use this modifier with logically equivalent
expressions that don't require it.
Explanation for the substitution:
The %s modifier takes an immediate operand and prints 32 minus that immediate.
Thus, in the previous code, the expression resulted in:
32 - __builtin_ffs(e)
where e was guaranteed to have exactly a single bit set, by the following
expressions:
((e & (e-1)) == 0) : e has at most one bit set.
(e != 0) : e is not zero, thus it has at least one bit set.
Since we guarantee that exactly one bit is set, the following
statement is true:
32 - __builtin_ffs(e) == __builtin_clz(e)
Thus, we can replace __builtin_ffs with __builtin_clz and remove the %s operand
modifier.
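A minimal standalone check of this identity (illustrative only, not part of
the patch):

  #include <assert.h>

  int
  main (void)
  {
    for (int k = 0; k < 32; k++)
      {
        unsigned int e = 1u << k;
        assert ((e & (e - 1)) == 0 && e != 0);   /* exactly one bit set */
        assert (32 - __builtin_ffs (e) == __builtin_clz (e));
      }
    return 0;
  }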
|
|
HWCAP-related code should have been updated when the 32 bits of HWCAP were
used. This patch updates the code in dl-procinfo.h to loop through all
32 bits in HWCAP and updates _dl_powerpc_cap_flags accordingly.
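A schematic version of such a loop (hypothetical flag table; the real names
live in _dl_powerpc_cap_flags in dl-procinfo.h):

  #include <stdio.h>

  /* Hypothetical table standing in for _dl_powerpc_cap_flags.  */
  static const char *cap_flags_sketch[32] = {
    [0] = "flag0", [31] = "flag31",
  };

  static void
  print_hwcap_sketch (unsigned long hwcap)
  {
    /* Walk all 32 HWCAP bits rather than only the historically known ones.  */
    for (int i = 0; i < 32; i++)
      if (hwcap & (1UL << i))
        printf ("%s ", cap_flags_sketch[i] ? cap_flags_sketch[i] : "unknown");
    putchar ('\n');
  }

  int
  main (void)
  {
    print_hwcap_sketch ((1UL << 0) | (1UL << 31));
    return 0;
  }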
|
|
benchtests should use $(test-via-rtld-prefix) and $(+link-tests) like
other glibc tests.
[BZ #19783]
* benchtests/Makefile (run-bench): Replace $(rtld-prefix) with
$(test-via-rtld-prefix).
($(binaries-bench)): Replace $(+link) with $(+link-tests).
|
|
|
|
This patch fixes the posix/tst-execvpe5 invocation when GLIBC is
configured with --enable-hardcoded-path-in-tests, which fails with:
$ cat posix/tst-execvpe5.out
Wrong number of arguments (4)
Checked on x86-64 and powerpc64le.
* posix/tst-execvpe5.c (do_test): Fix test invocation when
configured with --enable-hardcoded-path-in-tests.
|
|
The ldbl-128ibm implementation of remainderl has logic resulting in
incorrect tests for equality of the absolute values of the arguments
in the case of zero low parts. If the low parts are both zero but
with different signs, this can wrongly cause equal arguments to be
treated as different, resulting in turn in incorrect signs of zero
results in nondefault rounding modes arising from the subtractions done
when the arguments are not equal.
This patch fixes the logic to convert -0 low parts into +0 before the
comparison (remquo already has separate logic to deal with signs of
zero results, so doesn't need such a change). Tests are added for
remainderl and remquol similar to that for fmodl, and based on a
refactoring of it, since the bug depends on low parts which should not
be relied upon in tests not setting the representation explicitly
(although in fact the bug shows up in test-ldouble with current GCC).
Tested for powerpc.
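The canonicalization can be illustrated with a small standalone sketch (not
the actual e_remainderl.c code): an IBM long double is a pair of doubles, and
a -0.0 low part must be treated the same as +0.0 when comparing the absolute
values of two such pairs.

  #include <math.h>
  #include <stdio.h>

  /* If the low part is a zero of either sign, rewrite it as +0.0 so that
     values differing only in the sign of a zero low part compare equal.  */
  static double
  canonical_low (double lo)
  {
    return lo == 0.0 ? 0.0 : lo;
  }

  int
  main (void)
  {
    double lo = -0.0;
    printf ("sign before: %d, after: %d\n",
            !!signbit (lo), !!signbit (canonical_low (lo)));
    return 0;
  }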
[BZ #19677]
* sysdeps/ieee754/ldbl-128ibm/e_remainderl.c
(__ieee754_remainderl): Put zero low parts in canonical form.
* sysdeps/ieee754/ldbl-128ibm/test-fmodrem-ldbl-128ibm.c: New
file. Based on
sysdeps/ieee754/ldbl-128ibm/test-fmodl-ldbl-128ibm.c.
* sysdeps/ieee754/ldbl-128ibm/test-fmodl-ldbl-128ibm.c: Replace
with wrapper round test-fmodrem-ldbl-128ibm.c.
* sysdeps/ieee754/ldbl-128ibm/test-remainderl-ldbl-128ibm.c: New
file.
* sysdeps/ieee754/ldbl-128ibm/test-remquol-ldbl-128ibm.c:
Likewise.
* sysdeps/ieee754/ldbl-128ibm/Makefile (tests): Add
test-remainderl-ldbl-128ibm and test-remquol-ldbl-128ibm.
|
|
|
|
|
|
|
|
When using sln on some filesystems which return 64-bit inodes,
the stat call might fail during install like so:
.../elf/sln .../elf/symlink.list
/lib32/libc.so.6: invalid destination: Value too large for defined data type
/lib32/ld-linux.so.2: invalid destination: Value too large for defined data type
Makefile:104: recipe for target 'install-symbolic-link' failed
Switch to using stat64 all the time to avoid this.
URL: https://bugs.gentoo.org/576396
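A minimal illustration of the difference (hypothetical target path; sln
itself uses glibc's internal stat64):

  #define _LARGEFILE64_SOURCE
  #include <stdio.h>
  #include <sys/stat.h>

  int
  main (void)
  {
    struct stat64 st;
    /* On 32-bit builds, plain stat can fail with EOVERFLOW ("Value too large
       for defined data type") when the inode does not fit its field; stat64
       always provides 64-bit fields.  */
    if (stat64 ("/", &st) != 0)
      perror ("stat64");
    else
      printf ("inode: %llu\n", (unsigned long long) st.st_ino);
    return 0;
  }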
|
|
No functional changes.
|
|
This ensures that GCC will not use unsupported instructions before
the run-time check confirms they are supported.
|
|
With older kernels, it is mostly ineffective because it causes malloc
to switch from sbrk to mmap (potentially invalidating malloc testing
compared to what real applications do). With newer kernels which
have switched to enforcing RLIMIT_DATA for mmap as well, some test
cases will fail in an unintended fashion because the limit which was
set previously does not include room for all mmap mappings.
|
|
This patch implements a new posix_spawn{p} implementation for Linux. The main
difference is that it uses the clone syscall directly with the CLONE_VM and
CLONE_VFORK flags and a directly allocated stack. The new stack and start
function solve most of the vfork limitations (possible parent clobbering due
to stack spilling). The remaining issues are related to signal handling:
1. No signal handlers may run in the child context, to avoid corrupting the
parent's state.
2. The child must synchronize with the parent to enforce stack deallocation
and to report possible execv failures.
The first is solved by blocking all signals in the child, even the NPTL-internal
ones (SIGCANCEL and SIGSETXID). The second is handled by allocating the stack
in the parent and synchronizing using a pipe, or waitpid in case of error.
The pipe has the advantage of allowing the child to signal an exec error
(checked with the new tst-spawn2 test).
There is an inherent race condition in pipe2 usage for architectures that do not
support the syscall directly. In such cases a pipe plus fcntl is used instead,
which may lead to a file descriptor leak in the parent (as described in the
fcntl documentation).
The child process stack is allocated with mmap with the MAP_STACK flag, using
the default architecture stack size. Although this is slower than using a stack
buffer from the parent, it allows some slack for the compatibility code that
runs scripts with no shebang (which may use a buffer whose size depends on the
argument list count).
Performance should be similar to the default vfork-based posix implementation
and much faster than the fork path (vfork on most Linux ports is basically
clone with CLONE_VM plus CLONE_VFORK). The only difference is the syscalls
required for stack allocation/deallocation.
It fixes BZ#10354, BZ#14750, and BZ#18433.
Tested on i386, x86_64, powerpc64le, and aarch64.
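A heavily simplified, hypothetical sketch of this scheme (helper names
invented here; this is not the new sysdeps/unix/sysv/linux/spawni.c): run the
child on an mmap'ed stack via clone with CLONE_VM | CLONE_VFORK, and use a
close-on-exec pipe so the child can report an execve failure to the parent.

  #define _GNU_SOURCE
  #include <errno.h>
  #include <fcntl.h>
  #include <sched.h>
  #include <signal.h>
  #include <sys/mman.h>
  #include <sys/wait.h>
  #include <unistd.h>

  struct spawn_args
  {
    char *const *argv;
    char *const *envp;
    const char *file;
    int err_pipe;                 /* write end, O_CLOEXEC */
  };

  static int
  child_fn (void *arg)
  {
    struct spawn_args *a = arg;
    /* Real code would also block all signals here so that no handler runs
       in the child while it shares memory with the parent.  */
    execve (a->file, a->argv, a->envp);
    int err = errno;
    while (write (a->err_pipe, &err, sizeof err) < 0 && errno == EINTR)
      continue;
    _exit (127);
  }

  static pid_t
  spawn_sketch (const char *file, char *const argv[], char *const envp[])
  {
    int p[2];
    if (pipe2 (p, O_CLOEXEC) != 0)
      return -1;

    size_t stack_size = 256 * 1024;
    void *stack = mmap (NULL, stack_size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
    if (stack == MAP_FAILED)
      return -1;

    struct spawn_args a = { argv, envp, file, p[1] };
    /* Stack grows downward on the architectures this sketch targets.  */
    pid_t pid = clone (child_fn, (char *) stack + stack_size,
                       CLONE_VM | CLONE_VFORK | SIGCHLD, &a);
    close (p[1]);

    if (pid > 0)
      {
        /* CLONE_VFORK already suspended us until the child exec'ed or died;
           the pipe only tells us whether execve failed.  */
        int err;
        if (read (p[0], &err, sizeof err) > 0)
          {
            waitpid (pid, NULL, 0);
            errno = err;
            pid = -1;
          }
      }
    close (p[0]);
    munmap (stack, stack_size);
    return pid;
  }

  extern char **environ;

  int
  main (void)
  {
    char *argv[] = { (char *) "echo", (char *) "spawned", NULL };
    pid_t pid = spawn_sketch ("/bin/echo", argv, environ);
    if (pid > 0)
      waitpid (pid, NULL, 0);
    return pid > 0 ? 0 : 1;
  }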
[BZ #14750]
[BZ #10354]
[BZ #18433]
* include/sched.h (__clone): Add hidden prototype.
(__clone2): Likewise.
* include/unistd.h (__dup): Likewise.
* posix/Makefile (tests): Add tst-spawn2.
* posix/tst-spawn2.c: New file.
* sysdeps/posix/dup.c (__dup): Add hidden definition.
* sysdeps/unix/sysv/linux/aarch64/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/alpha/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/arm/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/hppa/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/i386/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/ia64/clone2.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/m68k/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/microblaze/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/mips/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/nios2/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/clone.S (__clone):
Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S (__clone):
Likewise.
* sysdeps/unix/sysv/linux/s390/s390-32/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/sh/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc32/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/tile/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/x86_64/clone.S (__clone): Likewise.
* sysdeps/unix/sysv/linux/nptl-signals.h
(____nptl_is_internal_signal): New function.
* sysdeps/unix/sysv/linux/spawni.c: New file.
|
|
This patch removes all the dynamic allocation in the execvpe code and
instead uses direct stack allocation. This is a QoI approach to make
it possible to use it in scenarios where memory is shared with the parent
(vfork or clone with CLONE_VM).
For the default process spawn (script file without a shebang), stack
allocation is bounded by NAME_MAX plus PATH_MAX plus 1. Larger
file arguments return an error (ENAMETOOLONG). This differs from
current GLIBC practice in general, but it is used to limit stack
allocation for large inputs. Also, paths in the PATH environment variable
longer than PATH_MAX are ignored.
The shell direct-execution exception, where execve returns ENOEXEC,
might require a large stack allocation due to a large input argument list.
Tested on i686, x86_64, powerpc64le, and aarch64.
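For illustration, a hypothetical helper showing the kind of bounded stack
buffer this implies (not glibc's code): a candidate "dir/file" string always
fits in a buffer of PATH_MAX + NAME_MAX + 1 bytes, and anything longer is
rejected or skipped rather than heap-allocated.

  #include <limits.h>
  #include <stdio.h>
  #include <string.h>

  /* Fill BUF (of size PATH_MAX + NAME_MAX + 1) with "dir/file"; return -1 if
     either component is too long, so the caller can report ENAMETOOLONG or
     skip that PATH entry.  */
  static int
  build_candidate (char buf[PATH_MAX + NAME_MAX + 1],
                   const char *dir, const char *file)
  {
    size_t dirlen = strlen (dir), filelen = strlen (file);
    if (dirlen >= PATH_MAX || filelen >= NAME_MAX)
      return -1;
    memcpy (buf, dir, dirlen);
    buf[dirlen] = '/';
    memcpy (buf + dirlen + 1, file, filelen + 1);
    return 0;
  }

  int
  main (void)
  {
    char buf[PATH_MAX + NAME_MAX + 1];
    if (build_candidate (buf, "/usr/bin", "env") == 0)
      puts (buf);
    return 0;
  }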
* posix/execvpe.c (__execvpe): Remove dynamic allocation.
* posix/Makefile (tests): Add tst-execvpe{1,2,3,4,5,6}.
* posix/tst-execvp1.c (do_test): Use a macro to call execvp.
* posix/tst-execvp2.c (do_test): Likewise.
* posix/tst-execvp3.c (do_test): Likewise.
* posix/tst-execvp4.c (do_test): Likewise.
* posix/tst-execvpe1.c: New file.
* posix/tst-execvpe2.c: Likewise.
* posix/tst-execvpe3.c: Likewise.
* posix/tst-execvpe4.c: Likewise.
* posix/tst-execvpe5.c: Likewise.
* posix/tst-execvpe6.c: Likewise.
|
|
GLIBC's execl{e,p} implementation might use malloc if the total number of
arguments exceeds the initially assumed size (1024). This might lead to
issues in two situations:
1. execl/execle is stated to be async-signal-safe by POSIX [1]. However,
if execl is used in a signal handler with a large argument set (which
may call malloc internally) and the resulting call fails, it might
leave malloc in the program in a bad state.
2. If the functions are used in a vfork/clone(VFORK) situation, it might
also leave malloc's internal state corrupted.
This patch fixes it by using stack allocation instead. It also fixes
BZ#19534.
Tested on x86_64.
[1] http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html
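A simplified sketch of the approach (hypothetical function name, not the
patched posix/execl.c): count the variadic arguments first, then build argv
in a variable-length stack array instead of calling malloc.

  #include <stdarg.h>
  #include <unistd.h>

  static int
  execl_sketch (const char *path, const char *arg, ...)
  {
    va_list ap;
    int argc = 1;                       /* ARG itself */

    va_start (ap, arg);
    while (va_arg (ap, const char *) != NULL)
      argc++;
    va_end (ap);

    {
      char *argv[argc + 1];             /* on the stack, no malloc */
      argv[0] = (char *) arg;
      va_start (ap, arg);
      for (int i = 1; i <= argc; i++)
        argv[i] = va_arg (ap, char *);  /* the last one read is the NULL */
      va_end (ap);
      return execv (path, argv);
    }
  }

  int
  main (void)
  {
    execl_sketch ("/bin/echo", "echo", "hello", (char *) NULL);
    return 1;                           /* only reached if exec failed */
  }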
[BZ #19534]
* posix/execl.c (execl): Remove dynamic memory allocation.
* posix/execle.c (execle): Likewise.
* posix/execlp.c (execlp): Likewise.
|
|
* sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S:
Replace .text with .text.avx512.
* sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S:
Likewise.
|
|
Changelog:
* sysdeps/generic/libnsl.abilist: New file.
* sysdeps/generic/libutil.abilist: New file.
|
|
HAS_ARCH_FEATURE, not HAS_CPU_FEATURE, should be used with
Fast_Rep_String.
[BZ #19762]
* sysdeps/i386/i686/multiarch/bcopy.S (bcopy): Use
HAS_ARCH_FEATURE with Fast_Rep_String.
* sysdeps/i386/i686/multiarch/bzero.S (__bzero): Likewise.
* sysdeps/i386/i686/multiarch/memcpy.S (memcpy): Likewise.
* sysdeps/i386/i686/multiarch/memcpy_chk.S (__memcpy_chk):
Likewise.
* sysdeps/i386/i686/multiarch/memmove_chk.S (__memmove_chk):
Likewise.
* sysdeps/i386/i686/multiarch/mempcpy.S (__mempcpy): Likewise.
* sysdeps/i386/i686/multiarch/mempcpy_chk.S (__mempcpy_chk):
Likewise.
* sysdeps/i386/i686/multiarch/memset.S (memset): Likewise.
* sysdeps/i386/i686/multiarch/memset_chk.S (__memset_chk):
Likewise.
|
|
These fields aren't terribly useful and most don't set them.
|
|
Since we have loaded the address of PREINIT_FUNCTION into %rax, we can
avoid an extra branch to the PLT slot.
[BZ #19745]
* sysdeps/x86_64/crti.S (_init): Replace PREINIT_FUNCTION@PLT
with *%rax in call.
|
|
Since __libc_start_main is called very early, lazy binding isn't relevant
here. Use an indirect branch via the GOT to avoid an extra branch to the
PLT slot.
[BZ #19745]
* sysdeps/x86_64/start.S (_start): Replace __libc_start_main@PLT
with *__libc_start_main@GOTPCREL(%rip) in call.
|
|
|
|
|
|
|
|
|
|
Mention recursive calls when ENTRY is used in _mcount.S.
* sysdeps/x86_64/Makefile (sysdep_noprof): Add a comment.
|
|
Check Fast_Unaligned_Load, instead of Slow_BSF, and also check for
Fast_Copy_Backward to enable __memcpy_ssse3_back. The existing selection
order is updated with the following selection order (a toy C rendering
follows the list):
1. __memcpy_avx_unaligned if AVX_Fast_Unaligned_Load bit is set.
2. __memcpy_sse2_unaligned if Fast_Unaligned_Load bit is set.
3. __memcpy_sse2 if SSSE3 isn't available.
4. __memcpy_ssse3_back if Fast_Copy_Backward bit is set.
5. __memcpy_ssse3
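A toy C rendering of that order (boolean stand-ins for the feature bits; the
real selector is the assembly IFUNC in sysdeps/x86_64/multiarch/memcpy.S):

  #include <stdbool.h>
  #include <stdio.h>

  /* Stand-ins for the HAS_ARCH_FEATURE/HAS_CPU_FEATURE checks.  */
  static bool avx_fast_unaligned_load, fast_unaligned_load = true;
  static bool ssse3 = true, fast_copy_backward;

  static const char *
  select_memcpy (void)
  {
    if (avx_fast_unaligned_load)
      return "__memcpy_avx_unaligned";
    if (fast_unaligned_load)
      return "__memcpy_sse2_unaligned";
    if (!ssse3)
      return "__memcpy_sse2";
    if (fast_copy_backward)
      return "__memcpy_ssse3_back";
    return "__memcpy_ssse3";
  }

  int
  main (void)
  {
    puts (select_memcpy ());
    return 0;
  }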
[BZ #18880]
* sysdeps/x86_64/multiarch/memcpy.S: Check Fast_Unaligned_Load,
instead of Slow_BSF, and also check for Fast_Copy_Backward to
enable __memcpy_ssse3_back.
|
|
|
|
We should turn on bit_Prefer_MAP_32BIT_EXEC in EXTRA_LD_ENVVARS without
overriding other bits.
[BZ #19758]
* sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h
(EXTRA_LD_ENVVARS): Or bit_Prefer_MAP_32BIT_EXEC.
|
|
|
|
[BZ #19490]
* sysdeps/x86_64/_mcount.S (_mcount): Add unwind descriptor.
(__fentry__): Likewise.
|
|
No need to compile x86_64 _mcount.S with -pg. We can just copy the
normal static object.
* gmon/Makefile (noprof): Add $(sysdep_noprof).
* sysdeps/x86_64/Makefile (sysdep_noprof): Add _mcount.
|
|
Since __mcount_internal and __sigjmp_save are internal to x86-64 libc.so:
3532: 0000000000104530 289 FUNC LOCAL DEFAULT 13 __mcount_internal
3391: 0000000000034170 38 FUNC LOCAL DEFAULT 13 __sigjmp_save
they can be called directly without PLT.
* sysdeps/x86_64/_mcount.S (C_LABEL(_mcount)): Call
__mcount_internal directly.
(C_LABEL(__fentry__)): Likewise.
* sysdeps/x86_64/setjmp.S (__sigsetjmp): Call __sigjmp_save
directly.
|
|
Since x86-64 __start_context calls the internal __setcontext:
5089: 00000000000417e0 145 FUNC LOCAL DEFAULT 13 __setcontext
it should call __setcontext directly.
* sysdeps/unix/sysv/linux/x86_64/__start_context.S
(__start_context): Call __setcontext directly.
|
|
Puerto Rico uses the metric system and has for a long time.
https://en.wikipedia.org/wiki/Puerto_Rican_units_of_measurement
|
|
Aragonese is classified as "an" so set it.
|
|
This patch follows up on the increase in minimum kernel version by
removing conditionals in non-x86, non-x86_64 kernel-features.h headers
that are now constant for all supported kernel versions.
* sysdeps/unix/sysv/linux/alpha/kernel-features.h
[__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.
[__LINUX_KERNEL_VERSION >= 0x030200]: Likewise.
[__LINUX_KERNEL_VERSION < 0x020621]: Remove conditional code.
* sysdeps/unix/sysv/linux/arm/kernel-features.h
[__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.
[__LINUX_KERNEL_VERSION >= 0x020624]: Likewise.
[__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
* sysdeps/unix/sysv/linux/hppa/kernel-features.h
[__LINUX_KERNEL_VERSION >= 0x020622]: Likewise.
[__LINUX_KERNEL_VERSION >= 0x030100]: Likewise.
[__LINUX_KERNEL_VERSION < 0x020625]: Remove conditional code.
* sysdeps/unix/sysv/linux/ia64/kernel-features.h
[__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.
[__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
* sysdeps/unix/sysv/linux/m68k/kernel-features.h
[__LINUX_KERNEL_VERSION < 0x030000]: Remove conditional code.
* sysdeps/unix/sysv/linux/microblaze/kernel-features.h
[__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.
[__LINUX_KERNEL_VERSION < 0x020621]: Remove conditional code.
[__LINUX_KERNEL_VERSION < 0x020625]: Likewise.
* sysdeps/unix/sysv/linux/mips/kernel-features.h
[__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.
[__LINUX_KERNEL_VERSION >= 0x030100]: Likewise.
[_MIPS_SIM == _ABIN32 && __LINUX_KERNEL_VERSION < 0x020623]:
Remove conditional code.
* sysdeps/unix/sysv/linux/powerpc/kernel-features.h
[__LINUX_KERNEL_VERSION >= 0x020625]: Make code unconditional.
[__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
* sysdeps/unix/sysv/linux/sh/kernel-features.h
[__LINUX_KERNEL_VERSION >= 0x020625]: Likewise.
[__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
[__LINUX_KERNEL_VERSION < 0x020625]: Remove conditional code.
* sysdeps/unix/sysv/linux/sparc/kernel-features.h
[__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.
[__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
* sysdeps/unix/sysv/linux/tile/kernel-features.h
[__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
|
|
|
|
In 1999 the project split "localedir" into "localedir" (path to compiled
locale archives) and "msgcatdir" (path to message catalogs). This
predates the 2002 change in the GNU Coding Standard to document the use
of "localedir" for the path to the message catalogs. It appears that
newlib, gcc, and several other projects also used "msgcatdir" at one
point or another in the past, and so it is in line with historical
precedent that glibc would also use "msgcatdir." However, given that the
GNU Coding Standard uses "localedir", we will switch to that for
consistency as a GNU project. Previous uses of --localedir didn't work
anyway (see bug 14259).
I am committing this patch in the understanding that nobody would object
to fixing #14259 as part of aligning our variable usage to the GNU
Coding Standard.
Given that previous "localedir" uses were converted to "complocaledir"
by [1], we can now convert "msgcatdir" to "localedir" and complete the
transition. With an addition to config.make.in we also fix bug 14259 and
allow users to specify the locale dependent data directory with
"--localedir" at configure time. There is still no way to control at
configure time the location of the *compiled* locale directory.
Tested on x86_64 with no regressions.
Tested using "--localedir" to specify alternate locale dependent data
directory and verified with "make install DESTDIR=/tmp/glibc".
[1] 90fe682d3067163aa773feecf497ef599429457a
|