|
And make it always supported. The configure option was added in glibc
2.25, and some features require it (such as the hwcap mask, huge page
support, and lock elision tuning). It also simplifies the build
permutations.
Changes from v1:
* Remove glibc.rtld.dynamic_sort changes, it is orthogonal and needs
more discussion.
* Clean up more code.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
|
|
They are both used by __libc_freeres to free all malloc-allocated
library resources, to help tools like mtrace or valgrind with
memory-leak tracking.
The current scheme uses assembly markers and linker script entries
to consolidate the free-routine function pointers in the RELRO segment
and the to-be-freed buffers in BSS.
This patch changes it to use specific free functions for
libc_freeres_ptrs buffers and to call the function pointer array
directly with call_function_static_weak.
This allows the removal of both the internal macros and the linker
script sections.
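A minimal sketch of the new pattern (the buffer and function names
here are hypothetical; call_function_static_weak is the glibc-internal
macro named above):
  /* Previously declared as: libc_freeres_ptr (static void *buffer);  */
  static void *buffer;

  /* Explicit free function replacing the linker-script magic.  */
  void
  __libc_buffer_freeres (void)
  {
    free (buffer);
  }

  /* Invoked from __libc_freeres:  */
  call_function_static_weak (__libc_buffer_freeres);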
Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
|
|
It became unused with the removal of the assembler implementation of the
pthread functions.
|
|
|
|
Currently glibc uses in_time_t_range to detect time_t overflow,
and if it occurs it falls back to the 64-bit syscall version.
The function name is confusing because internally time_t might be
either 32 or 64 bits (depending on __TIMESIZE).
This patch refactors in_time_t_range by replacing it with
in_int32_t_range for the case of checking whether the 64-bit time_t
syscall should be used.
in_time_t_range is now used only to detect overflow of the
syscall return value.
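A sketch of the check, assuming the obvious implementation (the
actual code may differ):
  #include <stdint.h>
  #include <stdbool.h>

  /* True if the value fits in 32 bits, i.e. the 32-bit time_t
     syscall can be used at all.  */
  static inline bool
  in_int32_t_range (int64_t t)
  {
    int32_t s = t;
    return s == t;
  }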
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
The old exception handling implementation used function interposition
to replace the dynamic loader implementation (no TLS support) with the
libc implementation (TLS support). This results in problems if the
link order between the dynamic loader and libc is reversed (bug 25486).
The new implementation moves the entire implementation of the
exception handling functions back into the dynamic loader, using
THREAD_GETMEM and THREAD_SETMEM for thread-local data support.
This depends on Hurd support for these macros, added in commit
b65a82e4e757c1e6cb7073916 ("hurd: Add THREAD_GET/SETMEM/_NC").
One small obstacle is that the exception handling facilities are used
before the TCB has been set up, so a check is needed if the TCB is
available. If not, a regular global variable is used to store the
exception handling information.
Also rename dl-error.c to dl-catch.c, to avoid confusion with the
dlerror function.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
|
|
clang complains that libc_hidden_data_def (__nptl_threads_events)
creates an invalid alias:
pthread_create.c:50:1: error: alias must point to a defined variable or function
libc_hidden_data_def (__nptl_threads_events)
^
../include/libc-symbols.h:621:37: note: expanded from macro
'libc_hidden_data_def'
It seems that clang requires that a proper prototype is defined prior
to the hidden alias creation.
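A sketch of the shape clang accepts (assuming the variable's actual
type from <thread_db.h>; the real fix may differ in detail):
  /* Declaration visible first ...  */
  extern td_thr_events_t __nptl_threads_events;
  /* ... then the definition the hidden alias can point to:  */
  td_thr_events_t __nptl_threads_events;
  libc_hidden_data_def (__nptl_threads_events)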
Reviewed-by: Fangrui Song <maskray@google.com>
|
|
The current macro uses the pid as a signed value, which triggers a
compiler warning for process and thread timers. Replace
MAKE_PROCESS_CPUCLOCK with a static inline function that expects the
pid as unsigned. This is similar to what Linux does internally.
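A sketch of the replacement, assuming the Linux clockid encoding
(clock type in the low 3 bits, inverted pid above them):
  static inline clockid_t
  make_process_cpuclock (unsigned int pid, clockid_t clock)
  {
    return ((~pid) << 3) | clock;
  }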
Checked on x86_64-linux-gnu.
Reviewed-by: Arjun Shankar <arjun@redhat.com>
|
|
Use <support/test-driver.c> and replace pthread calls with their
xpthread equivalents.
Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Rename atomic_exchange_rel/acq to use atomic_exchange_release/acquire
since these map to the standard C11 atomic builtins.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Replace atomic_decrement_and_test with atomic_fetch_add_relaxed.
These are simple counters which do not protect any shared data from
concurrent accesses. Also remove the unused file cond-perf.c.
Passes regress on AArch64.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Replace atomic_increment and atomic_increment_val with atomic_fetch_add_relaxed.
One case in sem_post.c uses release semantics (see comment above it).
The others are simple counters and do not protect any shared data from
concurrent accesses.
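The change, sketched on a plain counter (both macros are
glibc-internal, from include/atomic.h):
  atomic_increment (&counter);               /* old-style macro */
  atomic_fetch_add_relaxed (&counter, 1);    /* C11-style replacement */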
Passes regress on AArch64.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Replace the 4 uses of atomic_and and atomic_or with
atomic_fetch_and_acquire and atomic_fetch_or_acquire. This preserves
the existing implied semantics; however, relaxed MO may be correct for
the FUTEX_OWNER_DIED accesses.
Passes regress on AArch64.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
The Z modifier is a nonstandard synonym for z (it predates z itself),
and compilers might issue a warning for the invalid conversion
specifier.
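For example:
  printf ("%Zu bytes\n", sizeof (buf));   /* nonstandard Z; may warn */
  printf ("%zu bytes\n", sizeof (buf));   /* standard C99 z */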
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
Replace the 3 uses of atomic_bit_set and atomic_bit_test_set with
atomic_fetch_or_relaxed. Using relaxed MO is correct since the
atomics are used to ensure memory is released only once.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
Replace atomic_decrement and atomic_decrement_val with
atomic_fetch_add_relaxed.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
The implementation is based on scalar Chacha20 with per-thread cache.
It uses getrandom or /dev/urandom as fallback to get the initial entropy,
and reseeds the internal state on every 16MB of consumed buffer.
To improve performance and lower memory consumption the per-thread
cache is allocated lazily on the first call to an arc4random function,
and if the memory allocation fails getentropy or /dev/urandom is used
as a fallback.
The cache is also cleared on thread exit iff it was initialized (so if
arc4random is not called it is not touched).
Although it is lock-free, arc4random is still not async-signal-safe
(the per thread state is not updated atomically).
The ChaCha20 implementation is based on RFC8439 [1], omitting the final
XOR of the keystream with the plaintext because the plaintext is a
stream of zeros. This strategy is similar to what OpenBSD arc4random
does.
The arc4random_uniform is based on previous work by Florian Weimer.
The algorithm follows Jérémie Lumbroso's paper Optimal Discrete
Uniform Generation from Coin Flips, and Applications (2013) [2], which
credits Donald E. Knuth and Andrew C. Yao, The Complexity of
Nonuniform Random Number Generation (1976), for solving the general
case.
The main advantage of this method is that the unit of randomness is
not the uniform random variable (uint32_t), but a random bit. It
optimizes the internal buffer sampling by initially consuming a 32-bit
random variable and then sampling byte per byte. Depending on the
upper bound requested, it might lead to better CPU utilization.
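A hedged sketch of the bit-by-bit sampling idea (Lumbroso's "fast
dice roller"; not glibc's actual code, and random_bit is an assumed
helper returning one fresh random bit):
  uint32_t
  uniform_sketch (uint32_t n)   /* result in [0, n), n > 0 */
  {
    uint64_t v = 1;   /* range covered by the bits consumed so far */
    uint64_t c = 0;   /* value built from the bits consumed so far */
    for (;;)
      {
        v <<= 1;
        c = (c << 1) | random_bit ();
        if (v >= n)
          {
            if (c < n)
              return c;
            v -= n;
            c -= n;
          }
      }
  }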
Checked on x86_64-linux-gnu, aarch64-linux, and powerpc64le-linux-gnu.
Co-authored-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Yann Droneaud <ydroneaud@opteya.com>
[1] https://datatracker.ietf.org/doc/html/rfc8439
[2] https://arxiv.org/pdf/1304.1916.pdf
|
|
This was due to a wrong revert done in commit 404656009b459658.
Checked on x86_64-linux-gnu and i686-linux-gnu.
|
|
And also fix the SINGLE_THREAD_P macro for SINGLE_THREAD_BY_GLOBAL:
the single-thread.h header was included in the wrong order, since the
define needs to come before including sysdeps/unix/sysdep.h. The
macro is now moved to a per-arch single-thread.h header.
SINGLE_THREAD_P is used in some more places.
Checked on aarch64-linux-gnu and x86_64-linux-gnu.
|
|
The main driver is to optimize the internal usage and the required
size when sigset_t is embedded in other data structures. On Linux,
the currently supported signal set requires up to 8 bytes (16 on
mips), much lower than the user-visible sigset_t (128 bytes).
A new internal type internal_sigset_t is added, along with the
functions to operate on it, similar to the ones for sigset_t.
internal-signals.h is also refactored to remove unused functions.
Besides smaller stack usage in some functions (posix_spawn, abort),
it lowers struct pthread by about 120 bytes (112 on mips).
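A hedged sketch of the new type (the exact layout and helper names
may differ from the patch; __NSIG_WORDS is assumed here):
  /* Sized for the kernel's signal set, not the 1024-bit POSIX one.  */
  typedef struct
  {
    unsigned long int __val[__NSIG_WORDS];
  } internal_sigset_t;

  static inline void
  internal_sigemptyset (internal_sigset_t *set)
  {
    memset (set, 0, sizeof (*set));
  }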
Checked on x86_64-linux-gnu.
Reviewed-by: Arjun Shankar <arjun@redhat.com>
|
|
It removes both pid_ununsed and cpuclock_offset_ununsed, saving about
12 bytes from struct pthread.
Reviewed-by: Arjun Shankar <arjun@redhat.com>
|
|
By adding an internal alias to avoid the GOT indirection.
On some architectures, __libc_single_threaded may be accessed through
copy relocations, and thus the copies also need to be updated.
This is done by adding a new internal macro,
libc_hidden_data_{proto,def}, which has an additional argument that
specifies the alias name (instead of the default __GI_ one).
Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Fangrui Song <maskray@google.com>
|
|
This was due to a wrong revert done in commit 404656009b459658.
Checked on x86_64-linux-gnu.
|
|
When multiple threads are waiting for a lock at the same time, once
the lock owner releases the lock, the waiters all see it as available
and all try to lock it, which may cause an expensive CAS storm.
Binary exponential backoff with random jitter is introduced. As the
number of try-lock attempts increases, it becomes more likely that
many threads are competing for the adaptive mutex lock, so the wait
time is increased exponentially. A random jitter is also added to
avoid synchronized try-locks from other threads.
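A hedged sketch of one backoff step (not the exact glibc code;
get_jitter and MAX_BACKOFF_EXP are assumed helpers/limits):
  static inline void
  backoff (unsigned int *exp)
  {
    /* Exponential window plus random jitter.  */
    unsigned int spins = (1U << *exp) + get_jitter () % (1U << *exp);
    while (spins-- > 0)
      atomic_spin_nop ();   /* e.g. the pause instruction on x86 */
    if (*exp < MAX_BACKOFF_EXP)
      (*exp)++;
  }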
v2: Remove read-check before try-lock for performance.
v3:
1. Restore read-check since it works well on some platforms.
2. Make backoff arch dependent, and enable it for x86_64.
3. Limit max backoff to reduce latency in large critical section.
v4: Fix strict-prototypes error in sysdeps/nptl/pthread_mutex_backoff.h
v5: Commit log updated for regression in large critical section.
Result of pthread-mutex-locks bench
Test Platform: Xeon 8280L (2 socket, 112 CPUs in total)
First Row: thread number
First Col: critical section length
Values: backoff vs upstream (time-based ratio; lower is better)
non-critical-length: 1
1 2 4 8 16 32 64 112 140
0 0.99 0.58 0.52 0.49 0.43 0.44 0.46 0.52 0.54
1 0.98 0.43 0.56 0.50 0.44 0.45 0.50 0.56 0.57
2 0.99 0.41 0.57 0.51 0.45 0.47 0.48 0.60 0.61
4 0.99 0.45 0.59 0.53 0.48 0.49 0.52 0.64 0.65
8 1.00 0.66 0.71 0.63 0.56 0.59 0.66 0.72 0.71
16 0.97 0.78 0.91 0.73 0.67 0.70 0.79 0.80 0.80
32 0.95 1.17 0.98 0.87 0.82 0.86 0.89 0.90 0.90
64 0.96 0.95 1.01 1.01 0.98 1.00 1.03 0.99 0.99
128 0.99 1.01 1.01 1.17 1.08 1.12 1.02 0.97 1.02
non-critical-length: 32
1 2 4 8 16 32 64 112 140
0 1.03 0.97 0.75 0.65 0.58 0.58 0.56 0.70 0.70
1 0.94 0.95 0.76 0.65 0.58 0.58 0.61 0.71 0.72
2 0.97 0.96 0.77 0.66 0.58 0.59 0.62 0.74 0.74
4 0.99 0.96 0.78 0.66 0.60 0.61 0.66 0.76 0.77
8 0.99 0.99 0.84 0.70 0.64 0.66 0.71 0.80 0.80
16 0.98 0.97 0.95 0.76 0.70 0.73 0.81 0.85 0.84
32 1.04 1.12 1.04 0.89 0.82 0.86 0.93 0.91 0.91
64 0.99 1.15 1.07 1.00 0.99 1.01 1.05 0.99 0.99
128 1.00 1.21 1.20 1.22 1.25 1.31 1.12 1.10 0.99
non-critical-length: 128
1 2 4 8 16 32 64 112 140
0 1.02 1.00 0.99 0.67 0.61 0.61 0.61 0.74 0.73
1 0.95 0.99 1.00 0.68 0.61 0.60 0.60 0.74 0.74
2 1.00 1.04 1.00 0.68 0.59 0.61 0.65 0.76 0.76
4 1.00 0.96 0.98 0.70 0.63 0.63 0.67 0.78 0.77
8 1.01 1.02 0.89 0.73 0.65 0.67 0.71 0.81 0.80
16 0.99 0.96 0.96 0.79 0.71 0.73 0.80 0.84 0.84
32 0.99 0.95 1.05 0.89 0.84 0.85 0.94 0.92 0.91
64 1.00 0.99 1.16 1.04 1.00 1.02 1.06 0.99 0.99
128 1.00 1.06 0.98 1.14 1.39 1.26 1.08 1.02 0.98
There is a regression for large critical sections, but the adaptive
mutex is aimed at "quick" locks; small critical sections are more
common when users choose the adaptive pthread_mutex.
Signed-off-by: Wangyang Guo <wangyang.guo@intel.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
|
|
The 404656009b reversion did not set up the atomic loop to set the
cancel bits correctly. The fix is essentially what pthread_cancel
did prior to 26cfbb7162ad.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
|
|
Some Linux interfaces never restart after being interrupted by a signal
handler, regardless of the use of SA_RESTART [1]. It means that for
pthread cancellation, if the target thread disables cancellation with
pthread_setcancelstate and calls such interfaces (like poll or select),
it should not see spurious EINTR failures due to the internal
SIGCANCEL signal.
However, recent changes made pthread_cancel always send the internal
signal, regardless of the target thread's cancellation status or type.
To fix it, the previous semantics are restored: the cancel signal
is only sent if the target thread has cancellation enabled in
asynchronous mode.
The cancel state and cancel type are moved back to cancelhandling,
and atomic operations are used to synchronize between threads. The
patch essentially reverts the following commits:
8c1c0aae20 nptl: Move cancel type out of cancelhandling
2b51742531 nptl: Move cancel state out of cancelhandling
26cfbb7162 nptl: Remove CANCELING_BITMASK
However, I changed the atomic operations to follow the internal C11
semantics and removed the macro usage; this simplifies the resulting
code a bit (and removes another usage of the old atomic macros).
Checked on x86_64-linux-gnu, i686-linux-gnu, aarch64-linux-gnu,
and powerpc64-linux-gnu.
[1] https://man7.org/linux/man-pages/man7/signal.7.html
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
If the build itself is run in a container, we may not be able to
fully set up a nested container for test-container testing.
Notable is the mounting of /proc, since it's critical that it
be mounted from within the same PID namespace as its users, and
thus cannot be bind mounted from outside the container like other
mounts.
This patch defaults to using the parent's PID namespace instead of
creating a new one, as this is more likely to be allowed.
If the test needs an isolated PID namespace, it should add the "pidns"
command to its init script.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
|
|
And also use libsupport.
Checked on x86_64-linux-gnu and i686-linux-gnu.
|
|
_STACK_GROWS_DOWN is defined to 0 when the stack grows up. The
code in unwind.c used `#ifdef _STACK_GROWS_DOWN' to select the
stack-grows-down definition of FRAME_LEFT. As a result, the
_STACK_GROWS_DOWN definition was always selected and cleanups were
incorrectly sequenced when the stack grows up.
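The failing pattern, for illustration:
  #ifdef _STACK_GROWS_DOWN   /* wrong: the macro is always defined,
                                so this matches even when it is 0 */
  #if _STACK_GROWS_DOWN      /* right: tests the value */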
|
|
For audit modules and dependencies with initial-exec TLS, we cannot
set the initial TLS image on default loader initialization because it
would already be set by the audit setup. However, subsequent thread
creation would need to follow the default behaviour.
This patch fixes it by setting the l_auditing link_map field not only
for the audit modules, but also for all their dependencies. This is
used in _dl_allocate_tls_init to avoid the static TLS initialization
at load time.
Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
|
|
The commit:
"Add LLL_MUTEX_READ_LOCK [BZ #28537]"
SHA1: d672a98a1af106bd68deb15576710cd61363f7a6
introduced LLL_MUTEX_READ_LOCK to skip the CAS in the spinlock loop
if the atomic load fails. But "continue" inside a do-while loop
does not skip the evaluation of the controlling expression, so the
CAS is not skipped.
Replace do-while with while and skip LLL_MUTEX_TRYLOCK if
LLL_MUTEX_READ_LOCK fails.
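An illustration of the C semantics involved (try_lock_cas and the
lock word are simplified stand-ins):
  do
    {
      if (atomic_load_relaxed (&lock) != 0)
        continue;   /* jumps to the while expression below ...  */
    }
  while (try_lock_cas (&lock) != 0);   /* ... so the CAS still runs */

  /* Fixed shape: wait on plain loads, only then retry the CAS.  */
  while (try_lock_cas (&lock) != 0)
    while (atomic_load_relaxed (&lock) != 0)
      atomic_spin_nop ();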
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
|
|
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO:
warning: copyright statement not found" for each of 7061 files FOO.
I then removed trailing white space from math/tgmath.h,
support/tst-support-open-dev-null-range.c, and
sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following
obscure pre-commit check failure diagnostics from Savannah. I don't
know why I run into these diagnostics whereas others evidently do not.
remote: *** 912-#endif
remote: *** 913:
remote: *** 914-
remote: *** error: lines with trailing whitespace found
...
remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines
|
|
This simplifies the application programming model.
Browser sandboxes have already been fixed:
Sandbox is incompatible with rseq registration
<https://bugzilla.mozilla.org/show_bug.cgi?id=1651701>
Allow rseq in the Linux sandboxes. r=gcp
<https://hg.mozilla.org/mozilla-central/rev/042425712eb1>
Sandbox needs to support rseq system call
<https://bugs.chromium.org/p/chromium/issues/detail?id=1104160>
Linux sandbox: Allow rseq(2)
<https://chromium.googlesource.com/chromium/src.git/+/230675d9ac8f1>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
|
|
This tunable allows applications to register the rseq area instead
of glibc.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
|
|
The rseq area is placed directly into struct pthread. rseq
registration failure is not treated as an error, so it is possible
that threads run with inconsistent registration status.
<sys/rseq.h> is not yet installed as a public header.
Co-Authored-By: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
|
|
The function was renamed to __atomic_wide_counter_load_relaxed
in commit 8bd336a00a5311bf7a9e99b3b0e9f01ff5faa74b ("nptl: Extract
<bits/atomic_wide_counter.h> from pthread_cond_common.c").
|
|
rseq support will use a 32-byte aligned field in struct pthread,
so the whole struct needs to have at least that alignment.
nptl/tst-tls3mod.c uses TCB_ALIGNMENT, therefore include <descr.h>
to obtain the fallback definition.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
|
|
__libc_signal_restore_set was in the wrong place: It also ran
when setjmp returned the second time (after pthread_exit or
pthread_cancel). This is observable with blocked pending
signals during thread exit.
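The shape of the fix, sketched (names simplified):
  if (setjmp (unwind_buf) == 0)
    {
      /* First return: now it is safe to unblock the startup
         signal mask and run the thread routine.  */
      __libc_signal_restore_set (&original_sigmask);
      /* ... call the user's start routine ...  */
    }
  /* Second return (pthread_exit/pthread_cancel): the mask must
     stay blocked, so no restore here.  */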
Fixes commit b3cae39dcbfa2432b3f3aa28854d8ac57f0de1b8
("nptl: Start new threads with all signals blocked [BZ #25098]").
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
And make it an installed header. This addresses a few aliasing
violations (which do not seem to result in miscompilation due to
the use of atomics), and also enables use of wide counters in other
parts of the library.
The debug output in nptl/tst-cond22 has been adjusted to print
the 32-bit values instead because it avoids a big-endian/little-endian
difference.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Update
commit 49302b8fdf9103b6fc0a398678668a22fa19574c
Author: H.J. Lu <hjl.tools@gmail.com>
Date: Thu Nov 11 06:54:01 2021 -0800
Avoid extra load with CAS in __pthread_mutex_clocklock_common [BZ #28537]
Replace boolean CAS with value CAS to avoid the extra load.
and
commit 0b82747dc48d5bf0871bdc6da8cb6eec1256355f
Author: H.J. Lu <hjl.tools@gmail.com>
Date: Thu Nov 11 06:31:51 2021 -0800
Avoid extra load with CAS in __pthread_mutex_lock_full [BZ #28537]
Replace boolean CAS with value CAS to avoid the extra load.
by moving assignment out of the CAS condition.
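The pattern, sketched with the glibc atomic macros (handle_contention
is a stand-in):
  /* Value CAS with the assignment buried in the condition:  */
  if ((oldval = atomic_compare_and_exchange_val_acq (&futex, newval, 0))
      != 0)
    handle_contention ();

  /* After this change, with the assignment moved out:  */
  oldval = atomic_compare_and_exchange_val_acq (&futex, newval, 0);
  if (oldval != 0)
    handle_contention ();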
|
|
The CAS instruction is expensive. From the x86 CPU's point of view, getting
a cache line for writing is more expensive than reading. See Appendix
A.2 Spinlock in:
https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf
The full compare and swap will grab the cache line exclusive and cause
excessive cache line bouncing.
Add LLL_MUTEX_READ_LOCK to do an atomic load and skip the CAS in the
spinlock loop if the compare is likely to fail, reducing cache line
bouncing on contended locks.
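A sketch of the macro (the real definition wraps the mutex's lock
word):
  #define LLL_MUTEX_READ_LOCK(mutex) \
    atomic_load_relaxed (&(mutex)->__data.__lock)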
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
|
|
Replace boolean CAS with value CAS to avoid the extra load.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
|
|
Replace boolean CAS with value CAS to avoid the extra load.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
|
|
The check for waiting for the pidfile to be created looks wrong. At the
point when ACCESS is run the pid file will always be created and
accessible as it is created during DO_PREPARE. This means that thread
cancellation may be performed before the pid is written to the pidfile.
This was found to be flaky when testing on my OpenRISC platform.
Fix this by using the semaphore to wait for pidfile pid write
completion.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
The choice between the kill vs tgkill system calls is not just about
the TID reuse race, but also about whether the signal is sent to the
whole process (and any thread in it) or to a specific thread.
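The scoping difference, per the kill(2) and tgkill(2) man pages:
  kill (pid, sig);          /* process-directed: any eligible thread
                               in the process may receive it */
  tgkill (pid, tid, sig);   /* thread-directed: delivered to exactly
                               the thread tid */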
This was caught by the openposix test suite:
LTP: openposix test suite - FAIL: SIGUSR1 is member of new thread pendingset.
<https://gitlab.com/cki-project/kernel-tests/-/issues/764>
Fixes commit 526c3cf11ee9367344b6b15d669e4c3cb461a2be ("nptl: Fix race
between pthread_kill and thread exit (bug 12889)").
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
|
|
Linux added FUTEX_LOCK_PI2 to support clock selection
(commit bf22a6976897977b0a3f1aeba6823c959fc4fdae). With the new
operation we can now properly support CLOCK_MONOTONIC for
pthread_mutex_clocklock with priority inheritance. If the kernel
does not support it, EINVAL is returned instead.
The difference is that the futex operation will be issued and the
kernel will advertise the missing support (instead of a hard-coded
error return).
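A hedged sketch of the call (error handling simplified; glibc's
actual code uses the internal syscall wrappers):
  /* FUTEX_LOCK_PI2 waits on CLOCK_MONOTONIC by default; the
     FUTEX_CLOCK_REALTIME flag selects CLOCK_REALTIME instead.  */
  long int ret = syscall (SYS_futex, &mutex->__data.__lock,
                          FUTEX_LOCK_PI2, 0, &abstime);
  if (ret == -1 && errno == ENOSYS)
    return EINVAL;   /* kernel without FUTEX_LOCK_PI2 (pre-5.14) */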
Checked on x86_64-linux-gnu and i686-linux-gnu on Linux 5.14, 5.11,
and 4.15.
|
|
This patch uses the new futex PI operation provided by Linux v5.14
when it is required.
futex_lock_pi64() is moved to futex-internal.c (since it is used in
two different places and its code size might be large depending on
the kernel configuration), and clockid is added as an argument.
Co-authored-by: Kurt Kanzenbach <kurt@linutronix.de>
|
|
As part of the fix for bug 12889, signals are blocked during
thread exit, so that application code cannot run on the thread that
is about to exit. This would cause problems if the application
expected signals to be delivered after the signal handler revealed
the thread to still exist, even though pthread_kill can no longer be used
to send signals to it. However, glibc internally uses the SIGSETXID
signal in a way that is incompatible with signal blocking, due to the
way the setxid handshake delays thread exit until the setxid operation
has completed. With a blocked SIGSETXID, the handshake can never
complete, causing a deadlock.
As a band-aid, restore the previous handshake protocol by not blocking
SIGSETXID during thread exit.
The new test sysdeps/pthread/tst-pthread-setuid-loop.c is based on
a downstream test by Martin Osvald.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
|
|
The fix for bug 19193 breaks some old applications which appear
to use pthread_kill to probe if a thread is still running, something
that is not supported by POSIX.
|
|
A new thread exit lock and flag are introduced. They are used to
detect that the thread is about to exit or has exited in
__pthread_kill_internal, and the signal is not sent in this case.
The test sysdeps/pthread/tst-pthread_cancel-select-loop.c is derived
from a downstream test originally written by Marek Polacek.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|