From fff9b6c7d26943a8eb32b58364b7ec6b9369746a Mon Sep 17 00:00:00 2001
From: Peter Zijlstra
Date: Fri, 24 May 2019 13:52:31 +0200
Subject: Documentation/atomic_t.txt: Clarify pure non-rmw usage

Clarify that pure non-RMW usage of atomic_t is pointless, there is
nothing 'magical' about atomic_set() / atomic_read().

This is something that seems to confuse people, because I happen upon it
semi-regularly.

Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Greg Kroah-Hartman
Acked-by: Will Deacon
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: https://lkml.kernel.org/r/20190524115231.GN2623@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar
---
 Documentation/atomic_t.txt | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

(limited to 'Documentation/atomic_t.txt')

diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index dca3fb0554db..89eae7f6b360 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -81,9 +81,11 @@ Non-RMW ops:
 
 The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
 implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
-smp_store_release() respectively.
+smp_store_release() respectively. Therefore, if you find yourself only using
+the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
+and are doing it wrong.
 
-The one detail to this is that atomic_set{}() should be observable to the RMW
+A subtle detail of atomic_set{}() is that it should be observable to the RMW
 ops. That is:
 
   C atomic-set
-- cgit v1.2.3


From 69d927bba39517d0980462efc051875b7f4db185 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra
Date: Wed, 24 Apr 2019 13:38:23 +0200
Subject: x86/atomic: Fix smp_mb__{before,after}_atomic()

Recent probing at the Linux Kernel Memory Model uncovered a 'surprise'.
Strongly ordered architectures where the atomic RmW primitive implies
full memory ordering and smp_mb__{before,after}_atomic() are a simple
barrier() (such as x86) fail for:

  *x = 1;
  atomic_inc(u);
  smp_mb__after_atomic();
  r0 = *y;

Because, while the atomic_inc() implies memory order, it (surprisingly)
does not provide a compiler barrier. This then allows the compiler to
re-order like so:

  atomic_inc(u);
  *x = 1;
  smp_mb__after_atomic();
  r0 = *y;

Which the CPU is then allowed to re-order (under TSO rules) like:

  atomic_inc(u);
  r0 = *y;
  *x = 1;

And this very much was not intended. Therefore strengthen the atomic
RmW ops to include a compiler barrier.

NOTE: atomic_{or,and,xor} and the bitops already had the compiler
barrier.

Signed-off-by: Peter Zijlstra (Intel)
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Signed-off-by: Ingo Molnar
---
 Documentation/atomic_t.txt | 3 +++
 1 file changed, 3 insertions(+)

(limited to 'Documentation/atomic_t.txt')

diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index 89eae7f6b360..d439a0fdbe47 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -196,6 +196,9 @@ These helper barriers exist because architectures have varying implicit
 ordering on their SMP atomic primitives. For example our TSO architectures
 provide full ordered atomics and these barriers are no-ops.
 
+NOTE: when the atomic RmW ops are fully ordered, they should also imply a
+compiler barrier.
+
 Thus:
 
   atomic_fetch_add();
-- cgit v1.2.3
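
As a hedged illustration of the compiler-barrier point in the second
changelog above: the sketch below is a hypothetical, userspace-style
example using GCC inline asm on x86, not the kernel's actual arch/x86
atomic implementation, and the function names are invented. It shows
that a LOCK-prefixed RMW is fully ordered at the CPU level yet does not
constrain the compiler until a "memory" clobber is added; such a
clobber is one way to provide the compiler barrier that the new
documentation NOTE expects of fully ordered atomic RmW ops.

  /*
   * Hypothetical sketch, not kernel code: a LOCK-prefixed increment
   * written as GCC inline asm.  The LOCK prefix makes the RMW fully
   * ordered on the CPU, but without a "memory" clobber the compiler
   * may still move surrounding plain accesses across it, which is the
   * reordering described in the changelog.
   */
  static inline void inc_ordered_no_compiler_barrier(int *u)
  {
          asm volatile("lock incl %0" : "+m" (*u));
  }

  /*
   * The "memory" clobber additionally makes the asm act as a compiler
   * barrier, so a preceding  *x = 1;  store can no longer be moved
   * past the increment by the compiler.
   */
  static inline void inc_ordered_with_compiler_barrier(int *u)
  {
          asm volatile("lock incl %0" : "+m" (*u) : : "memory");
  }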