author     Michael S. Tsirkin <mst@redhat.com>  2016-01-28 19:02:37 +0200
committer  Ingo Molnar <mingo@kernel.org>       2016-01-29 09:40:10 +0100
commit     e37cee133c72c9529f74a20d9b7eb3b6dfb928b5
tree       ed19bb96ae62fdd3efce33ca2551249e0c1392dd
parent     bd922477d9350a3006d73dabb241400e6c4181b0
locking/x86: Drop a comment left over from X86_OOSTORE
The comment about wmb() not being a NOP on non-Intel CPUs is left over from
before the following commit:

  09df7c4c8097 ("x86: Remove CONFIG_X86_OOSTORE")

It makes no sense now: in particular, wmb() is not a NOP even for regular
Intel CPUs, because of weird use-cases such as dealing with WC memory.

Drop this comment.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: virtualization <virtualization@lists.linux-foundation.org>
Link: http://lkml.kernel.org/r/1453921746-16178-3-git-send-email-mst@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
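[Editor's note, not part of the patch: a minimal sketch of the WC-memory
use-case the commit message alludes to, where wmb() must order plain CPU
stores to a write-combining mapping against a later doorbell write. The
device, structure, function and register offset names below are invented
for illustration; only wmb(), writel(), __raw_writel() and ioremap_wc()
are real kernel interfaces.]

/*
 * Illustrative sketch only -- not from this patch. A hypothetical driver
 * fills a write-combining (WC) mapped ring and then rings a doorbell;
 * wmb() (sfence on SSE-capable x86) flushes the CPU's write-combining
 * buffers so the device sees the descriptor before the doorbell write.
 */
#include <linux/io.h>
#include <linux/types.h>

struct my_dev {
	void __iomem *wc_ring;	/* obtained via ioremap_wc() */
	void __iomem *regs;	/* regular uncached MMIO registers */
};

#define MY_DEV_DOORBELL	0x10	/* made-up register offset */

static void my_dev_post_desc(struct my_dev *dev, const u32 *desc, int n)
{
	int i;

	/* Plain stores to WC memory can be buffered and combined. */
	for (i = 0; i < n; i++)
		__raw_writel(desc[i], dev->wc_ring + i * sizeof(u32));

	/* Flush WC buffers before telling the device to look at the ring. */
	wmb();

	writel(1, dev->regs + MY_DEV_DOORBELL);
}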
-rw-r--r--  arch/x86/include/asm/barrier.h | 4 ----
1 file changed, 0 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index a65bdb10246a..a29174599a98 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -11,10 +11,6 @@
 */
 #ifdef CONFIG_X86_32
-/*
- * Some non-Intel clones support out of order store. wmb() ceases to be a
- * nop for these.
- */
 #define mb() asm volatile(ALTERNATIVE("lock; addl $0,0(%%esp)", "mfence", \
				       X86_FEATURE_XMM2) ::: "memory", "cc")
 #define rmb() asm volatile(ALTERNATIVE("lock; addl $0,0(%%esp)", "lfence", \