path: root/arch/m68k/coldfire
author	Juergen Gross <jgross@suse.com>	2018-08-21 17:37:55 +0200
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2018-09-15 09:40:40 +0200
commit	f46d2b99a6acd87d56822c600fd2587a37e4d56c (patch)
tree	f49cea400949f2916f10906c627eeb0ee37e225e /arch/m68k/coldfire
parent	98d122a4a74667ffc16d50baa086e9616fb44f28 (diff)
x86/pae: use 64 bit atomic xchg function in native_ptep_get_and_clear
commit b2d7a075a1ccef2fb321d595802190c8e9b39004 upstream.

Using only 32-bit writes for the pte will result in an intermediate
L1TF vulnerable PTE. When running as a Xen PV guest this will at once
switch the guest to shadow mode resulting in a loss of performance.

Use arch_atomic64_xchg() instead which will perform the requested
operation atomically with all 64 bits.

Some performance considerations according to:

https://software.intel.com/sites/default/files/managed/ad/dc/Intel-Xeon-Scalable-Processor-throughput-latency.pdf

The main number should be the latency, as there is no tight loop around
native_ptep_get_and_clear().

"lock cmpxchg8b" has a latency of 20 cycles, while "lock xchg" (with a
memory operand) isn't mentioned in that document. "lock xadd" (with xadd
having 3 cycles less latency than xchg) has a latency of 11, so we can
assume a latency of 14 for "lock xchg".

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
[ Atomic operations gained an arch_ prefix in 8bf705d13039
("locking/atomic/x86: Switch atomic.h to use atomic-instrumented.h")
so s/arch_atomic64_xchg/atomic64_xchg/ for backport.]
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
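Since this view is filtered to arch/m68k/coldfire and shows no diff, the
following is a minimal sketch of what the fixed x86 PAE helper looks like
after this change (in arch/x86/include/asm/pgtable-3level.h). It is
paraphrased from the upstream patch rather than copied from this tree, and
uses the atomic64_xchg() name per the backport note above:

	static inline pte_t native_ptep_get_and_clear(pte_t *ptep)
	{
		pte_t res;

		/*
		 * Read and clear all 64 bits of the PTE in one atomic
		 * operation. The previous code xchg'ed pte_low and then
		 * cleared pte_high with plain stores, leaving a transient
		 * PTE whose present bit was set while the high half was
		 * stale: the L1TF-vulnerable intermediate state described
		 * above.
		 */
		res.pte = (pteval_t)atomic64_xchg((atomic64_t *)ptep, 0);

		return res;
	}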
Diffstat (limited to 'arch/m68k/coldfire')
0 files changed, 0 insertions, 0 deletions