| author | Nicholas Piggin <npiggin@gmail.com> | 2020-09-14 14:52:17 +1000 |
|---|---|---|
| committer | Michael Ellerman <mpe@ellerman.id.au> | 2020-09-16 12:24:37 +1000 |
| commit | 66acd46080bd9e5ad2be4b0eb1d498d5145d058e | |
| tree | 8587b1bb6eb4efab8b2636f1f07093d50204d0d5 (arch/powerpc/include/asm/mmu_context.h) | |
| parent | d53c3dfb23c45f7d4f910c3a3ca84bf0a99c6143 | |
powerpc: select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
powerpc uses IPIs in some situations to switch a kernel thread away
from a lazy tlb mm, which is subject to the TLB flushing race
described in the changelog introducing ARCH_WANT_IRQS_OFF_ACTIVATE_MM.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200914045219.3736466-3-npiggin@gmail.com
Diffstat (limited to 'arch/powerpc/include/asm/mmu_context.h')
 arch/powerpc/include/asm/mmu_context.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 7f3658a97384..e02aa793420b 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -244,7 +244,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
  */
 static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)
 {
-	switch_mm(prev, next, current);
+	switch_mm_irqs_off(prev, next, current);
 }
 
 /* We don't currently use enter_lazy_tlb() for anything */
```
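The diffstat on this page is limited to `arch/powerpc/include/asm/mmu_context.h`, so the Kconfig hunk implied by the subject line ("powerpc: select ARCH_WANT_IRQS_OFF_ACTIVATE_MM") is not shown here. Judging from the title alone, it would presumably add a `select` to the powerpc config entry, roughly like the sketch below (a reconstruction, not taken from this page; the exact placement within `arch/powerpc/Kconfig` is an assumption):

```
config PPC
	# hypothetical placement; the actual hunk is outside this filtered view
	select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
```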
