path: root/include/linux/rolling_buffer.h
author    Alexander Gordeev <agordeev@linux.ibm.com>    2025-12-15 15:03:10 +0000
committer Andrew Morton <akpm@linux-foundation.org>    2026-01-20 19:24:32 -0800
commit    58852f24f9566602340130804bf7f4474a3f5f2a (patch)
tree      cc5046662e75c36b92fde8857460912cc3c2d5b6 /include/linux/rolling_buffer.h
parent    2a912d440c6024148a25850c7c4066b152ec8750 (diff)
powerpc/64s: do not re-activate batched TLB flush
Patch series "Nesting support for lazy MMU mode", v6.

When the lazy MMU mode was introduced eons ago, it wasn't made clear whether such a sequence was legal:

    arch_enter_lazy_mmu_mode()
    ...
    arch_enter_lazy_mmu_mode()
    ...
    arch_leave_lazy_mmu_mode()
    ...
    arch_leave_lazy_mmu_mode()

It seems fair to say that nested calls to arch_{enter,leave}_lazy_mmu_mode() were not expected, and most architectures never explicitly supported them. Nesting does in fact occur in certain configurations, and avoiding it has proved difficult. This series therefore enables lazy_mmu sections to nest on all architectures.

Nesting is handled using a counter in task_struct (patch 8), like other stateless APIs such as pagefault_{disable,enable}(). This is fully handled in a new generic layer in <linux/pgtable.h>; the arch_* API remains unchanged. (A sketch of this counter scheme follows the tags below.)

A new pair of calls, lazy_mmu_mode_{pause,resume}(), is also introduced to allow functions that are called with the lazy MMU mode enabled to temporarily pause it, regardless of nesting.

An arch now opts in to using the lazy MMU mode by selecting CONFIG_ARCH_LAZY_MMU; this is more appropriate now that we have a generic API, especially with state conditionally added to task_struct.

This patch (of 14):

Since commit b9ef323ea168 ("powerpc/64s: Disable preemption in hash lazy mmu mode"), a task cannot be preempted while in lazy MMU mode. The batch re-activation code is therefore never called; remove it.

Link: https://lkml.kernel.org/r/20251215150323.2218608-1-kevin.brodsky@arm.com
Link: https://lkml.kernel.org/r/20251215150323.2218608-2-kevin.brodsky@arm.com
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Reviewed-by: Yeoreum Yun <yeoreum.yun@arm.com>
Cc: Andreas Larsson <andreas@gaisler.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: David S. Miller <davem@davemloft.net>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: levi.yun <yeoreum.yun@arm.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Cc: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
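To make the counter scheme above concrete, here is a minimal, self-contained userspace C sketch of the described semantics. Only the names arch_{enter,leave}_lazy_mmu_mode() and lazy_mmu_mode_{pause,resume}() come from the series description; the wrapper names lazy_mmu_mode_enable()/lazy_mmu_mode_disable(), the field lazy_mmu_count, and the struct task stand-in are illustrative assumptions, not the kernel's actual definitions.

    /*
     * Hedged sketch: models a nesting counter in a task structure plus
     * pause/resume calls.  Builds with any C99 compiler; not kernel code.
     */
    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct task {
            int  lazy_mmu_count;  /* nesting depth (assumed field name) */
            bool in_lazy_mmu;     /* models the arch-visible mode */
    };

    static struct task current_task;  /* stand-in for the kernel's "current" */

    /* Stubs standing in for the unchanged arch_* API. */
    static void arch_enter_lazy_mmu_mode(void) { current_task.in_lazy_mmu = true; }
    static void arch_leave_lazy_mmu_mode(void) { current_task.in_lazy_mmu = false; }

    /* Only the 0->1 and 1->0 transitions reach the arch hooks. */
    static void lazy_mmu_mode_enable(void)
    {
            if (current_task.lazy_mmu_count++ == 0)
                    arch_enter_lazy_mmu_mode();
    }

    static void lazy_mmu_mode_disable(void)
    {
            assert(current_task.lazy_mmu_count > 0);
            if (--current_task.lazy_mmu_count == 0)
                    arch_leave_lazy_mmu_mode();
    }

    /* Pause/resume leave and re-enter the mode regardless of depth. */
    static void lazy_mmu_mode_pause(void)
    {
            if (current_task.lazy_mmu_count > 0)
                    arch_leave_lazy_mmu_mode();
    }

    static void lazy_mmu_mode_resume(void)
    {
            if (current_task.lazy_mmu_count > 0)
                    arch_enter_lazy_mmu_mode();
    }

    int main(void)
    {
            lazy_mmu_mode_enable();   /* outermost: enters the mode */
            lazy_mmu_mode_enable();   /* nested: counter only */
            lazy_mmu_mode_pause();    /* mode off, depth preserved */
            printf("paused: mode=%d depth=%d\n",
                   current_task.in_lazy_mmu, current_task.lazy_mmu_count);
            lazy_mmu_mode_resume();
            lazy_mmu_mode_disable();  /* still nested: mode stays on */
            printf("nested exit: mode=%d depth=%d\n",
                   current_task.in_lazy_mmu, current_task.lazy_mmu_count);
            lazy_mmu_mode_disable();  /* outermost: leaves the mode */
            return 0;
    }

This mirrors the pagefault_{disable,enable}() pattern the series cites: the state lives in the task, so no cookie has to be threaded through callers, and only the outermost section transitions the hardware-facing mode.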
Diffstat (limited to 'include/linux/rolling_buffer.h')
0 files changed, 0 insertions, 0 deletions