author	Mark Rutland <mark.rutland@arm.com>	2026-04-07 14:16:46 +0100
committer	Catalin Marinas <catalin.marinas@arm.com>	2026-04-08 17:40:04 +0100
commit	2371bd83b3df9d833191fe58dadb0e69a794a1cd (patch)
tree	445010ecdc11420187fe0170b62abe8be17991f1
parent	041aa7a85390c99b1de86dc28eddcff0890d8186 (diff)
arm64: entry: Don't preempt with SError or Debug masked
On arm64, involuntary kernel preemption has been subtly broken since the
move to the generic irqentry code. When preemption occurs, the new task
may run with SError and Debug exceptions masked unexpectedly, leading to
a loss of RAS events, breakpoints, watchpoints, and single-step
exceptions.

Prior to moving to the generic irqentry code, involuntary preemption of
kernel mode would only occur when returning from regular interrupts, in
a state where interrupts were masked and all other arm64-specific
exceptions (SError, Debug, and pseudo-NMI) were unmasked. This is the
only state in which it is valid to switch tasks.

As part of moving to the generic irqentry code, the involuntary
preemption logic was moved such that involuntary preemption could occur
when returning from any (non-NMI) exception. As most exception handlers
mask all arm64-specific exceptions before this point, preemption could
occur in a state where arm64-specific exceptions were masked. This is
not a valid state to switch tasks, and resulted in the loss of
exceptions described above.

As a temporary bodge, avoid the loss of exceptions by avoiding
involuntary preemption when SError and/or Debug exceptions are masked.
Practically speaking this means that involuntary preemption will only
occur when returning from regular interrupts, as was the case before
moving to the generic irqentry code.

Fixes: 99eb057ccd67 ("arm64: entry: Move arm64_preempt_schedule_irq() into __exit_to_kernel_mode()")
Reported-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Jinjie Ruan <ruanjinjie@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Jinjie Ruan <ruanjinjie@huawei.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
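The decision logic introduced by this patch can be modelled in isolation as a small sketch. This is not the kernel code itself (which reads PSTATE via `read_sysreg(daif)` and uses `system_uses_irq_prio_masking()`); here the DAIF value and the pNMI capability are passed in as hypothetical parameters so the two cases can be exercised directly. The `PSR_*` bit values mirror the kernel's PSTATE definitions.

```c
#include <stdbool.h>
#include <stdint.h>

/* PSTATE mask bits, mirroring the kernel's definitions in
 * arch/arm64/include/uapi/asm/ptrace.h:
 * D = Debug, A = SError, I = IRQ, F = FIQ. */
#define PSR_F_BIT 0x00000040ULL
#define PSR_I_BIT 0x00000080ULL
#define PSR_A_BIT 0x00000100ULL
#define PSR_D_BIT 0x00000200ULL

/*
 * Sketch of the patched arch_irqentry_exit_need_resched() condition:
 *
 * - With GIC priority masking (pNMI), DAIF.DA are cleared on IRQ/FIQ
 *   entry and DAIF.IF are cleared by the irqchip driver for normal
 *   IRQs, so *any* set DAIF bit means we handled an NMI: don't preempt.
 *
 * - Without priority masking, a masked Debug or SError exception means
 *   this is not a return from a regular interrupt, which is the only
 *   state where switching tasks is valid: don't preempt.
 */
static bool may_resched(uint64_t daif, bool irq_prio_masking)
{
	if (irq_prio_masking)
		return daif == 0;

	return !(daif & (PSR_D_BIT | PSR_A_BIT));
}
```

For example, a regular IRQ return (only DAIF.IF masked, no pNMI) still permits preemption, while a context with Debug or SError masked does not.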
-rw-r--r--	arch/arm64/include/asm/entry-common.h	21
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/include/asm/entry-common.h b/arch/arm64/include/asm/entry-common.h
index cab8cd78f693..20f0a7c7bde1 100644
--- a/arch/arm64/include/asm/entry-common.h
+++ b/arch/arm64/include/asm/entry-common.h
@@ -29,14 +29,19 @@ static __always_inline void arch_exit_to_user_mode_work(struct pt_regs *regs,
 static inline bool arch_irqentry_exit_need_resched(void)
 {
-	/*
-	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
-	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
-	 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
-	 * DAIF we must have handled an NMI, so skip preemption.
-	 */
-	if (system_uses_irq_prio_masking() && read_sysreg(daif))
-		return false;
+	if (system_uses_irq_prio_masking()) {
+		/*
+		 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
+		 * priority masking is used the GIC irqchip driver will clear DAIF.IF
+		 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
+		 * DAIF we must have handled an NMI, so skip preemption.
+		 */
+		if (read_sysreg(daif))
+			return false;
+	} else {
+		if (read_sysreg(daif) & (PSR_D_BIT | PSR_A_BIT))
+			return false;
+	}
 
 	/*
 	 * Preempting a task from an IRQ means we leave copies of PSTATE