path: root/arch/powerpc/kernel/stacktrace.c
Age  Commit message  Author
2021-09-22  powerpc/stacktrace: Include linux/delay.h  (Michal Suchanek)

[ Upstream commit a6cae77f1bc89368a4e2822afcddc45c3062d499 ]

Commit 7c6986ade69e ("powerpc/stacktrace: Fix spurious "stale" traces in raise_backtrace_ipi()") introduces a udelay() call without including the linux/delay.h header. This may happen to work on master, but the header that declares the function should be included nonetheless.

Fixes: 7c6986ade69e ("powerpc/stacktrace: Fix spurious "stale" traces in raise_backtrace_ipi()")
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210729180103.15578-1-msuchanek@suse.de
Signed-off-by: Sasha Levin <sashal@kernel.org>
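Editorial note: a minimal illustration of the rule being applied here, with a hypothetical function name (this is not code from the patch itself).

```c
#include <linux/delay.h>	/* declares udelay() */

/*
 * Hypothetical example: code that calls udelay() should include
 * <linux/delay.h> itself rather than rely on another header pulling it
 * in indirectly.
 */
static void backtrace_poll_delay(void)
{
	udelay(1);
}
```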
2021-07-14  powerpc/stacktrace: Fix spurious "stale" traces in raise_backtrace_ipi()  (Michael Ellerman)

commit 7c6986ade69e3c81bac831645bc72109cd798a80 upstream.

In raise_backtrace_ipi() we iterate through the cpumask of CPUs, sending each an IPI asking them to do a backtrace, but we don't wait for the backtrace to happen. We then iterate through the CPU mask again, and if any CPU hasn't done the backtrace and cleared itself from the mask, we print a trace on its behalf, noting that the trace may be "stale".

This works well enough when a CPU is not responding, because in that case it doesn't receive the IPI and the sending CPU is left to print the trace. But when all CPUs are responding we are left with a race between the sending and receiving CPUs; if the sending CPU wins the race then it will erroneously print a trace.

This leads to spurious "stale" traces from the sending CPU, which can then be interleaved messily with the output from the receiving CPU (note the CPU numbers), e.g.:

  [ 1658.929157][ C7] rcu: Stack dump where RCU GP kthread last ran:
  [ 1658.929223][ C7] Sending NMI from CPU 7 to CPUs 1:
  [ 1658.929303][ C1] NMI backtrace for cpu 1
  [ 1658.929303][ C7] CPU 1 didn't respond to backtrace IPI, inspecting paca.
  [ 1658.929362][ C1] CPU: 1 PID: 325 Comm: kworker/1:1H Tainted: G W E 5.13.0-rc2+ #46
  [ 1658.929405][ C7] irq_soft_mask: 0x01 in_mce: 0 in_nmi: 0 current: 325 (kworker/1:1H)
  [ 1658.929465][ C1] Workqueue: events_highpri test_work_fn [test_lockup]
  [ 1658.929549][ C7] Back trace of paca->saved_r1 (0xc0000000057fb400) (possibly stale):
  [ 1658.929592][ C1] NIP: c00000000002cf50 LR: c008000000820178 CTR: c00000000002cfa0

To fix it, change the logic so that the sending CPU waits 5s for the receiving CPU to print its trace. If the receiving CPU prints its trace successfully then the sending CPU just continues, avoiding any spurious "stale" trace.

This has the added benefit of allowing all CPUs to print their traces in order and avoids any interleaving of their output.

Fixes: 5cc05910f26e ("powerpc/64s: Wire up arch_trigger_cpumask_backtrace()")
Cc: stable@vger.kernel.org # v4.18+
Reported-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210625140408.3351173-1-mpe@ellerman.id.au
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
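Editorial note: the shape of the fix described above is roughly the following. This is a simplified sketch, not the upstream diff; send_backtrace_ipi() and dump_paca_backtrace() are hypothetical stand-ins for the real IPI and paca-dumping code.

```c
#include <linux/cpumask.h>
#include <linux/delay.h>
#include <linux/printk.h>
#include <linux/time.h>
#include <linux/types.h>

/* Hypothetical helpers standing in for the real IPI/paca code. */
static void send_backtrace_ipi(int cpu);
static void dump_paca_backtrace(int cpu);

/*
 * Sketch of the "wait before falling back" logic: give each CPU up to
 * 5 seconds to clear itself from the mask before printing a possibly
 * stale trace on its behalf.
 */
static void raise_backtrace_ipi_sketch(cpumask_t *mask)
{
	u64 delay_us;
	int cpu;

	for_each_cpu(cpu, mask) {
		send_backtrace_ipi(cpu);

		delay_us = 5 * USEC_PER_SEC;
		while (delay_us--) {
			if (!cpumask_test_cpu(cpu, mask))
				break;		/* the CPU printed its own trace */
			udelay(1);
		}

		if (cpumask_test_cpu(cpu, mask)) {
			pr_warn("CPU %d didn't respond to backtrace IPI, inspecting paca.\n", cpu);
			dump_paca_backtrace(cpu);	/* possibly stale */
		}
	}
}
```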
2019-09-18powerpc/ftrace: Enable HAVE_FUNCTION_GRAPH_RET_ADDR_PTRNaveen N. Rao
This associates entries in the ftrace_ret_stack with corresponding stack frames, enabling more robust stack unwinding. Also update the only user of ftrace_graph_ret_addr() to pass the stack pointer. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/0224f2d0971b069c678e2ff678cfc2cd1e114cfe.1567707399.git.naveen.n.rao@linux.vnet.ibm.com
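Editorial note: for context, this is roughly how an unwinder consumes ftrace_graph_ret_addr() once HAVE_FUNCTION_GRAPH_RET_ADDR_PTR is enabled. A sketch only; the slot offset is illustrative (the real powerpc code uses STACK_FRAME_LR_SAVE).

```c
#include <linux/ftrace.h>
#include <linux/sched.h>

/*
 * Sketch: while unwinding, map a saved LR that the function-graph tracer
 * replaced with its trampoline back to the real return address.  Passing
 * the address of the LR save slot (retp) lets ftrace match the exact
 * ret_stack entry for this frame instead of guessing by depth.
 */
static unsigned long unwind_return_address(struct task_struct *tsk,
					   unsigned long *sp, int *graph_idx)
{
	unsigned long *lr_slot = sp + 2;	/* illustrative LR save slot */

	return ftrace_graph_ret_addr(tsk, graph_idx, *lr_slot, lr_slot);
}
```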
2019-03-02  powerpc: Remove export of save_stack_trace_tsk_reliable()  (Joe Lawrence)

As tglx points out, there are no in-tree module users of save_stack_trace_tsk_reliable() and its x86 counterpart is not exported, so remove the powerpc symbol export.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-23  powerpc: prep stack walkers for THREAD_INFO_IN_TASK  (Christophe Leroy)

[text copied from commit 9bbd4c56b0b6 ("arm64: prep stack walkers for THREAD_INFO_IN_TASK")]

When CONFIG_THREAD_INFO_IN_TASK is selected, task stacks may be freed before a task is destroyed. To account for this, the stacks are refcounted, and when manipulating the stack of another task, it is necessary to get/put the stack to ensure it isn't freed and/or re-used while we do so.

This patch reworks the powerpc stack walking code to account for this. When CONFIG_THREAD_INFO_IN_TASK is not selected these get/put operations perform no refcounting, and this should only be a structural change that does not affect behaviour.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Move try_get_task_stack() below tsk == NULL check in show_stack()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
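Editorial note: the get/put pattern referred to above is the generic try_get_task_stack()/put_task_stack() idiom. A sketch; walk_frames() is a hypothetical stand-in for the file's existing walker.

```c
#include <linux/sched.h>
#include <linux/sched/task_stack.h>

/* Hypothetical stand-in for the file's existing frame walker. */
static void walk_frames(struct task_struct *tsk);

/*
 * Sketch of the idiom: pin the target task's stack for the duration of
 * the walk so it cannot be freed or reused underneath us when
 * THREAD_INFO_IN_TASK is enabled.  Without that option the helpers
 * compile away to almost nothing.
 */
static void walk_task_stack(struct task_struct *tsk)
{
	if (!try_get_task_stack(tsk))
		return;			/* the stack is already gone */

	walk_frames(tsk);

	put_task_stack(tsk);
}
```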
2019-01-31  powerpc/livepatch: return -ERRNO values in save_stack_trace_tsk_reliable()  (Joe Lawrence)

To match its x86 counterpart, save_stack_trace_tsk_reliable() should return -EINVAL in cases that it is currently returning 1. No caller is currently differentiating non-zero error codes, but let's keep the arch-specific implementations consistent.

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-01-31  powerpc/livepatch: small cleanups in save_stack_trace_tsk_reliable()  (Joe Lawrence)

Mostly cosmetic changes:

- Group common stack pointer code at the top
- Simplify the first frame logic
- Code stackframe iteration into for...loop construct
- Check for trace->nr_entries overflow before adding any into the array

Suggested-by: Nicolai Stange <nstange@suse.de>
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
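Editorial note: the loop shape and overflow check described in the list above look roughly like this. A sketch against the pre-5.2 struct stack_trace API this file used at the time; the slot offsets and return codes are illustrative.

```c
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/stacktrace.h>

/*
 * Sketch: walk frames in a for loop and verify there is room in the
 * trace array before storing each entry.
 */
static int walk_reliable_sketch(unsigned long sp, unsigned long stack_end,
				struct stack_trace *trace)
{
	unsigned long *stack;
	unsigned long newsp, ip;

	for (; sp < stack_end; sp = newsp) {
		stack = (unsigned long *)sp;
		newsp = stack[0];		/* back-chain to the next frame */
		ip = stack[2];			/* illustrative: LR save slot */

		if (!__kernel_text_address(ip))
			return 1;		/* not a reliable trace */

		if (trace->nr_entries >= trace->max_entries)
			return -E2BIG;		/* array full: stop before overflowing */

		trace->entries[trace->nr_entries++] = ip;
	}

	return 0;
}
```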
2019-01-31  powerpc/livepatch: relax reliable stack tracer checks for first-frame  (Joe Lawrence)

The bottom-most stack frame (the first to be unwound) may be largely uninitialized, for the "Power Architecture 64-Bit ELF V2 ABI" only requires its backchain pointer to be set.

The reliable stack tracer should be careful when verifying this frame: skip checks on STACK_FRAME_LR_SAVE and STACK_FRAME_MARKER offsets that may contain uninitialized residual data.

Fixes: df78d3f61480 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model")
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
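Editorial note: in code, the relaxation amounts to gating the two offset checks on whether we are still on the first frame. A simplified sketch using the powerpc constants named above; the real function folds this into its walking loop.

```c
#include <linux/kernel.h>
#include <linux/types.h>
#include <asm/ptrace.h>

/*
 * Sketch: only the back-chain is guaranteed by the ABI in the bottom-most
 * frame, so apply the LR-save and frame-marker checks from the second
 * frame onwards.
 */
static bool frame_is_reliable(const unsigned long *stack, bool firstframe)
{
	if (firstframe)
		return true;

	/* An exception marker means the saved LR can't be trusted. */
	if (stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER)
		return false;

	/* The saved LR must point at kernel text. */
	return __kernel_text_address(stack[STACK_FRAME_LR_SAVE]);
}
```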
2018-06-19  powerpc/64s: Fix build failures with CONFIG_NMI_IPI=n  (Michael Ellerman)

I broke the build when CONFIG_NMI_IPI=n with my recent commit to add arch_trigger_cpumask_backtrace(), eg:

  stacktrace.c:(.text+0x1b0): undefined reference to `.smp_send_safe_nmi_ipi'

We should rework the CONFIG symbols here in future to avoid these double-barrelled ifdefs, but for now they fix the build.

Fixes: 5cc05910f26e ("powerpc/64s: Wire up arch_trigger_cpumask_backtrace()")
Reported-by: Christophe LEROY <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
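Editorial note: a sketch of the kind of guard being described. The exact condition is an assumption inferred from the subject ("powerpc/64s") and the linker error, not quoted from the patch.

```c
/*
 * Sketch: the backtrace-IPI code calls smp_send_safe_nmi_ipi(), which only
 * exists when CONFIG_NMI_IPI=y, so the whole block has to be compiled out
 * unless both symbols are set.
 */
#if defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_NMI_IPI)
/* ... raise_backtrace_ipi() and arch_trigger_cpumask_backtrace() ... */
#endif /* CONFIG_PPC_BOOK3S_64 && CONFIG_NMI_IPI */
```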
2018-06-03  powerpc/stacktrace: Update copyright  (Michael Ellerman)

This now has new code in it written by Nick and me, so update the copyright and switch to an SPDX tag.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
2018-06-03  powerpc/64s: Wire up arch_trigger_cpumask_backtrace()  (Michael Ellerman)

This allows e.g. the RCU stall detector, or the soft/hardlockup detectors, to trigger a backtrace on all CPUs.

We implement this by sending a "safe" NMI, which will actually only send an IPI. Unfortunately the generic code prints "NMI", so that's a little confusing, but we can probably live with it.

If one of the CPUs doesn't respond to the IPI, we then print some info from its paca and do a backtrace based on its saved_r1.

Example output:

  INFO: rcu_sched detected stalls on CPUs/tasks:
  2-...0: (0 ticks this GP) idle=1be/1/4611686018427387904 softirq=1055/1055 fqs=25735
  (detected by 4, t=58847 jiffies, g=58, c=57, q=1258)
  Sending NMI from CPU 4 to CPUs 2:
  CPU 2 didn't respond to backtrace IPI, inspecting paca.
  irq_soft_mask: 0x01 in_mce: 0 in_nmi: 0 current: 3623 (bash)
  Back trace of paca->saved_r1 (0xc0000000e1c83ba0) (possibly stale):
  Call Trace:
  [c0000000e1c83ba0] [0000000000000014] 0x14 (unreliable)
  [c0000000e1c83bc0] [c000000000765798] lkdtm_do_action+0x48/0x80
  [c0000000e1c83bf0] [c000000000765a40] direct_entry+0x110/0x1b0
  [c0000000e1c83c90] [c00000000058e650] full_proxy_write+0x90/0xe0
  [c0000000e1c83ce0] [c0000000003aae3c] __vfs_write+0x6c/0x1f0
  [c0000000e1c83d80] [c0000000003ab214] vfs_write+0xd4/0x240
  [c0000000e1c83dd0] [c0000000003ab5cc] ksys_write+0x6c/0x110
  [c0000000e1c83e30] [c00000000000b860] system_call+0x58/0x6c

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
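Editorial note: the wiring itself is small. A sketch based on the description above, using the generic nmi_trigger_cpumask_backtrace() helper; the body of the raise callback is elided.

```c
#include <linux/cpumask.h>
#include <linux/nmi.h>

static void raise_backtrace_ipi(cpumask_t *mask)
{
	/*
	 * For each CPU still set in @mask: send the "safe" NMI IPI asking it
	 * to print its own backtrace; if it never responds, print what we can
	 * from its paca (saved_r1) on its behalf, as in the example output
	 * above.
	 */
}

void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
{
	/* The generic helper handles the cpumask bookkeeping and printing. */
	nmi_trigger_cpumask_backtrace(mask, exclude_self, raise_backtrace_ipi);
}
```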
2018-05-29  powerpc/livepatch: Fix build error with kprobes disabled.  (Aneesh Kumar K.V)

  arch/powerpc/kernel/stacktrace.c: In function ‘save_stack_trace_tsk_reliable’:
  arch/powerpc/kernel/stacktrace.c:176:28: error: ‘kretprobe_trampoline’ undeclared
     if (ip == (unsigned long)kretprobe_trampoline)
                              ^~~~~~~~~~~~~~~~~~~~

Fixes: df78d3f61480 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
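Editorial note: one common shape for this kind of fix is to confine the kretprobe comparison to CONFIG_KPROBES builds. A sketch only; the actual patch may have structured the guard differently.

```c
#include <linux/kprobes.h>
#include <linux/types.h>

/*
 * Sketch: kretprobe_trampoline only exists when kprobes are built in, so
 * the "is this return address really the kretprobe trampoline?" check has
 * to disappear when CONFIG_KPROBES=n.
 */
static bool is_kretprobe_trampoline_addr(unsigned long ip)
{
#ifdef CONFIG_KPROBES
	return ip == (unsigned long)kretprobe_trampoline;
#else
	return false;
#endif
}
```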
2018-05-10  powerpc/livepatch: Implement reliable stack tracing for the consistency model  (Torsten Duwe)

The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:

  [...] There are several rules that must be adhered to in order to ensure reliable and consistent call chain backtracing:

  * Before a function calls any other function, it shall establish its own stack frame, whose size shall be a multiple of 16 bytes.

  – In instances where a function’s prologue creates a stack frame, the back-chain word of the stack frame shall be updated atomically with the value of the stack pointer (r1) when a back chain is implemented. (This must be supported as default by all ELF V2 ABI-compliant environments.)

  [...]

  – The function shall save the link register that contains its return address in the LR save doubleword of its caller’s stack frame before calling another function.

To me this sounds like the equivalent of HAVE_RELIABLE_STACKTRACE. This patch may be unnecessarily limited to ppc64le, but OTOH the only user of this flag so far is livepatching, which is only implemented on PPCs with 64-LE, a.k.a. ELF ABI v2. Feel free to add other ppc variants, but so far only ppc64le got tested.

This change also implements save_stack_trace_tsk_reliable() for ppc64le that checks for the above conditions, where possible.

Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Nicolai Stange <nstange@suse.de>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-03-28  powerpc: Make /proc/self/stack always print the current stack  (Thadeu Lima de Souza Cascardo)

For the current task, the kernel stack would only tell the last time the process was rescheduled, if ever. Use the current stack pointer for the current task.

Otherwise, every once in a while, the stacktrace printed when reading /proc/self/stack would look like the process is running in userspace, while it's not, which some may consider a bug.

This is also consistent with some other architectures, like x86 and arm, at least.

Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
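Editorial note: the distinction boils down to where the walk starts. A sketch, assuming the powerpc current_stack_pointer() helper (see the 2014 entries further down) and the task's saved kernel SP in thread.ksp.

```c
#include <linux/sched.h>

/* The powerpc SP helper discussed in the 2014 entries below. */
unsigned long current_stack_pointer(void);

/*
 * Sketch: choose where the trace starts.  For the running task the live
 * stack pointer is the right answer; thread.ksp only reflects the SP
 * saved at the task's last context switch.
 */
static unsigned long trace_start_sp(struct task_struct *tsk)
{
	if (tsk == current)
		return current_stack_pointer();

	return tsk->thread.ksp;
}
```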
2017-03-02  sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h>  (Ingo Molnar)

We are going to split <linux/sched/debug.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/debug.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-01-11  powerpc: Implement save_stack_trace_regs() to enable kprobe stack tracing  (Steven Rostedt)

It has come to my attention that kprobe event stack tracing does not work on powerpc. You can see with the following:

  # cd /sys/kernel/debug/tracing
  # echo stacktrace > trace_options
  # echo 'p kfree' > kprobe_events
  # echo 1 > events/kprobes/enable

This will print the following warning:

  save_stack_trace_regs() not implemented yet.

Although save_stack_trace() (which normal event stack traces use) is implemented, save_stack_trace_regs(), which kprobe events use, is not. This is a cheap attempt to implement that function.

Note: this may have issues if a task tries to get a stack trace from another task with its regs, because it just passes in "current" to save_context_stack(). But this does solve the issue with stack tracing kprobe events.

Reported-by: Chunyu Hu <chuhu@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
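Editorial note: the "cheap attempt" described above amounts to starting the existing walker from the stack pointer in the passed-in regs. A sketch against the old struct stack_trace API; the internal helper's signature is illustrative.

```c
#include <linux/ptrace.h>
#include <linux/sched.h>
#include <linux/stacktrace.h>

/* The file's internal frame walker (illustrative signature). */
static void save_context_stack(struct stack_trace *trace, unsigned long sp,
			       struct task_struct *tsk, int savesched);

/*
 * Sketch: reuse the existing walker, but start from the stack pointer
 * captured in @regs (GPR1 on powerpc) rather than the live SP.  As noted
 * above, "current" is passed even though @regs could in principle belong
 * to another task.
 */
void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
{
	save_context_stack(trace, regs->gpr[1], current, 0);
}
```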
2014-10-15  powerpc: Rename __get_SP() to current_stack_pointer()  (Anton Blanchard)

Michael points out that __get_SP() is a pretty horrible function name. Let's give it a better name.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2014-10-15  powerpc: Reimplement __get_SP() as a function not a define  (Anton Blanchard)

Li Zhong points out an issue with our current __get_SP() implementation. If ftrace function tracing is enabled (ie -pg profiling using _mcount) we spill a stack frame on 64bit all the time.

If a function calls __get_SP() and later calls a function that is tail call optimised, we will pop the stack frame and the value returned by __get_SP() is no longer valid. An example from Li can be found in save_stack_trace -> save_context_stack:

  c0000000000432c0 <.save_stack_trace>:
  c0000000000432c0: mflr r0
  c0000000000432c4: std r0,16(r1)
  c0000000000432c8: stdu r1,-128(r1)   <-- stack frame for _mcount
  c0000000000432cc: std r3,112(r1)
  c0000000000432d0: bl <._mcount>
  c0000000000432d4: nop
  c0000000000432d8: mr r4,r1           <-- __get_SP()
  c0000000000432dc: ld r5,632(r13)
  c0000000000432e0: ld r3,112(r1)
  c0000000000432e4: li r6,1
  c0000000000432e8: addi r1,r1,128     <-- pop stack frame
  c0000000000432ec: ld r0,16(r1)
  c0000000000432f0: mtlr r0
  c0000000000432f4: b <.save_context_stack>   <-- tail call optimized

save_context_stack ends up with a stack pointer below the current one, and it is likely to be scribbled over.

Fix this by making __get_SP() a function which returns the caller's stack frame. Also replace inline assembly which grabs the stack pointer in save_stack_trace and show_stack with __get_SP().

This also fixes an issue with perf_arch_fetch_caller_regs(). It currently unwinds the stack once, which will skip a valid stack frame on a leaf function. With the __get_SP() fixes in this patch, we never need to unwind the stack frame to get to the first interesting frame.

We have to export __get_SP() because perf_arch_fetch_caller_regs() (which is used in modules) calls it from a header file.

Reported-by: Li Zhong <zhong@linux.vnet.ibm.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
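Editorial note: in caller terms, the change replaces an inline-asm read of r1 with a call to an out-of-line function, so the captured value comes from a frame that cannot be popped underneath it. A sketch; the asm shown is the kind of construct being replaced, not a verbatim copy of the old macro.

```c
/*
 * Before: read r1 with inline asm.  Under -pg the compiler has already set
 * up (and may later pop) an _mcount stack frame, so the captured value can
 * end up pointing below the live stack once a tail call pops that frame.
 */
static unsigned long get_sp_inline_asm(void)
{
	unsigned long sp;

	asm volatile("mr %0, 1" : "=r" (sp));
	return sp;
}

/*
 * After: call an out-of-line helper (__get_SP(), later renamed
 * current_stack_pointer()) that returns its caller's stack frame, so the
 * result stays valid regardless of tail-call optimisation at the call site.
 */
unsigned long __get_SP(void);	/* implemented out of line, in assembly */

static unsigned long get_sp_function(void)
{
	return __get_SP();
}
```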
2011-10-31  powerpc: various straight conversions from module.h --> export.h  (Paul Gortmaker)

All these files were including module.h just for the basic EXPORT_SYMBOL infrastructure. We can shift them off to the export.h header, which has a much smaller footprint, and thus realize some compile time gains.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2008-07-28  powerpc: Removed duplicated include in stacktrace.c  (Huang Weiyi)

Removed duplicated include file <linux/module.h> in arch/powerpc/kernel/stacktrace.c.

Signed-off-by: Huang Weiyi <weiyi.huang@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22  powerpc: Fix support for latencytop  (Arnd Bergmann)

We need to pass the kernel stack pointer instead of the user space stack pointer in save_stack_trace_tsk().

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
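Editorial note: a sketch of what that means in practice, assuming the task's saved kernel SP lives in thread.ksp and the file's internal save_context_stack() walker; the 2008-era signatures may differ.

```c
#include <linux/sched.h>
#include <linux/stacktrace.h>

/* The file's internal frame walker (illustrative signature). */
static void save_context_stack(struct stack_trace *trace, unsigned long sp,
			       struct task_struct *tsk, int savesched);

/*
 * Sketch: for another task the walk has to start from the kernel stack
 * pointer saved at its last context switch (thread.ksp); starting from a
 * user-space SP wanders off the kernel stack entirely.
 */
void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
{
	save_context_stack(trace, tsk->thread.ksp, tsk, 0);
}
```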
2008-07-16  Merge commit 'origin/master'  (Benjamin Herrenschmidt)

Manual merge of:
  arch/powerpc/Kconfig
  arch/powerpc/kernel/stacktrace.c
  arch/powerpc/mm/slice.c
  arch/ppc/kernel/smp.c
2008-07-15  powerpc: support for latencytop  (Arnd Bergmann)

Implement save_stack_trace_tsk on powerpc, so that we can run with latencytop.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-14  generic-ipi: powerpc/generic-ipi tree build failure  (Stephen Rothwell)

Today's linux-next build (powerpc allmodconfig) failed like this:

  ERROR: ".save_stack_trace" [tests/backtracetest.ko] undefined!

But save_stack_trace is exported in arch/powerpc/kernel/stacktrace.c

I couldn't figure it out until I noticed these earlier warnings:

  arch/powerpc/kernel/stacktrace.c:47: warning: data definition has no type or storage class
  arch/powerpc/kernel/stacktrace.c:47: warning: type defaults to 'int' in declaration of 'EXPORT_SYMBOL_GPL'
  arch/powerpc/kernel/stacktrace.c:47: warning: parameter names (without types) in function declaration

I applied the patch below.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: <linuxppc-dev@ozlabs.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
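Editorial note: those warnings are the classic symptom of using EXPORT_SYMBOL_GPL() without the header that defines it, so the macro is parsed as a stray old-style declaration and the symbol is never actually exported. A sketch, assuming the remedy is simply to include the header that provides the export macro (in 2008 that was <linux/module.h>); the patch itself is not reproduced in this log.

```c
#include <linux/module.h>	/* provides EXPORT_SYMBOL_GPL() */
#include <linux/stacktrace.h>

/*
 * Sketch: with the header in place the export macro expands properly and
 * modpost can resolve ".save_stack_trace" for backtracetest.ko.
 */
void save_stack_trace(struct stack_trace *trace)
{
	/* ... walk the current stack ... */
}
EXPORT_SYMBOL_GPL(save_stack_trace);
```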
2008-07-03  stacktrace: export save_stack_trace[_tsk]  (Ingo Molnar)

Andrew Morton reported this against linux-next:

  ERROR: ".save_stack_trace" [tests/backtracetest.ko] undefined!

Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-24  [POWERPC] Fix new warnings arising from stacktrace patch  (Christoph Hellwig)

Remove the inclusion of asm-offsets.h from stacktrace.c. It isn't supposed to be included in C code and it causes problems with multiple definitions of things.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-18  [POWERPC] Stacktrace support for lockdep  (Christoph Hellwig)

This adds stacktrace support for powerpc, which will be needed for lockdep.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>