author     Chunyan Zhang <zhang.chunyan@linaro.org>     2016-09-08 20:46:42 +0800
committer  Catalin Marinas <catalin.marinas@arm.com>    2016-09-09 12:34:47 +0100
commit     2b9743441a312e0b0a2d87deae363eccbe9d0f00 (patch)
tree       4545db239efeb695ced5c8b1069d82d63d32be39
parent     872c63fbf9e153146b07f0cece4da0d70b283eeb (diff)
arm64: use preempt_disable_notrace in _percpu_read/write
When debug preempt or the preempt tracer is enabled, preempt_count_add/sub()
can be traced by the function and function graph tracers. Since
preempt_disable/enable() call preempt_count_add/sub(), the Ftrace subsystem
should use preempt_disable/enable_notrace() instead.
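
For reference, the notrace variants bump the preempt count with the raw
inline helpers instead of the traceable preempt_count_add/sub() functions.
The block below is a simplified paraphrase of include/linux/preempt.h, not
the exact kernel definitions (which depend on the kernel version and on
options such as CONFIG_DEBUG_PREEMPT); it only illustrates why the
_notrace variants are invisible to Ftrace:

	/*
	 * Simplified paraphrase of include/linux/preempt.h -- a sketch,
	 * not the exact in-tree definitions.
	 */
	#define preempt_disable()					\
	do {								\
		preempt_count_add(1);	/* out-of-line, traceable */	\
		barrier();						\
	} while (0)

	#define preempt_disable_notrace()				\
	do {								\
		__preempt_count_add(1);	/* raw inline, never traced */	\
		barrier();						\
	} while (0)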
Commit 345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap like
events do") added a this_cpu_read() call to trace_graph_entry(); because
this_cpu_read() calls preempt_disable(), the graph tracer goes into a
recursive loop, even when tracing_on is disabled.
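
To make the failure mode concrete, here is a sketch of the recursion,
reconstructed from the commit message rather than from an actual trace:

	/*
	 * trace_graph_entry()
	 *   this_cpu_read(...)              // added by 345ddcc882d8
	 *     preempt_disable()
	 *       preempt_count_add()         // traceable function
	 *         -> graph tracer entry hook fires again
	 *           trace_graph_entry()     // recursion, never terminates
	 *
	 * With preempt_disable_notrace() the count is bumped inline, the
	 * tracer never sees a traceable call, and the loop is broken.
	 */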
This patch therefore changes this_cpu_read() to use
preempt_disable/enable_notrace() instead.
Since Yonghui Yang helped a lot to find the root cause of this problem,
his SOB is added as well.
Signed-off-by: Yonghui Yang <mark.yang@spreadtrum.com>
Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-rw-r--r--   arch/arm64/include/asm/percpu.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
index 0a456bef8c79..2fee2f59288c 100644
--- a/arch/arm64/include/asm/percpu.h
+++ b/arch/arm64/include/asm/percpu.h
@@ -199,19 +199,19 @@ static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
 #define _percpu_read(pcp)						\
 ({									\
 	typeof(pcp) __retval;						\
-	preempt_disable();						\
+	preempt_disable_notrace();					\
 	__retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)),	\
 					      sizeof(pcp));		\
-	preempt_enable();						\
+	preempt_enable_notrace();					\
 	__retval;							\
 })
 
 #define _percpu_write(pcp, val)						\
 do {									\
-	preempt_disable();						\
+	preempt_disable_notrace();					\
 	__percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val),	\
 		       sizeof(pcp));					\
-	preempt_enable();						\
+	preempt_enable_notrace();					\
 } while(0)								\
 
 #define _pcp_protect(operation, pcp, val)				\