author     Peter Zijlstra <a.p.zijlstra@chello.nl>	2012-10-02 15:41:23 +0200
committer  Ben Hutchings <ben@decadent.org.uk>	2013-10-26 21:06:12 +0100
commit     49a58f5fa9d71155be5b6211030c281084f336a3 (patch)
tree       e220ff359b7ac07563d08433c204b0d124234020 /kernel
parent     6272221247a80adf0c36bf4f6ac8a4d7dcfb7e5b (diff)
perf: Fix perf_cgroup_switch for sw-events
commit 95cf59ea72331d0093010543b8951bb43f262cac upstream.
Jiri reported that he could trigger the WARN_ON_ONCE() in
perf_cgroup_switch() using sw-events. This is because sw-events share
a cpuctx with multiple PMUs.
Use the ->unique_pmu pointer to limit the pmu iteration to unique
cpuctx instances.
Reported-and-Tested-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-so7wi2zf3jjzrwcutm2mkz0j@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/events/core.c  9
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c3ff456e974..83d5621b7d90 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -360,6 +360,8 @@ void perf_cgroup_switch(struct task_struct *task, int mode)
 
 	list_for_each_entry_rcu(pmu, &pmus, entry) {
 		cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
+		if (cpuctx->unique_pmu != pmu)
+			continue; /* ensure we process each cpuctx once */
 
 		/*
 		 * perf_cgroup_events says at least one
@@ -383,9 +385,10 @@ void perf_cgroup_switch(struct task_struct *task, int mode)
 
 			if (mode & PERF_CGROUP_SWIN) {
 				WARN_ON_ONCE(cpuctx->cgrp);
-				/* set cgrp before ctxsw in to
-				 * allow event_filter_match() to not
-				 * have to pass task around
+				/*
+				 * set cgrp before ctxsw in to allow
+				 * event_filter_match() to not have to pass
+				 * task around
 				 */
 				cpuctx->cgrp = perf_cgroup_from_task(task);
 				cpu_ctx_sched_in(cpuctx, EVENT_ALL, task);
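
To illustrate why the new ->unique_pmu check is enough, here is a minimal standalone sketch of the same dedup pattern (plain userspace C, not kernel code; the struct layouts, the "software"/"tracepoint"/"cpu" names and the switched counter are simplified stand-ins for the real perf structures): each context records which pmu owns it, so an iteration over all pmus acts on every shared context exactly once.

/*
 * Standalone model (not kernel code) of the dedup pattern added above:
 * several software PMUs share one cpuctx, and the cpuctx records which
 * pmu "owns" it via ->unique_pmu, so the iteration touches each cpuctx
 * exactly once.
 */
#include <stdio.h>

struct pmu;

struct cpuctx {
	struct pmu *unique_pmu;	/* the one pmu allowed to act on this ctx */
	int switched;		/* counts how often this ctx is (re)scheduled */
};

struct pmu {
	const char *name;
	struct cpuctx *ctx;	/* may be shared between sw pmus */
};

int main(void)
{
	struct cpuctx shared = { 0 }, hw = { 0 };
	struct pmu pmus[] = {
		{ "software",   &shared },
		{ "tracepoint", &shared },	/* shares a cpuctx with "software" */
		{ "cpu",        &hw },
	};

	/* registration: the first pmu to claim a cpuctx becomes its owner */
	for (int i = 0; i < 3; i++)
		if (!pmus[i].ctx->unique_pmu)
			pmus[i].ctx->unique_pmu = &pmus[i];

	/* the switch loop: skip cpuctx instances already handled */
	for (int i = 0; i < 3; i++) {
		struct cpuctx *cpuctx = pmus[i].ctx;

		if (cpuctx->unique_pmu != &pmus[i])
			continue;	/* ensure we process each cpuctx once */
		cpuctx->switched++;
	}

	printf("shared cpuctx switched %d time(s), hw cpuctx %d time(s)\n",
	       shared.switched, hw.switched);	/* prints 1 and 1, not 2 and 1 */
	return 0;
}

Without the ownership check, the second software pmu in the loop revisits the same cpuctx, finds cpuctx->cgrp already set by the first pass, and the WARN_ON_ONCE(cpuctx->cgrp) in the hunk above fires, which matches the report that triggered this fix.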