author     Kan Liang <kan.liang@linux.intel.com>              2018-03-01 12:54:54 -0500
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>   2018-05-30 07:50:47 +0200
commit     cb65df419ec536b4abeabe0985aebbe23d8ffd3a (patch)
tree       86ccadb55ef58f7332315bd45787428b97f3591a /arch
parent     94ee9a43c689ea8cfb45d7a74e00756605af3732 (diff)
perf/x86/intel: Fix large period handling on Broadwell CPUs
[ Upstream commit f605cfca8c39ffa2b98c06d2b9f30ba64f1e54e3 ]
Large fixed period values could be truncated on Broadwell, for example:
perf record -e cycles -c 10000000000
Here the requested fixed period is 0x2540BE400, but the period that finally
gets applied is 0x540BE400 - which is wrong.
The reason is that x86_pmu::limit_period() takes a u32 parameter, so the
high 32 bits of 'period' get truncated.
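As an aside, a minimal standalone C sketch of the truncation (not kernel
code; the helper name is hypothetical, only the numbers come from the
example above):

#include <stdio.h>
#include <stdint.h>

/* Stand-in for a callback that takes the period as a 32-bit value. */
static uint32_t limit_period_u32(uint64_t period)
{
	return (uint32_t)period;	/* high 32 bits silently dropped */
}

int main(void)
{
	uint64_t period = 10000000000ULL;	/* 0x2540BE400, from -c 10000000000 */

	printf("requested: 0x%llx\n", (unsigned long long)period);
	printf("applied:   0x%x\n", limit_period_u32(period));	/* 0x540BE400 */
	return 0;
}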
This bug was introduced in:
commit 294fe0f52a44 ("perf/x86/intel: Add INST_RETIRED.ALL workarounds")
It's safe to use u64 instead of u32:
- Although 'left' is s64, its value must be positive whenever
limit_period() is called.
- bdw_limit_period() only modifies the lowest 6 bits; it never touches
the upper 32 bits. (See the sketch below.)
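A standalone sketch of that masking behaviour (the 128 floor and 0x3f mask
follow the upstream bdw_limit_period() body, which the hunk below only
shows the top of; the helper name here is illustrative):

#include <assert.h>
#include <stdint.h>

/* Clears only bits 0-5, as the Broadwell quirk does, so a 64-bit
 * 'left' passes its upper 32 bits through unmodified. */
static uint64_t bdw_style_limit(uint64_t left)
{
	if (left < 128)
		left = 128;
	return left & ~(uint64_t)0x3f;
}

int main(void)
{
	/* The period from the changelog example is already 64-aligned,
	 * so with a u64 parameter it survives intact. */
	assert(bdw_style_limit(0x2540BE400ULL) == 0x2540BE400ULL);
	return 0;
}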
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Fixes: 294fe0f52a44 ("perf/x86/intel: Add INST_RETIRED.ALL workarounds")
Link: http://lkml.kernel.org/r/1519926894-3520-1-git-send-email-kan.liang@linux.intel.com
[ Rewrote unacceptably bad changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'arch')
-rw-r--r--  arch/x86/events/intel/core.c  | 2
-rw-r--r--  arch/x86/events/perf_event.h  | 2
2 files changed, 2 insertions, 2 deletions
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 8ab0b8b76c09..815039327932 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3027,7 +3027,7 @@ hsw_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
  * Therefore the effective (average) period matches the requested period,
  * despite coarser hardware granularity.
  */
-static unsigned bdw_limit_period(struct perf_event *event, unsigned left)
+static u64 bdw_limit_period(struct perf_event *event, u64 left)
 {
 	if ((event->hw.config & INTEL_ARCH_EVENT_MASK) ==
 			X86_CONFIG(.event=0xc0, .umask=0x01)) {
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index bcbb1d2ae10b..f3563179290b 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -548,7 +548,7 @@ struct x86_pmu {
 	struct x86_pmu_quirk *quirks;
 	int		perfctr_second_write;
 	bool		late_ack;
-	unsigned	(*limit_period)(struct perf_event *event, unsigned l);
+	u64		(*limit_period)(struct perf_event *event, u64 l);
 
 	/*
 	 * sysfs attrs