author    | Adrian Hunter <adrian.hunter@intel.com>    | 2016-08-15 10:23:04 +0300
committer | Arnaldo Carvalho de Melo <acme@redhat.com> | 2016-08-15 18:11:18 -0300
commit    | 3d918fb13abdbeca7947578f5d7e426eafad7f5e
tree      | 2870b57fb1d769b093575252c0a3bfa063996483 /tools/perf/arch
parent    | 9f3156903df3f7d9bba9acde810d78bca94305d5
perf intel-pt: Fix occasional decoding errors when tracing system-wide
In order to successfully decode Intel PT traces, context switch events
are needed from the moment the trace starts. Currently that is ensured
by using the 'immediate' flag, which enables the switch event as soon as
it is opened.
However, since commit 86c2786994bd ("perf intel-pt: Add support for
PERF_RECORD_SWITCH") that might not always happen. When tracing
system-wide the context switch event is added to the tracking event
which was not set as 'immediate'. Change that so it is.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: stable@vger.kernel.org # v4.4+
Fixes: 86c2786994bd ("perf intel-pt: Add support for PERF_RECORD_SWITCH")
Link: http://lkml.kernel.org/r/1471245784-22580-1-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Diffstat (limited to 'tools/perf/arch')
-rw-r--r-- | tools/perf/arch/x86/util/intel-pt.c | 6 |
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/tools/perf/arch/x86/util/intel-pt.c b/tools/perf/arch/x86/util/intel-pt.c
index fb51457ba338..a2412e9d883b 100644
--- a/tools/perf/arch/x86/util/intel-pt.c
+++ b/tools/perf/arch/x86/util/intel-pt.c
@@ -501,7 +501,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
 	struct intel_pt_recording *ptr =
 			container_of(itr, struct intel_pt_recording, itr);
 	struct perf_pmu *intel_pt_pmu = ptr->intel_pt_pmu;
-	bool have_timing_info;
+	bool have_timing_info, need_immediate = false;
 	struct perf_evsel *evsel, *intel_pt_evsel = NULL;
 	const struct cpu_map *cpus = evlist->cpus;
 	bool privileged = geteuid() == 0 || perf_event_paranoid() < 0;
@@ -655,6 +655,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
 			ptr->have_sched_switch = 3;
 		} else {
 			opts->record_switch_events = true;
+			need_immediate = true;
 			if (cpu_wide)
 				ptr->have_sched_switch = 3;
 			else
@@ -700,6 +701,9 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
 		tracking_evsel->attr.freq = 0;
 		tracking_evsel->attr.sample_period = 1;

+		if (need_immediate)
+			tracking_evsel->immediate = true;
+
 		/* In per-cpu case, always need the time of mmap events etc */
 		if (!cpu_map__empty(cpus)) {
 			perf_evsel__set_sample_bit(tracking_evsel, TIME);
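For illustration only, below is a minimal standalone sketch of the logic the patch adds to intel_pt_recording_options(). The types fake_opts and fake_evsel and the helper setup_switch_tracking() are hypothetical stand-ins, not the real tools/perf structures or functions; the sketch merely condenses the idea that when context switch events fall back to PERF_RECORD_SWITCH delivered via the tracking event, that event must be marked 'immediate' so it is enabled as soon as it is opened.

/*
 * Illustrative sketch only: fake_opts, fake_evsel and
 * setup_switch_tracking() are simplified stand-ins, not the real
 * tools/perf types or functions.  The point mirrors the patch: when
 * context switch events ride on the tracking event (the
 * PERF_RECORD_SWITCH fallback), mark that event 'immediate' so it is
 * enabled at open time and switches are captured from the very start
 * of the trace.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_opts {                     /* stand-in for record_opts */
	bool record_switch_events;
};

struct fake_evsel {                    /* stand-in for perf_evsel */
	bool immediate;
	bool freq;
	unsigned long long sample_period;
};

static void setup_switch_tracking(struct fake_opts *opts,
				  struct fake_evsel *tracking_evsel,
				  bool have_sched_switch_tp)
{
	bool need_immediate = false;

	if (!have_sched_switch_tp) {
		/* No sched_switch tracepoint: fall back to
		 * PERF_RECORD_SWITCH delivered via the tracking event. */
		opts->record_switch_events = true;
		need_immediate = true;
	}

	tracking_evsel->freq = false;
	tracking_evsel->sample_period = 1;

	/* The fix: open the tracking event already enabled. */
	if (need_immediate)
		tracking_evsel->immediate = true;
}

int main(void)
{
	struct fake_opts opts = { .record_switch_events = false };
	struct fake_evsel tracking = { .immediate = false };

	/* System-wide tracing without the sched_switch tracepoint path. */
	setup_switch_tracking(&opts, &tracking, false);
	printf("record_switch_events=%d immediate=%d\n",
	       opts.record_switch_events, tracking.immediate);
	return 0;
}

The underlying design point, per the commit message, is that the tracking event, unlike a dedicated sched_switch evsel, was not otherwise opened 'immediate'; without the flag, switches occurring before perf record enables its events can be missed, which is what caused the occasional decoding errors in system-wide traces.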