| author | Linus Torvalds <torvalds@linux-foundation.org> | 2012-03-20 10:29:15 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2012-03-20 10:29:15 -0700 |
| commit | 9c2b957db1772ebf942ae7a9346b14eba6c8ca66 | |
| tree | 0dbb83e57260ea7fc0dc421f214d5f1b26262005 /tools/perf/builtin-record.c | |
| parent | 0bbfcaff9b2a69c71a95e6902253487ab30cb498 | |
| parent | bea95c152dee1791dd02cbc708afbb115bb00f9a | |
Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf events changes for v3.4 from Ingo Molnar:
- New "hardware based branch profiling" feature both on the kernel and
the tooling side, on CPUs that support it (currently modern x86 Intel
CPUs with the 'LBR' hardware feature).
This new feature is basically a sophisticated 'magnifying glass' for
branch execution - something that is pretty difficult to extract from
regular, function histogram centric profiles.
The simplest mode is activated via 'perf record -b', and the result
looks like this in perf report:
$ perf record -b any_call,u -e cycles:u branchy
$ perf report -b --sort=symbol
52.34%  [.] main                   [.] f1
24.04%  [.] f1                     [.] f3
23.60%  [.] f1                     [.] f2
 0.01%  [k] _IO_new_file_xsputn    [k] _IO_file_overflow
 0.01%  [k] _IO_vfprintf_internal  [k] _IO_new_file_xsputn
 0.01%  [k] _IO_vfprintf_internal  [k] strchrnul
 0.01%  [k] __printf               [k] _IO_vfprintf_internal
 0.01%  [k] main                   [k] __printf
This output has from/to branch columns and shows the highest-percentage
(from, to) jump combinations - i.e. the most likely taken
branches in the system. "branches" can also include function calls
and any other synchronous and asynchronous transitions of the
instruction pointer that are not 'next instruction' - such as system
calls, traps, interrupts, etc.
This feature comes with (hopefully intuitive) flat ASCII and TUI
support in perf report (a small sketch of the perf_event_attr setup
that '-b' requests from the kernel follows after this list).
- Various 'perf annotate' visual improvements for us assembly junkies.
It will now recognize function calls in the TUI and by hitting enter
you can follow the call (recursively) and back, amongst other
improvements.
- Multiple threads/processes recording support in perf record, perf
stat, perf top - which is activated via a comma-separated list of PIDs:
perf top -p 21483,21485
perf stat -p 21483,21485 -ddd
perf record -p 21483,21485
- Support for per-UID views, via the --uid parameter to perf top, perf
report, etc. For example 'perf top --uid mingo' will only show the
tasks that I am running, excluding other users, root, etc.
- Jump label restructurings and improvements - this includes the
factoring out of the (hopefully much clearer) include/linux/static_key.h
generic facility:
    struct static_key key = STATIC_KEY_INIT_FALSE;
    ...
    if (static_key_false(&key))
            do unlikely code
    else
            do likely code
    ...
    static_key_slow_inc(&key);
    ...
    static_key_slow_dec(&key);
    ...
The static_key_false() branch will be generated into the code with as
little impact on the likely code path as possible. The
static_key_slow_*() APIs flip the branch via live kernel code patching.
This facility can now be used more widely within the kernel to
micro-optimize hot branches whose likelihood matches the static-key
usage and fast/slow cost patterns (a concrete usage sketch follows
after this list).
- SW function tracer improvements: perf support and filtering support.
- Various hardenings of the perf.data ABI: older perf.data files are
handled more smoothly by newer tool versions, new features integrate
more cleanly, cross-endian recording/analyzing workflows are supported
better, etc.
- Restructuring of the kprobes code, the splitting out of 'optprobes',
and a corner case bugfix.
- Allow the tracing of kernel console output (printk).
- Improvements/fixes to user-space RDPMC support, allowing user-space
self-profiling code to extract PMU counts without performing any
system calls, while playing nice with the kernel side (the typical
self-monitoring read loop is sketched after this list).
- 'perf bench' improvements
- ... and lots of internal restructurings, cleanups and fixes that made
these features possible. And, as usual, this list is incomplete, as
there were also lots of other improvements.
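As referenced above, here is a minimal sketch of roughly what the new
branch profiling mode asks the kernel for when driven directly through
perf_event_attr rather than through the tooling. The sample period and
the function name are illustrative only; the constants and fields are
the perf_event_open() ABI ones used by this series:

    #include <linux/perf_event.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Roughly what 'perf record -b any_call,u -e cycles:u' requests:
     * cycle samples on the current task that also carry a stack of
     * user-space call branches. The sample period is illustrative. */
    static int open_branch_sampling_event(void)
    {
            struct perf_event_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.size           = sizeof(attr);
            attr.type           = PERF_TYPE_HARDWARE;
            attr.config         = PERF_COUNT_HW_CPU_CYCLES;
            attr.sample_period  = 100000;
            attr.sample_type    = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
            attr.exclude_kernel = 1;        /* the ':u' event modifier */
            attr.branch_sample_type = PERF_SAMPLE_BRANCH_ANY_CALL |
                                      PERF_SAMPLE_BRANCH_USER;

            /* pid = 0, cpu = -1: measure this task on any CPU */
            return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    }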
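For the static_key facility mentioned above, a concrete usage sketch -
the key, the packet helpers and the type are made up for illustration,
only the static_key API itself is from include/linux/static_key.h:

    #include <linux/static_key.h>

    struct packet;                                  /* made-up type      */
    void update_debug_stats(struct packet *pkt);    /* made-up slow path */
    void process_packet(struct packet *pkt);        /* made-up fast path */

    /* The branch below is patched to a no-op until the key is enabled,
     * so the hot path pays (almost) nothing for the debug hook. */
    static struct static_key debug_stats_key = STATIC_KEY_INIT_FALSE;

    void handle_packet(struct packet *pkt)
    {
            if (static_key_false(&debug_stats_key))
                    update_debug_stats(pkt);        /* unlikely code */

            process_packet(pkt);                    /* likely code   */
    }

    /* Flipping the key live-patches every branch site using it: */
    void debug_stats_enable(void)  { static_key_slow_inc(&debug_stats_key); }
    void debug_stats_disable(void) { static_key_slow_dec(&debug_stats_key); }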
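And for the user-space RDPMC support, the usual self-monitoring read
pattern (x86-only; a sketch of the general technique, not code taken
from this series) is to mmap() the event's first page and combine the
kernel-maintained offset with an RDPMC of the live hardware counter,
retrying if the mmap page's sequence lock moved underneath:

    #include <linux/perf_event.h>
    #include <stdint.h>

    static uint64_t rdpmc(uint32_t counter)
    {
            uint32_t low, high;

            asm volatile("rdpmc" : "=a" (low), "=d" (high) : "c" (counter));
            return low | ((uint64_t)high << 32);
    }

    /* 'pc' is the mmap()ed first page of a self-monitoring event fd. */
    static uint64_t read_self_count(struct perf_event_mmap_page *pc)
    {
            uint32_t seq, idx;
            uint64_t count;

            do {
                    seq = pc->lock;
                    asm volatile("" ::: "memory");  /* compiler barrier */

                    idx   = pc->index;              /* 0: RDPMC not usable */
                    count = pc->offset;
                    if (idx)
                            count += rdpmc(idx - 1);

                    asm volatile("" ::: "memory");
            } while (pc->lock != seq);              /* raced with an update */

            return count;
    }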
* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (120 commits)
perf report: Fix annotate double quit issue in branch view mode
perf report: Remove duplicate annotate choice in branch view mode
perf/x86: Prettify pmu config literals
perf report: Enable TUI in branch view mode
perf report: Auto-detect branch stack sampling mode
perf record: Add HEADER_BRANCH_STACK tag
perf record: Provide default branch stack sampling mode option
perf tools: Make perf able to read files from older ABIs
perf tools: Fix ABI compatibility bug in print_event_desc()
perf tools: Enable reading of perf.data files from different ABI rev
perf: Add ABI reference sizes
perf report: Add support for taken branch sampling
perf record: Add support for sampling taken branch
perf tools: Add code to support PERF_SAMPLE_BRANCH_STACK
x86/kprobes: Split out optprobe related code to kprobes-opt.c
x86/kprobes: Fix a bug which can modify kernel code permanently
x86/kprobes: Fix instruction recovery on optimized path
perf: Add callback to flush branch_stack on context switch
perf: Disable PERF_SAMPLE_BRANCH_* when not supported
perf/x86: Add LBR software filter support for Intel CPUs
...
Diffstat (limited to 'tools/perf/builtin-record.c')
| -rw-r--r-- | tools/perf/builtin-record.c | 152 |
1 files changed, 121 insertions, 31 deletions
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 227b6ae99785..be4e1eee782e 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -44,6 +44,7 @@ struct perf_record {
         struct perf_evlist *evlist;
         struct perf_session *session;
         const char *progname;
+        const char *uid_str;
         int output;
         unsigned int page_size;
         int realtime_prio;
@@ -208,7 +209,7 @@ fallback_missing_features:
         if (opts->exclude_guest_missing)
                 attr->exclude_guest = attr->exclude_host = 0;
 retry_sample_id:
-        attr->sample_id_all = opts->sample_id_all_avail ? 1 : 0;
+        attr->sample_id_all = opts->sample_id_all_missing ? 0 : 1;
 try_again:
         if (perf_evsel__open(pos, evlist->cpus, evlist->threads,
                              opts->group, group_fd) < 0) {
@@ -227,11 +228,11 @@ try_again:
                                 "guest or host samples.\n");
                         opts->exclude_guest_missing = true;
                         goto fallback_missing_features;
-                } else if (opts->sample_id_all_avail) {
+                } else if (!opts->sample_id_all_missing) {
                         /*
                          * Old kernel, no attr->sample_id_type_all field
                          */
-                        opts->sample_id_all_avail = false;
+                        opts->sample_id_all_missing = true;
                         if (!opts->sample_time && !opts->raw_samples && !time_needed)
                                 attr->sample_type &= ~PERF_SAMPLE_TIME;
 
@@ -396,7 +397,7 @@ static int __cmd_record(struct perf_record *rec, int argc, const char **argv)
 {
         struct stat st;
         int flags;
-        int err, output;
+        int err, output, feat;
         unsigned long waking = 0;
         const bool forks = argc > 0;
         struct machine *machine;
@@ -463,8 +464,17 @@ static int __cmd_record(struct perf_record *rec, int argc, const char **argv)
 
         rec->session = session;
 
-        if (!rec->no_buildid)
-                perf_header__set_feat(&session->header, HEADER_BUILD_ID);
+        for (feat = HEADER_FIRST_FEATURE; feat < HEADER_LAST_FEATURE; feat++)
+                perf_header__set_feat(&session->header, feat);
+
+        if (rec->no_buildid)
+                perf_header__clear_feat(&session->header, HEADER_BUILD_ID);
+
+        if (!have_tracepoints(&evsel_list->entries))
+                perf_header__clear_feat(&session->header, HEADER_TRACE_INFO);
+
+        if (!rec->opts.branch_stack)
+                perf_header__clear_feat(&session->header, HEADER_BRANCH_STACK);
 
         if (!rec->file_new) {
                 err = perf_session__read_header(session, output);
@@ -472,22 +482,6 @@ static int __cmd_record(struct perf_record *rec, int argc, const char **argv)
                         goto out_delete_session;
         }
 
-        if (have_tracepoints(&evsel_list->entries))
-                perf_header__set_feat(&session->header, HEADER_TRACE_INFO);
-
-        perf_header__set_feat(&session->header, HEADER_HOSTNAME);
-        perf_header__set_feat(&session->header, HEADER_OSRELEASE);
-        perf_header__set_feat(&session->header, HEADER_ARCH);
-        perf_header__set_feat(&session->header, HEADER_CPUDESC);
-        perf_header__set_feat(&session->header, HEADER_NRCPUS);
-        perf_header__set_feat(&session->header, HEADER_EVENT_DESC);
-        perf_header__set_feat(&session->header, HEADER_CMDLINE);
-        perf_header__set_feat(&session->header, HEADER_VERSION);
-        perf_header__set_feat(&session->header, HEADER_CPU_TOPOLOGY);
-        perf_header__set_feat(&session->header, HEADER_TOTAL_MEM);
-        perf_header__set_feat(&session->header, HEADER_NUMA_TOPOLOGY);
-        perf_header__set_feat(&session->header, HEADER_CPUID);
-
         if (forks) {
                 err = perf_evlist__prepare_workload(evsel_list, opts, argv);
                 if (err < 0) {
@@ -647,6 +641,90 @@ out_delete_session:
         return err;
 }
 
+#define BRANCH_OPT(n, m) \
+        { .name = n, .mode = (m) }
+
+#define BRANCH_END { .name = NULL }
+
+struct branch_mode {
+        const char *name;
+        int mode;
+};
+
+static const struct branch_mode branch_modes[] = {
+        BRANCH_OPT("u", PERF_SAMPLE_BRANCH_USER),
+        BRANCH_OPT("k", PERF_SAMPLE_BRANCH_KERNEL),
+        BRANCH_OPT("hv", PERF_SAMPLE_BRANCH_HV),
+        BRANCH_OPT("any", PERF_SAMPLE_BRANCH_ANY),
+        BRANCH_OPT("any_call", PERF_SAMPLE_BRANCH_ANY_CALL),
+        BRANCH_OPT("any_ret", PERF_SAMPLE_BRANCH_ANY_RETURN),
+        BRANCH_OPT("ind_call", PERF_SAMPLE_BRANCH_IND_CALL),
+        BRANCH_END
+};
+
+static int
+parse_branch_stack(const struct option *opt, const char *str, int unset)
+{
+#define ONLY_PLM \
+        (PERF_SAMPLE_BRANCH_USER |\
+         PERF_SAMPLE_BRANCH_KERNEL |\
+         PERF_SAMPLE_BRANCH_HV)
+
+        uint64_t *mode = (uint64_t *)opt->value;
+        const struct branch_mode *br;
+        char *s, *os = NULL, *p;
+        int ret = -1;
+
+        if (unset)
+                return 0;
+
+        /*
+         * cannot set it twice, -b + --branch-filter for instance
+         */
+        if (*mode)
+                return -1;
+
+        /* str may be NULL in case no arg is passed to -b */
+        if (str) {
+                /* because str is read-only */
+                s = os = strdup(str);
+                if (!s)
+                        return -1;
+
+                for (;;) {
+                        p = strchr(s, ',');
+                        if (p)
+                                *p = '\0';
+
+                        for (br = branch_modes; br->name; br++) {
+                                if (!strcasecmp(s, br->name))
+                                        break;
+                        }
+                        if (!br->name) {
+                                ui__warning("unknown branch filter %s,"
+                                            " check man page\n", s);
+                                goto error;
+                        }
+
+                        *mode |= br->mode;
+
+                        if (!p)
+                                break;
+
+                        s = p + 1;
+                }
+        }
+        ret = 0;
+
+        /* default to any branch */
+        if ((*mode & ~ONLY_PLM) == 0) {
+                *mode = PERF_SAMPLE_BRANCH_ANY;
+        }
+error:
+        free(os);
+        return ret;
+}
+
 static const char * const record_usage[] = {
         "perf record [<options>] [<command>]",
         "perf record [<options>] -- <command> [<options>]",
@@ -665,13 +743,10 @@ static const char * const record_usage[] = {
  */
 static struct perf_record record = {
         .opts = {
-                .target_pid = -1,
-                .target_tid = -1,
                 .mmap_pages = UINT_MAX,
                 .user_freq = UINT_MAX,
                 .user_interval = ULLONG_MAX,
                 .freq = 1000,
-                .sample_id_all_avail = true,
         },
         .write_mode = WRITE_FORCE,
         .file_new = true,
 };
@@ -690,9 +765,9 @@ const struct option record_options[] = {
                      parse_events_option),
         OPT_CALLBACK(0, "filter", &record.evlist, "filter",
                      "event filter", parse_filter),
-        OPT_INTEGER('p', "pid", &record.opts.target_pid,
+        OPT_STRING('p', "pid", &record.opts.target_pid, "pid",
                     "record events on existing process id"),
-        OPT_INTEGER('t', "tid", &record.opts.target_tid,
+        OPT_STRING('t', "tid", &record.opts.target_tid, "tid",
                     "record events on existing thread id"),
         OPT_INTEGER('r', "realtime", &record.realtime_prio,
                     "collect data with this RT SCHED_FIFO priority"),
@@ -738,6 +813,15 @@ const struct option record_options[] = {
         OPT_CALLBACK('G', "cgroup", &record.evlist, "name",
                      "monitor event in cgroup name only",
                      parse_cgroups),
+        OPT_STRING('u', "uid", &record.uid_str, "user", "user to profile"),
+
+        OPT_CALLBACK_NOOPT('b', "branch-any", &record.opts.branch_stack,
+                     "branch any", "sample any taken branches",
+                     parse_branch_stack),
+
+        OPT_CALLBACK('j', "branch-filter", &record.opts.branch_stack,
+                     "branch filter mask", "branch stack filter modes",
+                     parse_branch_stack),
         OPT_END()
 };
 
@@ -758,8 +842,8 @@ int cmd_record(int argc, const char **argv, const char *prefix __used)
 
         argc = parse_options(argc, argv, record_options, record_usage,
                             PARSE_OPT_STOP_AT_NON_OPTION);
-        if (!argc && rec->opts.target_pid == -1 && rec->opts.target_tid == -1 &&
-                !rec->opts.system_wide && !rec->opts.cpu_list)
+        if (!argc && !rec->opts.target_pid && !rec->opts.target_tid &&
+                !rec->opts.system_wide && !rec->opts.cpu_list && !rec->uid_str)
                 usage_with_options(record_usage, record_options);
 
         if (rec->force && rec->append_file) {
@@ -799,11 +883,17 @@ int cmd_record(int argc, const char **argv, const char *prefix __used)
                 goto out_symbol_exit;
         }
 
-        if (rec->opts.target_pid != -1)
+        rec->opts.uid = parse_target_uid(rec->uid_str, rec->opts.target_tid,
+                                         rec->opts.target_pid);
+        if (rec->uid_str != NULL && rec->opts.uid == UINT_MAX - 1)
+                goto out_free_fd;
+
+        if (rec->opts.target_pid)
                 rec->opts.target_tid = rec->opts.target_pid;
 
         if (perf_evlist__create_maps(evsel_list, rec->opts.target_pid,
-                                     rec->opts.target_tid, rec->opts.cpu_list) < 0)
+                                     rec->opts.target_tid, rec->opts.uid,
+                                     rec->opts.cpu_list) < 0)
                 usage_with_options(record_usage, record_options);
 
         list_for_each_entry(pos, &evsel_list->entries, node) {
