| author | Jakub Kicinski <kuba@kernel.org> | 2022-07-09 12:24:15 -0700 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2022-07-09 12:24:16 -0700 |
| commit | 0076cad30135f95bf9a144269906f9b7a4eb542c (patch) | |
| tree | 1a48680205d7b23123a3864c25c814d6d0dfbd8e /tools/bpf/bpftool | |
| parent | 877d4e3cedd18cd5a4cef7685b64af72f8322ac1 (diff) | |
| parent | 24bdfdd2ec343c94adf38fb5bc699f12e543713b (diff) | |
Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:
====================
pull-request: bpf-next 2022-07-09
We've added 94 non-merge commits during the last 19 day(s) which contain
a total of 125 files changed, 5141 insertions(+), 6701 deletions(-).
The main changes are:
1) Add a new way of performing BTF type queries to BPF, from Daniel Müller.
2) Add inlining of calls to the bpf_loop() helper when its function callback is
statically known, from Eduard Zingerman.
3) Implement BPF TCP CC framework usability improvements, from Jörn-Thorben Hinz.
4) Add LSM flavor for attaching per-cgroup BPF programs to existing LSM
hooks, from Stanislav Fomichev.
5) Remove all deprecated libbpf APIs in prep for 1.0 release, from Andrii Nakryiko.
6) Add benchmarks around local_storage to BPF selftests, from Dave Marchevsky.
7) AF_XDP sample removal (given the move to libxdp) and various improvements around AF_XDP
selftests, from Magnus Karlsson & Maciej Fijalkowski.
8) Add bpftool improvements for memcg probing and bash completion, from Quentin Monnet.
9) Add arm64 JIT support for BPF-2-BPF coupled with tail calls, from Jakub Sitnicki.
10) Sockmap optimizations that improve UDP transmission throughput by 61%,
from Cong Wang.
11) Rework perf's BPF prologue code to remove deprecated functions, from Jiri Olsa.
12) Fix sockmap teardown path to avoid sleepable sk_psock_stop, from John Fastabend.
13) Fix libbpf's cleanup around legacy kprobe/uprobe on error case, from Chuang Wang.
14) Fix libbpf's bpf_helpers.h to work with gcc for the case of its sec/pragma
macro, from James Hilliard.
15) Fix libbpf's pt_regs macros for riscv to use a0 for RC register, from Yixun Lan.
16) Fix bpftool to show the name of type BPF_OBJ_LINK, from Yafang Shao.
* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (94 commits)
selftests/bpf: Fix xdp_synproxy build failure if CONFIG_NF_CONNTRACK=m/n
bpf: Correctly propagate errors up from bpf_core_composites_match
libbpf: Disable SEC pragma macro on GCC
bpf: Check attach_func_proto more carefully in check_return_code
selftests/bpf: Add test involving restrict type qualifier
bpftool: Add support for KIND_RESTRICT to gen min_core_btf command
MAINTAINERS: Add entry for AF_XDP selftests files
selftests, xsk: Rename AF_XDP testing app
bpf, docs: Remove deprecated xsk libbpf APIs description
selftests/bpf: Add benchmark for local_storage RCU Tasks Trace usage
libbpf, riscv: Use a0 for RC register
libbpf: Remove unnecessary usdt_rel_ip assignments
selftests/bpf: Fix few more compiler warnings
selftests/bpf: Fix bogus uninitialized variable warning
bpftool: Remove zlib feature test from Makefile
libbpf: Cleanup the legacy uprobe_event on failed add/attach_event()
libbpf: Fix wrong variable used in perf_event_uprobe_open_legacy()
libbpf: Cleanup the legacy kprobe_event on failed add/attach_event()
selftests/bpf: Add type match test against kernel's task_struct
selftests/bpf: Add nested type to type based tests
...
====================
Link: https://lore.kernel.org/r/20220708233145.32365-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
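For item 1 in the summary above, the series introduces a BPF_CORE_TYPE_MATCHES CO-RE
relocation (which the bpftool gen min_core_btf changes below learn to handle) together
with a bpf_core_type_matches() wrapper on the libbpf side. A minimal sketch of how a BPF
program might use it; the local type, section name, and program name here are
illustrative, not taken from this pull request:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_core_read.h>

    /* Local, trimmed-down definition that CO-RE compares against the kernel's BTF. */
    struct task_struct___local {
            int pid;
    } __attribute__((preserve_access_index));

    SEC("tp_btf/sched_switch")
    int check_type_match(void *ctx)
    {
            /* Resolved at load time through the new BPF_CORE_TYPE_MATCHES relocation. */
            if (bpf_core_type_matches(struct task_struct___local))
                    bpf_printk("local task_struct definition matches the kernel");
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";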
Diffstat (limited to 'tools/bpf/bpftool')
| -rw-r--r-- | tools/bpf/bpftool/Documentation/bpftool-feature.rst | 12 |
| -rw-r--r-- | tools/bpf/bpftool/Makefile | 11 |
| -rw-r--r-- | tools/bpf/bpftool/bash-completion/bpftool | 28 |
| -rw-r--r-- | tools/bpf/bpftool/cgroup.c | 109 |
| -rw-r--r-- | tools/bpf/bpftool/common.c | 72 |
| -rw-r--r-- | tools/bpf/bpftool/feature.c | 59 |
| -rw-r--r-- | tools/bpf/bpftool/gen.c | 109 |
| -rw-r--r-- | tools/bpf/bpftool/main.h | 2 |
8 files changed, 346 insertions, 56 deletions
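The cgroup.c changes in the diff below replace the older bpf_prog_query() call with
libbpf's bpf_prog_query_opts(), which can also report per-program attach flags. A minimal
user-space sketch of that API outside bpftool; the function name, cgroup path handling,
and buffer sizes are illustrative assumptions:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <bpf/bpf.h>

    static int dump_cgroup_egress_progs(const char *cgroup_path)
    {
            __u32 ids[64] = {}, flags[64] = {};
            LIBBPF_OPTS(bpf_prog_query_opts, opts,
                    .prog_ids = ids,
                    .prog_attach_flags = flags,
                    .prog_cnt = 64,        /* in: capacity, out: number of programs */
            );
            int fd, err;
            __u32 i;

            fd = open(cgroup_path, O_RDONLY);
            if (fd < 0)
                    return -1;

            err = bpf_prog_query_opts(fd, BPF_CGROUP_INET_EGRESS, &opts);
            close(fd);
            if (err)
                    return err;

            for (i = 0; i < opts.prog_cnt; i++)
                    printf("prog id %u, attach flags 0x%x\n", ids[i], flags[i]);
            return 0;
    }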
diff --git a/tools/bpf/bpftool/Documentation/bpftool-feature.rst b/tools/bpf/bpftool/Documentation/bpftool-feature.rst
index 4ce9a77bc1e0..e44039f89be7 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-feature.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-feature.rst
@@ -24,9 +24,11 @@ FEATURE COMMANDS
 ================

 |	**bpftool** **feature probe** [*COMPONENT*] [**full**] [**unprivileged**] [**macros** [**prefix** *PREFIX*]]
+|	**bpftool** **feature list_builtins** *GROUP*
 |	**bpftool** **feature help**
 |
 |	*COMPONENT* := { **kernel** | **dev** *NAME* }
+|	*GROUP* := { **prog_types** | **map_types** | **attach_types** | **link_types** | **helpers** }

 DESCRIPTION
 ===========
@@ -70,6 +72,16 @@ DESCRIPTION
		  The keywords **full**, **macros** and **prefix** have the same
		  role as when probing the kernel.

+	**bpftool feature list_builtins** *GROUP*
+		  List items known to bpftool. These can be BPF program types
+		  (**prog_types**), BPF map types (**map_types**), attach types
+		  (**attach_types**), link types (**link_types**), or BPF helper
+		  functions (**helpers**). The command does not probe the system, but
+		  simply lists the elements that bpftool knows from compilation time,
+		  as provided from libbpf (for all object types) or from the BPF UAPI
+		  header (list of helpers). This can be used in scripts to iterate over
+		  BPF types or helpers.
+
	**bpftool feature help**
		  Print short help message.
diff --git a/tools/bpf/bpftool/Makefile b/tools/bpf/bpftool/Makefile
index c19e0e4c41bd..6b5b3a99f79d 100644
--- a/tools/bpf/bpftool/Makefile
+++ b/tools/bpf/bpftool/Makefile
@@ -93,10 +93,8 @@ INSTALL ?= install
 RM ?= rm -f

 FEATURE_USER = .bpftool
-FEATURE_TESTS = libbfd disassembler-four-args zlib libcap \
-	clang-bpf-co-re
-FEATURE_DISPLAY = libbfd disassembler-four-args zlib libcap \
-	clang-bpf-co-re
+FEATURE_TESTS = libbfd disassembler-four-args libcap clang-bpf-co-re
+FEATURE_DISPLAY = libbfd disassembler-four-args libcap clang-bpf-co-re

 check_feat := 1
 NON_CHECK_FEAT_TARGETS := clean uninstall doc doc-clean doc-install doc-uninstall
@@ -204,11 +202,6 @@ $(BOOTSTRAP_OUTPUT)disasm.o: $(srctree)/kernel/bpf/disasm.c
 $(OUTPUT)disasm.o: $(srctree)/kernel/bpf/disasm.c
	$(QUIET_CC)$(CC) $(CFLAGS) -c -MMD $< -o $@

-$(OUTPUT)feature.o:
-ifneq ($(feature-zlib), 1)
-	$(error "No zlib found")
-endif
-
 $(BPFTOOL_BOOTSTRAP): $(BOOTSTRAP_OBJS) $(LIBBPF_BOOTSTRAP)
	$(QUIET_LINK)$(HOSTCC) $(HOST_CFLAGS) $(LDFLAGS) $(BOOTSTRAP_OBJS) $(LIBS_BOOTSTRAP) -o $@
diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool
index 91f89a9a5b36..dc1641e3670e 100644
--- a/tools/bpf/bpftool/bash-completion/bpftool
+++ b/tools/bpf/bpftool/bash-completion/bpftool
@@ -703,15 +703,8 @@ _bpftool()
                     return 0
                     ;;
                 type)
-                    local BPFTOOL_MAP_CREATE_TYPES='hash array \
-                        prog_array perf_event_array percpu_hash \
-                        percpu_array stack_trace cgroup_array lru_hash \
-                        lru_percpu_hash lpm_trie array_of_maps \
-                        hash_of_maps devmap devmap_hash sockmap cpumap \
-                        xskmap sockhash cgroup_storage reuseport_sockarray \
-                        percpu_cgroup_storage queue stack sk_storage \
-                        struct_ops ringbuf inode_storage task_storage \
-                        bloom_filter'
+                    local BPFTOOL_MAP_CREATE_TYPES="$(bpftool feature list_builtins map_types 2>/dev/null | \
+                        grep -v '^unspec$')"
                     COMPREPLY=( $( compgen -W "$BPFTOOL_MAP_CREATE_TYPES" -- "$cur" ) )
                     return 0
                     ;;
@@ -1039,14 +1032,8 @@ _bpftool()
                     return 0
                     ;;
                 attach|detach)
-                    local BPFTOOL_CGROUP_ATTACH_TYPES='cgroup_inet_ingress cgroup_inet_egress \
-                        cgroup_inet_sock_create cgroup_sock_ops cgroup_device cgroup_inet4_bind \
-                        cgroup_inet6_bind cgroup_inet4_post_bind cgroup_inet6_post_bind \
-                        cgroup_inet4_connect cgroup_inet6_connect cgroup_inet4_getpeername \
-                        cgroup_inet6_getpeername cgroup_inet4_getsockname cgroup_inet6_getsockname \
-                        cgroup_udp4_sendmsg cgroup_udp6_sendmsg cgroup_udp4_recvmsg \
-                        cgroup_udp6_recvmsg cgroup_sysctl cgroup_getsockopt cgroup_setsockopt \
-                        cgroup_inet_sock_release'
+                    local BPFTOOL_CGROUP_ATTACH_TYPES="$(bpftool feature list_builtins attach_types 2>/dev/null | \
+                        grep '^cgroup_')"
                     local ATTACH_FLAGS='multi override'
                     local PROG_TYPE='id pinned tag name'
                     # Check for $prev = $command first
@@ -1175,9 +1162,14 @@ _bpftool()
                     _bpftool_once_attr 'full unprivileged'
                     return 0
                     ;;
+                list_builtins)
+                    [[ $prev != "$command" ]] && return 0
+                    COMPREPLY=( $( compgen -W 'prog_types map_types \
+                        attach_types link_types helpers' -- "$cur" ) )
+                    ;;
                 *)
                     [[ $prev == $object ]] && \
-                        COMPREPLY=( $( compgen -W 'help probe' -- "$cur" ) )
+                        COMPREPLY=( $( compgen -W 'help list_builtins probe' -- "$cur" ) )
                     ;;
             esac
             ;;
diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c
index 42421fe47a58..cced668fb2a3 100644
--- a/tools/bpf/bpftool/cgroup.c
+++ b/tools/bpf/bpftool/cgroup.c
@@ -15,6 +15,7 @@
 #include <unistd.h>

 #include <bpf/bpf.h>
+#include <bpf/btf.h>

 #include "main.h"

@@ -36,6 +37,8 @@
	"                        cgroup_inet_sock_release }"

 static unsigned int query_flags;
+static struct btf *btf_vmlinux;
+static __u32 btf_vmlinux_id;

 static enum bpf_attach_type parse_attach_type(const char *str)
 {
@@ -64,11 +67,38 @@ static enum bpf_attach_type parse_attach_type(const char *str)
	return __MAX_BPF_ATTACH_TYPE;
 }

+static void guess_vmlinux_btf_id(__u32 attach_btf_obj_id)
+{
+	struct bpf_btf_info btf_info = {};
+	__u32 btf_len = sizeof(btf_info);
+	char name[16] = {};
+	int err;
+	int fd;
+
+	btf_info.name = ptr_to_u64(name);
+	btf_info.name_len = sizeof(name);
+
+	fd = bpf_btf_get_fd_by_id(attach_btf_obj_id);
+	if (fd < 0)
+		return;
+
+	err = bpf_obj_get_info_by_fd(fd, &btf_info, &btf_len);
+	if (err)
+		goto out;
+
+	if (btf_info.kernel_btf && strncmp(name, "vmlinux", sizeof(name)) == 0)
+		btf_vmlinux_id = btf_info.id;
+
+out:
+	close(fd);
+}
+
 static int show_bpf_prog(int id, enum bpf_attach_type attach_type,
			 const char *attach_flags_str,
			 int level)
 {
	char prog_name[MAX_PROG_FULL_NAME];
+	const char *attach_btf_name = NULL;
	struct bpf_prog_info info = {};
	const char *attach_type_str;
	__u32 info_len = sizeof(info);
@@ -84,6 +114,20 @@ static int show_bpf_prog(int id, enum bpf_attach_type attach_type,
	}

	attach_type_str = libbpf_bpf_attach_type_str(attach_type);
+
+	if (btf_vmlinux) {
+		if (!btf_vmlinux_id)
+			guess_vmlinux_btf_id(info.attach_btf_obj_id);
+
+		if (btf_vmlinux_id == info.attach_btf_obj_id &&
+		    info.attach_btf_id < btf__type_cnt(btf_vmlinux)) {
+			const struct btf_type *t =
+				btf__type_by_id(btf_vmlinux, info.attach_btf_id);
+			attach_btf_name =
+				btf__name_by_offset(btf_vmlinux, t->name_off);
+		}
+	}
+
	get_prog_full_name(&info, prog_fd, prog_name, sizeof(prog_name));
	if (json_output) {
		jsonw_start_object(json_wtr);
@@ -95,6 +139,10 @@ static int show_bpf_prog(int id, enum bpf_attach_type attach_type,
			jsonw_string_field(json_wtr, "attach_flags",
					   attach_flags_str);
		jsonw_string_field(json_wtr, "name", prog_name);
+		if (attach_btf_name)
+			jsonw_string_field(json_wtr, "attach_btf_name", attach_btf_name);
+		jsonw_uint_field(json_wtr, "attach_btf_obj_id", info.attach_btf_obj_id);
+		jsonw_uint_field(json_wtr, "attach_btf_id", info.attach_btf_id);
		jsonw_end_object(json_wtr);
	} else {
		printf("%s%-8u ", level ? "    " : "", info.id);
@@ -102,7 +150,13 @@ static int show_bpf_prog(int id, enum bpf_attach_type attach_type,
			printf("%-15s", attach_type_str);
		else
			printf("type %-10u", attach_type);
-		printf(" %-15s %-15s\n", attach_flags_str, prog_name);
+		printf(" %-15s %-15s", attach_flags_str, prog_name);
+		if (attach_btf_name)
+			printf(" %-15s", attach_btf_name);
+		else if (info.attach_btf_id)
+			printf(" attach_btf_obj_id=%d attach_btf_id=%d",
+			       info.attach_btf_obj_id, info.attach_btf_id);
+		printf("\n");
	}

	close(prog_fd);
@@ -144,40 +198,49 @@ static int cgroup_has_attached_progs(int cgroup_fd)
 static int show_attached_bpf_progs(int cgroup_fd, enum bpf_attach_type type,
				   int level)
 {
+	LIBBPF_OPTS(bpf_prog_query_opts, p);
+	__u32 prog_attach_flags[1024] = {0};
	const char *attach_flags_str;
	__u32 prog_ids[1024] = {0};
-	__u32 prog_cnt, iter;
-	__u32 attach_flags;
	char buf[32];
+	__u32 iter;
	int ret;

-	prog_cnt = ARRAY_SIZE(prog_ids);
-	ret = bpf_prog_query(cgroup_fd, type, query_flags, &attach_flags,
-			     prog_ids, &prog_cnt);
+	p.query_flags = query_flags;
+	p.prog_cnt = ARRAY_SIZE(prog_ids);
+	p.prog_ids = prog_ids;
+	p.prog_attach_flags = prog_attach_flags;
+
+	ret = bpf_prog_query_opts(cgroup_fd, type, &p);
	if (ret)
		return ret;

-	if (prog_cnt == 0)
+	if (p.prog_cnt == 0)
		return 0;

-	switch (attach_flags) {
-	case BPF_F_ALLOW_MULTI:
-		attach_flags_str = "multi";
-		break;
-	case BPF_F_ALLOW_OVERRIDE:
-		attach_flags_str = "override";
-		break;
-	case 0:
-		attach_flags_str = "";
-		break;
-	default:
-		snprintf(buf, sizeof(buf), "unknown(%x)", attach_flags);
-		attach_flags_str = buf;
-	}
+	for (iter = 0; iter < p.prog_cnt; iter++) {
+		__u32 attach_flags;
+
+		attach_flags = prog_attach_flags[iter] ?: p.attach_flags;
+
+		switch (attach_flags) {
+		case BPF_F_ALLOW_MULTI:
+			attach_flags_str = "multi";
+			break;
+		case BPF_F_ALLOW_OVERRIDE:
+			attach_flags_str = "override";
+			break;
+		case 0:
+			attach_flags_str = "";
+			break;
+		default:
+			snprintf(buf, sizeof(buf), "unknown(%x)", attach_flags);
+			attach_flags_str = buf;
+		}

-	for (iter = 0; iter < prog_cnt; iter++)
		show_bpf_prog(prog_ids[iter], type, attach_flags_str, level);
+	}

	return 0;
 }
@@ -233,6 +296,7 @@ static int do_show(int argc, char **argv)
		printf("%-8s %-15s %-15s %-15s\n", "ID", "AttachType",
		       "AttachFlags", "Name");

+	btf_vmlinux = libbpf_find_kernel_btf();
	for (type = 0; type < __MAX_BPF_ATTACH_TYPE; type++) {
		/*
		 * Not all attach types may be supported, so it's expected,
@@ -296,6 +360,7 @@ static int do_show_tree_fn(const char *fpath, const struct stat *sb,
			printf("%s\n", fpath);
	}

+	btf_vmlinux = libbpf_find_kernel_btf();
	for (type = 0; type < __MAX_BPF_ATTACH_TYPE; type++)
		show_attached_bpf_progs(cgroup_fd, type, ftw->level);
diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c
index a0d4acd7c54a..067e9ea59e3b 100644
--- a/tools/bpf/bpftool/common.c
+++ b/tools/bpf/bpftool/common.c
@@ -13,14 +13,17 @@
 #include <stdlib.h>
 #include <string.h>
 #include <unistd.h>
-#include <linux/limits.h>
-#include <linux/magic.h>
 #include <net/if.h>
 #include <sys/mount.h>
 #include <sys/resource.h>
 #include <sys/stat.h>
 #include <sys/vfs.h>

+#include <linux/filter.h>
+#include <linux/limits.h>
+#include <linux/magic.h>
+#include <linux/unistd.h>
+
 #include <bpf/bpf.h>
 #include <bpf/hashmap.h>
 #include <bpf/libbpf.h> /* libbpf_num_possible_cpus */
@@ -73,11 +76,73 @@ static bool is_bpffs(char *path)
	return (unsigned long)st_fs.f_type == BPF_FS_MAGIC;
 }

+/* Probe whether kernel switched from memlock-based (RLIMIT_MEMLOCK) to
+ * memcg-based memory accounting for BPF maps and programs. This was done in
+ * commit 97306be45fbe ("Merge branch 'switch to memcg-based memory
+ * accounting'"), in Linux 5.11.
+ *
+ * Libbpf also offers to probe for memcg-based accounting vs rlimit, but does
+ * so by checking for the availability of a given BPF helper and this has
+ * failed on some kernels with backports in the past, see commit 6b4384ff1088
+ * ("Revert "bpftool: Use libbpf 1.0 API mode instead of RLIMIT_MEMLOCK"").
+ * Instead, we can probe by lowering the process-based rlimit to 0, trying to
+ * load a BPF object, and resetting the rlimit. If the load succeeds then
+ * memcg-based accounting is supported.
+ *
+ * This would be too dangerous to do in the library, because multithreaded
+ * applications might attempt to load items while the rlimit is at 0. Given
+ * that bpftool is single-threaded, this is fine to do here.
+ */
+static bool known_to_need_rlimit(void)
+{
+	struct rlimit rlim_init, rlim_cur_zero = {};
+	struct bpf_insn insns[] = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	};
+	size_t insn_cnt = ARRAY_SIZE(insns);
+	union bpf_attr attr;
+	int prog_fd, err;
+
+	memset(&attr, 0, sizeof(attr));
+	attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
+	attr.insns = ptr_to_u64(insns);
+	attr.insn_cnt = insn_cnt;
+	attr.license = ptr_to_u64("GPL");
+
+	if (getrlimit(RLIMIT_MEMLOCK, &rlim_init))
+		return false;
+
+	/* Drop the soft limit to zero. We maintain the hard limit to its
+	 * current value, because lowering it would be a permanent operation
+	 * for unprivileged users.
+	 */
+	rlim_cur_zero.rlim_max = rlim_init.rlim_max;
+	if (setrlimit(RLIMIT_MEMLOCK, &rlim_cur_zero))
+		return false;
+
+	/* Do not use bpf_prog_load() from libbpf here, because it calls
+	 * bump_rlimit_memlock(), interfering with the current probe.
+	 */
+	prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
+	err = errno;
+
+	/* reset soft rlimit to its initial value */
+	setrlimit(RLIMIT_MEMLOCK, &rlim_init);
+
+	if (prog_fd < 0)
+		return err == EPERM;
+
+	close(prog_fd);
+	return false;
+}
+
 void set_max_rlimit(void)
 {
	struct rlimit rinf = { RLIM_INFINITY, RLIM_INFINITY };

-	setrlimit(RLIMIT_MEMLOCK, &rinf);
+	if (known_to_need_rlimit())
+		setrlimit(RLIMIT_MEMLOCK, &rinf);
 }

 static int
@@ -251,6 +316,7 @@ const char *get_fd_type_name(enum bpf_obj_type type)
		[BPF_OBJ_UNKNOWN]	= "unknown",
		[BPF_OBJ_PROG]		= "prog",
		[BPF_OBJ_MAP]		= "map",
+		[BPF_OBJ_LINK]		= "link",
	};

	if (type < 0 || type >= ARRAY_SIZE(names) || !names[type])
diff --git a/tools/bpf/bpftool/feature.c b/tools/bpf/bpftool/feature.c
index bac4ef428a02..7ecabf7947fb 100644
--- a/tools/bpf/bpftool/feature.c
+++ b/tools/bpf/bpftool/feature.c
@@ -1258,6 +1258,58 @@ exit_close_json:
	return 0;
 }

+static const char *get_helper_name(unsigned int id)
+{
+	if (id >= ARRAY_SIZE(helper_name))
+		return NULL;
+
+	return helper_name[id];
+}
+
+static int do_list_builtins(int argc, char **argv)
+{
+	const char *(*get_name)(unsigned int id);
+	unsigned int id = 0;
+
+	if (argc < 1)
+		usage();
+
+	if (is_prefix(*argv, "prog_types")) {
+		get_name = (const char *(*)(unsigned int))libbpf_bpf_prog_type_str;
+	} else if (is_prefix(*argv, "map_types")) {
+		get_name = (const char *(*)(unsigned int))libbpf_bpf_map_type_str;
+	} else if (is_prefix(*argv, "attach_types")) {
+		get_name = (const char *(*)(unsigned int))libbpf_bpf_attach_type_str;
+	} else if (is_prefix(*argv, "link_types")) {
+		get_name = (const char *(*)(unsigned int))libbpf_bpf_link_type_str;
+	} else if (is_prefix(*argv, "helpers")) {
+		get_name = get_helper_name;
+	} else {
+		p_err("expected 'prog_types', 'map_types', 'attach_types', 'link_types' or 'helpers', got: %s", *argv);
+		return -1;
+	}
+
+	if (json_output)
+		jsonw_start_array(json_wtr);	/* root array */
+
+	while (true) {
+		const char *name;
+
+		name = get_name(id++);
+		if (!name)
+			break;
+		if (json_output)
+			jsonw_string(json_wtr, name);
+		else
+			printf("%s\n", name);
+	}
+
+	if (json_output)
+		jsonw_end_array(json_wtr);	/* root array */
+
+	return 0;
+}
+
 static int do_help(int argc, char **argv)
 {
	if (json_output) {
@@ -1267,9 +1319,11 @@ static int do_help(int argc, char **argv)

	fprintf(stderr,
		"Usage: %1$s %2$s probe [COMPONENT] [full] [unprivileged] [macros [prefix PREFIX]]\n"
+		"       %1$s %2$s list_builtins GROUP\n"
		"       %1$s %2$s help\n"
		"\n"
		"       COMPONENT := { kernel | dev NAME }\n"
+		"       GROUP := { prog_types | map_types | attach_types | link_types | helpers }\n"
		"       " HELP_SPEC_OPTIONS " }\n"
		"",
		bin_name, argv[-2]);
@@ -1278,8 +1332,9 @@
 }

 static const struct cmd cmds[] = {
-	{ "probe",	do_probe },
-	{ "help",	do_help },
+	{ "probe",		do_probe },
+	{ "list_builtins",	do_list_builtins },
+	{ "help",		do_help },
	{ 0 }
 };
diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
index 480cbd859359..1cf53bb01936 100644
--- a/tools/bpf/bpftool/gen.c
+++ b/tools/bpf/bpftool/gen.c
@@ -1762,6 +1762,7 @@ btfgen_mark_type(struct btfgen_info *info, unsigned int type_id, bool follow_poi
		}
		break;
	case BTF_KIND_CONST:
+	case BTF_KIND_RESTRICT:
	case BTF_KIND_VOLATILE:
	case BTF_KIND_TYPEDEF:
		err = btfgen_mark_type(info, btf_type->type, follow_pointers);
@@ -1856,6 +1857,112 @@ static int btfgen_record_field_relo(struct btfgen_info *info, struct bpf_core_spec
	return 0;
 }

+/* Mark types, members, and member types. Compared to btfgen_record_field_relo,
+ * this function does not rely on the target spec for inferring members, but
+ * uses the associated BTF.
+ *
+ * The `behind_ptr` argument is used to stop marking of composite types reached
+ * through a pointer. This way, we can keep BTF size in check while providing
+ * reasonable match semantics.
+ */
+static int btfgen_mark_type_match(struct btfgen_info *info, __u32 type_id, bool behind_ptr)
+{
+	const struct btf_type *btf_type;
+	struct btf *btf = info->src_btf;
+	struct btf_type *cloned_type;
+	int i, err;
+
+	if (type_id == 0)
+		return 0;
+
+	btf_type = btf__type_by_id(btf, type_id);
+	/* mark type on cloned BTF as used */
+	cloned_type = (struct btf_type *)btf__type_by_id(info->marked_btf, type_id);
+	cloned_type->name_off = MARKED;
+
+	switch (btf_kind(btf_type)) {
+	case BTF_KIND_UNKN:
+	case BTF_KIND_INT:
+	case BTF_KIND_FLOAT:
+	case BTF_KIND_ENUM:
+	case BTF_KIND_ENUM64:
+		break;
+	case BTF_KIND_STRUCT:
+	case BTF_KIND_UNION: {
+		struct btf_member *m = btf_members(btf_type);
+		__u16 vlen = btf_vlen(btf_type);
+
+		if (behind_ptr)
+			break;
+
+		for (i = 0; i < vlen; i++, m++) {
+			/* mark member */
+			btfgen_mark_member(info, type_id, i);
+
+			/* mark member's type */
+			err = btfgen_mark_type_match(info, m->type, false);
+			if (err)
+				return err;
+		}
+		break;
+	}
+	case BTF_KIND_CONST:
+	case BTF_KIND_FWD:
+	case BTF_KIND_RESTRICT:
+	case BTF_KIND_TYPEDEF:
+	case BTF_KIND_VOLATILE:
+		return btfgen_mark_type_match(info, btf_type->type, behind_ptr);
+	case BTF_KIND_PTR:
+		return btfgen_mark_type_match(info, btf_type->type, true);
+	case BTF_KIND_ARRAY: {
+		struct btf_array *array;
+
+		array = btf_array(btf_type);
+		/* mark array type */
+		err = btfgen_mark_type_match(info, array->type, false);
+		/* mark array's index type */
+		err = err ? : btfgen_mark_type_match(info, array->index_type, false);
+		if (err)
+			return err;
+		break;
+	}
+	case BTF_KIND_FUNC_PROTO: {
+		__u16 vlen = btf_vlen(btf_type);
+		struct btf_param *param;
+
+		/* mark ret type */
+		err = btfgen_mark_type_match(info, btf_type->type, false);
+		if (err)
+			return err;
+
+		/* mark parameters types */
+		param = btf_params(btf_type);
+		for (i = 0; i < vlen; i++) {
+			err = btfgen_mark_type_match(info, param->type, false);
+			if (err)
+				return err;
+			param++;
+		}
+		break;
+	}
+	/* tells if some other type needs to be handled */
+	default:
+		p_err("unsupported kind: %s (%d)", btf_kind_str(btf_type), type_id);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/* Mark types, members, and member types. Compared to btfgen_record_field_relo,
+ * this function does not rely on the target spec for inferring members, but
+ * uses the associated BTF.
+ */
+static int btfgen_record_type_match_relo(struct btfgen_info *info, struct bpf_core_spec *targ_spec)
+{
+	return btfgen_mark_type_match(info, targ_spec->root_type_id, false);
+}
+
 static int btfgen_record_type_relo(struct btfgen_info *info, struct bpf_core_spec *targ_spec)
 {
	return btfgen_mark_type(info, targ_spec->root_type_id, true);
@@ -1882,6 +1989,8 @@ static int btfgen_record_reloc(struct btfgen_info *info, struct bpf_core_spec *r
	case BPF_CORE_TYPE_EXISTS:
	case BPF_CORE_TYPE_SIZE:
		return btfgen_record_type_relo(info, res);
+	case BPF_CORE_TYPE_MATCHES:
+		return btfgen_record_type_match_relo(info, res);
	case BPF_CORE_ENUMVAL_EXISTS:
	case BPF_CORE_ENUMVAL_VALUE:
		return btfgen_record_enumval_relo(info, res);
diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h
index 589cb76b227a..5e5060c2ac04 100644
--- a/tools/bpf/bpftool/main.h
+++ b/tools/bpf/bpftool/main.h
@@ -63,8 +63,6 @@ static inline void *u64_to_ptr(__u64 ptr)
 #define HELP_SPEC_LINK						\
	"LINK := { id LINK_ID | pinned FILE }"

-extern const char * const attach_type_name[__MAX_BPF_ATTACH_TYPE];
-
 /* keep in sync with the definition in skeleton/pid_iter.bpf.c */
 enum bpf_obj_type {
	BPF_OBJ_UNKNOWN,
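As a closing note on item 2 of the summary above, the bpf_loop() inlining applies when the
callback passed to the helper is a statically known function. A minimal BPF program sketch
of that pattern; the attach hook, program name, and callback are illustrative and not taken
from the series:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    /* Statically known callback: with the new verifier support the call can be
     * inlined rather than dispatched through the bpf_loop() helper.
     * Returning 0 continues the loop, 1 stops it early.
     */
    static long sum_cb(__u32 index, void *ctx)
    {
            *(__u64 *)ctx += index;
            return 0;
    }

    SEC("fentry/do_nanosleep")
    int sum_on_sleep(void *ctx)
    {
            __u64 sum = 0;

            bpf_loop(100, sum_cb, &sum, 0);
            bpf_printk("sum=%llu", sum);
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";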
