author     Alexei Starovoitov <ast@kernel.org>  2026-01-20 16:15:58 -0800
committer  Alexei Starovoitov <ast@kernel.org>  2026-01-20 16:22:38 -0800
commit     b236134f70ba1e98a85d00623ea8fafb41dacf7f (patch)
tree       8d9bf4462f03a5f2599978c564601043240e62ad
parent     2e6690d4f7fc41c4fae7d0a4c0bf11f1973e5650 (diff)
parent     74bc4f6127207624ec06f0d0984b280a390992aa (diff)
Merge branch 'bpf-kernel-functions-with-kf_implicit_args'
Ihor Solodrai says:

====================
bpf: Kernel functions with KF_IMPLICIT_ARGS

This series implements a generic "implicit arguments" feature for BPF
kernel functions. For context see prior work [1][2].

A mechanism is created for kfuncs to have arguments that are not
visible to the BPF programs, and are provided to the kernel function
implementation by the verifier. This mechanism is then used in the
kfuncs that have a parameter with the __prog annotation [3], which is
the current way of passing a struct bpf_prog_aux pointer to kfuncs.

A function with implicit arguments is defined by the KF_IMPLICIT_ARGS
flag in a BTF_IDS_FLAGS set. In this series, only a pointer to struct
bpf_prog_aux can be implicit, although it is simple to extend this to
more types.

The verifier handles a kfunc with KF_IMPLICIT_ARGS by resolving it to
a different (actual) BTF prototype early in verification (patch #3).

A <kfunc>_impl function generated in BTF for a kfunc with implicit
args has neither a "bpf_kfunc" decl tag nor a kernel address. The
verifier will reject a program trying to call such an _impl kfunc.
The use of <kfunc>_impl functions in BPF is only allowed for kfuncs
with an explicit kernel (or kmodule) declaration, that is, in
"legacy" cases. As of this series there are no legacy kernel
functions, as all __prog users are migrated to KF_IMPLICIT_ARGS.
However, the implementation allows for legacy-case support in
principle.

The series removes the following BPF kernel functions:
  - bpf_stream_vprintk_impl
  - bpf_task_work_schedule_resume_impl
  - bpf_task_work_schedule_signal_impl
  - bpf_wq_set_callback_impl

This will break existing BPF programs calling these functions (the
verifier will not load them) on new kernels. To mitigate, BPF users
are advised to use the following pattern [4]:

    if (xxx_impl)
        xxx_impl(..., NULL);
    else
        xxx(...);

which can be wrapped in a macro.

The series consists of the following patches:
  - patches #1 and #2 are non-functional refactoring in kernel/bpf
  - patch #3 defines the KF_IMPLICIT_ARGS flag and teaches the verifier about it
  - patches #4-#5 implement the btf2btf transformation in resolve_btfids
  - patch #6 adds selftests specific to the KF_IMPLICIT_ARGS feature
  - patches #7-#11 migrate the current users of the __prog argument to KF_IMPLICIT_ARGS
  - patch #12 removes __prog arg suffix support from the kernel
  - patch #13 updates the docs

[1] https://lore.kernel.org/bpf/20251029190113.3323406-1-ihor.solodrai@linux.dev/
[2] https://lore.kernel.org/bpf/20250924211716.1287715-1-ihor.solodrai@linux.dev/
[3] https://docs.kernel.org/bpf/kfuncs.html#prog-annotation
[4] https://lore.kernel.org/bpf/CAEf4BzbgPfRm9BX=TsZm-TsHFAHcwhPY4vTt=9OT-uhWqf8tqw@mail.gmail.com/

---

v2->v3:
  - resolve_btfids: Use dynamic reallocation for btf2btf_context arrays (Andrii)
  - resolve_btfids: Add missing free() for btf2btf_context arrays (AI)
  - Other nits in resolve_btfids (Andrii, Eduard)

v2: https://lore.kernel.org/bpf/20260116201700.864797-1-ihor.solodrai@linux.dev/

v1->v2:
  - Replace the following kernel functions with KF_IMPLICIT_ARGS versions:
    - bpf_stream_vprintk_impl -> bpf_stream_vprintk
    - bpf_task_work_schedule_resume_impl -> bpf_task_work_schedule_resume
    - bpf_task_work_schedule_signal_impl -> bpf_task_work_schedule_signal
    - bpf_wq_set_callback_impl -> bpf_wq_set_callback
  - Remove __prog arg suffix support from the verifier
  - Rework the btf2btf implementation in resolve_btfids:
    - Do distill base and sort before BTF_ids patching
    - Collect kfuncs based on BTF decl tags, before BTF_ids are patched
  - resolve_btfids: use dynamic memory for intermediate data (Andrii)
  - verifier: reset .subreg_def for caller-saved registers on kfunc call (Eduard)
  - selftests/hid: remove Makefile changes (Benjamin)
  - selftests/bpf: Add a patch (#11) migrating the struct_ops_assoc test to KF_IMPLICIT_ARGS
  - Various nits across the series (Alexei, Andrii, Eduard)

v1: https://lore.kernel.org/bpf/20260109184852.1089786-1-ihor.solodrai@linux.dev/
====================

Link: https://patch.msgid.link/20260120222638.3976562-1-ihor.solodrai@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
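The fallback pattern advised above can be wrapped in a BPF-side compatibility macro. The sketch below is illustrative, not part of the series: it assumes both symbols are declared `__weak __ksym` (so whichever one the running kernel lacks resolves to a null address), uses the `bpf_ksym_exists()` helper from libbpf's `bpf_helpers.h` for the runtime check, and the macro name `bpf_wq_set_callback_any` is hypothetical.

```c
/* Hypothetical portability wrapper: call the legacy _impl kfunc on older
 * kernels, the KF_IMPLICIT_ARGS variant on newer ones. Both externs are
 * __weak __ksym, so an unresolved symbol becomes NULL instead of a load
 * failure, and bpf_ksym_exists() picks the branch at run time.
 */
extern int bpf_wq_set_callback_impl(struct bpf_wq *wq,
				    int (callback_fn)(void *map, int *key, void *value),
				    unsigned int flags, void *aux__ign) __weak __ksym;
extern int bpf_wq_set_callback(struct bpf_wq *wq,
			       int (*callback_fn)(void *, int *, void *),
			       unsigned int flags) __weak __ksym;

#define bpf_wq_set_callback_any(wq, cb, flags)				\
	(bpf_ksym_exists(bpf_wq_set_callback_impl) ?			\
		 bpf_wq_set_callback_impl((wq), (cb), (flags), NULL) :	\
		 bpf_wq_set_callback((wq), (cb), (flags)))
```

A BPF program would then call `bpf_wq_set_callback_any(wq, cb, 0)` unconditionally; the dead branch is pruned by the verifier since the missing weak ksym is a known-zero constant.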
-rw-r--r--  Documentation/bpf/kfuncs.rst                                   49
-rw-r--r--  drivers/hid/bpf/progs/hid_bpf_helpers.h                         8
-rw-r--r--  include/linux/btf.h                                             5
-rw-r--r--  kernel/bpf/btf.c                                               70
-rw-r--r--  kernel/bpf/helpers.c                                           43
-rw-r--r--  kernel/bpf/stream.c                                             5
-rw-r--r--  kernel/bpf/verifier.c                                         245
-rw-r--r--  tools/bpf/resolve_btfids/main.c                               451
-rw-r--r--  tools/lib/bpf/bpf_helpers.h                                     6
-rw-r--r--  tools/testing/selftests/bpf/bpf_experimental.h                  5
-rw-r--r--  tools/testing/selftests/bpf/prog_tests/kfunc_implicit_args.c   10
-rw-r--r--  tools/testing/selftests/bpf/progs/file_reader.c                 2
-rw-r--r--  tools/testing/selftests/bpf/progs/kfunc_implicit_args.c        41
-rw-r--r--  tools/testing/selftests/bpf/progs/stream_fail.c                 6
-rw-r--r--  tools/testing/selftests/bpf/progs/struct_ops_assoc.c            8
-rw-r--r--  tools/testing/selftests/bpf/progs/struct_ops_assoc_in_timer.c   4
-rw-r--r--  tools/testing/selftests/bpf/progs/struct_ops_assoc_reuse.c      6
-rw-r--r--  tools/testing/selftests/bpf/progs/task_work.c                   7
-rw-r--r--  tools/testing/selftests/bpf/progs/task_work_fail.c              8
-rw-r--r--  tools/testing/selftests/bpf/progs/task_work_stress.c            4
-rw-r--r--  tools/testing/selftests/bpf/progs/verifier_async_cb_context.c   8
-rw-r--r--  tools/testing/selftests/bpf/progs/wq_failures.c                 4
-rw-r--r--  tools/testing/selftests/bpf/test_kmods/bpf_testmod.c           35
-rw-r--r--  tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h      6
-rw-r--r--  tools/testing/selftests/hid/progs/hid_bpf_helpers.h             8
25 files changed, 819 insertions, 225 deletions
diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst
index 3eb59a8f9f34..75e6c078e0e7 100644
--- a/Documentation/bpf/kfuncs.rst
+++ b/Documentation/bpf/kfuncs.rst
@@ -232,23 +232,6 @@ Or::
...
}
-2.3.6 __prog Annotation
----------------------------
-This annotation is used to indicate that the argument needs to be fixed up to
-the bpf_prog_aux of the caller BPF program. Any value passed into this argument
-is ignored, and rewritten by the verifier.
-
-An example is given below::
-
- __bpf_kfunc int bpf_wq_set_callback_impl(struct bpf_wq *wq,
- int (callback_fn)(void *map, int *key, void *value),
- unsigned int flags,
- void *aux__prog)
- {
- struct bpf_prog_aux *aux = aux__prog;
- ...
- }
-
.. _BPF_kfunc_nodef:
2.4 Using an existing kernel function
@@ -381,6 +364,38 @@ encouraged to make their use-cases known as early as possible, and participate
in upstream discussions regarding whether to keep, change, deprecate, or remove
those kfuncs if and when such discussions occur.
+2.5.9 KF_IMPLICIT_ARGS flag
+------------------------------------
+
+The KF_IMPLICIT_ARGS flag is used to indicate that the BPF signature
+of the kfunc is different from its kernel signature, and the values
+for implicit arguments are provided at load time by the verifier.
+
+Only arguments of specific types are implicit.
+Currently only ``struct bpf_prog_aux *`` type is supported.
+
+A kfunc with KF_IMPLICIT_ARGS flag therefore has two types in BTF: one
+function matching the kernel declaration (with _impl suffix in the
+name by convention), and another matching the intended BPF API.
+
+The verifier only allows calls to the non-_impl version of a kfunc,
+which uses a signature without the implicit arguments.
+
+Example declaration:
+
+.. code-block:: c
+
+ __bpf_kfunc int bpf_task_work_schedule_signal(struct task_struct *task, struct bpf_task_work *tw,
+ void *map__map, bpf_task_work_callback_t callback,
+ struct bpf_prog_aux *aux) { ... }
+
+Example usage in BPF program:
+
+.. code-block:: c
+
+ /* note that the last argument is omitted */
+ bpf_task_work_schedule_signal(task, &work->tw, &arrmap, task_work_callback);
+
2.6 Registering the kfuncs
--------------------------
diff --git a/drivers/hid/bpf/progs/hid_bpf_helpers.h b/drivers/hid/bpf/progs/hid_bpf_helpers.h
index bf19785a6b06..228f8d787567 100644
--- a/drivers/hid/bpf/progs/hid_bpf_helpers.h
+++ b/drivers/hid/bpf/progs/hid_bpf_helpers.h
@@ -33,11 +33,9 @@ extern int hid_bpf_try_input_report(struct hid_bpf_ctx *ctx,
/* bpf_wq implementation */
extern int bpf_wq_init(struct bpf_wq *wq, void *p__map, unsigned int flags) __weak __ksym;
extern int bpf_wq_start(struct bpf_wq *wq, unsigned int flags) __weak __ksym;
-extern int bpf_wq_set_callback_impl(struct bpf_wq *wq,
- int (callback_fn)(void *map, int *key, void *value),
- unsigned int flags__k, void *aux__ign) __ksym;
-#define bpf_wq_set_callback(wq, cb, flags) \
- bpf_wq_set_callback_impl(wq, cb, flags, NULL)
+extern int bpf_wq_set_callback(struct bpf_wq *wq,
+ int (*callback_fn)(void *, int *, void *),
+ unsigned int flags) __weak __ksym;
#define HID_MAX_DESCRIPTOR_SIZE 4096
#define HID_IGNORE_EVENT -1
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 78dc79810c7d..48108471c5b1 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -78,6 +78,7 @@
#define KF_ARENA_RET (1 << 13) /* kfunc returns an arena pointer */
#define KF_ARENA_ARG1 (1 << 14) /* kfunc takes an arena pointer as its first argument */
#define KF_ARENA_ARG2 (1 << 15) /* kfunc takes an arena pointer as its second argument */
+#define KF_IMPLICIT_ARGS (1 << 16) /* kfunc has implicit arguments supplied by the verifier */
/*
* Tag marking a kernel function as a kfunc. This is meant to minimize the
@@ -575,8 +576,8 @@ const char *btf_name_by_offset(const struct btf *btf, u32 offset);
const char *btf_str_by_offset(const struct btf *btf, u32 offset);
struct btf *btf_parse_vmlinux(void);
struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog);
-u32 *btf_kfunc_id_set_contains(const struct btf *btf, u32 kfunc_btf_id,
- const struct bpf_prog *prog);
+u32 *btf_kfunc_flags(const struct btf *btf, u32 kfunc_btf_id, const struct bpf_prog *prog);
+bool btf_kfunc_is_allowed(const struct btf *btf, u32 kfunc_btf_id, const struct bpf_prog *prog);
u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id,
const struct bpf_prog *prog);
int register_btf_kfunc_id_set(enum bpf_prog_type prog_type,
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 364dd84bfc5a..d10b3404260f 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -8757,24 +8757,17 @@ end:
return ret;
}
-static u32 *__btf_kfunc_id_set_contains(const struct btf *btf,
- enum btf_kfunc_hook hook,
- u32 kfunc_btf_id,
- const struct bpf_prog *prog)
+static u32 *btf_kfunc_id_set_contains(const struct btf *btf,
+ enum btf_kfunc_hook hook,
+ u32 kfunc_btf_id)
{
- struct btf_kfunc_hook_filter *hook_filter;
struct btf_id_set8 *set;
- u32 *id, i;
+ u32 *id;
if (hook >= BTF_KFUNC_HOOK_MAX)
return NULL;
if (!btf->kfunc_set_tab)
return NULL;
- hook_filter = &btf->kfunc_set_tab->hook_filters[hook];
- for (i = 0; i < hook_filter->nr_filters; i++) {
- if (hook_filter->filters[i](prog, kfunc_btf_id))
- return NULL;
- }
set = btf->kfunc_set_tab->sets[hook];
if (!set)
return NULL;
@@ -8785,6 +8778,28 @@ static u32 *__btf_kfunc_id_set_contains(const struct btf *btf,
return id + 1;
}
+static bool __btf_kfunc_is_allowed(const struct btf *btf,
+ enum btf_kfunc_hook hook,
+ u32 kfunc_btf_id,
+ const struct bpf_prog *prog)
+{
+ struct btf_kfunc_hook_filter *hook_filter;
+ int i;
+
+ if (hook >= BTF_KFUNC_HOOK_MAX)
+ return false;
+ if (!btf->kfunc_set_tab)
+ return false;
+
+ hook_filter = &btf->kfunc_set_tab->hook_filters[hook];
+ for (i = 0; i < hook_filter->nr_filters; i++) {
+ if (hook_filter->filters[i](prog, kfunc_btf_id))
+ return false;
+ }
+
+ return true;
+}
+
static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
{
switch (prog_type) {
@@ -8832,6 +8847,26 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
}
}
+bool btf_kfunc_is_allowed(const struct btf *btf,
+ u32 kfunc_btf_id,
+ const struct bpf_prog *prog)
+{
+ enum bpf_prog_type prog_type = resolve_prog_type(prog);
+ enum btf_kfunc_hook hook;
+ u32 *kfunc_flags;
+
+ kfunc_flags = btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_COMMON, kfunc_btf_id);
+ if (kfunc_flags && __btf_kfunc_is_allowed(btf, BTF_KFUNC_HOOK_COMMON, kfunc_btf_id, prog))
+ return true;
+
+ hook = bpf_prog_type_to_kfunc_hook(prog_type);
+ kfunc_flags = btf_kfunc_id_set_contains(btf, hook, kfunc_btf_id);
+ if (kfunc_flags && __btf_kfunc_is_allowed(btf, hook, kfunc_btf_id, prog))
+ return true;
+
+ return false;
+}
+
/* Caution:
* Reference to the module (obtained using btf_try_get_module) corresponding to
* the struct btf *MUST* be held when calling this function from verifier
@@ -8839,26 +8874,27 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
* keeping the reference for the duration of the call provides the necessary
* protection for looking up a well-formed btf->kfunc_set_tab.
*/
-u32 *btf_kfunc_id_set_contains(const struct btf *btf,
- u32 kfunc_btf_id,
- const struct bpf_prog *prog)
+u32 *btf_kfunc_flags(const struct btf *btf, u32 kfunc_btf_id, const struct bpf_prog *prog)
{
enum bpf_prog_type prog_type = resolve_prog_type(prog);
enum btf_kfunc_hook hook;
u32 *kfunc_flags;
- kfunc_flags = __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_COMMON, kfunc_btf_id, prog);
+ kfunc_flags = btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_COMMON, kfunc_btf_id);
if (kfunc_flags)
return kfunc_flags;
hook = bpf_prog_type_to_kfunc_hook(prog_type);
- return __btf_kfunc_id_set_contains(btf, hook, kfunc_btf_id, prog);
+ return btf_kfunc_id_set_contains(btf, hook, kfunc_btf_id);
}
u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id,
const struct bpf_prog *prog)
{
- return __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_FMODRET, kfunc_btf_id, prog);
+ if (!__btf_kfunc_is_allowed(btf, BTF_KFUNC_HOOK_FMODRET, kfunc_btf_id, prog))
+ return NULL;
+
+ return btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_FMODRET, kfunc_btf_id);
}
static int __register_btf_kfunc_id_set(enum btf_kfunc_hook hook,
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 9eaa4185e0a7..f8aa1320e2f7 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -3120,12 +3120,11 @@ __bpf_kfunc int bpf_wq_start(struct bpf_wq *wq, unsigned int flags)
return 0;
}
-__bpf_kfunc int bpf_wq_set_callback_impl(struct bpf_wq *wq,
- int (callback_fn)(void *map, int *key, void *value),
- unsigned int flags,
- void *aux__prog)
+__bpf_kfunc int bpf_wq_set_callback(struct bpf_wq *wq,
+ int (callback_fn)(void *map, int *key, void *value),
+ unsigned int flags,
+ struct bpf_prog_aux *aux)
{
- struct bpf_prog_aux *aux = (struct bpf_prog_aux *)aux__prog;
struct bpf_async_kern *async = (struct bpf_async_kern *)wq;
if (flags)
@@ -4275,41 +4274,39 @@ release_prog:
}
/**
- * bpf_task_work_schedule_signal_impl - Schedule BPF callback using task_work_add with TWA_SIGNAL
+ * bpf_task_work_schedule_signal - Schedule BPF callback using task_work_add with TWA_SIGNAL
* mode
* @task: Task struct for which callback should be scheduled
* @tw: Pointer to struct bpf_task_work in BPF map value for internal bookkeeping
* @map__map: bpf_map that embeds struct bpf_task_work in the values
* @callback: pointer to BPF subprogram to call
- * @aux__prog: user should pass NULL
+ * @aux: pointer to bpf_prog_aux of the caller BPF program, implicitly set by the verifier
*
* Return: 0 if task work has been scheduled successfully, negative error code otherwise
*/
-__bpf_kfunc int bpf_task_work_schedule_signal_impl(struct task_struct *task,
- struct bpf_task_work *tw, void *map__map,
- bpf_task_work_callback_t callback,
- void *aux__prog)
+__bpf_kfunc int bpf_task_work_schedule_signal(struct task_struct *task, struct bpf_task_work *tw,
+ void *map__map, bpf_task_work_callback_t callback,
+ struct bpf_prog_aux *aux)
{
- return bpf_task_work_schedule(task, tw, map__map, callback, aux__prog, TWA_SIGNAL);
+ return bpf_task_work_schedule(task, tw, map__map, callback, aux, TWA_SIGNAL);
}
/**
- * bpf_task_work_schedule_resume_impl - Schedule BPF callback using task_work_add with TWA_RESUME
+ * bpf_task_work_schedule_resume - Schedule BPF callback using task_work_add with TWA_RESUME
* mode
* @task: Task struct for which callback should be scheduled
* @tw: Pointer to struct bpf_task_work in BPF map value for internal bookkeeping
* @map__map: bpf_map that embeds struct bpf_task_work in the values
* @callback: pointer to BPF subprogram to call
- * @aux__prog: user should pass NULL
+ * @aux: pointer to bpf_prog_aux of the caller BPF program, implicitly set by the verifier
*
* Return: 0 if task work has been scheduled successfully, negative error code otherwise
*/
-__bpf_kfunc int bpf_task_work_schedule_resume_impl(struct task_struct *task,
- struct bpf_task_work *tw, void *map__map,
- bpf_task_work_callback_t callback,
- void *aux__prog)
+__bpf_kfunc int bpf_task_work_schedule_resume(struct task_struct *task, struct bpf_task_work *tw,
+ void *map__map, bpf_task_work_callback_t callback,
+ struct bpf_prog_aux *aux)
{
- return bpf_task_work_schedule(task, tw, map__map, callback, aux__prog, TWA_RESUME);
+ return bpf_task_work_schedule(task, tw, map__map, callback, aux, TWA_RESUME);
}
static int make_file_dynptr(struct file *file, u32 flags, bool may_sleep,
@@ -4488,7 +4485,7 @@ BTF_ID_FLAGS(func, bpf_dynptr_memset)
BTF_ID_FLAGS(func, bpf_modify_return_test_tp)
#endif
BTF_ID_FLAGS(func, bpf_wq_init)
-BTF_ID_FLAGS(func, bpf_wq_set_callback_impl)
+BTF_ID_FLAGS(func, bpf_wq_set_callback, KF_IMPLICIT_ARGS)
BTF_ID_FLAGS(func, bpf_wq_start)
BTF_ID_FLAGS(func, bpf_preempt_disable)
BTF_ID_FLAGS(func, bpf_preempt_enable)
@@ -4536,9 +4533,9 @@ BTF_ID_FLAGS(func, bpf_strncasestr);
#if defined(CONFIG_BPF_LSM) && defined(CONFIG_CGROUPS)
BTF_ID_FLAGS(func, bpf_cgroup_read_xattr, KF_RCU)
#endif
-BTF_ID_FLAGS(func, bpf_stream_vprintk_impl)
-BTF_ID_FLAGS(func, bpf_task_work_schedule_signal_impl)
-BTF_ID_FLAGS(func, bpf_task_work_schedule_resume_impl)
+BTF_ID_FLAGS(func, bpf_stream_vprintk, KF_IMPLICIT_ARGS)
+BTF_ID_FLAGS(func, bpf_task_work_schedule_signal, KF_IMPLICIT_ARGS)
+BTF_ID_FLAGS(func, bpf_task_work_schedule_resume, KF_IMPLICIT_ARGS)
BTF_ID_FLAGS(func, bpf_dynptr_from_file)
BTF_ID_FLAGS(func, bpf_dynptr_file_discard)
BTF_KFUNCS_END(common_btf_ids)
diff --git a/kernel/bpf/stream.c b/kernel/bpf/stream.c
index 0b6bc3f30335..24730df55e69 100644
--- a/kernel/bpf/stream.c
+++ b/kernel/bpf/stream.c
@@ -212,14 +212,13 @@ __bpf_kfunc_start_defs();
* Avoid using enum bpf_stream_id so that kfunc users don't have to pull in the
* enum in headers.
*/
-__bpf_kfunc int bpf_stream_vprintk_impl(int stream_id, const char *fmt__str, const void *args,
- u32 len__sz, void *aux__prog)
+__bpf_kfunc int bpf_stream_vprintk(int stream_id, const char *fmt__str, const void *args,
+ u32 len__sz, struct bpf_prog_aux *aux)
{
struct bpf_bprintf_data data = {
.get_bin_args = true,
.get_buf = true,
};
- struct bpf_prog_aux *aux = aux__prog;
u32 fmt_size = strlen(fmt__str) + 1;
struct bpf_stream *stream;
u32 data_len = len__sz;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b67d8981b058..919556614505 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -294,6 +294,14 @@ struct bpf_call_arg_meta {
s64 const_map_key;
};
+struct bpf_kfunc_meta {
+ struct btf *btf;
+ const struct btf_type *proto;
+ const char *name;
+ const u32 *flags;
+ s32 id;
+};
+
struct bpf_kfunc_call_arg_meta {
/* In parameters */
struct btf *btf;
@@ -512,7 +520,7 @@ static bool is_async_callback_calling_kfunc(u32 btf_id);
static bool is_callback_calling_kfunc(u32 btf_id);
static bool is_bpf_throw_kfunc(struct bpf_insn *insn);
-static bool is_bpf_wq_set_callback_impl_kfunc(u32 btf_id);
+static bool is_bpf_wq_set_callback_kfunc(u32 btf_id);
static bool is_task_work_add_kfunc(u32 func_id);
static bool is_sync_callback_calling_function(enum bpf_func_id func_id)
@@ -554,7 +562,7 @@ static bool is_async_cb_sleepable(struct bpf_verifier_env *env, struct bpf_insn
/* bpf_wq and bpf_task_work callbacks are always sleepable. */
if (bpf_pseudo_kfunc_call(insn) && insn->off == 0 &&
- (is_bpf_wq_set_callback_impl_kfunc(insn->imm) || is_task_work_add_kfunc(insn->imm)))
+ (is_bpf_wq_set_callback_kfunc(insn->imm) || is_task_work_add_kfunc(insn->imm)))
return true;
verifier_bug(env, "unhandled async callback in is_async_cb_sleepable");
@@ -3263,16 +3271,105 @@ static struct btf *find_kfunc_desc_btf(struct bpf_verifier_env *env, s16 offset)
return btf_vmlinux ?: ERR_PTR(-ENOENT);
}
-static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
+#define KF_IMPL_SUFFIX "_impl"
+
+static const struct btf_type *find_kfunc_impl_proto(struct bpf_verifier_env *env,
+ struct btf *btf,
+ const char *func_name)
+{
+ char *buf = env->tmp_str_buf;
+ const struct btf_type *func;
+ s32 impl_id;
+ int len;
+
+ len = snprintf(buf, TMP_STR_BUF_LEN, "%s%s", func_name, KF_IMPL_SUFFIX);
+ if (len < 0 || len >= TMP_STR_BUF_LEN) {
+ verbose(env, "function name %s%s is too long\n", func_name, KF_IMPL_SUFFIX);
+ return NULL;
+ }
+
+ impl_id = btf_find_by_name_kind(btf, buf, BTF_KIND_FUNC);
+ if (impl_id <= 0) {
+ verbose(env, "cannot find function %s in BTF\n", buf);
+ return NULL;
+ }
+
+ func = btf_type_by_id(btf, impl_id);
+
+ return btf_type_by_id(btf, func->type);
+}
+
+static int fetch_kfunc_meta(struct bpf_verifier_env *env,
+ s32 func_id,
+ s16 offset,
+ struct bpf_kfunc_meta *kfunc)
{
const struct btf_type *func, *func_proto;
+ const char *func_name;
+ u32 *kfunc_flags;
+ struct btf *btf;
+
+ if (func_id <= 0) {
+ verbose(env, "invalid kernel function btf_id %d\n", func_id);
+ return -EINVAL;
+ }
+
+ btf = find_kfunc_desc_btf(env, offset);
+ if (IS_ERR(btf)) {
+ verbose(env, "failed to find BTF for kernel function\n");
+ return PTR_ERR(btf);
+ }
+
+ /*
+ * Note that kfunc_flags may be NULL at this point, which
+ * means that we couldn't find func_id in any relevant
+ * kfunc_id_set. This most likely indicates an invalid kfunc
+ * call. However we don't fail with an error here,
+ * and let the caller decide what to do with NULL kfunc->flags.
+ */
+ kfunc_flags = btf_kfunc_flags(btf, func_id, env->prog);
+
+ func = btf_type_by_id(btf, func_id);
+ if (!func || !btf_type_is_func(func)) {
+ verbose(env, "kernel btf_id %d is not a function\n", func_id);
+ return -EINVAL;
+ }
+
+ func_name = btf_name_by_offset(btf, func->name_off);
+
+ /*
+ * An actual prototype of a kfunc with KF_IMPLICIT_ARGS flag
+ * can be found through the counterpart _impl kfunc.
+ */
+ if (kfunc_flags && (*kfunc_flags & KF_IMPLICIT_ARGS))
+ func_proto = find_kfunc_impl_proto(env, btf, func_name);
+ else
+ func_proto = btf_type_by_id(btf, func->type);
+
+ if (!func_proto || !btf_type_is_func_proto(func_proto)) {
+ verbose(env, "kernel function btf_id %d does not have a valid func_proto\n",
+ func_id);
+ return -EINVAL;
+ }
+
+ memset(kfunc, 0, sizeof(*kfunc));
+ kfunc->btf = btf;
+ kfunc->id = func_id;
+ kfunc->name = func_name;
+ kfunc->proto = func_proto;
+ kfunc->flags = kfunc_flags;
+
+ return 0;
+}
+
+static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
+{
struct bpf_kfunc_btf_tab *btf_tab;
struct btf_func_model func_model;
struct bpf_kfunc_desc_tab *tab;
struct bpf_prog_aux *prog_aux;
+ struct bpf_kfunc_meta kfunc;
struct bpf_kfunc_desc *desc;
- const char *func_name;
- struct btf *desc_btf;
unsigned long addr;
int err;
@@ -3322,12 +3419,6 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
prog_aux->kfunc_btf_tab = btf_tab;
}
- desc_btf = find_kfunc_desc_btf(env, offset);
- if (IS_ERR(desc_btf)) {
- verbose(env, "failed to find BTF for kernel function\n");
- return PTR_ERR(desc_btf);
- }
-
if (find_kfunc_desc(env->prog, func_id, offset))
return 0;
@@ -3336,24 +3427,13 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
return -E2BIG;
}
- func = btf_type_by_id(desc_btf, func_id);
- if (!func || !btf_type_is_func(func)) {
- verbose(env, "kernel btf_id %u is not a function\n",
- func_id);
- return -EINVAL;
- }
- func_proto = btf_type_by_id(desc_btf, func->type);
- if (!func_proto || !btf_type_is_func_proto(func_proto)) {
- verbose(env, "kernel function btf_id %u does not have a valid func_proto\n",
- func_id);
- return -EINVAL;
- }
+ err = fetch_kfunc_meta(env, func_id, offset, &kfunc);
+ if (err)
+ return err;
- func_name = btf_name_by_offset(desc_btf, func->name_off);
- addr = kallsyms_lookup_name(func_name);
+ addr = kallsyms_lookup_name(kfunc.name);
if (!addr) {
- verbose(env, "cannot find address for kernel function %s\n",
- func_name);
+ verbose(env, "cannot find address for kernel function %s\n", kfunc.name);
return -EINVAL;
}
@@ -3363,9 +3443,7 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
return err;
}
- err = btf_distill_func_proto(&env->log, desc_btf,
- func_proto, func_name,
- &func_model);
+ err = btf_distill_func_proto(&env->log, kfunc.btf, kfunc.proto, kfunc.name, &func_model);
if (err)
return err;
@@ -12133,11 +12211,6 @@ static bool is_kfunc_arg_irq_flag(const struct btf *btf, const struct btf_param
return btf_param_match_suffix(btf, arg, "__irq_flag");
}
-static bool is_kfunc_arg_prog(const struct btf *btf, const struct btf_param *arg)
-{
- return btf_param_match_suffix(btf, arg, "__prog");
-}
-
static bool is_kfunc_arg_scalar_with_name(const struct btf *btf,
const struct btf_param *arg,
const char *name)
@@ -12166,6 +12239,7 @@ enum {
KF_ARG_WORKQUEUE_ID,
KF_ARG_RES_SPIN_LOCK_ID,
KF_ARG_TASK_WORK_ID,
+ KF_ARG_PROG_AUX_ID
};
BTF_ID_LIST(kf_arg_btf_ids)
@@ -12177,6 +12251,7 @@ BTF_ID(struct, bpf_rb_node)
BTF_ID(struct, bpf_wq)
BTF_ID(struct, bpf_res_spin_lock)
BTF_ID(struct, bpf_task_work)
+BTF_ID(struct, bpf_prog_aux)
static bool __is_kfunc_ptr_arg_type(const struct btf *btf,
const struct btf_param *arg, int type)
@@ -12257,6 +12332,11 @@ static bool is_kfunc_arg_callback(struct bpf_verifier_env *env, const struct btf
return true;
}
+static bool is_kfunc_arg_prog_aux(const struct btf *btf, const struct btf_param *arg)
+{
+ return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_PROG_AUX_ID);
+}
+
/* Returns true if struct is composed of scalars, 4 levels of nesting allowed */
static bool __btf_type_is_scalar_struct(struct bpf_verifier_env *env,
const struct btf *btf,
@@ -12350,7 +12430,7 @@ enum special_kfunc_type {
KF_bpf_percpu_obj_new_impl,
KF_bpf_percpu_obj_drop_impl,
KF_bpf_throw,
- KF_bpf_wq_set_callback_impl,
+ KF_bpf_wq_set_callback,
KF_bpf_preempt_disable,
KF_bpf_preempt_enable,
KF_bpf_iter_css_task_new,
@@ -12370,8 +12450,8 @@ enum special_kfunc_type {
KF_bpf_dynptr_from_file,
KF_bpf_dynptr_file_discard,
KF___bpf_trap,
- KF_bpf_task_work_schedule_signal_impl,
- KF_bpf_task_work_schedule_resume_impl,
+ KF_bpf_task_work_schedule_signal,
+ KF_bpf_task_work_schedule_resume,
KF_bpf_arena_alloc_pages,
KF_bpf_arena_free_pages,
KF_bpf_arena_reserve_pages,
@@ -12414,7 +12494,7 @@ BTF_ID(func, bpf_dynptr_clone)
BTF_ID(func, bpf_percpu_obj_new_impl)
BTF_ID(func, bpf_percpu_obj_drop_impl)
BTF_ID(func, bpf_throw)
-BTF_ID(func, bpf_wq_set_callback_impl)
+BTF_ID(func, bpf_wq_set_callback)
BTF_ID(func, bpf_preempt_disable)
BTF_ID(func, bpf_preempt_enable)
#ifdef CONFIG_CGROUPS
@@ -12447,16 +12527,16 @@ BTF_ID(func, bpf_res_spin_unlock_irqrestore)
BTF_ID(func, bpf_dynptr_from_file)
BTF_ID(func, bpf_dynptr_file_discard)
BTF_ID(func, __bpf_trap)
-BTF_ID(func, bpf_task_work_schedule_signal_impl)
-BTF_ID(func, bpf_task_work_schedule_resume_impl)
+BTF_ID(func, bpf_task_work_schedule_signal)
+BTF_ID(func, bpf_task_work_schedule_resume)
BTF_ID(func, bpf_arena_alloc_pages)
BTF_ID(func, bpf_arena_free_pages)
BTF_ID(func, bpf_arena_reserve_pages)
static bool is_task_work_add_kfunc(u32 func_id)
{
- return func_id == special_kfunc_list[KF_bpf_task_work_schedule_signal_impl] ||
- func_id == special_kfunc_list[KF_bpf_task_work_schedule_resume_impl];
+ return func_id == special_kfunc_list[KF_bpf_task_work_schedule_signal] ||
+ func_id == special_kfunc_list[KF_bpf_task_work_schedule_resume];
}
static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
@@ -12907,7 +12987,7 @@ static bool is_sync_callback_calling_kfunc(u32 btf_id)
static bool is_async_callback_calling_kfunc(u32 btf_id)
{
- return btf_id == special_kfunc_list[KF_bpf_wq_set_callback_impl] ||
+ return is_bpf_wq_set_callback_kfunc(btf_id) ||
is_task_work_add_kfunc(btf_id);
}
@@ -12917,9 +12997,9 @@ static bool is_bpf_throw_kfunc(struct bpf_insn *insn)
insn->imm == special_kfunc_list[KF_bpf_throw];
}
-static bool is_bpf_wq_set_callback_impl_kfunc(u32 btf_id)
+static bool is_bpf_wq_set_callback_kfunc(u32 btf_id)
{
- return btf_id == special_kfunc_list[KF_bpf_wq_set_callback_impl];
+ return btf_id == special_kfunc_list[KF_bpf_wq_set_callback];
}
static bool is_callback_calling_kfunc(u32 btf_id)
@@ -13193,8 +13273,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if (is_kfunc_arg_ignore(btf, &args[i]))
continue;
- if (is_kfunc_arg_prog(btf, &args[i])) {
- /* Used to reject repeated use of __prog. */
+ if (is_kfunc_arg_prog_aux(btf, &args[i])) {
+ /* Reject repeated use of bpf_prog_aux */
if (meta->arg_prog) {
verifier_bug(env, "Only 1 prog->aux argument supported per-kfunc");
return -EFAULT;
@@ -13696,44 +13776,28 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
return 0;
}
-static int fetch_kfunc_meta(struct bpf_verifier_env *env,
- struct bpf_insn *insn,
- struct bpf_kfunc_call_arg_meta *meta,
- const char **kfunc_name)
+static int fetch_kfunc_arg_meta(struct bpf_verifier_env *env,
+ s32 func_id,
+ s16 offset,
+ struct bpf_kfunc_call_arg_meta *meta)
{
- const struct btf_type *func, *func_proto;
- u32 func_id, *kfunc_flags;
- const char *func_name;
- struct btf *desc_btf;
-
- if (kfunc_name)
- *kfunc_name = NULL;
+ struct bpf_kfunc_meta kfunc;
+ int err;
- if (!insn->imm)
- return -EINVAL;
+ err = fetch_kfunc_meta(env, func_id, offset, &kfunc);
+ if (err)
+ return err;
- desc_btf = find_kfunc_desc_btf(env, insn->off);
- if (IS_ERR(desc_btf))
- return PTR_ERR(desc_btf);
+ memset(meta, 0, sizeof(*meta));
+ meta->btf = kfunc.btf;
+ meta->func_id = kfunc.id;
+ meta->func_proto = kfunc.proto;
+ meta->func_name = kfunc.name;
- func_id = insn->imm;
- func = btf_type_by_id(desc_btf, func_id);
- func_name = btf_name_by_offset(desc_btf, func->name_off);
- if (kfunc_name)
- *kfunc_name = func_name;
- func_proto = btf_type_by_id(desc_btf, func->type);
-
- kfunc_flags = btf_kfunc_id_set_contains(desc_btf, func_id, env->prog);
- if (!kfunc_flags) {
+ if (!kfunc.flags || !btf_kfunc_is_allowed(kfunc.btf, kfunc.id, env->prog))
return -EACCES;
- }
- memset(meta, 0, sizeof(*meta));
- meta->btf = desc_btf;
- meta->func_id = func_id;
- meta->kfunc_flags = *kfunc_flags;
- meta->func_proto = func_proto;
- meta->func_name = func_name;
+ meta->kfunc_flags = *kfunc.flags;
return 0;
}
@@ -13938,12 +14002,13 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
if (!insn->imm)
return 0;
- err = fetch_kfunc_meta(env, insn, &meta, &func_name);
- if (err == -EACCES && func_name)
- verbose(env, "calling kernel function %s is not allowed\n", func_name);
+ err = fetch_kfunc_arg_meta(env, insn->imm, insn->off, &meta);
+ if (err == -EACCES && meta.func_name)
+ verbose(env, "calling kernel function %s is not allowed\n", meta.func_name);
if (err)
return err;
desc_btf = meta.btf;
+ func_name = meta.func_name;
insn_aux = &env->insn_aux_data[insn_idx];
insn_aux->is_iter_next = is_iter_next_kfunc(&meta);
@@ -14013,7 +14078,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
meta.r0_rdonly = false;
}
- if (is_bpf_wq_set_callback_impl_kfunc(meta.func_id)) {
+ if (is_bpf_wq_set_callback_kfunc(meta.func_id)) {
err = push_callback_call(env, insn, insn_idx, meta.subprogno,
set_timer_callback_state);
if (err) {
@@ -14151,8 +14216,12 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
}
}
- for (i = 0; i < CALLER_SAVED_REGS; i++)
- mark_reg_not_init(env, regs, caller_saved[i]);
+ for (i = 0; i < CALLER_SAVED_REGS; i++) {
+ u32 regno = caller_saved[i];
+
+ mark_reg_not_init(env, regs, regno);
+ regs[regno].subreg_def = DEF_NOT_SUBREG;
+ }
/* Check return type */
t = btf_type_skip_modifiers(desc_btf, meta.func_proto->type, NULL);
@@ -17789,7 +17858,7 @@ static bool get_call_summary(struct bpf_verifier_env *env, struct bpf_insn *call
if (bpf_pseudo_kfunc_call(call)) {
int err;
- err = fetch_kfunc_meta(env, call, &meta, NULL);
+ err = fetch_kfunc_arg_meta(env, call->imm, call->off, &meta);
if (err < 0)
/* error would be reported later */
return false;
@@ -18297,7 +18366,7 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
} else if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
struct bpf_kfunc_call_arg_meta meta;
- ret = fetch_kfunc_meta(env, insn, &meta, NULL);
+ ret = fetch_kfunc_arg_meta(env, insn->imm, insn->off, &meta);
if (ret == 0 && is_iter_next_kfunc(&meta)) {
mark_prune_point(env, t);
/* Checking and saving state checkpoints at iter_next() call
diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
index 343d08050116..db8d1554bdcc 100644
--- a/tools/bpf/resolve_btfids/main.c
+++ b/tools/bpf/resolve_btfids/main.c
@@ -152,6 +152,25 @@ struct object {
int nr_typedefs;
};
+#define KF_IMPLICIT_ARGS (1 << 16)
+#define KF_IMPL_SUFFIX "_impl"
+
+struct kfunc {
+ const char *name;
+ u32 btf_id;
+ u32 flags;
+};
+
+struct btf2btf_context {
+ struct btf *btf;
+ u32 *decl_tags;
+ u32 nr_decl_tags;
+ u32 max_decl_tags;
+ struct kfunc *kfuncs;
+ u32 nr_kfuncs;
+ u32 max_kfuncs;
+};
+
static int verbose;
static int warnings;
@@ -563,19 +582,6 @@ static int load_btf(struct object *obj)
obj->base_btf = base_btf;
obj->btf = btf;
- if (obj->base_btf && obj->distill_base) {
- err = btf__distill_base(obj->btf, &base_btf, &btf);
- if (err) {
- pr_err("FAILED to distill base BTF: %s\n", strerror(errno));
- goto out_err;
- }
-
- btf__free(obj->base_btf);
- btf__free(obj->btf);
- obj->base_btf = base_btf;
- obj->btf = btf;
- }
-
return 0;
out_err:
@@ -850,6 +856,366 @@ static int dump_raw_btf(struct btf *btf, const char *out_path)
return 0;
}
+static const struct btf_type *btf_type_skip_qualifiers(const struct btf *btf, s32 type_id)
+{
+ const struct btf_type *t = btf__type_by_id(btf, type_id);
+
+ while (btf_is_mod(t))
+ t = btf__type_by_id(btf, t->type);
+
+ return t;
+}
+
+static int push_decl_tag_id(struct btf2btf_context *ctx, u32 decl_tag_id)
+{
+ u32 *arr = ctx->decl_tags;
+ u32 cap = ctx->max_decl_tags;
+
+ if (ctx->nr_decl_tags + 1 > cap) {
+ cap = max(cap + 256, cap * 2);
+ arr = realloc(arr, sizeof(u32) * cap);
+ if (!arr)
+ return -ENOMEM;
+ ctx->max_decl_tags = cap;
+ ctx->decl_tags = arr;
+ }
+
+ ctx->decl_tags[ctx->nr_decl_tags++] = decl_tag_id;
+
+ return 0;
+}
+
+static int push_kfunc(struct btf2btf_context *ctx, struct kfunc *kfunc)
+{
+ struct kfunc *arr = ctx->kfuncs;
+ u32 cap = ctx->max_kfuncs;
+
+ if (ctx->nr_kfuncs + 1 > cap) {
+ cap = max(cap + 256, cap * 2);
+ arr = realloc(arr, sizeof(struct kfunc) * cap);
+ if (!arr)
+ return -ENOMEM;
+ ctx->max_kfuncs = cap;
+ ctx->kfuncs = arr;
+ }
+
+ ctx->kfuncs[ctx->nr_kfuncs++] = *kfunc;
+
+ return 0;
+}
+
+static int collect_decl_tags(struct btf2btf_context *ctx)
+{
+ const u32 type_cnt = btf__type_cnt(ctx->btf);
+ struct btf *btf = ctx->btf;
+ const struct btf_type *t;
+ int err;
+
+ for (u32 id = 1; id < type_cnt; id++) {
+ t = btf__type_by_id(btf, id);
+ if (!btf_is_decl_tag(t))
+ continue;
+ err = push_decl_tag_id(ctx, id);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+/*
+ * To find the flags of a kfunc given its struct btf_id (which carries
+ * ELF addresses), look for an address that falls within the range of a
+ * set8. If one is found, the flags are located at addr + 4 bytes.
+ * Return 0 (no flags!) if no set8 contains the address.
+ */
+static u32 find_kfunc_flags(struct object *obj, struct btf_id *kfunc_id)
+{
+ const u32 *elf_data_ptr = obj->efile.idlist->d_buf;
+ u64 set_lower_addr, set_upper_addr, addr;
+ struct btf_id *set_id;
+ struct rb_node *next;
+ u32 flags;
+ u64 idx;
+
+ for (next = rb_first(&obj->sets); next; next = rb_next(next)) {
+ set_id = rb_entry(next, struct btf_id, rb_node);
+ if (set_id->kind != BTF_ID_KIND_SET8 || set_id->addr_cnt != 1)
+ continue;
+
+ set_lower_addr = set_id->addr[0];
+ set_upper_addr = set_lower_addr + set_id->cnt * sizeof(u64);
+
+ for (u32 i = 0; i < kfunc_id->addr_cnt; i++) {
+ addr = kfunc_id->addr[i];
+ /*
+ * Lower bound is exclusive to skip the 8-byte header of the set.
+ * Upper bound is inclusive to capture the last entry at offset 8*cnt.
+ */
+ if (set_lower_addr < addr && addr <= set_upper_addr) {
+ pr_debug("found kfunc %s in BTF_ID_FLAGS %s\n",
+ kfunc_id->name, set_id->name);
+ idx = addr - obj->efile.idlist_addr;
+ idx = idx / sizeof(u32) + 1;
+ flags = elf_data_ptr[idx];
+
+ return flags;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int collect_kfuncs(struct object *obj, struct btf2btf_context *ctx)
+{
+ const char *tag_name, *func_name;
+ struct btf *btf = ctx->btf;
+ const struct btf_type *t;
+ u32 flags, func_id;
+ struct kfunc kfunc;
+ struct btf_id *id;
+ int err;
+
+ if (ctx->nr_decl_tags == 0)
+ return 0;
+
+ for (u32 i = 0; i < ctx->nr_decl_tags; i++) {
+ t = btf__type_by_id(btf, ctx->decl_tags[i]);
+ if (btf_kflag(t) || btf_decl_tag(t)->component_idx != -1)
+ continue;
+
+ tag_name = btf__name_by_offset(btf, t->name_off);
+ if (strcmp(tag_name, "bpf_kfunc") != 0)
+ continue;
+
+ func_id = t->type;
+ t = btf__type_by_id(btf, func_id);
+ if (!btf_is_func(t))
+ continue;
+
+ func_name = btf__name_by_offset(btf, t->name_off);
+ if (!func_name)
+ continue;
+
+ id = btf_id__find(&obj->funcs, func_name);
+ if (!id || id->kind != BTF_ID_KIND_SYM)
+ continue;
+
+ flags = find_kfunc_flags(obj, id);
+
+ kfunc.name = id->name;
+ kfunc.btf_id = func_id;
+ kfunc.flags = flags;
+
+ err = push_kfunc(ctx, &kfunc);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int build_btf2btf_context(struct object *obj, struct btf2btf_context *ctx)
+{
+ int err;
+
+ ctx->btf = obj->btf;
+
+ err = collect_decl_tags(ctx);
+ if (err) {
+ pr_err("ERROR: resolve_btfids: failed to collect decl tags from BTF\n");
+ return err;
+ }
+
+ err = collect_kfuncs(obj, ctx);
+ if (err) {
+ pr_err("ERROR: resolve_btfids: failed to collect kfuncs from BTF\n");
+ return err;
+ }
+
+ return 0;
+}
+
+/* Implicit BPF kfunc arguments may only be pointers to particular struct types */
+static bool is_kf_implicit_arg(const struct btf *btf, const struct btf_param *p)
+{
+ static const char *const kf_implicit_arg_types[] = {
+ "bpf_prog_aux",
+ };
+ const struct btf_type *t;
+ const char *name;
+
+ t = btf_type_skip_qualifiers(btf, p->type);
+ if (!btf_is_ptr(t))
+ return false;
+
+ t = btf_type_skip_qualifiers(btf, t->type);
+ if (!btf_is_struct(t))
+ return false;
+
+ name = btf__name_by_offset(btf, t->name_off);
+ if (!name)
+ return false;
+
+ for (int i = 0; i < ARRAY_SIZE(kf_implicit_arg_types); i++)
+ if (strcmp(name, kf_implicit_arg_types[i]) == 0)
+ return true;
+
+ return false;
+}
+
+/*
+ * For a kfunc with KF_IMPLICIT_ARGS we do the following:
+ * 1. Add a new function with _impl suffix in the name, with the prototype
+ * of the original kfunc.
+ * 2. Add all decl tags except "bpf_kfunc" for the _impl func.
+ * 3. Add a new function prototype with modified list of arguments:
+ * omitting implicit args.
+ * 4. Change the prototype of the original kfunc to the new one.
+ *
+ * This way we transform the BTF associated with the kfunc from
+ * __bpf_kfunc bpf_foo(int arg1, void *implicit_arg);
+ * into
+ * bpf_foo_impl(int arg1, void *implicit_arg);
+ * __bpf_kfunc bpf_foo(int arg1);
+ *
+ * If a kfunc with KF_IMPLICIT_ARGS already has an _impl counterpart
+ * in BTF, then it's a legacy case: an _impl function is declared in the
+ * source code. In this case, we can skip adding an _impl function, but we
+ * still have to add a func prototype that omits implicit args.
+ */
+static int process_kfunc_with_implicit_args(struct btf2btf_context *ctx, struct kfunc *kfunc)
+{
+ s32 idx, new_proto_id, new_func_id, proto_id;
+ const char *param_name, *tag_name;
+ const struct btf_param *params;
+ enum btf_func_linkage linkage;
+ char tmp_name[KSYM_NAME_LEN];
+ struct btf *btf = ctx->btf;
+ int err, len, nr_params;
+ struct btf_type *t;
+
+ t = (struct btf_type *)btf__type_by_id(btf, kfunc->btf_id);
+ if (!t || !btf_is_func(t)) {
+ pr_err("ERROR: resolve_btfids: btf id %d is not a function\n", kfunc->btf_id);
+ return -EINVAL;
+ }
+
+ linkage = btf_vlen(t);
+
+ proto_id = t->type;
+ t = (struct btf_type *)btf__type_by_id(btf, proto_id);
+ if (!t || !btf_is_func_proto(t)) {
+ pr_err("ERROR: resolve_btfids: btf id %d is not a function prototype\n", proto_id);
+ return -EINVAL;
+ }
+
+ len = snprintf(tmp_name, sizeof(tmp_name), "%s%s", kfunc->name, KF_IMPL_SUFFIX);
+ if (len < 0 || len >= sizeof(tmp_name)) {
+ pr_err("ERROR: function name is too long: %s%s\n", kfunc->name, KF_IMPL_SUFFIX);
+ return -E2BIG;
+ }
+
+ if (btf__find_by_name_kind(btf, tmp_name, BTF_KIND_FUNC) > 0) {
+ pr_debug("resolve_btfids: function %s already exists in BTF\n", tmp_name);
+ goto add_new_proto;
+ }
+
+ /* Add a new function with _impl suffix and original prototype */
+ new_func_id = btf__add_func(btf, tmp_name, linkage, proto_id);
+ if (new_func_id < 0) {
+ pr_err("ERROR: resolve_btfids: failed to add func %s to BTF\n", tmp_name);
+ return new_func_id;
+ }
+
+ /* Copy all decl tags except "bpf_kfunc" from the original kfunc to the new one */
+ for (int i = 0; i < ctx->nr_decl_tags; i++) {
+ t = (struct btf_type *)btf__type_by_id(btf, ctx->decl_tags[i]);
+ if (t->type != kfunc->btf_id)
+ continue;
+
+ tag_name = btf__name_by_offset(btf, t->name_off);
+ if (strcmp(tag_name, "bpf_kfunc") == 0)
+ continue;
+
+ idx = btf_decl_tag(t)->component_idx;
+
+ if (btf_kflag(t))
+ err = btf__add_decl_attr(btf, tag_name, new_func_id, idx);
+ else
+ err = btf__add_decl_tag(btf, tag_name, new_func_id, idx);
+
+ if (err < 0) {
+ pr_err("ERROR: resolve_btfids: failed to add decl tag %s for %s\n",
+ tag_name, tmp_name);
+ return -EINVAL;
+ }
+ }
+
+add_new_proto:
+ t = (struct btf_type *)btf__type_by_id(btf, proto_id);
+ new_proto_id = btf__add_func_proto(btf, t->type);
+ if (new_proto_id < 0) {
+ pr_err("ERROR: resolve_btfids: failed to add func proto for %s\n", kfunc->name);
+ return new_proto_id;
+ }
+
+ /* Copy non-implicit args to the new prototype; implicit args must be trailing */
+ t = (struct btf_type *)btf__type_by_id(btf, proto_id);
+ nr_params = btf_vlen(t);
+ for (int i = 0; i < nr_params; i++) {
+ params = btf_params(t);
+ if (is_kf_implicit_arg(btf, &params[i]))
+ break;
+ param_name = btf__name_by_offset(btf, params[i].name_off);
+ err = btf__add_func_param(btf, param_name, params[i].type);
+ if (err < 0) {
+ pr_err("ERROR: resolve_btfids: failed to add param %s for %s\n",
+ param_name, kfunc->name);
+ return err;
+ }
+ t = (struct btf_type *)btf__type_by_id(btf, proto_id);
+ }
+
+ /* Finally change the prototype of the original kfunc to the new one */
+ t = (struct btf_type *)btf__type_by_id(btf, kfunc->btf_id);
+ t->type = new_proto_id;
+
+ pr_debug("resolve_btfids: updated BTF for kfunc with implicit args %s\n", kfunc->name);
+
+ return 0;
+}
+
+static int btf2btf(struct object *obj)
+{
+ struct btf2btf_context ctx = {};
+ int err;
+
+ err = build_btf2btf_context(obj, &ctx);
+ if (err)
+ goto out;
+
+ for (u32 i = 0; i < ctx.nr_kfuncs; i++) {
+ struct kfunc *kfunc = &ctx.kfuncs[i];
+
+ if (!(kfunc->flags & KF_IMPLICIT_ARGS))
+ continue;
+
+ err = process_kfunc_with_implicit_args(&ctx, kfunc);
+ if (err)
+ goto out;
+ }
+
+ err = 0;
+out:
+ free(ctx.decl_tags);
+ free(ctx.kfuncs);
+
+ return err;
+}
+
/*
* Sort types by name in ascending order resulting in all
* anonymous types being placed before named types.
@@ -911,6 +1277,41 @@ out:
return err;
}
+static int finalize_btf(struct object *obj)
+{
+ struct btf *base_btf = obj->base_btf, *btf = obj->btf;
+ int err;
+
+ if (obj->base_btf && obj->distill_base) {
+ err = btf__distill_base(obj->btf, &base_btf, &btf);
+ if (err) {
+ pr_err("FAILED to distill base BTF: %s\n", strerror(errno));
+ goto out_err;
+ }
+
+ btf__free(obj->base_btf);
+ btf__free(obj->btf);
+ obj->base_btf = base_btf;
+ obj->btf = btf;
+ }
+
+ err = sort_btf_by_name(obj->btf);
+ if (err) {
+ pr_err("FAILED to sort BTF: %s\n", strerror(errno));
+ goto out_err;
+ }
+
+ return 0;
+
+out_err:
+ btf__free(base_btf);
+ btf__free(btf);
+ obj->base_btf = NULL;
+ obj->btf = NULL;
+
+ return err;
+}
+
static inline int make_out_path(char *buf, u32 buf_sz, const char *in_path, const char *suffix)
{
int len = snprintf(buf, buf_sz, "%s%s", in_path, suffix);
@@ -1054,6 +1455,7 @@ int main(int argc, const char **argv)
};
const char *btfids_path = NULL;
bool fatal_warnings = false;
+ bool resolve_btfids = true;
char out_path[PATH_MAX];
struct option btfid_options[] = {
@@ -1083,12 +1485,6 @@ int main(int argc, const char **argv)
if (btfids_path)
return patch_btfids(btfids_path, obj.path);
- if (load_btf(&obj))
- goto out;
-
- if (sort_btf_by_name(obj.btf))
- goto out;
-
if (elf_collect(&obj))
goto out;
@@ -1099,12 +1495,25 @@ int main(int argc, const char **argv)
if (obj.efile.idlist_shndx == -1 ||
obj.efile.symbols_shndx == -1) {
pr_debug("Cannot find .BTF_ids or symbols sections, skip symbols resolution\n");
- goto dump_btf;
+ resolve_btfids = false;
}
- if (symbols_collect(&obj))
+ if (resolve_btfids)
+ if (symbols_collect(&obj))
+ goto out;
+
+ if (load_btf(&obj))
goto out;
+ if (btf2btf(&obj))
+ goto out;
+
+ if (finalize_btf(&obj))
+ goto out;
+
+ if (!resolve_btfids)
+ goto dump_btf;
+
if (symbols_resolve(&obj))
goto out;
diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
index d4e4e388e625..c145da05a67c 100644
--- a/tools/lib/bpf/bpf_helpers.h
+++ b/tools/lib/bpf/bpf_helpers.h
@@ -315,8 +315,8 @@ enum libbpf_tristate {
___param, sizeof(___param)); \
})
-extern int bpf_stream_vprintk_impl(int stream_id, const char *fmt__str, const void *args,
- __u32 len__sz, void *aux__prog) __weak __ksym;
+extern int bpf_stream_vprintk(int stream_id, const char *fmt__str, const void *args,
+ __u32 len__sz) __weak __ksym;
#define bpf_stream_printk(stream_id, fmt, args...) \
({ \
@@ -328,7 +328,7 @@ extern int bpf_stream_vprintk_impl(int stream_id, const char *fmt__str, const vo
___bpf_fill(___param, args); \
_Pragma("GCC diagnostic pop") \
\
- bpf_stream_vprintk_impl(stream_id, ___fmt, ___param, sizeof(___param), NULL); \
+ bpf_stream_vprintk(stream_id, ___fmt, ___param, sizeof(___param)); \
})
/* Use __bpf_printk when bpf_printk call has 3 or fewer fmt args
diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index 2cd9165c7348..68a49b1f77ae 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -580,11 +580,6 @@ extern void bpf_iter_css_destroy(struct bpf_iter_css *it) __weak __ksym;
extern int bpf_wq_init(struct bpf_wq *wq, void *p__map, unsigned int flags) __weak __ksym;
extern int bpf_wq_start(struct bpf_wq *wq, unsigned int flags) __weak __ksym;
-extern int bpf_wq_set_callback_impl(struct bpf_wq *wq,
- int (callback_fn)(void *map, int *key, void *value),
- unsigned int flags__k, void *aux__ign) __ksym;
-#define bpf_wq_set_callback(timer, cb, flags) \
- bpf_wq_set_callback_impl(timer, cb, flags, NULL)
struct bpf_iter_kmem_cache;
extern int bpf_iter_kmem_cache_new(struct bpf_iter_kmem_cache *it) __weak __ksym;
diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_implicit_args.c b/tools/testing/selftests/bpf/prog_tests/kfunc_implicit_args.c
new file mode 100644
index 000000000000..5e4793c9c29a
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/kfunc_implicit_args.c
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <test_progs.h>
+#include "kfunc_implicit_args.skel.h"
+
+void test_kfunc_implicit_args(void)
+{
+ RUN_TESTS(kfunc_implicit_args);
+}
diff --git a/tools/testing/selftests/bpf/progs/file_reader.c b/tools/testing/selftests/bpf/progs/file_reader.c
index 4d756b623557..462712ff3b8a 100644
--- a/tools/testing/selftests/bpf/progs/file_reader.c
+++ b/tools/testing/selftests/bpf/progs/file_reader.c
@@ -77,7 +77,7 @@ int on_open_validate_file_read(void *c)
err = 1;
return 0;
}
- bpf_task_work_schedule_signal_impl(task, &work->tw, &arrmap, task_work_callback, NULL);
+ bpf_task_work_schedule_signal(task, &work->tw, &arrmap, task_work_callback);
return 0;
}
diff --git a/tools/testing/selftests/bpf/progs/kfunc_implicit_args.c b/tools/testing/selftests/bpf/progs/kfunc_implicit_args.c
new file mode 100644
index 000000000000..89b6a47e22dd
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/kfunc_implicit_args.c
@@ -0,0 +1,41 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+extern int bpf_kfunc_implicit_arg(int a) __weak __ksym;
+extern int bpf_kfunc_implicit_arg_impl(int a, struct bpf_prog_aux *aux) __weak __ksym; /* illegal */
+extern int bpf_kfunc_implicit_arg_legacy(int a, int b) __weak __ksym;
+extern int bpf_kfunc_implicit_arg_legacy_impl(int a, int b, struct bpf_prog_aux *aux) __weak __ksym;
+
+char _license[] SEC("license") = "GPL";
+
+SEC("syscall")
+__retval(5)
+int test_kfunc_implicit_arg(void *ctx)
+{
+ return bpf_kfunc_implicit_arg(5);
+}
+
+SEC("syscall")
+__failure __msg("cannot find address for kernel function bpf_kfunc_implicit_arg_impl")
+int test_kfunc_implicit_arg_impl_illegal(void *ctx)
+{
+ return bpf_kfunc_implicit_arg_impl(5, NULL);
+}
+
+SEC("syscall")
+__retval(7)
+int test_kfunc_implicit_arg_legacy(void *ctx)
+{
+ return bpf_kfunc_implicit_arg_legacy(3, 4);
+}
+
+SEC("syscall")
+__retval(11)
+int test_kfunc_implicit_arg_legacy_impl(void *ctx)
+{
+ return bpf_kfunc_implicit_arg_legacy_impl(5, 6, NULL);
+}
diff --git a/tools/testing/selftests/bpf/progs/stream_fail.c b/tools/testing/selftests/bpf/progs/stream_fail.c
index 3662515f0107..8e8249f3521c 100644
--- a/tools/testing/selftests/bpf/progs/stream_fail.c
+++ b/tools/testing/selftests/bpf/progs/stream_fail.c
@@ -10,7 +10,7 @@ SEC("syscall")
__failure __msg("Possibly NULL pointer passed")
int stream_vprintk_null_arg(void *ctx)
{
- bpf_stream_vprintk_impl(BPF_STDOUT, "", NULL, 0, NULL);
+ bpf_stream_vprintk(BPF_STDOUT, "", NULL, 0);
return 0;
}
@@ -18,7 +18,7 @@ SEC("syscall")
__failure __msg("R3 type=scalar expected=")
int stream_vprintk_scalar_arg(void *ctx)
{
- bpf_stream_vprintk_impl(BPF_STDOUT, "", (void *)46, 0, NULL);
+ bpf_stream_vprintk(BPF_STDOUT, "", (void *)46, 0);
return 0;
}
@@ -26,7 +26,7 @@ SEC("syscall")
__failure __msg("arg#1 doesn't point to a const string")
int stream_vprintk_string_arg(void *ctx)
{
- bpf_stream_vprintk_impl(BPF_STDOUT, ctx, NULL, 0, NULL);
+ bpf_stream_vprintk(BPF_STDOUT, ctx, NULL, 0);
return 0;
}
diff --git a/tools/testing/selftests/bpf/progs/struct_ops_assoc.c b/tools/testing/selftests/bpf/progs/struct_ops_assoc.c
index 8f1097903e22..68842e3f936b 100644
--- a/tools/testing/selftests/bpf/progs/struct_ops_assoc.c
+++ b/tools/testing/selftests/bpf/progs/struct_ops_assoc.c
@@ -32,7 +32,7 @@ int BPF_PROG(sys_enter_prog_a, struct pt_regs *regs, long id)
if (!test_pid || task->pid != test_pid)
return 0;
- ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+ ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
if (ret != MAP_A_MAGIC)
test_err_a++;
@@ -45,7 +45,7 @@ int syscall_prog_a(void *ctx)
struct st_ops_args args = {};
int ret;
- ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+ ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
if (ret != MAP_A_MAGIC)
test_err_a++;
@@ -79,7 +79,7 @@ int BPF_PROG(sys_enter_prog_b, struct pt_regs *regs, long id)
if (!test_pid || task->pid != test_pid)
return 0;
- ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+ ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
if (ret != MAP_B_MAGIC)
test_err_b++;
@@ -92,7 +92,7 @@ int syscall_prog_b(void *ctx)
struct st_ops_args args = {};
int ret;
- ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+ ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
if (ret != MAP_B_MAGIC)
test_err_b++;
diff --git a/tools/testing/selftests/bpf/progs/struct_ops_assoc_in_timer.c b/tools/testing/selftests/bpf/progs/struct_ops_assoc_in_timer.c
index d5a2ea934284..0bed49e9f217 100644
--- a/tools/testing/selftests/bpf/progs/struct_ops_assoc_in_timer.c
+++ b/tools/testing/selftests/bpf/progs/struct_ops_assoc_in_timer.c
@@ -31,7 +31,7 @@ __noinline static int timer_cb(void *map, int *key, struct bpf_timer *timer)
struct st_ops_args args = {};
recur++;
- timer_test_1_ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+ timer_test_1_ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
recur--;
timer_cb_run++;
@@ -64,7 +64,7 @@ int syscall_prog(void *ctx)
struct st_ops_args args = {};
int ret;
- ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+ ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
if (ret != MAP_MAGIC)
test_err++;
diff --git a/tools/testing/selftests/bpf/progs/struct_ops_assoc_reuse.c b/tools/testing/selftests/bpf/progs/struct_ops_assoc_reuse.c
index 5bb6ebf5eed4..396b3e58c729 100644
--- a/tools/testing/selftests/bpf/progs/struct_ops_assoc_reuse.c
+++ b/tools/testing/selftests/bpf/progs/struct_ops_assoc_reuse.c
@@ -23,7 +23,7 @@ int BPF_PROG(test_1_a, struct st_ops_args *args)
if (!recur) {
recur++;
- ret = bpf_kfunc_multi_st_ops_test_1_impl(args, NULL);
+ ret = bpf_kfunc_multi_st_ops_test_1_assoc(args);
if (ret != -1)
test_err_a++;
recur--;
@@ -40,7 +40,7 @@ int syscall_prog_a(void *ctx)
struct st_ops_args args = {};
int ret;
- ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+ ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
if (ret != MAP_A_MAGIC)
test_err_a++;
@@ -62,7 +62,7 @@ int syscall_prog_b(void *ctx)
struct st_ops_args args = {};
int ret;
- ret = bpf_kfunc_multi_st_ops_test_1_impl(&args, NULL);
+ ret = bpf_kfunc_multi_st_ops_test_1_assoc(&args);
if (ret != MAP_A_MAGIC)
test_err_b++;
diff --git a/tools/testing/selftests/bpf/progs/task_work.c b/tools/testing/selftests/bpf/progs/task_work.c
index 663a80990f8f..a6009d105158 100644
--- a/tools/testing/selftests/bpf/progs/task_work.c
+++ b/tools/testing/selftests/bpf/progs/task_work.c
@@ -65,8 +65,7 @@ int oncpu_hash_map(struct pt_regs *args)
work = bpf_map_lookup_elem(&hmap, &key);
if (!work)
return 0;
-
- bpf_task_work_schedule_resume_impl(task, &work->tw, &hmap, process_work, NULL);
+ bpf_task_work_schedule_resume(task, &work->tw, &hmap, process_work);
return 0;
}
@@ -80,7 +79,7 @@ int oncpu_array_map(struct pt_regs *args)
work = bpf_map_lookup_elem(&arrmap, &key);
if (!work)
return 0;
- bpf_task_work_schedule_signal_impl(task, &work->tw, &arrmap, process_work, NULL);
+ bpf_task_work_schedule_signal(task, &work->tw, &arrmap, process_work);
return 0;
}
@@ -102,6 +101,6 @@ int oncpu_lru_map(struct pt_regs *args)
work = bpf_map_lookup_elem(&lrumap, &key);
if (!work || work->data[0])
return 0;
- bpf_task_work_schedule_resume_impl(task, &work->tw, &lrumap, process_work, NULL);
+ bpf_task_work_schedule_resume(task, &work->tw, &lrumap, process_work);
return 0;
}
diff --git a/tools/testing/selftests/bpf/progs/task_work_fail.c b/tools/testing/selftests/bpf/progs/task_work_fail.c
index 1270953fd092..82e4b8913333 100644
--- a/tools/testing/selftests/bpf/progs/task_work_fail.c
+++ b/tools/testing/selftests/bpf/progs/task_work_fail.c
@@ -53,7 +53,7 @@ int mismatch_map(struct pt_regs *args)
work = bpf_map_lookup_elem(&arrmap, &key);
if (!work)
return 0;
- bpf_task_work_schedule_resume_impl(task, &work->tw, &hmap, process_work, NULL);
+ bpf_task_work_schedule_resume(task, &work->tw, &hmap, process_work);
return 0;
}
@@ -65,7 +65,7 @@ int no_map_task_work(struct pt_regs *args)
struct bpf_task_work tw;
task = bpf_get_current_task_btf();
- bpf_task_work_schedule_resume_impl(task, &tw, &hmap, process_work, NULL);
+ bpf_task_work_schedule_resume(task, &tw, &hmap, process_work);
return 0;
}
@@ -76,7 +76,7 @@ int task_work_null(struct pt_regs *args)
struct task_struct *task;
task = bpf_get_current_task_btf();
- bpf_task_work_schedule_resume_impl(task, NULL, &hmap, process_work, NULL);
+ bpf_task_work_schedule_resume(task, NULL, &hmap, process_work);
return 0;
}
@@ -91,6 +91,6 @@ int map_null(struct pt_regs *args)
work = bpf_map_lookup_elem(&arrmap, &key);
if (!work)
return 0;
- bpf_task_work_schedule_resume_impl(task, &work->tw, NULL, process_work, NULL);
+ bpf_task_work_schedule_resume(task, &work->tw, NULL, process_work);
return 0;
}
diff --git a/tools/testing/selftests/bpf/progs/task_work_stress.c b/tools/testing/selftests/bpf/progs/task_work_stress.c
index 55e555f7f41b..1d4378f351ef 100644
--- a/tools/testing/selftests/bpf/progs/task_work_stress.c
+++ b/tools/testing/selftests/bpf/progs/task_work_stress.c
@@ -51,8 +51,8 @@ int schedule_task_work(void *ctx)
if (!work)
return 0;
}
- err = bpf_task_work_schedule_signal_impl(bpf_get_current_task_btf(), &work->tw, &hmap,
- process_work, NULL);
+ err = bpf_task_work_schedule_signal(bpf_get_current_task_btf(), &work->tw, &hmap,
+ process_work);
if (err)
__sync_fetch_and_add(&schedule_error, 1);
else
diff --git a/tools/testing/selftests/bpf/progs/verifier_async_cb_context.c b/tools/testing/selftests/bpf/progs/verifier_async_cb_context.c
index 7efa9521105e..39aff82549c9 100644
--- a/tools/testing/selftests/bpf/progs/verifier_async_cb_context.c
+++ b/tools/testing/selftests/bpf/progs/verifier_async_cb_context.c
@@ -96,7 +96,7 @@ int wq_non_sleepable_prog(void *ctx)
if (bpf_wq_init(&val->w, &wq_map, 0) != 0)
return 0;
- if (bpf_wq_set_callback_impl(&val->w, wq_cb, 0, NULL) != 0)
+ if (bpf_wq_set_callback(&val->w, wq_cb, 0) != 0)
return 0;
return 0;
}
@@ -114,7 +114,7 @@ int wq_sleepable_prog(void *ctx)
if (bpf_wq_init(&val->w, &wq_map, 0) != 0)
return 0;
- if (bpf_wq_set_callback_impl(&val->w, wq_cb, 0, NULL) != 0)
+ if (bpf_wq_set_callback(&val->w, wq_cb, 0) != 0)
return 0;
return 0;
}
@@ -156,7 +156,7 @@ int task_work_non_sleepable_prog(void *ctx)
if (!task)
return 0;
- bpf_task_work_schedule_resume_impl(task, &val->tw, &task_work_map, task_work_cb, NULL);
+ bpf_task_work_schedule_resume(task, &val->tw, &task_work_map, task_work_cb);
return 0;
}
@@ -176,6 +176,6 @@ int task_work_sleepable_prog(void *ctx)
if (!task)
return 0;
- bpf_task_work_schedule_resume_impl(task, &val->tw, &task_work_map, task_work_cb, NULL);
+ bpf_task_work_schedule_resume(task, &val->tw, &task_work_map, task_work_cb);
return 0;
}
diff --git a/tools/testing/selftests/bpf/progs/wq_failures.c b/tools/testing/selftests/bpf/progs/wq_failures.c
index d06f6d40594a..3767f5595bbc 100644
--- a/tools/testing/selftests/bpf/progs/wq_failures.c
+++ b/tools/testing/selftests/bpf/progs/wq_failures.c
@@ -97,7 +97,7 @@ __failure
/* check that the first argument of bpf_wq_set_callback()
* is a correct bpf_wq pointer.
*/
-__msg(": (85) call bpf_wq_set_callback_impl#") /* anchor message */
+__msg(": (85) call bpf_wq_set_callback#") /* anchor message */
__msg("arg#0 doesn't point to a map value")
long test_wrong_wq_pointer(void *ctx)
{
@@ -123,7 +123,7 @@ __failure
/* check that the first argument of bpf_wq_set_callback()
* is a correct bpf_wq pointer.
*/
-__msg(": (85) call bpf_wq_set_callback_impl#") /* anchor message */
+__msg(": (85) call bpf_wq_set_callback#") /* anchor message */
__msg("off 1 doesn't point to 'struct bpf_wq' that is at 0")
long test_wrong_wq_pointer_offset(void *ctx)
{
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index bc07ce9d5477..0d542ba64365 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -1140,7 +1140,11 @@ __bpf_kfunc int bpf_kfunc_st_ops_inc10(struct st_ops_args *args)
}
__bpf_kfunc int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id);
-__bpf_kfunc int bpf_kfunc_multi_st_ops_test_1_impl(struct st_ops_args *args, void *aux_prog);
+__bpf_kfunc int bpf_kfunc_multi_st_ops_test_1_assoc(struct st_ops_args *args, struct bpf_prog_aux *aux);
+
+__bpf_kfunc int bpf_kfunc_implicit_arg(int a, struct bpf_prog_aux *aux);
+__bpf_kfunc int bpf_kfunc_implicit_arg_legacy(int a, int b, struct bpf_prog_aux *aux);
+__bpf_kfunc int bpf_kfunc_implicit_arg_legacy_impl(int a, int b, struct bpf_prog_aux *aux);
BTF_KFUNCS_START(bpf_testmod_check_kfunc_ids)
BTF_ID_FLAGS(func, bpf_testmod_test_mod_kfunc)
@@ -1183,7 +1187,10 @@ BTF_ID_FLAGS(func, bpf_kfunc_st_ops_test_epilogue, KF_SLEEPABLE)
BTF_ID_FLAGS(func, bpf_kfunc_st_ops_test_pro_epilogue, KF_SLEEPABLE)
BTF_ID_FLAGS(func, bpf_kfunc_st_ops_inc10)
BTF_ID_FLAGS(func, bpf_kfunc_multi_st_ops_test_1)
-BTF_ID_FLAGS(func, bpf_kfunc_multi_st_ops_test_1_impl)
+BTF_ID_FLAGS(func, bpf_kfunc_multi_st_ops_test_1_assoc, KF_IMPLICIT_ARGS)
+BTF_ID_FLAGS(func, bpf_kfunc_implicit_arg, KF_IMPLICIT_ARGS)
+BTF_ID_FLAGS(func, bpf_kfunc_implicit_arg_legacy, KF_IMPLICIT_ARGS)
+BTF_ID_FLAGS(func, bpf_kfunc_implicit_arg_legacy_impl)
BTF_KFUNCS_END(bpf_testmod_check_kfunc_ids)
static int bpf_testmod_ops_init(struct btf *btf)
@@ -1662,19 +1669,37 @@ int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id)
}
/* Call test_1() of the associated struct_ops map */
-int bpf_kfunc_multi_st_ops_test_1_impl(struct st_ops_args *args, void *aux__prog)
+int bpf_kfunc_multi_st_ops_test_1_assoc(struct st_ops_args *args, struct bpf_prog_aux *aux)
{
- struct bpf_prog_aux *prog_aux = (struct bpf_prog_aux *)aux__prog;
struct bpf_testmod_multi_st_ops *st_ops;
int ret = -1;
- st_ops = (struct bpf_testmod_multi_st_ops *)bpf_prog_get_assoc_struct_ops(prog_aux);
+ st_ops = (struct bpf_testmod_multi_st_ops *)bpf_prog_get_assoc_struct_ops(aux);
if (st_ops)
ret = st_ops->test_1(args);
return ret;
}
+int bpf_kfunc_implicit_arg(int a, struct bpf_prog_aux *aux)
+{
+ if (aux && a > 0)
+ return a;
+ return -EINVAL;
+}
+
+int bpf_kfunc_implicit_arg_legacy(int a, int b, struct bpf_prog_aux *aux)
+{
+ if (aux)
+ return a + b;
+ return -EINVAL;
+}
+
+int bpf_kfunc_implicit_arg_legacy_impl(int a, int b, struct bpf_prog_aux *aux)
+{
+ return bpf_kfunc_implicit_arg_legacy(a, b, aux);
+}
+
static int multi_st_ops_reg(void *kdata, struct bpf_link *link)
{
struct bpf_testmod_multi_st_ops *st_ops =
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
index 2357a0340ffe..225ea30c4e3d 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
@@ -161,7 +161,9 @@ void bpf_kfunc_rcu_task_test(struct task_struct *ptr) __ksym;
struct task_struct *bpf_kfunc_ret_rcu_test(void) __ksym;
int *bpf_kfunc_ret_rcu_test_nostruct(int rdonly_buf_size) __ksym;
-int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id) __ksym;
-int bpf_kfunc_multi_st_ops_test_1_impl(struct st_ops_args *args, void *aux__prog) __ksym;
+#ifndef __KERNEL__
+extern int bpf_kfunc_multi_st_ops_test_1(struct st_ops_args *args, u32 id) __weak __ksym;
+extern int bpf_kfunc_multi_st_ops_test_1_assoc(struct st_ops_args *args) __weak __ksym;
+#endif
#endif /* _BPF_TESTMOD_KFUNC_H */
diff --git a/tools/testing/selftests/hid/progs/hid_bpf_helpers.h b/tools/testing/selftests/hid/progs/hid_bpf_helpers.h
index 531228b849da..80ab60905865 100644
--- a/tools/testing/selftests/hid/progs/hid_bpf_helpers.h
+++ b/tools/testing/selftests/hid/progs/hid_bpf_helpers.h
@@ -116,10 +116,8 @@ extern int hid_bpf_try_input_report(struct hid_bpf_ctx *ctx,
/* bpf_wq implementation */
extern int bpf_wq_init(struct bpf_wq *wq, void *p__map, unsigned int flags) __weak __ksym;
extern int bpf_wq_start(struct bpf_wq *wq, unsigned int flags) __weak __ksym;
-extern int bpf_wq_set_callback_impl(struct bpf_wq *wq,
- int (callback_fn)(void *map, int *key, void *wq),
- unsigned int flags__k, void *aux__ign) __weak __ksym;
-#define bpf_wq_set_callback(timer, cb, flags) \
- bpf_wq_set_callback_impl(timer, cb, flags, NULL)
+extern int bpf_wq_set_callback(struct bpf_wq *wq,
+ int (*callback_fn)(void *, int *, void *),
+ unsigned int flags) __weak __ksym;
#endif /* __HID_BPF_HELPERS_H */