author    Kumar Kartikeya Dwivedi <memxor@gmail.com>  2026-02-04 16:38:53 -0800
committer Alexei Starovoitov <ast@kernel.org>  2026-02-04 18:14:26 -0800
commit    5000a097f82c7695b7760c5b67c95f0eab4d209b (patch)
tree      303cdfc5cc1121aed84878e6c6c9b8b4e0ffb286 /kernel
parent    81502d7f20bf862b706f5174979bed88d3ab82b3 (diff)
bpf: Reset prog callback in bpf_async_cancel_and_free()
Replace the prog and callback pointers in bpf_async_cb after removing the
visibility of bpf_async_cb in bpf_async_cancel_and_free(). This increases
the chances that already scheduled async callbacks short-circuit execution
and exit early, without starting an RCU tasks trace read section. This
improves the overall time spent running the wq selftest.

Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20260205003853.527571-3-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/bpf/helpers.c | 1 +
 1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index b7aec34540c2..a4f039cee88b 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1664,6 +1664,7 @@ static void bpf_async_cancel_and_free(struct bpf_async_kern *async)
if (!cb)
return;
+ bpf_async_update_prog_callback(cb, NULL, NULL);
/*
* No refcount_inc_not_zero(&cb->refcnt) here. Dropping the last
* refcnt. Either synchronously or asynchronously in irq_work.