author     Andrii Nakryiko <andrii@kernel.org>   2025-12-05 16:17:58 -0800
committer  Andrii Nakryiko <andrii@kernel.org>   2025-12-05 16:17:59 -0800
commit     5d9fb42f05e5bea386e6648d5699c2beaabe1c6a (patch)
tree       0c9f7ea1f63bfa3a178c7d9383fefb6dff62c43d /include
parent     81f88f6ab674973d361b6d176aa4d3ebd32253ab (diff)
parent     0e841d19263ab6e1ca2b280109832f57624e48d1 (diff)
Merge branch 'support-associating-bpf-programs-with-struct_ops'
Amery Hung says:

====================
Support associating BPF programs with struct_ops

Hi,

This patchset adds a new BPF command, BPF_PROG_ASSOC_STRUCT_OPS, to the
bpf() syscall to allow associating a BPF program with a struct_ops. The
command is introduced to address an emerging need from struct_ops users.
As the number of subsystems adopting struct_ops grows, more users are
building their struct_ops-based solutions with help from other BPF
programs. For example, scx_layered uses a syscall program as a user space
trigger to refresh layers [0]. It also uses a tracing program to infer
whether a task is using the GPU and needs to be prioritized [1]. In these
use cases, when there are multiple struct_ops instances, the struct_ops
kfuncs called from different BPF programs, whether struct_ops programs or
not, need to be able to refer to a specific instance, which currently is
not possible.

The new BPF command allows users to explicitly associate a BPF program
with a struct_ops map. The libbpf wrapper can be called after loading
programs and before attaching programs and struct_ops. Internally, it
sets prog->aux->st_ops_assoc to the struct_ops map. struct_ops kfuncs can
then get the associated struct_ops struct by calling
bpf_prog_get_assoc_struct_ops() with prog->aux, which can be acquired
from a "__prog" argument. The value of this special argument is fixed up
by the verifier during verification.

The command conceptually associates the implementation of a BPF program
with a struct_ops map, not the attachment. A program associated with the
map takes a refcount on it so that st_ops_assoc always points to a valid
struct_ops struct. struct_ops implementers can use the helper
bpf_prog_get_assoc_struct_ops() to get the pointer. The returned
struct_ops, if not NULL, is guaranteed to be valid and initialized.
However, it is not guaranteed that the struct_ops is attached. The
struct_ops implementer still needs to take steps to track and check the
state of the struct_ops in kdata if the use case demands that the
struct_ops be attached.

We could also consider supporting association of a struct_ops link with
BPF programs, which on one hand would make the struct_ops implementer's
job easier, but might complicate the libbpf workflow and does not apply
to legacy struct_ops attachment.

[0] https://github.com/sched-ext/scx/blob/main/scheds/rust/scx_layered/src/bpf/main.bpf.c#L557
[1] https://github.com/sched-ext/scx/blob/main/scheds/rust/scx_layered/src/bpf/main.bpf.c#L754

---
v7 -> v8
- Fix libbpf return (Andrii)
- Follow kfunc _impl suffix naming convention in selftest (Alexei)
Link: https://lore.kernel.org/bpf/20251121231352.4032020-1-ameryhung@gmail.com/

v6 -> v7
- Drop the guarantee that bpf_prog_get_assoc_struct_ops() will always
  return an initialized struct_ops (Martin)
- Minor misc. changes in selftests
Link: https://lore.kernel.org/bpf/20251114221741.317631-1-ameryhung@gmail.com/

v5 -> v6
- Drop refcnt bumping for async callbacks and add RCU annotation (Martin)
- Fix libbpf bug and update comments (Andrii)
- Fix refcount bug in bpf_prog_assoc_struct_ops() (AI)
Link: https://lore.kernel.org/bpf/20251104172652.1746988-1-ameryhung@gmail.com/

v4 -> v5
- Simplify the API for getting the associated struct_ops and don't expose
  struct_ops map lifecycle management (Andrii, Alexei)
Link: https://lore.kernel.org/bpf/20251024212914.1474337-1-ameryhung@gmail.com/

v3 -> v4
- Fix potential dangling pointer in timer callback. Protect st_ops_assoc
  with RCU. The get helper now needs to be paired with bpf_struct_ops_put()
- The command should only increase the refcount once for a program (Andrii)
- Test a struct_ops program reused in two struct_ops maps
- Test getting the associated struct_ops in a timer callback
Link: https://lore.kernel.org/bpf/20251017215627.722338-1-ameryhung@gmail.com/

v2 -> v3
- Change the type of st_ops_assoc from void * (i.e., kdata) to bpf_map (Andrii)
- Fix a bug that clears BPF_PTR_POISON when a struct_ops map is freed (Andrii)
- Return NULL if the map is not fully initialized (Martin)
- Move struct_ops map refcount inc/dec into internal helpers (Martin)
- Add libbpf API, bpf_program__assoc_struct_ops (Andrii)
Link: https://lore.kernel.org/bpf/20251016204503.3203690-1-ameryhung@gmail.com/

v1 -> v2
- Poison st_ops_assoc when reusing the program in more than one
  struct_ops map and add a helper to access the pointer (Andrii)
- Minor style and naming changes (Andrii)
Link: https://lore.kernel.org/bpf/20251010174953.2884682-1-ameryhung@gmail.com/
---
====================

Link: https://patch.msgid.link/20251203233748.668365-1-ameryhung@gmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
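For illustration, the user-space flow described above (associate after load,
before attach) might look roughly like the sketch below, using the libbpf
wrapper bpf_program__assoc_struct_ops() named in the v2 -> v3 changelog. The
skeleton ("my_sched"), program, and map names are made up, and the wrapper's
exact signature (notably whether it takes a trailing opts argument) should be
checked against the libbpf headers; this is a sketch under those assumptions.

	/* Hypothetical skeleton and object names; illustrative only. */
	#include <bpf/libbpf.h>
	#include "my_sched.skel.h"

	int setup(void)
	{
		struct my_sched *skel;
		int err;

		skel = my_sched__open_and_load();
		if (!skel)
			return -1;

		/*
		 * Associate the tracing program with one specific struct_ops
		 * map so its struct_ops kfunc calls resolve to that instance.
		 * Signature assumed: (prog, struct_ops map, opts); NULL opts.
		 */
		err = bpf_program__assoc_struct_ops(skel->progs.on_task_event,
						    skel->maps.layer_ops, NULL);
		if (err)
			goto out;

		err = my_sched__attach(skel);
	out:
		if (err)
			my_sched__destroy(skel);
		return err;
	}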
Diffstat (limited to 'include')
-rw-r--r--   include/linux/bpf.h        16
-rw-r--r--   include/uapi/linux/bpf.h   17
2 files changed, 33 insertions, 0 deletions
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 6498be4c44f8..28d8d6b7bb1e 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1739,6 +1739,8 @@ struct bpf_prog_aux {
struct rcu_head rcu;
};
struct bpf_stream stream[2];
+ struct mutex st_ops_assoc_mutex;
+ struct bpf_map __rcu *st_ops_assoc;
};
struct bpf_prog {
@@ -2041,6 +2043,9 @@ static inline void bpf_module_put(const void *data, struct module *owner)
module_put(owner);
}
int bpf_struct_ops_link_create(union bpf_attr *attr);
+int bpf_prog_assoc_struct_ops(struct bpf_prog *prog, struct bpf_map *map);
+void bpf_prog_disassoc_struct_ops(struct bpf_prog *prog);
+void *bpf_prog_get_assoc_struct_ops(const struct bpf_prog_aux *aux);
u32 bpf_struct_ops_id(const void *kdata);
#ifdef CONFIG_NET
@@ -2088,6 +2093,17 @@ static inline int bpf_struct_ops_link_create(union bpf_attr *attr)
{
return -EOPNOTSUPP;
}
+static inline int bpf_prog_assoc_struct_ops(struct bpf_prog *prog, struct bpf_map *map)
+{
+ return -EOPNOTSUPP;
+}
+static inline void bpf_prog_disassoc_struct_ops(struct bpf_prog *prog)
+{
+}
+static inline void *bpf_prog_get_assoc_struct_ops(const struct bpf_prog_aux *aux)
+{
+ return NULL;
+}
static inline void bpf_map_struct_ops_info_fill(struct bpf_map_info *info, struct bpf_map *map)
{
}
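As a sketch of how a struct_ops implementer might consume the helper declared
above: the subsystem (struct my_st_ops), the kfunc name, and its arguments are
purely illustrative, and it assumes the "__prog"-suffixed argument convention
described in the cover letter, where the verifier rewrites that argument to the
calling program's bpf_prog_aux.

	/* Hypothetical subsystem-defined struct_ops type; illustrative only. */
	struct my_st_ops {
		u64 some_setting;
		int (*do_something)(u64 arg);
	};

	__bpf_kfunc u64 bpf_my_subsys_get_setting(void *aux__prog)
	{
		struct bpf_prog_aux *aux = aux__prog;
		struct my_st_ops *ops;

		/*
		 * Resolve the struct_ops instance the calling program was
		 * associated with via BPF_PROG_ASSOC_STRUCT_OPS; NULL means
		 * there is no usable association.
		 */
		ops = bpf_prog_get_assoc_struct_ops(aux);
		if (!ops)
			return 0;

		/*
		 * The backing map stays valid while the program is loaded,
		 * but whether this instance is currently attached must still
		 * be tracked by the implementer itself.
		 */
		return ops->some_setting;
	}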
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index f8d8513eda27..84ced3ed2d21 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -918,6 +918,16 @@ union bpf_iter_link_info {
* Number of bytes read from the stream on success, or -1 if an
* error occurred (in which case, *errno* is set appropriately).
*
+ * BPF_PROG_ASSOC_STRUCT_OPS
+ * Description
+ * Associate a BPF program with a struct_ops map. The struct_ops
+ * map is identified by *map_fd* and the BPF program is
+ * identified by *prog_fd*.
+ *
+ * Return
+ * 0 on success or -1 if an error occurred (in which case,
+ * *errno* is set appropriately).
+ *
* NOTES
* eBPF objects (maps and programs) can be shared between processes.
*
@@ -974,6 +984,7 @@ enum bpf_cmd {
BPF_PROG_BIND_MAP,
BPF_TOKEN_CREATE,
BPF_PROG_STREAM_READ_BY_FD,
+ BPF_PROG_ASSOC_STRUCT_OPS,
__MAX_BPF_CMD,
};
@@ -1894,6 +1905,12 @@ union bpf_attr {
__u32 prog_fd;
} prog_stream_read;
+ struct {
+ __u32 map_fd;
+ __u32 prog_fd;
+ __u32 flags;
+ } prog_assoc_struct_ops;
+
} __attribute__((aligned(8)));
/* The description below is an attempt at providing documentation to eBPF
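Finally, a minimal sketch of driving the new command through the raw bpf()
syscall, using only the UAPI additions above (map_fd, prog_fd, flags). It
assumes kernel headers that already carry BPF_PROG_ASSOC_STRUCT_OPS and elides
how the program and map file descriptors are obtained.

	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/bpf.h>

	/*
	 * Associate an already-loaded BPF program with a struct_ops map.
	 * Returns 0 on success, -1 with errno set on failure.
	 */
	static int prog_assoc_struct_ops(int prog_fd, int map_fd)
	{
		union bpf_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.prog_assoc_struct_ops.map_fd = map_fd;
		attr.prog_assoc_struct_ops.prog_fd = prog_fd;
		attr.prog_assoc_struct_ops.flags = 0;	/* no flags defined yet */

		return syscall(SYS_bpf, BPF_PROG_ASSOC_STRUCT_OPS, &attr, sizeof(attr));
	}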