| field | value | date |
|---|---|---|
| author | Alexei Starovoitov <ast@kernel.org> | 2026-01-02 14:31:59 -0800 |
| committer | Alexei Starovoitov <ast@kernel.org> | 2026-01-02 14:32:00 -0800 |
| commit | 7694ff8f6ca7f7cdf2e5636ecbe6dacaeb71678d (patch) | |
| tree | 018b2c54148636e944445795b87b9525a517ba28 /kernel/bpf/range_tree.c | |
| parent | e40030a46acc07bb956068e59c614f1a17459a18 (diff) | |
| parent | e66fe1bc6d25d6fbced99de6c377f1b3d961a80e (diff) | |
Merge branch 'memcg-accounting-for-bpf-arena'
Puranjay Mohan says:
====================
memcg accounting for BPF arena
v4: https://lore.kernel.org/all/20260102181333.3033679-1-puranjay@kernel.org/
Changes in v4->v5:
- Remove unused variables from bpf_map_alloc_pages() (CI)
v3: https://lore.kernel.org/all/20260102151852.570285-1-puranjay@kernel.org/
Changes in v3->v4:
- Do memcg set/recover in arena_reserve_pages() rather than
bpf_arena_reserve_pages() for symmetry with other kfuncs (Alexei)
v2: https://lore.kernel.org/all/20251231141434.3416822-1-puranjay@kernel.org/
Changes in v2->v3:
- Remove memcg accounting from bpf_map_alloc_pages() as the caller does
it already. (Alexei)
- Do memcg set/recover in arena_alloc/free_pages() rather than
bpf_arena_alloc/free_pages(), it reduces copy pasting in
sleepable/non_sleepable functions.
v1: https://lore.kernel.org/all/20251230153006.1347742-1-puranjay@kernel.org/
Changes in v1->v2:
- Return both pointers through arguments from bpf_map_memcg_enter and
make it return void. (Alexei)
- Add memcg accounting in arena_free_worker (AI)
This set adds memcg accounting logic into arena kfuncs and other places
that do allocations in arena.c.
====================
Link: https://patch.msgid.link/20260102200230.25168-1-puranjay@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Diffstat (limited to 'kernel/bpf/range_tree.c')
| -rw-r--r-- | kernel/bpf/range_tree.c | 5 |
|---|---|---|

1 file changed, 3 insertions(+), 2 deletions(-)
```diff
diff --git a/kernel/bpf/range_tree.c b/kernel/bpf/range_tree.c
index 99c63d982c5d..2f28886f3ff7 100644
--- a/kernel/bpf/range_tree.c
+++ b/kernel/bpf/range_tree.c
@@ -149,7 +149,8 @@ int range_tree_clear(struct range_tree *rt, u32 start, u32 len)
 	range_it_insert(rn, rt);
 
 	/* Add a range */
-	new_rn = kmalloc_nolock(sizeof(struct range_node), 0, NUMA_NO_NODE);
+	new_rn = kmalloc_nolock(sizeof(struct range_node), __GFP_ACCOUNT,
+				NUMA_NO_NODE);
 	if (!new_rn)
 		return -ENOMEM;
 	new_rn->rn_start = last + 1;
@@ -234,7 +235,7 @@ int range_tree_set(struct range_tree *rt, u32 start, u32 len)
 		right->rn_start = start;
 		range_it_insert(right, rt);
 	} else {
-		left = kmalloc_nolock(sizeof(struct range_node), 0, NUMA_NO_NODE);
+		left = kmalloc_nolock(sizeof(struct range_node), __GFP_ACCOUNT, NUMA_NO_NODE);
 		if (!left)
 			return -ENOMEM;
 		left->rn_start = start;
```
