| author | Jakub Kicinski <kuba@kernel.org> | 2023-04-24 18:45:11 -0700 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2023-04-24 18:45:12 -0700 |
| commit | ee3392ed16b064594a14ce5886e412efb05ed17b (patch) | |
| tree | 06bb0034c458545727699f043b91e0e058d45d68 /kernel | |
| parent | 9610a8dc0aaaf146630ff5566b5d8804fcd22d15 (diff) | |
| parent | be7dbd275dc6b911a5b9a22c4f9cb71b2c7fd847 (diff) | |
Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Alexei Starovoitov says:
====================
pull-request: bpf-next 2023-04-24
We've added 5 non-merge commits during the last 3 day(s) which contain
a total of 7 files changed, 87 insertions(+), 44 deletions(-).
The main changes are:
1) Workaround for bpf iter selftest due to lack of subprog support
in precision tracking, from Andrii.
2) Disable bpf_refcount_acquire kfunc until races are fixed, from Dave.
3) One more test_verifier test converted from asm macro to asm in C,
from Eduard.
4) Fix build with NETFILTER=y INET=n config, from Florian.
5) Add __rcu_read_{lock,unlock} into deny list, from Yafang.
* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next:
selftests/bpf: avoid mark_all_scalars_precise() trigger in one of iter tests
bpf: Add __rcu_read_{lock,unlock} into btf id deny list
bpf: Disable bpf_refcount_acquire kfunc calls until race conditions are fixed
selftests/bpf: verifier/prevent_map_lookup converted to inline assembly
bpf: fix link failure with NETFILTER=y INET=n
====================
Link: https://lore.kernel.org/r/20230425005648.86714-1-alexei.starovoitov@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'kernel')
| -rw-r--r-- | kernel/bpf/verifier.c | 9 |
|---|---|---|

1 file changed, 8 insertions(+), 1 deletion(-)
```diff
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0d73139ee4d8..fbcf5a4e2fcd 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -10509,7 +10509,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			verbose(env, "arg#%d doesn't point to a type with bpf_refcount field\n", i);
 			return -EINVAL;
 		}
-
+		if (rec->refcount_off >= 0) {
+			verbose(env, "bpf_refcount_acquire calls are disabled for now\n");
+			return -EINVAL;
+		}
 		meta->arg_refcount_acquire.btf = reg->btf;
 		meta->arg_refcount_acquire.btf_id = reg->btf_id;
 		break;
@@ -18668,6 +18671,10 @@ BTF_ID(func, rcu_read_unlock_strict)
 BTF_ID(func, preempt_count_add)
 BTF_ID(func, preempt_count_sub)
 #endif
+#ifdef CONFIG_PREEMPT_RCU
+BTF_ID(func, __rcu_read_lock)
+BTF_ID(func, __rcu_read_unlock)
+#endif
 BTF_SET_END(btf_id_deny)
 
 static bool can_be_sleepable(struct bpf_prog *prog)
```
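The first hunk is what implements change 2) above: any program that calls bpf_refcount_acquire() on an allocated object whose type contains a struct bpf_refcount field is now rejected during verification. The sketch below is a hypothetical minimal reproducer, not code from this series; it assumes the bpf_obj_new()/bpf_obj_drop()/bpf_refcount_acquire() declarations that the BPF selftests ship in bpf_experimental.h, and the struct layout is invented for illustration.

```c
// SPDX-License-Identifier: GPL-2.0
/* Hypothetical reproducer (assumed names, not from this commit). */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"	/* bpf_obj_new(), bpf_obj_drop(), bpf_refcount_acquire() */

/* Invented local type; the struct bpf_refcount member is what routes the
 * kfunc argument through the verifier path that is now disabled.
 */
struct node_data {
	long payload;
	struct bpf_refcount ref;
};

SEC("tc")
int acquire_refcounted(void *ctx)
{
	struct node_data *n, *m;

	n = bpf_obj_new(typeof(*n));
	if (!n)
		return 0;

	/* With this patch applied, verification fails on this call with
	 * "bpf_refcount_acquire calls are disabled for now".
	 */
	m = bpf_refcount_acquire(n);
	if (m)
		bpf_obj_drop(m);
	bpf_obj_drop(n);
	return 0;
}

char _license[] SEC("license") = "GPL";
```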

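The second hunk extends btf_id_deny, the set of kernel functions that tracing-style programs (fentry/fexit and friends) may not attach to; on CONFIG_PREEMPT_RCU kernels rcu_read_lock()/rcu_read_unlock() resolve to __rcu_read_lock()/__rcu_read_unlock(), so those two join the list. The sketch below is hypothetical and only illustrates the user-visible effect: with this patch, such a program should be refused with -EINVAL when the verifier checks the attach target.

```c
// SPDX-License-Identifier: GPL-2.0
/* Hypothetical fentry program (not from this series): on a kernel built
 * with CONFIG_PREEMPT_RCU and this patch, the verifier finds
 * __rcu_read_lock in btf_id_deny and rejects the program with -EINVAL.
 */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("fentry/__rcu_read_lock")
int BPF_PROG(trace_rcu_read_lock)
{
	/* Never executes; the attach target is on the deny list. */
	return 0;
}

char _license[] SEC("license") = "GPL";
```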