author     Linus Torvalds <torvalds@linux-foundation.org>  2025-12-03 16:54:54 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>  2025-12-03 16:54:54 -0800
commit     015e7b0b0e8e51f7321ec2aafc1d7fc0a8a5536f
tree       258f719e59946c733dd03198eba404e85f9d0945 /tools/testing/selftests/bpf/prog_tests/perf_branches.c
parent     b6d993310a65b994f37e3347419d9ed398ee37a3
parent     ff34657aa72a4dab9c2fd38e1b31a506951f4b1c
Merge tag 'bpf-next-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Pull bpf updates from Alexei Starovoitov:
- Convert selftests/bpf/test_tc_edt and test_tc_tunnel from .sh to
test_progs runner (Alexis Lothoré)
- Convert selftests/bpf/test_xsk to test_progs runner (Bastien
Curutchet)
- Replace bpf memory allocator with kmalloc_nolock() in
bpf_local_storage (Amery Hung), and in bpf streams and range tree
(Puranjay Mohan)
- Introduce support for indirect jumps in BPF verifier and x86 JIT
(Anton Protopopov) and arm64 JIT (Puranjay Mohan)
- Remove runqslower bpf tool (Hoyeon Lee)
- Fix corner cases in the verifier to close several syzbot reports
(Eduard Zingerman, KaFai Wan)
- Several improvements in deadlock detection in rqspinlock (Kumar
Kartikeya Dwivedi)
- Implement "jmp" mode for BPF trampoline and corresponding
DYNAMIC_FTRACE_WITH_JMP. It improves "fexit" program type performance
from 80 M/s to 136 M/s, with Steven Rostedt's ack (Menglong Dong)
- Add ability to test non-linear skbs in BPF_PROG_TEST_RUN (Paul
Chaignon)
- Do not let BPF_PROG_TEST_RUN emit invalid GSO types to stack (Daniel
Borkmann)
- Generalize buildid reader into bpf_dynptr (Mykyta Yatsenko)
- Optimize bpf_map_update_elem() for map-in-map types (Ritesh
  Oedayrajsingh Varma); a sketch of this call path follows the commit
  list below
- Introduce overwrite mode for BPF ring buffer (Xu Kuohai)
* tag 'bpf-next-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (169 commits)
bpf: optimize bpf_map_update_elem() for map-in-map types
bpf: make kprobe_multi_link_prog_run always_inline
selftests/bpf: do not hardcode target rate in test_tc_edt BPF program
selftests/bpf: remove test_tc_edt.sh
selftests/bpf: integrate test_tc_edt into test_progs
selftests/bpf: rename test_tc_edt.bpf.c section to expose program type
selftests/bpf: Add success stats to rqspinlock stress test
rqspinlock: Precede non-head waiter queueing with AA check
rqspinlock: Disable spinning for trylock fallback
rqspinlock: Use trylock fallback when per-CPU rqnode is busy
rqspinlock: Perform AA checks immediately
rqspinlock: Enclose lock/unlock within lock entry acquisitions
bpf: Remove runqslower tool
selftests/bpf: Remove usage of lsm/file_alloc_security in selftest
bpf: Disable file_alloc_security hook
bpf: check for insn arrays in check_ptr_alignment
bpf: force BPF_F_RDONLY_PROG on insn array creation
bpf: Fix exclusive map memory leak
selftests/bpf: Make CS length configurable for rqspinlock stress test
selftests/bpf: Add lock wait time stats to rqspinlock stress test
...
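
As a companion to the bpf_map_update_elem() item in the summary above, here is a minimal userspace sketch of the map-in-map update call that work targets. It uses only documented libbpf APIs, but the map types, names, and error handling are illustrative assumptions, not taken from the patch:

#include <unistd.h>
#include <bpf/bpf.h>

/* Swap in a fresh inner map at one slot of an outer map (e.g. a
 * BPF_MAP_TYPE_ARRAY_OF_MAPS).  For map-in-map types the update
 * "value" is the inner map's file descriptor, which the kernel
 * resolves to a bpf_map reference.
 */
static int swap_inner_map(int outer_fd, int key)
{
	int inner_fd, err;

	/* Hypothetical inner map: one int -> long slot */
	inner_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "inner",
				  sizeof(int), sizeof(long), 1, NULL);
	if (inner_fd < 0)
		return inner_fd;

	err = bpf_map_update_elem(outer_fd, &key, &inner_fd, BPF_ANY);
	close(inner_fd);	/* outer map keeps its own reference on success */
	return err;
}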
Diffstat (limited to 'tools/testing/selftests/bpf/prog_tests/perf_branches.c')
-rw-r--r--  tools/testing/selftests/bpf/prog_tests/perf_branches.c | 22 +++++++++++++++++-----
1 file changed, 17 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/perf_branches.c b/tools/testing/selftests/bpf/prog_tests/perf_branches.c
index bc24f83339d6..0a7ef770c487 100644
--- a/tools/testing/selftests/bpf/prog_tests/perf_branches.c
+++ b/tools/testing/selftests/bpf/prog_tests/perf_branches.c
@@ -15,6 +15,10 @@ static void check_good_sample(struct test_perf_branches *skel)
 	int pbe_size = sizeof(struct perf_branch_entry);
 	int duration = 0;
 
+	if (CHECK(!skel->bss->run_cnt, "invalid run_cnt",
+		  "checked sample validity before prog run"))
+		return;
+
 	if (CHECK(!skel->bss->valid, "output not valid",
 		  "no valid sample from prog"))
 		return;
@@ -45,6 +49,10 @@ static void check_bad_sample(struct test_perf_branches *skel)
 	int written_stack = skel->bss->written_stack_out;
 	int duration = 0;
 
+	if (CHECK(!skel->bss->run_cnt, "invalid run_cnt",
+		  "checked sample validity before prog run"))
+		return;
+
 	if (CHECK(!skel->bss->valid, "output not valid",
 		  "no valid sample from prog"))
 		return;
@@ -83,8 +91,12 @@ static void test_perf_branches_common(int perf_fd,
 	err = pthread_setaffinity_np(pthread_self(), sizeof(cpu_set), &cpu_set);
 	if (CHECK(err, "set_affinity", "cpu #0, err %d\n", err))
 		goto out_destroy;
-	/* spin the loop for a while (random high number) */
-	for (i = 0; i < 1000000; ++i)
+
+	/* Spin the loop for a while by using a high iteration count, and by
+	 * checking whether the specific run count marker has been explicitly
+	 * incremented at least once by the backing perf_event BPF program.
+	 */
+	for (i = 0; i < 100000000 && !*(volatile int *)&skel->bss->run_cnt; ++i)
 		++j;
 
 	test_perf_branches__detach(skel);
@@ -116,11 +128,11 @@ static void test_perf_branches_hw(void)
 	pfd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, PERF_FLAG_FD_CLOEXEC);
 
 	/*
-	 * Some setups don't support branch records (virtual machines, !x86),
-	 * so skip test in this case.
+	 * Some setups don't support LBR (virtual machines, !x86, AMD Milan Zen
+	 * 3 which only supports BRS), so skip test in this case.
 	 */
 	if (pfd < 0) {
-		if (errno == ENOENT || errno == EOPNOTSUPP) {
+		if (errno == ENOENT || errno == EOPNOTSUPP || errno == EINVAL) {
 			printf("%s:SKIP:no PERF_SAMPLE_BRANCH_STACK\n", __func__);
 			test__skip();
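
The rewritten spin loop above illustrates a general selftest pattern: rather than spinning for a fixed iteration count and hoping the sampling event fired, userspace polls a skeleton global (run_cnt) through a volatile access, so the compiler cannot hoist the load out of the loop, and stops as soon as the attached program has run at least once. Below is a minimal sketch of the BPF side of such a counter; the program name, section, and return value are illustrative assumptions, not lifted from progs/test_perf_branches.c:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

int run_cnt = 0;	/* lands in .bss; read as skel->bss->run_cnt */

SEC("perf_event")
int perf_branches(void *ctx)
{
	/* Bump the marker once per sample so the userspace spin loop
	 * can terminate early; the atomic add avoids lost updates if
	 * the event fires on several CPUs.
	 */
	__sync_fetch_and_add(&run_cnt, 1);
	return 0;
}

char _license[] SEC("license") = "GPL";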
