| author | Paolo Bonzini <pbonzini@redhat.com> | 2022-12-06 12:29:06 -0500 |
|---|---|---|
| committer | Paolo Bonzini <pbonzini@redhat.com> | 2022-12-12 15:54:07 -0500 |
| commit | 9352e7470a1b4edd2fa9d235420ecc7bc3971bdc | |
| tree | 141ccdb777f2ee36764f2067ab43f23da114e08a /tools/perf/util/mmap.c | |
| parent | 2afc1fbbdab2aee831561f09f859989dcd5ed648 | |
| parent | 5656374b168c98377b6feee8d7500993eebda230 | |
Merge remote-tracking branch 'kvm/queue' into HEAD
x86 Xen-for-KVM:
* Allow the Xen runstate information to cross a page boundary
* Allow XEN_RUNSTATE_UPDATE flag behaviour to be configured
* Add support for 32-bit guests in SCHEDOP_poll
x86 fixes:
* One-off fixes for various emulation flows (SGX, VMXON, NRIPS=0).
* Reinstate the IBPB on emulated VM-Exit that was incorrectly dropped a few
years back while eliminating unnecessary barriers on switches between
vmcs01 and vmcs02.
* Clean up the MSR filter docs.
* Clean up vmread_error_trampoline() to make it more obvious that params
must be passed on the stack, even for x86-64.
* Let userspace set all supported bits in MSR_IA32_FEAT_CTL irrespective
of the current guest CPUID.
* Fudge around a race with TSC refinement that results in KVM incorrectly
thinking a guest needs TSC scaling when running on a CPU with a
constant TSC, but no hardware-enumerated TSC frequency.
* Advertise (on AMD) that the SMM_CTL MSR is not supported
* Remove unnecessary exports
Selftests:
* Fix an inverted check in the access tracking perf test, and restore
support for asserting that there aren't too many idle pages when
running on bare metal.
* Fix an ordering issue in the AMX test introduced by recent conversions
to use kvm_cpu_has(), and harden the code to guard against similar bugs
in the future. Anything that triggers caching of KVM's supported CPUID
(kvm_cpu_has() in this case) effectively hides opt-in XSAVE features if
the caching occurs before the test opts in via prctl() (see the ordering
sketch after this list).
* Fix build errors that occur in certain setups (unsure exactly what is
unique about the problematic setup) due to glibc overriding
static_assert() with a variant that requires a custom message (a sketch
of a workaround follows the list).
* Introduce actual atomics for clear/set_bit() in selftests
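
The AMX ordering fix above hinges on performing the XSAVE opt-in before anything snapshots KVM's supported CPUID. Below is a minimal, hedged sketch of that ordering; the ARCH_REQ_XCOMP_GUEST_PERM prctl command (0x1025) and the XTILEDATA feature number (18) are the x86 uapi values, but the flow, helper-free structure, and comments are illustrative rather than the selftest's actual code, and it only makes sense on x86-64.

```c
/* Illustrative sketch of the required ordering, not the AMX selftest itself. */
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

#ifndef ARCH_REQ_XCOMP_GUEST_PERM
#define ARCH_REQ_XCOMP_GUEST_PERM	0x1025	/* from asm/prctl.h */
#endif
#define XFEATURE_XTILEDATA		18	/* AMX tile data state component */

int main(void)
{
	/*
	 * Request the dynamically enabled AMX tile-data feature FIRST.
	 * Anything that caches KVM's supported CPUID (a kvm_cpu_has()-style
	 * query, for example) before this point would snapshot CPUID without
	 * XTILEDATA, and the test would wrongly conclude AMX is unsupported.
	 */
	if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_GUEST_PERM, XFEATURE_XTILEDATA)) {
		perror("arch_prctl(ARCH_REQ_XCOMP_GUEST_PERM)");
		return 1;
	}

	/* Only now is it safe to query and cache KVM_GET_SUPPORTED_CPUID. */
	printf("AMX guest permission requested; CPUID caching is now safe\n");
	return 0;
}
```

With this ordering, any later query that caches KVM's supported CPUID already sees the dynamically enabled AMX state.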
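
For the static_assert() breakage, one common workaround is a two-level variadic macro that supplies a default message when the caller omits one, so the assertion compiles whether or not glibc has redefined static_assert() as a direct alias of _Static_assert(). The macro names below are hypothetical and the selftests' actual helper may differ; this only sketches the technique.

```c
/*
 * Hypothetical wrapper: if no message is given, stringify the expression
 * and use that as the message; if a message is given, the trailing
 * stringified expression is swallowed by the inner macro's "...".
 */
#define my_static_assert2(expr, msg, ...)	_Static_assert(expr, msg)
#define my_static_assert(expr, ...)		my_static_assert2(expr, ##__VA_ARGS__, #expr)

/* Works with or without an explicit message: */
my_static_assert(sizeof(int) == 4);
my_static_assert(sizeof(long) >= sizeof(int), "long must be able to hold an int");

int main(void) { return 0; }
```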
Documentation:
* Remove deleted ioctls from documentation
* Various fixes
Diffstat (limited to 'tools/perf/util/mmap.c')
| -rw-r--r-- | tools/perf/util/mmap.c | 6 |
1 file changed, 3 insertions, 3 deletions
diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index a4dff881be39..49093b21ee2d 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -111,7 +111,7 @@ static int perf_mmap__aio_bind(struct mmap *map, int idx, struct perf_cpu cpu, i
 		pr_err("Failed to allocate node mask for mbind: error %m\n");
 		return -1;
 	}
-	set_bit(node_index, node_mask);
+	__set_bit(node_index, node_mask);
 	if (mbind(data, mmap_len, MPOL_BIND, node_mask, node_index + 1 + 1, 0)) {
 		pr_err("Failed to bind [%p-%p] AIO buffer to node %lu: error %m\n",
 			data, data + mmap_len, node_index);
@@ -256,7 +256,7 @@ static void build_node_mask(int node, struct mmap_cpu_mask *mask)
 	for (idx = 0; idx < nr_cpus; idx++) {
 		cpu = perf_cpu_map__cpu(cpu_map, idx); /* map c index to online cpu index */
 		if (cpu__get_node(cpu) == node)
-			set_bit(cpu.cpu, mask->bits);
+			__set_bit(cpu.cpu, mask->bits);
 	}
 }
 
@@ -270,7 +270,7 @@ static int perf_mmap__setup_affinity_mask(struct mmap *map, struct mmap_params *
 	if (mp->affinity == PERF_AFFINITY_NODE && cpu__max_node() > 1)
 		build_node_mask(cpu__get_node(map->core.cpu), &map->affinity_mask);
 	else if (mp->affinity == PERF_AFFINITY_CPU)
-		set_bit(map->core.cpu.cpu, map->affinity_mask.bits);
+		__set_bit(map->core.cpu.cpu, map->affinity_mask.bits);
 
 	return 0;
 }
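
The three hunks above are mechanical companions to the "Introduce actual atomics for clear/set_bit() in selftests" item: these perf call sites fill in masks that are not concurrently modified, so the plain, non-atomic __set_bit() suffices once set_bit() is reserved for a genuinely atomic helper. The sketch below is only a rough illustration of that semantic difference, not the tools/ implementation; the helper names and the use of the __atomic_fetch_or() builtin are assumptions for this example.

```c
/* Self-contained illustration of non-atomic vs. atomic bit set semantics. */
#include <stdio.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))

/* Non-atomic: plain read-modify-write, fine for data owned by one thread. */
static inline void example__set_bit(unsigned long nr, unsigned long *addr)
{
	addr[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

/* Atomic: needed only when several threads may set bits in the same word. */
static inline void example_set_bit_atomic(unsigned long nr, unsigned long *addr)
{
	__atomic_fetch_or(&addr[nr / BITS_PER_LONG],
			  1UL << (nr % BITS_PER_LONG), __ATOMIC_RELAXED);
}

int main(void)
{
	unsigned long mask[2] = { 0 };

	example__set_bit(3, mask);		/* locally built mask, no concurrency */
	example_set_bit_atomic(65, mask);	/* would also be safe cross-thread */
	printf("%lx %lx\n", mask[1], mask[0]);
	return 0;
}
```

The non-atomic form compiles to a simple OR and is cheaper; the atomic form matters only when multiple threads can touch the same word, which is not the case for the node and affinity masks built in mmap.c.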
