author		Sean Christopherson <seanjc@google.com>		2025-08-05 12:05:20 -0700
committer	Sean Christopherson <seanjc@google.com>		2025-08-19 11:59:39 -0700
commit		6b6f1adc43320494197ddbefbe933faea21e2d10 (patch)
tree		ed22180324c4960f2bc851d444fd684956a27601 /arch/x86/kvm/pmu.c
parent		5dfd498bad5f07b7419b5f834d95f7b61e767048 (diff)
KVM: x86/pmu: Rename pmc_speculative_in_use() to pmc_is_locally_enabled()
Rename pmc_speculative_in_use() to pmc_is_locally_enabled() to better
capture what it actually tracks, and to show its relationship to
pmc_is_globally_enabled(). While neither AMD nor Intel refers to event
selectors or the fixed counter control MSR as "local", it's the obvious
name to pair with "global".
As for "speculative", there's absolutely nothing speculative about the
checks. E.g. for PMUs without PERF_GLOBAL_CTRL, from the guest's
perspective, the counters are "in use" without any qualifications.
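To make the local/global split concrete, below is a minimal, self-contained
sketch of what a "locally enabled" check boils down to.  The struct and field
names are illustrative stand-ins, not KVM's actual kvm_pmc layout, and the
renamed helper's definition is not part of this diff:

  #include <stdbool.h>
  #include <stdint.h>

  /* Enable bit in a GP counter's event selector (ARCH_PERFMON_EVENTSEL_ENABLE). */
  #define EVENTSEL_ENABLE		(1ULL << 22)

  /* Illustrative stand-in for struct kvm_pmc; not the kernel's layout. */
  struct pmc_sketch {
  	bool		is_fixed;	/* fixed counter vs. general-purpose */
  	uint64_t	eventsel;	/* GP: the counter's event selector MSR */
  	uint64_t	fixed_ctrl;	/* fixed: this counter's field from FIXED_CTR_CTRL */
  };

  /*
   * "Locally" enabled: the counter's own control (event selector enable bit,
   * or its field in the fixed counter control MSR) turns it on, regardless
   * of whether a PERF_GLOBAL_CTRL-style MSR also enables it.
   */
  static bool pmc_is_locally_enabled_sketch(const struct pmc_sketch *pmc)
  {
  	if (pmc->is_fixed)
  		return pmc->fixed_ctrl & 0x3;	/* OS and/or USR enable bits */

  	return pmc->eventsel & EVENTSEL_ENABLE;
  }

The point of the rename is that this check consults only the counter's own
("local") controls; pmc_is_globally_enabled() separately consults the global
control MSR where one exists.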
No functional change intended.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20250805190526.1453366-13-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Diffstat (limited to 'arch/x86/kvm/pmu.c')
-rw-r--r--	arch/x86/kvm/pmu.c	6
1 file changed, 3 insertions, 3 deletions
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 8070097c5c4a..55b995c3ed08 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -493,7 +493,7 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 
 static bool pmc_event_is_allowed(struct kvm_pmc *pmc)
 {
-	return pmc_is_globally_enabled(pmc) && pmc_speculative_in_use(pmc) &&
+	return pmc_is_globally_enabled(pmc) && pmc_is_locally_enabled(pmc) &&
 	       check_pmu_event_filter(pmc);
 }
 
@@ -572,7 +572,7 @@ void kvm_pmu_recalc_pmc_emulation(struct kvm_pmu *pmu, struct kvm_pmc *pmc)
 	 * omitting a PMC from a bitmap could result in a missed event if the
 	 * filter is changed to allow counting the event.
	 */
-	if (!pmc_speculative_in_use(pmc))
+	if (!pmc_is_locally_enabled(pmc))
 		return;
 
 	if (pmc_is_event_match(pmc, kvm_pmu_eventsel.INSTRUCTIONS_RETIRED))
@@ -912,7 +912,7 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
 		      pmu->pmc_in_use, X86_PMC_IDX_MAX);
 
 	kvm_for_each_pmc(pmu, pmc, i, bitmask) {
-		if (pmc->perf_event && !pmc_speculative_in_use(pmc))
+		if (pmc->perf_event && !pmc_is_locally_enabled(pmc))
 			pmc_stop_counter(pmc);
 	}