path: root/arch/x86/events/intel
2026-03-12  perf/x86/intel: Fix OMR snoop information parsing issues  (Dapeng Mi)
When omr_source is 0x2, the omr_snoop (bit[6]) and omr_promoted (bit[7]) fields are combined to represent the snoop information. However, the omr_promoted field was not left-shifted by 1 bit, resulting in incorrect snoop information. Besides, the snoop information parsing is not accurate for some OMR sources; for example, the snoop information should be SNOOP_NONE for these memory accesses (omr_source >= 7) instead of SNOOP_HIT. Fix these issues. Closes: https://lore.kernel.org/all/CAP-5=fW4zLWFw1v38zCzB9-cseNSTTCtup=p2SDxZq7dPayVww@mail.gmail.com/ Fixes: d2bdcde9626c ("perf/x86/intel: Add support for PEBS memory auxiliary info field in DMR") Reported-by: Ian Rogers <irogers@google.com> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://patch.msgid.link/20260311075201.2951073-1-dapeng1.mi@linux.intel.com
2026-03-12  perf/x86/intel: Add missing branch counters constraint apply  (Dapeng Mi)
When running the command: 'perf record -e "{instructions,instructions:p}" -j any,counter sleep 1', a "shift-out-of-bounds" warning is reported on CWF. UBSAN: shift-out-of-bounds in /kbuild/src/consumer/arch/x86/events/intel/lbr.c:970:15 shift exponent 64 is too large for 64-bit type 'long long unsigned int' ...... intel_pmu_lbr_counters_reorder.isra.0.cold+0x2a/0xa7 intel_pmu_lbr_save_brstack+0xc0/0x4c0 setup_arch_pebs_sample_data+0x114b/0x2400 The warning occurs because the second "instructions:p" event, which involves branch counters sampling, is incorrectly programmed on fixed counter 0 instead of the general-purpose (GP) counters 0-3 that support branch counters sampling. Currently only GP counters 0-3 support branch counters sampling on CWF, so any event involving branch counters sampling should be programmed on GP counters 0-3. Since the counter index of fixed counter 0 is 32, the "src" value in the code below is right-shifted by 64 bits, which triggers the "shift-out-of-bounds" warning. cnt = (src >> (order[j] * LBR_INFO_BR_CNTR_BITS)) & LBR_INFO_BR_CNTR_MASK; The root cause is the loss of the branch counters constraint for the new event in the branch counters sampling event group, since it isn't yet part of the sibling list. This results in the second "instructions:p" event being incorrectly programmed on fixed counter 0 instead of the appropriate GP counters 0-3. To address this, we apply the missing branch counters constraint for the last event in the group. Additionally, we introduce a new function, `intel_set_branch_counter_constr()`, to apply the branch counters constraint and avoid code duplication. Fixes: 33744916196b ("perf/x86/intel: Support branch counters logging") Reported-by: Xudong Hao <xudong.hao@intel.com> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20260228053320.140406-2-dapeng1.mi@linux.intel.com Cc: stable@vger.kernel.org
2026-02-23  perf/x86/intel/uncore: Add per-scheduler IMC CAS count events  (Zide Chen)
IMC on SPR and EMR does not support sub-channels. In contrast, CPUs that use gnr_uncores[] (e.g. Granite Rapids and Sierra Forest) implement two command schedulers (SCH0/SCH1) per memory channel, providing logically independent command and data paths. Do not reuse the spr_uncore_imc[] configuration for these CPUs. Instead, introduce a dedicated gnr_uncore_imc[] with per-scheduler events, so userspace can monitor SCH0 and SCH1 independently. On these CPUs, replace cas_count_{read,write} with cas_count_{read,write}_sch{0,1}. This may break existing userspace that relies on cas_count_{read,write}, prompting it to switch to the per-scheduler events, as the legacy event reports only partial traffic (SCH0). Fixes: 632c4bf6d007 ("perf/x86/intel/uncore: Support Granite Rapids") Fixes: cb4a6ccf3583 ("perf/x86/intel/uncore: Support Sierra Forest and Grand Ridge") Reported-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20260210005225.20311-1-zide.chen@intel.com
2026-02-22  Convert remaining multi-line kmalloc_obj/flex GFP_KERNEL uses  (Kees Cook)
Conversion performed via this Coccinelle script: // SPDX-License-Identifier: GPL-2.0-only // Options: --include-headers-for-types --all-includes --include-headers --keep-comments virtual patch @gfp depends on patch && !(file in "tools") && !(file in "samples")@ identifier ALLOC = {kmalloc_obj,kmalloc_objs,kmalloc_flex, kzalloc_obj,kzalloc_objs,kzalloc_flex, kvmalloc_obj,kvmalloc_objs,kvmalloc_flex, kvzalloc_obj,kvzalloc_objs,kvzalloc_flex}; @@ ALLOC(... - , GFP_KERNEL ) $ make coccicheck MODE=patch COCCI=gfp.cocci Build and boot tested x86_64 with Fedora 42's GCC and Clang: Linux version 6.19.0+ (user@host) (gcc (GCC) 15.2.1 20260123 (Red Hat 15.2.1-7), GNU ld version 2.44-12.fc42) #1 SMP PREEMPT_DYNAMIC 1970-01-01 Linux version 6.19.0+ (user@host) (clang version 20.1.8 (Fedora 20.1.8-4.fc42), LLD 20.1.8) #1 SMP PREEMPT_DYNAMIC 1970-01-01 Signed-off-by: Kees Cook <kees@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2026-02-21  Convert more 'alloc_obj' cases to default GFP_KERNEL arguments  (Linus Torvalds)
This converts some of the visually simpler cases that have been split over multiple lines. I only did the ones that are easy to verify the resulting diff by having just that final GFP_KERNEL argument on the next line. Somebody should probably do a proper coccinelle script for this, but for me the trivial script actually resulted in an assertion failure in the middle of the script. I probably had made it a bit _too_ trivial. So after fighting that for a while I decided to just do some of the syntactically simpler cases with variations of the previous 'sed' scripts. The more syntactically complex multi-line cases would mostly really want whitespace cleanup anyway. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2026-02-21  Convert 'alloc_flex' family to use the new default GFP_KERNEL argument  (Linus Torvalds)
This is the exact same thing as the 'alloc_obj()' version, only much smaller because there are a lot fewer users of the *alloc_flex() interface. As with the alloc_obj() version, this was done entirely with mindless brute force, using the same script, except using 'flex' in the pattern rather than 'objs*'. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2026-02-21  Convert 'alloc_obj' family to use the new default GFP_KERNEL argument  (Linus Torvalds)
This was done entirely with mindless brute force, using git grep -l '\<k[vmz]*alloc_objs*(.*, GFP_KERNEL)' | xargs sed -i 's/\(alloc_objs*(.*\), GFP_KERNEL)/\1)/' to convert the new alloc_obj() users that had a simple GFP_KERNEL argument to just drop that argument. Note that due to the extreme simplicity of the scripting, any slightly more complex cases spread over multiple lines would not be triggered: they definitely exist, but this covers the vast bulk of the cases, and the resulting diff is also then easier to check automatically. For the same reason the 'flex' versions will be done as a separate conversion. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2026-02-21  treewide: Replace kmalloc with kmalloc_obj for non-scalar types  (Kees Cook)
This is the result of running the Coccinelle script from scripts/coccinelle/api/kmalloc_objs.cocci. The script is designed to avoid scalar types (which need careful case-by-case checking), and instead replace kmalloc-family calls that allocate struct or union object instances: Single allocations: kmalloc(sizeof(TYPE), ...) are replaced with: kmalloc_obj(TYPE, ...) Array allocations: kmalloc_array(COUNT, sizeof(TYPE), ...) are replaced with: kmalloc_objs(TYPE, COUNT, ...) Flex array allocations: kmalloc(struct_size(PTR, FAM, COUNT), ...) are replaced with: kmalloc_flex(*PTR, FAM, COUNT, ...) (where TYPE may also be *VAR) The resulting allocations no longer return "void *", instead returning "TYPE *". Signed-off-by: Kees Cook <kees@kernel.org>
2026-01-15  perf/x86/intel/uncore: Convert comma to semicolon  (Chen Ni)
Replace comma between expressions with semicolons. Using a ',' in place of a ';' can have unintended side effects. Although that is not the case here, it seems best to use ';' unless ',' is intended. Found by inspection. No functional change intended. Compile tested only. Fixes: e7d5f2ea0923 ("perf/x86/intel/uncore: Add Nova Lake support") Signed-off-by: Chen Ni <nichen@iscas.ac.cn> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20260114023652.3926117-1-nichen@iscas.ac.cn
2026-01-15  perf/x86/intel: Add support for rdpmc user disable feature  (Dapeng Mi)
Starting with Panther Cove, the rdpmc user disable feature is supported. This feature allows the perf system to disable user space rdpmc reads at the counter level. Currently, when a global counter is active, any user with rdpmc rights can read it, even if perf access permissions forbid it (e.g., disallow reading ring 0 counters). The rdpmc user disable feature mitigates this security concern. Details: - A new RDPMC_USR_DISABLE bit (bit 37) in each EVNTSELx MSR indicates that the GP counter cannot be read by RDPMC in ring 3. - New RDPMC_USR_DISABLE bits in IA32_FIXED_CTR_CTRL MSR (bits 33, 37, 41, 45, etc.) for fixed counters 0, 1, 2, 3, etc. - When calling rdpmc instruction for counter x, the following pseudo code demonstrates how the counter value is obtained: If (!CPL0 && RDPMC_USR_DISABLE[x] == 1) ? 0 : counter_value; - RDPMC_USR_DISABLE is enumerated by CPUID.0x23.0.EBX[2]. This patch extends the current global user space rdpmc control logic via the sysfs interface (/sys/devices/cpu/rdpmc) as follows: - rdpmc = 0: Global user space rdpmc and counter-level user space rdpmc for all counters are both disabled. - rdpmc = 1: Global user space rdpmc is enabled during the mmap-enabled time window, and counter-level user space rdpmc is enabled only for non-system-wide events. This prevents counter data leaks as count data is cleared during context switches. - rdpmc = 2: Global user space rdpmc and counter-level user space rdpmc for all counters are enabled unconditionally. The new rdpmc settings only affect newly activated perf events; currently active perf events remain unaffected. This simplifies and cleans up the code. The default value of rdpmc remains unchanged at 1. For more details about rdpmc user disable, please refer to chapter 15 "RDPMC USER DISABLE" in ISE documentation. Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20260114011750.350569-8-dapeng1.mi@linux.intel.com
2026-01-15  perf/x86: Use macros to replace magic numbers in attr_rdpmc  (Dapeng Mi)
Replace magic numbers in attr_rdpmc with macros to improve readability and make their meanings clearer for users. Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20260114011750.350569-7-dapeng1.mi@linux.intel.com
2026-01-15  perf/x86/intel: Add core PMU support for Novalake  (Dapeng Mi)
This patch enables core PMU support for Novalake, covering both P-core and E-core. It includes Arctic Wolf-specific counters and PEBS constraints, and the model-specific OMR extra registers table. Since Coyote Cove shares the same PMU capabilities as Panther Cove, the existing Panther Cove PMU enabling functions are reused for Coyote Cove. For detailed information about counter constraints, please refer to section 16.3 "COUNTER RESTRICTIONS" in the ISE documentation. Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20260114011750.350569-6-dapeng1.mi@linux.intel.com
2026-01-15  perf/x86/intel: Add support for PEBS memory auxiliary info field in NVL  (Dapeng Mi)
Similar to DMR (Panther Cove uarch), both P-core (Coyote Cove uarch) and E-core (Arctic Wolf uarch) of NVL adopt the new PEBS memory auxiliary info layout. Coyote Cove microarchitecture shares the same PMU capabilities, including the memory auxiliary info layout, with Panther Cove. Arctic Wolf microarchitecture has a similar layout to Panther Cove, with the only difference being specific data source encoding for L2 hit cases (up to the L2 cache level). The OMR encoding remains the same as in Panther Cove. For detailed information on the memory auxiliary info encoding, please refer to section 16.2 "PEBS LOAD LATENCY AND STORE LATENCY FACILITY" in the latest ISE documentation. This patch defines Arctic Wolf specific data source encoding and then supports PEBS memory auxiliary info field for NVL. Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20260114011750.350569-5-dapeng1.mi@linux.intel.com
2026-01-15  perf/x86/intel: Add core PMU support for DMR  (Dapeng Mi)
This patch enables core PMU features for Diamond Rapids (Panther Cove microarchitecture), including Panther Cove specific counter and PEBS constraints, a new cache events ID table, and the model-specific OMR events extra registers table. For detailed information about counter constraints, please refer to section 16.3 "COUNTER RESTRICTIONS" in the ISE documentation. Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20260114011750.350569-4-dapeng1.mi@linux.intel.com
2026-01-15  perf/x86/intel: Add support for PEBS memory auxiliary info field in DMR  (Dapeng Mi)
With the introduction of the OMR feature, the PEBS memory auxiliary info field for load and store latency events has been restructured for DMR. The memory auxiliary info field's bit[8] indicates whether a L2 cache miss occurred for a memory load or store instruction. If bit[8] is 0, it signifies no L2 cache miss, and bits[7:0] specify the exact cache data source (up to the L2 cache level). If bit[8] is 1, bits[7:0] represent the OMR encoding, indicating the specific L3 cache or memory region involved in the memory access. A significant enhancement for OMR encoding is the ability to provide up to 8 fine-grained memory regions in addition to the cache region, offering more detailed insights into memory access regions. For detailed information on the memory auxiliary info encoding, please refer to section 16.2 "PEBS LOAD LATENCY AND STORE LATENCY FACILITY" in the ISE documentation. This patch ensures that the PEBS memory auxiliary info field is correctly interpreted and utilized in DMR. Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20260114011750.350569-3-dapeng1.mi@linux.intel.com
2026-01-15  perf/x86/intel: Support the 4 new OMR MSRs introduced in DMR and NVL  (Dapeng Mi)
Diamond Rapids (DMR) and Nova Lake (NVL) introduce an enhanced Off-Module Response (OMR) facility, replacing the Off-Core Response (OCR) Performance Monitoring of previous processors. Legacy microarchitectures used the OCR facility to evaluate off-core and multi-core off-module transactions. The newly named OMR facility improves OCR capabilities for scalable coverage of new memory systems in multi-core module systems. Similar to OCR, 4 additional off-module configuration MSRs (OFFMODULE_RSP_0 to OFFMODULE_RSP_3) are introduced to specify attributes of off-module transactions. When multiple identical OMR events are created, they need to occupy the same OFFMODULE_RSP_x MSR. To ensure these multiple identical OMR events can work simultaneously, the intel_alt_er() and intel_fixup_er() helpers are enhanced to rotate these OMR events across different OFFMODULE_RSP_* MSRs, similar to previous OCR events. For more details about OMR, please refer to section 16.1 "OFF-MODULE RESPONSE (OMR) FACILITY" in ISE documentation. Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20260114011750.350569-2-dapeng1.mi@linux.intel.com
2026-01-06  perf/x86/intel/uncore: Add Nova Lake support  (Zide Chen)
Nova Lake uncore PMON largely follows Panther Lake and supports CBOX, iMC, cNCU, SANTA, sNCU, and HBO units. As with Panther Lake, CBOX, cNCU, and SANTA are not enumerated via discovery tables. Their programming model matches Panther Lake, with differences limited to MSR addresses and the number of boxes or counters per box. The remaining units are enumerated via discovery tables using a new base MSR (0x711) and otherwise reuse the Panther Lake implementation. Nova Lake also supports iMC free-running counters. Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251231224233.113839-14-zide.chen@intel.com
2026-01-06  perf/x86/intel/uncore: Add missing PMON units for Panther Lake  (Zide Chen)
Besides CBOX, Panther Lake includes several legacy uncore PMON units not enumerated via discovery tables, including cNCU, SANTA, and ia_core_bridge. The cNCU PMON is similar to Meteor Lake but has two boxes with two counters each. SANTA and IA Core Bridge PMON units follow the legacy model used on Lunar Lake, Meteor Lake, and others. Panther Lake implements the Global Control Register; the freeze_all bit must be cleared before programming counters. Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251231224233.113839-13-zide.chen@intel.com
2026-01-06  perf/x86/intel/uncore: Update DMR uncore constraints preliminarily  (Zide Chen)
Update event constraints based on the latest DMR uncore event list. Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251231224233.113839-11-zide.chen@intel.com
2026-01-06  perf/x86/intel/uncore: Support uncore constraint ranges  (Zide Chen)
Add UNCORE_EVENT_CONSTRAINT_RANGE macro for uncore constraints, similar to INTEL_EVENT_CONSTRAINT_RANGE, to reduce duplication when defining consecutive uncore event constraints. No functional change intended. Suggested-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251231224233.113839-10-zide.chen@intel.com
2026-01-06  perf/x86/intel/uncore: Support IIO free-running counters on DMR  (Zide Chen)
The free-running counters for IIO uncore blocks on Diamond Rapids are similar to Sapphire Rapids IMC freecounters, with the following differences: - The counters are MMIO based. - Only a subset of IP blocks implement free-running counters: HIOP0 (IP Base Addr: 2E7000h) HIOP1 (IP Base Addr: 2EF000h) HIOP3 (IP Base Addr: 2FF000h) HIOP4 (IP Base Addr: 307000h) - IMH2 (Secondary IMH) does not provide free-running counters. Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251231224233.113839-9-zide.chen@intel.com
2026-01-06  perf/x86/intel/uncore: Add freerunning event descriptor helper macro  (Zide Chen)
Freerunning counter events are repetitive: the event code is fixed to 0xff, the unit is always "MiB", and the scale is identical across all counters on a given PMON unit. Introduce a new helper macro, INTEL_UNCORE_FR_EVENT_DESC(), to populate the event, scale, and unit descriptor triplet. This reduces duplicated lines and improves readability. No functional change intended. Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251231224233.113839-8-zide.chen@intel.com
2026-01-06  perf/x86/intel/uncore: Add domain global init callback  (Zide Chen)
In the Intel uncore self-describing mechanism, the Global Control Register freeze_all bit is SoC-wide and propagates to all uncore PMUs. On Diamond Rapids, this bit is set at power-on, unlike some prior platforms. Add a global_init callback to unfreeze all PMON units. Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251231224233.113839-7-zide.chen@intel.com
2026-01-06  perf/x86/intel/uncore: Add CBB PMON support for Diamond Rapids  (Zide Chen)
On DMR, PMON units inside the Core Building Block (CBB) are enumerated separately from those in the Integrated Memory and I/O Hub (IMH). A new per-CBB MSR (0x710) is introduced for discovery table enumeration. For counter control registers, the tid_en bit (bit 16) exists on CBO, SBO, and Santa, but it is not used by any events. Mark this bit as reserved. Similarly, disallow extended umask (bits 32–63) on Santa and sNCU. Additionally, ignore broken SB2UCIE unit. Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251231224233.113839-6-zide.chen@intel.com
2026-01-06  perf/x86/intel/uncore: Add IMH PMON support for Diamond Rapids  (Zide Chen)
DMR supports IMH PMON units for PCU, UBox, iMC, and CXL: - PCU and UBox are the same as on SPR. - iMC is similar to SPR but uses different offsets for fixed registers. - CXL introduces a new port_enable field and changes the position of the threshold field. DMR also introduces additional PMON units: SCA, HAMVF, D2D_ULA, UBR, PCIE4, CRS, CPC, ITC, OTC, CMS, and PCIE6. Among these, PCIE4 and PCIE6 use different unit types, but share the same config register layout, and the generic PCIe PMON events apply to both. Additionally, ignore the broken MSE unit. Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20251231224233.113839-5-zide.chen@intel.com
2026-01-06  perf/x86/intel/uncore: Remove has_generic_discovery_table()  (Zide Chen)
In the !x86_match_cpu() fallback path, has_generic_discovery_table() is removed because it does not handle multiple PCI devices. Instead, use PCI_ANY_ID in generic_uncore_init[] to probe all PCI devices. For MSR portals, only probe MSR 0x201e to keep the fallback simple, as this path is best-effort only. Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251231224233.113839-4-zide.chen@intel.com
2026-01-06  perf/x86/intel/uncore: Support per-platform discovery base devices  (Zide Chen)
On DMR platforms, IMH discovery tables are enumerated via PCI, while CBB domains use MSRs, unlike earlier platforms which relied on either PCI or MSR exclusively. DMR also uses different MSRs and PCI devices, requiring support for multiple, platform-specific discovery bases. Introduce struct uncore_discovery_domain to hold the discovery base and other domain-specific configuration. Move uncore_units_ignore into uncore_discovery_domain so a single structure can be passed to uncore_discovery_[pci/msr]. No functional change intended. Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251231224233.113839-3-zide.chen@intel.com
2026-01-06  perf/x86/intel/uncore: Move uncore discovery init struct to header  (Zide Chen)
The discovery base MSR or PCI device is platform-specific and must be defined statically in the per-platform init table and passed to the discovery code. Move the definition of struct intel_uncore_init_fun to uncore.h so it can be accessed by discovery code, and rename it to reflect that it now carries more than just init callbacks. Shorten intel_uncore_has_discovery_tables[_pci/msr] to uncore_discovery[_pci/msr] for improved readability and alignment. Drop the `intel_` prefix from new names since the code is under the intel directory and long identifiers make alignment harder. Further cleanups will continue removing `intel_` prefixes. No functional change intended. Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251231224233.113839-2-zide.chen@intel.com
2026-01-06  perf/x86/uncore: clean up const mismatch  (Greg Kroah-Hartman)
In some cmp functions, a const pointer is cast out to a non-const pointer by using container_of() which is not correct. Fix this up by properly marking the pointers as const, which preserves the correct type of the pointer passed into the functions. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/2025121741-headstand-stratus-f5eb@gregkh
2025-12-17  perf/x86/cstate: Add Airmont NP  (Martin Schiller)
From the perspective of Intel cstate residency counters, the Airmont NP (aka Lightning Mountain) is identical to the Airmont. Signed-off-by: Martin Schiller <ms@dev.tdt.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251124074846.9653-4-ms@dev.tdt.de
2025-12-17  perf/x86/intel: Add Airmont NP  (Martin Schiller)
The Intel / MaxLinear Airmont NP (aka Lightning Mountain) supports the same architectural and non-architectural events as Airmont. Signed-off-by: Martin Schiller <ms@dev.tdt.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251124074846.9653-3-ms@dev.tdt.de
2025-12-17  perf/x86/intel: Support PERF_PMU_CAP_MEDIATED_VPMU  (Kan Liang)
Apply the PERF_PMU_CAP_MEDIATED_VPMU for Intel core PMU. It only indicates that the perf side of core PMU is ready to support the mediated vPMU. Besides the capability, the hypervisor, a.k.a. KVM, still needs to check the PMU version and other PMU features/capabilities to decide whether to enable support for mediated vPMUs. [sean: massage changelog] Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Mingwei Zhang <mizhang@google.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Xudong Hao <xudong.hao@intel.com> Link: https://patch.msgid.link/20251206001720.468579-13-seanjc@google.com
2025-12-16  perf/x86/intel/cstate: Add Diamond Rapids support  (Zide Chen)
From a C-state residency profiling perspective, Diamond Rapids is similar to SRF and GNR, supporting core C1/C6, module C6, and package C2/C6 residency counters. Similar to CWF, the C1E residency can be accessed via PMT only. Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251215182520.115822-3-zide.chen@intel.com
2025-12-16  perf/x86/intel/cstate: Add Nova Lake support  (Zide Chen)
Similar to Lunar Lake and Panther Lake, Nova Lake supports CC1/CC6/CC7 and PC2/PC6/PC10 residency counters; it also adds support for MC6. Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251215182520.115822-2-zide.chen@intel.com
2025-12-16  perf/x86/intel/cstate: Add Wildcat Lake support  (Zide Chen)
Wildcat Lake (WCL) is a low-power variant of Panther Lake. From a C-state profiling perspective, it supports the same residency counters: CC1/CC6/CC7 and PC2/PC6/PC10. Signed-off-by: Zide Chen <zide.chen@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://patch.msgid.link/20251215182520.115822-1-zide.chen@intel.com
2025-12-12  perf/x86/intel: Fix NULL event dereference crash in handle_pmi_common()  (Evan Li)
handle_pmi_common() may observe an active bit set in cpuc->active_mask while the corresponding cpuc->events[] entry has already been cleared, which leads to a NULL pointer dereference. This can happen when interrupt throttling stops all events in a group while PEBS processing is still in progress. perf_event_overflow() can trigger perf_event_throttle_group(), which stops the group and clears the cpuc->events[] entry, but the active bit may still be set when handle_pmi_common() iterates over the events. The following recent fix: 7e772a93eb61 ("perf/x86: Fix NULL event access and potential PEBS record loss") moved the cpuc->events[] clearing from x86_pmu_stop() to x86_pmu_del() and relied on cpuc->active_mask/pebs_enabled checks. However, handle_pmi_common() can still encounter a NULL cpuc->events[] entry despite the active bit being set. Add an explicit NULL check on the event pointer before using it, to cover this legitimate scenario and avoid the NULL dereference crash. Fixes: 7e772a93eb61 ("perf/x86: Fix NULL event access and potential PEBS record loss") Reported-by: kitta <kitta@linux.alibaba.com> Co-developed-by: kitta <kitta@linux.alibaba.com> Signed-off-by: Evan Li <evan.li@linux.alibaba.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://patch.msgid.link/20251212084943.2124787-1-evan.li@linux.alibaba.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=220855
2025-12-02  Merge tag 'x86_misc_for_6.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull misc x86 updates from Dave Hansen: "The most significant are some changes to ensure that symbols exported for KVM are used only by KVM modules themselves, along with some related cleanups. In true x86/misc fashion, the other patch is completely unrelated and just enhances an existing pr_warn() to make it clear to users how they have tainted their kernel when something is mucking with MSRs. Summary: - Make MSR-induced taint easier for users to track down - Restrict KVM-specific exports to KVM itself" * tag 'x86_misc_for_6.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86: Restrict KVM-induced symbol exports to KVM modules where obvious/possible x86/mm: Drop unnecessary export of "ptdump_walk_pgd_level_debugfs" x86/mtrr: Drop unnecessary export of "mtrr_state" x86/bugs: Drop unnecessary export of "x86_spec_ctrl_base" x86/msr: Add CPU_OUT_OF_SPEC taint name to "unrecognized" pr_warn(msg)
2025-12-01  Merge tag 'perf-core-2025-12-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull performance events updates from Ingo Molnar: "Callchain support: - Add support for deferred user-space stack unwinding for perf, enabled on x86. (Peter Zijlstra, Steven Rostedt) - unwind_user/x86: Enable frame pointer unwinding on x86 (Josh Poimboeuf) x86 PMU support and infrastructure: - x86/insn: Simplify for_each_insn_prefix() (Peter Zijlstra) - x86/insn,uprobes,alternative: Unify insn_is_nop() (Peter Zijlstra) Intel PMU driver: - Large series to prepare for and implement architectural PEBS support for Intel platforms such as Clearwater Forest (CWF) and Panther Lake (PTL). (Dapeng Mi, Kan Liang) - Check dynamic constraints (Kan Liang) - Optimize PEBS extended config (Peter Zijlstra) - cstates: - Remove PC3 support from LunarLake (Zhang Rui) - Add Pantherlake support (Zhang Rui) - Clearwater Forest support (Zide Chen) AMD PMU driver: - x86/amd: Check event before enable to avoid GPF (George Kennedy) Fixes and cleanups: - task_work: Fix NMI race condition (Peter Zijlstra) - perf/x86: Fix NULL event access and potential PEBS record loss (Dapeng Mi) - Misc other fixes and cleanups (Dapeng Mi, Ingo Molnar, Peter Zijlstra)" * tag 'perf-core-2025-12-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (38 commits) perf/x86/intel: Fix and clean up intel_pmu_drain_arch_pebs() type use perf/x86/intel: Optimize PEBS extended config perf/x86/intel: Check PEBS dyn_constraints perf/x86/intel: Add a check for dynamic constraints perf/x86/intel: Add counter group support for arch-PEBS perf/x86/intel: Setup PEBS data configuration and enable legacy groups perf/x86/intel: Update dyn_constraint base on PEBS event precise level perf/x86/intel: Allocate arch-PEBS buffer and initialize PEBS_BASE MSR perf/x86/intel: Process arch-PEBS records or record fragments perf/x86/intel/ds: Factor out PEBS group processing code to functions perf/x86/intel/ds: Factor out PEBS record processing code to functions perf/x86/intel: Initialize architectural PEBS perf/x86/intel: Correct large PEBS flag check perf/x86/intel: Replace x86_pmu.drain_pebs calling with static call perf/x86: Fix NULL event access and potential PEBS record loss perf/x86: Remove redundant is_x86_event() prototype entry,unwind/deferred: Fix unwind_reset_info() placement unwind_user/x86: Fix arch=um build perf: Support deferred user unwind unwind_user/x86: Teach FP unwind about start of function ...
2025-11-19perf/x86/intel/uncore: Remove superfluous checkJiri Slaby (SUSE)
The 'pmu' pointer cannot be NULL, as it is taken as a pointer to an array. Remove the superfluous NULL check. Found by Coverity: CID#1497507. Signed-off-by: Jiri Slaby (SUSE) <jirislaby@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Liang Kan <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://patch.msgid.link/20251119091538.825307-1-jirislaby@kernel.org
2025-11-12x86: Restrict KVM-induced symbol exports to KVM modules where obvious/possibleSean Christopherson
Extend KVM's export macro framework to provide EXPORT_SYMBOL_FOR_KVM(), and use the helper macro to export symbols for KVM throughout x86 if and only if KVM will build one or more modules, and only for those modules. To avoid unnecessary exports when CONFIG_KVM=m but kvm.ko will not be built (because no vendor modules are selected), let arch code #define EXPORT_SYMBOL_FOR_KVM to suppress/override the exports. Note, the set of symbols to restrict to KVM was generated by manual search and audit; any "misses" are due to human error, not some grand plan. Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Kai Huang <kai.huang@intel.com> Tested-by: Kai Huang <kai.huang@intel.com> Link: https://patch.msgid.link/20251112173944.1380633-5-seanjc%40google.com
2025-11-12perf/x86/intel: Fix and clean up intel_pmu_drain_arch_pebs() type useIngo Molnar
The following commit introduced a build failure on x86-32: d21954c8a0ff ("perf/x86/intel: Process arch-PEBS records or record fragments") ... arch/x86/events/intel/ds.c:2983:24: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast] The forced type conversions to 'u64' and 'void *' are not 32-bit clean, but they are also entirely unnecessary: ->pebs_vaddr is already 'void *', and pointer arithmetic works just fine on it. Fix & simplify the code. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Fixes: d21954c8a0ff ("perf/x86/intel: Process arch-PEBS records or record fragments") Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Link: https://patch.msgid.link/20251029102136.61364-10-dapeng1.mi@linux.intel.com
2025-11-07perf/x86/intel: Optimize PEBS extended configPeter Zijlstra
Similar to enable_acr_event, avoid the branch. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
2025-11-07perf/x86/intel: Check PEBS dyn_constraintsPeter Zijlstra
Handle the interaction between ("perf/x86/intel: Update dyn_constraint base on PEBS event precise level") and ("perf/x86/intel: Add a check for dynamic constraints"). Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
2025-11-07perf/x86/intel: Add a check for dynamic constraintsKan Liang
The current event scheduler has a limit: if the counter constraint of an event is not a subset of any other counter constraint with an equal or higher weight, the counters may not be fully utilized. To work around it, commit bc1738f6ee83 ("perf, x86: Fix event scheduler for constraints with overlapping counters") introduced an overlap flag, which is hardcoded into the event constraints that may trigger the limit. It only works for static constraints. Many features on and after Intel PMON v6 require dynamic constraints. An event constraint is decided by both static and dynamic constraints at runtime. See commit 4dfe3232cc04 ("perf/x86: Add dynamic constraint"). The dynamic constraints come from CPUID enumeration, so it's impossible to hardcode them in advance, and it's not practical to set the overlap flag on all events; doing so is harmful to the scheduler. For the existing Intel platforms, the dynamic constraints don't trigger the limit, so a real fix is not required. However, for virtualization, a VMM may give a weird CPUID enumeration to a guest, and it's impossible to anticipate what that enumeration looks like. Introduce a check which can list the possible breakage if a weird enumeration is used. Check the dynamic constraints enumerated for normal, branch counters logging, and auto-counter reload events; check both PEBS and non-PEBS constraints. Closes: https://lore.kernel.org/lkml/20250416195610.GC38216@noisy.programming.kicks-ass.net/ Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20250512175542.2000708-1-kan.liang@linux.intel.com
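The scheduler limit this commit talks about can be illustrated with a minimal sketch (plain Python, a deliberate simplification of the kernel's perf event scheduler; constraints are modeled as counter bitmasks and weight is the popcount): a greedy weight-ordered pass can strand counters when equal-weight constraints overlap without one being a subset of the other, which is exactly the case the overlap flag's backtracking fallback handles.

```python
def popcount(x):
    return bin(x).count("1")

def schedule_greedy(masks):
    """Place events in ascending constraint-weight order, always taking the
    lowest free counter. This only works when each constraint is a subset
    of every constraint with equal or higher weight."""
    used = 0
    assignment = {}
    for i in sorted(range(len(masks)), key=lambda i: popcount(masks[i])):
        free = masks[i] & ~used
        if not free:
            return None                            # scheduling failed
        counter = (free & -free).bit_length() - 1  # lowest free counter
        used |= 1 << counter
        assignment[i] = counter
    return assignment

def schedule_backtrack(masks, used=0, idx=0):
    """Exhaustive search: what the overlap flag falls back to."""
    if idx == len(masks):
        return {}
    for counter in range(8):
        bit = 1 << counter
        if masks[idx] & bit and not used & bit:
            rest = schedule_backtrack(masks, used | bit, idx + 1)
            if rest is not None:
                rest[idx] = counter
                return rest
    return None

# Overlapping constraints of equal weight: 0b011 is not a subset of 0b101.
events = [0b011, 0b101, 0b101]
print(schedule_greedy(events))     # None: greedy puts 0b011 on counter 0
print(schedule_backtrack(events))  # backtracking finds a valid assignment
```

With a static constraint table, the one constraint that can trigger this is known in advance and gets the overlap flag; with CPUID-enumerated dynamic constraints, the check this commit adds is what flags such cases instead.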
2025-11-07perf/x86/intel: Add counter group support for arch-PEBSDapeng Mi
Based on the previous adaptive PEBS counter snapshot support, add counter group support for architectural PEBS. Since arch-PEBS shares the same counter group layout as adaptive PEBS, directly reuse the __setup_pebs_counter_group() helper to process arch-PEBS counter groups. Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20251029102136.61364-13-dapeng1.mi@linux.intel.com
2025-11-07perf/x86/intel: Setup PEBS data configuration and enable legacy groupsDapeng Mi
Different from legacy PEBS, arch-PEBS provides per-counter PEBS data configuration by programming the IA32_PMC_GPx/FXx_CFG_C MSRs. This patch obtains the PEBS data configuration from the event attribute, writes it to the IA32_PMC_GPx/FXx_CFG_C MSRs, and enables the corresponding PEBS groups. Note that this patch only enables XMM SIMD register sampling for arch-PEBS; sampling of the other SIMD registers (OPMASK/YMM/ZMM) will be supported once PMI-based OPMASK/YMM/ZMM sampling is supported. Co-developed-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20251029102136.61364-12-dapeng1.mi@linux.intel.com
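As an illustration of the per-counter configuration idea, a config word can be assembled from the event's sampling requests. This is a sketch only: the bit positions and the sample_type stand-ins below are made up for the example, not the real IA32_PMC_GPx/FXx_CFG_C layout or the real perf ABI values.

```python
# Hypothetical per-counter PEBS config word (illustrative bit positions):
CFG_PEBS_EN   = 1 << 0   # enable PEBS on this counter
CFG_GRP_BASIC = 1 << 1   # record the basic group
CFG_GRP_GPR   = 1 << 2   # record general-purpose registers
CFG_GRP_XMM   = 1 << 3   # record XMM registers
CFG_GRP_LBR   = 1 << 4   # record LBR entries

# Stand-ins for perf sample_type bits (values are NOT the real ABI values):
SAMPLE_REGS_INTR    = 1 << 0
SAMPLE_BRANCH_STACK = 1 << 1

def pebs_data_cfg(sample_type, want_xmm=False):
    """Derive a per-counter config word from an event's sampling requests."""
    cfg = CFG_PEBS_EN | CFG_GRP_BASIC       # basic group is always recorded
    if sample_type & SAMPLE_REGS_INTR:
        cfg |= CFG_GRP_GPR
        if want_xmm:                        # XMM only; OPMASK/YMM/ZMM later
            cfg |= CFG_GRP_XMM
    if sample_type & SAMPLE_BRANCH_STACK:
        cfg |= CFG_GRP_LBR
    return cfg
```

The point of the per-counter MSRs is that each event carries its own group selection, instead of one global PEBS data configuration shared by all counters as in adaptive PEBS.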
2025-11-07perf/x86/intel: Update dyn_constraint base on PEBS event precise levelDapeng Mi
arch-PEBS provides CPUID enumeration of which counters support PEBS sampling and precise-distribution PEBS sampling. Thus PEBS constraints should be configured dynamically based on these counter and precise-distribution bitmaps instead of being defined statically. Update the event's dyn_constraint based on the PEBS event's precise level. Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20251029102136.61364-11-dapeng1.mi@linux.intel.com
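A sketch of the idea with hypothetical bitmaps (the function names and example masks below are illustrative, not the kernel's): the dynamic constraint selects between the PEBS-capable and precise-distribution counter bitmaps depending on the precise level, and the effective constraint is that mask intersected with the static one.

```python
def pebs_dyn_constraint(precise_level, pebs_cntrs, pdist_cntrs):
    """precise level 3 (':ppp') asks for precise distribution, so only the
    precise-distribution counters qualify; lower levels may use any
    PEBS-capable counter."""
    return pdist_cntrs if precise_level >= 3 else pebs_cntrs

def effective_constraint(static_mask, precise_level, pebs_cntrs, pdist_cntrs):
    """The final counter mask is the static constraint intersected with
    the dynamic one."""
    return static_mask & pebs_dyn_constraint(precise_level,
                                             pebs_cntrs, pdist_cntrs)

# Say all 8 GP counters are PEBS-capable but only counter 0 supports
# precise distribution (a made-up enumeration):
print(hex(effective_constraint(0xff, 1, 0xff, 0x01)))  # 0xff
print(hex(effective_constraint(0xff, 3, 0xff, 0x01)))  # 0x1
```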
2025-11-07perf/x86/intel: Allocate arch-PEBS buffer and initialize PEBS_BASE MSRDapeng Mi
Arch-PEBS introduces a new MSR, IA32_PEBS_BASE, to store the arch-PEBS buffer's physical address. This patch allocates the arch-PEBS buffer and then initializes the IA32_PEBS_BASE MSR with the buffer's physical address. Co-developed-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20251029102136.61364-10-dapeng1.mi@linux.intel.com
2025-11-07perf/x86/intel: Process arch-PEBS records or record fragmentsDapeng Mi
A significant difference from adaptive PEBS is that an arch-PEBS record supports fragments: a record can be split into several independent fragments, each carrying its own arch-PEBS header. This patch defines the architectural PEBS record layout structures and adds helpers to process arch-PEBS records or fragments. Only the legacy PEBS groups, like the basic, GPR, XMM and LBR groups, are supported in this patch; capturing the newly added YMM/ZMM/OPMASK vector registers will be supported in the future. Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20251029102136.61364-9-dapeng1.mi@linux.intel.com
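The fragment reassembly described here can be sketched as a toy parser. The fragment header below (a payload size plus a continuation flag) is made up for the sketch and is not the actual arch-PEBS header layout; only the walk-and-reassemble pattern is the point.

```python
import struct

# Made-up fragment header: u32 payload size, u8 flags,
# flags bit 0 = "more fragments of this record follow".
FRAG_HDR = struct.Struct("<IB")

def walk_records(buf):
    """Reassemble complete records from a buffer of fragments."""
    records, pending, off = [], b"", 0
    while off < len(buf):
        size, flags = FRAG_HDR.unpack_from(buf, off)
        off += FRAG_HDR.size
        pending += buf[off:off + size]       # accumulate this fragment
        off += size
        if not flags & 1:                    # last fragment: record complete
            records.append(pending)
            pending = b""
    return records

buf = (FRAG_HDR.pack(5, 0) + b"hello"        # single-fragment record
       + FRAG_HDR.pack(5, 1) + b"world"      # record split across two...
       + FRAG_HDR.pack(4, 0) + b"wide")      # ...fragments
print(walk_records(buf))                     # [b'hello', b'worldwide']
```

The kernel's drain path follows the same shape: walk headers, accumulate group data per fragment, and emit a sample only when the final fragment of a record has been consumed.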
2025-11-07perf/x86/intel/ds: Factor out PEBS group processing code to functionsDapeng Mi
Adaptive PEBS and arch-PEBS share a lot of code to process PEBS groups, like the basic, GPR and meminfo groups. Extract this shared code into generic functions to avoid duplication. Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20251029102136.61364-8-dapeng1.mi@linux.intel.com