Age  Commit message  Author
2021-04-30  arm64: Explicitly document boot requirements for SVE  [Mark Brown]
We do not currently document the requirements for configuration of the SVE system registers when booting the kernel, so let's do so for completeness. We don't have a hard requirement that the vector lengths configured on different CPUs at initial boot be consistent, since we have logic to constrain to the minimum supported value, but we will reject any late CPUs which can't support the current maximum. Introducing the concept of late CPUs seemed more complex than was useful, so we require that all CPUs use the same value. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210412151955.16078-4-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-30  arm64: Explicitly require that FPSIMD instructions do not trap  [Mark Brown]
We do not explicitly require that systems with FPSIMD support and EL3 have disabled the EL3 traps when the kernel is started. While it is unlikely that systems will get this wrong, for the sake of completeness let's spell it out. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210412151955.16078-3-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-30  arm64: Relax booting requirements for configuration of traps  [Mark Brown]
Currently we require that a number of system registers be configured to disable traps when starting the kernel. Add an explicit note that the requirement is that the system behave as if the traps are disabled, so transparent handling of the traps is fine. This should be implicit for people familiar with working with standards documents, but it doesn't hurt to be explicit. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210412151955.16078-2-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-30  arm64: cpufeatures: use min and max  [kernel test robot]
Use min and max to make the effect more clear. Generated by: scripts/coccinelle/misc/minmax.cocci CC: Denis Efremov <efremov@linux.com> Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: kernel test robot <lkp@intel.com> Signed-off-by: Julia Lawall <julia.lawall@inria.fr> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/alpine.DEB.2.22.394.2104292246300.16899@hadrien [catalin.marinas@arm.com: include <linux/minmax.h> explicitly] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
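For illustration, a hedged sketch of the kind of rewrite minmax.cocci performs; the helper names below are hypothetical, only min() and <linux/minmax.h> come from the kernel:

  #include <linux/minmax.h>
  #include <linux/types.h>

  /* Before: an open-coded ternary obscures that this is simply a minimum. */
  static inline u64 pick_safe_value_before(u64 new, u64 cur)
  {
          return new < cur ? new : cur;
  }

  /* After: min() states the intent directly and also type-checks its
   * two arguments.
   */
  static inline u64 pick_safe_value_after(u64 new, u64 cur)
  {
          return min(new, cur);
  }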
2021-04-30  arm64: stacktrace: restore terminal records  [Mark Rutland]
We removed the terminal frame records in commit: 6106e1112cc69a36 ("arm64: remove EL0 exception frame record") ... on the assumption that as we no longer used them to find the pt_regs at exception boundaries, they were no longer necessary. However, Leo reports that as an unintended side-effect, this causes traces which cross secondary_start_kernel to terminate one entry too late, with a spurious "0" entry. There are a few ways we could solve this, but as we're planning to use terminal records for RELIABLE_STACKTRACE, let's revert the logic change for now, keeping the updated comments and accounting for the changes in commit: 3c02600144bdb0a1 ("arm64: stacktrace: Report when we reach the end of the stack") This is effectively a partial revert of commit: 6106e1112cc69a36 ("arm64: remove EL0 exception frame record") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Fixes: 6106e1112cc6 ("arm64: remove EL0 exception frame record") Reported-by: Leo Yan <leo.yan@linaro.org> Tested-by: Leo Yan <leo.yan@linaro.org> Cc: Will Deacon <will@kernel.org> Cc: Mark Brown <broonie@kernel.org> Cc: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com> Link: https://lore.kernel.org/r/20210429104813.GA33550@C02TD0UTHF1T.local Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-30  arm64/vdso: Discard .note.gnu.property sections in vDSO  [Bill Wendling]
The arm64 assembler in binutils 2.32 and above generates a program property note in a note section, .note.gnu.property, to encode used CPU features (such as BTI and PAC). But the kernel linker script only contains a single NOTE segment:

  PHDRS {
    text    PT_LOAD    FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
    dynamic PT_DYNAMIC FLAGS(4);               /* PF_R */
    note    PT_NOTE    FLAGS(4);               /* PF_R */
  }

The NOTE segment generated by the vDSO linker script is aligned to 4 bytes. But the .note.gnu.property section must be aligned to 8 bytes on arm64.

  $ readelf -n vdso64.so

  Displaying notes found in: .note
    Owner    Data size    Description
    Linux    0x00000004   Unknown note type: (0x00000000)
       description data: 06 00 00 00
  readelf: Warning: note with invalid namesz and/or descsz found at offset 0x20
  readelf: Warning:  type: 0x78, namesize: 0x00000100, descsize: 0x756e694c, alignment: 8

Since the .note.gnu.property section in the vDSO is not checked by the dynamic linker, discard the .note.gnu.property sections in the vDSO. Similar to commit 4caffe6a28d31 ("x86/vdso: Discard .note.gnu.property sections in vDSO"), but for arm64. Signed-off-by: Bill Wendling <morbo@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210423205159.830854-1-morbo@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-29  arm64: doc: Add brk/mmap/mremap() to the Tagged Address ABI Exceptions  [Catalin Marinas]
Prior to commit dcde237319e6 ("mm: Avoid creating virtual address aliases in brk()/mmap()/mremap()"), the kernel allowed tagged addresses to be passed to the brk/mmap/mremap() syscalls. This relaxation was tightened in 5.6 (backported to stable 5.4) but the tagged-address-abi.rst document was only partially updated. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Fixes: dcde237319e6 ("mm: Avoid creating virtual address aliases in brk()/mmap()/mremap()") Reported-by: Peter Maydell <peter.maydell@linaro.org> Cc: Will Deacon <will@kernel.org> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210423175134.14838-1-catalin.marinas@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-29  psci: Remove unneeded semicolon  [Yang Li]
Eliminate the following coccicheck warning: ./drivers/firmware/psci/psci.c:141:2-3: Unneeded semicolon Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Link: https://lore.kernel.org/r/1619659589-4775-1-git-send-email-yang.lee@linux.alibaba.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-23  ACPI: irq: Prevent unregistering of GIC SGIs  [Marc Zyngier]
When using ACPI on arm64, which implies the GIC IRQ model, no table should ever provide a GSI number in the range [0:15], as these are reserved for IPIs. However, drivers tend to call acpi_unregister_gsi() with any random GSI number provided by half baked tables, which results in an exploding kernel when its IPIs have been unconfigured. In order to catch this, check for the silly case early, warn that something is going wrong and avoid the above disaster. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Tested-by: dann frazier <dann.frazier@canonical.com> Tested-by: Hanjun Guo <guohanjun@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Link: https://lore.kernel.org/r/20210421164317.1718831-3-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-23  ACPI: GTDT: Don't corrupt interrupt mappings on watchdog probe failure  [Marc Zyngier]
When failing the driver probe because of invalid firmware properties, the GTDT driver unmaps the interrupt that it mapped earlier. However, it never checks whether the mapping of the interrupt actually succeeded. Even more, should the firmware report an illegal interrupt number that overlaps with the GIC SGI range, this can result in an IPI being unmapped, and subsequent fireworks (as reported by Dann Frazier). Rework the driver to have a slightly saner behaviour and actually check whether the interrupt has been mapped before unmapping things. Reported-by: dann frazier <dann.frazier@canonical.com> Fixes: ca9ae5ec4ef0 ("acpi/arm64: Add SBSA Generic Watchdog support in GTDT driver") Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/YH87dtTfwYgavusz@xps13.dannf Cc: <stable@vger.kernel.org> Cc: Fu Wei <wefu@redhat.com> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Tested-by: dann frazier <dann.frazier@canonical.com> Tested-by: Hanjun Guo <guohanjun@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Link: https://lore.kernel.org/r/20210421164317.1718831-2-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-23  arm64: Show three registers per line  [Matthew Wilcox (Oracle)]
Displaying two registers per line takes 15 lines. That improves to just 10 lines if we display three registers per line, which reduces the amount of information lost when oopses are cut off. It stays within 80 columns and matches x86-64. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210420172245.3679077-1-willy@infradead.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
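A hedged, simplified sketch of the register-dump loop this change implies (not the exact __show_regs() code in arch/arm64/kernel/process.c):

  #include <linux/printk.h>

  /* Print the general-purpose registers three to a line, so the dump needs
   * roughly 10 printk() lines instead of 15 and each line stays within 80
   * columns.
   */
  static void show_gprs(const unsigned long *gprs)
  {
          int i = 29;

          while (i >= 0) {
                  if (i >= 2) {
                          printk("x%-2d: %016lx x%-2d: %016lx x%-2d: %016lx\n",
                                 i, gprs[i], i - 1, gprs[i - 1],
                                 i - 2, gprs[i - 2]);
                          i -= 3;
                  } else {
                          printk("x%-2d: %016lx\n", i, gprs[i]);
                          i--;
                  }
          }
  }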
2021-04-23  arm64: remove HAVE_DEBUG_BUGVERBOSE  [Jisheng Zhang]
After commit 9fb7410f955f ("arm64/BUG: Use BRK instruction for generic BUG traps"), arm64 has switched to generic BUG implementation, so there's no need to select HAVE_DEBUG_BUGVERBOSE. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210418215231.563d4b72@xhacker Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-23  arm64: alternative: simplify passing alt_region  [Mark Rutland]
In __apply_alternatives() we take a pointer to void which we later assign to a pointer to struct alt_region. As all callers are passing a pointer to struct alt_region to begin with, it's simpler and more robust to take a pointer to struct alt_region, so let's do so and avoid the need for a temporary variable. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210416163032.10857-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
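A paraphrased before/after sketch of the signature change (the struct layout is the assumed one from arch/arm64/kernel/alternative.c and the patching logic is elided):

  struct alt_instr;

  struct alt_region {
          struct alt_instr *begin;
          struct alt_instr *end;
  };

  /* Before: a void pointer that is immediately cast back, even though every
   * caller already has a struct alt_region *.
   */
  static void __apply_alternatives_old(void *alt_region)
  {
          struct alt_region *region = alt_region;
          /* ... patch instructions between region->begin and region->end ... */
  }

  /* After: take the real type and drop the temporary. */
  static void __apply_alternatives_new(struct alt_region *region)
  {
          /* ... patch instructions between region->begin and region->end ... */
  }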
2021-04-23  arm64: Force SPARSEMEM_VMEMMAP as the only memory management model  [Catalin Marinas]
Currently arm64 allows a choice of FLATMEM, SPARSEMEM and SPARSEMEM_VMEMMAP. However, only the latter is tested regularly. FLATMEM does not seem to boot in certain configurations (guest under KVM with Qemu as a VMM). Since the reduction of the SECTION_SIZE_BITS to 27 (4K pages) or 29 (64K pages), there's little argument against the memory wasted by the mem_map array with SPARSEMEM. Make SPARSEMEM_VMEMMAP the only available option, non-selectable, and remove the corresponding #ifdefs under arch/arm64/. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Acked-by: Will Deacon <will@kernel.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Link: https://lore.kernel.org/r/20210420093559.23168-1-catalin.marinas@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-23  arm64: vdso32: drop -no-integrated-as flag  [Nick Desaulniers]
Clang can assemble these files just fine; this is a relic from the top level Makefile conditionally adding this. We no longer need --prefix, --gcc-toolchain, or -Qunused-arguments flags either with this change, so remove those too. To test building: $ ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \ CROSS_COMPILE_COMPAT=arm-linux-gnueabi- make LLVM=1 LLVM_IAS=1 \ defconfig arch/arm64/kernel/vdso32/ Suggested-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Tested-by: Stephen Boyd <swboyd@chromium.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210420174427.230228-1-ndesaulniers@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-15  Merge branch 'for-next/pac-set-get-enabled-keys' into for-next/core  [Catalin Marinas]
* for-next/pac-set-get-enabled-keys:
  : Introduce arm64 prctl(PR_PAC_{SET,GET}_ENABLED_KEYS).
  arm64: pac: Optimize kernel entry/exit key installation code paths
  arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  arm64: mte: make the per-task SCTLR_EL1 field usable elsewhere
2021-04-15  Merge branch 'for-next/mte-async-kernel-mode' into for-next/core  [Catalin Marinas]
* for-next/mte-async-kernel-mode:
  : Add MTE asynchronous kernel mode support
  kasan, arm64: tests supports for HW_TAGS async mode
  arm64: mte: Report async tag faults before suspend
  arm64: mte: Enable async tag check fault
  arm64: mte: Conditionally compile mte_enable_kernel_*()
  arm64: mte: Enable TCO in functions that can read beyond buffer limits
  kasan: Add report for async mode
  arm64: mte: Drop arch_enable_tagging()
  kasan: Add KASAN mode kernel parameter
  arm64: mte: Add asynchronous mode support
2021-04-15  Merge branches 'for-next/misc', 'for-next/kselftest', 'for-next/xntable', 'for-next/vdso', 'for-next/fiq', 'for-next/epan', 'for-next/kasan-vmalloc', 'for-next/fgt-boot-init', 'for-next/vhe-only' and 'for-next/neon-softirqs-disabled', remote-tracking branch 'arm64/for-next/perf' into for-next/core  [Catalin Marinas]
* for-next/misc:
  : Miscellaneous patches
  arm64/sve: Add compile time checks for SVE hooks in generic functions
  arm64/kernel/probes: Use BUG_ON instead of if condition followed by BUG.
  arm64/sve: Remove redundant system_supports_sve() tests
  arm64: mte: Remove unused mte_assign_mem_tag_range()
  arm64: Add __init section marker to some functions
  arm64/sve: Rework SVE access trap to convert state in registers
  docs: arm64: Fix a grammar error
  arm64: smp: Add missing prototype for some smp.c functions
  arm64: setup: name `tcr` register
  arm64: setup: name `mair` register
  arm64: stacktrace: Move start_backtrace() out of the header
  arm64: barrier: Remove spec_bar() macro
  arm64: entry: remove test_irqs_unmasked macro
  ARM64: enable GENERIC_FIND_FIRST_BIT
  arm64: defconfig: Use DEBUG_INFO_REDUCED

* for-next/kselftest:
  : Various kselftests for arm64
  kselftest: arm64: Add BTI tests
  kselftest/arm64: mte: Report filename on failing temp file creation
  kselftest/arm64: mte: Fix clang warning
  kselftest/arm64: mte: Makefile: Fix clang compilation
  kselftest/arm64: mte: Output warning about failing compiler
  kselftest/arm64: mte: Use cross-compiler if specified
  kselftest/arm64: mte: Fix MTE feature detection
  kselftest/arm64: mte: common: Fix write() warnings
  kselftest/arm64: mte: user_mem: Fix write() warning
  kselftest/arm64: mte: ksm_options: Fix fscanf warning
  kselftest/arm64: mte: Fix pthread linking
  kselftest/arm64: mte: Fix compilation with native compiler

* for-next/xntable:
  : Add hierarchical XN permissions for all page tables
  arm64: mm: use XN table mapping attributes for user/kernel mappings
  arm64: mm: use XN table mapping attributes for the linear region
  arm64: mm: add missing P4D definitions and use them consistently

* for-next/vdso:
  : Minor improvements to the compat vdso and sigpage
  arm64: compat: Poison the compat sigpage
  arm64: vdso: Avoid ISB after reading from cntvct_el0
  arm64: compat: Allow signal page to be remapped
  arm64: vdso: Remove redundant calls to flush_dcache_page()
  arm64: vdso: Use GFP_KERNEL for allocating compat vdso and signal pages

* for-next/fiq:
  : Support arm64 FIQ controller registration
  arm64: irq: allow FIQs to be handled
  arm64: Always keep DAIF.[IF] in sync
  arm64: entry: factor irq triage logic into macros
  arm64: irq: rework root IRQ handler registration
  arm64: don't use GENERIC_IRQ_MULTI_HANDLER
  genirq: Allow architectures to override set_handle_irq() fallback

* for-next/epan:
  : Support for Enhanced PAN (execute-only permissions)
  arm64: Support execute-only permissions with Enhanced PAN

* for-next/kasan-vmalloc:
  : Support CONFIG_KASAN_VMALLOC on arm64
  arm64: Kconfig: select KASAN_VMALLOC if KANSAN_GENERIC is enabled
  arm64: kaslr: support randomized module area with KASAN_VMALLOC
  arm64: Kconfig: support CONFIG_KASAN_VMALLOC
  arm64: kasan: abstract _text and _end to KERNEL_START/END
  arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC

* for-next/fgt-boot-init:
  : Booting clarifications and fine grained traps setup
  arm64: Require that system registers at all visible ELs be initialized
  arm64: Disable fine grained traps on boot
  arm64: Document requirements for fine grained traps at boot

* for-next/vhe-only:
  : Dealing with VHE-only CPUs (a.k.a. M1)
  arm64: Get rid of CONFIG_ARM64_VHE
  arm64: Cope with CPUs stuck in VHE mode
  arm64: cpufeature: Allow early filtering of feature override

* arm64/for-next/perf:
  arm64: perf: Remove redundant initialization in perf_event.c
  perf/arm_pmu_platform: Clean up with dev_printk
  perf/arm_pmu_platform: Fix error handling
  perf/arm_pmu_platform: Use dev_err_probe() for IRQ errors
  docs: perf: Address some html build warnings
  docs: perf: Add new description on HiSilicon uncore PMU v2
  drivers/perf: hisi: Add support for HiSilicon PA PMU driver
  drivers/perf: hisi: Add support for HiSilicon SLLC PMU driver
  drivers/perf: hisi: Update DDRC PMU for programmable counter
  drivers/perf: hisi: Add new functions for HHA PMU
  drivers/perf: hisi: Add new functions for L3C PMU
  drivers/perf: hisi: Add PMU version for uncore PMU drivers.
  drivers/perf: hisi: Refactor code for more uncore PMUs
  drivers/perf: hisi: Remove unnecessary check of counter index
  drivers/perf: Simplify the SMMUv3 PMU event attributes
  drivers/perf: convert sysfs sprintf family to sysfs_emit
  drivers/perf: convert sysfs scnprintf family to sysfs_emit_at() and sysfs_emit()
  drivers/perf: convert sysfs snprintf family to sysfs_emit

* for-next/neon-softirqs-disabled:
  : Run kernel mode SIMD with softirqs disabled
  arm64: fpsimd: run kernel mode NEON with softirqs disabled
  arm64: assembler: introduce wxN aliases for wN registers
  arm64: assembler: remove conditional NEON yield macros
2021-04-15  arm64/sve: Add compile time checks for SVE hooks in generic functions  [Mark Brown]
The FPSIMD code was relying on IS_ENABLED() checks in system_supports_sve() to cause the compiler to delete references to SVE functions in some places; add explicit IS_ENABLED() checks back. Fixes: ef9c5d09797d ("arm64/sve: Remove redundant system_supports_sve() tests") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210415121742.36628-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-13  arm64/kernel/probes: Use BUG_ON instead of if condition followed by BUG.  [zhouchuangao]
It can be optimized at compile time. Signed-off-by: zhouchuangao <zhouchuangao@vivo.com> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lore.kernel.org/r/1617105472-6081-1-git-send-email-zhouchuangao@vivo.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
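A minimal illustration of the transformation (the check_addr_*() helpers and the pointer check are hypothetical, not the actual probes code):

  #include <linux/bug.h>

  /* Before: explicit branch plus BUG(). */
  static void check_addr_before(void *addr)
  {
          if (!addr)
                  BUG();
  }

  /* After: BUG_ON() expresses the same invariant more compactly and gives
   * the compiler a better chance to optimize when the condition is known
   * at compile time.
   */
  static void check_addr_after(void *addr)
  {
          BUG_ON(!addr);
  }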
2021-04-13  arm64: pac: Optimize kernel entry/exit key installation code paths  [Peter Collingbourne]
The kernel does not use any keys besides IA so we don't need to install IB/DA/DB/GA on kernel exit if we arrange to install them on task switch instead, which we can expect to happen an order of magnitude less often. Furthermore we can avoid installing the user IA in the case where the user task has IA disabled and just leave the kernel IA installed. This also lets us avoid needing to install IA on kernel entry. On an Apple M1 under a hypervisor, the overhead of kernel entry/exit has been measured to be reduced by 15.6ns in the case where IA is enabled, and 31.9ns in the case where IA is disabled. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/Ieddf6b580d23c9e0bed45a822dabe72d2ffc9a8e Link: https://lore.kernel.org/r/2d653d055f38f779937f2b92f8ddd5cf9e4af4f4.1616123271.git.pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-13  arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)  [Peter Collingbourne]
This change introduces a prctl that allows the user program to control which PAC keys are enabled in a particular task. The main reason why this is useful is to enable a userspace ABI that uses PAC to sign and authenticate function pointers and other pointers exposed outside of the function, while still allowing binaries conforming to the ABI to interoperate with legacy binaries that do not sign or authenticate pointers. The idea is that a dynamic loader or early startup code would issue this prctl very early after establishing that a process may load legacy binaries, but before executing any PAC instructions. This change adds a small amount of overhead to kernel entry and exit due to additional required instruction sequences. On a DragonBoard 845c (Cortex-A75) with the powersave governor, the overhead of similar instruction sequences was measured as 4.9ns when simulating the common case where IA is left enabled, or 43.7ns when simulating the uncommon case where IA is disabled. These numbers can be seen as the worst case scenario, since in more realistic scenarios a better performing governor would be used and a newer chip would be used that would support PAC unlike Cortex-A75 and would be expected to be faster than Cortex-A75. On an Apple M1 under a hypervisor, the overhead of the entry/exit instruction sequences introduced by this patch was measured as 0.3ns in the case where IA is left enabled, and 33.0ns in the case where IA is disabled. Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5 Link: https://lore.kernel.org/r/d6609065f8f40397a4124654eb68c9f490b4d477.1616123271.git.pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
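A hedged userspace usage sketch: the PR_PAC_* constants require sufficiently new kernel/libc headers, and the policy shown (keep only IA enabled) is just an example of what a dynamic loader might do:

  #include <stdio.h>
  #include <sys/prctl.h>

  int main(void)
  {
          /* Affect the IB, DA and DB keys (arg2) and enable none of them
           * (arg3), leaving IA untouched and therefore still enabled.
           */
          unsigned long keys = PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY;

          if (prctl(PR_PAC_SET_ENABLED_KEYS, keys, 0UL, 0UL, 0UL))
                  perror("PR_PAC_SET_ENABLED_KEYS");

          long enabled = prctl(PR_PAC_GET_ENABLED_KEYS, 0UL, 0UL, 0UL, 0UL);
          if (enabled < 0)
                  perror("PR_PAC_GET_ENABLED_KEYS");
          else
                  printf("enabled PAC keys: %#lx\n", (unsigned long)enabled);

          return 0;
  }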
2021-04-13  arm64: mte: make the per-task SCTLR_EL1 field usable elsewhere  [Peter Collingbourne]
In an upcoming change we are going to introduce per-task SCTLR_EL1 bits for PAC. Move the existing per-task SCTLR_EL1 field out of the MTE-specific code so that we will be able to use it from both the PAC and MTE code paths and make the task switching code more efficient. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/Ic65fac78a7926168fa68f9e8da591c9e04ff7278 Link: https://lore.kernel.org/r/13d725cb8e741950fb9d6e64b2cd9bd54ff7c3f9.1616123271.git.pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-13  arm64/sve: Remove redundant system_supports_sve() tests  [Mark Brown]
Currently there are a number of places in the SVE code where we check both system_supports_sve() and TIF_SVE. This is a bit redundant given that we should never get into a situation where we have set TIF_SVE without having SVE support and it is not clear that silently ignoring a mistakenly set TIF_SVE flag is the most sensible error handling approach. For now let's just drop the system_supports_sve() checks since this will at least reduce overhead a little. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210412172320.3315-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
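A simplified sketch of the shape of the change; sve_save() is a hypothetical stand-in for the real save path, while system_supports_sve(), TIF_SVE and test_tsk_thread_flag() are the real kernel interfaces:

  #include <linux/sched.h>
  #include <asm/cpufeature.h>

  static void sve_save(struct task_struct *tsk) { /* hypothetical */ }

  /* Before: capability check duplicated in front of the TIF_SVE test. */
  static void save_state_before(struct task_struct *tsk)
  {
          if (system_supports_sve() && test_tsk_thread_flag(tsk, TIF_SVE))
                  sve_save(tsk);
  }

  /* After: TIF_SVE should only ever be set when SVE is supported, so the
   * extra system_supports_sve() test adds overhead and silently masks a
   * would-be bug rather than reporting it.
   */
  static void save_state_after(struct task_struct *tsk)
  {
          if (test_tsk_thread_flag(tsk, TIF_SVE))
                  sve_save(tsk);
  }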
2021-04-12  arm64: fpsimd: run kernel mode NEON with softirqs disabled  [Ard Biesheuvel]
Kernel mode NEON can be used in task or softirq context, but only in a non-nesting manner, i.e., softirq context is only permitted if the interrupt was not taken at a point where the kernel was using the NEON in task context. This means all users of kernel mode NEON have to be aware of this limitation, and either need to provide scalar fallbacks that may be much slower (up to 20x for AES instructions) and potentially less safe, or use an asynchronous interface that defers processing to a later time when the NEON is guaranteed to be available. Given that grabbing and releasing the NEON is cheap, we can relax this restriction, by increasing the granularity of kernel mode NEON code, and always disabling softirq processing while the NEON is being used in task context. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210302090118.30666-4-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
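A hedged usage sketch of the kernel-mode NEON API this applies to; neon_transform() is hypothetical, while kernel_neon_begin(), kernel_neon_end() and may_use_simd() are the real interfaces:

  #include <linux/types.h>
  #include <asm/neon.h>
  #include <asm/simd.h>

  static void neon_transform(u8 *dst, const u8 *src, unsigned int blocks)
  { /* hypothetical NEON-accelerated routine */ }

  static void do_transform(u8 *dst, const u8 *src, unsigned int blocks)
  {
          if (!may_use_simd()) {
                  /* scalar fallback would go here */
                  return;
          }

          /* With this change, softirq processing is disabled between
           * kernel_neon_begin() and kernel_neon_end(), so a softirq user of
           * the NEON can no longer interrupt a task-context user mid-region.
           */
          kernel_neon_begin();
          neon_transform(dst, src, blocks);
          kernel_neon_end();
  }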
2021-04-12  arm64: assembler: introduce wxN aliases for wN registers  [Ard Biesheuvel]
The AArch64 asm syntax has this slightly tedious property that the names used in mnemonics to refer to registers depend on whether the opcode in question targets the entire 64-bits (xN), or only the least significant 8, 16 or 32 bits (wN). When writing parameterized code such as macros, this can be annoying, as macro arguments don't lend themselves to indexed lookups, and so generating a reference to wN in a macro that receives xN as an argument is problematic. For instance, an upcoming patch that modifies the implementation of the cond_yield macro to be able to refer to 32-bit registers would need to modify invocations such as cond_yield 3f, x8 to cond_yield 3f, 8 so that the second argument can be token pasted after x or w to emit the correct register reference. Unfortunately, this interferes with the self documenting nature of the first example, where the second argument is obviously a register, whereas in the second example, one would need to go and look at the code to find out what '8' means. So let's fix this by defining wxN aliases for all xN registers, which resolve to the 32-bit alias of each respective 64-bit register. This allows the macro implementation to paste the xN reference after a w to obtain the correct register name. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210302090118.30666-3-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-12  arm64: assembler: remove conditional NEON yield macros  [Ard Biesheuvel]
The users of the conditional NEON yield macros have all been switched to the simplified cond_yield macro, and so the NEON specific ones can be removed. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210302090118.30666-2-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11  kasan, arm64: tests supports for HW_TAGS async mode  [Andrey Konovalov]
This change adds KASAN-KUnit tests support for the async HW_TAGS mode. In async mode, tag faults aren't being generated synchronously when a bad access happens, but are instead explicitly checked for by the kernel. As each KASAN-KUnit test expects a fault to happen before the test is over, check for faults as a part of the test handler. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-10-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11  arm64: mte: Report async tag faults before suspend  [Vincenzo Frascino]
When MTE async mode is enabled TFSR_EL1 contains the accumulative asynchronous tag check faults for EL1 and EL0. During the suspend/resume operations the firmware might perform some operations that could change the state of the register, resulting in a spurious tag check fault report. Report asynchronous tag faults before suspend and clear the TFSR_EL1 register after resume to prevent this from happening. Cc: Will Deacon <will@kernel.org> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-9-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11  arm64: mte: Enable async tag check fault  [Vincenzo Frascino]
MTE provides a mode that asynchronously updates the TFSR_EL1 register when a tag check exception is detected. To take advantage of this mode the kernel has to verify the status of the register at: 1. Context switching 2. Return to user/EL0 (Not required in entry from EL0 since the kernel did not run) 3. Kernel entry from EL1 4. Kernel exit to EL1 If the register is non-zero a trace is reported. Add the required features for EL1 detection and reporting. Note: the ITFSB bit is set in the SCTLR_EL1 register, hence it guarantees that the indirect writes to TFSR_EL1 are synchronized at exception entry to EL1. On the context switch path the synchronization is guaranteed by the dsb() in __switch_to(). The dsb(nsh) in mte_check_tfsr_exit() is provisional pending confirmation by the architects. Cc: Will Deacon <will@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-8-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11  arm64: mte: Conditionally compile mte_enable_kernel_*()  [Vincenzo Frascino]
mte_enable_kernel_*() are not needed if KASAN_HW is disabled. Add #ifdef guards around the functions to conditionally compile them. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-7-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
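A hedged sketch of the shape of the guard; the real functions live in the arm64 MTE code and their bodies are elided here:

  #ifdef CONFIG_KASAN_HW_TAGS
  void mte_enable_kernel_sync(void)
  {
          /* ... switch in-kernel tag checking to synchronous mode ... */
  }

  void mte_enable_kernel_async(void)
  {
          /* ... switch in-kernel tag checking to asynchronous mode ... */
  }
  #endif /* CONFIG_KASAN_HW_TAGS */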
2021-04-11  arm64: mte: Enable TCO in functions that can read beyond buffer limits  [Vincenzo Frascino]
The load_unaligned_zeropad() and __get/put_kernel_nofault() functions can read past some buffer limits, which may include an MTE granule with a different tag. When MTE async mode is enabled and a load operation crosses such a boundary into a granule with a different tag, the PE sets the TFSR_EL1.TF1 bit as if an asynchronous tag fault had happened. Enable Tag Check Override (TCO) in these functions before the load and disable it afterwards to prevent this from happening. Note: The same condition can be hit in MTE sync mode, but we deal with it through the exception handling. In the current implementation, the mte_async_mode flag is set only at boot time, but in future kasan might acquire runtime features that change the mode dynamically, hence we disable it when sync mode is selected, to be future proof. Cc: Will Deacon <will@kernel.org> Reported-by: Branislav Rankov <Branislav.Rankov@arm.com> Tested-by: Branislav Rankov <Branislav.Rankov@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-6-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11  kasan: Add report for async mode  [Vincenzo Frascino]
KASAN provides an asynchronous mode of execution. Add reporting functionality for this mode. Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Andrey Konovalov <andreyknvl@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Link: https://lore.kernel.org/r/20210315132019.33202-5-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11  arm64: mte: Drop arch_enable_tagging()  [Vincenzo Frascino]
arch_enable_tagging() was left in memory.h after the introduction of async mode to not break the bisectability of the KASAN KUnit tests. Remove the function now that KASAN has been fully converted. Cc: Will Deacon <will@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-4-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11  kasan: Add KASAN mode kernel parameter  [Vincenzo Frascino]
Architectures supported by KASAN_HW_TAGS can provide a sync or async mode of execution. On an MTE enabled arm64 hw for example this can be identified with the synchronous or asynchronous tagging mode of execution. In synchronous mode, an exception is triggered if a tag check fault occurs. In asynchronous mode, if a tag check fault occurs, the TFSR_EL1 register is updated asynchronously. The kernel checks the corresponding bits periodically. KASAN requires a specific kernel command line parameter to make use of this hw feature. Add a KASAN HW execution mode kernel command line parameter. Note: This patch adds the kasan.mode kernel parameter and the sync/async kernel command line options to enable the described features. [ Add a new var instead of exposing kasan_arg_mode to be consistent with flags for other command line arguments. ] Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Andrey Konovalov <andreyknvl@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Link: https://lore.kernel.org/r/20210315132019.33202-3-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
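A hedged sketch of how a kasan.mode= parameter can be parsed at early boot; the real parsing lives in mm/kasan/hw_tags.c and may differ in detail:

  #include <linux/init.h>
  #include <linux/string.h>
  #include <linux/errno.h>
  #include <linux/cache.h>

  enum kasan_arg_mode {
          KASAN_ARG_MODE_DEFAULT,
          KASAN_ARG_MODE_SYNC,
          KASAN_ARG_MODE_ASYNC,
  };

  static enum kasan_arg_mode kasan_arg_mode __ro_after_init;

  /* kasan.mode=sync|async */
  static int __init early_kasan_mode(char *arg)
  {
          if (!arg)
                  return -EINVAL;

          if (!strcmp(arg, "sync"))
                  kasan_arg_mode = KASAN_ARG_MODE_SYNC;
          else if (!strcmp(arg, "async"))
                  kasan_arg_mode = KASAN_ARG_MODE_ASYNC;
          else
                  return -EINVAL;

          return 0;
  }
  early_param("kasan.mode", early_kasan_mode);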
2021-04-11  arm64: mte: Add asynchronous mode support  [Vincenzo Frascino]
MTE provides an asynchronous mode for detecting tag exceptions. In particular instead of triggering a fault the arm64 core updates a register which is checked by the kernel after the asynchronous tag check fault has occurred. Add support for MTE asynchronous mode. The exception handling mechanism will be added with a future patch. Note: KASAN HW activates async mode via kasan.mode kernel parameter. The default mode is set to synchronous. The code that verifies the status of TFSR_EL1 will be added with a future patch. Cc: Will Deacon <will@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Andrey Konovalov <andreyknvl@google.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-2-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08  arm64: Get rid of CONFIG_ARM64_VHE  [Marc Zyngier]
CONFIG_ARM64_VHE was introduced with ARMv8.1 (some 7 years ago), and has been enabled by default for almost all that time. Given that newer systems that are VHE capable are finally becoming available, and that some systems are even incapable of not running VHE, drop the configuration altogether. Anyone willing to stick to non-VHE on VHE hardware for obscure reasons should use the 'kvm-arm.mode=nvhe' command-line option. Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-4-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08  arm64: Cope with CPUs stuck in VHE mode  [Marc Zyngier]
It seems that the CPUs part of the SoC known as Apple M1 have the terrible habit of being stuck with HCR_EL2.E2H==1, in violation of the architecture. Try and work around this deplorable state of affairs by detecting the stuck bit early and short-circuit the nVHE dance. Additional filtering code ensures that attempts at switching to nVHE from the command-line are also ignored. It is still unknown whether there are many more such nuggets to be found... Reported-by: Hector Martin <marcan@marcan.st> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-3-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08  arm64: cpufeature: Allow early filtering of feature override  [Marc Zyngier]
Some CPUs are broken enough that some overrides need to be rejected at the earliest opportunity. In some cases, that's right at cpu feature override time. Provide the necessary infrastructure to filter out overrides, and to report such filtered out overrides to the core cpufeature code. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-2-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08  arm64: Require that system registers at all visible ELs be initialized  [Mark Brown]
Currently we require that software at a higher exception level initialise all registers at the exception level at which the kernel will be entered, prior to starting the kernel, in order to ensure that there is nothing uninitialised which could result in an UNKNOWN state while running the kernel. The expectation is that the software running at the highest exception levels will be tightly coupled to the system and can ensure that all available features are appropriately initialised and that the kernel can initialise anything else. There is a gap here in the case where new registers are added to lower exception levels that require initialisation but the kernel does not yet understand them. Extend the requirement to also include exception levels below the one where the kernel is entered to cover this. Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210401180942.35815-4-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08  arm64: Disable fine grained traps on boot  [Mark Brown]
The arm64 FEAT_FGT extension introduces a set of traps to EL2 for accesses to small sets of registers and instructions from EL1 and EL0. Currently Linux makes no use of this feature, ensure that it is not active at boot by disabling the traps during EL2 setup. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210401180942.35815-3-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08  arm64: Document requirements for fine grained traps at boot  [Mark Brown]
The arm64 FEAT_FGT extension introduces a set of traps to EL2 for accesses to small sets of registers and instructions from EL1 and EL0, access to which is controlled by EL3. Require access to it so that it is available to us in future and so that we can ensure these traps are disabled during boot. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210401180942.35815-2-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08  arm64: mte: Remove unused mte_assign_mem_tag_range()  [Vincenzo Frascino]
mte_assign_mem_tag_range() was added in commit 85f49cae4dfc ("arm64: mte: add in-kernel MTE helpers") in 5.11 but moved out of mte.S by commit 2cb34276427a ("arm64: kasan: simplify and inline MTE functions") in 5.12 and renamed to mte_set_mem_tag_range(). 2cb34276427a did not delete the old function prototypes in mte.h. Remove the unused prototype from mte.h. Cc: Will Deacon <will@kernel.org> Reported-by: Derrick McKee <derrick.mckee@gmail.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210407133817.23053-1-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08  arm64: Add __init section marker to some functions  [Jisheng Zhang]
They are not needed after booting, so mark them as __init to move them to the .init section. Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com> Reviewed-by: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/20210330135449.4dcffd7f@xhacker.debian Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
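For reference, a minimal illustration of the annotation (enable_foo_feature() is a hypothetical example, not one of the functions touched by the patch):

  #include <linux/init.h>

  /* __init places the function in .init.text, which the kernel frees once
   * boot is complete, so code that only runs during boot costs no memory
   * afterwards.
   */
  static int __init enable_foo_feature(void)
  {
          /* one-time boot configuration */
          return 0;
  }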
2021-04-08  arm64/sve: Rework SVE access trap to convert state in registers  [Mark Brown]
When we enable SVE usage in userspace after taking a SVE access trap we need to ensure that the portions of the register state that are not shared with the FPSIMD registers are zeroed. Currently we do this by forcing the FPSIMD registers to be saved to the task struct and converting them there. This is wasteful in the common case where the task state is loaded into the registers and we will immediately return to userspace since we can initialise the SVE state directly in registers instead of accessing multiple copies of the register state in memory. Instead in that common case do the conversion in the registers and update the task metadata so that we can return to userspace without spilling the register state to memory unless there is some other reason to do so. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210312190313.24598-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-01  arm64: perf: Remove redundant initialization in perf_event.c  [Qi Liu]
The initializations of 'value' in armv8pmu_read_hw_counter() and armv8pmu_read_counter() seem redundant, as the values are soon updated. So, we can remove them. Signed-off-by: Qi Liu <liuqi115@huawei.com> Link: https://lore.kernel.org/r/1617275801-1980-1-git-send-email-liuqi115@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2021-03-30  perf/arm_pmu_platform: Clean up with dev_printk  [Robin Murphy]
Nearly all of the messages we can log from the platform device code relate to the specific PMU device and the properties we're parsing from its DT node. In some cases we use %pOF to point at where something was wrong, but even that is inconsistent. Let's convert these logs to the appropriate dev_printk variants, so that every issue specific to the device and/or its DT description is clearly and instantly attributable, particularly if there is more than one PMU node present in the DT. The local refactoring in a couple of functions invites some extra cleanup in the process - the init_fn matching can be streamlined, and the PMU registration failure message moved to the appropriate place and log level. CC: Tian Tao <tiantao6@hisilicon.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/10a4aacdf071d0c03d061c408a5899e5b32cc0a6.1616774562.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-03-30  perf/arm_pmu_platform: Fix error handling  [Robin Murphy]
If we're aborting after failing to register the PMU device, we probably don't want to leak the IRQs that we've claimed. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/53031a607fc8412a60024bfb3bb8cd7141f998f5.1616774562.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
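A hedged sketch of the intended error-handling shape; the probe flow is abridged and the helper names are stand-ins modelled on the arm_pmu code, not verbatim driver functions:

  struct arm_pmu;

  /* Stand-ins for the driver's internal helpers. */
  int armpmu_request_irqs(struct arm_pmu *pmu);
  void armpmu_free_irqs(struct arm_pmu *pmu);
  int armpmu_register(struct arm_pmu *pmu);

  static int pmu_probe_sketch(struct arm_pmu *pmu)
  {
          int err;

          err = armpmu_request_irqs(pmu);
          if (err)
                  goto out;

          err = armpmu_register(pmu);
          if (err)
                  goto out_free_irqs;     /* don't leak the claimed IRQs */

          return 0;

  out_free_irqs:
          armpmu_free_irqs(pmu);
  out:
          return err;
  }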
2021-03-30  perf/arm_pmu_platform: Use dev_err_probe() for IRQ errors  [Robin Murphy]
By virtue of using platform_irq_get_optional() under the covers, platform_irq_count() needs the target interrupt controller to be available and may return -EPROBE_DEFER if it isn't. Let's use dev_err_probe() to avoid a spurious error log (and help debug any deferral issues) in that case. Reported-by: Paul Menzel <pmenzel@molgen.mpg.de> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/073d5e0d3ed1f040592cb47ca6fe3759f40cc7d1.1616774562.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
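A hedged sketch of the pattern; count_pmu_irqs() is hypothetical, while dev_err_probe() and platform_irq_count() are the real APIs:

  #include <linux/device.h>
  #include <linux/platform_device.h>

  static int count_pmu_irqs(struct platform_device *pdev)
  {
          int num_irqs = platform_irq_count(pdev);

          /* dev_err_probe() stays quiet for -EPROBE_DEFER (recording the
           * deferral reason for later inspection instead) and logs an error
           * for anything else, then hands the error code back to the caller.
           */
          if (num_irqs < 0)
                  return dev_err_probe(&pdev->dev, num_irqs,
                                       "unable to count PMU IRQs\n");

          return num_irqs;
  }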
2021-03-30  docs: perf: Address some html build warnings  [Qi Liu]
Fix following html build warnings:

  Documentation/admin-guide/perf/hisi-pmu.rst:61: WARNING: Unexpected indentation.
  Documentation/admin-guide/perf/hisi-pmu.rst:62: WARNING: Block quote ends without a blank line; unexpected unindent.
  Documentation/admin-guide/perf/hisi-pmu.rst:69: WARNING: Unexpected indentation.
  Documentation/admin-guide/perf/hisi-pmu.rst:70: WARNING: Block quote ends without a blank line; unexpected unindent.
  Documentation/admin-guide/perf/hisi-pmu.rst:83: WARNING: Unexpected indentation.

Fixes: 9b86b1b41e0f ("docs: perf: Add new description on HiSilicon uncore PMU v2") Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Qi Liu <liuqi115@huawei.com> Link: https://lore.kernel.org/r/1617021121-31450-1-git-send-email-liuqi115@huawei.com Signed-off-by: Will Deacon <will@kernel.org>