path: root/arch/arm/include/asm
Age    Commit message    Author
2020-01-29ARM: 8847/1: pm: fix HYP/SVC mode mismatch when MCPM is usedMarek Szyprowski
[ Upstream commit ca70ea43f80c98582f5ffbbd1e6f4da2742da0c4 ] MCPM does a soft reset of the CPUs and uses the common cpu_resume() routine to perform low-level platform initialization. This results in an attempt to install the HYP stubs a second time on each CPU, which triggers a false HYP/SVC mode mismatch detection. The HYP stubs are already installed at the beginning of kernel initialization, on the boot CPU (head.S) or in secondary_startup() for the other CPUs. To fix this issue, the MCPM code should use a cpu_resume() variant that skips HYP stub installation. This change fixes the HYP/SVC mode mismatch on Samsung Exynos5422-based Odroid XU3/XU4/HC1 boards. Fixes: 3721924c8154 ("ARM: 8081/1: MCPM: provide infrastructure to allow for MCPM loopback") Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Nicolas Pitre <nico@linaro.org> Tested-by: Anand Moon <linux.amoon@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-12-21ARM: 8813/1: Make aligned 2-byte getuser()/putuser() atomic on ARMv6+Vincent Whitchurch
[ Upstream commit 344eb5539abf3e0b6ce22568c03e86450073e097 ] getuser() and putuser() (and their underscored variants) use two strb[t]/ldrb[t] instructions when they are asked to get/put 16 bits. This means that the read/write is not atomic even when performed on a 16-bit-aligned address. This leads to problems with vhost: vhost uses __getuser() to read the vring's 16-bit avail.index field, and if it happens to observe a partial update of the index, the wrong descriptors will be used, which will lead to a breakdown of the virtio communication. A similar problem exists for __putuser(), which is used to write to the vring's used.index field. The reason these functions use strb[t]/ldrb[t] is that the strht/ldrht instructions did not exist until ARMv6T2/ARMv7, so this is easy to fix on ARMv7. Also, since ARMv6 processors no longer actually use the unprivileged instructions for uaccess (because CONFIG_CPU_USE_DOMAINS is not used), we can easily fix them too. Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Sasha Levin <sashal@kernel.org>
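To illustrate the idea of the fix, a minimal sketch (the helper name is invented; the real implementation lives in arch/arm/lib/getuser.S and putuser.S and carries exception-table fixups, omitted here):

    /*
     * Sketch only: a 16-bit user read performed as ONE halfword load.
     * On ARMv6+, an aligned ldrh is single-copy atomic, so a concurrent
     * 16-bit store (e.g. to the vring avail.index field) can never be
     * observed half-updated; the old two-ldrb sequence could be.
     */
    static inline u16 sketch_get_user_u16(const u16 __user *ptr)
    {
            u16 val;

            /* fault fixup omitted for brevity */
            asm volatile("ldrh   %0, [%1]" : "=r" (val) : "r" (ptr));
            return val;
    }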
2019-06-22ARM: prevent tracing IPI_CPU_BACKTRACEArnd Bergmann
[ Upstream commit be167862ae7dd85c56d385209a4890678e1b0488 ] Patch series "compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING", v3. This patch (of 11): When function tracing for IPIs is enabled, we get a warning for an overflow of the ipi_types array with the IPI_CPU_BACKTRACE type, as triggered by raise_nmi():

    arch/arm/kernel/smp.c: In function 'raise_nmi':
    arch/arm/kernel/smp.c:489:2: error: array subscript is above array bounds [-Werror=array-bounds]
      trace_ipi_raise(target, ipi_types[ipinr]);

This is a correct warning, as we actually overflow the array here. This patch changes raise_nmi() to call __smp_cross_call() instead of smp_cross_call(), to avoid calling into ftrace. For clarification, I'm also adding two new code comments describing how this one is special. The warning appears to have shown up after commit e7273ff49acf ("ARM: 8488/1: Make IPI_CPU_BACKTRACE a "non-secure" SGI"), which changed the number assignment from '15' to '8', but as far as I can tell the problem has existed since the IPI tracepoints were first introduced. If we decide to backport this patch to stable kernels, we probably need to backport e7273ff49acf as well. [yamada.masahiro@socionext.com: rebase on v5.1-rc1] Link: http://lkml.kernel.org/r/20190423034959.13525-2-yamada.masahiro@socionext.com Fixes: e7273ff49acf ("ARM: 8488/1: Make IPI_CPU_BACKTRACE a "non-secure" SGI") Fixes: 365ec7b17327 ("ARM: add IPI tracepoints") # v3.17 Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Mathieu Malaterre <malat@debian.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Stefan Agner <stefan@agner.ch> Cc: Boris Brezillon <bbrezillon@kernel.org> Cc: Miquel Raynal <miquel.raynal@bootlin.com> Cc: Richard Weinberger <richard@nod.at> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Brian Norris <computersforpeace@gmail.com> Cc: Marek Vasut <marek.vasut@gmail.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Borislav Petkov <bp@suse.de> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-05-31ARM: vdso: Remove dependency with the arch_timer driver internalsMarc Zyngier
[ Upstream commit 1f5b62f09f6b314c8d70b9de5182dae4de1f94da ] The VDSO code uses the kernel helper that was originally designed to abstract the access between 32 and 64bit systems. It worked so far because this function is declared as 'inline'. As we're about to revamp that part of the code, the VDSO would break. Let's fix it by doing what should have been done from the start, a proper system register access. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
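For reference, a direct read of the virtual counter on 32-bit ARM looks roughly like this (a sketch of the kind of system register access the commit switches to; isb() is the kernel's barrier helper, and the function name is illustrative):

    static inline u64 read_cntvct(void)
    {
            u64 cval;

            isb();  /* keep the counter read from being speculated early */
            asm volatile("mrrc p15, 1, %Q0, %R0, c14" : "=r" (cval));
            return cval;
    }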
2019-04-05ARM: avoid Cortex-A9 livelock on tight dmb loopsRussell King
[ Upstream commit 5388a5b82199facacd3d7ac0d05aca6e8f902fed ] machine_crash_nonpanic_core() does this:

    while (1)
        cpu_relax();

because the kernel has crashed, and we have no known safe way to deal with the CPU. So, we place the CPU into an infinite loop which we expect it to never exit - at least not until the system as a whole is reset by some method. In the absence of erratum 754327, this code assembles to:

    b .

In other words, an infinite loop. When erratum 754327 is enabled, this becomes:

    1: dmb
       b 1b

It has been observed on some systems (eg, OMAP4) that, if a crash is triggered, the system tries to kexec into the panic kernel but fails after taking the secondary CPU down - placing it into one of these loops. This causes the system to livelock, and the most noticeable effect is that the system stops after issuing:

    Loading crashdump kernel...

to the system console. The solution I came up with, tested as working, was to add wfe() to these infinite loops, thusly:

    while (1) {
        cpu_relax();
        wfe();
    }

which, without 754327, builds to:

    1: wfe
       b 1b

or, with 754327 enabled:

    1: dmb
       wfe
       b 1b

Adding "wfe" does two things depending on the environment we're running under:

- where we're running on bare metal, and the processor implements "wfe", it stops us spinning endlessly in a loop where we're never going to do any useful work.
- if we're running in a VM, it allows the CPU to be given back to the hypervisor and rescheduled for other purposes (maybe a different VM) rather than wasting CPU cycles inside a crashed VM.

However, in light of erratum 794072, Will Deacon wanted to see 10 nops as well - which is reasonable to cover the case where we have erratum 754327 enabled _and_ we have a processor that doesn't implement the wfe hint. So, we now end up with:

    1: wfe
       b 1b

when erratum 754327 is disabled, or:

    1: dmb
       nop
       nop
       nop
       nop
       nop
       nop
       nop
       nop
       nop
       nop
       wfe
       b 1b

when erratum 754327 is enabled. We also get the dmb + 10 nop sequence elsewhere in the kernel, in terminating loops. This is reasonable - it means we get the workaround for erratum 794072 when erratum 754327 is enabled, but still relinquish the dead processor - either by placing it in a lower power mode when wfe is implemented as such, or by returning it to the hypervisor; in the case where wfe is a no-op, we use the workaround specified in erratum 794072 to avoid the problem. These are two entirely orthogonal concerns - the 10 nops address erratum 794072, and the wfe is an optimisation that makes the system more efficient when crashed, either in terms of power consumption or by allowing the host/other VMs to make use of the CPU. I don't see any reason not to use kexec() inside a VM - it has the potential to provide automated recovery from a failure of the VM's kernel, with the opportunity to save a crashdump of the failure. A panic() with a reboot timeout won't do that, and reading the libvirt documentation, setting on_reboot to "preserve" won't either (the documentation states "The preserve action for an on_reboot event is treated as a destroy".) Surely it has to be a good thing to avoid having CPUs spinning inside a VM that is doing no useful work. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-03-23ARM: 8824/1: fix a migrating irq bug when hotplug cpuDietmar Eggemann
[ Upstream commit 1b5ba350784242eb1f899bcffd95d2c7cff61e84 ] Arm TC2 fails cpu hotplug stress test. This issue was tracked down to a missing copy of the new affinity cpumask for the vexpress-spc interrupt into struct irq_common_data.affinity when the interrupt is migrated in migrate_one_irq(). Fix it by replacing the arm specific hotplug cpu migration with the generic irq code. This is the counterpart implementation to commit 217d453d473c ("arm64: fix a migrating irq bug when hotplug cpu"). Tested with cpu hotplug stress test on Arm TC2 (multi_v7_defconfig plus CONFIG_ARM_BIG_LITTLE_CPUFREQ=y and CONFIG_ARM_VEXPRESS_SPC_CPUFREQ=y). The vexpress-spc interrupt (irq=22) on this board is affine to CPU0. Its affinity cpumask now changes correctly e.g. from 0 to 1-4 when CPU0 is hotplugged out. Suggested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-02-20ARM: spectre-v2: per-CPU vtables to work around big.Little systemsRussell King
Commit 383fb3ee8024d596f488d2dbaf45e572897acbdb upstream. In big.Little systems, some CPUs require the Spectre workarounds in paths such as the context switch, but other CPUs do not. In order to handle these differences, we need per-CPU vtables. We are unable to use the kernel's per-CPU variables to support this as per-CPU is not initialised at times when we need access to the vtables, so we have to use an array indexed by logical CPU number. We use an array-of-pointers to avoid having function pointers in the kernel's read/write .data section. Note: Added include of linux/slab.h in arch/arm/smp.c. Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David A. Long <dave.long@linaro.org> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-02-20ARM: add PROC_VTABLE and PROC_TABLE macrosRussell King
Commit e209950fdd065d2cc46e6338e47e52841b830cba upstream. Allow the way we access members of the processor vtable to be changed at compile time. We will need to move to per-CPU vtables to fix the Spectre variant 2 issues on big.Little systems. However, we have a couple of calls that do not need the vtable treatment, and indeed cause a kernel warning due to the (later) use of smp_processor_id(), so also introduce the PROC_TABLE macro for these which always use CPU 0's function pointers. Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David A. Long <dave.long@linaro.org> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-02-20ARM: clean up per-processor check_bugs method callRussell King
Commit 945aceb1db8885d3a35790cf2e810f681db52756 upstream. Call the per-processor type check_bugs() method in the same way as we do other per-processor functions - move the "processor." detail into proc-fns.h. Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David A. Long <dave.long@linaro.org> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-02-20ARM: split out processor lookupRussell King
Commit 65987a8553061515b5851b472081aedb9837a391 upstream. Split out the lookup of the processor type and associated error handling from the rest of setup_processor() - we will need to use this in the secondary CPU bringup path for big.Little Spectre variant 2 mitigation. Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David A. Long <dave.long@linaro.org> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-02-20ARM: 8796/1: spectre-v1,v1.1: provide helpers for address sanitizationJulien Thierry
Commit afaf6838f4bc896a711180b702b388b8cfa638fc upstream. Introduce C and asm helpers to sanitize user address, taking the address range they target into account. Use asm helper for existing sanitization in __copy_from_user(). Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David A. Long <dave.long@linaro.org> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-02-20ARM: 8795/1: spectre-v1.1: use put_user() for __put_user()Julien Thierry
Commit e3aa6243434fd9a82e84bb79ab1abd14f2d9a5a7 upstream. When Spectre mitigation is required, __put_user() needs to include check_uaccess. This is already the case for put_user(), so just make __put_user() an alias of put_user(). Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David A. Long <dave.long@linaro.org> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-02-20ARM: 8794/1: uaccess: Prevent speculative use of the current addr_limitJulien Thierry
Commit 621afc677465db231662ed126ae1f355bf8eac47 upstream. A mispredicted conditional call to set_fs could result in the wrong addr_limit being forwarded under speculation to a subsequent access_ok check, potentially forming part of a spectre-v1 attack using uaccess routines. This patch prevents this forwarding from taking place by putting heavy barriers in set_fs after writing the addr_limit. Porting commit c2f0ad4fc089cff8 ("arm64: uaccess: Prevent speculative use of the current addr_limit"). Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David A. Long <dave.long@linaro.org> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-02-20ARM: 8791/1: vfp: use __copy_to_user() when saving VFP stateJulien Thierry
Commit 3aa2df6ec2ca6bc143a65351cca4266d03a8bc41 upstream. Use __copy_to_user() rather than __put_user_error() for individual members when saving VFP state. This has the benefit of disabling/enabling PAN once per copied struct instead of once per write. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David A. Long <dave.long@linaro.org> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2018-11-23ARM: spectre-v1: mitigate user accessesRussell King
Commit a3c0f84765bb429ba0fd23de1c57b5e1591c9389 upstream. Spectre variant 1 attacks are about this sequence of pseudo-code:

    index = load(user-manipulated pointer);
    access(base + index * stride);

In order for the cache side-channel to work, the access() must be made to memory for which userspace can detect whether cache lines have been loaded. On 32-bit ARM, this must be either user accessible memory, or a kernel mapping of that same user accessible memory. The problem occurs when the load() speculatively loads privileged data, and the subsequent access() is made to user accessible memory. Any load() which makes use of a user-manipulated pointer is a potential problem if the data it has loaded is used in a subsequent access. This also applies to the access() if the data loaded by that access is used by a subsequent access. Harden the get_user() accessors against Spectre attacks by forcing out-of-bounds addresses to a NULL pointer. This prevents get_user() being used as the load() step above. As a side effect, put_user() will also be affected even though it isn't implicated. Also harden copy_from_user() by redoing the bounds check within the arm_copy_from_user() code, and NULLing the pointer if out of bounds. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-23ARM: spectre-v1: use get_user() for __get_user()Russell King
Commit b1cd0a14806321721aae45f5446ed83a3647c914 upstream. Fixing __get_user() for spectre variant 1 is not sane: we would have to add address space bounds checking in order to validate that the location should be accessed, and then zero the address if found to be invalid. Since __get_user() is supposed to avoid the bounds check, and this is exactly what get_user() does, there's no point having two different implementations that are doing the same thing. So, when the Spectre workarounds are required, make __get_user() an alias of get_user(). Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
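The resulting arrangement in asm/uaccess.h, sketched (the unchecked fallback branch is elided):

    #ifdef CONFIG_CPU_SPECTRE
    /*
     * With Spectre mitigations enabled, fixing the unchecked accessor
     * would mean adding the very bounds check it exists to avoid, so
     * alias it to the checking variant instead.
     */
    #define __get_user(x, ptr) get_user(x, ptr)
    #else
    /* ... the original unchecked fast path stays here ... */
    #endif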
2018-11-23ARM: use __inttype() in get_user()Russell King
Commit d09fbb327d670737ab40fd8bbb0765ae06b8b739 upstream. Borrow the x86 implementation of __inttype() to use in get_user() to select an integer type suitable to temporarily hold the result value. This is necessary to avoid propagating the volatile nature of the result argument, which can cause the following warning:

    lib/iov_iter.c:413:5: warning: optimization may eliminate reads and/or writes to register variables [-Wvolatile-register-var]

Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
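The borrowed macro itself is tiny: it yields an unsigned, non-volatile integer type wide enough to hold x (this is the x86 definition the commit adopts):

    /* integer type wide enough for x, with no volatile qualifier attached */
    #define __inttype(x) \
            __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))

Reading the user value into an __inttype(x) temporary and assigning it back keeps the volatile qualifier out of the intermediate access, which is what silences the warning.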
2018-11-23ARM: vfp: use __copy_from_user() when restoring VFP stateRussell King
Commit 42019fc50dfadb219f9e6ddf4c354f3837057d80 upstream. __get_user_error() is used as a fast accessor to make copying structure members in the signal handling path as efficient as possible. However, with software PAN and the recent Spectre variant 1, the efficiency is reduced as these are no longer fast accessors. In the case of software PAN, it has to switch the domain register around each access, and with Spectre variant 1, it would have to repeat the access_ok() check for each access. Use __copy_from_user() rather than __get_user_error() for individual members when restoring VFP state. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-23ARM: spectre-v1: add array_index_mask_nospec() implementationRussell King
Commit 1d4238c56f9816ce0f9c8dbe42d7f2ad81cb6613 upstream. Add an implementation of the array_index_mask_nospec() function for mitigating Spectre variant 1 throughout the kernel. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Acked-by: Mark Rutland <mark.rutland@arm.com> Boot-tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Tony Lindgren <tony@atomide.com> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
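For reference, a sketch of the 32-bit implementation: a compare/subtract-with-carry pair followed by the CSDB barrier added by the companion csdb commit below:

    /* returns ~0UL when idx < sz, 0 otherwise -- without a branch */
    #define array_index_mask_nospec array_index_mask_nospec
    static inline unsigned long array_index_mask_nospec(unsigned long idx,
                                                        unsigned long sz)
    {
            unsigned long mask;

            asm volatile(
                    "cmp    %1, %2\n"       /* carry set iff idx >= sz */
            "       sbc     %0, %1, %1\n"   /* idx - idx - !carry: 0 or ~0 */
            CSDB                            /* stop speculative use of a stale mask */
                    : "=r" (mask)
                    : "r" (idx), "Ir" (sz)
                    : "cc");
            return mask;
    }

Callers AND a user-supplied index with the returned mask before using it, so an out-of-bounds index becomes zero even under speculation.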
2018-11-23ARM: spectre-v1: add speculation barrier (csdb) macrosRussell King
Commit a78d156587931a2c3b354534aa772febf6c9e855 upstream. Add assembly and C macros for the new CSDB instruction. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Acked-by: Mark Rutland <mark.rutland@arm.com> Boot-tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Tony Lindgren <tony@atomide.com> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
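A sketch of the macros (CSDB is architectural hint #20, so these are the ARM and Thumb-2 hint encodings, emitted via .inst so older assemblers accept them):

    #ifdef CONFIG_THUMB2_KERNEL
    #define CSDB    ".inst.w 0xf3af8014"    /* Thumb-2 encoding of CSDB */
    #else
    #define CSDB    ".inst   0xe320f014"    /* ARM encoding of CSDB */
    #endif

    /* C-level speculation barrier */
    #define csdb()  __asm__ __volatile__(CSDB : : : "memory")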
2018-11-23ARM: KVM: report support for SMCCC_ARCH_WORKAROUND_1Russell King
Commit add5609877c6785cc002c6ed7e008b1d61064439 upstream. Report support for SMCCC_ARCH_WORKAROUND_1 to KVM guests for affected CPUs. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Boot-tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-23ARM: spectre-v2: KVM: invalidate icache on guest exit for Brahma B15Russell King
Commit 3c908e16396d130608e831b7fac4b167a2ede6ba upstream. Include Brahma B15 in the Spectre v2 KVM workarounds. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Boot-tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Tony Lindgren <tony@atomide.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-23ARM: KVM: invalidate icache on guest exit for Cortex-A15Marc Zyngier
Commit 0c47ac8cd157727e7a532d665d6fb1b5fd333977 upstream. In order to avoid aliasing attacks against the branch predictor on Cortex-A15, let's invalidate the BTB on guest exit, which can only be done by invalidating the icache (with ACTLR[0] being set). We use the same hack as for A12/A17 to perform the vector decoding. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Boot-tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Tony Lindgren <tony@atomide.com> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-23ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17Marc Zyngier
Commit 3f7e8e2e1ebda787f156ce46e3f0a9ce2833fa4f upstream. In order to avoid aliasing attacks against the branch predictor, let's invalidate the BTB on guest exit. This is made complicated by the fact that we cannot take a branch before invalidating the BTB. We only apply this to A12 and A17, which are the only two ARM cores on which this is useful. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Boot-tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Tony Lindgren <tony@atomide.com> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-23ARM: spectre-v2: harden user aborts in kernel spaceRussell King
Commit f5fe12b1eaee220ce62ff9afb8b90929c396595f upstream. In order to prevent aliasing attacks on the branch predictor, invalidate the BTB or instruction cache on CPUs that are known to be affected when taking an abort on an address that is outside of the user task limit: Cortex A8, A9, A12, A17, A73, A75: flush BTB. Cortex A15, Brahma B15: invalidate icache. If the IBE bit is not set, then there is little point to enabling the workaround. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Boot-tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Tony Lindgren <tony@atomide.com> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-23ARM: bugs: add support for per-processor bug checkingRussell King
Commit 9d3a04925deeabb97c8e26d940b501a2873e8af3 upstream. Add support for per-processor bug checking - each processor function descriptor gains a function pointer for this check, which must not be an __init function. If non-NULL, this will be called whenever a CPU enters the kernel via whichever path (boot CPU, secondary CPU startup, CPU resuming, etc.) This allows processor specific bug checks to validate that workaround bits are properly enabled by firmware via all entry paths to the kernel. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Boot-tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Tony Lindgren <tony@atomide.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-23ARM: bugs: hook processor bug checking into SMP and suspend pathsRussell King
Commit 26602161b5ba795928a5a719fe1d5d9f2ab5c3ef upstream. Check for CPU bugs when secondary processors are being brought online, and also when CPUs are resuming from a low power mode. This gives an opportunity to check that processor specific bug workarounds are correctly enabled for all paths by which a CPU re-enters the kernel. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Boot-tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Tony Lindgren <tony@atomide.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-23ARM: bugs: prepare processor bug infrastructureRussell King
Commit a5b9177f69329314721aa7022b7e69dab23fa1f0 upstream. Prepare the processor bug infrastructure so that it can be expanded to check for per-processor bugs. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Boot-tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Tony Lindgren <tony@atomide.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-23ARM: add more CPU part numbers for Cortex and Brahma B15 CPUsRussell King
Commit f5683e76f35b4ec5891031b6a29036efe0a1ff84 upstream. Add CPU part numbers for Cortex A53, A57, A72, A73, A75 and the Broadcom Brahma B15 CPU. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Boot-tested-by: Tony Lindgren <tony@atomide.com> Reviewed-by: Tony Lindgren <tony@atomide.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-22arm64: KVM: Add ARCH_WORKAROUND_2 discovery through ARCH_FEATURES_FUNC_IDMarc Zyngier
commit 5d81f7dc9bca4f4963092433e27b508cbe524a32 upstream. Now that all our infrastructure is in place, let's expose the availability of ARCH_WORKAROUND_2 to guests. We take this opportunity to tidy up a couple of SMCCC constants. Acked-by: Christoffer Dall <christoffer.dall@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-22arm64: KVM: Add ARCH_WORKAROUND_2 support for guestsMarc Zyngier
commit 55e3748e8902ff641e334226bdcb432f9a5d78d3 upstream. In order to offer ARCH_WORKAROUND_2 support to guests, we need a bit of infrastructure. Let's add a flag indicating whether or not the guest uses SSBD mitigation. Depending on the state of this flag, allow KVM to disable ARCH_WORKAROUND_2 before entering the guest, and enable it when exiting it. Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-22KVM: arm/arm64: Do not use kern_hyp_va() with kvm_vgic_global_stateMarc Zyngier
Commit 44a497abd621a71c645f06d3d545ae2f46448830 upstream. kvm_vgic_global_state is part of the read-only section, and is usually accessed using a PC-relative address generation (adrp + add). It is thus useless to use kern_hyp_va() on it, and actively problematic if kern_hyp_va() becomes non-idempotent. On the other hand, there is no way that the compiler is going to guarantee that such access is always PC relative. So let's bite the bullet and provide our own accessor. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-03ARM: 8764/1: kgdb: fix NUMREGBYTES so that gdb_regs[] is the correct sizeDavid Rivshin
commit 76ed0b803a2ab793a1b27d1dfe0de7955282cd34 upstream. NUMREGBYTES (which is used as the size for gdb_regs[]) is incorrectly based on DBG_MAX_REG_NUM instead of GDB_MAX_REGS. DBG_MAX_REG_NUM is the total number of registers, while GDB_MAX_REGS is the number of 'unsigned longs' it takes to serialize those registers. Since FP registers require 3 'unsigned longs' each, DBG_MAX_REG_NUM is smaller than GDB_MAX_REGS. This causes GDB 8.0 to give the following error on connect: "Truncated register 19 in remote 'g' packet" This also causes the register serialization/deserialization logic to overflow gdb_regs[], overwriting whatever follows. Fixes: 834b2964b7ab ("kgdb,arm: fix register dump") Cc: <stable@vger.kernel.org> # 2.6.37+ Signed-off-by: David Rivshin <drivshin@allworx.com> Acked-by: Rabin Vincent <rabin@rab.in> Tested-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
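A sketch of the relevant definitions in asm/kgdb.h and the one-line fix (each FP register serializes as 3 words on the wire, hence the two different counts):

    #define _GP_REGS        16      /* r0-r15 */
    #define _FP_REGS        8       /* f0-f7: 3 words each when serialized */
    #define _EXTRA_REGS     2       /* fps, cpsr */
    #define DBG_MAX_REG_NUM (_GP_REGS + _FP_REGS + _EXTRA_REGS)
    #define GDB_MAX_REGS    (_GP_REGS + (_FP_REGS * 3) + _EXTRA_REGS)

    /* was (DBG_MAX_REG_NUM << 2): too small, so gdb_regs[] overflowed */
    #define NUMREGBYTES     (GDB_MAX_REGS << 2)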
2018-05-30ARM: 8748/1: mm: Define vdso_start, vdso_end as arrayJinbum Park
[ Upstream commit 73b9160d0dfe44dfdaffd6465dc1224c38a4a73c ] Define vdso_start and vdso_end as arrays to avoid a compile-time analysis error when built with CONFIG_FORTIFY_SOURCE, and, since vdso_start and vdso_end are used only in vdso.c, move the extern declaration from vdso.h to vdso.c. If the kernel is built with CONFIG_FORTIFY_SOURCE, a compile-time error occurs at this code:

    if (memcmp(&vdso_start, "\177ELF", 4))

The size of "&vdso_start" is recognized as 1 byte, but n is 4, so a compile-time error is reported. Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: Jinbum Park <jinb.park7@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
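The change itself, sketched:

    /* before (sketch): extern char vdso_start, vdso_end;
     * FORTIFY_SOURCE treats &vdso_start as a 1-byte object, so
     * memcmp(&vdso_start, "\177ELF", 4) fails to compile. */

    /* after: incomplete array types carry no bogus size information */
    extern char vdso_start[], vdso_end[];

    static int vdso_is_elf(void)
    {
            return memcmp(vdso_start, "\177ELF", 4) == 0;
    }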
2018-05-22ARM: 8772/1: kprobes: Prohibit kprobes on get_user functionsMasami Hiramatsu
commit 0d73c3f8e7f6ee2aab1bb350f60c180f5ae21a2c upstream. Since do_undefinstr() uses get_user to fetch the undefined instruction, it can be called before kprobes has run its recursion check. This can cause an infinite recursive exception. Prohibit probing on the get_user functions. Fixes: 24ba613c9d6c ("ARM kprobes: core code") Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-05-22KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lockAndre Przywara
commit bf308242ab98b5d1648c3663e753556bef9bec01 upstream. kvm_read_guest() will eventually look up in kvm_memslots(), which requires either to hold the kvm->slots_lock or to be inside a kvm->srcu critical section. In contrast to x86 and s390 we don't take the SRCU lock on every guest exit, so we have to do it individually for each kvm_read_guest() call. Provide a wrapper which does that and use that everywhere. Note that ending the SRCU critical section before returning from the kvm_read_guest() wrapper is safe, because the data has been *copied*, so we don't need to rely on valid references to the memslot anymore. Cc: Stable <stable@vger.kernel.org> # 4.8+ Reported-by: Jan Glauber <jan.glauber@caviumnetworks.com> Signed-off-by: Andre Przywara <andre.przywara@arm.com> Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
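The wrapper is short; a sketch following the commit description (upstream names it kvm_read_guest_lock()):

    static inline int kvm_read_guest_lock(struct kvm *kvm, gpa_t gpa,
                                          void *data, unsigned long len)
    {
            int srcu_idx = srcu_read_lock(&kvm->srcu);
            int ret = kvm_read_guest(kvm, gpa, data, len);

            /* safe to drop SRCU here: the data has already been copied */
            srcu_read_unlock(&kvm->srcu, srcu_idx);
            return ret;
    }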
2018-05-19futex: Remove duplicated code and fix undefined behaviourJiri Slaby
commit 30d6e0a4190d37740e9447e4e4815f06992dd8c3 upstream. There is code duplicated over all architecture's headers for futex_atomic_op_inuser. Namely op decoding, access_ok check for uaddr, and comparison of the result. Remove this duplication and leave up to the arches only the needed assembly which is now in arch_futex_atomic_op_inuser. This effectively distributes the Will Deacon's arm64 fix for undefined behaviour reported by UBSAN to all architectures. The fix was done in commit 5f16a046f8e1 (arm64: futex: Fix undefined behaviour with FUTEX_OP_OPARG_SHIFT usage). Look there for an example dump. And as suggested by Thomas, check for negative oparg too, because it was also reported to cause undefined behaviour report. Note that s390 removed access_ok check in d12a29703 ("s390/uaccess: remove pointless access_ok() checks") as access_ok there returns true. We introduce it back to the helper for the sake of simplicity (it gets optimized away anyway). Signed-off-by: Jiri Slaby <jslaby@suse.cz> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> [s390] Acked-by: Chris Metcalf <cmetcalf@mellanox.com> [for tile] Reviewed-by: Darren Hart (VMware) <dvhart@infradead.org> Reviewed-by: Will Deacon <will.deacon@arm.com> [core/arm64] Cc: linux-mips@linux-mips.org Cc: Rich Felker <dalias@libc.org> Cc: linux-ia64@vger.kernel.org Cc: linux-sh@vger.kernel.org Cc: peterz@infradead.org Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: sparclinux@vger.kernel.org Cc: Jonas Bonn <jonas@southpole.se> Cc: linux-s390@vger.kernel.org Cc: linux-arch@vger.kernel.org Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: linux-hexagon@vger.kernel.org Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: linux-snps-arc@lists.infradead.org Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: linux-xtensa@linux-xtensa.org Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi> Cc: openrisc@lists.librecores.org Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Stafford Horne <shorne@gmail.com> Cc: linux-arm-kernel@lists.infradead.org Cc: Richard Henderson <rth@twiddle.net> Cc: Chris Zankel <chris@zankel.net> Cc: Michal Simek <monstr@monstr.eu> Cc: Tony Luck <tony.luck@intel.com> Cc: linux-parisc@vger.kernel.org Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: linux-alpha@vger.kernel.org Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: linuxppc-dev@lists.ozlabs.org Cc: "David S. Miller" <davem@davemloft.net> Link: http://lkml.kernel.org/r/20170824073105.3901-1-jslaby@suse.cz Cc: Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
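After the consolidation, the shared decode-and-compare logic lives in one place and only the marked call remains per-architecture; a condensed sketch (error handling simplified relative to the real helper):

    /* shared helper (sketch): decode, range-check, then defer to the arch */
    static int futex_atomic_op_inuser(unsigned int encoded_op, u32 __user *uaddr)
    {
            int op = (encoded_op >> 28) & 7;
            int cmp = (encoded_op >> 24) & 15;
            int oparg = sign_extend32((encoded_op & 0x00fff000) >> 12, 11);
            int cmparg = sign_extend32(encoded_op & 0x00000fff, 11);
            int oldval, ret;

            if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) {
                    if (oparg < 0 || oparg > 31)
                            return -EINVAL; /* the undefined shift UBSAN reported */
                    oparg = 1 << oparg;
            }

            if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
                    return -EFAULT;

            /* the only part each architecture still implements itself */
            ret = arch_futex_atomic_op_inuser(op, oparg, &oldval, uaddr);
            if (ret)
                    return ret;

            switch (cmp) {
            case FUTEX_OP_CMP_EQ: return oldval == cmparg;
            case FUTEX_OP_CMP_NE: return oldval != cmparg;
            case FUTEX_OP_CMP_LT: return oldval <  cmparg;
            case FUTEX_OP_CMP_GE: return oldval >= cmparg;
            case FUTEX_OP_CMP_LE: return oldval <= cmparg;
            case FUTEX_OP_CMP_GT: return oldval >  cmparg;
            default:              return -ENOSYS;
            }
    }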
2018-05-09arm/arm64: KVM: Add PSCI version selection APIMarc Zyngier
commit 85bd0ba1ff9875798fad94218b627ea9f768f3c3 upstream. Although we've implemented PSCI 0.1, 0.2 and 1.0, we expose either 0.1 or 1.0 to a guest, defaulting to the latest version of the PSCI implementation that is compatible with the requested version. This is no different from doing a firmware upgrade on KVM. But in order to give a chance to hypothetical badly implemented guests that would have a fit by discovering something other than PSCI 0.2, let's provide a new API that allows userspace to pick one particular version of the API. This is implemented as a new class of "firmware" registers, where we expose the PSCI version. This allows the PSCI version to be save/restored as part of a guest migration, and also set to any supported version if the guest requires it. Cc: stable@vger.kernel.org #4.16 Reviewed-by: Christoffer Dall <cdall@kernel.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-04-20arm64: KVM: Report SMCCC_ARCH_WORKAROUND_1 BP hardening supportMark Rutland
From: Marc Zyngier <marc.zyngier@arm.com> commit 6167ec5c9145cdf493722dfd80a5d48bafc4a18a upstream. A new feature of SMCCC 1.1 is that it offers firmware-based CPU workarounds. In particular, SMCCC_ARCH_WORKAROUND_1 provides BP hardening for CVE-2017-5715. If the host has some mitigation for this issue, report that we deal with it using SMCCC_ARCH_WORKAROUND_1, as we apply the host workaround on every guest exit. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [v4.9: account for files moved to virt/ upstream] Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport] Tested-by: Greg Hackmann <ghackmann@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-04-20arm/arm64: KVM: Consolidate the PSCI include filesMark Rutland
From: Marc Zyngier <marc.zyngier@arm.com> commit 1a2fb94e6a771ff94f4afa22497a4695187b820c upstream. As we're about to update the PSCI support, and because I'm lazy, let's move the PSCI include file to include/kvm so that both ARM architectures can find it. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [v4.9: account for files moved to virt/ upstream] Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport] Tested-by: Greg Hackmann <ghackmann@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-04-20arm64: KVM: Use per-CPU vector when BP hardening is enabledMark Rutland
From: Marc Zyngier <marc.zyngier@arm.com> commit 6840bdd73d07216ab4bc46f5a8768c37ea519038 upstream. Now that we have per-CPU vectors, let's plug them into the KVM/arm64 code. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [v4.9: account for files moved to virt/ upstream, use cpus_have_cap()] Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport] Tested-by: Greg Hackmann <ghackmann@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-04-13xen: avoid type warning in xchg_xen_ulongArnd Bergmann
[ Upstream commit 9cc91f212111cdcbefa02dcdb7dd443f224bf52c ] The improved type-checking version of container_of() triggers a warning for xchg_xen_ulong, pointing out that 'xen_ulong_t' is unsigned, but atomic64_t contains a signed value:

    drivers/xen/events/events_2l.c: In function 'evtchn_2l_handle_events':
    drivers/xen/events/events_2l.c:187:1020: error: call to '__compiletime_assert_187' declared with attribute error: pointer type mismatch in container_of()

This adds a cast to work around the warning. Cc: Ian Abbott <abbotti@mev.co.uk> Fixes: 85323a991d40 ("xen: arm: mandate EABI and use generic atomic operations.") Fixes: daa2ac80834d ("kernel.h: handle pointers to arrays better in container_of()") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Stefano Stabellini <sstabellini@kernel.org> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> Acked-by: Ian Abbott <abbotti@mev.co.uk> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
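The workaround, sketched, is a cast at the container_of() call site in the ARM Xen events header:

    /* xen_ulong_t is unsigned while atomic64_t.counter is signed: cast
     * through (long long *) to satisfy the stricter container_of() check */
    #define xchg_xen_ulong(ptr, val) \
            atomic64_xchg(container_of((long long *)(ptr), \
                                       atomic64_t, counter), (val))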
2017-12-14arm: KVM: Survive unknown traps from guestsMark Rutland
[ Upstream commit f050fe7a9164945dd1c28be05bf00e8cfb082ccf ] Currently we BUG() if we see an HSR.EC value we don't recognise. As configurable disables/enables are added to the architecture (controlled by RES1/RES0 bits respectively), with associated synchronous exceptions, it may be possible for a guest to trigger exceptions with classes that we don't recognise. While we can't service these exceptions in a manner useful to the guest, we can avoid bringing down the host. Per ARM DDI 0406C.c, all currently unallocated HSR EC encodings are reserved, and per ARM DDI 0487A.k_iss10775, page G6-4395, EC values within the range 0x00 - 0x2c are reserved for future use with synchronous exceptions, and EC values within the range 0x2d - 0x3f may be used for either synchronous or asynchronous exceptions. The patch makes KVM handle any unknown EC by injecting an UNDEFINED exception into the guest, with a corresponding (ratelimited) warning in the host dmesg. We could later improve on this with a new (opt-in) exit to the host userspace. Cc: Dave Martin <dave.martin@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
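A sketch of the resulting handler, following the commit description:

    static int handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
    {
            u32 hsr = kvm_vcpu_get_hsr(vcpu);

            /* ratelimited warning on the host, UNDEF into the guest */
            kvm_pr_unimpl("Unknown exception class: hsr: %#08x\n", hsr);
            kvm_inject_undefined(vcpu);
            return 1;   /* resume the guest */
    }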
2017-12-14ARM: 8657/1: uaccess: consistently check object sizesKees Cook
[ Upstream commit 32b143637e8180f5d5cea54320c769210dea4f19 ] In commit 76624175dcae ("arm64: uaccess: consistently check object sizes"), the object size checks are moved outside the access_ok() so that bad destinations are detected before hitting the "memset(dest, 0, size)" in the copy_from_user() failure path. This makes the same change for arm, with attention given to possibly extracting the uaccess routines into a common header file for all architectures in the future. Suggested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
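The resulting shape of the ARM copy_from_user() wrapper, sketched (the real code has additional special cases):

    static inline unsigned long __must_check
    copy_from_user(void *to, const void __user *from, unsigned long n)
    {
            unsigned long res = n;

            /* hardened usercopy check now runs even for bad addresses */
            check_object_size(to, n, false);
            if (likely(access_ok(VERIFY_READ, from, n)))
                    res = arm_copy_from_user(to, from, n);
            if (unlikely(res))
                    memset(to + (n - res), 0, res);
            return res;
    }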
2017-12-14ARM: BUG if jumping to usermode address in kernel modeRussell King
commit 8bafae202c82dc257f649ea3c275a0f35ee15113 upstream. Detect if we are returning to usermode via the normal kernel exit paths but the saved PSR value indicates that we are in kernel mode. This could occur due to corrupted stack state, which has been observed with "ftracetest". This ensures that we catch the problem case before we get to user code. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Cc: Alex Shi <alex.shi@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-14arm: KVM: Fix VTTBR_BADDR_MASK BUG_ON off-by-oneMarc Zyngier
commit 5553b142be11e794ebc0805950b2e8313f93d718 upstream. VTTBR_BADDR_MASK is used to sanity check the size and alignment of the VTTBR address. It seems to currently be off by one, thereby only allowing up to 39-bit addresses (instead of 40-bit) and also insufficiently checking the alignment. This patch fixes it. This patch is the 32-bit counterpart of Kristina's arm64 fix, and she deserves the actual kudos for pinpointing that one. Fixes: f7ed45be3ba52 ("KVM: ARM: World-switch implementation") Reported-by: Kristina Martsenko <kristina.martsenko@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08ARM: 8715/1: add a private asm/unaligned.hArnd Bergmann
commit 1cce91dfc8f7990ca3aea896bfb148f240b12860 upstream. The asm-generic/unaligned.h header provides two different implementations for accessing unaligned variables: the access_ok.h version used when CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set pretends that all pointers are in fact aligned, while the le_struct.h version convinces gcc that the alignment of a pointer is '1', to make it issue the correct load/store instructions depending on the architecture flags. On ARMv5 and older, we always use the second version, to let the compiler use byte accesses. On ARMv6 and newer, we currently use the access_ok.h version, so the compiler can use any instruction, including stm/ldm and ldrd/strd, which can cause an alignment trap. This trap can significantly impact performance when we have to do a lot of fixups and, worse, has led to crashes in the LZ4 decompressor code that does not have a trap handler. This adds an ARM specific version of asm/unaligned.h that uses the le_struct.h/be_struct.h implementation unconditionally. This should lead to essentially the same code on ARMv6+ as before, with the exception of using regular load/store instructions instead of the trapping multi-register variants. The crash in the LZ4 decompressor code was probably introduced by the patch replacing the LZ4 implementation, commit 4e1a33b105dd ("lib: update LZ4 compressor module"), so linux-4.11 and higher would be affected most. However, we probably want to have this backported to all older stable kernels as well, to help with the performance issues. There are two follow-ups that I think we should also work on, but not backport to stable kernels, first to change the asm-generic version of the header to remove the ARM special case, and second to review all other uses of CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to see if they might be affected by the same problem on ARM. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
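A sketch of the new header for the little-endian case (the big-endian branch mirrors it with the be_struct.h/le_byteshift.h pair):

    /* sketch of arch/arm/include/asm/unaligned.h (little-endian branch) */
    #include <asm/byteorder.h>

    #if defined(__LITTLE_ENDIAN)
    /* byte-granular accessors: never generate trapping ldm/stm/ldrd/strd */
    # include <linux/unaligned/le_struct.h>
    # include <linux/unaligned/be_byteshift.h>
    # include <linux/unaligned/generic.h>
    # define get_unaligned __get_unaligned_le
    # define put_unaligned __put_unaligned_le
    #endif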
2017-08-11ARM: 8632/1: ftrace: fix syscall name matchingRabin Vincent
[ Upstream commit 270c8cf1cacc69cb8d99dea812f06067a45e4609 ] ARM has a few system calls (most notably mmap) for which the names of the functions which are referenced in the syscall table do not match the names of the syscall tracepoints. As a consequence of this, these tracepoints are not made available. Implement arch_syscall_match_sym_name to fix this and allow tracing even these system calls. Signed-off-by: Rabin Vincent <rabinv@axis.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
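An abbreviated sketch of the helper (only two of the renamed wrappers are shown; the full set covers the other ARM-specific syscall wrappers in the table):

    #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME
    static inline bool arch_syscall_match_sym_name(const char *sym,
                                                   const char *name)
    {
            /* map ARM's wrapper names back to the generic tracepoint names */
            if (!strcmp(sym, "sys_mmap2"))
                    sym = "sys_mmap_pgoff";
            else if (!strcmp(sym, "sys_arm_fadvise64_64"))
                    sym = "sys_fadvise64_64";
            return !strcmp(sym, name);
    }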
2017-07-21arm: move ELF_ET_DYN_BASE to 4MBKees Cook
commit 6a9af90a3bcde217a1c053e135f5f43e5d5fafbd upstream. Now that explicitly executed loaders are loaded in the mmap region, we have more freedom to decide where we position PIE binaries in the address space to avoid possible collisions with mmap or stack regions. 4MB is chosen here mainly to have parity with x86, where this is the traditional minimum load location, likely to avoid historically requiring a 4MB page table entry when only a portion of the first 4MB would be used (since the NULL address is avoided). For ARM the position could be 0x8000, the standard ET_EXEC load address, but that is needlessly close to the NULL address, and anyone running PIE on 32-bit ARM will have an MMU, so the tight mapping is not needed. Link: http://lkml.kernel.org/r/1498154792-49952-2-git-send-email-keescook@chromium.org Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Pratyush Anand <panand@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Daniel Micay <danielmicay@gmail.com> Cc: Dmitry Safonov <dsafonov@virtuozzo.com> Cc: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com> Cc: Kees Cook <keescook@chromium.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Qualys Security Advisory <qsa@qualys.com> Cc: Rik van Riel <riel@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
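The change is a single definition in asm/elf.h, sketched (the old TASK_SIZE-derived value is omitted):

    /* fixed low base for PIE binaries: clear of NULL, clear of mmap/stack */
    #define ELF_ET_DYN_BASE 0x400000UL      /* 4 MB, matching x86 */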
2017-05-25ARM: 8662/1: module: split core and init PLT sectionsArd Biesheuvel
commit b7ede5a1f5905ac394cc8e61712a13e3c5cb7b8f upstream. Since commit 35fa91eed817 ("ARM: kernel: merge core and init PLTs"), the ARM module PLT code allocates all PLT entries in a single core section, since the overhead of having a separate init PLT section is not justified by the small number of PLT entries usually required for init code. However, the core and init module regions are allocated independently, and there is a corner case where the core region may be allocated from the VMALLOC region if the dedicated module region is exhausted, but the init region, being much smaller, can still be allocated from the module region. This puts the PLT entries out of reach of the relocated branch instructions, defeating the whole purpose of PLTs. So split the core and init PLT regions, and name the latter ".init.plt" so it gets allocated along with (and sufficiently close to) the .init sections that it serves. Also, given that init PLT entries may need to be emitted for branches that target the core module, modify the logic that disregards defined symbols to only disregard symbols that are defined in the same section. Fixes: 35fa91eed817 ("ARM: kernel: merge core and init PLTs") Reported-by: Angus Clark <angus@angusclark.org> Tested-by: Angus Clark <angus@angusclark.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>