path: root/arch/arm64
2017-02-09  crypto: arm64/aes-blk - honour iv_out requirement in CBC and CTR modes  (Ard Biesheuvel)
commit 11e3b725cfc282efe9d4a354153e99d86a16af08 upstream. Update the ARMv8 Crypto Extensions and the plain NEON AES implementations in CBC and CTR modes to return the next IV back to the skcipher API client. This is necessary for chaining to work correctly. Note that for CTR, this is only done if the request is a round multiple of the block size, since otherwise, chaining is impossible anyway. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-26  arm64: avoid returning from bad_mode  (Mark Rutland)
commit 7d9e8f71b989230bc613d121ca38507d34ada849 upstream. Generally, taking an unexpected exception should be a fatal event, and bad_mode is intended to cater for this. However, it should be possible to contain unexpected synchronous exceptions from EL0 without bringing the kernel down, by sending a SIGILL to the task. We tried to apply this approach in commit 9955ac47f4ba1c95 ("arm64: don't kill the kernel on a bad esr from el0"), by sending a signal for any bad_mode call resulting from an EL0 exception. However, this also applies to other unexpected exceptions, such as SError and FIQ. The entry paths for these exceptions branch to bad_mode without configuring the link register, and have no kernel_exit. Thus, if we take one of these exceptions from EL0, bad_mode will eventually return to the original user link register value. This patch fixes this by introducing a new bad_el0_sync handler to cater for the recoverable case, and restoring bad_mode to its original state, whereby it calls panic() and never returns. The recoverable case branches to bad_el0_sync with a bl, and returns to userspace via the usual ret_to_user mechanism. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Fixes: 9955ac47f4ba1c95 ("arm64: don't kill the kernel on a bad esr from el0") Reported-by: Mark Salter <msalter@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-26  arm64/ptrace: Reject attempts to set incomplete hardware breakpoint fields  (Dave Martin)
commit ad9e202aa1ce571b1d7fed969d06f66067f8a086 upstream. We cannot preserve partial fields for hardware breakpoints, because the values written by userspace to the hardware breakpoint registers can't subsequently be recovered intact from the hardware. So, just reject attempts to write incomplete fields with -EINVAL. Fixes: 478fcb2cdb23 ("arm64: Debugging support") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-26  arm64/ptrace: Avoid uninitialised struct padding in fpr_set()  (Dave Martin)
commit aeb1f39d814b2e21e5e5706a48834bfd553d0059 upstream. This patch adds an explicit __reserved[] field to user_fpsimd_state to replace what was previously unnamed padding. This ensures that data in this region are propagated across assignment rather than being left possibly uninitialised at the destination. Fixes: 60ffc30d5652 ("arm64: Exception handling") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-26  arm64/ptrace: Preserve previous registers for short regset write - 3  (Dave Martin)
commit a672401c00f82e4e19704aff361d9bad18003714 upstream. Ensure that if userspace supplies insufficient data to PTRACE_SETREGSET to fill all the registers, the thread's old registers are preserved. Fixes: 5d220ff9420f ("arm64: Better native ptrace support for compat tasks") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-26  arm64/ptrace: Preserve previous registers for short regset write - 2  (Dave Martin)
commit 9dd73f72f218320c6c90da5f834996e7360dc227 upstream. Ensure that if userspace supplies insufficient data to PTRACE_SETREGSET to fill all the registers, the thread's old registers are preserved. Fixes: 766a85d7bc5d ("arm64: ptrace: add NT_ARM_SYSTEM_CALL regset") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-26  arm64/ptrace: Preserve previous registers for short regset write  (Dave Martin)
commit 9a17b876b573441bfb3387ad55d98bf7184daf9d upstream. Ensure that if userspace supplies insufficient data to PTRACE_SETREGSET to fill all the registers, the thread's old registers are preserved. Fixes: 478fcb2cdb23 ("arm64: Debugging support") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
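All three of the "Preserve previous registers" fixes above follow the same pattern. As a hedged sketch (gpr_set() here stands in for the various per-regset .set() callbacks, which differ in detail): seed the buffer from the thread's existing registers, let user_regset_copyin() overwrite only the bytes userspace actually supplied, then commit the result.

  static int gpr_set(struct task_struct *target, const struct user_regset *regset,
                     unsigned int pos, unsigned int count,
                     const void *kbuf, const void __user *ubuf)
  {
          int ret;
          /* Start from the old values so a short write preserves the rest. */
          struct user_pt_regs newregs = task_pt_regs(target)->user_regs;

          ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &newregs, 0, -1);
          if (ret)
                  return ret;

          /* Commit only after the whole buffer has been filled and validated. */
          task_pt_regs(target)->user_regs = newregs;
          return 0;
  }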
2017-01-12  crypto: arm64/aes-ce - fix for big endian  (Ard Biesheuvel)
commit 1803b9a52c4e5a5dbb8a27126f6bc06939359753 upstream. The core AES cipher implementation that uses ARMv8 Crypto Extensions instructions erroneously loads the round keys as 64-bit quantities, which causes the algorithm to fail when built for big endian. In addition, the key schedule generation routine fails to take endianness into account as well, when loading and combining the input key with the round constants. So fix both issues. Fixes: 12ac3efe74f8 ("arm64/crypto: use crypto instructions to generate AES key schedule") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-12  crypto: arm64/aes-xts-ce: fix for big endian  (Ard Biesheuvel)
commit caf4b9e2b326cc2a5005a5c557274306536ace61 upstream. Emit the XTS tweak literal constants in the appropriate order for a single 128-bit scalar literal load. Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-12  crypto: arm64/sha1-ce - fix for big endian  (Ard Biesheuvel)
commit ee71e5f1e7d25543ee63a80451871f8985b8d431 upstream. The SHA1 digest is an array of 5 32-bit quantities, so we should refer to them as such in order for this code to work correctly when built for big endian. So replace 16 byte scalar loads and stores with 4x4 vector ones where appropriate. Fixes: 2c98833a42cd ("arm64/crypto: SHA-1 using ARMv8 Crypto Extensions") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-12  crypto: arm64/aes-neon - fix for big endian  (Ard Biesheuvel)
commit a2c435cc99862fd3d165e1b66bf48ac72c839c62 upstream. The AES implementation using pure NEON instructions relies on the generic AES key schedule generation routines, which store the round keys as arrays of 32-bit quantities in memory using native endianness. This means we should refer to these round keys using 4x4 loads rather than 16x1 loads. In addition, the ShiftRows tables are loaded using a single scalar load, which is also affected by endianness, so emit these tables in the correct order depending on whether we are building for big endian or not. Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-12  crypto: arm64/aes-ccm-ce: fix for big endian  (Ard Biesheuvel)
commit 56e4e76c68fcb51547b5299e5b66a135935ff414 upstream. The AES-CCM implementation that uses ARMv8 Crypto Extensions instructions refers to the AES round keys as pairs of 64-bit quantities, which causes failures when building the code for big endian. In addition, it byte swaps the input counter unconditionally, while this is only required for little endian builds. So fix both issues. Fixes: 12ac3efe74f8 ("arm64/crypto: use crypto instructions to generate AES key schedule") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-12  crypto: arm64/ghash-ce - fix for big endian  (Ard Biesheuvel)
commit 9c433ad5083fd4a4a3c721d86cbfbd0b2a2326a5 upstream. The GHASH key and digest are both pairs of 64-bit quantities, but the GHASH code does not always refer to them as such, causing failures when built for big endian. So replace the 16x1 loads and stores with 2x8 ones. Fixes: b913a6404ce2 ("arm64/crypto: improve performance of GHASH algorithm") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-12  crypto: arm64/sha2-ce - fix for big endian  (Ard Biesheuvel)
commit 174122c39c369ed924d2608fc0be0171997ce800 upstream. The SHA256 digest is an array of 8 32-bit quantities, so we should refer to them as such in order for this code to work correctly when built for big endian. So replace 16 byte scalar loads and stores with 4x32 vector ones where appropriate. Fixes: 6ba6c74dfc6b ("arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-12-15  arm64: futex.h: Add missing PAN toggling  (James Morse)
commit 811d61e384e24759372bb3f01772f3744b0a8327 upstream. futex.h's futex_atomic_cmpxchg_inatomic() does not use the __futex_atomic_op() macro and needs its own PAN toggling. This was missed when the feature was implemented. Fixes: 338d4f49d6f ("arm64: kernel: Add support for Privileged Access Never") Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: Mian Yousaf Kaukab <yousaf.kaukab@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-12-08  arm64: suspend: Reconfigure PSTATE after resume from idle  (James Morse)
commit d08544127d9fb4505635e3cb6871fd50a42947bd upstream. The suspend/resume path in kernel/sleep.S, as used by cpu-idle, does not save/restore PSTATE. As a result of this, cpufeatures that were detected and have bits in PSTATE get lost when we resume from idle. UAO gets set appropriately on the next context switch. PAN will be re-enabled next time we return from user-space, but on a preemptible kernel we may run work accessing user space before this point. Add code to re-enable these two features in __cpu_suspend_exit(). We re-use uao_thread_switch() passing current. Signed-off-by: James Morse <james.morse@arm.com> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> [Removed UAO hooks and commit-message references: this feature is not present in v4.4] Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-12-08  arm64: mm: Set PSTATE.PAN from the cpu_enable_pan() call  (James Morse)
commit 7209c868600bd8926e37c10b9aae83124ccc1dd8 upstream. Commit 338d4f49d6f7 ("arm64: kernel: Add support for Privileged Access Never") enabled PAN by enabling the 'SPAN' feature-bit in SCTLR_EL1. This means the PSTATE.PAN bit won't be set until the next return to the kernel from userspace. On a preemptible kernel we may schedule work that accesses userspace on a CPU before it has done this. Now that cpufeature enable() calls are scheduled via stop_machine(), we can set PSTATE.PAN from the cpu_enable_pan() call. Add WARN_ON_ONCE(in_interrupt()) to check the PSTATE value we updated is not immediately discarded. Reported-by: Tony Thompson <anthony.thompson@arm.com> Reported-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: James Morse <james.morse@arm.com> [will: fixed typo in comment] Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-12-08  arm64: cpufeature: Schedule enable() calls instead of calling them via IPI  (James Morse)
commit 2a6dcb2b5f3e21592ca8dfa198dcce7bec09b020 upstream. The enable() call for a cpufeature/errata is called using on_each_cpu(). This issues a cross-call IPI to get the work done. Implicitly, this stashes the running PSTATE in SPSR when the CPU receives the IPI, and restores it when we return. This means an enable() call can never modify PSTATE. To allow PAN to do this, change the on_each_cpu() call to use stop_machine(). This schedules the work on each CPU which allows us to modify PSTATE. This involves changing the prototype of all the enable() functions. enable_cpu_capabilities() is called during boot and enables the feature on all online CPUs. This path now uses stop_machine(). CPU features for hotplug'd CPUs are enabled by verify_local_cpu_features() which only acts on the local CPU, and can already modify the running PSTATE as it is called from secondary_start_kernel(). Reported-by: Tony Thompson <anthony.thompson@arm.com> Reported-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> [Removed enable() hunks for features/errata that v4.4 doesn't have. Changed caps->enable arg in enable_cpu_capabilities()] Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-10-28  arm64: kernel: Init MDCR_EL2 even in the absence of a PMU  (Marc Zyngier)
commit 850540351bb1a4fa5f192e5ce55b89928cc57f42 upstream. Commit f436b2ac90a0 ("arm64: kernel: fix architected PMU registers unconditional access") made sure we wouldn't access unimplemented PMU registers, but also left MDCR_EL2 uninitialized in that case, leading to trap bits being potentially left set. Make sure we always write something in that register. Fixes: f436b2ac90a0 ("arm64: kernel: fix architected PMU registers unconditional access") Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-10-28  arm64: percpu: rewrite ll/sc loops in assembly  (Will Deacon)
commit 1e6e57d9b34a9075d5f9e2048ea7b09756590d11 upstream. Writing the outer loop of an LL/SC sequence using do {...} while constructs potentially allows the compiler to hoist memory accesses between the STXR and the branch back to the LDXR. On CPUs that do not guarantee forward progress of LL/SC loops when faced with memory accesses to the same ERG (up to 2k) between the failed STXR and the branch back, we may end up livelocking. This patch avoids this issue in our percpu atomics by rewriting the outer loop as part of the LL/SC inline assembly block. Fixes: f97fc810798c ("arm64: percpu: Implement this_cpu operations") Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
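As a rough illustration of the idea (not the actual percpu.h code; the function name and the add operation are made up for the example), the whole retry loop lives inside one asm block, so the compiler has nothing to hoist between the stxr and the backwards branch:

  static inline void percpu_add_sketch(unsigned long *ptr, unsigned long val)
  {
          unsigned long tmp;
          int loop;

          asm volatile(
          "1:     ldxr    %[tmp], %[v]\n"
          "       add     %[tmp], %[tmp], %[val]\n"
          "       stxr    %w[loop], %[tmp], %[v]\n"
          "       cbnz    %w[loop], 1b\n"
          : [loop] "=&r" (loop), [tmp] "=&r" (tmp), [v] "+Q" (*ptr)
          : [val] "r" (val));
  }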
2016-10-07  arm64: debug: avoid resetting stepping state machine when TIF_SINGLESTEP  (Will Deacon)
commit 3a402a709500c5a3faca2111668c33d96555e35a upstream. When TIF_SINGLESTEP is set for a task, the single-step state machine is enabled and we must take care not to reset it to the active-not-pending state if it is already in the active-pending state. Unfortunately, that's exactly what user_enable_single_step does, by unconditionally setting the SS bit in the SPSR for the current task. This causes failures in the GDB testsuite, where GDB ends up missing expected step traps if the instruction being stepped generates another trap, e.g. PTRACE_EVENT_FORK from an SVC instruction. This patch fixes the problem by preserving the current state of the stepping state machine when TIF_SINGLESTEP is set on the current thread. Cc: <stable@vger.kernel.org> Reported-by: Yao Qi <yao.qi@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-30  crypto: arm64/aes-ctr - fix NULL dereference in tail processing  (Ard Biesheuvel)
commit 2db34e78f126c6001d79d3b66ab1abb482dc7caa upstream. The AES-CTR glue code avoids calling into the blkcipher API for the tail portion of the walk, by comparing the remainder of walk.nbytes modulo AES_BLOCK_SIZE with the residual nbytes, and jumping straight into the tail processing block if they are equal. This tail processing block checks whether nbytes != 0, and does nothing otherwise. However, in case of an allocation failure in the blkcipher layer, we may enter this code with walk.nbytes == 0, while nbytes > 0. In this case, we should not dereference the source and destination pointers, since they may be NULL. So instead of checking for nbytes != 0, check for (walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in non-error conditions. Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions") Reported-by: xiakaixu <xiakaixu@huawei.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
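A hedged sketch of the glue-code change described above (variable names approximate, surrounding loop omitted): the tail block is entered based on the walk state rather than the caller's residual count, so a zero-length walk after an allocation failure never touches walk.src/walk.dst.

  /* was:  if (nbytes) { ... }  -- reachable with walk.nbytes == 0 on error */
  if (walk.nbytes % AES_BLOCK_SIZE) {
          u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
          u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;

          /* ... encrypt the final partial block from tsrc into tdst ... */
  }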
2016-09-24  arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()  (Will Deacon)
commit 872c63fbf9e153146b07f0cece4da0d70b283eeb upstream. smp_mb__before_spinlock() is intended to upgrade a spin_lock() operation to a full barrier, such that prior stores are ordered with respect to loads and stores occurring inside the critical section. Unfortunately, the core code defines the barrier as smp_wmb(), which is insufficient to provide the required ordering guarantees when used in conjunction with our load-acquire-based spinlock implementation. This patch overrides the arm64 definition of smp_mb__before_spinlock() to map to a full smp_mb(). Cc: Peter Zijlstra <peterz@infradead.org> Reported-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
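The override itself is a one-liner; a sketch (the exact header it lands in is an assumption here):

  /* arch/arm64/include/asm/spinlock.h (location assumed) */
  #define smp_mb__before_spinlock()       smp_mb()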
2016-09-15  irqchip/gicv3-its: numa: Enable workaround for Cavium thunderx erratum 23144  (Ganapatrao Kulkarni)
[ Upstream commit fbf8f40e1658cb2f17452dbd3c708e329c5d27e0 ] The workaround avoids the hang of the ITS SYNC command by avoiding inter-node IO and collections/CPU mappings on the ThunderX dual-socket platform. This fix is only applicable to Cavium's ThunderX dual-socket platform. Reviewed-by: Robert Richter <rrichter@cavium.com> Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com> Signed-off-by: Robert Richter <rrichter@cavium.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-15  arm64: Add workaround for Cavium erratum 27456  (Andrew Pinski)
[ Upstream commit 104a0c02e8b1936c049e18a6d4e4ab040fb61213 ] On ThunderX T88 pass 1.x through 2.1 parts, broadcast TLBI instructions may cause the icache to become corrupted if it contains data for a non-current ASID. This patch implements the workaround (which invalidates the local icache when switching the mm) by using code patching. Signed-off-by: Andrew Pinski <apinski@cavium.com> Signed-off-by: David Daney <david.daney@cavium.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-15  arm64: KVM: Configure TCR_EL2.PS at runtime  (Tirumalesh Chalamarla)
[ Upstream commit 3c5b1d92b3b02be07873d611a27950addff544d3 ] Setting TCR_EL2.PS to 40 bits is wrong on systems with less than 40 bits of physical addresses, and breaks KVM on systems where the RAM is above 40 bits. This patch uses ID_AA64MMFR0_EL1.PARange to set TCR_EL2.PS dynamically, just like we already do for VTCR_EL2.PS. [Marc: rewrote commit message, patch tidy up] Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Tirumalesh Chalamarla <tchalamarla@caviumnetworks.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-15  irqchip/gic-v3: Make sure read from ICC_IAR1_EL1 is visible on redistributor  (Tirumalesh Chalamarla)
[ Upstream commit 1a1ebd5fb1e203ee8cc73508cc7a38ac4b804596 ] The ARM GICv3 specification mentions the need for dsb after a read from the ICC_IAR1_EL1 register: 4.1.1 Physical CPU Interface: The effects of reading ICC_IAR0_EL1 and ICC_IAR1_EL1 on the state of a returned INTID are not guaranteed to be visible until after the execution of a DSB. Not having this could result in missed interrupts, so let's add the required barrier. [Marc: fixed commit message] Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Tirumalesh Chalamarla <tchalamarla@caviumnetworks.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
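A sketch of what the driver-side read helper ends up doing (the helper name is illustrative; the point is the dsb(sy) between the system-register read and any use of the returned INTID):

  static u64 gic_read_iar_sketch(void)
  {
          u64 irqstat;

          asm volatile("mrs_s %0, " __stringify(ICC_IAR1_EL1) : "=r" (irqstat));
          dsb(sy);        /* make the INTID read visible before acting on it */
          return irqstat;
  }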
2016-09-07  arm64: dts: rockchip: add reset saradc node for rk3368 SoCs  (Caesar Wang)
commit 78ec79bfd59e126e1cb394302bfa531a420b3ecd upstream. SARADC controller needs to be reset before programming it, otherwise it will not function properly. Signed-off-by: Caesar Wang <wxt@rock-chips.com> Acked-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-07  arm64: Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO  (James Hogan)
commit 3146bc64d12377a74dbda12b96ea32da3774ae07 upstream. AT_VECTOR_SIZE_ARCH should be defined with the maximum number of NEW_AUX_ENT entries that ARCH_DLINFO can contain, but it wasn't defined for arm64 at all even though ARCH_DLINFO will contain one NEW_AUX_ENT for the VDSO address. This shouldn't be a problem as AT_VECTOR_SIZE_BASE includes space for AT_BASE_PLATFORM which arm64 doesn't use, but let's define it now and add the comment above ARCH_DLINFO, as found in several other architectures, to remind future modifiers of ARCH_DLINFO to keep AT_VECTOR_SIZE_ARCH up to date. Fixes: f668cd1673aa ("arm64: ELF definitions") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
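A sketch of the resulting definitions in arch/arm64/include/asm/elf.h (formatting approximate):

  /* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes */
  #define AT_VECTOR_SIZE_ARCH 1

  #define ARCH_DLINFO                                                     \
  do {                                                                    \
          NEW_AUX_ENT(AT_SYSINFO_EHDR,                                    \
                      (elf_addr_t)current->mm->context.vdso);             \
  } while (0)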
2016-08-20  arm64: mm: avoid fdt_check_header() before the FDT is fully mapped  (Ard Biesheuvel)
commit 04a848106193b134741672f7e4e444b50c70b631 upstream. As reported by Zijun, the fdt_check_header() call in __fixmap_remap_fdt() is not safe since it is not guaranteed that the FDT header is mapped completely. Due to the minimum alignment of 8 bytes, the only fields we can assume to be mapped are 'magic' and 'totalsize'. Since the OF layer is in charge of validating the FDT image, and we are only interested in making reasonably sure that the size field contains a meaningful value, replace the fdt_check_header() call with an explicit comparison of the magic field's value against the expected value. Reported-by: Zijun Hu <zijun_hu@htc.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
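A sketch of the replacement check: fdt_magic() and fdt_totalsize() only read the first two 32-bit header fields, which are the only ones guaranteed to be mapped at this point (the MAX_FDT_SIZE bound is assumed from the surrounding code).

  if (fdt_magic(dt_virt) != FDT_MAGIC)
          return NULL;

  size = fdt_totalsize(dt_virt);
  if (size > MAX_FDT_SIZE)
          return NULL;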
2016-08-20  arm64: dts: rockchip: fixes the gic400 2nd region size for rk3368  (Caesar Wang)
commit ad1cfdf518976447e6b0d31517bad4e3ebbce6bb upstream. The 2nd additional region is the GIC virtual cpu interface register base and size. As the gic400 documentation for rk3368 shows, the cpu interface registers map as below: -0x0000 GICC_CTRL ... -0x00fc GICC_IIDR -0x1000 GICC_DIR. Obviously, the region size should be greater than 0x1000, so we should make sure to include the GICC_DIR, since the kernel will access it in some cases. Fixes: b790c2cab5ca ("arm64: dts: add Rockchip rk3368 core dtsi and board dts for the r88 board") Signed-off-by: Caesar Wang <wxt@rock-chips.com> Reviewed-by: Shawn Lin <shawn.lin@rock-chips.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [added Fixes and stable-cc] Signed-off-by: Heiko Stuebner <heiko@sntech.de>
2016-08-20  arm64: Fix incorrect per-cpu usage for boot CPU  (Suzuki K Poulose)
commit 9113c2aa05e9848cd4f1154abee17d4f265f012d upstream. In smp_prepare_boot_cpu(), we invoke cpuinfo_store_boot_cpu to store the cpuinfo in a per-cpu ptr, before initialising the per-cpu offset for the boot CPU. This patch reorders the sequence to make sure we initialise the per-cpu offset before accessing the per-cpu area. Commit 4b998ff1885eec ("arm64: Delay cpuinfo_store_boot_cpu") fixed the issue where we modified the per-cpu area even before the kernel initialises the per-cpu areas, but failed to wait until the boot cpu updated its offset. Fixes: 4b998ff1885e ("arm64: Delay cpuinfo_store_boot_cpu") Cc: <stable@vger.kernel.org> # 4.4+ Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
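The reordering itself is small; a sketch of the fixed function:

  void __init smp_prepare_boot_cpu(void)
  {
          /* Set the offset first: cpuinfo_store_boot_cpu() touches per-cpu data. */
          set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
          cpuinfo_store_boot_cpu();
  }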
2016-08-20  arm64: debug: unmask PSTATE.D earlier  (Will Deacon)
commit 2ce39ad15182604beb6c8fa8bed5e46b59fd1082 upstream. Clearing PSTATE.D is one of the requirements for generating a debug exception. The arm64 booting protocol requires that PSTATE.D is set, since many of the debug registers (for example, the hw_breakpoint registers) are UNKNOWN out of reset and could potentially generate spurious, fatal debug exceptions in early boot code if PSTATE.D was clear. Once the debug registers have been safely initialised, PSTATE.D is cleared, however this is currently broken for two reasons: (1) The boot CPU clears PSTATE.D in a postcore_initcall and secondary CPUs clear PSTATE.D in secondary_start_kernel. Since the initcall runs after SMP (and the scheduler) have been initialised, there is no guarantee that it is actually running on the boot CPU. In this case, the boot CPU is left with PSTATE.D set and is not capable of generating debug exceptions. (2) In a preemptible kernel, we may explicitly schedule on the IRQ return path to EL1. If an IRQ occurs with PSTATE.D set in the idle thread, then we may schedule the kthread_init thread, run the postcore_initcall to clear PSTATE.D and then context switch back to the idle thread before returning from the IRQ. The exception return path will then restore PSTATE.D from the stack, and set it again. This patch fixes the problem by moving the clearing of PSTATE.D earlier to proc.S. This has the desirable effect of clearing it in one place for all CPUs, long before we have to worry about the scheduler or any exception handling. We ensure that the previous reset of MDSCR_EL1 has completed before unmasking the exception, so that any spurious exceptions resulting from UNKNOWN debug registers are not generated. Without this patch applied, the kprobes selftests have been seen to fail under KVM, where we end up attempting to step the OOL instruction buffer with PSTATE.D set and therefore fail to complete the step. Acked-by: Mark Rutland <mark.rutland@arm.com> Reported-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-08-20  arm64: kernel: Save and restore UAO and addr_limit on exception entry  (James Morse)
commit e19a6ee2460bdd0d0055a6029383422773f9999a upstream. If we take an exception while at EL1, the exception handler inherits the original context's addr_limit and PSTATE.UAO values. To be consistent always reset addr_limit and PSTATE.UAO on (re-)entry to EL1. This prevents accidental re-use of the original context's addr_limit. Based on a similar patch for arm from Russell King. Cc: <stable@vger.kernel.org> # 4.6- Acked-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> [ backport to stop perf misusing inherited addr_limit. Removed code interacting with UAO and the irqstack ] Link: https://bugs.chromium.org/p/project-zero/issues/detail?id=822 Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-07-27  arm64: Rework valid_user_regs  (Mark Rutland)
commit dbd4d7ca563fd0a8949718d35ce197e5642d5d9d upstream. We validate pstate using PSR_MODE32_BIT, which is part of the user-provided pstate (and cannot be trusted). Also, we conflate validation of AArch32 and AArch64 pstate values, making the code difficult to reason about. Instead, validate the pstate value based on the associated task. The task may or may not be current (e.g. when using ptrace), so this must be passed explicitly by callers. To avoid circular header dependencies via sched.h, is_compat_task is pulled out of asm/ptrace.h. To make the code possible to reason about, the AArch64 and AArch32 validation is split into separate functions. Software must respect the RES0 policy for SPSR bits, and thus the kernel mirrors the hardware policy (RAZ/WI) for bits as-yet unallocated. When these acquire an architected meaning writes may be permitted (potentially with additional validation). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Dave Martin <dave.martin@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [ rebased for v4.1+ This avoids a user-triggerable Oops() if a task is switched to a mode not supported by the kernel (e.g. switching a 64-bit task to AArch32). ] Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> [backport] Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-06-24  arm64: mm: always take dirty state from new pte in ptep_set_access_flags  (Will Deacon)
commit 0106d456c4cb1770253fefc0ab23c9ca760b43f7 upstream. Commit 66dbd6e61a52 ("arm64: Implement ptep_set_access_flags() for hardware AF/DBM") ensured that pte flags are updated atomically in the face of potential concurrent, hardware-assisted updates. However, Alex reports that: | This patch breaks swapping for me. | In the broken case, you'll see either systemd cpu time spike (because | it's stuck in a page fault loop) or the system hang (because the | application owning the screen is stuck in a page fault loop). It turns out that this is because the 'dirty' argument to ptep_set_access_flags is always 0 for read faults, and so we can't use it to set PTE_RDONLY. The failing sequence is: 1. We put down a PTE_WRITE | PTE_DIRTY | PTE_AF pte 2. Memory pressure -> pte_mkold(pte) -> clear PTE_AF 3. A read faults due to the missing access flag 4. ptep_set_access_flags is called with dirty = 0, due to the read fault 5. pte is then made PTE_WRITE | PTE_DIRTY | PTE_AF | PTE_RDONLY (!) 6. A write faults, but pte_write is true so we get stuck The solution is to check the new page table entry (as would be done by the generic, non-atomic definition of ptep_set_access_flags that just calls set_pte_at) to establish the dirty state. Fixes: 66dbd6e61a52 ("arm64: Implement ptep_set_access_flags() for hardware AF/DBM") Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Alexander Graf <agraf@suse.de> Tested-by: Alexander Graf <agraf@suse.de> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-06-24  arm64: Provide "model name" in /proc/cpuinfo for PER_LINUX32 tasks  (Catalin Marinas)
commit e47b020a323d1b2a7b1e9aac86e99eae19463630 upstream. This patch brings the PER_LINUX32 /proc/cpuinfo format more in line with the 32-bit ARM one by providing an additional line: model name : ARMv8 Processor rev X (v8l) Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-06-01  kvm: arm64: Fix EC field in inject_abt64  (Matt Evans)
commit e4fe9e7dc3828bf6a5714eb3c55aef6260d823a2 upstream. The EC field of the constructed ESR is conditionally modified by ORing in ESR_ELx_EC_DABT_LOW for a data abort. However, ESR_ELx_EC_SHIFT is missing from this condition. Signed-off-by: Matt Evans <matt.evans@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
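A sketch of the fix in inject_abt64() (variable names abbreviated): the exception class has to be shifted into the EC field rather than ORed in at bit 0.

  /* was:  esr |= ESR_ELx_EC_DABT_LOW;          (lands in the ISS field) */
  if (!is_iabt)
          esr |= ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT;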
2016-06-01  arm64: cpuinfo: Missing NULL terminator in compat_hwcap_str  (Julien Grall)
commit f228b494e56d949be8d8ea09d4f973d1979201bf upstream. The loop that browses the array compat_hwcap_str will stop when a NULL is encountered, however NULL is missing at the end of array. This will lead to overrun until a NULL is found somewhere in the following memory. In reality, this works out because the compat_hwcap2_str array tends to follow immediately in memory, and that *is* terminated correctly. Furthermore, the unsigned int compat_elf_hwcap is checked before printing each capability, so we end up doing the right thing because the size of the two arrays is less than 32. Still, this is an obvious mistake and should be fixed. Note for backporting: commit 12d11817eaafa414 ("arm64: Move /proc/cpuinfo handling code") moved this code in v4.4. Prior to that commit, the same change should be made in arch/arm64/kernel/setup.c. Fixes: 44b82b7700d0 "arm64: Fix up /proc/cpuinfo" Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
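A sketch of the array in question (entries abridged); the printing loop stops at the first NULL, so the terminator is what keeps it from walking off the end of the array:

  static const char *const compat_hwcap_str[] = {
          "swp", "half", "thumb", "26bit", "fastmult", "fpa", "vfp", "edsp",
          /* ... */
          NULL            /* previously missing terminator */
  };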
2016-06-01  arm64: Implement pmdp_set_access_flags() for hardware AF/DBM  (Catalin Marinas)
commit 282aa7051b0169991b34716f0f22d9c2f59c46c4 upstream. The update to the accessed or dirty states for block mappings must be done atomically on hardware with support for automatic AF/DBM. The ptep_set_access_flags() function has been fixed as part of commit 66dbd6e61a52 ("arm64: Implement ptep_set_access_flags() for hardware AF/DBM"). This patch brings pmdp_set_access_flags() in line with the pte counterpart. Fixes: 2f4b829c625e ("arm64: Add support for hardware updates of the access and dirty pte bits") Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-06-01  arm64: Implement ptep_set_access_flags() for hardware AF/DBM  (Catalin Marinas)
commit 66dbd6e61a526ae7d11a208238ae2c17e5cacb6b upstream. When hardware updates of the access and dirty states are enabled, the default ptep_set_access_flags() implementation based on calling set_pte_at() directly is potentially racy. This triggers the "racy dirty state clearing" warning in set_pte_at() because an existing writable PTE is overridden with a clean entry. There are two main scenarios for this situation: 1. The CPU getting an access fault does not support hardware updates of the access/dirty flags. However, a different agent in the system (e.g. SMMU) can do this, therefore overriding a writable entry with a clean one could potentially lose the automatically updated dirty status. 2. A more complex situation is possible when all CPUs support hardware AF/DBM: a) Initial state: shareable + writable vma and pte_none(pte) b) Read fault taken by two threads of the same process on different CPUs c) CPU0 takes the mmap_sem and proceeds to handling the fault. It eventually reaches do_set_pte() which sets a writable + clean pte. CPU0 releases the mmap_sem d) CPU1 acquires the mmap_sem and proceeds to handle_pte_fault(). The pte entry it reads is present, writable and clean and it continues to pte_mkyoung() e) CPU1 calls ptep_set_access_flags() If between (d) and (e) the hardware (another CPU) updates the dirty state (clears PTE_RDONLY), CPU1 will override the PTE_RDONLY bit, marking the entry clean again. This patch implements an arm64-specific ptep_set_access_flags() function to perform an atomic update of the PTE flags. Fixes: 2f4b829c625e ("arm64: Add support for hardware updates of the access and dirty pte bits") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Ming Lei <tom.leiming@gmail.com> Tested-by: Julien Grall <julien.grall@arm.com> Cc: Will Deacon <will.deacon@arm.com> [will: reworded comment] Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-06-01  arm64: Ensure pmd_present() returns false after pmd_mknotpresent()  (Catalin Marinas)
commit 5bb1cc0ff9a6b68871970737e6c4c16919928d8b upstream. Currently, pmd_present() only checks for a non-zero value, returning true even after pmd_mknotpresent() (which only clears the type bits). This patch converts pmd_present() to using pte_present(), similar to the other pmd_*() checks. As a side effect, it will return true for PROT_NONE mappings, though they are not yet used by the kernel with transparent huge pages. For consistency, also change pmd_mknotpresent() to only clear the PMD_SECT_VALID bit, even though the PMD_TABLE_BIT is already 0 for block mappings (no functional change). The unused PMD_SECT_PROT_NONE definition is removed as transparent huge pages use the pte page prot values. Fixes: 9c7e535fcc17 ("arm64: mm: Route pmd thp functions through pte equivalents") Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
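A sketch of the two definitions after the change: pmd_present() is routed through the pte helper (so it checks the type bits rather than just non-zero), and pmd_mknotpresent() clears only the valid bit.

  #define pmd_present(pmd)        pte_present(pmd_pte(pmd))

  static inline pmd_t pmd_mknotpresent(pmd_t pmd)
  {
          return __pmd(pmd_val(pmd) & ~PMD_SECT_VALID);
  }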
2016-06-01  arm64: Fix typo in the pmdp_huge_get_and_clear() definition  (Catalin Marinas)
commit 911f56eeb87ee378f5e215469268a7a2f68a5a8a upstream. With hardware AF/DBM support, pmd modifications (transparent huge pages) should be performed atomically using load/store exclusive. The initial patches defined the get-and-clear function and __HAVE_ARCH_* macro without the "huge" word, leaving the pmdp_huge_get_and_clear() to the default, non-atomic implementation. Fixes: 2f4b829c625e ("arm64: Add support for hardware updates of the access and dirty pte bits") Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
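The fix amounts to spelling the macro and helper with "huge" so the THP code actually picks up the atomic arm64 version instead of the generic one; a sketch:

  #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
  static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
                                              unsigned long address, pmd_t *pmdp)
  {
          return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
  }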
2016-05-04  arm64: Update PTE_RDONLY in set_pte_at() for PROT_NONE permission  (Catalin Marinas)
commit fdc69e7df3cb24f18a93192641786e5b7ecd1dfe upstream. The set_pte_at() function must update the hardware PTE_RDONLY bit depending on the state of the PTE_WRITE and PTE_DIRTY bits of the given entry value. However, it currently only performs this for pte_valid() entries, ignoring PTE_PROT_NONE. The side-effect is that PROT_NONE mappings would not have the PTE_RDONLY bit set. Without CONFIG_ARM64_HW_AFDBM, this is not an issue since such PROT_NONE pages are not accessible anyway. With commit 2f4b829c625e ("arm64: Add support for hardware updates of the access and dirty pte bits"), the ptep_set_wrprotect() function was re-written to cope with automatic hardware updates of the dirty state. As an optimisation, only PTE_RDONLY is checked to assess the "dirty" status. Since set_pte_at() does not set this bit for PROT_NONE mappings, such pages may be considered "dirty" as a result of ptep_set_wrprotect(). This patch updates the pte_valid() check to pte_present() in set_pte_at(). It also adds PTE_PROT_NONE to the swap entry bits comment. Fixes: 2f4b829c625e ("arm64: Add support for hardware updates of the access and dirty pte bits") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com> Tested-by: Ganapatrao Kulkarni <gkulkarni@cavium.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-05-04  arm64: Honour !PTE_WRITE in set_pte_at() for kernel mappings  (Catalin Marinas)
commit ac15bd63bbb24238f763ec5b24ee175ec301e8cd upstream. Currently, set_pte_at() only checks the software PTE_WRITE bit for user mappings when it sets or clears the hardware PTE_RDONLY accordingly. The kernel ptes are written directly without any modification, relying solely on the protection bits in macros like PAGE_KERNEL. However, modifying kernel pte attributes via pte_wrprotect() would be ignored by set_pte_at(). Since pte_wrprotect() does not set PTE_RDONLY (it only clears PTE_WRITE), the new permission is not taken into account. This patch changes set_pte_at() to adjust the read-only permission for kernel ptes as well. As a side effect, existing PROT_* definitions used for kernel ioremap*() need to include PTE_DIRTY | PTE_WRITE. (additionally, white space fix for PTE_KERNEL_ROX) Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-04-20  arm64: replace read_lock to rcu lock in call_step_hook  (Yang Shi)
commit cf0a25436f05753aca5151891aea4fd130556e2a upstream. BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917 in_atomic(): 1, irqs_disabled(): 128, pid: 383, name: sh Preemption disabled at:[<ffff800000124c18>] kgdb_cpu_enter+0x158/0x6b8 CPU: 3 PID: 383 Comm: sh Tainted: G W 4.1.13-rt13 #2 Hardware name: Freescale Layerscape 2085a RDB Board (DT) Call trace: [<ffff8000000885e8>] dump_backtrace+0x0/0x128 [<ffff800000088734>] show_stack+0x24/0x30 [<ffff80000079a7c4>] dump_stack+0x80/0xa0 [<ffff8000000bd324>] ___might_sleep+0x18c/0x1a0 [<ffff8000007a20ac>] __rt_spin_lock+0x2c/0x40 [<ffff8000007a2268>] rt_read_lock+0x40/0x58 [<ffff800000085328>] single_step_handler+0x38/0xd8 [<ffff800000082368>] do_debug_exception+0x58/0xb8 Exception stack(0xffff80834a1e7c80 to 0xffff80834a1e7da0) 7c80: ffffff9c ffffffff 92c23ba0 0000ffff 4a1e7e40 ffff8083 001bfcc4 ffff8000 7ca0: f2000400 00000000 00000000 00000000 4a1e7d80 ffff8083 0049501c ffff8000 7cc0: 00005402 00000000 00aaa210 ffff8000 4a1e7ea0 ffff8083 000833f4 ffff8000 7ce0: ffffff9c ffffffff 92c23ba0 0000ffff 4a1e7ea0 ffff8083 001bfcc0 ffff8000 7d00: 4a0fc400 ffff8083 00005402 00000000 4a1e7d40 ffff8083 00490324 ffff8000 7d20: ffffff9c 00000000 92c23ba0 0000ffff 000a0000 00000000 00000000 00000000 7d40: 00000008 00000000 00080000 00000000 92c23b8b 0000ffff 92c23b8e 0000ffff 7d60: 00000038 00000000 00001cb2 00000000 00000005 00000000 92d7b498 0000ffff 7d80: 01010101 01010101 92be9000 0000ffff 00000000 00000000 00000030 00000000 [<ffff8000000833f4>] el1_dbg+0x18/0x6c This issue is similar with 62c6c61("arm64: replace read_lock to rcu lock in call_break_hook"), but comes to single_step_handler. This also solves kgdbts boot test silent hang issue on 4.4 -rt kernel. Signed-off-by: Yang Shi <yang.shi@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
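A sketch of the reworked hook walker (registration would correspondingly use list_add_rcu()/synchronize_rcu(); the early exit on DBG_HOOK_HANDLED is assumed from the surrounding code):

  static int call_step_hook(struct pt_regs *regs, unsigned int esr)
  {
          struct step_hook *hook;
          int retval = DBG_HOOK_ERROR;

          rcu_read_lock();        /* was: read_lock(&step_hook_lock) */
          list_for_each_entry_rcu(hook, &step_hook, node) {
                  retval = hook->fn(regs, esr);
                  if (retval == DBG_HOOK_HANDLED)
                          break;
          }
          rcu_read_unlock();

          return retval;
  }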
2016-04-20  arm64: opcodes.h: Add arm big-endian config options before including arm header  (James Morse)
commit a6002ec5a8c68e69706b2efd6db6d682d0ab672c upstream. arm and arm64 use different config options to specify big endian. This needs taking into account when including code/headers between the two architectures. A case in point is PAN, which uses the __instr_arm() macro to output instructions. The macro comes from opcodes.h, which lives under arch/arm. On a big-endian build the mismatched config options mean the instruction isn't byte swapped correctly, resulting in undefined instruction exceptions during boot: | alternatives: patching kernel code | kdevtmpfs[87]: undefined instruction: pc=ffffffc0004505b4 | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c | kdevtmpfs[87]: undefined instruction: pc=ffffffc00076231c | Internal error: Oops - undefined instruction: 0 [#1] SMP | Modules linked in: | CPU: 0 PID: 87 Comm: kdevtmpfs Not tainted 4.1.16+ #5 | Hardware name: Hisilicon PhosphorHi1382 EVB (DT) | task: ffffffc336591700 ti: ffffffc3365a4000 task.ti: ffffffc3365a4000 | PC is at dump_instr+0x68/0x100 | LR is at do_undefinstr+0x1d4/0x2a4 | pc : [<ffffffc00076231c>] lr : [<ffffffc0000811d4>] pstate: 604001c5 | sp : ffffffc3365a6450 Reported-by: Hanjun Guo <guohanjun@huawei.com> Tested-by: Xuefeng Wang <wxf.wang@hisilicon.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
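A sketch of the header fix: translate the arm64 big-endian option into the name the shared 32-bit header expects before including it.

  /* arch/arm64/include/asm/opcodes.h (sketch) */
  #ifdef CONFIG_CPU_BIG_ENDIAN
  #define CONFIG_CPU_ENDIAN_BE8 CONFIG_CPU_BIG_ENDIAN
  #endif

  #include <../../arm/include/asm/opcodes.h>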
2016-03-16  arm64: account for sparsemem section alignment when choosing vmemmap offset  (Ard Biesheuvel)
commit 36e5cd6b897e17d03008f81e075625d8e43e52d0 upstream. Commit dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region") fixed an issue where the struct page array would overflow into the adjacent virtual memory region if system RAM was placed so high up in physical memory that its addresses were not representable in the build time configured virtual address size. However, the fix failed to take into account that the vmemmap region needs to be relatively aligned with respect to the sparsemem section size, so that a sequence of page structs corresponding with a sparsemem section in the linear region appears naturally aligned in the vmemmap region. So round up vmemmap to sparsemem section size. Since this essentially moves the projection of the linear region up in memory, also revert the reduction of the size of the vmemmap region. Fixes: dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region") Tested-by: Mark Langsdorf <mlangsdo@redhat.com> Tested-by: David Daney <david.daney@cavium.com> Tested-by: Robert Richter <rrichter@cavium.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-03-09  arm64: vmemmap: use virtual projection of linear region  (Ard Biesheuvel)
commit dfd55ad85e4a7fbaa82df12467515ac3c81e8a3e upstream. Commit dd006da21646 ("arm64: mm: increase VA range of identity map") made some changes to the memory mapping code to allow physical memory to reside at an offset that exceeds the size of the virtual mapping. However, since the size of the vmemmap area is proportional to the size of the VA area, but it is populated relative to the physical space, we may end up with the struct page array being mapped outside of the vmemmap region. For instance, on my Seattle A0 box, I can see the following output in the dmesg log. vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000 ( 8 GB maximum) 0xffffffbfc0000000 - 0xffffffbfd0000000 ( 256 MB actual) We can fix this by deciding that the vmemmap region is not a projection of the physical space, but of the virtual space above PAGE_OFFSET, i.e., the linear region. This way, we are guaranteed that the vmemmap region is of sufficient size, and we can even reduce the size by half. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-03-09  arm/arm64: KVM: Fix ioctl error handling  (Michael S. Tsirkin)
commit 4cad67fca3fc952d6f2ed9e799621f07666a560f upstream. Calling return copy_to_user(...) in an ioctl will not do the right thing if there's a pagefault: copy_to_user returns the number of bytes not copied in this case. Fix up kvm to do return copy_to_user(...) ? -EFAULT : 0; everywhere. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
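The before/after pattern the commit message spells out, as a sketch (argp and reg_list are placeholder names):

  /* before: a partial copy leaks "bytes not copied" as a positive return */
  return copy_to_user(argp, &reg_list, sizeof(reg_list));

  /* after: any partial copy is reported as -EFAULT */
  return copy_to_user(argp, &reg_list, sizeof(reg_list)) ? -EFAULT : 0;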