2014-08-05  Merge branches 'fixes' and 'misc' into for-next  (Russell King)
Conflicts:
	arch/arm/kernel/iwmmxt.S
	arch/arm/mm/cache-l2x0.c
	arch/arm/mm/mmu.c
2014-08-02  ARM: 8124/1: don't enter kgdb when userspace executes a kgdb break instruction  (Omar Sandoval)
The kgdb breakpoint hooks (kgdb_brk_fn and kgdb_compiled_brk_fn) should only be entered when a kgdb break instruction is executed from the kernel. Otherwise, if kgdb is enabled, a userspace program can cause the kernel to drop into the debugger by executing either KGDB_BREAKINST or KGDB_COMPILED_BREAK. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Omar Sandoval <osandov@osandov.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
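A minimal sketch of the guard this implies, assuming the hook shape from arch/arm/kernel/kgdb.c (the exact mainline fix may differ in detail):

#include <linux/kgdb.h>
#include <linux/ptrace.h>
#include <linux/signal.h>

/*
 * Sketch only: refuse to enter the debugger when the break instruction
 * was executed in user mode, so userspace cannot drop the machine into
 * kgdb by executing KGDB_BREAKINST or KGDB_COMPILED_BREAK.
 */
static int kgdb_brk_fn(struct pt_regs *regs, unsigned int instr)
{
	if (user_mode(regs))	/* not a kernel-mode breakpoint: ignore it */
		return -1;

	kgdb_handle_exception(1, SIGTRAP, 0, regs);
	return 0;
}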
2014-08-02  ARM: idmap: add identity mapping usage note  (Russell King)
Add a note about the usage of the identity mapping; we do not support accesses outside of the identity map region and kernel image while a CPU is using the identity map. This is because the identity mapping may overwrite vmalloc space, IO mappings, the vectors pages, etc. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02  ARM: add comments to the early page table remap code  (Russell King)
Add further comments to the early page table remap code to explain what the code is doing, why it is doing it, but more importantly to explain that the code is not architecturally compliant and is squarely in "UNPREDICTABLE" behaviour territory. Add a warning and tainting of the kernel too. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02  ARM: 8122/1: smp_scu: enable SCU standby support  (Shawn Guo)
With SCU standby enabled, SCU CLK will be turned off when all processors are in WFI mode, and turned back on when any processor leaves WFI mode. This behavior is preferable in terms of power efficiency while the system is idle, so let's set the SCU standby bit in scu_enable(). Cortex-A9 revisions earlier than r2p0 have no standby bit in the SCU, so we need to skip setting the bit for those. Signed-off-by: Shawn Guo <shawn.guo@freescale.com> Acked-by: Soren Brinkmann <soren.brinkmann@xilinx.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
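A sketch of the idea; treat the exact bit position and the is_early_a9 helper as assumptions (the real code checks the MIDR revision):

#include <linux/io.h>
#include <linux/types.h>

#define SCU_CTRL		0x00
#define SCU_ENABLE		(1 << 0)
#define SCU_STANDBY_ENABLE	(1 << 5)	/* assumed bit position */

/* Turn on SCU standby together with the SCU itself, except on pre-r2p0
 * Cortex-A9 where the standby bit does not exist. */
static void scu_enable_sketch(void __iomem *scu_base, bool is_early_a9)
{
	u32 ctrl = readl_relaxed(scu_base + SCU_CTRL);

	if (!is_early_a9)
		ctrl |= SCU_STANDBY_ENABLE;	/* SCU CLK gates off when all CPUs are in WFI */
	ctrl |= SCU_ENABLE;

	writel_relaxed(ctrl, scu_base + SCU_CTRL);
}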
2014-08-02  ARM: 8121/1: smp_scu: use macro for SCU enable bit  (Shawn Guo)
Use macro instead of magic number for SCU enable bit. Signed-off-by: Shawn Guo <shawn.guo@freescale.com> Acked-by: Soren Brinkmann <soren.brinkmann@xilinx.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02  ARM: 8120/1: crypto: sha512: add ARM NEON implementation  (Jussi Kivilinna)
This patch adds ARM NEON assembly implementation of SHA-512 and SHA-384 algorithms. tcrypt benchmark results on Cortex-A8, sha512-generic vs sha512-neon-asm:

    block-size  bytes/update  old-vs-new
          16          16        2.99x
          64          16        2.67x
          64          64        3.00x
         256          16        2.64x
         256          64        3.06x
         256         256        3.33x
        1024          16        2.53x
        1024         256        3.39x
        1024        1024        3.52x
        2048          16        2.50x
        2048         256        3.41x
        2048        1024        3.54x
        2048        2048        3.57x
        4096          16        2.49x
        4096         256        3.42x
        4096        1024        3.56x
        4096        4096        3.59x
        8192          16        2.48x
        8192         256        3.42x
        8192        1024        3.56x
        8192        4096        3.60x
        8192        8192        3.60x

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02  ARM: 8119/1: crypto: sha1: add ARM NEON implementation  (Jussi Kivilinna)
This patch adds ARM NEON assembly implementation of SHA-1 algorithm. tcrypt benchmark results on Cortex-A8, sha1-arm-asm vs sha1-neon-asm:

    block-size  bytes/update  old-vs-new
          16          16        1.04x
          64          16        1.02x
          64          64        1.05x
         256          16        1.03x
         256          64        1.04x
         256         256        1.30x
        1024          16        1.03x
        1024         256        1.36x
        1024        1024        1.52x
        2048          16        1.03x
        2048         256        1.39x
        2048        1024        1.55x
        2048        2048        1.59x
        4096          16        1.03x
        4096         256        1.40x
        4096        1024        1.57x
        4096        4096        1.62x
        8192          16        1.03x
        8192         256        1.40x
        8192        1024        1.58x
        8192        4096        1.63x
        8192        8192        1.63x

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02  ARM: 8118/1: crypto: sha1/make use of common SHA-1 structures  (Jussi Kivilinna)
Common SHA-1 structures are defined in <crypto/sha.h> for code sharing. This patch changes SHA-1/ARM glue code to use these structures. Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-29  ARM: 8113/1: remove remaining definitions of PLAT_PHYS_OFFSET from <mach/memory.h>  (Uwe Kleine-König)
The platforms selecting NEED_MACH_MEMORY_H defined the start address of their physical memory in the respective <mach/memory.h>. With ARM_PATCH_PHYS_VIRT=y (which is quite common today) this is useless, though, because the definition isn't used; the value is determined dynamically instead. So remove the definitions from all <mach/memory.h> and provide the Kconfig symbol PHYS_OFFSET with the respective defaults in case ARM_PATCH_PHYS_VIRT isn't enabled. This allows dropping the dependency of PHYS_OFFSET on !NEED_MACH_MEMORY_H, which previously prevented compiling an Integrator nommu kernel. (CONFIG_PAGE_OFFSET, which has "default PHYS_OFFSET if !MMU", expanded to "0x" because CONFIG_PHYS_OFFSET doesn't exist as INTEGRATOR selects NEED_MACH_MEMORY_H.) Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-29  ARM: 8115/1: LPAE: reduce damage caused by idmap to virtual memory layout  (Konstantin Khlebnikov)
On LPAE, each level 1 (pgd) page table entry maps 1GiB, and the level 2 (pmd) entries map 2MiB. When the identity mapping is created on LPAE, the pgd pointers are copied from the swapper_pg_dir. If we find that we need to modify the contents of a pmd, we allocate a new empty pmd table and insert it into the appropriate 1GB slot, before then filling it with the identity mapping. However, if the 1GB slot covers the kernel lowmem mappings, we obliterate those mappings. When replacing a PMD, first copy the old PMD contents to the new PMD, so that we preserve the existing mappings, particularly the mappings of the kernel itself. [rewrote commit message and added code comment -- rmk] Fixes: ae2de101739c ("ARM: LPAE: Add identity mapping support for the 3-level page table format") Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
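A simplified sketch of the fix, loosely following the kernel's idmap code (names and error handling trimmed; treat the details as illustrative):

#include <linux/mm_types.h>
#include <linux/string.h>
#include <asm/pgalloc.h>

/* On LPAE, when the identity map needs its own pmd table under a 1GiB pgd
 * slot, start from a copy of the existing pmd table so the kernel lowmem
 * mappings living in that slot are preserved. */
static void idmap_replace_pmd_sketch(pud_t *pud, unsigned long addr)
{
	pmd_t *new_pmd = pmd_alloc_one(&init_mm, addr);
	pmd_t *old_pmd = pmd_offset(pud, addr);

	if (!new_pmd)
		return;

	/* preserve whatever the swapper already maps in this 1GiB slot */
	memcpy(new_pmd, old_pmd, PTRS_PER_PMD * sizeof(pmd_t));

	pud_populate(&init_mm, pud, new_pmd);
	/* ...then fill new_pmd with the identity-mapped sections... */
}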
2014-07-29  ARM: fix alignment of keystone page table fixup  (Russell King)
If init_mm.brk is not section aligned, the LPAE fixup code will miss updating the final PMD. Fix this by aligning map_end. Fixes: a77e0c7b2774 ("ARM: mm: Recreate kernel mappings in early_paging_init()") Cc: <stable@vger.kernel.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
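In code, the fix amounts to rounding the end of the kernel mapping up to a section boundary before the PMD walk; a sketch, assuming PMD_SIZE is the 2MiB LPAE section size and using variable names from the early paging fixup loosely:

#include <linux/kernel.h>	/* ALIGN() */
#include <linux/mm_types.h>	/* init_mm */
#include <asm/pgtable.h>	/* PMD_MASK, PMD_SIZE */

/* Round map_end up so the PMD covering a partially used final section is
 * also rewritten by the LPAE fixup loop. */
static void kernel_map_range_sketch(unsigned long *map_start, unsigned long *map_end)
{
	*map_start = init_mm.start_code & PMD_MASK;
	*map_end   = ALIGN(init_mm.brk, PMD_SIZE);
}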
2014-07-24  ARM: 8111/1: Enable erratum 798181 for Broadcom Brahma-B15  (Gregory Fong)
Broadcom Brahma-B15 (r0p0..r0p2) is also affected by Cortex-A15 erratum 798181, so enable the workaround for Brahma-B15. Signed-off-by: Gregory Fong <gregory.0xf0@gmail.com> Acked-by: Marc Carino <marc.ceeeee@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Brian Norris <computersforpeace@gmail.com> Cc: Rob Herring <rob.herring@calxeda.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-24  ARM: 8112/1: only select ARM_PATCH_PHYS_VIRT if MMU is enabled  (Uwe Kleine-König)
This fixes the following warning:

    warning: (ARCH_MULTIPLATFORM && ARCH_INTEGRATOR && ARCH_SHMOBILE_LEGACY)
    selects ARM_PATCH_PHYS_VIRT which has unmet direct dependencies
    (!XIP_KERNEL && MMU && (!ARCH_REALVIEW || !SPARSEMEM))

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-24  ARM: 8110/1: do CPU-specific init for Broadcom Brahma15 cores  (Marc Carino)
Perform any CPU-specific initialization required on the Broadcom Brahma-15 core. Signed-off-by: Marc Carino <marc.ceeeee@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Brian Norris <computersforpeace@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-24  ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE  (Steven Capper)
For LPAE, we have the following means for encoding writable or dirty ptes:

                                 L_PTE_DIRTY   L_PTE_RDONLY
    !pte_dirty && !pte_write          0              1
    !pte_dirty &&  pte_write          0              1
     pte_dirty && !pte_write          1              1
     pte_dirty &&  pte_write          1              0

So we can't distinguish between writeable clean ptes and read only ptes. This can cause problems with ptes being incorrectly flagged as read only when they are writeable but not dirty. This patch renumbers L_PTE_RDONLY from AP[2] to a software bit #58, and adds additional logic to set AP[2] whenever the pte is read only or not dirty. That way we can distinguish between clean writeable ptes and read only ptes. HugeTLB pages will use this new logic automatically. We need to add some logic to Transparent HugePages to ensure that they correctly interpret the revised pgprot permissions (L_PTE_RDONLY has moved and no longer matches PMD_SECT_AP2). In the process of revising THP, the names of the PMD software bits have been prefixed with L_ to make them easier to distinguish from their hardware bit counterparts. Signed-off-by: Steve Capper <steve.capper@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
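A sketch of the new rule in C form (the kernel implements it in the pte-setting assembly; the bit positions below follow the commit's description but are shown purely for illustration):

#include <linux/types.h>

#define L_PTE_DIRTY_SW	(1ULL << 55)	/* Linux software dirty bit */
#define L_PTE_RDONLY_SW	(1ULL << 58)	/* Linux software read-only bit (new home) */
#define PTE_HW_AP2	(1ULL << 7)	/* hardware AP[2]: 1 = read-only */

/* Derive the hardware read-only bit when a Linux pte is written out: a
 * clean-but-writable pte is mapped read-only so the first write faults and
 * the kernel can mark the page dirty. */
static inline u64 hw_pte_from_linux_pte(u64 pte)
{
	if ((pte & L_PTE_RDONLY_SW) || !(pte & L_PTE_DIRTY_SW))
		pte |= PTE_HW_AP2;	/* read-only, or clean: trap writes */
	else
		pte &= ~PTE_HW_AP2;	/* dirty and writable: allow writes */
	return pte;
}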
2014-07-24  ARM: 8108/1: mm: Introduce {pte,pmd}_isset and {pte,pmd}_isclear  (Steven Capper)
Long descriptors on ARM are 64 bits, and some pte functions such as pte_dirty return a bitwise-and of a flag with the pte value. If the flag to be tested resides in the upper 32 bits of the pte, then we run into the danger of the result being dropped if downcast. For example: gather_stats(page, md, pte_dirty(*pte), 1); where pte_dirty(*pte) is downcast to an int. This patch introduces a new macro pte_isset which performs the bitwise and, then performs a double logical invert (where needed) to ensure predictable downcasting. The logical inverse pte_isclear is also introduced. Equivalent pmd functions for Transparent HugePages have also been added. Signed-off-by: Steve Capper <steve.capper@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
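The helpers have roughly this shape (pte_val() and L_PTE_DIRTY come from the existing ARM pgtable headers; the mainline macros may differ cosmetically):

/* Only apply the double logical negation when the tested flag sits above
 * bit 31, so a 64-bit LPAE pte flag can never be silently truncated when
 * the result is used as an int. */
#define pte_isset(pte, val)	((u32)(val) == (val) ?		\
				 pte_val(pte) & (val) :		\
				 !!(pte_val(pte) & (val)))
#define pte_isclear(pte, val)	(!(pte_val(pte) & (val)))

/* e.g. pte_dirty() can then be written as: */
#define pte_dirty(pte)		(pte_isset((pte), L_PTE_DIRTY))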
2014-07-18  ARM: 8100/1: Fix preemption disable in iwmmxt_task_enable()  (Sebastian Hesselbarth)
commit 431a84b1a4f7d1a0085d5b91330c5053cc8e8b12 ("ARM: 8034/1: Disable preemption in iwmmxt_task_enable()") introduced the macros {inc,dec}_preempt_count to iwmmxt_task_enable to make it run with preemption disabled. Unfortunately, other functions in iwmmxt.S also use the concan_{save,dump,load} sections located in iwmmxt_task_enable() to deal with the iWMMXt coprocessor. This causes an unbalanced preempt_count due to excessive dec_preempt_count and destroyed return addresses in callers of concan_ labels due to a register collision:

    Linux version 3.16.0-rc3-00062-gd92a333-dirty (jef@armhf) (gcc version 4.8.3 (Debian 4.8.3-4) ) #5 PREEMPT Thu Jul 3 19:46:39 CEST 2014
    CPU: ARMv7 Processor [560f5815] revision 5 (ARMv7), cr=10c5387d
    CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
    Machine model: SolidRun CuBox
    ...
    PJ4 iWMMXt v2 coprocessor enabled.
    ...
    Unable to handle kernel paging request at virtual address fffffffe
    pgd = bb25c000
    [fffffffe] *pgd=3bfde821, *pte=00000000, *ppte=00000000
    Internal error: Oops: 80000007 [#1] PREEMPT ARM
    Modules linked in:
    CPU: 0 PID: 62 Comm: startpar Not tainted 3.16.0-rc3-00062-gd92a333-dirty #5
    task: bb230b80 ti: bb256000 task.ti: bb256000
    PC is at 0xfffffffe
    LR is at iwmmxt_task_copy+0x44/0x4c
    pc : [<fffffffe>]    lr : [<800130ac>]    psr: 40000033
    sp : bb257de8  ip : 00000013  fp : bb257ea4
    r10: bb256000  r9 : fffffdfe  r8 : 76e898e6
    r7 : bb257ec8  r6 : bb256000  r5 : 7ea12760  r4 : 000000a0
    r3 : ffffffff  r2 : 00000003  r1 : bb257df8  r0 : 00000000
    Flags: nZcv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
    Control: 10c5387d  Table: 3b25c019  DAC: 00000015
    Process startpar (pid: 62, stack limit = 0xbb256248)

This patch fixes the issue by moving concan_{save,dump,load} into separate code sections and making iwmmxt_task_enable() call them in the same way the other functions use the concan_ symbols. The test for valid ownership is moved to concan_save and is safe for the other user of it, iwmmxt_task_disable(). The register collision is also resolved by moving the concan_ symbols, as {inc,dec}_preempt_count are now local to iwmmxt_task_enable(). Fixes: 431a84b1a4f7 ("ARM: 8034/1: Disable preemption in iwmmxt_task_enable()") Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Jean-Francois Moine <moinejf@free.fr> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8103/1: save/restore Cortex-A9 CP15 registers on suspend/resume  (Shawn Guo)
The CP15 diagnostic register holds ARM errata bits on Cortex-A9, so it needs to be saved/restored on suspend/resume. Otherwise, the errata workarounds lose their effect when the diagnostic register bits are lost across a suspend/resume cycle. The CP15 power control register of the Cortex-A9 has the same problem. This patch adds a couple of Cortex-A9 specific suspend/resume functions to save/restore these two CP15 registers across the suspend/resume cycle. Signed-off-by: Shawn Guo <shawn.guo@freescale.com> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
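A C rendering of what the new suspend/resume path saves and restores (the actual code lives in the ARMv7 proc assembly; the coprocessor encodings follow the Cortex-A9 TRM):

#include <linux/types.h>

static u32 ca9_diag, ca9_power;

/* Save the Cortex-A9 diagnostic and power control registers before suspend... */
static void ca9_cp15_save(void)
{
	asm volatile("mrc p15, 0, %0, c15, c0, 1" : "=r" (ca9_diag));	/* diagnostic */
	asm volatile("mrc p15, 0, %0, c15, c0, 0" : "=r" (ca9_power));	/* power control */
}

/* ...and write them back on resume so the errata workaround bits survive. */
static void ca9_cp15_restore(void)
{
	asm volatile("mcr p15, 0, %0, c15, c0, 1" : : "r" (ca9_diag));
	asm volatile("mcr p15, 0, %0, c15, c0, 0" : : "r" (ca9_power));
}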
2014-07-18  ARM: 8098/1: mcs lock: implement wfe-based polling for MCS locking  (Will Deacon)
This patch introduces a wfe-based polling loop for spinning on contended MCS locks and waking up corresponding waiters when the lock is released. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
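The arch hooks are small; a sketch very close in spirit to what this adds (smp_load_acquire/smp_store_release, wfe() and dsb_sev() are existing ARM primitives):

#include <asm/barrier.h>
#include <asm/spinlock.h>

/* Waiter: sleep in WFE until the lock holder's store (plus SEV) wakes us. */
#define arch_mcs_spin_lock_contended(lock)				\
do {									\
	/* Ensure prior stores are observed before we enter wfe. */	\
	smp_mb();							\
	while (!(smp_load_acquire(lock)))				\
		wfe();							\
} while (0)

/* Releaser: publish the handoff and send an event to wake the waiter. */
#define arch_mcs_spin_unlock_contended(lock)				\
do {									\
	smp_store_release(lock, 1);					\
	dsb_sev();							\
} while (0)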
2014-07-18  ARM: 8091/2: add get_user() support for 8 byte types  (Daniel Thompson)
Recent contributions, including to DRM and binder, introduce 64-bit values in their interfaces. A common motivation for this is to allow the same ABI for 32- and 64-bit userspaces (and therefore also a shared ABI for 32/64 hybrid userspaces). Anyhow, the developers would like to avoid gotchas like having to use copy_from_user(). This feature is already implemented on x86-32 and the majority of other 32-bit architectures. The current list of get_user_8 hold-out architectures is: arm, avr32, blackfin, m32r, metag, microblaze, mn10300, sh.

Credit: My name sits rather uneasily at the top of this patch. The v1 and v2 versions of the patch were written by Rob Clark and to produce v4 I mostly copied code from Russell King and H. Peter Anvin. However I have mangled the patch sufficiently that *blame* is rightfully mine even if credit should be more widely shared.

Changelog:
    v5: updated to use the ret macro (requested by Russell King)
    v4: remove an inlined add on big endian systems (spotted by Russell King), used __ARMEB__ rather than BIG_ENDIAN (to match rest of file), cleared r3 on EFAULT during __get_user_8.
    v3: fix a couple of checkpatch issues
    v2: pass correct size to check_uaccess, and better handling of narrowing double word read with __get_user_xb() (Russell King's suggestion)
    v1: original

Signed-off-by: Rob Clark <robdclark@gmail.com> Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
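For callers the win looks like this (a usage sketch, not code from the patch):

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/* With 8-byte get_user() support, a driver can fetch a 64-bit value from
 * userspace directly instead of falling back to copy_from_user() for just
 * that one field. */
static int read_user_u64(const void __user *uptr, u64 *out)
{
	u64 val;

	if (get_user(val, (const u64 __user *)uptr))
		return -EFAULT;

	*out = val;
	return 0;
}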
2014-07-18  ARM: 8097/1: unistd.h: relocate comments back to place  (Baruch Siach)
Commit cb8db5d45 (UAPI: (Scripted) Disintegrate arch/arm/include/asm) moved these syscall comments out of their context into the UAPI headers. Fix this. Fixes: cb8db5d4578a ("UAPI: (Scripted) Disintegrate arch/arm/include/asm") Signed-off-by: Baruch Siach <baruch@tkos.co.il> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8096/1: Describe required sort order for textofs-y (TEXT_OFFSET)  (Daniel Thompson)
The section of the makefile that determines the TEXT_OFFSET is sorted by address so that, in multi-arch kernel builds, the architecture with the most stringent requirements for the kernel base address gets to define TEXT_OFFSET. The comment should reflect that. Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8090/1: add revision info for PL310 errata 588369 and 727915  (Shawn Guo)
Add revision info for PL310_ERRATA_588369 and PL310_ERRATA_727915 to help people understand if they need to enable the errata for their hardware. Signed-off-by: Shawn Guo <shawn.guo@freescale.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8089/1: cpu_pj4b_suspend_size should base on cpu_v7_suspend_size  (Shawn Guo)
Since pj4b suspend/resume routines are implemented based on generic ARMv7 ones, instead of hard-coding cpu_pj4b_suspend_size, we should have it be cpu_v7_suspend_size plus pj4b specific bytes. Otherwise, if cpu_v7_suspend_size gets updated alone, the pj4b suspend/resume will likely be broken. While at it, fix the comments in cpu_pj4b_do_resume, as we're restoring CP15 registers rather than saving in there. Signed-off-by: Shawn Guo <shawn.guo@freescale.com> Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8088/1: vmlinux.lds.S: drop redundant .comment  (Mark Rutland)
Commit 78d7530ac3 ("ARM: Clean up linker script using new linker script macros.") modified the arm kernel linker script to use the STABS_DEBUG macro, but left a .comment section definition. As STABS_DEBUG defines the .comment section in an identical way, the second section definition is redundant and can be removed. This patch removes the redundant .comment section definition. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8075/1: oprofile: Use of arm_get_current_stackframe  (Nikolay Borisov)
Use the newly introduced API so that FP is correctly referenced from either R7/R11 based on whether we are running in THUMB2 mode or not. Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Robert Richter <rric@kernel.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8074/1: traps: Make use of the frame_pointer macro  (Nikolay Borisov)
Use the newly-introduced frame_pointer macro to extract the correct FP based on whether we are in THUMB2 mode or not. Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8073/1: unwind: Use arm_get_current_stackframe  (Nikolay Borisov)
Make the unwind code use the correct API so that the frame pointer is extracted from the correct register. Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8072/1: time: Make use of arm_get_current_stackframe  (Nikolay Borisov)
Make use of the arm_get_current_stackframe api so that the frame pointer is correctly referenced in THUMB2 mode Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8071/1: perf: Make perf use arm_get_current_stackframe  (Nikolay Borisov)
Make the perf backend use the API so that it correctly references the FP when in THUMB2 mode Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8070/1: Introduce arm_get_current_stackframe()  (Nikolay Borisov)
Currently there are numerous places where "struct pt_regs" is used to populate "struct stackframe"; however, none of those locations considers the situation where the kernel might be compiled in THUMB2 mode, in which case the frame pointer member of pt_regs becomes ARM_r7 instead of ARM_fp (r11). Document this idiosyncrasy in the definition of "struct stackframe". The easiest solution is to introduce a new function (in the spirit of https://groups.google.com/forum/#!topic/linux.kernel/dA2YuUcSpZ4) which hides the complexity of initializing the stackframe struct from pt_regs. Also implement a macro frame_pointer(regs) that returns the correct register, so that we can use it in cases where we just require the frame pointer and not a whole struct stackframe. Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Robert Richter <rric@kernel.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
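The two additions have roughly this shape (close to the description above; the exact mainline definitions may differ slightly):

#include <asm/ptrace.h>
#include <asm/stacktrace.h>

/* In a THUMB2 kernel the frame pointer lives in r7, otherwise in fp (r11). */
#ifdef CONFIG_THUMB2_KERNEL
#define frame_pointer(regs)	((regs)->ARM_r7)
#else
#define frame_pointer(regs)	((regs)->ARM_fp)
#endif

/* Populate a struct stackframe from pt_regs, picking the right FP register. */
static inline void arm_get_current_stackframe(struct pt_regs *regs,
					      struct stackframe *frame)
{
	frame->fp = frame_pointer(regs);
	frame->sp = regs->ARM_sp;
	frame->lr = regs->ARM_lr;
	frame->pc = regs->ARM_pc;
}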
2014-07-18  ARM: 8079/1: zImage: identify kernel endianness  (Nicolas Pitre)
With patch #8067/1 ("zImage: ensure header in LE format for BE8 kernels") applied, it is no longer possible to determine the endianness of a compiled kernel image. This normally shouldn't matter to the boot environment, except for those cases where the selection of a ramdisk or root filesystem with a matching endianness has to be automated. Let's add a flag to the zImage header indicating the actual endianness. Four bytes from offset 0x30 can be interpreted as follows:

    04 03 02 01    big endian kernel
    01 02 03 04    little endian kernel

Anything else should be interpreted as "unknown", in which case it is most likely that patch #8067/1 was not applied either and the zImage magic number at offset 0x24 could be used instead to determine endianness. No zImage before this patch ever produced 0x01020304 nor 0x04030201 at offset 0x30 so there is no confusion possible. Signed-off-by: Nicolas Pitre <nico@linaro.org> Acked-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
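A boot-environment-side check could look like this (host/bootloader C, not kernel code; the byte values follow the table above):

#include <stdint.h>
#include <string.h>

enum zimage_endian { ZIMAGE_UNKNOWN, ZIMAGE_LITTLE, ZIMAGE_BIG };

/* Classify a zImage by the four endianness flag bytes at offset 0x30. */
static enum zimage_endian zimage_endianness(const uint8_t *zimage)
{
	static const uint8_t big[4]    = { 0x04, 0x03, 0x02, 0x01 };
	static const uint8_t little[4] = { 0x01, 0x02, 0x03, 0x04 };
	const uint8_t *flag = zimage + 0x30;

	if (!memcmp(flag, big, sizeof(big)))
		return ZIMAGE_BIG;
	if (!memcmp(flag, little, sizeof(little)))
		return ZIMAGE_LITTLE;
	return ZIMAGE_UNKNOWN;	/* likely built without patch #8067/1 as well */
}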
2014-07-18  ARM: alignment: save last kernel aligned fault location  (Russell King)
Save and report (via the procfs file) the last kernel unaligned fault location. This allows us to trivially inspect where the last fault happened for cases which we don't expect to occur. Since we expect the kernel to generate misalignment faults (due to the networking layer), even when warnings are enabled, we don't log them for the kernel. Tested-by: Tony Lindgren <tony@atomide.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+  (Russell King)
ARMv6 and greater introduced a new instruction ("bx") which can be used to return from function calls. Recent CPUs perform better when the "bx lr" instruction is used rather than the "mov pc, lr" instruction, and its use is strongly recommended by the ARM architecture manual (section A.4.1.1). We provide a new macro "ret", with all its variants for the condition code, which resolves to the appropriate instruction. Rather than doing this piecemeal and missing some instances, change all the "mov pc" instances to use the new macro, with the exception of the "movs" instruction and the kprobes code. This allows us to detect the "mov pc, lr" case and fix it up - and also gives us the possibility of deploying this for other registers depending on the CPU selection. Reported-by: Will Deacon <will.deacon@arm.com> Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1 Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood Tested-by: Shawn Guo <shawn.guo@freescale.com> Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385 Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: make it easier to check the CPU part number correctly  (Russell King)
Ensure that platform maintainers check the CPU part number in the right manner: the CPU part number is meaningless without also checking the CPU implement(e|o)r (choose your preferred spelling!) Provide an interface which returns both the implementer and part number together, and update the definitions to include the implementer. Mark the old function as being deprecated... indeed, using the old function with the definitions will now always evaluate as false, so people must update their un-merged code to the new function. While this could be avoided by adding new definitions, we'd also have to create new names for them which would be awkward. Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
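Intended usage, sketched (read_cpuid_part() is the new interface named above; the constant is shown here only as an illustration of implementer and part number packed together):

#include <linux/types.h>
#include <asm/cputype.h>

#define EXAMPLE_PART_CORTEX_A15	0x4100c0f0	/* 0x41 = ARM Ltd, 0xc0f = Cortex-A15 (illustrative) */

/* read_cpuid_part() keeps only the implementer and part number fields of
 * the MIDR, so a single comparison checks both at once. */
static bool cpu_is_cortex_a15_sketch(void)
{
	return read_cpuid_part() == EXAMPLE_PART_CORTEX_A15;
}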
2014-07-18  ARM: 8099/1: EXYNOS: Fix MCPM build with SUSPEND=n  (Krzysztof Kozlowski)
Building EXYNOS5420_MCPM with SUSPEND disabled fails:

    arch/arm/mach-exynos/built-in.o: In function `exynos_mcpm_init':
    arch/arm/mach-exynos/mcpm-exynos.c:361: undefined reference to `mcpm_loopback'

exynos_mcpm_init() in mcpm-exynos.c calls mcpm_loopback(), which depends on the cpu_suspend function (ARM_CPU_SUSPEND). Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8083/1: exynos: activate the CCI on boot CPU/cluster using the MCPM loopback  (Nicolas Pitre)
The Chromebook firmware doesn't enable the CCI for the boot cpu, and arguably it shouldn't have to either. Let's have the kernel handle the CCI on its own for the boot CPU the same way it does it for secondary CPUs by using the MCPM loopback. This allows booting all 8 cores on exynos5420-peach-pit, exynos5800-peach-pi and the ARM Chromebook 2. Signed-off-by: Nicolas Pitre <nico@linaro.org> Tested-by: Tushar Behera <tushar.b@samsung.com> Reviewed-by: Kevin Hilman <khilman@linaro.org> Tested-by: Kevin Hilman <khilman@linaro.org> Tested-by: Doug Anderson <dianders@chromium.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8082/1: TC2: test the MCPM loopback during boot  (Nicolas Pitre)
This is not strictly needed on TC2 but still a good idea to exercise that code. Signed-off-by: nicolas Pitre <nico@linaro.org> Reviewed-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8081/1: MCPM: provide infrastructure to allow for MCPM loopback  (Nicolas Pitre)
The kernel already has the responsibility to handle resources such as the CCI when hotplugging CPUs, during the booting of secondary CPUs, and when resuming from suspend/idle. It would be more coherent and less confusing if the CCI for the boot CPU (or cluster) was also initialized by the kernel rather than expecting the firmware/bootloader to do it and only in that case. After all, the kernel has all the necessary code already and the bootloader shouldn't have to care at all. The CCI may be turned on only when the cache is off. Leveraging the CPU suspend code to loop back through the low-level MCPM entry point is all that is needed to properly turn on the CCI from the kernel by using the same code as during secondary boot. Let's provide a generic MCPM loopback function that can be invoked by backend initialization code to set things (CCI or similar) on the boot CPU just as it is done for the other CPUs. Signed-off-by: Nicolas Pitre <nico@linaro.org> Reviewed-by: Kevin Hilman <khilman@linaro.org> Tested-by: Kevin Hilman <khilman@linaro.org> Tested-by: Doug Anderson <dianders@chromium.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18  ARM: 8101/1: mach-iop13xx: fix possible build failure  (Arnd Bergmann)
After applying patch "ARM: 8078/1: get rid of hardcoded assumptions about kernel stack size", the following build failure happens on the iop13xx platform:

    In file included from include/linux/srcu.h:33:0,
                     from include/linux/notifier.h:15,
                     from include/linux/reboot.h:5,
                     from arch/arm/mach-iop13xx/include/mach/iop13xx.h:6,
                     from arch/arm/mach-iop13xx/include/mach/hardware.h:14,
                     from arch/arm/mach-iop13xx/include/mach/memory.h:4,
                     from arch/arm/include/asm/memory.h:24,
                     from arch/arm/include/asm/page.h:163,
                     from arch/arm/include/asm/thread_info.h:17,
                     from include/linux/thread_info.h:54,
                     from include/asm-generic/preempt.h:4,
                     from arch/arm/include/generated/asm/preempt.h:1,
                     from include/linux/preempt.h:18,
                     from include/linux/spinlock.h:50,
                     from include/linux/seqlock.h:35,
                     from include/linux/time.h:5,
                     from include/uapi/linux/timex.h:56,
                     from include/linux/timex.h:56,
                     from include/linux/sched.h:19,
                     from arch/arm/kernel/asm-offsets.c:13:
    include/linux/rcupdate.h: In function '__rcu_read_lock':
    >> include/linux/rcupdate.h:220:2: error: implicit declaration of function 'preempt_disable' [-Werror=implicit-function-declaration]
         preempt_disable();

The problem here is recursive header inclusion, which can be avoided by removing linux/reboot.h from mach/iop13xx.h. linux/reboot.h in include/mach/iop13xx.h is needed only for enum reboot_mode, so it can be replaced with a forward declaration of the enum. Whatever patch "ARM: 8078/1: get rid of hardcoded assumptions about kernel stack size" does, I think it's good to avoid unnecessary header inclusion here in any case. Reported-by: kbuild test robot <fengguang.wu@intel.com> Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
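The header-side change amounts to something like this in mach/iop13xx.h (the prototype shown is illustrative; GCC accepts the enum forward declaration because the type is only used in declarations here):

/* was: #include <linux/reboot.h> */
enum reboot_mode;

void iop13xx_restart(enum reboot_mode mode, const char *cmd);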
2014-07-17  ARM: DMA: ensure that old section mappings are flushed from the TLB  (Russell King)
When setting up the CMA region, we must ensure that the old section mappings are flushed from the TLB before replacing them with page tables, otherwise we can suffer from mismatched aliases if the CPU speculatively prefetches from these mappings at an inopportune time. A mismatched alias can occur when the TLB contains a section mapping, but a subsequent prefetch causes it to load a page table mapping, resulting in the possibility of the TLB containing two matching mappings for the same virtual address region. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
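A sketch of the ordering this requires when a section mapping is replaced (pmd_clear() and flush_tlb_kernel_range() are existing kernel APIs; the surrounding CMA remap code is omitted):

#include <asm/pgtable.h>
#include <asm/tlbflush.h>

/* Clear the old section mapping and flush it from the TLB before the
 * replacement page-table mapping goes in, so a stale section entry can
 * never coexist with the new page-table entry for the same address. */
static void replace_section_mapping_sketch(pmd_t *pmd, unsigned long addr)
{
	pmd_clear(pmd);
	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
	/* ...now it is safe to install the page-table based mapping... */
}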
2014-07-12  Merge branch 'kprobes-test-fixes' of git://git.linaro.org/people/tixy/kernel into fixes  (Russell King)
2014-07-07  ARM: l2c: fix revision checking  (Russell King)
The revision checking in l2c310_enable() was not correct; we were masking the part number rather than the revision number. Fix this to use the correct macro. Fixes: 4374d64933b1 ("ARM: l2c: add automatic enable of early BRESP") Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
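The corrected check boils down to masking the RTL revision field rather than the part-number field before comparing; a sketch, with register offset and mask shown for illustration only (the real definitions live in the cache-l2x0 header):

#include <linux/io.h>
#include <linux/types.h>

#define L2X0_CACHE_ID		0x000
#define L2X0_CACHE_ID_RTL_MASK	0x3f		/* revision: bits [5:0] */

/* Return true if the L2C-310 RTL revision is at least min_rev; the bug was
 * masking the part-number field here, which made the comparison meaningless. */
static bool l2c310_rev_at_least(void __iomem *base, u32 min_rev)
{
	u32 rev = readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_RTL_MASK;

	return rev >= min_rev;
}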
2014-07-02  ARM: kprobes: Fix test code compilation errors for ARMv4 targets  (Jon Medhurst)
Conditionally compile kprobes test cases for ARMv5 instructions to avoid compilation errors with ARMv4 targets like:

    /tmp/cc7Tx8ST.s:16740: Error: selected processor does not support ARM mode `clz r0,r0'

Signed-off-by: Jon Medhurst <tixy@linaro.org>
2014-07-02  ARM: kprobes: Disallow instructions with PC and register specified shift  (Jon Medhurst)
ARM data processing instructions which have a register-specified shift are defined as UNPREDICTABLE if PC is used for any register, not just the shift value, as the code was previously assuming. This issue manifests on A15 devices as either test case failures or undefined instruction aborts. Reported-by: David Long <dave.long@linaro.org> Signed-off-by: Jon Medhurst <tixy@linaro.org>
2014-07-02  ARM: kprobes: Prevent known test failures stopping other tests running  (Jon Medhurst)
Due to a long-standing issue with Thumb symbol lookup [1] the jprobes tests fail when built into a kernel compiled as Thumb mode. (They work fine for ARM mode kernels or for Thumb when built as a loadable module.) Rather than have this problem terminate testing prematurely, let's instead emit an error message and carry on with the main kprobes tests, delaying the final failure report until the end. [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2011-August/063026.html Signed-off-by: Jon Medhurst <tixy@linaro.org>
2014-07-01  ARM: 8078/1: get rid of hardcoded assumptions about kernel stack size  (Andrey Ryabinin)
Changing the kernel stack size on ARM is not as simple as it should be:

    1) the THREAD_SIZE macro doesn't respect PAGE_SIZE and THREAD_SIZE_ORDER
    2) the stack size is hardcoded in the get_thread_info macro

This patch fixes it by calculating THREAD_SIZE and the thread_info address taking PAGE_SIZE and THREAD_SIZE_ORDER into account. Changing the stack size now simply means changing THREAD_SIZE_ORDER. Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
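A simplified sketch of the resulting arrangement (close to asm/thread_info.h; a THREAD_SIZE_ORDER of 1 keeps today's 8KiB stacks with 4KiB pages):

#include <asm/page.h>

struct thread_info;

#define THREAD_SIZE_ORDER	1
#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
#define THREAD_START_SP		(THREAD_SIZE - 8)

/* Recover thread_info by masking the stack pointer with the stack size,
 * so changing THREAD_SIZE_ORDER is all that is needed to resize stacks. */
static inline struct thread_info *current_thread_info_sketch(void)
{
	register unsigned long sp asm ("sp");

	return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
}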
2014-07-01  ARM: simplify generation of compressed vmlinux.lds file  (Russell King)
As we are now using the C preprocessor, we do not need to use sed to edit constants in this file, and then pass the resulting file through the C preprocessor. Instead, rely solely on the C preprocessor to rewrite TEXT_START and BSS_ADDR. Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-01  ARM: 8067/1: zImage: ensure header in LE format for BE8 kernels  (Nicolas Pitre)
All known BE8-capable systems have LE bootloaders, so we need to ensure that the magic number and image start/end values are in little endian format. [ben.dooks@codethink.co.uk: from nico's original email on this subject] [taras.kondratiuk@linaro.org: removed lds.S->lds rule, added target to extra-y] Signed-off-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Signed-off-by: Taras Kondratiuk <taras.kondratiuk@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>