path: root/arch/mips
Age          Commit message                                                        Author
2017-09-10   MIPS: Send SIGILL for BPOSGE32 in `__compute_return_epc_for_insn'   (Maciej W. Rozycki)
[ Upstream commit 7b82c1058ac1f8f8b9f2b8786b1f710a57a870a8 ] Fix commit e50c0a8fa60d ("Support the MIPS32 / MIPS64 DSP ASE.") and send SIGILL rather than SIGBUS whenever an unimplemented BPOSGE32 DSP ASE instruction has been encountered in `__compute_return_epc_for_insn' as our Reserved Instruction exception handler would in response to an attempt to actually execute the instruction. Sending SIGBUS only makes sense for the unaligned PC case, since moved to `__compute_return_epc'. Adjust function documentation accordingly, correct formatting and use `pr_info' rather than `printk' as the other exit path already does. Fixes: e50c0a8fa60d ("Support the MIPS32 / MIPS64 DSP ASE.") Signed-off-by: Maciej W. Rozycki <macro@imgtec.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org # 2.6.14+ Patchwork: https://patchwork.linux-mips.org/patch/16396/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-09-10   MIPS: math-emu: Prevent wrong ISA mode instruction emulation   (Maciej W. Rozycki)
[ Upstream commit 13769ebad0c42738831787e27c7c7f982e7da579 ]

Terminate FPU emulation immediately whenever an ISA mode switch has been observed. This is so that we do not interpret machine code in the wrong mode, for example when a regular MIPS FPU instruction has been placed in a delay slot of a jump that switches into the MIPS16 mode, as with the following code (taken from a GCC test suite case):

00400650 <set_fast_math>:
  400650: 3c020100 lui v0,0x100
  400654: 03e00008 jr ra
  400658: 44c2f800 ctc1 v0,c1_fcsr
  40065c: 00000000 nop
  [...]
004012d0 <__libc_csu_init>:
  4012d0: f000 6a02 li v0,2
  4012d4: f150 0b1c la v1,3f9430 <_DYNAMIC-0x6df0>
  4012d8: f400 3240 sll v0,16
  4012dc: e269 addu v0,v1
  4012de: 659a move gp,v0
  4012e0: f00c 64f6 save a0-a2,48,ra,s0-s1
  4012e4: 673c move s1,gp
  4012e6: f010 9978 lw v1,-32744(s1)
  4012ea: d204 sw v0,16(sp)
  4012ec: eb40 jalr v1
  4012ee: 653b move t9,v1
  4012f0: f010 997c lw v1,-32740(s1)
  4012f4: f030 9920 lw s1,-32736(s1)
  4012f8: e32f subu v1,s1
  4012fa: 326b sra v0,v1,2
  4012fc: d206 sw v0,24(sp)
  4012fe: 220c beqz v0,401318 <__libc_csu_init+0x48>
  401300: 6800 li s0,0
  401302: 99e0 lw a3,0(s1)
  401304: 4801 addiu s0,1
  401306: 960e lw a2,56(sp)
  401308: 4904 addiu s1,4
  40130a: 950d lw a1,52(sp)
  40130c: 940c lw a0,48(sp)
  40130e: ef40 jalr a3
  401310: 653f move t9,a3
  401312: 9206 lw v0,24(sp)
  401314: ea0a cmp v0,s0
  401316: 61f5 btnez 401302 <__libc_csu_init+0x32>
  401318: 6476 restore 48,ra,s0-s1
  40131a: e8a0 jrc ra

Here `set_fast_math' is called from `40130e' (`40130f' with the ISA bit) and emulation triggers for the CTC1 instruction. As it is in a jump delay slot emulation continues from `401312' (`401313' with the ISA bit). However we have no path to handle MIPS16 FPU code emulation, because there are no MIPS16 FPU instructions. So the default emulation path is taken, interpreting a 32-bit word fetched by `get_user' from `401313' as a regular MIPS instruction, which is:

  401313: f5ea0a92 sdc1 $f10,2706(t7)

This makes the FPU emulator proceed with the supposed SDC1 instruction and consequently makes the program considered here terminate with SIGSEGV.

A similar although less severe issue exists with pure-microMIPS processors in the case where similarly an FPU instruction is emulated in a delay slot of a register jump that (incorrectly) switches into the regular MIPS mode. A subsequent instruction fetch from the jump's target is supposed to cause an Address Error exception, however instead we proceed with regular MIPS FPU emulation.

For simplicity then, always terminate the emulation loop whenever a mode change is detected, denoted by an ISA mode bit flip. As from commit 377cb1b6c16a ("MIPS: Disable MIPS16/microMIPS crap for platforms not supporting these ASEs.") the result of `get_isa16_mode' can be hardcoded to 0, so we need to examine the ISA mode bit by hand.

This complements commit 102cedc32a6e ("MIPS: microMIPS: Floating point support.") which added JALX decoding to FPU emulation.

Fixes: 102cedc32a6e ("MIPS: microMIPS: Floating point support.")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.9+
Patchwork: https://patchwork.linux-mips.org/patch/16393/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-09-10   MIPS: Fix unaligned PC interpretation in `compute_return_epc'   (Maciej W. Rozycki)
[ Upstream commit 11a3799dbeb620bf0400b1fda5cc2c6bea55f20a ] Fix a regression introduced with commit fb6883e5809c ("MIPS: microMIPS: Support handling of delay slots.") and defer to `__compute_return_epc' if the ISA bit is set in EPC with non-MIPS16, non-microMIPS hardware, which will then arrange for a SIGBUS due to an unaligned instruction reference. Returning EPC here is never correct as the API defines this function's result to be either a negative error code on failure or one of 0 and BRANCH_LIKELY_TAKEN on success. Fixes: fb6883e5809c ("MIPS: microMIPS: Support handling of delay slots.") Signed-off-by: Maciej W. Rozycki <macro@imgtec.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org # 3.9+ Patchwork: https://patchwork.linux-mips.org/patch/16395/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-09-10   MIPS: Actually decode JALX in `__compute_return_epc_for_insn'   (Maciej W. Rozycki)
[ Upstream commit a9db101b735a9d49295326ae41f610f6da62b08c ] Complement commit fb6883e5809c ("MIPS: microMIPS: Support handling of delay slots.") and actually decode the regular MIPS JALX major instruction opcode, the handling of which has been added with the said commit for EPC calculation in `__compute_return_epc_for_insn'. Fixes: fb6883e5809c ("MIPS: microMIPS: Support handling of delay slots.") Signed-off-by: Maciej W. Rozycki <macro@imgtec.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org # 3.9+ Patchwork: https://patchwork.linux-mips.org/patch/16394/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-09-10   MIPS: Save static registers before sysmips   (James Hogan)
[ Upstream commit 49955d84cd9ccdca5a16a495e448e1a06fad9e49 ] The MIPS sysmips system call handler may return directly from the MIPS_ATOMIC_SET case (mips_atomic_set()) to syscall_exit. This path restores the static (callee saved) registers, however they won't have been saved on entry to the system call. Use the save_static_function() macro to create a __sys_sysmips wrapper function which saves the static registers before calling sys_sysmips, so that the correct static register state is restored by syscall_exit. Fixes: f1e39a4a616c ("MIPS: Rewrite sysmips(MIPS_ATOMIC_SET, ...) in C with inline assembler") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/16149/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-09-10   MIPS: Fix MIPS I ISA /proc/cpuinfo reporting   (Maciej W. Rozycki)
[ Upstream commit e5f5a5b06e51a36f6ddf31a4a485358263953a3d ]

Correct a commit 515a6393dbac ("MIPS: kernel: proc: Add MIPS R6 support to /proc/cpuinfo") regression that caused MIPS I systems to show no ISA levels supported in /proc/cpuinfo, e.g.:

  system type : Digital DECstation 2100/3100
  machine : Unknown
  processor : 0
  cpu model : R3000 V2.0 FPU V2.0
  BogoMIPS : 10.69
  wait instruction : no
  microsecond timers : no
  tlb_entries : 64
  extra interrupt vector : no
  hardware watchpoint : no
  isa :
  ASEs implemented :
  shadow register sets : 1
  kscratch registers : 0
  package : 0
  core : 0
  VCED exceptions : not available
  VCEI exceptions : not available

and similarly exclude `mips1' from the ISA list for any processors below MIPSr1. This is because the condition to show `mips1' on has been made `cpu_has_mips_r1' rather than newly-introduced `cpu_has_mips_1'. Use the correct condition then.

Fixes: 515a6393dbac ("MIPS: kernel: proc: Add MIPS R6 support to /proc/cpuinfo")
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16758/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-09-10   MIPS: Negate error syscall return in trace   (James Hogan)
[ Upstream commit 4f32a39d49b25eaa66d2420f1f03d371ea4cd906 ] The sys_exit trace event takes a single return value for the system call, for which MIPS passes the value of the $v0 (result) register; however MIPS returns positive error codes in $v0, with $a3 specifying that $v0 contains an error code. As a result, erroring system calls are traced returning positive error numbers that can't always be distinguished from success. Use regs_return_value() to negate the error code if $a3 is set. Fixes: 1d7bf993e073 ("MIPS: ftrace: Add support for syscall tracepoints.") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 3.13+ Patchwork: https://patchwork.linux-mips.org/patch/16651/ Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
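The convention described above boils down to a one-line helper; the following is a hedged sketch of that idea (field names follow struct pt_regs, but this is not the exact kernel definition of regs_return_value()):

    /* Illustrative sketch: $v0 is regs[2], $a3 is regs[7]; a non-zero $a3
     * marks $v0 as a positive error code that must be negated for tracing. */
    static inline long traced_syscall_return(const struct pt_regs *regs)
    {
            return regs->regs[7] ? -regs->regs[2] : regs->regs[2];
    }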
2017-09-10   MIPS: Fix mips_atomic_set() with EVA   (James Hogan)
[ Upstream commit 4915e1b043d6286928207b1f6968197b50407294 ] EVA linked loads (LLE) and conditional stores (SCE) should be used on EVA kernels for the MIPS_ATOMIC_SET operation of the sysmips system call, or else the atomic set will apply to the kernel view of the virtual address space (potentially unmapped on EVA kernels) rather than the user view (TLB mapped). Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 3.15.x- Patchwork: https://patchwork.linux-mips.org/patch/16151/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-09-10   MIPS: Fix mips_atomic_set() retry condition   (James Hogan)
[ Upstream commit 2ec420b26f7b6ff332393f0bb5a7d245f7ad87f0 ] The inline asm retry check in the MIPS_ATOMIC_SET operation of the sysmips system call has been backwards since commit f1e39a4a616c ("MIPS: Rewrite sysmips(MIPS_ATOMIC_SET, ...) in C with inline assembler") merged in v2.6.32, resulting in the non R10000_LLSC_WAR case retrying until the operation was no longer atomic (i.e. retrying while the conditional store succeeded and stopping once it failed), before returning the new value that was probably just written multiple times instead of the old value. Invert the branch condition to fix that particular issue. Fixes: f1e39a4a616c ("MIPS: Rewrite sysmips(MIPS_ATOMIC_SET, ...) in C with inline assembler") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/16148/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
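For illustration, here is a minimal userspace sketch of the intended MIPS_ATOMIC_SET semantics (swap and return the previous value), written with C11 atomics rather than the kernel's LL/SC inline assembly; the point is the loop condition, which must retry while the conditional store fails:

    #include <stdatomic.h>

    /* Sketch only: atomically store a new value and return the old one.
     * The retry loop repeats while the compare-exchange FAILS; looping on
     * success (the inverted condition) would spin until the update stopped
     * being atomic and then return the wrong value. */
    static long atomic_set_sketch(_Atomic long *addr, long new_value)
    {
            long old = atomic_load(addr);

            while (!atomic_compare_exchange_weak(addr, &old, new_value))
                    ;       /* 'old' was refreshed by the failed exchange; retry */

            return old;     /* previous value, as sysmips(MIPS_ATOMIC_SET) returns */
    }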
2017-07-31   MIPS: ralink: fix MT7628 wled_an pinmux gpio   (Álvaro Fernández Rojas)
[ Upstream commit 07b50db6e685172a41b9978aebffb2438166d9b6 ] Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com> Cc: john@phrozen.org Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13307/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-07-31   MIPS: ralink: fix MT7628 pinmux typos   (Álvaro Fernández Rojas)
[ Upstream commit d7146829c9da24e285cb1b1f2156b5b3e2d40c07 ] Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com> Cc: john@phrozen.org Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13306/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-07-31   MIPS: ralink: MT7688 pinmux fixes   (John Crispin)
[ Upstream commit e906a5f67e5a3337d696ec848e9c28fc68b39aa3 ] A few fixes to the pinmux data, 2 new muxes and a minor whitespace cleanup. Signed-off-by: John Crispin <blogic@openwrt.org> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/11991/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-07-31   MIPS: Fix IRQ tracing & lockdep when rescheduling   (Paul Burton)
[ Upstream commit d8550860d910c6b7b70f830f59003b33daaa52c9 ]

When the scheduler sets TIF_NEED_RESCHED & we call into the scheduler from arch/mips/kernel/entry.S we disable interrupts. This is true regardless of whether we reach work_resched from syscall_exit_work, resume_userspace or by looping after calling schedule(). Although we disable interrupts in these paths we don't call trace_hardirqs_off() before calling into C code which may acquire locks, and we therefore leave lockdep with an inconsistent view of whether interrupts are disabled or not when CONFIG_PROVE_LOCKING & CONFIG_DEBUG_LOCKDEP are both enabled.

Without tracing this interrupt state lockdep will print warnings such as the following once a task returns from a syscall via syscall_exit_partial with TIF_NEED_RESCHED set:

[ 49.927678] ------------[ cut here ]------------
[ 49.934445] WARNING: CPU: 0 PID: 1 at kernel/locking/lockdep.c:3687 check_flags.part.41+0x1dc/0x1e8
[ 49.946031] DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled)
[ 49.946355] CPU: 0 PID: 1 Comm: init Not tainted 4.10.0-00439-gc9fd5d362289-dirty #197
[ 49.963505] Stack : 0000000000000000 ffffffff81bb5d6a 0000000000000006 ffffffff801ce9c4
[ 49.974431] 0000000000000000 0000000000000000 0000000000000000 000000000000004a
[ 49.985300] ffffffff80b7e487 ffffffff80a24498 a8000000ff160000 ffffffff80ede8b8
[ 49.996194] 0000000000000001 0000000000000000 0000000000000000 0000000077c8030c
[ 50.007063] 000000007fd8a510 ffffffff801cd45c 0000000000000000 a8000000ff127c88
[ 50.017945] 0000000000000000 ffffffff801cf928 0000000000000001 ffffffff80a24498
[ 50.028827] 0000000000000000 0000000000000001 0000000000000000 0000000000000000
[ 50.039688] 0000000000000000 a8000000ff127bd0 0000000000000000 ffffffff805509bc
[ 50.050575] 00000000140084e0 0000000000000000 0000000000000000 0000000000040a00
[ 50.061448] 0000000000000000 ffffffff8010e1b0 0000000000000000 ffffffff805509bc
[ 50.072327] ...
[ 50.076087] Call Trace:
[ 50.079869] [<ffffffff8010e1b0>] show_stack+0x80/0xa8
[ 50.086577] [<ffffffff805509bc>] dump_stack+0x10c/0x190
[ 50.093498] [<ffffffff8015dde0>] __warn+0xf0/0x108
[ 50.099889] [<ffffffff8015de34>] warn_slowpath_fmt+0x3c/0x48
[ 50.107241] [<ffffffff801c15b4>] check_flags.part.41+0x1dc/0x1e8
[ 50.114961] [<ffffffff801c239c>] lock_is_held_type+0x8c/0xb0
[ 50.122291] [<ffffffff809461b8>] __schedule+0x8c0/0x10f8
[ 50.129221] [<ffffffff80946a60>] schedule+0x30/0x98
[ 50.135659] [<ffffffff80106278>] work_resched+0x8/0x34
[ 50.142397] ---[ end trace 0cb4f6ef5b99fe21 ]---
[ 50.148405] possible reason: unannotated irqs-off.
[ 50.154600] irq event stamp: 400463
[ 50.159566] hardirqs last enabled at (400463): [<ffffffff8094edc8>] _raw_spin_unlock_irqrestore+0x40/0xa8
[ 50.171981] hardirqs last disabled at (400462): [<ffffffff8094eb98>] _raw_spin_lock_irqsave+0x30/0xb0
[ 50.183897] softirqs last enabled at (400450): [<ffffffff8016580c>] __do_softirq+0x4ac/0x6a8
[ 50.195015] softirqs last disabled at (400425): [<ffffffff80165e78>] irq_exit+0x110/0x128

Fix this by using the TRACE_IRQS_OFF macro to call trace_hardirqs_off() when CONFIG_TRACE_IRQFLAGS is enabled. This is done before invoking schedule() following the work_resched label because:

1) Interrupts are disabled regardless of the path we take to reach work_resched() & schedule().

2) Performing the tracing here avoids the need to do it in paths which disable interrupts but don't call out to C code before hitting a path which uses the RESTORE_SOME macro that will call trace_hardirqs_on() or trace_hardirqs_off() as appropriate.

We call trace_hardirqs_on() using the TRACE_IRQS_ON macro before calling syscall_trace_leave() for similar reasons, ensuring that lockdep has a consistent view of state after we re-enable interrupts.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: linux-mips@linux-mips.org
Cc: stable <stable@vger.kernel.org>
Patchwork: https://patchwork.linux-mips.org/patch/15385/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-07-31   MIPS: pm-cps: Drop manual cache-line alignment of ready_count   (Paul Burton)
[ Upstream commit 161c51ccb7a6faf45ffe09aa5cf1ad85ccdad503 ] We allocate memory for a ready_count variable per-CPU, which is accessed via a cached non-coherent TLB mapping to perform synchronisation between threads within the core using LL/SC instructions. In order to ensure that the variable is contained within its own data cache line we allocate 2 lines worth of memory & align the resulting pointer to a line boundary. This is however unnecessary, since kmalloc is guaranteed to return memory which is at least cache-line aligned (see ARCH_DMA_MINALIGN). Stop the redundant manual alignment. Besides cleaning up the code & avoiding needless work, this has the side effect of avoiding an arithmetic error found by Bryan on 64 bit systems due to the 32 bit size of the former dlinesz. This led the ready_count variable to have its upper 32b cleared erroneously for MIPS64 kernels, causing problems when ready_count was later used on MIPS64 via cpuidle. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: 3179d37ee1ed ("MIPS: pm-cps: add PM state entry code for CPS systems") Reported-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com> Reviewed-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com> Tested-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com> Cc: linux-mips@linux-mips.org Cc: stable <stable@vger.kernel.org> # v3.16+ Patchwork: https://patchwork.linux-mips.org/patch/15383/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-07-31   MIPS: Avoid accidental raw backtrace   (James Hogan)
[ Upstream commit 854236363370995a609a10b03e35fd3dc5e9e4a1 ] Since commit 81a76d7119f6 ("MIPS: Avoid using unwind_stack() with usermode") show_backtrace() invokes the raw backtracer when cp0_status & ST0_KSU indicates user mode to fix issues on EVA kernels where user and kernel address spaces overlap. However this is used by show_stack() which creates its own pt_regs on the stack and leaves cp0_status uninitialised in most of the code paths. This results in the non deterministic use of the raw back tracer depending on the previous stack content. show_stack() deals exclusively with kernel mode stacks anyway, so explicitly initialise regs.cp0_status to KSU_KERNEL (i.e. 0) to ensure we get a useful backtrace. Fixes: 81a76d7119f6 ("MIPS: Avoid using unwind_stack() with usermode") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 3.15+ Patchwork: https://patchwork.linux-mips.org/patch/16656/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-06-28   mm: larger stack guard gap, between vmas   (Sasha Levin)
[ Upstream commit 1be7107fbe18eed3e319a6c3e83c78254b693acb ]

Stack guard page is a useful feature to reduce a risk of stack smashing into a different mapping. We have been using a single page gap which is sufficient to prevent having stack adjacent to a different mapping. But this seems to be insufficient in the light of the stack usage in userspace. E.g. glibc uses as large as 64kB alloca() in many commonly used functions. Others use constructs like gid_t buffer[NGROUPS_MAX] which is 256kB or stack strings with MAX_ARG_STRLEN.

This will become especially dangerous for suid binaries and the default no limit for the stack size limit because those applications can be tricked to consume a large portion of the stack and a single glibc call could jump over the guard page. These attacks are not theoretical, unfortunately.

Make those attacks less probable by increasing the stack guard gap to 1MB (on systems with 4k pages; but make it depend on the page size because systems with larger base pages might cap stack allocations in the PAGE_SIZE units) which should cover larger alloca() and VLA stack allocations. It is obviously not a full fix because the problem is somehow inherent, but it should reduce attack space a lot.

One could argue that the gap size should be configurable from userspace, but that can be done later when somebody finds that the new 1MB is wrong for some special case applications. For now, add a kernel command line option (stack_guard_gap) to specify the stack gap size (in page units).

Implementation wise, first delete all the old code for stack guard page: because although we could get away with accounting one extra page in a stack vma, accounting a larger gap can break userspace - case in point, a program run with "ulimit -S -v 20000" failed when the 1MB gap was counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK and strict non-overcommit mode.

Instead of keeping gap inside the stack vma, maintain the stack guard gap as a gap between vmas: using vm_start_gap() in place of vm_start (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few places which need to respect the gap - mainly arch_get_unmapped_area(), and the vma tree's subtree_gap support for that.

Original-patch-by: Oleg Nesterov <oleg@redhat.com>
Original-patch-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Tested-by: Helge Deller <deller@gmx.de> # parisc
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
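To make the last paragraph concrete, here is a rough sketch of what such a helper can look like for a downward-growing stack (assumption: VM_GROWSDOWN only; the real mm/ code also handles VM_GROWSUP and differs in detail):

    /* Sketch, not the exact kernel implementation: report a stack VMA's start
     * as if it began stack_guard_gap bytes lower, so callers such as
     * arch_get_unmapped_area() keep that distance from the next mapping
     * without the gap ever being accounted inside the VMA itself. */
    static unsigned long stack_guard_gap = 256UL << PAGE_SHIFT; /* 1MB with 4k pages */

    static unsigned long vm_start_gap_sketch(struct vm_area_struct *vma)
    {
            unsigned long start = vma->vm_start;

            if (vma->vm_flags & VM_GROWSDOWN) {
                    start -= stack_guard_gap;
                    if (start > vma->vm_start) /* wrapped below zero: clamp */
                            start = 0;
            }
            return start;
    }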
2017-06-25   MIPS: Fix bnezc/jialc return address calculation   (Paul Burton)
[ Upstream commit 1a73d9310e093fc3adffba4d0a67b9fab2ee3f63 ] The code handling the pop76 opcode (ie. bnezc & jialc instructions) in __compute_return_epc_for_insn() needs to set the value of $31 in the jialc case, which is encoded with rs = 0. However its check to differentiate bnezc (rs != 0) from jialc (rs = 0) was unfortunately backwards, meaning that if we emulate a bnezc instruction we clobber $31 & if we emulate a jialc instruction it actually behaves like a jic instruction. Fix this by inverting the check of rs to match the way the instructions are actually encoded. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: 28d6f93d201d ("MIPS: Emulate the new MIPS R6 BNEZC and JIALC instructions") Cc: stable <stable@vger.kernel.org> # v4.0+ Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16178/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-06-13   MIPS: R2-on-R6 MULTU/MADDU/MSUBU emulation bugfix   (Leonid Yegoshin)
[ Upstream commit d65e5677ad5b3a49c43f60ec07644dc1f87bbd2e ] Emulation of the MIPS MULTU, MADDU and MSUBU instructions requires the HI/LO registers to be converted to signed 32 bits before the 64-bit sign extension on MIPS64. The bug was found when running a MIPS32 R2 test application on a MIPS64 R6 kernel. Fixes: b0a668fb2038 ("MIPS: kernel: mips-r2-to-r6-emul: Add R2 emulator for MIPS R6") Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Reported-by: Nikola.Veljkovic@imgtec.com Cc: paul.burton@imgtec.com Cc: yamada.masahiro@socionext.com Cc: akpm@linux-foundation.org Cc: andrea.gelmini@gelma.net Cc: macro@imgtec.com Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14043/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-06-13   MIPS: Fix crash registers on non-crashing CPUs   (Corey Minyard)
[ Upstream commit c80e1b62ffca52e2d1d865ee58bc79c4c0c55005 ] As part of handling a crash on an SMP system, an IPI is sent to all other CPUs to save their current registers and stop. It was using task_pt_regs(current) to get the registers, but that will only be accurate if the CPU was interrupted running in userland. Instead allow the architecture to pass in the registers (all pass NULL now, but allow for the future) and then use get_irq_regs() which should be accurate as we are in an interrupt. Fall back to task_pt_regs(current) if nothing else is available. Signed-off-by: Corey Minyard <cminyard@mvista.com> Cc: David Daney <ddaney@caviumnetworks.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/13050/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-06-13   MIPS: Select HAVE_IRQ_EXIT_ON_IRQ_STACK   (Matt Redfearn)
[ Upstream commit 3cc3434fd6307d06b53b98ce83e76bf9807689b9 ] Since do_IRQ is now invoked on a separate IRQ stack, we select HAVE_IRQ_EXIT_ON_IRQ_STACK so that softirq's may be invoked directly from irq_exit(), rather than requiring do_softirq_own_stack. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Acked-by: Jason A. Donenfeld <jason@zx2c4.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14744/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-06-13   MIPS: Only change $28 to thread_info if coming from user mode   (Matt Redfearn)
[ Upstream commit 510d86362a27577f5ee23f46cfb354ad49731e61 ] The SAVE_SOME macro is used to save the execution context on all exceptions. If an exception occurs while executing user code, the stack is switched to the kernel's stack for the current task, and register $28 is switched to point to the current_thread_info, which is at the bottom of the stack region. If the exception occurs while executing kernel code, the stack is left, and this change ensures that register $28 is not updated. This is the correct behaviour when the kernel can be executing on the separate irq stack, because the thread_info will not be at the base of it. With this change, register $28 is only switched to its conventional kernel usage as the current thread info pointer at the point at which execution enters kernel space. Doing it on every exception was redundant but OK without an IRQ stack; it will be erroneous once that is introduced. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Acked-by: Jason A. Donenfeld <jason@zx2c4.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14742/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-05-17   MIPS: KGDB: Use kernel context for sleeping threads   (James Hogan)
[ Upstream commit 162b270c664dca2e0944308e92f9fcc887151a72 ]

KGDB is a kernel debug stub and it can't be used to debug userland as it can only safely access kernel memory. On MIPS however KGDB has always got the register state of sleeping processes from the userland register context at the beginning of the kernel stack. This is meaningless for kernel threads (which never enter userland), and for user threads it prevents the user seeing what it is doing while in the kernel:

(gdb) info threads
  Id   Target Id   Frame
  ...
  3    Thread 2 (kthreadd)   0x0000000000000000 in ?? ()
  2    Thread 1 (init)   0x000000007705c4b4 in ?? ()
  1    Thread -2 (shadowCPU0)   0xffffffff8012524c in arch_kgdb_breakpoint () at arch/mips/kernel/kgdb.c:201

Get the register state instead from the (partial) kernel register context stored in the task's thread_struct for resume() to restore. All threads now correctly appear to be in context_switch():

(gdb) info threads
  Id   Target Id   Frame
  ...
  3    Thread 2 (kthreadd)   context_switch (rq=<optimized out>, cookie=..., next=<optimized out>, prev=0x0) at kernel/sched/core.c:2903
  2    Thread 1 (init)   context_switch (rq=<optimized out>, cookie=..., next=<optimized out>, prev=0x0) at kernel/sched/core.c:2903
  1    Thread -2 (shadowCPU0)   0xffffffff8012524c in arch_kgdb_breakpoint () at arch/mips/kernel/kgdb.c:201

Call clobbered registers which aren't saved and exception registers (BadVAddr & Cause) which can't be easily determined without stack unwinding are reported as 0. The PC is taken from the return address, such that the state presented matches that found immediately after returning from resume().

Fixes: 8854700115ec ("[MIPS] kgdb: add arch support for the kernel's kgdb core")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/15829/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-05-17   MIPS: Avoid BUG warning in arch_check_elf   (James Cowgill)
[ Upstream commit c46f59e90226fa5bfcc83650edebe84ae47d454b ] arch_check_elf contains a usage of current_cpu_data that will call smp_processor_id() with preemption enabled and therefore triggers a "BUG: using smp_processor_id() in preemptible" warning when an fpxx executable is loaded. As a follow-up to commit b244614a60ab ("MIPS: Avoid a BUG warning during prctl(PR_SET_FP_MODE, ...)"), apply the same fix to arch_check_elf by using raw_current_cpu_data instead. The rationale quoted from the previous commit: "It is assumed throughout the kernel that if any CPU has an FPU, then all CPUs would have an FPU as well, so it is safe to perform the check with preemption enabled - change the code to use raw_ variant of the check to avoid the warning." Fixes: 46490b572544 ("MIPS: kernel: elf: Improve the overall ABI and FPU mode checks") Signed-off-by: James Cowgill <James.Cowgill@imgtec.com> CC: <stable@vger.kernel.org> # 4.0+ Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/15951/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-05-17   MIPS: ralink: Fix typos in rt3883 pinctrl   (John Crispin)
[ Upstream commit 7c5a3d813050ee235817b0220dd8c42359a9efd8 ] There are two copy & paste errors in the definition of the 5GHz LNA and second ethernet pinmux. Fixes: f576fb6a0700 ("MIPS: ralink: cleanup the soc specific pinmux data") Signed-off-by: John Crispin <john@phrozen.org> Signed-off-by: Daniel Golle <daniel@makrotopia.org> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 3.19.x- Patchwork: https://patchwork.linux-mips.org/patch/15328/ Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-05-17   MIPS: End spinlocks with .insn   (Paul Burton)
[ Upstream commit 4b5347a24a0f2d3272032c120664b484478455de ]

When building for microMIPS we need to ensure that the assembler always knows that there is code at the target of a branch or jump. Recent toolchains will fail to link a microMIPS kernel when this isn't the case due to what it thinks is a branch to non-microMIPS code:

mips-mti-linux-gnu-ld kernel/built-in.o: .spinlock.text+0x2fc: Unsupported branch between ISA modes.
mips-mti-linux-gnu-ld final link failed: Bad value

This is due to inline assembly labels in spinlock.h not being followed by an instruction mnemonic, either due to a .subsection pseudo-op or the end of the inline asm block. Fix this with a .insn directive after such labels.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: <stable@vger.kernel.org>
Patchwork: https://patchwork.linux-mips.org/patch/15325/
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
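A minimal illustration of the directive (an invented helper, not the kernel's actual spinlock code): when a label is the last thing in an asm block, or sits just before a .subsection, following it with .insn tells the assembler that the label marks code, which keeps the microMIPS ISA-mode link check happy.

    /* Illustrative only: spin until *p reads as zero. Without the ".insn"
     * after label "2:" there is no mnemonic telling the assembler that the
     * label marks (micro)MIPS code rather than data. */
    static inline void wait_until_zero(volatile int *p)
    {
            int tmp;

            __asm__ __volatile__(
            "       .set    push            \n"
            "       .set    noreorder       \n"
            "1:     lw      %0, %1          \n"
            "       bnez    %0, 1b          \n"
            "        nop                    \n"     /* branch delay slot */
            "2:     .insn                   \n"
            "       .set    pop             \n"
            : "=&r" (tmp)
            : "m" (*p)
            : "memory");
    }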
2017-05-17   MIPS: Force o32 fp64 support on 32bit MIPS64r6 kernels   (James Hogan)
[ Upstream commit 2e6c7747730296a6d4fd700894286db1132598c4 ] When a 32-bit kernel is configured to support MIPS64r6 (CPU_MIPS64_R6), MIPS_O32_FP64_SUPPORT won't be selected as it should be because MIPS32_O32 is disabled (o32 is already the default ABI available on 32-bit kernels). This results in userland FP breakage as CP0_Status.FR is read-only 1 since r6 (when an FPU is present) so __enable_fpu() will fail to clear FR. This causes the FPU emulator to get used which will incorrectly emulate 32-bit FPU registers. Force o32 fp64 support in this case by also selecting MIPS_O32_FP64_SUPPORT from CPU_MIPS64_R6 if 32BIT. Fixes: 4e9d324d4288 ("MIPS: Require O32 FP64 support for MIPS64 with O32 compat") Signed-off-by: James Hogan <james.hogan@imgtec.com> Reviewed-by: Paul Burton <paul.burton@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 4.0.x- Patchwork: https://patchwork.linux-mips.org/patch/15310/ Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-05-17   MIPS: BCM47XX: Fix button inversion for Asus WL-500W   (Mirko Parthey)
[ Upstream commit bdfdaf1a016ef09cb941f2edad485a713510b8d5 ] The Asus WL-500W buttons are active high, but the software treats them as active low. Fix the inverted logic. Fixes: 3be972556fa1 ("MIPS: BCM47XX: Import buttons database from OpenWrt") Signed-off-by: Mirko Parthey <mirko.parthey@web.de> Acked-by: Rafał Miłecki <rafal@milecki.pl> Cc: Hauke Mehrtens <hauke@hauke-m.de> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 3.14.x- Patchwork: https://patchwork.linux-mips.org/patch/15295/ Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-05-17   MIPS: OCTEON: Fix copy_from_user fault handling for large buffers   (James Cowgill)
[ Upstream commit 884b426917e4b3c85f33b382c792a94305dfdd62 ] If copy_from_user is called with a large buffer (>= 128 bytes) and the userspace buffer refers partially to unreadable memory, then it is possible for Octeon's copy_from_user to report the wrong number of bytes have been copied. In the case where the buffer size is an exact multiple of 128 and the fault occurs in the last 64 bytes, copy_from_user will report that all the bytes were copied successfully but leave some garbage in the destination buffer. The bug is in the main __copy_user_common loop in octeon-memcpy.S where in the middle of the loop, src and dst are incremented by 128 bytes. The l_exc_copy fault handler is used after this but that assumes that "src < THREAD_BUADDR($28)". This is not the case if src has already been incremented. Fix by adding an extra fault handler which rewinds the src and dst pointers 128 bytes before falling through to l_exc_copy. Thanks to the pwritev test from the strace test suite for originally highlighting this bug! Fixes: 5b3b16880f40 ("MIPS: Add Cavium OCTEON processor support ...") Signed-off-by: James Cowgill <James.Cowgill@imgtec.com> Acked-by: David Daney <david.daney@cavium.com> Reviewed-by: James Hogan <james.hogan@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14978/ Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-05-17   MIPS: Fix special case in 64 bit IP checksumming.   (Ralf Baechle)
[ Upstream commit 66fd848cadaa6be974a8c780fbeb328f0af4d3bd ] For certain arguments such as saddr = 0xc0a8fd60, daddr = 0xc0a8fda1, len = 80, proto = 17, sum = 0x7eae049d there will be a carry when folding the intermediate 64 bit checksum to 32 bit but the code doesn't add the carry back to the one's complement sum, thus an incorrect result will be generated. Reported-by: Mark Zhang <bomb.zhang@gmail.com> Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Reviewed-by: James Hogan <james.hogan@imgtec.com> Cc: stable@vger.kernel.org Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
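The fold in question, written out as a hedged standalone C sketch (the kernel does this in assembly): reducing a 64-bit intermediate sum to 32 bits can itself generate a carry, and that carry has to be added back into the one's-complement sum.

    #include <stdint.h>

    /* Fold a 64-bit intermediate checksum into 32 bits. The second addition
     * folds back the carry that the first one may produce, which is the step
     * whose omission caused the incorrect result described above. */
    static uint32_t csum_fold64(uint64_t sum)
    {
            sum = (sum & 0xffffffffULL) + (sum >> 32);
            sum = (sum & 0xffffffffULL) + (sum >> 32);
            return (uint32_t)sum;
    }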
2017-05-15   MIPS: Handle microMIPS jumps in the same way as MIPS32/MIPS64 jumps   (Paul Burton)
[ Upstream commit 096a0de427ea333f56f0ee00328cff2a2731bcf1 ] is_jump_ins() checks for plain jump ("j") instructions since commit e7438c4b893e ("MIPS: Fix sibling call handling in get_frame_info") but that commit didn't make the same change to the microMIPS code, leaving it inconsistent with the MIPS32/MIPS64 code. Handle the microMIPS encoding of the jump instruction too such that it behaves consistently. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: e7438c4b893e ("MIPS: Fix sibling call handling in get_frame_info") Cc: Tony Wu <tung7970@gmail.com> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # v3.10+ Patchwork: https://patchwork.linux-mips.org/patch/14533/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-05-15   MIPS: Calculate microMIPS ra properly when unwinding the stack   (Paul Burton)
[ Upstream commit bb9bc4689b9c635714fbcd5d335bad9934a7ebfc ]

get_frame_info() calculates the offset of the return address within a stack frame simply by dividing the bottom 16 bits of the instruction, treated as a signed integer, by the size of a long. Whilst this works for MIPS32 & MIPS64 ISAs where the sw or sd instructions are used, it's incorrect for microMIPS where encodings differ. The result is that we typically completely fail to unwind the stack on microMIPS.

Fix this by adjusting is_ra_save_ins() to calculate the return address offset, and take into account the various different encodings there in the same place as we consider whether an instruction is storing the ra/$31 register.

With this we are now able to unwind the stack for kernels targeting the microMIPS ISA, for example we can produce:

Call Trace:
[<80109e1f>] show_stack+0x63/0x7c
[<8011ea17>] __warn+0x9b/0xac
[<8011ea45>] warn_slowpath_fmt+0x1d/0x20
[<8013fe53>] register_console+0x43/0x314
[<8067c58d>] of_setup_earlycon+0x1dd/0x1ec
[<8067f63f>] early_init_dt_scan_chosen_stdout+0xe7/0xf8
[<8066c115>] do_early_param+0x75/0xac
[<801302f9>] parse_args+0x1dd/0x308
[<8066c459>] parse_early_options+0x25/0x28
[<8066c48b>] parse_early_param+0x2f/0x38
[<8066e8cf>] setup_arch+0x113/0x488
[<8066c4f3>] start_kernel+0x57/0x328
---[ end trace 0000000000000000 ]---

Whereas previously we only produced:

Call Trace:
[<80109e1f>] show_stack+0x63/0x7c
---[ end trace 0000000000000000 ]---

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.")
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # v3.10+
Patchwork: https://patchwork.linux-mips.org/patch/14532/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-05-15   MIPS: Fix is_jump_ins() handling of 16b microMIPS instructions   (Paul Burton)
[ Upstream commit 67c75057709a6d85c681c78b9b2f9b71191f01a2 ] is_jump_ins() checks 16b instruction fields without verifying that the instruction is indeed 16b, as is done by is_ra_save_ins() & is_sp_move_ins(). Add the appropriate check. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.") Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # v3.10+ Patchwork: https://patchwork.linux-mips.org/patch/14531/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-05-15   MIPS: Fix get_frame_info() handling of microMIPS function size   (Paul Burton)
[ Upstream commit b6c7a324df37bf05ef7a2c1580683cf10d082d97 ] get_frame_info() is meant to iterate over up to the first 128 instructions within a function, but for microMIPS kernels it will not reach that many instructions unless the function is 512 bytes long since we calculate the maximum number of instructions to check by dividing the function length by the 4 byte size of a union mips_instruction. In microMIPS kernels this won't do since instructions are variable length. Fix this by instead checking whether the pointer to the current instruction has reached the end of the function, and use max_insns as a simple constant to check the number of iterations against. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.") Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # v3.10+ Patchwork: https://patchwork.linux-mips.org/patch/14530/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-05-15   MIPS: Prevent unaligned accesses during stack unwinding   (Paul Burton)
[ Upstream commit a3552dace7d1d0cabf573e88fc3025cb90c4a601 ] During stack unwinding we call a number of functions to determine what type of instruction we're looking at. The union mips_instruction pointer provided to them may be pointing at a 2 byte, but not 4 byte, aligned address & we thus cannot directly access the 4 byte wide members of the union mips_instruction. To avoid this is_ra_save_ins() copies the required half-words of the microMIPS instruction to a correctly aligned union mips_instruction on the stack, which it can then access safely. The is_jump_ins() & is_sp_move_ins() functions do not correctly perform this temporary copy, and instead attempt to directly dereference 4 byte fields which may be misaligned and lead to an address exception. Fix this by copying the instruction halfwords to a temporary union mips_instruction in get_frame_info() such that we can provide a 4 byte aligned union mips_instruction to the is_*_ins() functions and they do not need to deal with misalignment themselves. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.") Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # v3.10+ Patchwork: https://patchwork.linux-mips.org/patch/14529/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-05-15   MIPS: Clear ISA bit correctly in get_frame_info()   (Paul Burton)
[ Upstream commit ccaf7caf2c73c6db920772bf08bf1d47b2170634 ] get_frame_info() can be called in microMIPS kernels with the ISA bit already clear. For example this happens when unwind_stack_by_address() is called because we begin with a PC that has the ISA bit set & subtract the (odd) offset from the preceding symbol (which does not have the ISA bit set). Since get_frame_info() unconditionally subtracts 1 from the PC in microMIPS kernels it incorrectly misaligns the address it then attempts to access code at, leading to an address error exception. Fix this by using msk_isa16_mode() to clear the ISA bit, which allows get_frame_info() to function regardless of whether it is provided with a PC that has the ISA bit set or not. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.") Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # v3.10+ Patchwork: https://patchwork.linux-mips.org/patch/14528/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2017-03-02   KVM: MIPS: Flush KVM entry code from icache globally   (James Hogan)
[ Upstream commit 32eb12a6c11034867401d56b012e3c15d5f8141e ] Flush the KVM entry code from the icache on all CPUs, not just the one that built the entry code. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Cc: <stable@vger.kernel.org> # 3.16.x- Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2016-11-25   KVM: MIPS: Precalculate MMIO load resume PC   (James Hogan)
[ Upstream commit e1e575f6b026734be3b1f075e780e91ab08ca541 ] The advancing of the PC when completing an MMIO load is done before re-entering the guest, i.e. before restoring the guest ASID. However if the load is in a branch delay slot it may need to access guest code to read the prior branch instruction. This isn't safe in TLB mapped code at the moment, nor in the future when we'll access unmapped guest segments using direct user accessors too, as it could read the branch from host user memory instead. Therefore calculate the resume PC in advance while we're still in the right context and save it in the new vcpu->arch.io_pc (replacing the no longer needed vcpu->arch.pending_load_cause), and restore it on MMIO completion. Fixes: e685c689f3a8 ("KVM/MIPS32: Privileged instruction/target branch emulation.") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Cc: <stable@vger.kernel.org> # 3.10.x- Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2016-11-25   KVM: MIPS: Make ERET handle ERL before EXL   (James Hogan)
[ Upstream commit ede5f3e7b54a4347be4d8525269eae50902bd7cd ] The ERET instruction to return from exception is used for returning from exception level (Status.EXL) and error level (Status.ERL). If both bits are set however we should be returning from ERL first, as ERL can interrupt EXL, for example when an NMI is taken. KVM however checks EXL first. Fix the order of the checks to match the pseudocode in the instruction set manual. Fixes: e685c689f3a8 ("KVM/MIPS32: Privileged instruction/target branch emulation.") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Cc: <stable@vger.kernel.org> # 3.10.x- Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2016-10-03   MIPS: SMP: Fix possibility of deadlock when bringing CPUs online   (Matt Redfearn)
[ Upstream commit 8f46cca1e6c06a058374816887059bcc017b382f ] This patch fixes the possibility of a deadlock when bringing up secondary CPUs. The deadlock occurs because set_cpu_online() is called before synchronise_count_slave(). This can cause a deadlock if the boot CPU, having scheduled another thread, attempts to send an IPI to the secondary CPU, which it sees has been marked online. The secondary is blocked in synchronise_count_slave() waiting for the boot CPU to enter synchronise_count_master(), but the boot CPU is blocked in smp_call_function_many() waiting for the secondary to respond to its IPI request. Fix this by marking the CPU online in cpu_callin_map and synchronising counters before declaring the CPU online and calculating the maps for IPIs. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Reported-by: Justin Chen <justinpopo6@gmail.com> Tested-by: Justin Chen <justinpopo6@gmail.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: stable@vger.kernel.org # v4.1+ Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14302/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
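The corrected ordering, sketched with simplified and partly hypothetical helper names (the exact code lives in arch/mips/kernel/smp.c and differs in detail):

    /* Sketch of the secondary CPU's bring-up order after the fix: the counter
     * handshake with the boot CPU happens before the CPU is advertised as
     * online, so the boot CPU cannot IPI a CPU still stuck in that handshake.
     * mark_cpu_called_in(), sync_counters_with_master() and
     * recalculate_ipi_maps() are illustrative names, not the real functions. */
    static void secondary_start_sketch(unsigned int cpu)
    {
            mark_cpu_called_in(cpu);            /* tell the boot CPU we arrived   */
            sync_counters_with_master(cpu);     /* may block on the boot CPU      */
            set_cpu_online(cpu, true);          /* only now visible as online     */
            recalculate_ipi_maps();             /* include this CPU in IPI targets */
    }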
2016-10-03   MIPS: Fix pre-r6 emulation FPU initialisation   (Paul Burton)
[ Upstream commit 7e956304eb8a285304a78582e4537e72c6365f20 ]

In the mipsr2_decoder() function, used to emulate pre-MIPSr6 instructions that were removed in MIPSr6, the init_fpu() function is called if a removed pre-MIPSr6 floating point instruction is the first floating point instruction used by the task. However, init_fpu() performs various actions that rely upon not being migrated. For example in the most basic case it sets the coprocessor 0 Status.CU1 bit to enable the FPU & then loads FP register context into the FPU registers. If the task were to migrate during this time, it may end up attempting to load FP register context on a different CPU where it hasn't set the CU1 bit, leading to errors such as:

do_cpu invoked from kernel context![#2]:
CPU: 2 PID: 7338 Comm: fp-prctl Tainted: G D 4.7.0-00424-g49b0c82 #2
task: 838e4000 ti: 88d38000 task.ti: 88d38000
$ 0 : 00000000 00000001 ffffffff 88d3fef8
$ 4 : 838e4000 88d38004 00000000 00000001
$ 8 : 3400fc01 801f8020 808e9100 24000000
$12 : dbffffff 807b69d8 807b0000 00000000
$16 : 00000000 80786150 00400fc4 809c0398
$20 : 809c0338 0040273c 88d3ff28 808e9d30
$24 : 808e9d30 00400fb4
$28 : 88d38000 88d3fe88 00000000 8011a2ac
Hi : 0040273c
Lo : 88d3ff28
epc : 80114178 _restore_fp+0x10/0xa0
ra : 8011a2ac mipsr2_decoder+0xd5c/0x1660
Status: 1400fc03 KERNEL EXL IE
Cause : 1080002c (ExcCode 0b)
PrId : 0001a920 (MIPS I6400)
Modules linked in:
Process fp-prctl (pid: 7338, threadinfo=88d38000, task=838e4000, tls=766527d0)
Stack : 00000000 00000000 00000000 88d3fe98 00000000 00000000 809c0398 809c0338
        808e9100 00000000 88d3ff28 00400fc4 00400fc4 0040273c 7fb69e18 004a0000
        004a0000 004a0000 7664add0 8010de18 00000000 00000000 88d3fef8 88d3ff28
        808e9100 00000000 766527d0 8010e534 000c0000 85755000 8181d580 00000000
        00000000 00000000 004a0000 00000000 766527d0 7fb69e18 004a0000 80105c20
        ...
Call Trace:
[<80114178>] _restore_fp+0x10/0xa0
[<8011a2ac>] mipsr2_decoder+0xd5c/0x1660
[<8010de18>] do_ri+0x90/0x6b8
[<80105c20>] ret_from_exception+0x0/0x10

Fix this by disabling preemption around the call to init_fpu(), ensuring that it starts & completes on one CPU.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: b0a668fb2038 ("MIPS: kernel: mips-r2-to-r6-emul: Add R2 emulator for MIPS R6")
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.0+
Patchwork: https://patchwork.linux-mips.org/patch/14305/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
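The shape of the fix, as a hedged sketch (surrounding emulator code omitted; init_fpu() is the arch helper named in the log, the wrapper name is made up): pin the task to one CPU for the whole lazy FPU initialisation.

    /* Sketch: make sure the lazy FPU initialisation (setting Status.CU1 and
     * loading the FP register context) starts and completes on the same CPU
     * by disabling preemption around it. */
    static int first_fp_use_sketch(void)
    {
            int err;

            preempt_disable();
            err = init_fpu();       /* enables the FPU and restores FP context */
            preempt_enable();

            return err;
    }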
2016-10-03   MIPS: Avoid a BUG warning during prctl(PR_SET_FP_MODE, ...)   (Marcin Nowakowski)
[ Upstream commit b244614a60ab7ce54c12a9cbe15cfbf8d79d0967 ] cpu_has_fpu macro uses smp_processor_id() and is currently executed with preemption enabled, that triggers the warning at runtime. It is assumed throughout the kernel that if any CPU has an FPU, then all CPUs would have an FPU as well, so it is safe to perform the check with preemption enabled - change the code to use raw_ variant of the check to avoid the warning. Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org # 4.0+ Patchwork: https://patchwork.linux-mips.org/patch/14125/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2016-10-02   mips: copy_from_user() must zero the destination on access_ok() failure   (Al Viro)
[ Upstream commit e69d700535ac43a18032b3c399c69bf4639e89a2 ] Cc: stable@vger.kernel.org Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2016-10-02   MIPS: Add a missing ".set pop" in an early commit   (Huacai Chen)
[ Upstream commit 3cbc6fc9c99f1709203711f125bc3b79487aba06 ] Commit 842dfc11ea9a21 ("MIPS: Fix build with binutils 2.24.51+") is missing a ".set pop" in the fpu_restore_16even macro, so add it. Signed-off-by: Huacai Chen <chenhc@lemote.com> Acked-by: Manuel Lauss <manuel.lauss@gmail.com> Cc: Steven J . Hill <Steven.Hill@caviumnetworks.com> Cc: Fuxin Zhang <zhangfx@lemote.com> Cc: Zhangjin Wu <wuzhangjin@gmail.com> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org # 3.18+ Patchwork: https://patchwork.linux-mips.org/patch/14210/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2016-10-02   MIPS: paravirt: Fix undefined reference to smp_bootstrap   (Matt Redfearn)
[ Upstream commit 951c39cd3bc0aedf67fbd8fb4b9380287e6205d1 ]

If the paravirt machine is compiled without CONFIG_SMP, the following linker error occurs:

arch/mips/kernel/head.o: In function `kernel_entry':
(.ref.text+0x10): undefined reference to `smp_bootstrap'

due to the kernel entry macro always including SMP startup code. Wrap this code in CONFIG_SMP to fix the error.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org # 3.16+
Patchwork: https://patchwork.linux-mips.org/patch/14212/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2016-09-17   MIPS: KVM: Check for pfn noslot case   (James Hogan)
commit ba913e4f72fc9cfd03dad968dfb110eb49211d80 upstream. When mapping a page into the guest we error check using is_error_pfn(), however this doesn't detect a value of KVM_PFN_NOSLOT, indicating an error HVA for the page. This can only happen on MIPS right now due to unusual memslot management (e.g. being moved / removed / resized), or with an Enhanced Virtual Memory (EVA) configuration where the default KVM_HVA_ERR_* and kvm_is_error_hva() definitions are unsuitable (fixed in a later patch). This case will be treated as a pfn of zero, mapping the first page of physical memory into the guest. It would appear the MIPS KVM port wasn't updated prior to being merged (in v3.10) to take commit 81c52c56e2b4 ("KVM: do not treat noslot pfn as a error pfn") into account (merged v3.8), which converted a bunch of is_error_pfn() calls to is_error_noslot_pfn(). Switch to using is_error_noslot_pfn() instead to catch this case properly. Fixes: 858dd5d45733 ("KVM/MIPS32: MMU/TLB operations for the Guest.") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> [james.hogan@imgtec.com: Backport to v4.7.y] Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2016-08-19   MIPS: KVM: Propagate kseg0/mapped tlb fault errors   (James Hogan)
commit 9b731bcfdec4c159ad2e4312e25d69221709b96a upstream. Propagate errors from kvm_mips_handle_kseg0_tlb_fault() and kvm_mips_handle_mapped_seg_tlb_fault(), usually triggering an internal error since they normally indicate the guest accessed bad physical memory or the commpage in an unexpected way. Fixes: 858dd5d45733 ("KVM/MIPS32: MMU/TLB operations for the Guest.") Fixes: e685c689f3a8 ("KVM/MIPS32: Privileged instruction/target branch emulation.") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> [james.hogan@imgtec.com: Backport to v4.7] Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2016-08-19   MIPS: KVM: Fix gfn range check in kseg0 tlb faults   (James Hogan)
commit 0741f52d1b980dbeb290afe67d88fc2928edd8ab upstream. Two consecutive gfns are loaded into host TLB, so ensure the range check isn't off by one if guest_pmap_npages is odd. Fixes: 858dd5d45733 ("KVM/MIPS32: MMU/TLB operations for the Guest.") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> [james.hogan@imgtec.com: Backport to v4.7] Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2016-08-19   MIPS: KVM: Add missing gfn range check   (James Hogan)
commit 8985d50382359e5bf118fdbefc859d0dbf6cebc7 upstream. kvm_mips_handle_mapped_seg_tlb_fault() calculates the guest frame number based on the guest TLB EntryLo values, however it is not range checked to ensure it lies within the guest_pmap. If the physical memory the guest refers to is out of range then dump the guest TLB and emit an internal error. Fixes: 858dd5d45733 ("KVM/MIPS32: MMU/TLB operations for the Guest.") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> [james.hogan@imgtec.com: Backport to v4.7] Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2016-08-19   MIPS: KVM: Fix mapped fault broken commpage handling   (James Hogan)
commit c604cffa93478f8888bec62b23d6073dad03d43a upstream. kvm_mips_handle_mapped_seg_tlb_fault() appears to map the guest page at virtual address 0 to PFN 0 if the guest has created its own mapping there. The intention is unclear, but it may have been an attempt to protect the zero page from being mapped to anything but the comm page in code paths you wouldn't expect from genuine commpage accesses (guest kernel mode cache instructions on that address, hitting trapping instructions when executing from that address with a coincidental TLB eviction during the KVM handling, and guest user mode accesses to that address). Fix this to check for mappings exactly at KVM_GUEST_COMMPAGE_ADDR (it may not be at address 0 since commit 42aa12e74e91 ("MIPS: KVM: Move commpage so 0x0 is unmapped")), and set the corresponding EntryLo to be interpreted as 0 (invalid). Fixes: 858dd5d45733 ("KVM/MIPS32: MMU/TLB operations for the Guest.") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> [james.hogan@imgtec.com: Backport to v4.7] Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
2016-08-19   MIPS: mm: Fix definition of R6 cache instruction   (Matt Redfearn)
[ Upstream commit 4f53989b0652ffe2605221c81ca8ffcfc90aed2a ] Commit a168b8f1cde6 ("MIPS: mm: Add MIPS R6 instruction encodings") added an incorrect definition of the redefined MIPSr6 cache instruction. Executing any kernel code including this instruction results in a reserved instruction exception and kernel panic. Fix the instruction definition. Fixes: a168b8f1cde6588ff7a67699fa11e01bc77a5ddd Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: <stable@vger.kernel.org> # 4.x- Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/13663/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>