path: root/arch/powerpc/mm/tlb_low_64e.S
Age  Commit message  Author
2019-05-03  powerpc/mm: Move nohash specifics in subdirectory mm/nohash  (Christophe Leroy)
Many files in arch/powerpc/mm are only for nohash. This patch creates a subdirectory for them. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Shorten new filenames] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-20  powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)  (Diana Craciun)
In order to protect against speculation attacks on indirect branches, the branch predictor is flushed at kernel entry to cover the following situations:
- userspace process attacking another userspace process
- userspace process attacking the kernel
Basically, when the privilege level changes (i.e. the kernel is entered), the branch predictor state is flushed. Signed-off-by: Diana Craciun <diana.craciun@nxp.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc: clean inclusions of asm/feature-fixups.h  (Christophe Leroy)
Files not using feature fixups don't need asm/feature-fixups.h; files using feature fixups do. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-03-01  powerpc: Fix misspellings in comments.  (Adam Buchbinder)
Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-10-27  powerpc/e6500: hw tablewalk: make sure we invalidate and write to the same tlb entry  (Kevin Hao)
In order to work around Erratum A-008139, we have to invalidate the tlb entry with tlbilx before overwriting it. For performance reasons, we don't add any memory barrier when acquiring/releasing the tcd lock. This means the two load instructions for esel_next can return different values, which is not acceptable given Erratum A-008139. We have two options to fix this issue: a) add a memory barrier when acquiring/releasing the tcd lock to order the loads/stores to esel_next; b) just make sure we invalidate and write to the same tlb entry, and tolerate the race that we may get the wrong value and overwrite the tlb entry just updated by the other thread. We observe better performance using option b, so reserve an additional register to save the value of esel_next. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
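A minimal user-space C sketch of option b (the struct, field, and helper names below are hypothetical stand-ins for the real tcd/esel_next assembly state, not the kernel's code): esel_next is loaded exactly once into a local, the equivalent of the extra register, so the invalidate and the write can never target different entries even when a racy second load would have returned a newer value.

  #include <stdatomic.h>

  #define TLB1_ENTRIES 64             /* assumed size, for the sketch only */

  struct tcd {                        /* hypothetical per-core tablewalk data */
      atomic_int esel_next;           /* next victim entry in TLB1 */
  };

  static void tlb_invalidate_entry(int esel) { (void)esel; }  /* stands in for tlbilx */
  static void tlb_write_entry(int esel)      { (void)esel; }  /* stands in for tlbwe  */

  static void install_indirect_entry(struct tcd *tcd)
  {
      /* Load esel_next exactly once -- the "additional register" -- so the
       * invalidate and the write are guaranteed to hit the same entry even
       * if the other thread updates esel_next under us (no barriers used,
       * mirroring the commit). */
      int esel = atomic_load_explicit(&tcd->esel_next, memory_order_relaxed);

      tlb_invalidate_entry(esel);     /* A-008139 workaround: invalidate first */
      tlb_write_entry(esel);          /* then overwrite the very same entry    */

      /* Advance the round-robin victim pointer; the race is tolerated. */
      atomic_store_explicit(&tcd->esel_next, (esel + 1) % TLB1_ENTRIES,
                            memory_order_relaxed);
  }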
2015-08-17  powerpc/e6500: hw tablewalk: optimize a bit for tcd lock acquiring codes  (Kevin Hao)
It makes no sense to put the instructions for calculating the lock value (cpu number + 1) and the clearing of the eq bit of cr1 inside the lbarx/stbcx loop. And when the lock is held by the other thread, the current lock value can never equal the lock value used by the current cpu, so we can skip comparing these two lock values in the lbz/bne loop. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-06-02  powerpc/e6500: Optimize hugepage TLB misses  (Scott Wood)
Some workloads take a lot of TLB misses despite using traditional hugepages. Handle these TLB misses in the asm fastpath rather than going through a bunch of C code. With this patch I measured around a 5x speedup in handling hugepage TLB misses. Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-07-29  powerpc/e6500: Work around erratum A-008139  (Scott Wood)
Erratum A-008139 can cause duplicate TLB entries if an indirect entry is overwritten using tlbwe while the other thread is using it to do a lookup. Work around this by using tlbilx to invalidate prior to overwriting. To avoid the need to save another register to hold MAS1 during the workaround code, TID clearing has been moved from tlb_miss_kernel_e6500 until after the SMT section. Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-06-20  powerpc/booke64: wrap tlb lock and search in htw miss with FTR_SMT  (Laurentiu Tudor)
Virtualized environments may expose an e6500 dual-threaded core as two single-threaded e6500 cores. Take advantage of this and get rid of the tlb lock and the trap-causing tlbsx in the htw miss handler by guarding with CPU_FTR_SMT, as is already done in the bolted tlb1 miss handler. As seen in the results below, measured with the lmbench random memory access latency test running under Freescale's Embedded Hypervisor, there is a ~34% improvement.

Memory latencies in nanoseconds - smaller is better
(WARNING - may not be correct, check graphs)
----------------------------------------------------
Host    Mhz   L1 $    L2 $  Main mem  Rand mem
------  ----  ------  ----  --------  --------
smt     1665  1.8020  13.2  83.0      1149.7
nosmt   1665  1.8020  13.2  83.0      758.1

Signed-off-by: Laurentiu Tudor <Laurentiu.Tudor@freescale.com> Cc: Scott Wood <scottwood@freescale.com> [scottwood@freescale.com: commit message tweak] Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-06-20  powerpc/e6500: hw tablewalk: fix recursive tlb lock on cpu 0  (Scott Wood)
Commit 82d86de25b9c99db546e17c6f7ebf9a691da557e "TLB lock recursive" introduced a bug whereby cpu 0 uses the same value for "lock held" as is used to indicate that the lock is free. This means that cpu 1 can acquire the lock whenever it wants, regardless of whether cpu 0 has it locked, which in turn means we can get duplicate TLB entries. Add one to the CPU value to ensure we do not use zero as a "lock held" value. Signed-off-by: Scott Wood <scottwood@freescale.com> Reported-by: Ed Swarthout <ed.swarthout@freescale.com>
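A rough C11 model of the fixed locking scheme (tcd_lock and the token arithmetic are illustrative, not the kernel's lbarx/stbcx assembly): zero is the "unlocked" sentinel, so each CPU must store cpu + 1, never its raw number, otherwise CPU 0's "lock held" value would be indistinguishable from "free".

  #include <stdatomic.h>

  static atomic_int tcd_lock;         /* hypothetical per-core lock; 0 == free */

  static void tcd_lock_acquire(int cpu)
  {
      int token = cpu + 1;            /* never 0, so cpu 0 can hold the lock too */
      int expected;

      do {
          expected = 0;               /* only take the lock when it reads as free */
      } while (!atomic_compare_exchange_weak(&tcd_lock, &expected, token));
  }

  static void tcd_lock_release(void)
  {
      atomic_store(&tcd_lock, 0);     /* back to the "unlocked" sentinel */
  }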
2014-06-20  powerpc/e6500: hw tablewalk: clear TID in kernel indirect entries  (Scott Wood)
Previously TID was being cleared before the tlbsx, but not after. This can lead to a multiway hit between a TLB entry with TID=0 (previously inserted when PID=0) and a TLB entry with TID!=0 that matches PID. This can theoretically result in undefined behavior, though we probably get lucky due to the details of the overlap. It also results in the inability to use multihit detection to detect other conflicting TLB entries, as well as poorer TLB utilization due to duplicating kernel TLB entries. Rather than try to patch up MAS1 after tlbsx, the entire value is saved/restored as with MAS2. I observed a slight improvement in TLB miss performance with this patch applied. Signed-off-by: Scott Wood <scottwood@freescale.com> Reported-by: Ed Swarthout <ed.swarthout@freescale.com>
2014-03-19  powerpc/booke64: Use SPRG_TLB_EXFRAME on bolted handlers  (Scott Wood)
While bolted handlers (including e6500) do not need to deal with a TLB miss recursively causing another TLB miss, nested TLB misses can still happen with crit/mc/debug exceptions -- so we still need to honor SPRG_TLB_EXFRAME. We don't need to spend time modifying it in the TLB miss fastpath, though -- the special level exception will handle that. Signed-off-by: Scott Wood <scottwood@freescale.com> Cc: Mihai Caraman <mihai.caraman@freescale.com> Cc: kvm-ppc@vger.kernel.org
2014-03-19  powerpc/e6500: Make TLB lock recursive  (Scott Wood)
Once special level interrupts are supported, we may take nested TLB misses -- so allow the same thread to acquire the lock recursively. The lock will not be effective against the nested TLB miss handler trying to write the same entry as the interrupted TLB miss handler, but that's also a problem on non-threaded CPUs that lack TLB write conditional. This will be addressed in the patch that enables crit/mc support by invalidating the TLB on return from level exceptions. Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-07  powerpc/book3e: Fix check for linear mapping in TLB miss handler  (Benjamin Krill)
The previous code added wrong TLB entries and caused machine check errors, rather than a clean page fault, if a driver accessed past the end of the linear mapping. Signed-off-by: Ralph E. Bellofatto <ralphbel@us.ibm.com> Signed-off-by: Benjamin Krill <ben@codiert.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-01-17  powerpc/booke64: Guard e6500 tlb handler with CONFIG_PPC_FSL_BOOK3E  (Scott Wood)
...and make CONFIG_PPC_FSL_BOOK3E conflict with CONFIG_PPC_64K_PAGES. This fixes a build break with CONFIG_PPC_64K_PAGES on 64-bit book3e, that was introduced by commit 28efc35fe68dacbddc4b12c2fa8f2df1593a4ad3 ("powerpc/e6500: TLB miss handler with hardware tablewalk support"). Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-10  powerpc/booke-64: fix tlbsrx. path in bolted tlb handler  (Scott Wood)
It was branching to the cleanup part of the non-bolted handler, which would have been bad if there were any chips with tlbsrx. that use the bolted handler. Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09  powerpc/e6500: TLB miss handler with hardware tablewalk support  (Scott Wood)
There are a few things that make the existing hw tablewalk handlers unsuitable for e6500:
- Indirect entries go in TLB1 (though the resulting direct entries go in TLB0).
- It has threads, but no "tlbsrx." -- so we need a spinlock and a normal "tlbsx". Because we need this lock, hardware tablewalk is mandatory on e6500 unless we want to add spinlock+tlbsx to the normal bolted TLB miss handler.
- TLB1 has no HES (nor next-victim hint) so we need software round robin (TODO: integrate this round robin data with hugetlb/KVM).
- The existing tablewalk handlers map half of a page table at a time, because IBM hardware has a fixed 1MiB indirect page size. e6500 has variable size indirect entries, with a minimum of 2MiB. So we can't do the half-page indirect mapping, and even if we could it would be less efficient than mapping the full page (see the coverage arithmetic sketched below).
- Like on e5500, the linear mapping is bolted, so we don't need the overhead of supporting nested tlb misses.
Note that hardware tablewalk does not work in rev1 of e6500. We do not expect to support e6500 rev1 in mainline Linux. Signed-off-by: Scott Wood <scottwood@freescale.com> Cc: Mihai Caraman <mihai.caraman@freescale.com>
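The half-page point in the list above comes down to simple coverage arithmetic; here is a tiny stand-alone C program that spells it out, assuming 4KiB base pages and 8-byte PTEs (those numbers are an assumption for illustration, not taken from the patch):

  #include <stdio.h>

  int main(void)
  {
      const unsigned long page_size = 4096;      /* assumed 4KiB base pages */
      const unsigned long pte_size  = 8;         /* assumed 8-byte PTEs     */
      const unsigned long ptes_per_page = page_size / pte_size;   /* 512 */

      unsigned long full_cover = ptes_per_page * page_size;       /* 2 MiB */
      unsigned long half_cover = full_cover / 2;                   /* 1 MiB */

      /* One indirect entry covering a whole PTE page spans 2MiB, e6500's
       * minimum indirect size, while half a PTE page spans only 1MiB,
       * matching the fixed indirect size on the IBM implementations. */
      printf("full PTE page covers %lu KiB\n", full_cover >> 10);
      printf("half PTE page covers %lu KiB\n", half_cover >> 10);
      return 0;
  }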
2012-09-05  powerpc/booke64: Use SPRG0/3 scratch for bolted TLB miss & crit int  (Mihai Caraman)
Embedded.Hypervisor category defines GSPRG0..3 physical registers for guests. Avoid SPRG4-7 usage as scratch in host exception handlers, otherwise guest SPRG4-7 registers will be clobbered. For bolted TLB miss exception handlers, which is the version currently supported by KVM, use SPRN_SPRG_GEN_SCRATCH aka SPRG0 instead of SPRN_SPRG_TLB_SCRATCH aka SPRG6. Keep using TLB PACA slots to fit in one 64-byte cache line. For critical exception handlers use SPRG3 instead of SPRG7. Provide a routine to store and restore user-visible SPRGs. This will be subsequently used to restore VDSO information in SPRG3. Add EX_R13 to paca slots to free up SPRG3 and change the critical exception epilog to use it. Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-05  powerpc/booke64: Add DO_KVM kernel hooks  (Mihai Caraman)
Hook the DO_KVM macro into 64-bit booke for KVM integration. Extend the interrupt handlers' parameter list with interrupt vector numbers to accommodate the macro. Only the bolted version of the tlb miss handlers is addressed now. Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-07-10  powerpc: Enforce usage of RA 0-R31 where possible  (Michael Neuling)
Some macros use RA where, when RA=R0, the value is 0, so make this the enforced mnemonic in the macro. Idea suggested by Andreas Schwab. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-07-10  powerpc: Fix usage of register macros getting ready for %r0 change  (Michael Neuling)
Anything that uses a constructed instruction (i.e. from ppc-opcode.h) needs to use the new R0 macro, as %r0 is not going to work. Also convert usages of macros where we are just determining an offset (usually for a load/store), like: std r14,STK_REG(r14)(r1) -- we can't use STK_REG(r14) there, as %r14 doesn't work in the STK_REG macro since it's just calculating an offset. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-12-07  powerpc: Add hugepage support to 64-bit tablewalk code for FSL_BOOK3E  (Becky Bruce)
Before hugetlb, at each level of the table, we test for !0 to determine if we have a valid table entry. With hugetlb, this compare becomes:
  < 0 is a normal entry
    0 is an invalid entry
  > 0 is huge
This works because the hugepage code pulls the top bit off the entry (which for non-huge entries always has the top bit set) as an indicator that we have a hugepage. Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
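A short C illustration of that three-way compare (the helper name and enum are made up for this sketch): viewing the entry as a signed 64-bit value sorts it into normal, invalid, or huge, because every valid next-level pointer has the top bit set.

  #include <stdint.h>

  enum pd_kind { PD_NORMAL, PD_INVALID, PD_HUGE };   /* names invented for the sketch */

  static enum pd_kind classify_pd_entry(uint64_t entry)
  {
      int64_t signed_entry = (int64_t)entry;

      if (signed_entry < 0)
          return PD_NORMAL;    /* top bit set: pointer to the next-level table */
      if (signed_entry == 0)
          return PD_INVALID;   /* empty slot */
      return PD_HUGE;          /* top bit stripped off by the hugepage code */
  }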
2011-12-07  powerpc: Whitespace/comment changes to tlb_low_64e.S  (Becky Bruce)
I happened to comment this code while I was digging through it; we might as well commit that. I also made some whitespace changes - the existing code had a lot of unnecessary newlines that I found annoying when I was working on my tiny laptop. No functional changes. Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-09-20  powerpc: Hugetlb for BookE  (Becky Bruce)
Enable hugepages on Freescale BookE processors. This allows the kernel to use huge TLB entries to map pages, which can greatly reduce the number of TLB misses and the amount of TLB thrashing experienced by applications with large memory footprints. Care should be taken when using this on FSL processors, as the number of large TLB entries supported by the core is low (16-64) on current processors. The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g. Page sizes larger than the max zone size are called "gigantic" pages and must be allocated on the command line (and cannot be deallocated). This is currently only fully implemented for Freescale 32-bit BookE processors, but there is some infrastructure in the code for 64-bit BookE. Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-06-29  powerpc/book3e-64: use a separate TLB handler when linear map is bolted  (Scott Wood)
On MMUs such as FSL where we can guarantee the entire linear mapping is bolted, we don't need to worry about linear TLB misses. If on top of that we do a full table walk, we get rid of all recursive TLB faults, and can dispense with some state saving. This gains a few percent on TLB-miss-heavy workloads, and around 50% on a benchmark that had a high rate of virtual page table faults under the normal handler. While touching the EX_TLB layout, remove EX_TLB_MMUCR0, EX_TLB_SRR0, and EX_TLB_SRR1 as they're not used. [BenH: Fixed build with 64K pages (wsp config)] Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-04-07  Merge branch 'for-linus2' of git://git.profusion.mobi/users/lucas/linux-2.6  (Linus Torvalds)
* 'for-linus2' of git://git.profusion.mobi/users/lucas/linux-2.6: Fix common misspellings
2011-04-04  Documentation: fix minor typos/spelling  (Sylvestre Ledru)
Fix some minor typos:
 * informations => information
 * there own => their own
 * these => this
Signed-off-by: Sylvestre Ledru <sylvestre.ledru@scilab.org> Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-03-31  Fix common misspellings  (Lucas De Marchi)
Fixes generated by 'codespell' and manually reviewed. Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
2010-11-18  powerpc/mm: Fix module instruction tlb fault handling on Book-E 64  (Kumar Gala)
We were seeing oops like the following when we did an rmmod on a module:

Unable to handle kernel paging request for instruction fetch
Faulting instruction address: 0x8000000000008010
Oops: Kernel access of bad area, sig: 11 [#1]
SMP NR_CPUS=2 P5020 DS
last sysfs file: /sys/devices/qman-portals.2/qman-pool.9/uevent
Modules linked in: qman_tester(-)
NIP: 8000000000008010 LR: c000000000074858 CTR: 8000000000008010
REGS: c00000002e29bab0 TRAP: 0400   Not tainted  (2.6.34.6-00744-g2d21f14)
MSR: 0000000080029000 <EE,ME,CE>  CR: 24000448  XER: 00000000
TASK = c00000007a8be600[4987] 'rmmod' THREAD: c00000002e298000 CPU: 1
GPR00: 8000000000008010 c00000002e29bd30 8000000000012798 c00000000035fb28
GPR04: 0000000000000002 0000000000000002 0000000024022428 c000000000009108
GPR08: fffffffffffffffe 800000000000a618 c0000000003c13c8 0000000000000000
GPR12: 0000000022000444 c00000000fffed00 0000000000000000 0000000000000000
GPR16: 00000000100c0000 0000000000000000 00000000100dabc8 0000000010099688
GPR20: 0000000000000000 00000000100cfc28 0000000000000000 0000000010011a44
GPR24: 00000000100017b2 0000000000000000 0000000000000000 0000000000000880
GPR28: c00000000035fb28 800000000000a7b8 c000000000376d80 c0000000003cce50
NIP [8000000000008010] .test_exit+0x0/0x10 [qman_tester]
LR [c000000000074858] .SyS_delete_module+0x1f8/0x2f0
Call Trace:
[c00000002e29bd30] [c0000000000748b4] .SyS_delete_module+0x254/0x2f0 (unreliable)
[c00000002e29be30] [c000000000000580] syscall_exit+0x0/0x2c
Instruction dump:
XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX
38600000 4e800020 60000000 60000000 <4e800020> 60000000 60000000 60000000
---[ end trace 4f57124939a84dc8 ]---

This appears to be due to checking the wrong permission bits in the instruction_tlb_miss handling if the address that faulted was in vmalloc space. We need to look at the supervisor execute (_PAGE_BAP_SX) bit and not the user bit (_PAGE_BAP_UX/_PAGE_EXEC). Also removed a branch level since it did not appear to be used. Reported-by: Jeffrey Ladouceur <Jeffrey.Ladouceur@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
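A hedged C sketch of the corrected check (the bit values below are illustrative only and do not match the real Book3E PTE layout; _PAGE_BAP_SX and _PAGE_BAP_UX are the names the commit refers to): an instruction fault on a kernel (vmalloc) address must test the supervisor execute bit, while a user address tests the user execute bit.

  #include <stdbool.h>
  #include <stdint.h>

  /* Illustrative bit positions only; the real Book3E PTE layout differs. */
  #define PAGE_BAP_UX  0x010UL     /* user execute       (_PAGE_BAP_UX) */
  #define PAGE_BAP_SX  0x020UL     /* supervisor execute (_PAGE_BAP_SX) */

  /* Pick the execute bit by the kind of faulting address: a vmalloc/module
   * address needs supervisor execute, a user address needs user execute.
   * The bug was testing the user bit in both cases. */
  static bool itlb_exec_allowed(uint64_t pte, bool kernel_address)
  {
      uint64_t exec_bit = kernel_address ? PAGE_BAP_SX : PAGE_BAP_UX;

      return (pte & exec_bit) != 0;
  }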
2010-02-03  powerpc: Fix typo s/leve/level/ in TLB code  (Thadeu Lima de Souza Cascardo)
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc/mm: Remove duplicated #include  (Huang Weiyi)
Remove duplicated #include('s) in arch/powerpc/mm/tlb_low_64e.S Signed-off-by: Huang Weiyi <weiyi.huang@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-28  powerpc/mm: Add MMU features for TLB reservation & Paired MAS registers  (Kumar Gala)
Support for TLB reservation (or TLB Write Conditional) and Paired MAS registers is optional for a processor implementation, so we handle them via MMU feature sections. We currently only use paired MAS registers to access the full RPN + perm bits that are kept in MAS7||MAS3. We assume for now that if an implementation has a hardware page table it also implements TLB reservations. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-27  powerpc/mm: Cleanup handling of execute permission  (Benjamin Herrenschmidt)
This is an attempt at cleaning up a bit the way we handle execute permission on powerpc. _PAGE_HWEXEC is gone, _PAGE_EXEC is now only defined by CPUs that can do something with it, and the myriad of #ifdef's in the I$/D$ coherency code is reduced to 2 cases that hopefully should cover everything. The logic on BookE is a little bit different than what it was, though not by much. Since _PAGE_EXEC will now be set by the generic code for executable pages, we need to filter it out when the page is not yet cache clean and recover it afterwards. However, I don't expect the code in that area to be more bloated than it already was due to that change. I could boast that this brings proper enforcing of per-page execute permissions to all BookE and 40x, but in fact we've had that for some time as a side effect of my previous rework in that area (and I didn't even know it :-) We would only enable execute permission if the page was cache clean, and we would only cache clean it if we took an exec fault. Since we now enforce that the latter only works if VM_EXEC is part of the VMA flags, we de facto already enforce per-page execute permissions... unless I missed something. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-20  powerpc: Add TLB management code for 64-bit Book3E  (Benjamin Herrenschmidt)
This adds the TLB miss handler assembly, the low level TLB flush routines, along with the necessary hook for dealing with our virtual page tables or indirect TLB entries that need to be flushed when PTE pages are freed. There is currently no support for hugetlbfs. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>