path: root/arch/powerpc/mm/tlb_low_64e.S
Age  Commit message  Author
2014-01-10  powerpc/booke-64: fix tlbsrx. path in bolted tlb handler  (Scott Wood)
It was branching to the cleanup part of the non-bolted handler, which would have been bad if there were any chips with tlbsrx. that use the bolted handler.

Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09  powerpc/e6500: TLB miss handler with hardware tablewalk support  (Scott Wood)
There are a few things that make the existing hw tablewalk handlers unsuitable for e6500:

- Indirect entries go in TLB1 (though the resulting direct entries go in TLB0).

- It has threads, but no "tlbsrx." -- so we need a spinlock and a normal "tlbsx". Because we need this lock, hardware tablewalk is mandatory on e6500 unless we want to add spinlock+tlbsx to the normal bolted TLB miss handler (see the C sketch after this message).

- TLB1 has no HES (nor next-victim hint) so we need software round robin (TODO: integrate this round robin data with hugetlb/KVM).

- The existing tablewalk handlers map half of a page table at a time, because IBM hardware has a fixed 1MiB indirect page size. e6500 has variable size indirect entries, with a minimum of 2MiB. So we can't do the half-page indirect mapping, and even if we could it would be less efficient than mapping the full page.

- Like on e5500, the linear mapping is bolted, so we don't need the overhead of supporting nested tlb misses.

Note that hardware tablewalk does not work in rev1 of e6500. We do not expect to support e6500 rev1 in mainline Linux.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Cc: Mihai Caraman <mihai.caraman@freescale.com>
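As a rough illustration of the lock-and-round-robin scheme described above, here is a hedged C sketch; the real handler is assembly, and every name below is an illustrative stand-in rather than a kernel symbol:

    #include <stdatomic.h>
    #include <stdbool.h>

    struct tlb_core_data {                  /* per core, shared by both threads */
        atomic_flag lock;                   /* stands in for the missing tlbsrx. */
        unsigned int esel_next;             /* software round-robin victim in TLB1 */
        unsigned int esel_first, esel_max;
    };

    static bool tlb1_search(unsigned long ea)   /* models a plain tlbsx */
    {
        (void)ea;
        return false;                       /* assume a miss for this sketch */
    }

    static void tlb1_write_indirect(unsigned long ea, unsigned int esel)
    {
        (void)ea; (void)esel;               /* models tlbwe into TLB1 */
    }

    static void e6500_tlb_miss(struct tlb_core_data *tcd, unsigned long ea)
    {
        /* No tlbsrx. and two threads sharing one TLB: serialize the
         * search-and-write window with a spinlock. */
        while (atomic_flag_test_and_set(&tcd->lock))
            ;
        if (!tlb1_search(ea)) {
            tlb1_write_indirect(ea, tcd->esel_next);
            /* TLB1 has no HES/next-victim hint: rotate the victim by hand. */
            if (++tcd->esel_next > tcd->esel_max)
                tcd->esel_next = tcd->esel_first;
        }
        atomic_flag_clear(&tcd->lock);
    }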
2012-09-05  powerpc/booke64: Use SPRG0/3 scratch for bolted TLB miss & crit int  (Mihai Caraman)
The Embedded.Hypervisor category defines GSPRG0..3 physical registers for guests. Avoid SPRG4-7 usage as scratch in host exception handlers, otherwise guest SPRG4-7 registers will be clobbered.

For bolted TLB miss exception handlers, which is the version currently supported by KVM, use SPRN_SPRG_GEN_SCRATCH (aka SPRG0) instead of SPRN_SPRG_TLB_SCRATCH (aka SPRG6). Keep using TLB PACA slots to fit in one 64-byte cache line.

For critical exception handlers use SPRG3 instead of SPRG7. Provide a routine to store and restore user-visible SPRGs; this will subsequently be used to restore VDSO information in SPRG3. Add EX_R13 to the PACA slots to free up SPRG3, and change the critical exception epilog to use it.

Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
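As a hedged sketch of the remapping (SPR numbers per the ISA; the SPRN_SPRG_* names follow the kernel's style, but the exact definitions live in the reg headers and may differ):

    #define SPRN_SPRG0  0x110   /* SPR 272 */
    #define SPRN_SPRG3  0x113   /* SPR 275 */

    /* Guest reads/writes of SPRG0-3 land in GSPRG0-3, so the host may
     * scratch SPRG0-3 freely; SPRG4-7 are shared with the guest and
     * must be left alone. */
    #define SPRN_SPRG_GEN_SCRATCH   SPRN_SPRG0  /* bolted TLB miss, was SPRG6 */
    #define SPRN_SPRG_CRIT_SCRATCH  SPRN_SPRG3  /* critical int, was SPRG7    */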
2012-09-05  powerpc/booke64: Add DO_KVM kernel hooks  (Mihai Caraman)
Hook the DO_KVM macro into 64-bit booke for KVM integration. Extend the interrupt handlers' parameter list with interrupt vector numbers to accommodate the macro. Only the bolted version of the tlb miss handlers is addressed now.

Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-07-10  powerpc: Enforce usage of RA 0-R31 where possible  (Michael Neuling)
Some macros use RA where, when RA=R0, the value is 0, so make 0 the enforced mnemonic in those macros. Idea suggested by Andreas Schwab.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
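To make the semantics concrete, a hypothetical macro in the ppc-opcode.h style (the RA/RB field placement matches ppc-opcode.h, but the opcode value is a placeholder, not a real encoding):

    #define ___PPC_RA(a)    (((a) & 0x1f) << 16)
    #define ___PPC_RB(b)    (((b) & 0x1f) << 11)

    /* In instructions like this, RA=0 means the literal value zero,
     * not the contents of r0, so callers must write 0 rather than R0. */
    #define PPC_EXAMPLE_OP(a, b) \
        (0x7c000000 | ___PPC_RA(a) | ___PPC_RB(b))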
2012-07-10  powerpc: Fix usage of register macros getting ready for %r0 change  (Michael Neuling)
Anything that uses a constructed instruction (i.e. from ppc-opcode.h) needs to use the new R0 macro, as %r0 is not going to work.

Also convert usages of macros where we are just determining an offset (usually for a load/store), like:

    std r14,STK_REG(r14)(r1)

We can't use STK_REG(r14), as %r14 doesn't work in the STK_REG macro since it is just calculating an offset.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
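The offset case in one line (STK_REG paraphrased from ppc_asm.h; the exact constant may differ):

    #define R14         14
    #define STK_REG(i)  (112 + ((i) - 14) * 8)

    /* STK_REG(R14) evaluates to the integer 112, usable as a load/store
     * displacement.  STK_REG(%r14) cannot: %r14 is an assembler register
     * name, not a number that preprocessor arithmetic can operate on. */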
2011-12-07  powerpc: Add hugepage support to 64-bit tablewalk code for FSL_BOOK3E  (Becky Bruce)
Before hugetlb, at each level of the table, we test for !0 to determine if we have a valid table entry. With hugetlb, this compare becomes:

    < 0    a normal entry
      0    an invalid entry
    > 0    a hugepage entry

This works because the hugepage code pulls the top bit off the entry (which for non-huge entries always has the top bit set) as an indicator that we have a hugepage.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
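A hedged C rendering of that three-way compare (the real check is a signed compare in assembly; these names are illustrative, not kernel API):

    #include <stdint.h>

    enum entry_kind { ENTRY_INVALID, ENTRY_NORMAL, ENTRY_HUGE };

    static enum entry_kind classify(int64_t entry)
    {
        if (entry == 0)
            return ENTRY_INVALID;       /* nothing mapped here */
        /* Next-level table pointers are kernel addresses, so their top
         * bit is set and they test negative; hugepage entries have had
         * that top bit pulled off and test positive. */
        return entry < 0 ? ENTRY_NORMAL : ENTRY_HUGE;
    }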
2011-12-07  powerpc: Whitespace/comment changes to tlb_low_64e.S  (Becky Bruce)
I happened to comment this code while I was digging through it; we might as well commit that. I also made some whitespace changes - the existing code had a lot of unnecessary newlines that I found annoying when I was working on my tiny laptop. No functional changes.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-09-20  powerpc: Hugetlb for BookE  (Becky Bruce)
Enable hugepages on Freescale BookE processors. This allows the kernel to use huge TLB entries to map pages, which can greatly reduce the number of TLB misses and the amount of TLB thrashing experienced by applications with large memory footprints. Care should be taken when using this on FSL processors, as the number of large TLB entries supported by the core is low (16-64) on current processors.

The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g. Page sizes larger than the max zone size are called "gigantic" pages and must be allocated on the command line (they cannot be deallocated).

This is currently only fully implemented for Freescale 32-bit BookE processors, but there is some infrastructure in the code for 64-bit BookE.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
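For instance, gigantic pages would be reserved at boot with the standard hugepagesz=/hugepages= kernel parameters, along the lines of this hypothetical boot line (the sizes must be ones the core supports):

    hugepagesz=1g hugepages=2 hugepagesz=256m hugepages=8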
2011-06-29  powerpc/book3e-64: use a separate TLB handler when linear map is bolted  (Scott Wood)
On MMUs such as FSL where we can guarantee the entire linear mapping is bolted, we don't need to worry about linear TLB misses. If on top of that we do a full table walk, we get rid of all recursive TLB faults and can dispense with some state saving. This gains a few percent on TLB-miss-heavy workloads, and around 50% on a benchmark that had a high rate of virtual page table faults under the normal handler.

While touching the EX_TLB layout, remove EX_TLB_MMUCR0, EX_TLB_SRR0, and EX_TLB_SRR1, as they're not used.

[BenH: Fixed build with 64K pages (wsp config)]

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
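The shape of the full walk, as a hedged C sketch (the real code is assembly; the types, shifts, and masks here are simplified stand-ins):

    #include <stdint.h>

    /* With the linear map bolted, every table level is in always-mapped
     * memory, so no step of the walk can itself take a TLB miss --
     * hence no nested-miss state to save. */
    static uint64_t walk(uint64_t *pgd, unsigned long ea,
                         const unsigned int shift[4],
                         const unsigned long mask[4])
    {
        uint64_t *table = pgd;

        for (int level = 0; level < 3; level++) {
            uint64_t entry = table[(ea >> shift[level]) & mask[level]];

            if (!entry)
                return 0;                   /* no translation: real fault */
            table = (uint64_t *)entry;      /* next level, linear-mapped */
        }
        return table[(ea >> shift[3]) & mask[3]];   /* the PTE */
    }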
2011-04-07  Merge branch 'for-linus2' of git://git.profusion.mobi/users/lucas/linux-2.6  (Linus Torvalds)
* 'for-linus2' of git://git.profusion.mobi/users/lucas/linux-2.6:
  Fix common misspellings
2011-04-04  Documentation: fix minor typos/spelling  (Sylvestre Ledru)
Fix some minor typos:
* informations => information
* there own => their own
* these => this

Signed-off-by: Sylvestre Ledru <sylvestre.ledru@scilab.org>
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-03-31  Fix common misspellings  (Lucas De Marchi)
Fixes generated by 'codespell' and manually reviewed.

Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
2010-11-18  powerpc/mm: Fix module instruction tlb fault handling on Book-E 64  (Kumar Gala)
We were seeing oops like the following when we did an rmmod on a module:

  Unable to handle kernel paging request for instruction fetch
  Faulting instruction address: 0x8000000000008010
  Oops: Kernel access of bad area, sig: 11 [#1]
  SMP NR_CPUS=2 P5020 DS
  last sysfs file: /sys/devices/qman-portals.2/qman-pool.9/uevent
  Modules linked in: qman_tester(-)
  NIP: 8000000000008010 LR: c000000000074858 CTR: 8000000000008010
  REGS: c00000002e29bab0 TRAP: 0400   Not tainted  (2.6.34.6-00744-g2d21f14)
  MSR: 0000000080029000 <EE,ME,CE>  CR: 24000448  XER: 00000000
  TASK = c00000007a8be600[4987] 'rmmod' THREAD: c00000002e298000 CPU: 1
  GPR00: 8000000000008010 c00000002e29bd30 8000000000012798 c00000000035fb28
  GPR04: 0000000000000002 0000000000000002 0000000024022428 c000000000009108
  GPR08: fffffffffffffffe 800000000000a618 c0000000003c13c8 0000000000000000
  GPR12: 0000000022000444 c00000000fffed00 0000000000000000 0000000000000000
  GPR16: 00000000100c0000 0000000000000000 00000000100dabc8 0000000010099688
  GPR20: 0000000000000000 00000000100cfc28 0000000000000000 0000000010011a44
  GPR24: 00000000100017b2 0000000000000000 0000000000000000 0000000000000880
  GPR28: c00000000035fb28 800000000000a7b8 c000000000376d80 c0000000003cce50
  NIP [8000000000008010] .test_exit+0x0/0x10 [qman_tester]
  LR [c000000000074858] .SyS_delete_module+0x1f8/0x2f0
  Call Trace:
  [c00000002e29bd30] [c0000000000748b4] .SyS_delete_module+0x254/0x2f0 (unreliable)
  [c00000002e29be30] [c000000000000580] syscall_exit+0x0/0x2c
  Instruction dump:
  XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX
  38600000 4e800020 60000000 60000000 <4e800020> 60000000 60000000 60000000
  ---[ end trace 4f57124939a84dc8 ]---

This appears to be due to checking the wrong permission bits in the instruction_tlb_miss handling if the address that faulted was in vmalloc space. We need to look at the supervisor execute (_PAGE_BAP_SX) bit and not the user bit (_PAGE_BAP_UX/_PAGE_EXEC).

Also removed a branch level since it did not appear to be used.

Reported-by: Jeffrey Ladouceur <Jeffrey.Ladouceur@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
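The gist of the fix, as a hedged C model (the bit values are placeholders, not the real PTE layout):

    #define _PAGE_BAP_UX    0x010   /* user execute (placeholder value)       */
    #define _PAGE_BAP_SX    0x040   /* supervisor execute (placeholder value) */

    static int exec_permitted(unsigned long pte, int kernel_address)
    {
        /* Module text lives in vmalloc space and never sets the
         * user-execute bit, so testing _PAGE_BAP_UX there made every
         * kernel ifetch from module text fault; the vmalloc path must
         * test _PAGE_BAP_SX instead. */
        return !!(pte & (kernel_address ? _PAGE_BAP_SX : _PAGE_BAP_UX));
    }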
2010-02-03  powerpc: Fix typo s/leve/level/ in TLB code  (Thadeu Lima de Souza Cascardo)
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc/mm: Remove duplicated #include  (Huang Weiyi)
Remove duplicated #include('s) in arch/powerpc/mm/tlb_low_64e.S.

Signed-off-by: Huang Weiyi <weiyi.huang@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-28  powerpc/mm: Add MMU features for TLB reservation & Paired MAS registers  (Kumar Gala)
Support for TLB reservation (or TLB Write Conditional) and paired MAS registers is optional for a processor implementation, so we handle them via MMU feature sections. We currently only use paired MAS registers to access the full RPN + permission bits that are kept in MAS7||MAS3. We assume that if an implementation has a hardware page table at this time, it also implements TLB reservations.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
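A C model of the decision those feature sections encode (the real mechanism patches the assembly in place via MMU feature sections; the flag values here are illustrative):

    #define MMU_FTR_USE_TLBRSRV     0x00010000  /* tlbsrx. reservation      */
    #define MMU_FTR_USE_PAIRED_MAS  0x00020000  /* MAS7||MAS3 paired access */

    static unsigned long mmu_features;  /* filled in from the CPU spec at boot */

    static int have_tlb_reservation(void)
    {
        return !!(mmu_features & MMU_FTR_USE_TLBRSRV);
    }

    static int have_paired_mas(void)
    {
        return !!(mmu_features & MMU_FTR_USE_PAIRED_MAS);
    }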
2009-08-27  powerpc/mm: Cleanup handling of execute permission  (Benjamin Herrenschmidt)
This is an attempt at cleaning up a bit the way we handle execute permission on powerpc. _PAGE_HWEXEC is gone, _PAGE_EXEC is now only defined by CPUs that can do something with it, and the myriad of #ifdef's in the I$/D$ coherency code is reduced to two cases that hopefully cover everything.

The logic on BookE is a little bit different than what it was, though not by much. Since _PAGE_EXEC will now be set by the generic code for executable pages, we need to filter it out if the pages are unclean and recover it later. However, I don't expect the code to be more bloated than it already was in that area due to that change.

I could boast that this brings proper enforcement of per-page execute permissions to all BookE and 40x, but in fact we've had that for some time as a side effect of my previous rework in that area (and I didn't even know it :-). We would only enable execute permission if the page was cache clean, and we would only cache clean it if we took an exec fault. Since we now enforce that the latter only works if VM_EXEC is part of the VMA flags, we de facto already enforce per-page execute permissions... unless I missed something.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
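A rough C model of the BookE filtering described above (the names and the cleanliness test are illustrative, not the kernel's actual helpers):

    #define _PAGE_EXEC  0x001   /* placeholder bit value */

    static unsigned long filter_pte(unsigned long pte, int icache_clean)
    {
        /* Generic code now sets _PAGE_EXEC on executable pages; BookE
         * strips it while the page's caches are unclean and recovers
         * it once an exec fault has cleaned the page. */
        if (!icache_clean)
            pte &= ~_PAGE_EXEC;
        return pte;
    }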
2009-08-20  powerpc: Add TLB management code for 64-bit Book3E  (Benjamin Herrenschmidt)
This adds the TLB miss handler assembly and the low level TLB flush routines, along with the necessary hooks for dealing with our virtual page tables or indirect TLB entries that need to be flushed when PTE pages are freed. There is currently no support for hugetlbfs.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
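A hypothetical shape for that PTE-page-free hook (the names and signature are illustrative, not the kernel's actual API):

    static void invalidate_indirect(unsigned long va)
    {
        (void)va;   /* models invalidating the indirect/virtual-page-table entry */
    }

    static void pte_page_freed(unsigned long va)
    {
        /* Hook point: before a freed PTE page can be reused, any
         * indirect TLB entry or virtual page table translation still
         * pointing at it must be dropped. */
        invalidate_indirect(va);
    }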