author		Catalin Marinas <catalin.marinas@arm.com>	2012-04-17 17:53:05 +0100
committer	Catalin Marinas <catalin.marinas@arm.com>	2012-04-17 17:58:00 +0100
commit		7cf62161f3b42fdd0629ed09a06f6536cf63d02d (patch)
tree		3dac805b5975a40127cafe400a33b482f43d7d51 /mm/mmap.c
parent		e43ef917d9e1fc4f0176c579d3b6780ba4bb9a20 (diff)
mm: Limit pgd range freeing to TASK_SIZE
ARM processors with LPAE enabled use 3 levels of page tables, with an
entry in the top one (pgd/pud) covering 1GB of virtual space. Because of
the relocation limitations on ARM, the loadable modules are mapped 16MB
below PAGE_OFFSET, making the corresponding 1GB pgd/pud shared between
kernel modules and user space. During fault processing, pmd entries
corresponding to modules are populated to point to the init_mm pte
tables.
Since free_pgtables() is called with ceiling == 0, free_pgd_range() (and
subsequently called functions) also clears the pgd/pud entry that is
shared between user space and kernel modules. If a module interrupt
routine is invoked during this window, the kernel gets a translation
fault and becomes confused.
There is a proposed fix for ARM (within the arch/arm/ code) but it
wouldn't be needed if the pgd range freeing were capped at TASK_SIZE. The
concern is that there are architectures with vmas beyond TASK_SIZE, so
the aim of this RFC is to ask whether those architectures rely on
free_pgtables() to free any page tables beyond TASK_SIZE.
Alternatively, we can define something like LAST_USER_ADDRESS,
defaulting to 0 for most architectures.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Diffstat (limited to 'mm/mmap.c')
-rw-r--r--	mm/mmap.c	4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index a7bf6a31c9f6..6542f420b3cc 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1893,7 +1893,7 @@ static void unmap_region(struct mm_struct *mm,
 	unmap_vmas(&tlb, vma, start, end, &nr_accounted, NULL);
 	vm_unacct_memory(nr_accounted);
 	free_pgtables(&tlb, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
-				 next ? next->vm_start : 0);
+				 next ? next->vm_start : TASK_SIZE);
 	tlb_finish_mmu(&tlb, start, end);
 }
@@ -2267,7 +2267,7 @@ void exit_mmap(struct mm_struct *mm)
 	unmap_vmas(&tlb, vma, 0, -1, &nr_accounted, NULL);
 	vm_unacct_memory(nr_accounted);
-	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);
+	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, TASK_SIZE);
 	tlb_finish_mmu(&tlb, 0, -1);

 	/*