| author | Hugh Dickins <hugh@veritas.com> | 2006-11-14 02:03:32 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@woody.osdl.org> | 2006-11-14 09:09:27 -0800 |
| commit | 68589bc353037f233fe510ad9ff432338c95db66 | |
| tree | dedc58ff66134f54796642917e2a2a26ac6802b0 /arch | |
| parent | 69ae9e3ee4ce99140a7db424bebf55d8d180da2f | |
[PATCH] hugetlb: prepare_hugepage_range check offset too
(David:)
If hugetlbfs_file_mmap() returns a failure to do_mmap_pgoff() - for example,
because the given file offset is not hugepage aligned - then do_mmap_pgoff()
will go to the unmap_and_free_vma backout path.
But at this stage the vma hasn't been marked as hugepage, and the backout path
will call unmap_region() on it.  That will eventually call down to the
non-hugepage version of unmap_page_range().  On ppc64, at least, that will
cause serious problems if there are any existing hugepage pagetable entries in
the vicinity - for example if there are any other hugepage mappings under the
same PUD.  unmap_page_range() will trigger a bad_pud() on the hugepage pud
entries.  I suspect this will also cause bad problems on ia64, though I don't
have a machine to test it on.
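A minimal userspace sketch of how that failure can be provoked (the mount
point /mnt/huge, the file name, and the 16MB huge page size are assumptions
for illustration, not part of the patch):

```c
/*
 * Illustrative reproducer sketch, not from the patch itself: a
 * hugetlbfs mmap with a file offset that is page-aligned but not
 * hugepage-aligned makes the mapping attempt fail, which before this
 * patch sent do_mmap_pgoff() down the unmap_and_free_vma backout
 * path described above.  Mount point and sizes are assumptions.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE (16UL * 1024 * 1024)	/* assumed huge page size */

int main(void)
{
	int fd = open("/mnt/huge/test", O_CREAT | O_RDWR, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* offset of one small page: PAGE_SIZE-aligned, not HPAGE_SIZE-aligned */
	void *p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, getpagesize());
	if (p == MAP_FAILED)
		perror("mmap");	/* expected: EINVAL */
	close(fd);
	return 0;
}
```

Before the patch, the EINVAL for a bad offset came from hugetlbfs_file_mmap()
itself, after the vma had already been set up - which is exactly what sends
do_mmap_pgoff() into the backout path.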
(Hugh:)
prepare_hugepage_range() should check file offset alignment when it checks
virtual address and length, to stop MAP_FIXED with a bad huge offset from
unmapping before it fails further down.  PowerPC should apply the same
prepare_hugepage_range alignment checks as ia64 and all the others do.
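The offset check works in PAGE_SIZE units because vm_pgoff counts small
pages: a hugepage-aligned file offset must have the low
HPAGE_SHIFT - PAGE_SHIFT bits of pgoff clear, which is what
pgoff & (~HPAGE_MASK >> PAGE_SHIFT) tests. A standalone sketch of the
arithmetic (the 4K/16MB shift values are assumptions, as on ppc64; the
kernel takes them from the architecture):

```c
/*
 * Standalone demonstration of the alignment test added by the patch.
 * PAGE_SHIFT/HPAGE_SHIFT values are assumptions (4K base pages, 16MB
 * huge pages); the kernel gets them from the arch headers.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define HPAGE_SHIFT	24
#define HPAGE_MASK	(~((1UL << HPAGE_SHIFT) - 1))

int main(void)
{
	/* pgoff is a file offset expressed in PAGE_SIZE units */
	unsigned long pgoff_ok  = 1UL << (HPAGE_SHIFT - PAGE_SHIFT); /* 16MB */
	unsigned long pgoff_bad = 1;                                 /* 4K   */

	printf("aligned:   %s\n",
	       (pgoff_ok & (~HPAGE_MASK >> PAGE_SHIFT)) ? "EINVAL" : "ok");
	printf("unaligned: %s\n",
	       (pgoff_bad & (~HPAGE_MASK >> PAGE_SHIFT)) ? "EINVAL" : "ok");
	return 0;
}
```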
Then none of the alignment checks in hugetlbfs_file_mmap are required (nor
is the check for too small a mapping); but even so, move up setting of
VM_HUGETLB and add a comment to warn of what David Gibson discovered - if
hugetlbfs_file_mmap fails before setting it, do_mmap_pgoff's unmap_region
when unwinding from error will go the non-huge way, which may cause bad
behaviour on architectures (powerpc and ia64) which segregate their huge
mappings into a separate region of the address space.
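The hugetlbfs side of the change lives in fs/hugetlbfs/inode.c and so falls
outside the 'arch' diffstat below; roughly, the reordering looks like this
(a sketch of the intent, not the verbatim hunk):

```c
/*
 * Sketch of the reordering in hugetlbfs_file_mmap(): mark the vma
 * huge before anything can fail, so that an error-path
 * unmap_region() sees is_vm_hugetlb_page() and takes the huge
 * unmap path when do_mmap_pgoff() unwinds.
 */
vma->vm_flags |= VM_HUGETLB | VM_RESERVED;
vma->vm_ops = &hugetlb_vm_ops;

/* ...only then perform any checks that may return an error... */
```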
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Acked-by: Adam Litke <agl@us.ibm.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'arch')
| -rw-r--r-- | arch/ia64/mm/hugetlbpage.c | 4 |
| -rw-r--r-- | arch/powerpc/mm/hugetlbpage.c | 8 |

2 files changed, 9 insertions(+), 3 deletions(-)
```diff
diff --git a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
index eee5c1cfbe32..f3a9585e98a8 100644
--- a/arch/ia64/mm/hugetlbpage.c
+++ b/arch/ia64/mm/hugetlbpage.c
@@ -70,8 +70,10 @@ huge_pte_offset (struct mm_struct *mm, unsigned long addr)
  * Don't actually need to do any preparation, but need to make sure
  * the address is in the right region.
  */
-int prepare_hugepage_range(unsigned long addr, unsigned long len)
+int prepare_hugepage_range(unsigned long addr, unsigned long len, pgoff_t pgoff)
 {
+	if (pgoff & (~HPAGE_MASK >> PAGE_SHIFT))
+		return -EINVAL;
 	if (len & ~HPAGE_MASK)
 		return -EINVAL;
 	if (addr & ~HPAGE_MASK)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index fd68b74c07c3..506d89768d45 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -491,11 +491,15 @@ static int open_high_hpage_areas(struct mm_struct *mm, u16 newareas)
 	return 0;
 }
 
-int prepare_hugepage_range(unsigned long addr, unsigned long len)
+int prepare_hugepage_range(unsigned long addr, unsigned long len, pgoff_t pgoff)
 {
 	int err = 0;
 
-	if ( (addr+len) < addr )
+	if (pgoff & (~HPAGE_MASK >> PAGE_SHIFT))
+		return -EINVAL;
+	if (len & ~HPAGE_MASK)
+		return -EINVAL;
+	if (addr & ~HPAGE_MASK)
 		return -EINVAL;
 
 	if (addr < 0x100000000UL)
```
