author     David Gibson <david@gibson.dropbear.id.au>    2008-07-18 15:55:49 +1000
committer  Greg Kroah-Hartman <gregkh@suse.de>           2008-08-01 11:51:01 -0700
commit     482780d80ba2ab6e6bcbb4ec2bf868d2f9bd4628
tree       25fcca7c0613f132d0d7affe435cce3fb597c8a6 /mm
parent     620f2f722008f4cb95d7e2282aafc89253119627
Correct hash flushing from huge_ptep_set_wrprotect() [stable tree version]
A fix for incorrect flushing of the hash page table at fork() for
hugepages was recently committed as
86df86424939d316b1f6cfac1b6204f0c7dee317. Without this fix, a process
can make a MAP_PRIVATE hugepage mapping, then fork() and have writes
to the mapping after the fork() pollute the child's version.
Unfortunately this bug also exists in the stable branch. In fact, in
that case copy_hugetlb_page_range() from mm/hugetlb.c calls
ptep_set_wrprotect() directly; the hugepage variant hook
huge_ptep_set_wrprotect() doesn't even exist.
The patch below is a port of the fix to the stable25/master branch.
It introduces a huge_ptep_set_wrprotect() call, but this is #defined
to be equal to ptep_set_wrprotect() unless the arch defines its own
version and sets __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT.
This arch preprocessor flag is kind of nasty, but it seems the sanest
way to introduce this fix with minimum risk of breaking other archs
for whom ptep_set_wrprotect() is suitable for hugepages.
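The fallback pattern can be sketched in plain C. The `_demo` names and the string return value below are hypothetical stand-ins for illustration; the real kernel helpers operate on page-table entries, not strings:

```c
#include <string.h>

/* Hypothetical stand-in for the generic helper; the real
 * ptep_set_wrprotect() write-protects a page-table entry. */
static const char *ptep_set_wrprotect_demo(void)
{
	return "generic";
}

/* The pattern from the patch: unless an architecture defines
 * __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT in its headers and supplies
 * its own huge_ptep_set_wrprotect(), the hugepage hook simply
 * aliases the generic helper. */
#ifndef __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
#define huge_ptep_set_wrprotect_demo ptep_set_wrprotect_demo
#endif
```

An architecture that needs extra work at write-protect time (such as powerpc's hash flushing) would set the flag and provide its own function; every other arch compiles the alias and keeps its existing behaviour.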
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Diffstat (limited to 'mm')
-rw-r--r--  mm/hugetlb.c | 6
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 51c9e2c01640..893558ab9ff6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -738,6 +738,10 @@ static void set_huge_ptep_writable(struct vm_area_struct *vma,
 }
 
+#ifndef __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
+#define huge_ptep_set_wrprotect ptep_set_wrprotect
+#endif
+
 int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			    struct vm_area_struct *vma)
 {
@@ -764,7 +768,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 		spin_lock(&src->page_table_lock);
 		if (!pte_none(*src_pte)) {
 			if (cow)
-				ptep_set_wrprotect(src, addr, src_pte);
+				huge_ptep_set_wrprotect(src, addr, src_pte);
 			entry = *src_pte;
 			ptepage = pte_page(entry);
 			get_page(ptepage);