author | Ian Campbell <ian.campbell@citrix.com> | 2013-12-11 17:02:27 +0000
---|---|---
committer | Stefano Stabellini <stefano.stabellini@eu.citrix.com> | 2013-12-11 17:06:05 +0000
commit | a7892f32cc3534d4cc0e64b245fbf47a8e364652 (patch) |
tree | 4b4d9de32643a1b67e886c0b1cfab875e3f5a311 /arch/arm/xen |
parent | 02ab71cdae248533620abefa1d46097581457110 (diff) |
arm: xen: foreign mapping PTEs are special.
These mappings are in fact special and require special handling in privcmd,
which already exists. Failure to mark the PTE as special on arm64 causes all
sorts of bad PTE fun, e.g.:
BUG: Bad page map in process xl pte:e0004077b33f53 pmd:4079575003
page:ffffffbce1a2f328 count:1 mapcount:-1 mapping: (null) index:0x0
page flags: 0x4000000000000014(referenced|dirty)
addr:0000007fb5259000 vm_flags:040644fa anon_vma: (null) mapping:ffffffc03a6fda58 index:0
vma->vm_ops->fault: privcmd_fault+0x0/0x38
vma->vm_file->f_op->mmap: privcmd_mmap+0x0/0x2c
CPU: 0 PID: 2657 Comm: xl Not tainted 3.12.0+ #102
Call trace:
[<ffffffc0000880f8>] dump_backtrace+0x0/0x12c
[<ffffffc000088238>] show_stack+0x14/0x1c
[<ffffffc0004b67e0>] dump_stack+0x70/0x90
[<ffffffc000125690>] print_bad_pte+0x12c/0x1bc
[<ffffffc0001268f4>] unmap_single_vma+0x4cc/0x700
[<ffffffc0001273b4>] unmap_vmas+0x68/0xb4
[<ffffffc00012c050>] unmap_region+0xcc/0x1d4
[<ffffffc00012df20>] do_munmap+0x218/0x314
[<ffffffc00012e060>] vm_munmap+0x44/0x64
[<ffffffc00012ed78>] SyS_munmap+0x24/0x34
Here unmap_single_vma has inlined into it the chain unmap_page_range ->
zap_pud_range -> zap_pmd_range -> zap_pte_range -> print_bad_pte.
Or:
BUG: Bad page state in process xl pfn:4077b4d
page:ffffffbce1a2f8d8 count:0 mapcount:-1 mapping: (null) index:0x0
page flags: 0x4000000000000014(referenced|dirty)
Modules linked in:
CPU: 0 PID: 2657 Comm: xl Tainted: G B 3.12.0+ #102
Call trace:
[<ffffffc0000880f8>] dump_backtrace+0x0/0x12c
[<ffffffc000088238>] show_stack+0x14/0x1c
[<ffffffc0004b67e0>] dump_stack+0x70/0x90
[<ffffffc00010f798>] bad_page+0xc4/0x110
[<ffffffc00010f8b4>] free_pages_prepare+0xd0/0xd8
[<ffffffc000110e94>] free_hot_cold_page+0x28/0x178
[<ffffffc000111460>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000114cf0>] release_pages+0x190/0x1dc
[<ffffffc00012c0e0>] unmap_region+0x15c/0x1d4
[<ffffffc00012df20>] do_munmap+0x218/0x314
[<ffffffc00012e060>] vm_munmap+0x44/0x64
[<ffffffc00012ed78>] SyS_munmap+0x24/0x34
x86 already gets this correct. 32-bit arm gets away with it because there is
no PTE_SPECIAL bit in the PTE there and the vm_normal_page fallback path does
the right thing.
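
For readers unfamiliar with the mechanism: the generic mm code decides whether
a PTE points at an ordinary refcounted page before it touches that page's
struct page. The snippet below is a simplified sketch of that decision,
loosely modelled on vm_normal_page(); it is not the real kernel function, and
the name and reduced logic are illustrative only. It shows why a missing
special bit on arm64 leads straight to the print_bad_pte()/bad_page() reports
above at munmap time:

```c
#include <linux/mm.h>

/*
 * Simplified illustration (NOT the real vm_normal_page()): how the mm
 * core decides whether a PTE refers to a normal, refcounted page.
 *
 * With a PTE_SPECIAL bit (x86, arm64), a special PTE is skipped, so
 * zap_pte_range() never touches the struct page behind a foreign
 * mapping.  Without the bit (32-bit ARM), the fallback keys off the
 * VMA flags instead, which is why arm32 got away without this fix.
 */
static struct page *normal_page_sketch(struct vm_area_struct *vma,
				       unsigned long addr, pte_t pte)
{
#ifdef __HAVE_ARCH_PTE_SPECIAL
	if (pte_special(pte))
		return NULL;	/* special mapping: no struct page accounting */
#else
	/* Fallback path: rely on how the VMA itself was set up. */
	if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
		return NULL;
#endif
	return pfn_to_page(pte_pfn(pte));	/* ordinary refcounted page */
}
```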
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Diffstat (limited to 'arch/arm/xen')
-rw-r--r-- | arch/arm/xen/enlighten.c | 2 |
1 file changed, 1 insertion, 1 deletion
```diff
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 6a288c7be49f..85501238b425 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -96,7 +96,7 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
 	struct remap_data *info = data;
 	struct page *page = info->pages[info->index++];
 	unsigned long pfn = page_to_pfn(page);
-	pte_t pte = pfn_pte(pfn, info->prot);
+	pte_t pte = pte_mkspecial(pfn_pte(pfn, info->prot));
 
 	if (map_foreign_page(pfn, info->fgmfn, info->domid))
 		return -EFAULT;
```
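
For context on how the changed line is reached: remap_pte_fn() is a pte_fn_t
callback, and the foreign-mapping path used by privcmd walks the target VMA
with apply_to_page_range(), which invokes the callback once per PTE slot. A
rough sketch of that caller side is below; the wrapper function name and any
remap_data fields beyond those visible in the hunk are assumptions, not a
verbatim copy of enlighten.c:

```c
#include <linux/mm.h>

/*
 * Sketch of the caller side: apply_to_page_range() walks the page
 * tables covering [addr, addr + nr * PAGE_SIZE) and calls
 * remap_pte_fn() for each PTE, so every foreign page is installed
 * with pte_mkspecial() applied after this patch.
 */
static int remap_range_sketch(struct vm_area_struct *vma, unsigned long addr,
			      int nr, struct remap_data *data)
{
	return apply_to_page_range(vma->vm_mm, addr,
				   (unsigned long)nr << PAGE_SHIFT,
				   remap_pte_fn, data);
}
```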