| author | Linus Torvalds <torvalds@linux-foundation.org> | 2013-02-19 19:09:42 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2013-02-19 19:09:42 -0800 |
| commit | a57ed93600f2dab64e859d524c3320fe0922e99d (patch) | |
| tree | e490392e4b438401b0690a2a7781bb5c88feda34 /arch/x86/mm/init_64.c | |
| parent | 5800700f66678ea5c85e7d62b138416070bf7f60 (diff) | |
| parent | 5e2a044daf0c6f897eb69de931e3b29020e874a9 (diff) | |
Merge branch 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86/asm changes from Ingo Molnar:
"The biggest change (by line count) is the unification of the XOR code
and then the introduction of an additional SSE based XOR assembly
method.
The other bigger change is the head_32.S rework/cleanup by Borislav
Petkov.
Last but not least there's the usual laundry list of small but
dangerous (and hopefully perfectly tested) changes to subtle low level
x86 code, plus cleanups."
* 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86, head_32: Give the 6 label a real name
x86, head_32: Remove second CPUID detection from default_entry
x86: Detect CPUID support early at boot
x86, head_32: Remove i386 pieces
x86: Require MOVBE feature in cpuid when we use it
x86: Enable ARCH_USE_BUILTIN_BSWAP
x86/xor: Add alternative SSE implementation only prefetching once per 64-byte line
x86/xor: Unify SSE-base xor-block routines
x86: Fix a typo
x86/mm: Fix the argument passed to sync_global_pgds()
x86/mm: Convert update_mmu_cache() and update_mmu_cache_pmd() to functions
ix86: Tighten asmlinkage_protect() constraints
Diffstat (limited to 'arch/x86/mm/init_64.c')
-rw-r--r-- | arch/x86/mm/init_64.c | 4 |
1 files changed, 2 insertions, 2 deletions
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 75c9a6a59697..d6eeead43758 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -605,7 +605,7 @@ kernel_physical_mapping_init(unsigned long start,
 	}
 
 	if (pgd_changed)
-		sync_global_pgds(addr, end);
+		sync_global_pgds(addr, end - 1);
 
 	__flush_tlb_all();
 
@@ -984,7 +984,7 @@ vmemmap_populate(struct page *start_page, unsigned long size, int node)
 		}
 
 	}
-	sync_global_pgds((unsigned long)start_page, end);
+	sync_global_pgds((unsigned long)start_page, end - 1);
 
 	return 0;
 }
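Both hunks change the second argument of sync_global_pgds() from an exclusive end address to end - 1. The sketch below is a minimal user-space illustration of why that matters, not kernel code: it assumes, as the fix implies, that sync_global_pgds() treats its second argument as the last address inside the range and steps through it with an inclusive loop in PGDIR_SIZE increments. The PGDIR_SHIFT value mirrors x86-64, and the simplified pgd_index() and count_synced() helpers exist only for this illustration.

```c
/*
 * Hypothetical user-space sketch of the off-by-one fixed above.
 * Assumption: sync_global_pgds(start, end) treats 'end' as the last
 * address *inside* the range, i.e. it effectively loops
 *   for (addr = start; addr <= end; addr += PGDIR_SIZE)
 * so passing an exclusive end that lands exactly on a PGD boundary
 * touches one PGD slot too many.  Assumes a 64-bit build.
 */
#include <stdio.h>

#define PGDIR_SHIFT	39			/* x86-64: each PGD entry maps 512 GB */
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)

/* Simplified for illustration; the real macro also masks the index. */
static unsigned long pgd_index(unsigned long addr)
{
	return addr >> PGDIR_SHIFT;
}

/* Count how many PGD slots an inclusive [start, end] walk would visit. */
static void count_synced(unsigned long start, unsigned long end_inclusive)
{
	unsigned long n = 0;

	for (unsigned long addr = start; addr <= end_inclusive; addr += PGDIR_SIZE)
		n++;

	printf("last arg %#lx -> %lu PGD slot(s), last index %lu\n",
	       end_inclusive, n, pgd_index(end_inclusive));
}

int main(void)
{
	unsigned long start = 0;
	unsigned long end_exclusive = PGDIR_SIZE;	/* range covers exactly one PGD entry */

	count_synced(start, end_exclusive);		/* old-style call: 2 slots, one too many */
	count_synced(start, end_exclusive - 1);		/* fixed call: 1 slot, as intended */
	return 0;
}
```

With start at 0 and an exclusive end of exactly PGDIR_SIZE, the old-style call walks two PGD slots even though only one is covered by the mapping; passing end - 1 keeps the walk to the single intended slot, which is what both call sites in the hunks above now do.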