From 86e587623a0ca8426267dad8d3eaebd6fc2d00f1 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Ville=20Syrj=C3=A4l=C3=A4?=
Date: Sun, 13 Apr 2014 12:45:03 +0300
Subject: x86/gpu: Fix sign extension issue in Intel graphics stolen memory quirks
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Have the KB(), MB(), GB() macros produce unsigned longs to avoid
unintended sign extension issues with the gen2 memory size
detection.

What happens is first the uint8_t returned by
read_pci_config_byte() gets promoted to an int which gets
multiplied by another int from the MB() macro, and finally the
result gets sign extended to size_t. This shouldn't be a problem
in practice, though, as all affected gen2 platforms are 32-bit
AFAIK, so size_t will be 32 bits.

Reported-by: Bjorn Helgaas
Suggested-by: H. Peter Anvin
Signed-off-by: Ville Syrjälä
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1397382303-17525-1-git-send-email-ville.syrjala@linux.intel.com
Signed-off-by: Ingo Molnar
---
 arch/x86/kernel/early-quirks.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'arch')

diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
index b0cc3809723d..6e2537c32190 100644
--- a/arch/x86/kernel/early-quirks.c
+++ b/arch/x86/kernel/early-quirks.c
@@ -240,7 +240,7 @@ static u32 __init intel_stolen_base(int num, int slot, int func, size_t stolen_s
 	return base;
 }
 
-#define KB(x) ((x) * 1024)
+#define KB(x) ((x) * 1024UL)
 #define MB(x) (KB (KB (x)))
 #define GB(x) (MB (KB (x)))
 
-- 
cgit v1.2.3
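
[Editor's note] The sketch below is a minimal, standalone illustration of the promotion and
sign-extension hazard the commit message describes; it is not the kernel's actual gen2 quirk
code. The names gms, KB_OLD()/MB_OLD(), KB_NEW()/MB_NEW() and the 32 MB multiplier are
hypothetical stand-ins chosen for the example, and the "old" variant deliberately reproduces
the pre-fix macro so the difference made by the 1024UL suffix is visible on an LP64 system.

#include <stdint.h>
#include <stdio.h>

/* Old-style macros, like before the fix: all arithmetic is done as (signed) int. */
#define KB_OLD(x) ((x) * 1024)
#define MB_OLD(x) (KB_OLD(KB_OLD(x)))

/* Fixed-style macros: 1024UL pushes the whole expression into unsigned long. */
#define KB_NEW(x) ((x) * 1024UL)
#define MB_NEW(x) (KB_NEW(KB_NEW(x)))

int main(void)
{
	/* Hypothetical config byte; stands in for read_pci_config_byte(). */
	uint8_t gms = 96;

	/*
	 * With the old macro, gms is promoted to int and multiplied by the
	 * int 33554432 (32 MB).  96 * 32 MB = 3 GB does not fit in a 32-bit
	 * int: the signed multiplication overflows (undefined behaviour),
	 * in practice wraps to a negative int, and that negative value is
	 * then sign-extended when converted to a 64-bit size_t, yielding a
	 * huge bogus size.
	 */
	size_t old_size = gms * MB_OLD(32);

	/*
	 * With the fixed macro the expression is evaluated as unsigned long,
	 * so there is no negative intermediate and no sign extension.
	 */
	size_t new_size = gms * MB_NEW(32);

	printf("old macros: %zu bytes\n", old_size);  /* enormous value on LP64 */
	printf("new macros: %zu bytes\n", new_size);  /* 3221225472 (3 GB) */

	return 0;
}

On the 32-bit gen2 platforms the commit mentions, size_t is also 32 bits, so the conversion
cannot widen and the old macros happened to be harmless; the UL suffix simply makes the
macros safe regardless of the width of size_t.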