path: root/include/asm-ia64
Age  Commit message  Author
2006-02-17  Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6  (Linus Torvalds)
2006-02-15  [PATCH] add asm-generic/mman.h  (Michael S. Tsirkin)
Make new MADV_REMOVE, MADV_DONTFORK, MADV_DOFORK consistent across all arches. The idea is to make it possible to use them portably even before distros include them in libc headers. Move common flags to asm-generic/mman.h Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il> Cc: Roland Dreier <rolandd@cisco.com> Cc: Badari Pulavarty <pbadari@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-02-15  Pull fix-cpu-possible-map into release branch  (Tony Luck)
2006-02-15  [IA64] ia64: simplify and fix udelay()  (hawkes@sgi.com)
The original ia64 udelay() was simple, but flawed for platforms without synchronized ITCs: a preemption and migration to another CPU during the while-loop likely resulted in too-early termination or very, very lengthy looping. The first fix (now in 2.6.15) broke the delay loop into smaller, non-preemptible chunks, re-enabling preemption between the chunks. This fix is flawed in that the total udelay is computed to be the sum of just the non-preemptible while-loop pieces, i.e., not counting the time spent in the interim preemptible periods. If an interrupt or a migration occurs during one of these interim periods, then that time is invisible and only serves to lengthen the effective udelay(). This new fix backs out the current flawed fix and returns to a simple udelay(), fully preemptible and interruptible. It implements two simple alternative udelay() routines: one a default generic version that uses ia64_get_itc(), and the other an sn-specific version that uses that platform's RTC. Signed-off-by: John Hawkes <hawkes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
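A minimal sketch of the generic ITC-based variant described above, assuming local_cpu_data->cyc_per_usec gives the ITC cycles per microsecond (the sn-specific RTC version is analogous):

    /* Generic udelay(): fully preemptible and interruptible. */
    static void ia64_itc_udelay(unsigned long usecs)
    {
            unsigned long start = ia64_get_itc();
            unsigned long end   = start + usecs * local_cpu_data->cyc_per_usec;

            while (time_before(ia64_get_itc(), end))
                    cpu_relax();
    }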
2006-02-15  [IA64-SGI] enforce proper ordering of callouts by XPC  (Dean Nelson)
Fix XPC so that it does not deliver any messages until the connected callout has returned, and so that the disconnected callout cannot occur before the disconnecting callout has returned. Signed-off-by: Dean Nelson <dcn@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-15  [IA64-SGI] fix the size of __sn_cnodeid_to_nasid  (Dean Roe)
The __sn_cnodeid_to_nasid array was incorrectly sized at MAX_NUMNODES. On a large system, this array could overflow. The following patch corrects this by defining it to MAX_COMPACT_NODES. Signed-off-by: Dean Roe <roe@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-15  [IA64] remove obsolete corporate address  (Jes Sorensen)
Remove obsolete SGI address Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-15  [IA64-SGI] sn2 minor fixes and cleanups  (Jes Sorensen)
General SN2 code cleanup: - Do not initialize global variables to zero - Use kzalloc instead of kmalloc+memset - Check kmalloc return values - Do not obfuscate spin lock calls - Remove some unused code - Various formatting cleanups Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
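As an illustration of the kmalloc+memset to kzalloc conversion mentioned above (the variable name is illustrative):

    /* Before: allocate, then zero by hand. */
    info = kmalloc(sizeof(*info), GFP_KERNEL);
    if (!info)
            return -ENOMEM;
    memset(info, 0, sizeof(*info));

    /* After: zeroed allocation, return value still checked. */
    info = kzalloc(sizeof(*info), GFP_KERNEL);
    if (!info)
            return -ENOMEM;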
2006-02-14  [PATCH] madvise MADV_DONTFORK/MADV_DOFORK  (Michael S. Tsirkin)
Currently, copy-on-write may change the physical address of a page even if the user requested that the page be pinned in memory (either by mlock or by get_user_pages). This happens if the process forks meanwhile, and the parent writes to that page. As a result, the page is orphaned: in the case of get_user_pages, the application will never see any data that hardware DMAs into this page after the COW. In the case of mlock'd memory, the parent is not getting the realtime/security benefits of mlock. In particular, this affects the InfiniBand modules, which do DMA from and into user pages all the time. This patch adds madvise options to control whether a memory range is inherited across fork. Useful e.g. when hardware is doing DMA from/into these pages. Could also be useful to an application wanting to speed up its forks by cutting large areas out of consideration. Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il> Acked-by: Hugh Dickins <hugh@veritas.com> Cc: Michael Kerrisk <mtk-manpages@gmx.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
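A userspace sketch of the new advice values (the buffer name and length are illustrative):

    #include <sys/mman.h>

    /* Keep a registered DMA buffer out of the child after fork(), so a
     * copy-on-write in the parent cannot move the pinned physical pages. */
    if (madvise(dma_buf, dma_len, MADV_DONTFORK) != 0)
            perror("madvise(MADV_DONTFORK)");

    /* Later, make the range inheritable across fork() again. */
    madvise(dma_buf, dma_len, MADV_DOFORK);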
2006-02-14  [IA64] Count disabled cpus as potential hot-pluggable CPUs  (Ashok Raj)
Have a facility to account for potentially hot-pluggable CPUs. ACPI doesn't give a deterministic method to find hot-pluggable CPUs, hence we use two methods to assist:
 - The BIOS can mark potentially hot-pluggable CPUs as disabled in the MADT tables.
 - The user can specify the number of hot-pluggable CPUs via the parameter additional_cpus=X.
The option is enabled only if ACPI_CONFIG_HOTPLUG_CPU=y, which enables the physical hotplug option. Without it, the user can still use logical onlining and offlining of CPUs by enabling CONFIG_HOTPLUG_CPU=y. Adds more bits to cpu_possible_map for potentially hot-pluggable cpus. Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-09  Pull new-syscalls into release branch  (Tony Luck)
2006-02-08  [IA64] unshare system call registration for ia64  (Janak Desai)
Registers the unshare system call for the ia64 architecture. Reserves space for ppoll and pselect, and adds unshare at system call number 1296. Signed-off-by: Janak Desai <janak@us.ibm.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
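The registration boils down to a new syscall number in include/asm-ia64/unistd.h plus the matching entry in the syscall table; a sketch of the unistd.h side, using the number quoted above:

    /* include/asm-ia64/unistd.h (sketch); the slots just below this
     * number are the ones reserved for ppoll and pselect. */
    #define __NR_unshare		1296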
2006-02-06  Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6  (Linus Torvalds)
2006-02-06  [IA64] add syscall entry for *at()  (Chen, Kenneth W)
Wire up the ia64 syscalls for *at() functions. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-06  [IA64-SGI] Shub2 BTE address fix  (Russ Anderson)
After converting the cpu physical address to shub2 physical addressing, the address was run through TO_PHYS() which clobbered a high node offset bit causing the BTE to fail on shub2 nodes with large memory. This fix corrects that problem. Signed-off-by: Russ Anderson (rja@sgi.com) Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-03  [PATCH] ia64: drop arch-specific IDE MAX_HWIFS definition  (Bjorn Helgaas)
There's no reason MAX_HWIFS needs to be ia64-specific, so set MAX_HWIFS from CONFIG_IDE_MAX_HWIFS. This reduces the default from 10 to 4, but I don't think that's a problem. Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Acked-by: Bartlomiej Zolnierkiewicz <B.Zolnierkiewicz@elka.pw.edu.pl> Cc: Alan Cox <alan@lxorguk.ukuu.org.uk> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
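With the ia64 override removed, the generic fallback in linux/ide.h applies; roughly:

    /* Generic definition (include/linux/ide.h); ia64 no longer overrides it. */
    #ifndef MAX_HWIFS
    # define MAX_HWIFS	CONFIG_IDE_MAX_HWIFS	/* Kconfig default: 4 */
    #endif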
2006-02-03  [PATCH] Export cpu topology in sysfs  (Zhang, Yanmin)
The patch implements cpu topology export via sysfs. Items (attributes) are similar to /proc/cpuinfo:
 1) /sys/devices/system/cpu/cpuX/topology/physical_package_id: the physical package id of cpu X;
 2) /sys/devices/system/cpu/cpuX/topology/core_id: the core id of cpu X;
 3) /sys/devices/system/cpu/cpuX/topology/thread_siblings: the thread siblings of cpu X within the same core;
 4) /sys/devices/system/cpu/cpuX/topology/core_siblings: the siblings of cpu X within the same physical package.
To implement it in an architecture-neutral way, a new source file, drivers/base/topology.c, exports these attributes. If an architecture wants to support this feature, it just needs to implement four defines, typically in include/asm-XXX/topology.h:
 #define topology_physical_package_id(cpu)
 #define topology_core_id(cpu)
 #define topology_thread_siblings(cpu)
 #define topology_core_siblings(cpu)
The type of **_id is int. The type of siblings is cpumask_t. To be consistent across all architectures, the four attributes should have default values if their values are unavailable. The rules are:
 1) physical_package_id: if the cpu has no physical package id, -1 is the default value.
 2) core_id: if the cpu doesn't support multi-core, its core id is 0.
 3) thread_siblings: just includes itself, if the cpu doesn't support HT/multi-threading.
 4) core_siblings: just includes itself, if the cpu doesn't support multi-core and HT/multi-threading.
So be careful when declaring the four defines in include/asm-XXX/topology.h. If an attribute isn't defined on an architecture, it won't be exported. Thanks to Nathan, Greg, Andi, Paul and Venki. The patch provides defines for i386/x86_64/ia64. Signed-off-by: Zhang, Yanmin <yanmin.zhang@intel.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
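A sketch of what the ia64 defines plausibly look like, assuming cpu_data(cpu) exposes socket_id and core_id fields and that cpu_sibling_map/cpu_core_map exist on SMP builds:

    /* include/asm-ia64/topology.h (sketch) */
    #ifdef CONFIG_SMP
    #define topology_physical_package_id(cpu)	(cpu_data(cpu)->socket_id)
    #define topology_core_id(cpu)		(cpu_data(cpu)->core_id)
    #define topology_thread_siblings(cpu)	(cpu_sibling_map[cpu])
    #define topology_core_siblings(cpu)		(cpu_core_map[cpu])
    #endif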
2006-02-02  [IA64-SGI] include/asm-ia64/sn/intr.h more sn2 housekeeping  (Jes Sorensen)
Housekeeping: eliminate unneeded parentheses in macro defines. Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-02  [IA64] avoid broken SAL_CACHE_FLUSH implementations  (Bjorn Helgaas)
If SAL_CACHE_FLUSH drops interrupts, complain about it and fall back to using PAL_CACHE_FLUSH instead. This is to work around a defect in HP rx5670 firmware: when an interrupt occurs during SAL_CACHE_FLUSH, SAL drops the interrupt but leaves it marked "in-service", which leaves the interrupt (and others of equal or lower priority) masked. Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-02  [IA64] remove stale comments in asm/system.h  (Chen, Kenneth W)
With the recent optimization made to the wrap_mmu_context function, we no longer hold tasklist_lock when wrapping the context id. The comments in asm/system.h must have fallen through the cracks earlier; remove the stale comments. I believe it is still beneficial to keep the runqueue lock dropped across the context switch, so leave __ARCH_WANT_UNLOCKED_CTXSW on. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-26  [IA64-SGI] Add PROM feature set for device flush list  (Prarit Bhargava)
Introduce PRF_DEVICE_FLUSH_LIST flag for older PROMs. Signed-off-by: Prarit Bhargava <prarit@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-24  [IA64-SGI] add sn_feature_sets bit  (Dean Roe)
SGI's PROM has added a new feature which avoids an Altix-specific MCA that can occur with excessive use of ia64_pal_cache_flush. This patch adds the #define to sn_feature_sets.h to reflect that the bit is taken. Signed-off-by: Dean Roe <roe@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-17  [IA64] Fix bug in ia64 specific down() function  (Zoltan Menyhart)
Chen, Kenneth W wrote: > The memory order semantics for include/asm-ia64/semaphore.h:down() > doesn't look right. It is using atomic_dec_return, which eventually > translate into ia64_fetch_and_add() that uses release semantics. > Shouldn't it use acquire semantics? Use ia64_fetchadd() instead of atomic_dec_return() Acked-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
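A sketch of the corrected down(), assuming ia64_fetchadd() returns the pre-decrement value and takes acq/rel as its ordering argument:

    static inline void down(struct semaphore *sem)
    {
            might_sleep();
            /* Decrement with acquire semantics; an old value <= 0 means contended. */
            if (ia64_fetchadd(-1, &sem->count.counter, acq) < 1)
                    __down(sem);
    }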
2006-01-17  [IA64] Zonelists for nodes without cpus  (Jack Steiner)
If a node runs out of memory, ensure that memory on nodes w/o cpus is used before using memory on nodes with cpus. Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-17  [IA64-SGI] sn2 mutex conversion  (Jes Sorensen)
Migrate sn2 code to use mutex and completion events rather than semaphores. Signed-off-by: Jes Sorensen <jes@sgi.com> Acked-by: Dean Nelson <dcn@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-16  Pull perfmon-montecito into release branch  (Tony Luck)
2006-01-16  [IA64] Cleanup of arch/ia64/sn and include/asm-ia64/sn  (Prarit Bhargava)
Replace uintX_t declarations with uX declarations. Replace intX_t declarations with sX declarations. Signed-off-by: Prarit Bhargava <prarit@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-16  [IA64] pal cache flush patch  (Xu, Anthony)
The PAL spec has changed since 2002 (the new SDM can be downloaded from http://developer.intel.com/design/itanium/manuals/iiasdmanual.htm): all PAL calls should be invoked with psr.ic=1, and it is the caller's responsibility to handle possible TLB misses. ia64_pal_cache_flush was written according to the old spec and is obsolete; this patch makes ia64_pal_cache_flush conform to the new spec. Signed-off-by: Anthony Xu <anthony.xu@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-16  [IA64] Perfmon for Montecito  (Stephane Eranian)
Add Montecito PMU description table for perfmon2 Signed-off-by: Stephane Eranian <eranian@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-14  [PATCH] Altix: ioc3 serial support  (Patrick Gefre)
Add driver support for a 2 port PCI IOC3-based serial card on Altix boxes: This is a re-submission. On the original submission I was asked to organize the code so that the MIPS ioc3 ethernet and serial parts could be used with this driver. Stanislaw Skowronek was kind enough to provide the shim layer for this - thanks Stanislaw. This patch includes the shim layer and the Altix PCI ioc3 serial driver. The MIPS merged ioc3 ethernet and serial support is forthcoming. Signed-off-by: Patrick Gefre <pfg@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-13  [IA64] prevent accidental modification of args in jprobe handler  (Zhang Yanmin)
When a jprobe is hit, the function parameters of the original function should be saved before the jprobe handler is executed, and restored after the jprobe handler is executed, because the jprobe handler might change the register values due to tail call optimization by gcc. Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com> Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-13  [IA64] Handle debug traps in fsys mode  (Jason Uhlenkott)
We need to handle debug traps in fsys mode non-fatally. They can happen now that we have fsyscalls which contain probe instructions. Signed-off-by: Jason Uhlenkott <jasonuhl@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-13  [IA64-SGI] Fix sn_flush_device_kernel & spinlock initialization  (Prarit Bhargava)
This patch separates the sn_flush_device_list struct into kernel and common (both kernel and PROM accessible) structures. As it was, if the size of a spinlock_t changed (due to additional CONFIG options, etc.) the sal call which populated the sn_flush_device_list structs would erroneously write data (and cause memory corruption and/or a panic). This patch does the following: 1. Removes sn_flush_device_list and adds sn_flush_device_common and sn_flush_device_kernel. 2. Adds a new SAL call to populate a sn_flush_device_common struct per device, not per widget as previously done. 3. Correctly initializes each device's sn_flush_device_kernel spinlock_t struct (before it was only doing each widget's first device). Signed-off-by: Prarit Bhargava <prarit@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-13  [IA64-SGI] Altix BTE error handling fixes  (Russ Anderson)
Altix (shub2) pushes the BTE clean-up into SAL. This patch correctly interfaces with the now implemented SAL call. It also fixes a bug when delaying clean-up to allow busy BTEs to complete (or error out). Signed-off-by: Russ Anderson <rja@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-13  [IA64-SGI] move xpc.h to include/asm-ia64/sn (cleanup)  (Dean Nelson)
Cleanup a few items after moving xpc.h from arch/ia64/sn/kernel to include/asm-ia64/sn. Signed-off-by: Dean Nelson <dcn@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-13  [IA64-SGI] move xpc.h to include/asm-ia64/sn  (Dean Nelson)
Move xpc.h from arch/ia64/sn/kernel to include/asm-ia64/sn without change. Signed-off-by: Dean Nelson <dcn@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-13  [IA64-SGI] ensure XPC disengage request is processed  (Dean Nelson)
This patch fixes a problem in XPC disengage processing whereby it was not seeing the request to disengage from a remote partition, so the disengage wasn't happening. The disengagement is supposed to take place while an XPC channel is disconnecting, and should be completed before the channel is declared disconnected. Signed-off-by: Dean Nelson <dcn@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-12  [PATCH] ia64: task_pt_regs()  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-12  [PATCH] ia64: task_thread_info()  (Al Viro)
On ia64, thread_info is at a constant offset from task_struct and the stack is embedded into the same beast. Set __HAVE_THREAD_FUNCTIONS and make task_thread_info() just add a constant. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
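A rough sketch of the resulting definitions, where IA64_TASK_SIZE is the constant offset of thread_info past the task_struct:

    #define __HAVE_THREAD_FUNCTIONS
    /* thread_info sits at a fixed offset past the task_struct, so the
     * lookup is plain pointer arithmetic rather than an extra dereference. */
    #define task_thread_info(tsk) \
            ((struct thread_info *)((char *)(tsk) + IA64_TASK_SIZE))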
2006-01-12  [PATCH] scheduler cache-hot-autodetect  (akpm@osdl.org)
From: Ingo Molnar <mingo@elte.hu>

This is the latest version of the scheduler cache-hot-auto-tune patch.

The first problem was that detection time scaled with O(N^2), which is unacceptable on larger SMP and NUMA systems. To solve this:
 - I've added a 'domain distance' function, which is used to cache measurement results. Each distance is only measured once. This means that e.g. on NUMA distances of 0, 1 and 2 might be measured, on HT distances 0 and 1, and on SMP distance 0 is measured. The code walks the domain tree to determine the distance, so it automatically follows whatever hierarchy an architecture sets up. This cuts down on the boot time significantly and removes the O(N^2) limit. The only assumption is that migration costs can be expressed as a function of domain distance - this covers the overwhelming majority of existing systems, and is a good guess even for more asymmetric systems.

[ People hacking systems that have asymmetries that break this assumption (e.g. different CPU speeds) should experiment a bit with the cpu_distance() function. Adding a ->migration_distance factor to the domain structure would be one possible solution - but let's first see the problem systems, if they exist at all. Let's not overdesign. ]

Another problem was that only a single cache-size was used for measuring the cost of migration, and most architectures didn't set that variable up. Furthermore, a single cache-size does not fit NUMA hierarchies with L3 caches and does not fit HT setups, where different CPUs will often have different 'effective cache sizes'. To solve this problem:
 - Instead of relying on a single cache-size provided by the platform and sticking to it, the code now auto-detects the 'effective migration cost' between two measured CPUs by iterating through a wide range of cache sizes. The code searches for the maximum migration cost, which occurs when the working set of the test workload falls just below the 'effective cache size'. I.e. a real-life optimized search is done for the maximum migration cost, between two real CPUs. This, amongst other things, has the positive effect that if e.g. two CPUs share an L2/L3 cache, a different (and accurate) migration cost will be found than between two CPUs on the same system that don't share any caches. (The reliable measurement of migration costs is tricky - see the source for details.)

Furthermore I've added various boot-time options to override/tune migration behavior. Firstly, there's a blanket override for autodetection: migration_cost=1000,2000,3000 will override the depth 0/1/2 values with 1msec/2msec/3msec values. Secondly, there's a global factor that can be used to increase (or decrease) the autodetected values: migration_factor=120 will increase the autodetected values by 20%. This option is useful to tune things in a workload-dependent way - e.g. if a workload is cache-insensitive then CPU utilization can be maximized by specifying migration_factor=0.

I've tested the autodetection code quite extensively on x86, on 3 P3/Xeon/2MB, and the autodetected values look pretty good:

Dual Celeron (128K L2 cache):
 ---------------------
 migration cost matrix (max_cache_size: 131072, cpu: 467 MHz):
 ---------------------
           [00]    [01]
 [00]:       -    1.7(1)
 [01]:    1.7(1)     -
 ---------------------
 cacheflush times [2]: 0.0 (0) 1.7 (1784008)
 ---------------------
Here the slow memory subsystem dominates system performance, and even though caches are small, the migration cost is 1.7 msecs.

Dual HT P4 (512K L2 cache):
 ---------------------
 migration cost matrix (max_cache_size: 524288, cpu: 2379 MHz):
 ---------------------
           [00]    [01]    [02]    [03]
 [00]:       -    0.4(1)  0.0(0)  0.4(1)
 [01]:    0.4(1)    -     0.4(1)  0.0(0)
 [02]:    0.0(0)  0.4(1)    -     0.4(1)
 [03]:    0.4(1)  0.0(0)  0.4(1)    -
 ---------------------
 cacheflush times [2]: 0.0 (33900) 0.4 (448514)
 ---------------------
Here it can be seen that there is no migration cost between two HT siblings (CPU#0/2 and CPU#1/3 are separate physical CPUs). A fast memory system makes inter-physical-CPU migration pretty cheap: 0.4 msecs.

8-way P3/Xeon [2MB L2 cache]:
 ---------------------
 migration cost matrix (max_cache_size: 2097152, cpu: 700 MHz):
 ---------------------
           [00]    [01]    [02]    [03]    [04]    [05]    [06]    [07]
 [00]:       -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
 [01]:    19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
 [02]:    19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
 [03]:    19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1)
 [04]:    19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1)
 [05]:    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1)
 [06]:    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1)
 [07]:    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -
 ---------------------
 cacheflush times [2]: 0.0 (0) 19.2 (19281756)
 ---------------------
This one has huge caches and a relatively slow memory subsystem - so the migration cost is 19 msecs.

Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Cc: <wilder@us.ibm.com> Signed-off-by: John Hawkes <hawkes@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-12  [PATCH] sched: add cacheflush() asm  (Ingo Molnar)
Add per-arch sched_cacheflush() which is a write-back cacheflush used by the migration-cost calibration code at bootup time. Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
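On ia64 this plausibly maps onto a SAL cache flush; a hedged sketch, assuming ia64_sal_cache_flush(3) writes back and invalidates both the instruction and data caches:

    /* include/asm-ia64/system.h (sketch) */
    static inline void sched_cacheflush(void)
    {
            ia64_sal_cache_flush(3);	/* flush both i-cache and d-cache */
    }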
2006-01-10  [PATCH] kprobes: fix build breakage  (Ananth N Mavinakayanahalli)
The following patch (against 2.6.15-rc5-mm3) fixes a kprobes build break due to changes introduced in the kprobe locking in 2.6.15-rc5-mm3. In addition, the patch reverts the open-coding of kprobe_mutex. Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Acked-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-10  [PATCH] kprobes: arch_remove_kprobe  (Anil S Keshavamurthy)
Currently arch_remove_kprobe() is only implemented/required for x86_64 and powerpc. All other architectures, like IA64, i386 and sparc64, implement a dummy function which is called from the arch-independent kprobes.c file. This patch removes the dummy functions and replaces them with #define arch_remove_kprobe(p, s) do { } while(0) Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-10  [PATCH] kprobes: changed from using spinlock to mutex  (Anil S Keshavamurthy)
Since the kprobes runtime exception handlers are now lock-free, as this code path now uses RCU to walk through the list, there is no need for register/unregister{_kprobe} to use spin_{lock/unlock}_irq{save/restore}. Serialization during registration/unregistration is now possible using just a mutex. In the above process, this patch also fixes a minor memory leak for x86_64 and powerpc. Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-10  [PATCH] kprobes: cleanup include/asm/kprobes.h  (Anil S Keshavamurthy)
The arch-specific kprobes.h files never get included when CONFIG_KPROBES is turned off. Hence the check for CONFIG_KPROBES is not appropriate in these arch-specific kprobes.h files. Also, the function kprobes_exception_notify() defined below is not needed when CONFIG_KPROBES is off. Compile tested for both CONFIG_KPROBES=y and n. Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-10  [PATCH] Generic ioctl.h  (Brian Gerst)
Most arches copied the i386 ioctl.h. Combine them into a generic header. Signed-off-by: Brian Gerst <bgerst@didntduck.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-09  [PATCH] mutex subsystem, add default include/asm-*/mutex.h files  (Arjan van de Ven)
add the per-arch mutex.h files for the remaining architectures. We default to asm-generic/mutex-dec.h, because that performs quite well on most arches. Arches that do not have atomic decrement/increment instructions should switch to mutex-xchg.h instead. Arches can also provide their own implementation for the mutex fastpath primitives. Signed-off-by: Arjan van de Ven <arjan@infradead.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
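For ia64, the new header plausibly reduces to pulling in the generic decrement-based fastpath:

    /* include/asm-ia64/mutex.h (sketch) */
    #include <asm-generic/mutex-dec.h>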
2006-01-09  [PATCH] mutex subsystem, add atomic_xchg() to all arches  (Ingo Molnar)
add atomic_xchg() to all the architectures. Needed by the new mutex code. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Arjan van de Ven <arjan@infradead.org>
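The per-arch addition is typically a thin wrapper over the existing xchg(); roughly:

    /* include/asm-ia64/atomic.h (sketch) */
    #define atomic_xchg(v, new)	(xchg(&((v)->counter), new))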
2006-01-08  [PATCH] /dev/mem: validate mmap requests  (Bjorn Helgaas)
Add a hook so architectures can validate /dev/mem mmap requests. This is analogous to validation we already perform in the read/write paths. The identity mapping scheme used on ia64 requires that each 16MB or 64MB granule be accessed with exactly one attribute (write-back or uncacheable). This avoids "attribute aliasing", which can cause a machine check. Sample problem scenario: - Machine supports VGA, so it has uncacheable (UC) MMIO at 640K-768K - efi_memmap_init() discards any write-back (WB) memory in the first granule - Application (e.g., "hwinfo") mmaps /dev/mem, offset 0 - hwinfo receives UC mapping (the default, since memmap says "no WB here") - Machine check abort (on chipsets that don't support UC access to WB memory, e.g., sx1000) In the scenario above, the only choices are - Use WB for hwinfo mmap. Can't do this because it causes attribute aliasing with the UC mapping for the VGA MMIO space. - Use UC for hwinfo mmap. Can't do this because the chipset may not support UC for that region. - Disallow the hwinfo mmap with -EINVAL. That's what this patch does. Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: Hugh Dickins <hugh@veritas.com> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
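A hedged sketch of how such a validation hook fits into the /dev/mem mmap path; the hook name and exact signature here are illustrative rather than the merged interface:

    /* drivers/char/mem.c (sketch) */
    static int mmap_mem(struct file *file, struct vm_area_struct *vma)
    {
            size_t size = vma->vm_end - vma->vm_start;

            /* Let the architecture veto mappings that would cause, e.g.,
             * attribute aliasing on ia64. */
            if (!valid_mmap_phys_addr_range(vma->vm_pgoff << PAGE_SHIFT, size))
                    return -EINVAL;

            if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
                                size, vma->vm_page_prot))
                    return -EAGAIN;
            return 0;
    }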
2006-01-08  [PATCH] remove gcc-2 checks  (Andrew Morton)
Remove various things which were checking for gcc-1.x and gcc-2.x compilers. From: Adrian Bunk <bunk@stusta.de> Some documentation updates and removal of some code paths for gcc < 3.2. Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>