o Move all files from sparc64/kernel/ to sparc/kernel
- rename as appropriate
o Update sparc/Makefile to the changes
o Update sparc/kernel/Makefile to include the sparc64 files
NOTE: This commit changes link order on sparc64!
Link order had to change for either sparc32 or sparc64.
Assuming sparc64 sees more testing than sparc32, change the link
order on sparc64, where issues will be caught faster.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
So that we can profile code even in a local_irq_disable() section,
only write 14 (instead of 15) into the %pil register to disable IRQs.
This allows PIL level 15 to serve as a pseudo NMI.
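A minimal sketch of the idea (the helper name below is illustrative,
not the actual kernel function, and the constant is written out for
clarity):

	/* Raise PIL to 14 instead of 15, masking normal device and
	 * timer interrupts while still letting level-15 interrupts
	 * (the profiling pseudo-NMI) through. */
	static inline void sketch_irq_disable(void)
	{
		__asm__ __volatile__("wrpr %0, %%pil"
				     : /* no outputs */
				     : "i" (14)
				     : "memory");
	}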
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The kernel always executes in the TSO memory model now,
so none of this stuff is necessary any more.
With helpful feedback from Nick Piggin.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6
Conflicts:
sound/core/memalloc.c
|
|
Based upon a report by Meelis Roos.
Any function call can try to access the current
thread register via the _mcount hooks when the kernel
is built with -pg (via ftrace or STACK_DEBUG).
That can't be set up properly very early on during
the bootup of other cpus on sun4u and some early
sun4v systems.
So add notrace markers to these specific functions, so
that _mcount doesn't get invoked too early.
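As a sketch of the annotation itself (the function name is a
placeholder; notrace comes from linux/compiler.h):

	#include <linux/compiler.h>

	/* notrace prevents gcc from emitting the _mcount call for this
	 * function, so it can safely run before the current-thread
	 * register is usable on a freshly started cpu. */
	static void notrace sketch_early_cpu_setup(void)
	{
		/* early per-cpu initialization would go here */
	}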
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
1) Declare pci_poke_* in pci_impl.h
2) of_create_pci_dev() should be static
3) ->setup_msi_irq() wants an unsigned int pointer not a plain
int one
4) Remove return of a void value expression in arch_teardown_msi_irq()
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
1) edge8 tables should be static
2) add vis_emul() extern decl. to asm/visasm.h
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The way to do this varies by platform type and the exact memory
controller the cpu uses.
For Spitfire cpus we currently just use prom_getunumber() and hope
that works.
For Cheetah cpus we have a memory controller driver that can
compute this information.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Back when we still used smp_report_regs(), you would call
__show_regs() if you specifically wanted only the current cpu's
registers dumped, and show_regs() otherwise, which also invoked
smp_report_regs().
Now that we have killed off smp_report_regs() there is no longer any
reason to have these two routines; show_regs() alone is sufficient.
Also kill off a stray declaration of show_regs() in sparc64_ksym.c.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
All the call sites are #if 0'd out and we have a much more
useful global cpu dumping facility these days. smp_report_regs()
is way too verbose to be usable.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
One place is just a comment, the other a conditional, unused
inclusion of linux/kmod.h.
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The offset to the pt_regs area was wrong, so we weren't
looking at the right location for the magic cookie.
A trap frame is composed of a "struct sparc_stackf" followed by
a "struct pt_regs"; the code was using "struct reg_window"
instead of "struct sparc_stackf".
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Now that we have a magic cookie in the pt_regs, we can
properly detect trap frames in stack backtraces.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add 'UL' markers to DCU_* macros.
Declare C functions called from assembler in entry.h
Declare C functions called from within the sparc64 arch
code in include/asm-sparc64/*.h headers as appropriate.
Remove unused routines in traps.c
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
arch/sparc64/kernel/process.c:219:6: warning: symbol '__show_regs' was not declared. Should it be static?
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
1) The trap level wasn't being passed down properly; we need to
move it from %l4 into the correct outgoing arg register.
2) Although the TPC often provides the most direct clue, we
have the caller PC, so we should provide that as well.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
One of the easiest things to isolate is the pid printed in the kernel log.
There was a patch that did this for the arch-independent code; this one does
the same for the arch/xxx files.
It took some time to cross-compile, but hopefully these are all the
printks in arch code.
Signed-off-by: Alexey Dobriyan <adobriyan@openvz.org>
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
It didn't handle that case at all, and now dump_stack()
can be implemented directly as show_stack(current, NULL).
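In other words, roughly:

	void dump_stack(void)
	{
		show_stack(current, NULL);
	}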
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If the kernel OOPSed or BUGed then it probably should be considered as
tainted. Thus, all subsequent OOPSes and SysRq dumps will report the
tainted kernel. This saves a lot of time explaining oddities in the
calltraces.
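The mechanism is essentially a single call in each architecture's
die()/BUG path; a sketch, assuming the taint flag added by this
series is named TAINT_DIE:

	#include <linux/kernel.h>

	static void sketch_mark_oops(void)
	{
		/* Make later oopses and SysRq dumps report 'Tainted'. */
		add_taint(TAINT_DIE);
	}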
Signed-off-by: Pavel Emelianov <xemul@openvz.org>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Added parisc patch from Matthew Wilson -Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
With the advent of kdump, the assumption that the boot CPU when booting a UP
kernel is always the CPU with a particular hardware ID (often 0) (usually
referred to as the BSP on some architectures) is no longer valid. The reason
is that the dump capture kernel boots on the crashed CPU (the CPU that
invoked crash_kexec), which may or may not be that particular CPU.
Move definition of hard_smp_processor_id for the UP case to
architecture-specific code ("asm/smp.h") where it belongs, so that each
architecture can provide its own implementation.
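A sketch of what the UP case in asm/smp.h looks like on most
architectures (returning 0 is just the common default, not universal):

	#ifndef CONFIG_SMP
	/* UP: report the hardware ID of the only cpu; some arches read
	 * this from firmware or a cpu register instead of hardcoding 0. */
	# define hard_smp_processor_id()	0
	#endif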
Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
Cc: "Luck, Tony" <tony.luck@intel.com>
Acked-by: Andi Kleen <ak@suse.de>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Remove includes of <linux/smp_lock.h> where it is not used/needed.
Suggested by Al Viro.
Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc,
sparc64, and arm (all 59 defconfigs).
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This patch moves the die notifier handling to common code. Previously,
various architectures had exactly the same code for it. Note that the new
code is compiled unconditionally; this should be understood as an appeal to
the other architecture maintainers to implement support for it as well (aka
sprinkling a notify_die or two in the proper places).
arm had a notify_die that did something totally different; I renamed it to
arm_notify_die as part of the patch and made it static to the file where it is
declared and used. avr32 used to pass slightly less information through
this interface, and I brought it into line with the other architectures.
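A sketch of the kind of call site being asked for (the event and
signal values here are illustrative):

	#include <linux/kdebug.h>

	/* Give registered die-chain handlers (kgdb, kprobes, ...) first
	 * look at the event; they can swallow it via NOTIFY_STOP. */
	static int sketch_report_die(const char *str, struct pt_regs *regs,
				     long err, int trapnr)
	{
		return notify_die(DIE_OOPS, str, regs, err, trapnr,
				  SIGSEGV) == NOTIFY_STOP;
	}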
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix vmalloc_sync_all bustage]
[bryan.wu@analog.com: fix vmalloc_sync_all in nommu]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: <linux-arch@vger.kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
To add this logic, put the VIS instruction check at the
vis_emul() call site instead of inside vis_emul().
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This is the first in a series of cleanups that will hopefully
allow a seamless attempt at using the generic IRQ handling
infrastructure in the Linux kernel.
Define PIL_DEVICE_IRQ and vector all device interrupts through
there.
Get rid of the ugly pil0_dummy_{bucket,desc}; instead, vector
the timer interrupt directly to a specific handler, since the
timer interrupt is the only event that will be signaled on
PIL 14.
The irq_worklist is now in the per-cpu trap_block[].
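A sketch of the resulting PIL assignments (the numeric value of
PIL_DEVICE_IRQ is an assumption; only the timer's use of PIL 14 is
stated above):

	#define PIL_DEVICE_IRQ	5	/* all device interrupts */
	/* PIL 14 stays dedicated to the timer and is vectored directly
	 * to the timer handler rather than through the generic path. */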
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This makes the debugging information more usable.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This makes debugging things a little bit easier.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The kernel's implementation of notifier chains is unsafe. There is no
protection against entries being added to or removed from a chain while the
chain is in use. The issues were discussed in this thread:
http://marc.theaimsgroup.com/?l=linux-kernel&m=113018709002036&w=2
We noticed that notifier chains in the kernel fall into two basic usage
classes:
"Blocking" chains are always called from a process context
and the callout routines are allowed to sleep;
"Atomic" chains can be called from an atomic context and
the callout routines are not allowed to sleep.
We decided to codify this distinction and make it part of the API. Therefore
this set of patches introduces three new, parallel APIs: one for blocking
notifiers, one for atomic notifiers, and one for "raw" notifiers (which is
really just the old API under a new name). New kinds of data structures are
used for the heads of the chains, and new routines are defined for
registration, unregistration, and calling a chain. The three APIs are
explained in include/linux/notifier.h and their implementation is in
kernel/sys.c.
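A sketch of the blocking flavour of the new API (names as in
include/linux/notifier.h; the chain and callout here are made-up
examples):

	#include <linux/notifier.h>

	static BLOCKING_NOTIFIER_HEAD(example_chain);

	static int example_callout(struct notifier_block *nb,
				   unsigned long event, void *data)
	{
		/* Blocking chains are called from process context, so
		 * sleeping is allowed in the callout. */
		return NOTIFY_OK;
	}

	static struct notifier_block example_nb = {
		.notifier_call	= example_callout,
	};

	static void example_use(void)
	{
		/* Registration and unregistration also require process
		 * context for both atomic and blocking chains. */
		blocking_notifier_chain_register(&example_chain, &example_nb);
		blocking_notifier_call_chain(&example_chain, 0, NULL);
		blocking_notifier_chain_unregister(&example_chain, &example_nb);
	}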
With atomic and blocking chains, the implementation guarantees that the chain
links will not be corrupted and that chain callers will not get messed up by
entries being added or removed. For raw chains the implementation provides no
guarantees at all; users of this API must provide their own protections. (The
idea was that situations may come up where the assumptions of the atomic and
blocking APIs are not appropriate, so it should be possible for users to
handle these things in their own way.)
There are some limitations, which should not be too hard to live with. For
atomic/blocking chains, registration and unregistration must always be done in
a process context since the chain is protected by a mutex/rwsem. Also, a
callout routine for a non-raw chain must not try to register or unregister
entries on its own chain. (This did happen in a couple of places and the code
had to be changed to avoid it.)
Since atomic chains may be called from within an NMI handler, they cannot use
spinlocks for synchronization. Instead we use RCU. The overhead falls almost
entirely in the unregister routine, which is okay since unregistration is much
less frequent than calling a chain.
Here is the list of chains that we adjusted and their classifications. None
of them use the raw API, so for the moment it is only a placeholder.
ATOMIC CHAINS
-------------
arch/i386/kernel/traps.c: i386die_chain
arch/ia64/kernel/traps.c: ia64die_chain
arch/powerpc/kernel/traps.c: powerpc_die_chain
arch/sparc64/kernel/traps.c: sparc64die_chain
arch/x86_64/kernel/traps.c: die_chain
drivers/char/ipmi/ipmi_si_intf.c: xaction_notifier_list
kernel/panic.c: panic_notifier_list
kernel/profile.c: task_free_notifier
net/bluetooth/hci_core.c: hci_notifier
net/ipv4/netfilter/ip_conntrack_core.c: ip_conntrack_chain
net/ipv4/netfilter/ip_conntrack_core.c: ip_conntrack_expect_chain
net/ipv6/addrconf.c: inet6addr_chain
net/netfilter/nf_conntrack_core.c: nf_conntrack_chain
net/netfilter/nf_conntrack_core.c: nf_conntrack_expect_chain
net/netlink/af_netlink.c: netlink_chain
BLOCKING CHAINS
---------------
arch/powerpc/platforms/pseries/reconfig.c: pSeries_reconfig_chain
arch/s390/kernel/process.c: idle_chain
arch/x86_64/kernel/process.c idle_notifier
drivers/base/memory.c: memory_chain
drivers/cpufreq/cpufreq.c cpufreq_policy_notifier_list
drivers/cpufreq/cpufreq.c cpufreq_transition_notifier_list
drivers/macintosh/adb.c: adb_client_list
drivers/macintosh/via-pmu.c sleep_notifier_list
drivers/macintosh/via-pmu68k.c sleep_notifier_list
drivers/macintosh/windfarm_core.c wf_client_list
drivers/usb/core/notify.c usb_notifier_list
drivers/video/fbmem.c fb_notifier_list
kernel/cpu.c cpu_chain
kernel/module.c module_notify_list
kernel/profile.c munmap_notifier
kernel/profile.c task_exit_notifier
kernel/sys.c reboot_notifier_list
net/core/dev.c netdev_chain
net/decnet/dn_dev.c: dnaddr_chain
net/ipv4/devinet.c: inetaddr_chain
It's possible that some of these classifications are wrong. If they are,
please let us know or submit a patch to fix them. Note that any chain that
gets called very frequently should be atomic, because the rwsem read-locking
used for blocking chains is very likely to incur cache misses on SMP systems.
(However, if the chain's callout routines may sleep then the chain cannot be
atomic.)
The patch set was written by Alan Stern and Chandra Seetharaman, incorporating
material written by Keith Owens and suggestions from Paul McKenney and Andrew
Morton.
[jes@sgi.com: restructure the notifier chain initialization macros]
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
Signed-off-by: Jes Sorensen <jes@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Niagara does not implement some of the VIS instructions in
hardware, so we have to emulate them.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Check TLB flush hypervisor calls for errors and report them.
Pass HV_MMU_ALL always for now, we can add back the optimization
to avoid the I-TLB flush later.
Always explicitly page align the virtual address arguments.
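Sketched in C for clarity (the real code is in assembler; the wrapper
name sun4v_demap_page() is hypothetical, while the HV_* constants are
the ones from asm/hypervisor.h):

	/* Page-align the VA, always flush both I- and D-TLB
	 * (HV_MMU_ALL), and report any non-HV_EOK status. */
	static void sketch_flush_one(unsigned long ctx, unsigned long vaddr)
	{
		unsigned long status;

		vaddr &= PAGE_MASK;
		status = sun4v_demap_page(vaddr, ctx, HV_MMU_ALL);
		if (status != HV_EOK)
			printk(KERN_EMERG "TLB flush of ctx[%lx] va[%lx] "
			       "failed, err=%lu\n", ctx, vaddr, status);
	}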
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Should be "Dax" not "Iax".
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Actually make use of the 'limit' we compute.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
It's extremely noisy and causes much grief on slow
consoles with large numbers of cpus.
We'll have to provide this in some saner way in order
to re-enable it.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We're about to seriously die in these cases so it is important
that the messages make it to the console.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
1) Add error return checking for TLB load hypervisor
calls.
2) Don't fallthru to dtlb tsb miss handler from itlb tsb
miss handler, oops.
3) On window fixups, propagate fault information to fixup
handler correctly.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The sibling cpu bringup is extremely fragile. We can only
perform the most basic calls until we take over the trap
table from the firmware/hypervisor on the new cpu.
This means no accesses to %g4, %g5, %g6 since those can't be
TLB translated without our trap handlers.
In order to achieve this:
1) Change sun4v_init_mondo_queues() so that it can operate in
several modes.
It can allocate the queues, or install them in the current
processor, or both.
The boot cpu does both in its call early on.
Later, the boot cpu allocates the sibling cpu queue, starts
the sibling cpu, then the sibling cpu loads them in.
2) init_cur_cpu_trap() is changed to take the current_thread_info()
as an argument instead of reading %g6 directly on the current
cpu.
3) Create a trampoline stack for the sibling cpus. We do our basic
kernel calls using this stack, which is locked into the kernel
image, then go to our proper thread stack after taking over the
trap table.
4) While we are in this delicate startup state, we put 0xdeadbeef
into %g4/%g5/%g6 in order to catch accidental accesses.
5) On the final prom_set_trap_table*() call, we put &init_thread_union
into %g6. This is a hack to make prom_world(0) work. All it
wants to do is restore the %asi register using
get_thread_current_ds().
Longer term we should just do the OBP calls to set the trap table by
hand, just like we do for everything else. This would avoid that silly
prom_world(0) issue, and then we could remove the init_thread_union hack.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
No trap levels above 2 in privileged mode on SUN4V.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The trap code was calling itself :-)
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Technically the hypervisor call supports sending in a list
of all cpus to get the cross-call, but I only pass in one
cpu at a time for now.
The multi-cpu support is there, just ifdef'd out so it's easy to
enable or delete it later.
Signed-off-by: David S. Miller <davem@davemloft.net>
|