| author    | Thomas Gleixner <tglx@linutronix.de>             | 2009-12-10 15:35:10 +0100 |
|-----------|--------------------------------------------------|---------------------------|
| committer | Greg Kroah-Hartman <gregkh@suse.de>              | 2010-01-06 14:26:22 -0800 |
| commit    | 5c87f0c04abf876c8019c04ff581976a9a241206 (patch) |                           |
| tree      | 0db1a372467c01f642484f66fd8b34a982997eea /kernel |                           |
| parent    | c7d622e356d6a21b788bd34cef607e01ae28c197 (diff)  |                           |
clockevents: Prevent clockevent_devices list corruption on cpu hotplug
commit bb6eddf7676e1c1f3e637aa93c5224488d99036f upstream.
Xiaotian Feng triggered a list corruption in the clock events list on
CPU hotplug and debugged the root cause.
If a CPU registers more than one per-CPU clock event device, only the
active clock event device is removed on CPU_DEAD. The unused devices
stay on the clock event device list.
On CPU up the clock event devices are registered again, which means
list_add() is called on an already enqueued list_head. That corrupts
the list.
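For illustration, here is a minimal userspace sketch (plain C, not the kernel's own <linux/list.h>) of the circular-list insert that list_add() performs. Re-inserting a node that is still linked leaves it pointing at itself, so a later walk of the list never returns to the head:

```c
#include <stdio.h>

/* Minimal stand-in for the kernel's struct list_head and list_add(),
 * for illustration only. */
struct list_head {
	struct list_head *next, *prev;
};

static void list_add(struct list_head *new, struct list_head *head)
{
	struct list_head *next = head->next;

	next->prev = new;
	new->next  = next;
	new->prev  = head;
	head->next = new;
}

int main(void)
{
	struct list_head head = { &head, &head };	/* empty list */
	struct list_head dev;				/* a device's list_head */

	list_add(&dev, &head);	/* first registration: head <-> dev */
	list_add(&dev, &head);	/* re-registration of the already enqueued node */

	/* dev now points at itself; a walk from head never gets back to head */
	printf("dev.next == &dev? %s\n",
	       dev.next == &dev ? "yes (list corrupted)" : "no");
	return 0;
}
```

This is the situation the leftover per-CPU device ends up in when it is registered again on CPU up.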
Resolve this by removing, on CPU_DEAD, all devices associated with the
dead CPU.
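The patch below identifies those devices via their cpumask: a device whose mask contains the dead CPU and no other CPU can only ever have served that CPU, so it is safe to unlink. As a rough userspace sketch of that predicate (an unsigned long bitmask stands in for the kernel's struct cpumask, and the helper name is invented for this example):

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Sketch of the check used by the fix: a clock event device belongs
 * exclusively to a CPU iff its cpumask contains that CPU and nothing
 * else.  An unsigned long bitmask stands in for the kernel cpumask.
 */
static bool only_serves_cpu(unsigned long mask, int cpu)
{
	return (mask & (1UL << cpu)) != 0 &&	/* cpumask_test_cpu(cpu, mask) */
	       (mask & (mask - 1)) == 0;	/* cpumask_weight(mask) == 1   */
}

int main(void)
{
	printf("%d\n", only_serves_cpu(1UL << 3, 3));	/* per-CPU device of CPU 3 -> 1 */
	printf("%d\n", only_serves_cpu(~0UL, 3));	/* device usable by any CPU -> 0 */
	return 0;
}
```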
Reported-by: Xiaotian Feng <dfeng@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Xiaotian Feng <dfeng@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/time/clockevents.c | 18 |
1 file changed, 15 insertions(+), 3 deletions(-)
```diff
diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
index 620b58abdc32..9484be456d69 100644
--- a/kernel/time/clockevents.c
+++ b/kernel/time/clockevents.c
@@ -237,8 +237,9 @@ void clockevents_exchange_device(struct clock_event_device *old,
  */
 void clockevents_notify(unsigned long reason, void *arg)
 {
-        struct list_head *node, *tmp;
+        struct clock_event_device *dev, *tmp;
         unsigned long flags;
+        int cpu;
 
         spin_lock_irqsave(&clockevents_lock, flags);
         clockevents_do_notify(reason, arg);
@@ -249,8 +250,19 @@ void clockevents_notify(unsigned long reason, void *arg)
                  * Unregister the clock event devices which were
                  * released from the users in the notify chain.
                  */
-                list_for_each_safe(node, tmp, &clockevents_released)
-                        list_del(node);
+                list_for_each_entry_safe(dev, tmp, &clockevents_released, list)
+                        list_del(&dev->list);
+                /*
+                 * Now check whether the CPU has left unused per cpu devices
+                 */
+                cpu = *((int *)arg);
+                list_for_each_entry_safe(dev, tmp, &clockevent_devices, list) {
+                        if (cpumask_test_cpu(cpu, dev->cpumask) &&
+                            cpumask_weight(dev->cpumask) == 1) {
+                                BUG_ON(dev->mode != CLOCK_EVT_MODE_UNUSED);
+                                list_del(&dev->list);
+                        }
+                }
                 break;
         default:
                 break;
```