| author | Thomas Gleixner <tglx@linutronix.de> | 2012-07-17 18:05:32 -0400 |
|---|---|---|
| committer | Willy Tarreau <w@1wt.eu> | 2012-10-07 23:37:07 +0200 |
| commit | 5a82cdee505bb35403878c0946b521a37350a365 | |
| tree | 5a1e24e391185d6f078425ce02041addefdae247 /kernel | |
| parent | c2358cf4165eeac6eacd6017c4419b33cacb0418 | |
hrtimers: Move lock held region in hrtimer_interrupt()
This is a backport of 196951e91262fccda81147d2bcf7fdab08668b40
We need to update the base offsets from this code, and we need to do
that under base->lock. Move the lock-held region around the
ktime_get() calls. The ktime_get() calls are going to be replaced with
a function that gets the time and the offsets atomically.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Prarit Bhargava <prarit@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Link: http://lkml.kernel.org/r/1341960205-56738-6-git-send-email-johnstul@us.ibm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Linux Kernel <linux-kernel@vger.kernel.org>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
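The commit message notes that the ktime_get() calls are about to be replaced by a function that returns the time and the per-clock base offsets atomically; taking cpu_base->lock before the first ktime_get() is what makes that substitution safe (upstream, the series introduces ktime_get_update_offsets() for this). Below is a minimal user-space sketch of that pattern, with a pthread mutex standing in for cpu_base->lock. All names (base_lock, offs_real_ns, get_time_and_offset) are illustrative, not kernel API.

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t base_lock = PTHREAD_MUTEX_INITIALIZER;
static int64_t offs_real_ns;	/* stands in for the CLOCK_REALTIME base offset */

static int64_t mono_ns(void)	/* stands in for ktime_get() */
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (int64_t)ts.tv_sec * 1000000000 + ts.tv_nsec;
}

/*
 * Sample the clock and the offset as one consistent pair. The caller
 * must hold base_lock, so a concurrent clock-setting update cannot
 * change the offset between the two reads.
 */
static int64_t get_time_and_offset(int64_t *offs)
{
	*offs = offs_real_ns;
	return mono_ns();
}

int main(void)
{
	int64_t offs, now;

	pthread_mutex_lock(&base_lock);
	now = get_time_and_offset(&offs);
	pthread_mutex_unlock(&base_lock);

	printf("mono=%lld offs_real=%lld\n", (long long)now, (long long)offs);
	return 0;
}
```

If the timestamp were read outside the lock, as before this patch, the offset could be updated between the clock read and the offset read, and the pair would no longer describe the same instant.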
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/hrtimer.c | 5
1 file changed, 3 insertions(+), 2 deletions(-)
```diff
diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
index c4acec717aeb..8ba6d311db5e 100644
--- a/kernel/hrtimer.c
+++ b/kernel/hrtimer.c
@@ -1263,11 +1263,10 @@ void hrtimer_interrupt(struct clock_event_device *dev)
 	cpu_base->nr_events++;
 	dev->next_event.tv64 = KTIME_MAX;
 
+	spin_lock(&cpu_base->lock);
 	entry_time = now = ktime_get();
 retry:
 	expires_next.tv64 = KTIME_MAX;
-
-	spin_lock(&cpu_base->lock);
 	/*
 	 * We set expires_next to KTIME_MAX here with cpu_base->lock
 	 * held to prevent that a timer is enqueued in our queue via
@@ -1342,6 +1341,7 @@ retry:
 	 * interrupt routine. We give it 3 attempts to avoid
 	 * overreacting on some spurious event.
 	 */
+	spin_lock(&cpu_base->lock);
 	now = ktime_get();
 	cpu_base->nr_retries++;
 	if (++retries < 3)
@@ -1354,6 +1354,7 @@ retry:
 	 */
 	cpu_base->nr_hangs++;
 	cpu_base->hang_detected = 1;
+	spin_unlock(&cpu_base->lock);
 	delta = ktime_sub(now, entry_time);
 	if (delta.tv64 > cpu_base->max_hang_time.tv64)
 		cpu_base->max_hang_time = delta;
```
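Reading the three hunks together: the lock is now taken before the first timestamp, re-taken before the timestamp in the retry path, and dropped once the hang statistics have been recorded, so every ktime_get() in hrtimer_interrupt() runs under cpu_base->lock. A rough user-space model of that control flow follows; the names are illustrative only, program_event() stands in for tick_program_event(), and the stub fails three times to force the hang path.

```c
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t cpu_base_lock = PTHREAD_MUTEX_INITIALIZER;

static long now_ns(void)		/* stands in for ktime_get() */
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000L + ts.tv_nsec;
}

static int program_event(void)		/* stands in for tick_program_event() */
{
	static int failures = 3;	/* fail three times to force the hang path */

	return failures-- > 0 ? -1 : 0;
}

int main(void)
{
	long entry_time, now;
	int retries = 0;

	pthread_mutex_lock(&cpu_base_lock);	/* lock first, then read the clock */
	entry_time = now = now_ns();
retry:
	/* expire timers, publish the next expiry ... then drop the lock */
	pthread_mutex_unlock(&cpu_base_lock);

	if (program_event() == 0)
		return 0;			/* event device reprogrammed, done */

	/* Reprogram failed: re-take the lock before re-reading the clock. */
	pthread_mutex_lock(&cpu_base_lock);
	now = now_ns();
	if (++retries < 3)
		goto retry;

	/* Hang: record it under the lock, then unlock for the delay math. */
	pthread_mutex_unlock(&cpu_base_lock);
	printf("hang detected, delta=%ld ns\n", now - entry_time);
	return 0;
}
```

As in the patch's final hunk, the bookkeeping that touches shared state (nr_retries, nr_hangs, hang_detected in the kernel) happens under the lock, while the delay arithmetic on local variables runs after the unlock.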