author     Peter Zijlstra <peterz@infradead.org>  2014-02-03 16:21:09 +0100
committer  Ingo Molnar <mingo@kernel.org>         2014-03-11 12:14:59 +0100
commit     34c6bc2c919a55e5ad4e698510a2f35ee13ab900 (patch)
tree       5ad85f7c86b6e42987d974f73e1b45aed05550a3 /kernel/locking
parent     fb0527bd5ea99bfeb2dd91e3c1433ecf745d6b99 (diff)
locking/mutexes: Add extra reschedule point
Add an extra reschedule point in an attempt to avoid getting rescheduled
the moment we've acquired the lock.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-zah5eyn9gu7qlgwh9r6n2anc@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/locking')
-rw-r--r--  kernel/locking/mutex.c  7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 2670b84067d6..02c61a9c8906 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -468,6 +468,13 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	}
 	osq_unlock(&lock->osq);
 slowpath:
+	/*
+	 * If we fell out of the spin path because of need_resched(),
+	 * reschedule now, before we try-lock the mutex. This avoids getting
+	 * scheduled out right after we obtained the mutex.
+	 */
+	if (need_resched())
+		schedule_preempt_disabled();
 #endif
 	spin_lock_mutex(&lock->wait_lock, flags);
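
For illustration only, and not part of the commit: a minimal userspace analogue
of the pattern the patch adds. Userspace has no equivalent of need_resched(),
so in this sketch sched_yield() stands in for "reschedule now" and is called
unconditionally before taking the lock; the file name demo.c and the worker()
helper are made up for the example. The idea being illustrated is the same:
give the scheduler its chance before entering the critical section, so a
pending preemption lands before the lock is held rather than right after it
is acquired.

/* demo.c - build with: gcc -O2 -pthread demo.c -o demo */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

static void *worker(void *arg)
{
	for (int i = 0; i < 100000; i++) {
		/*
		 * Yield before taking the lock, mirroring the kernel's
		 * need_resched() check before the slowpath try-lock.
		 * Any preemption now happens while we hold nothing.
		 */
		sched_yield();

		pthread_mutex_lock(&lock);
		counter++;	/* short critical section */
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t t[4];

	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);

	printf("counter = %ld\n", counter);
	return 0;
}

In the kernel patch the check is cheap and conditional (only when
need_resched() is already set after falling out of the optimistic-spin path);
the unconditional sched_yield() above is deliberately crude and exists only to
make the ordering of "reschedule, then lock" visible in a runnable program.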