author | Peter Zijlstra <peterz@infradead.org> | 2013-11-19 16:13:38 +0100
---|---|---
committer | Ingo Molnar <mingo@kernel.org> | 2014-01-13 13:47:36 +0100
commit | 9ea4c380066fbe23fe0da7f4abfabc444f2467f4 |
tree | a6f3e8def29275f04b33cfb0defcefc9e0adaaf0 | /include/linux/preempt_mask.h
parent | c726099ec224be8078d91072207053ff9a1ad6fc |
locking: Optimize lock_bh functions
Currently all _bh_ lock functions do two preempt_count operations:

    local_bh_disable();
    preempt_disable();

and for the unlock:

    preempt_enable_no_resched();
    local_bh_enable();
Since it's a waste of perfectly good cycles to modify the same variable
twice when you can do it in one go, use the new
__local_bh_{dis,en}able_ip() functions that allow us to provide a
preempt_count value to add/sub.
So define SOFTIRQ_LOCK_OFFSET as the combined offset a _bh_ lock needs
to add/sub, so that both operations can be done in one go.
As a bonus it gets rid of the preempt_enable_no_resched() usage.
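For illustration, the _bh_ lock fast paths then boil down to roughly the
following (a sketch of the spinlock case only; the exact definitions
live in include/linux/spinlock_api_smp.h and may differ in detail):

    static inline void __raw_spin_lock_bh(raw_spinlock_t *lock)
    {
            /* one preempt_count add covers both bh-disable and preempt-disable */
            __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
            spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
            LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
    }

    static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock)
    {
            spin_release(&lock->dep_map, 1, _RET_IP_);
            do_raw_spin_unlock(lock);
            /* one preempt_count sub; also handles pending softirqs / resched */
            __local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
    }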
This reduces 1000 iterations of:

    spin_lock_bh(&bh_lock);
    spin_unlock_bh(&bh_lock);

from 53596 cycles to 51995 cycles. I didn't do enough measurements to
say for certain that the result is significant, but the few runs I did
suggest it is.
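The measurement harness itself isn't part of the patch; a minimal
sketch of one way to reproduce the loop above (the bh_lock name and the
get_cycles()-based timing are assumptions):

    /* needs <linux/spinlock.h>, <linux/printk.h>, <asm/timex.h> */
    static DEFINE_SPINLOCK(bh_lock);

    static void bench_lock_bh(void)
    {
            cycles_t start, end;
            int i;

            start = get_cycles();
            for (i = 0; i < 1000; i++) {
                    spin_lock_bh(&bh_lock);
                    spin_unlock_bh(&bh_lock);
            }
            end = get_cycles();
            pr_info("1000 lock_bh/unlock_bh pairs: %llu cycles\n",
                    (unsigned long long)(end - start));
    }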
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: jacob.jun.pan@linux.intel.com
Cc: Mike Galbraith <bitbucket@online.de>
Cc: hpa@zytor.com
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: lenb@kernel.org
Cc: rjw@rjwysocki.net
Cc: rui.zhang@intel.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20131119151338.GF3694@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'include/linux/preempt_mask.h')
-rw-r--r-- | include/linux/preempt_mask.h | 15 |
1 file changed, 15 insertions, 0 deletions
diff --git a/include/linux/preempt_mask.h b/include/linux/preempt_mask.h
index d169820203dd..b8d96bca4357 100644
--- a/include/linux/preempt_mask.h
+++ b/include/linux/preempt_mask.h
@@ -79,6 +79,21 @@
 #endif
 
 /*
+ * The preempt_count offset needed for things like:
+ *
+ *  spin_lock_bh()
+ *
+ * Which need to disable both preemption (CONFIG_PREEMPT_COUNT) and
+ * softirqs, such that unlock sequences of:
+ *
+ *  spin_unlock();
+ *  local_bh_enable();
+ *
+ * Work as expected.
+ */
+#define SOFTIRQ_LOCK_OFFSET (SOFTIRQ_DISABLE_OFFSET + PREEMPT_CHECK_OFFSET)
+
+/*
  * Are we running in atomic context? WARNING: this macro cannot
  * always detect atomic context; in particular, it cannot know about
  * held spinlocks in non-preemptible kernels. Thus it should not be
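For reference, with the preempt_count layout defined earlier in this
file (PREEMPT_SHIFT = 0 and SOFTIRQ_SHIFT = 8, so PREEMPT_CHECK_OFFSET
= 1, SOFTIRQ_OFFSET = 0x100, and SOFTIRQ_DISABLE_OFFSET = 2 *
SOFTIRQ_OFFSET; the values below assume that layout), the new constant
works out to:

    SOFTIRQ_LOCK_OFFSET = SOFTIRQ_DISABLE_OFFSET + PREEMPT_CHECK_OFFSET
                        = 0x200 + 0x001
                        = 0x201

That is, lock_bh adds 0x201 to preempt_count in a single operation
instead of adding 0x200 and then 1 in two, and unlock_bh subtracts it
back in one go.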