author		Peter Zijlstra <peterz@infradead.org>	2014-09-30 15:26:00 +0200
committer	Ingo Molnar <mingo@kernel.org>		2014-10-03 06:09:30 +0200
commit		8acd91e8620836a56ff62028ed28ba629f2881a0 (patch)
tree		4a95b0d1a41b75f6ad4cd8a6c59dcdb9268d1e4b /include
parent		debfab74e453f079cd8b12b0604387a8c510ef3a (diff)
locking/lockdep: Revert qrwlock recursive stuff
Commit f0bab73cb539 ("locking/lockdep: Restrict the use of recursive
read_lock() with qrwlock") changed lockdep to try to conform to the
qrwlock semantics, which differ from the traditional rwlock semantics.
In particular, qrwlock is fair outside of interrupt context, but in
interrupt context readers ignore all fairness.
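To make that asymmetry concrete, here is a minimal sketch of the
reader admission rule described above (plain C with made-up names; it
models only the policy, not the kernel's actual qrwlock code):

	#include <stdbool.h>

	/* Hypothetical model; the enum and function are illustrative,
	 * not kernel API. */
	enum writer_state { WRITER_NONE, WRITER_WAITING, WRITER_HOLDING };

	static bool reader_may_acquire(enum writer_state w, bool in_interrupt)
	{
		if (in_interrupt)
			/* Interrupt context: fairness is ignored; only a
			 * writer that actually holds the lock blocks us. */
			return w != WRITER_HOLDING;

		/* Process context: be fair and also wait behind a writer
		 * that is merely queued, so writers cannot be starved. */
		return w == WRITER_NONE;
	}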
The problem with modeling this is that the read and write sides have
different lock-state (interrupt) semantics, but we only have a single
representation of these. Therefore lockdep gets confused and thinks
the lock can cause interrupt lock inversions.
So revert it for now; the old rwlock semantics were already imperfectly
modeled and the qrwlock extra won't fit either.
If we want to properly fix this, I think we need to resurrect the
work Gautham did a few years ago that split the read and write state
of locks:
http://lwn.net/Articles/332801/
FWIW the locking selftest that would've failed (and was reported by
Borislav earlier) is something like:
	RL(X1);		/* IRQ-ON */
	LOCK(A);
	UNLOCK(A);
	RU(X1);

	IRQ_ENTER();
	RL(X1);		/* IN-IRQ */
	RU(X1);
	IRQ_EXIT();
At which point it would report that because A is an IRQ-unsafe lock we
can suffer the following inversion:
	CPU0			CPU1

	lock(A)
				lock(X1)
				lock(A)
	<IRQ>
	  lock(X1)
And this is 'wrong' because X1 can recurse (assuming the above locks
are in fact read-locks) but lockdep doesn't know about this.
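For reference, the same pattern written with ordinary kernel locking
calls looks roughly like the sketch below (lock names and the split
into two functions are illustrative, not taken from the selftest):

	#include <linux/spinlock.h>

	static DEFINE_RWLOCK(x1_lock);		/* the 'X1' lock above */
	static DEFINE_SPINLOCK(a_lock);		/* the IRQ-unsafe lock 'A' */

	static void process_context_path(void)
	{
		read_lock(&x1_lock);		/* RL(X1), IRQs still enabled */
		spin_lock(&a_lock);		/* LOCK(A) */
		spin_unlock(&a_lock);		/* UNLOCK(A) */
		read_unlock(&x1_lock);		/* RU(X1) */
	}

	static void in_irq_path(void)		/* runs in hardirq context */
	{
		read_lock(&x1_lock);		/* RL(X1) in-IRQ */
		read_unlock(&x1_lock);		/* RU(X1) */
	}

Because both X1 acquisitions are read locks, the interrupt can always
acquire X1 even while another CPU holds it for read, so the reported
inversion cannot actually deadlock.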
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Waiman Long <Waiman.Long@hp.com>
Cc: ego@linux.vnet.ibm.com
Cc: bp@alien8.de
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20140930132600.GA7444@worktop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'include')
-rw-r--r--	include/linux/lockdep.h	| 10
1 files changed, 1 insertions, 9 deletions
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index b5a84b62fb84..f388481201cd 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -478,24 +478,16 @@ static inline void print_irqtrace_events(struct task_struct *curr)
  * on the per lock-class debug mode:
  */

-/*
- * Read states in the 2-bit held_lock:read field:
- * 0: Exclusive lock
- * 1: Shareable lock, cannot be recursively called
- * 2: Shareable lock, can be recursively called
- * 3: Shareable lock, cannot be recursively called except in interrupt context
- */
 #define lock_acquire_exclusive(l, s, t, n, i)		lock_acquire(l, s, t, 0, 1, n, i)
 #define lock_acquire_shared(l, s, t, n, i)		lock_acquire(l, s, t, 1, 1, n, i)
 #define lock_acquire_shared_recursive(l, s, t, n, i)	lock_acquire(l, s, t, 2, 1, n, i)
-#define lock_acquire_shared_irecursive(l, s, t, n, i)	lock_acquire(l, s, t, 3, 1, n, i)

 #define spin_acquire(l, s, t, i)		lock_acquire_exclusive(l, s, t, NULL, i)
 #define spin_acquire_nest(l, s, t, n, i)	lock_acquire_exclusive(l, s, t, n, i)
 #define spin_release(l, n, i)			lock_release(l, n, i)

 #define rwlock_acquire(l, s, t, i)		lock_acquire_exclusive(l, s, t, NULL, i)
-#define rwlock_acquire_read(l, s, t, i)		lock_acquire_shared_irecursive(l, s, t, NULL, i)
+#define rwlock_acquire_read(l, s, t, i)		lock_acquire_shared_recursive(l, s, t, NULL, i)
 #define rwlock_release(l, n, i)			lock_release(l, n, i)

 #define seqcount_acquire(l, s, t, i)		lock_acquire_exclusive(l, s, t, NULL, i)
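For readers skimming the hunk, the net effect on the read-side rwlock
annotation is summarized below; the read-state values are taken from
the comment block the patch removes:

	/*
	 * before: rwlock_acquire_read(l, s, t, i)
	 *           -> lock_acquire(l, s, t, 3, 1, NULL, i)
	 *              read == 3: shareable, recursive only in interrupt context
	 *
	 * after:  rwlock_acquire_read(l, s, t, i)
	 *           -> lock_acquire(l, s, t, 2, 1, NULL, i)
	 *              read == 2: shareable, always recursive
	 */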