From 894d1b3db41cf7e6ae0304429a1747b3c3f390bc Mon Sep 17 00:00:00 2001
From: Peter Zijlstra
Date: Wed, 9 Oct 2024 16:53:34 -0700
Subject: locking/mutex: Remove wakeups from under mutex::wait_lock

In preparation to nest mutex::wait_lock under rq::lock we need to remove
wakeups from under it.

Do this by utilizing wake_qs to defer the wakeup until after the lock is
dropped.

[Heavily changed after 55f036ca7e74 ("locking: WW mutex cleanup") and
08295b3b5bee ("locking: Implement an algorithm choice for Wound-Wait
mutexes")]
[jstultz: rebased to mainline, added extra wake_up_q & init to avoid hangs,
 similar to Connor's rework of this patch]

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Juri Lelli
Signed-off-by: John Stultz
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Metin Kaya
Acked-by: Davidlohr Bueso
Tested-by: K Prateek Nayak
Tested-by: Metin Kaya
Link: https://lore.kernel.org/r/20241009235352.1614323-2-jstultz@google.com
---
 kernel/locking/spinlock_rt.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

(limited to 'kernel/locking/spinlock_rt.c')

diff --git a/kernel/locking/spinlock_rt.c b/kernel/locking/spinlock_rt.c
index 38e292454fcc..014143934e00 100644
--- a/kernel/locking/spinlock_rt.c
+++ b/kernel/locking/spinlock_rt.c
@@ -162,9 +162,10 @@ rwbase_rtmutex_lock_state(struct rt_mutex_base *rtm, unsigned int state)
 }
 
 static __always_inline int
-rwbase_rtmutex_slowlock_locked(struct rt_mutex_base *rtm, unsigned int state)
+rwbase_rtmutex_slowlock_locked(struct rt_mutex_base *rtm, unsigned int state,
+			       struct wake_q_head *wake_q)
 {
-	rtlock_slowlock_locked(rtm);
+	rtlock_slowlock_locked(rtm, wake_q);
 	return 0;
 }
 
-- 
cgit v1.2.3
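
Editor's note (not part of the commit): the wake_q deferral pattern this
patch relies on lets a waker collect tasks while still holding a lock and
issue the actual wakeups only after the lock is released, which is what
allows mutex::wait_lock to later nest under rq::lock. A minimal sketch of
the pattern follows; release_and_wake(), my_lock, and waiter_task are
hypothetical names for illustration and do not appear in the patch, while
DEFINE_WAKE_Q(), wake_q_add(), and wake_up_q() are the real kernel API
from <linux/sched/wake_q.h>:

	#include <linux/sched/wake_q.h>
	#include <linux/spinlock.h>

	/* Hypothetical helper, for illustration only. */
	static void release_and_wake(raw_spinlock_t *my_lock,
				     struct task_struct *waiter_task)
	{
		DEFINE_WAKE_Q(wake_q);	/* on-stack wake queue */

		raw_spin_lock(my_lock);
		/* ... update state that makes the waiter runnable ... */
		wake_q_add(&wake_q, waiter_task);	/* queue it; no wakeup yet */
		raw_spin_unlock(my_lock);

		/* Wakeups happen here, after my_lock is dropped. */
		wake_up_q(&wake_q);
	}

This is why the hunk above threads a struct wake_q_head * down into
rtlock_slowlock_locked(): callers queue pending wakeups into it under
the lock and flush them with wake_up_q() once the lock is released.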