| author | Kumar Kartikeya Dwivedi <memxor@gmail.com> | 2025-03-15 21:05:29 -0700 |
|---|---|---|
| committer | Alexei Starovoitov <ast@kernel.org> | 2025-03-19 08:03:05 -0700 |
| commit | c9102a68c070134ade5c941d7315481a77bcea53 | |
| tree | 264d6bf0f79d4a80e07e48fe686ed28d52c1d8f6 /include | |
| parent | 31158ad02ddbed2b0672c9701a0a2f3e5b3bc01a | |
rqspinlock: Add a test-and-set fallback
Include a test-and-set fallback for use when queued spinlock support is not
available. Introduce an rqspinlock type to act as the fallback lock when
qspinlock support is absent.
Add ifdef guards to ensure the slow path in this file is compiled only when
CONFIG_QUEUED_SPINLOCKS=y. Subsequent patches will add further logic to fall
back to the test-and-set implementation when queued spinlock support is
unavailable on an architecture.
Unlike the other waiting loops in the rqspinlock code, the test-and-set loop
has no theoretical upper bound under contention, so it needs a longer timeout
than usual. Bump it up to one second in this case; a sketch of such a
deadline-bounded loop is included below.
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20250316040541.108729-14-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
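The actual test-and-set slow path is added elsewhere in the series; the header change below only declares it. As a rough orientation for the timeout reasoning above, here is a minimal sketch of a test-and-set acquisition loop that enforces a deadline. The function name, the one-second constant, and the loop structure are illustrative assumptions, not the code added by this patch.

```c
/*
 * Minimal sketch only (NOT the code added by this series): a
 * test-and-set acquisition loop that gives up after roughly one
 * second instead of spinning forever under contention.
 */
static int tas_spin_lock_sketch(rqspinlock_t *lock)
{
	u64 deadline = ktime_get_mono_fast_ns() + NSEC_PER_SEC;
	int val;

	for (;;) {
		val = 0;
		/* Try to flip 0 -> 1; success means the lock is ours. */
		if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, 1)))
			return 0;
		/* Contended: relax the CPU and enforce the deadline. */
		cpu_relax();
		if (ktime_get_mono_fast_ns() > deadline)
			return -ETIMEDOUT;
	}
}
```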
Diffstat (limited to 'include')
| -rw-r--r-- | include/asm-generic/rqspinlock.h | 17 |
1 file changed, 17 insertions, 0 deletions
```diff
diff --git a/include/asm-generic/rqspinlock.h b/include/asm-generic/rqspinlock.h
index 34c3dcb4299e..12f72c4a97cd 100644
--- a/include/asm-generic/rqspinlock.h
+++ b/include/asm-generic/rqspinlock.h
@@ -12,11 +12,28 @@
 #include <linux/types.h>
 #include <vdso/time64.h>
 #include <linux/percpu.h>
+#ifdef CONFIG_QUEUED_SPINLOCKS
+#include <asm/qspinlock.h>
+#endif
+
+struct rqspinlock {
+	union {
+		atomic_t val;
+		u32 locked;
+	};
+};
 
 struct qspinlock;
+#ifdef CONFIG_QUEUED_SPINLOCKS
 typedef struct qspinlock rqspinlock_t;
+#else
+typedef struct rqspinlock rqspinlock_t;
+#endif
 
+extern int resilient_tas_spin_lock(rqspinlock_t *lock);
+#ifdef CONFIG_QUEUED_SPINLOCKS
 extern int resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val);
+#endif
 
 /*
  * Default timeout for waiting loops is 0.25 seconds
```
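Since the typedef above resolves rqspinlock_t to either struct qspinlock or struct rqspinlock, callers can stay agnostic of the configuration. The following is a hedged sketch of how a common entry point might dispatch between the two slow paths; the res_spin_lock() name and the fast-path details are assumptions for illustration and are not part of this patch, which adds only the two extern declarations shown above.

```c
/*
 * Sketch of a config-agnostic acquisition wrapper (illustrative only;
 * this patch adds just the two extern declarations shown above).
 */
static __always_inline int res_spin_lock(rqspinlock_t *lock)
{
#ifdef CONFIG_QUEUED_SPINLOCKS
	int val = 0;

	/* Fast path: grab an uncontended lock with a single cmpxchg. */
	if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, 1)))
		return 0;
	/* Contended: defer to the resilient qspinlock slow path. */
	return resilient_queued_spin_lock_slowpath(lock, val);
#else
	/* No qspinlock support on this architecture: test-and-set fallback. */
	return resilient_tas_spin_lock(lock);
#endif
}
```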
