| field | value | date |
|---|---|---|
| author | Linus Torvalds <torvalds@linux-foundation.org> | 2023-06-27 14:14:30 -0700 |
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2023-06-27 14:14:30 -0700 |
| commit | bc6cb4d5bc3a44197de30784eae71d8ba28483eb (patch) | |
| tree | fdd00391c6068c217eeb8a4a06afc40cc1fc6853 /scripts/atomic/fallbacks/fetch_add_unless | |
| parent | ed3b7923a816ded62dccef377c9ee346c7d3b1b4 (diff) | |
| parent | b33eb50a92b0a298fa8a6ac350e741c3ec100f6d (diff) | |
Merge tag 'locking-core-2023-06-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking updates from Ingo Molnar:
- Introduce cmpxchg128() -- aka. the demise of cmpxchg_double()

  The cmpxchg128() family of functions is basically & functionally the
  same as cmpxchg_double(), but with a saner interface.

  Instead of a 6-parameter horror that forced u128-as-u64/u64-halves
  layout details onto the interface and exposed users to complexity,
  fragility & bugs, it uses a natural 3-parameter interface with u128
  types (see the first sketch after this list).
- Restructure the generated atomic headers, and add kerneldoc comments
  for all of the generic atomic{,64,_long}_t operations.

  The generated definitions are much cleaner now, and come with
  documentation (see the kerneldoc sketch after this list).
- Implement lock_set_cmp_fn() on lockdep, for defining an ordering when
  taking multiple locks of the same type.

  This gets rid of one use of lockdep_set_novalidate_class() in the
  bcache code (see the comparison-function sketch after this list).
- Fix a raw_cpu_generic_try_cmpxchg() bug where unintended variable
  shadowing made Clang generate garbage code on certain ARM builds
  (the shadowing pattern is illustrated after this list).
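A minimal sketch of the new 3-parameter calling convention, assuming the try_cmpxchg128() flavor added by this series; fetch_add128(), the variable names, and the surrounding context are hypothetical, and the target architecture must provide 128-bit cmpxchg (e.g. CMPXCHG16B on x86-64):

```c
#include <linux/types.h>	/* u128 */
#include <linux/atomic.h>	/* try_cmpxchg128(), per this series */

/* Hypothetical helper: atomically add @a to a naturally aligned
 * 128-bit counter and return the previous value. */
static inline u128 fetch_add128(u128 *slot, u128 a)
{
	u128 old = *slot;	/* plain read just seeds the CAS loop */

	/* One pointer, one old value, one new value -- contrast the
	 * retired cmpxchg_double(p1, p2, o1, o2, n1, n2). On failure,
	 * try_cmpxchg128() refreshes 'old' with the current contents. */
	while (!try_cmpxchg128(slot, &old, old + a))
		;

	return old;
}
```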
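The generated kerneldoc reads roughly like this (wording approximated, not copied from the generated header):

```c
/**
 * raw_atomic_fetch_add_unless() - atomic add unless value with full ordering
 * @v: pointer to atomic_t
 * @a: int value to add
 * @u: int value to compare with
 *
 * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
 *
 * Return: The original value of @v.
 */
```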
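A sketch of lock_set_cmp_fn(), modeled loosely on the bcache conversion merged here; struct node, its fields, and the exact argument convention are assumptions, and lockdep (CONFIG_DEBUG_LOCK_ALLOC) is presumed enabled:

```c
#include <linux/container_of.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>

struct node {
	struct mutex	lock;
	int		level;	/* parents sit at higher levels */
};

/* Lockdep consults this when two held locks share a class; by the
 * convention modeled on the bcache usage, a negative return means
 * taking 'a' before 'b' is the expected order. */
static int node_lock_cmp_fn(const struct lockdep_map *_a,
			    const struct lockdep_map *_b)
{
	const struct node *a = container_of(_a, struct node, lock.dep_map);
	const struct node *b = container_of(_b, struct node, lock.dep_map);

	/* Valid nesting: parent (higher level) first, child second. */
	return b->level - a->level;
}

static void node_init(struct node *n)
{
	mutex_init(&n->lock);
	/* Second callback (pretty-printer for splats) assumed optional. */
	lock_set_cmp_fn(&n->lock, node_lock_cmp_fn, NULL);
}
```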
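The percpu fix addresses a macro-hygiene hazard; here is a simplified model of the bug class (not the kernel's exact macros). The inner macro declares a temporary named __old while the outer wrapper passes the address of its own __old down, so the inner declaration shadows the caller's variable and the "load the old value" step becomes a self-assignment of an uninitialized local:

```c
#include <linux/types.h>	/* bool; kernel-flavored GNU C throughout */

#define demo_try_cmpxchg(ptr, ovalp, nval)				\
({									\
	typeof(*(ptr)) __val = *(ptr);					\
	typeof(*(ptr)) __old = *(ovalp);  /* shadows caller's __old! */	\
	bool __ret = (__val == __old);					\
	if (__ret)							\
		*(ptr) = (nval);					\
	else								\
		*(ovalp) = __val;					\
	__ret;								\
})

/* The wrapper's &__old ends up pointing at the *inner* __old above,
 * so that initializer reads an uninitialized variable -- which Clang
 * then turns into garbage code. The merged fix gives the internal
 * temporary a non-colliding name. */
#define demo_cmpxchg(ptr, oval, nval)					\
({									\
	typeof(*(ptr)) __old = (oval);					\
	demo_try_cmpxchg(ptr, &__old, nval);				\
	__old;								\
})
```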
* tag 'locking-core-2023-06-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (43 commits)
locking/atomic: scripts: fix ${atomic}_dec_if_positive() kerneldoc
percpu: Fix self-assignment of __old in raw_cpu_generic_try_cmpxchg()
locking/atomic: treewide: delete arch_atomic_*() kerneldoc
locking/atomic: docs: Add atomic operations to the driver basic API documentation
locking/atomic: scripts: generate kerneldoc comments
docs: scripts: kernel-doc: accept bitwise negation like ~@var
locking/atomic: scripts: simplify raw_atomic*() definitions
locking/atomic: scripts: simplify raw_atomic_long*() definitions
locking/atomic: scripts: split pfx/name/sfx/order
locking/atomic: scripts: restructure fallback ifdeffery
locking/atomic: scripts: build raw_atomic_long*() directly
locking/atomic: treewide: use raw_atomic*_<op>()
locking/atomic: scripts: add trivial raw_atomic*_<op>()
locking/atomic: scripts: factor out order template generation
locking/atomic: scripts: remove leftover "${mult}"
locking/atomic: scripts: remove bogus order parameter
locking/atomic: xtensa: add preprocessor symbols
locking/atomic: x86: add preprocessor symbols
locking/atomic: sparc: add preprocessor symbols
locking/atomic: sh: add preprocessor symbols
...
Diffstat (limited to 'scripts/atomic/fallbacks/fetch_add_unless')
| mode | path | lines changed |
|---|---|---|
| -rwxr-xr-x | scripts/atomic/fallbacks/fetch_add_unless | 17 |

1 file changed, 2 insertions, 15 deletions
```diff
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index 68ce13c8b9da..8db7e9e17fac 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -1,23 +1,10 @@
 cat << EOF
-/**
- * arch_${atomic}_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type ${atomic}_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
-static __always_inline ${int}
-arch_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
-{
-	${int} c = arch_${atomic}_read(v);
+	${int} c = raw_${atomic}_read(v);
 
 	do {
 		if (unlikely(c == u))
 			break;
-	} while (!arch_${atomic}_try_cmpxchg(v, &c, c + a));
+	} while (!raw_${atomic}_try_cmpxchg(v, &c, c + a));
 
 	return c;
-}
 EOF
```
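For reference, instantiating the template above with ${atomic}=atomic and ${int}=int yields a fallback along these lines (the signature and kerneldoc are now emitted by the shared generator scripts rather than by this per-op template):

```c
static __always_inline int
raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
	int c = raw_atomic_read(v);	/* snapshot the current value */

	do {
		if (unlikely(c == u))	/* already at the excluded value */
			break;
		/* On failure, try_cmpxchg updates 'c' with the fresh value. */
	} while (!raw_atomic_try_cmpxchg(v, &c, c + a));

	return c;	/* the original value, as "fetch_" implies */
}
```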
