| author | Kai Aizen <kai@snailsploit.com> | 2026-02-18 17:36:41 +0000 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2026-02-18 10:39:48 -0700 |
| commit | 003049b1c4fb8aabb93febb7d1e49004f6ad653b (patch) | |
| tree | bdebae3d7d6667f236dd31ecb4a183ecbcb0b8c6 | |
| parent | 2961f841b025fb234860bac26dfb7fa7cb0fb122 (diff) | |
io_uring/zcrx: fix user_ref race between scrub and refill paths
The io_zcrx_put_niov_uref() function uses a non-atomic
check-then-decrement pattern (an atomic_read() followed by a separate
atomic_dec()) to drop a reference from user_refs. This is serialized
against other callers by rq_lock, but io_zcrx_scrub() modifies the same
counter with atomic_xchg() WITHOUT holding rq_lock.
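For reference, the pre-patch check-then-decrement (as removed by the
hunk below) looks like this; the gap between the read and the decrement
is the window the scrub path races into:

	static bool io_zcrx_put_niov_uref(struct net_iov *niov)
	{
		atomic_t *uref = io_get_user_counter(niov);

		if (unlikely(!atomic_read(uref)))	/* check: uref can change right after this */
			return false;
		atomic_dec(uref);			/* decrement: not tied to the check above */
		return true;
	}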
On SMP systems, the following race exists:

  CPU0 (refill, holds rq_lock)        CPU1 (scrub, no rq_lock)
  ----------------------------        ------------------------
  put_niov_uref:
    atomic_read(uref) -> 1
    // window opens
                                      atomic_xchg(uref, 0) -> 1
                                      return_niov_freelist(niov) [PUSH #1]
    // window closes
    atomic_dec(uref) -> wraps to -1
    returns true
  return_niov(niov)
    return_niov_freelist(niov) [PUSH #2: DOUBLE-FREE]
The same niov is pushed to the freelist twice, causing free_count to
exceed nr_iovs. Subsequent freelist pushes then write a u32 value out
of bounds, past the end of the kvmalloc'd freelist array and into the
adjacent slab object.
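In a simplified sketch (illustrative names only, not the actual zcrx
structures), the freelist push relies on the invariant that free_count
never reaches nr_iovs and therefore has no bounds check, so a second
push of the same niov walks the store past the end of the array:

	/* Simplified, illustrative sketch -- not the real io_uring/zcrx.c layout. */
	struct zc_area_sketch {
		u32 *freelist;		/* kvmalloc_array(nr_iovs, sizeof(u32), ...) */
		u32 free_count;		/* invariant: free_count <= nr_iovs */
		u32 nr_iovs;
	};

	static void freelist_push_sketch(struct zc_area_sketch *a, u32 niov_idx)
	{
		/* No room check: each niov is expected to be returned at most once.
		 * After the double-free above, free_count == nr_iovs already, so this
		 * store lands one u32 past the kvmalloc'd array, corrupting the
		 * adjacent slab object. */
		a->freelist[a->free_count++] = niov_idx;
	}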
Fix this by replacing the non-atomic read-then-dec in
io_zcrx_put_niov_uref() with an atomic_try_cmpxchg loop that atomically
tests and decrements user_refs. This makes the operation safe against
concurrent atomic_xchg from scrub without requiring scrub to acquire
rq_lock.
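The same decrement-if-non-zero idiom can be exercised as a rough
userspace analogue with C11 atomics (this is not the kernel code;
atomic_compare_exchange_weak() stands in for atomic_try_cmpxchg()):

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	/* Userspace analogue of the fixed helper: decrement the counter only if
	 * it is still non-zero, as a single atomic step. */
	static bool put_uref(atomic_int *uref)
	{
		int old = atomic_load(uref);

		do {
			if (old == 0)
				return false;	/* another path already consumed the ref */
		} while (!atomic_compare_exchange_weak(uref, &old, old - 1));
		return true;
	}

	int main(void)
	{
		atomic_int uref = 1;

		printf("%d\n", put_uref(&uref));	/* 1: ref dropped, uref is now 0 */
		printf("%d\n", put_uref(&uref));	/* 0: nothing left to drop */
		return 0;
	}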
Fixes: 34a3e60821ab ("io_uring/zcrx: implement zerocopy receive pp memory provider")
Cc: stable@vger.kernel.org
Signed-off-by: Kai Aizen <kai@snailsploit.com>
[pavel: removed a warning and a comment]
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
| -rw-r--r-- | io_uring/zcrx.c | 10 |
1 files changed, 7 insertions, 3 deletions
diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 28150c6578e3..97984a73a95d 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -349,10 +349,14 @@ static inline atomic_t *io_get_user_counter(struct net_iov *niov)
 static bool io_zcrx_put_niov_uref(struct net_iov *niov)
 {
 	atomic_t *uref = io_get_user_counter(niov);
+	int old;
+
+	old = atomic_read(uref);
+	do {
+		if (unlikely(old == 0))
+			return false;
+	} while (!atomic_try_cmpxchg(uref, &old, old - 1));
 
-	if (unlikely(!atomic_read(uref)))
-		return false;
-	atomic_dec(uref);
 	return true;
 }
