| author | Kuniyuki Iwashima <kuniyu@google.com> | 2026-05-01 07:39:41 +0000 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2026-05-04 18:34:45 -0700 |
| commit | d82ba05263c69fa2437fe93e4e561cc40f4c03af | |
| tree | 923e1148a21db16e7c26fe3121d7622512fb7225 | |
| parent | bd3a4795d5744f59a1f485379f1303e5e606f377 | |
af_unix: Set gc_in_progress to true in unix_gc().
Igor Ushakov reported that unix_gc() could run with gc_in_progress
being false if the work is rescheduled while it is already running:
Thread 1                     Thread 2                      Thread 3
--------                     --------                      --------
                             unix_schedule_gc()            unix_schedule_gc()
                             `- if (!gc_in_progress)       `- if (!gc_in_progress)
                                |- gc_in_progress = true      |
                                `- queue_work()               |
unix_gc() <--------------------/                              |
|                                                             |- gc_in_progress = true
...                                                           `- queue_work()
|                                                                |
`- gc_in_progress = false                                        |
                                                                 |
unix_gc() <------------------------------------------------------'
|
... /* gc_in_progress == false */
|
`- gc_in_progress = false
unix_peek_fpl() relies on gc_in_progress so that MSG_PEEK does
not confuse the GC.
Let's set gc_in_progress to true in unix_gc().
Fixes: 8b90a9f819dc ("af_unix: Run GC on only one CPU.")
Reported-by: Igor Ushakov <sysroot314@gmail.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
Link: https://patch.msgid.link/20260501073945.1884564-1-kuniyu@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
| -rw-r--r-- | net/unix/garbage.c | 6 |

1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index a7967a345827..0783555e2526 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -607,6 +607,8 @@ static void unix_gc(struct work_struct *work)
 	struct sk_buff_head hitlist;
 	struct sk_buff *skb;
 
+	WRITE_ONCE(gc_in_progress, true);
+
 	spin_lock(&unix_gc_lock);
 
 	if (unix_graph_state == UNIX_GRAPH_NOT_CYCLIC) {
@@ -649,10 +651,8 @@ void unix_schedule_gc(struct user_struct *user)
 	    READ_ONCE(user->unix_inflight) < UNIX_INFLIGHT_SANE_USER)
 		return;
 
-	if (!READ_ONCE(gc_in_progress)) {
-		WRITE_ONCE(gc_in_progress, true);
+	if (!READ_ONCE(gc_in_progress))
 		queue_work(system_dfl_wq, &unix_gc_work);
-	}
 
 	if (user && READ_ONCE(unix_graph_cyclic_sccs))
 		flush_work(&unix_gc_work);
