| author | Ming Lei <ming.lei@redhat.com> | 2026-01-14 16:54:05 +0800 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2026-01-14 10:18:19 -0700 |
| commit | da579f05ef0faada3559e7faddf761c75cdf85e1 | |
| tree | 8938b21813a3c6f994eabbe9944094f03c6d8ac0 | |
| parent | e4fdbca2dc774366aca6532b57bfcdaae29aaf63 | |
io_uring: move local task_work in exit cancel loop
With IORING_SETUP_DEFER_TASKRUN, task work is queued to ctx->work_llist
(local work) rather than the fallback list. During io_ring_exit_work(),
io_move_task_work_from_local() was called once before the cancel loop,
moving work from work_llist to fallback_llist.
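For context, the move works roughly as follows (a simplified sketch of the helper in io_uring/io_uring.c, not the verbatim kernel code, which differs across versions):

```c
/*
 * Simplified sketch of io_move_task_work_from_local() -- not the
 * verbatim kernel code.  All pending local task work is spliced off
 * ctx->work_llist and requeued on ctx->fallback_llist, so the fallback
 * workqueue (rather than the unreachable submitter task) will run it.
 */
static __cold void io_move_task_work_from_local(struct io_ring_ctx *ctx)
{
	struct llist_node *node = llist_del_all(&ctx->work_llist);

	while (node) {
		struct io_kiocb *req = container_of(node, struct io_kiocb,
						    io_task_work.node);

		node = node->next;
		/* kick the fallback worker if the list was empty */
		if (llist_add(&req->io_task_work.node, &ctx->fallback_llist))
			schedule_delayed_work(&ctx->fallback_work, 1);
	}
}
```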
However, task work can be added to work_llist during the cancel loop
itself. There are two cases:
1) io_kill_timeouts() is called from io_uring_try_cancel_requests() to
cancel pending timeouts, and it adds task work via io_req_queue_tw_complete()
for each cancelled timeout (sketched after this list).
2) URING_CMD requests, such as ublk's, can be completed via
io_uring_cmd_complete_in_task() from ublk_queue_rq() while canceling is in
progress, since the ublk request queue is only quiesced when canceling the
first uring_cmd.
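In both cases the completion funnels through io_req_task_work_add(), which with IORING_SETUP_DEFER_TASKRUN pushes onto ctx->work_llist. A sketch of the timeout path, based on io_req_queue_tw_complete() in io_uring/io_uring.h (details vary by kernel version):

```c
/*
 * Sketch of how a cancelled request's completion is queued as task
 * work.  With IORING_SETUP_DEFER_TASKRUN, io_req_task_work_add() adds
 * to ctx->work_llist -- which nothing in io_ring_exit_work() drains
 * once the initial, one-shot move has already run.
 */
static inline void io_req_queue_tw_complete(struct io_kiocb *req, s32 res)
{
	io_req_set_res(req, res, 0);
	req->io_task_work.func = io_req_task_complete;
	io_req_task_work_add(req);
}
```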
Since io_allowed_defer_tw_run() returns false in io_ring_exit_work()
(kworker != submitter_task), io_run_local_work() is never invoked,
and the work_llist entries are never processed. This causes
io_uring_try_cancel_requests() to loop indefinitely, resulting in
100% CPU usage in kworker threads.
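The gate that keeps the exit kworker from running local work looks roughly like this (a sketch of io_allowed_defer_tw_run(); the exact form may differ by version):

```c
/*
 * Sketch of the gate.  Local (deferred) task work may only be run by
 * the task that set up the ring; in io_ring_exit_work() current is a
 * kworker, so this is always false and work_llist is never drained.
 */
static inline bool io_allowed_defer_tw_run(struct io_ring_ctx *ctx)
{
	return likely(ctx->submitter_task == current);
}
```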
Fix this by moving io_move_task_work_from_local() inside the cancel
loop, ensuring any work on work_llist is moved to the fallback list
before each cancel attempt.
Cc: stable@vger.kernel.org
Fixes: c0e0d6ba25f1 ("io_uring: add IORING_SETUP_DEFER_TASKRUN")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring/io_uring.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
```diff
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 87a87396e940..b7a077c11c21 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3003,12 +3003,12 @@ static __cold void io_ring_exit_work(struct work_struct *work)
 		mutex_unlock(&ctx->uring_lock);
 	}
 
-	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
-		io_move_task_work_from_local(ctx);
-
 	/* The SQPOLL thread never reaches this path */
-	while (io_uring_try_cancel_requests(ctx, NULL, true, false))
+	do {
+		if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
+			io_move_task_work_from_local(ctx);
 		cond_resched();
+	} while (io_uring_try_cancel_requests(ctx, NULL, true, false));
 
 	if (ctx->sq_data) {
 		struct io_sq_data *sqd = ctx->sq_data;
```
