author		Linus Torvalds <torvalds@linux-foundation.org>	2022-12-13 10:33:08 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2022-12-13 10:33:08 -0800
commit		54e60e505d6144a22c787b5be1fdce996a27be1b (patch)
tree		a0b582fa8d9de216fef4fb6320199bb055125c65 /io_uring/notif.h
parent		d523ec4c6af4314575d6ab8b52629ae3e2039a50 (diff)
parent		5d772916855f593672de55c437925daccc8ecd73 (diff)
Merge tag 'for-6.2/io_uring-2022-12-08' of git://git.kernel.dk/linux
Pull io_uring updates from Jens Axboe:
- Always ensure proper ordering in case of CQ ring overflow, which then
means we can remove some work-arounds for that (Dylan)
- Support completion batching for multishot, greatly increasing the
efficiency for those (Dylan)
- Flag epoll/eventfd wakeups done from io_uring, so that we can easily
  tell if we're recursing into io_uring again.
  Previously, this would have resulted in repeated multishot
  notifications if we had a dependency there. That could happen if an
  eventfd was registered as the ring eventfd and we multishot polled
  for events on it, or if an io_uring fd was added to epoll and
  io_uring had a multishot request for the epoll fd.
  Test cases here:
  https://git.kernel.dk/cgit/liburing/commit/?id=919755a7d0096fda08fb6d65ac54ad8d0fe027cd
  Previously these got terminated when the CQ ring eventually
  overflowed; now it's handled gracefully (me). A minimal sketch of
  the first scenario follows this list.
- Tightening of the IOPOLL based completions (Pavel)
- Optimizations of the networking zero-copy paths (Pavel)
- Various tweaks and fixes (Dylan, Pavel)
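As referenced above, here is a minimal sketch of the first recursion
scenario, written against liburing's public API. It is not taken from the
pull request or the linked test cases; it only illustrates the setup: the
ring's registered eventfd is also the target of a multishot poll on the
same ring, so before this change every posted CQE would signal the
eventfd and trigger yet another poll CQE.

	#include <liburing.h>
	#include <sys/eventfd.h>
	#include <poll.h>

	int main(void)
	{
		struct io_uring ring;
		struct io_uring_sqe *sqe;
		int efd;

		if (io_uring_queue_init(8, &ring, 0))
			return 1;

		/* Register an eventfd that io_uring signals on every CQE... */
		efd = eventfd(0, 0);
		if (efd < 0 || io_uring_register_eventfd(&ring, efd))
			return 1;

		/*
		 * ...then multishot-poll that same eventfd from the same ring.
		 * Each CQE signals the eventfd, the poll fires and posts another
		 * CQE, which signals the eventfd again: unbounded recursion.
		 * With the 6.2 change, wakeups originating from io_uring are
		 * flagged, so the eventfd is not re-signalled from inside
		 * io_uring's own completion path.
		 */
		sqe = io_uring_get_sqe(&ring);
		io_uring_prep_poll_multishot(sqe, efd, POLLIN);
		io_uring_submit(&ring);

		/*
		 * Drive the ring as usual; pre-6.2 this would produce an
		 * endless stream of poll CQEs until the CQ ring overflowed.
		 */

		io_uring_unregister_eventfd(&ring);
		io_uring_queue_exit(&ring);
		return 0;
	}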
* tag 'for-6.2/io_uring-2022-12-08' of git://git.kernel.dk/linux: (41 commits)
io_uring: keep unlock_post inlined in hot path
io_uring: don't use complete_post in kbuf
io_uring: spelling fix
io_uring: remove io_req_complete_post_tw
io_uring: allow multishot polled reqs to defer completion
io_uring: remove overflow param from io_post_aux_cqe
io_uring: add lockdep assertion in io_fill_cqe_aux
io_uring: make io_fill_cqe_aux static
io_uring: add io_aux_cqe which allows deferred completion
io_uring: allow defer completion for aux posted cqes
io_uring: defer all io_req_complete_failed
io_uring: always lock in io_apoll_task_func
io_uring: remove iopoll spinlock
io_uring: iopoll protect complete_post
io_uring: inline __io_req_complete_put()
io_uring: remove io_req_tw_post_queue
io_uring: use io_req_task_complete() in timeout
io_uring: hold locks for io_req_complete_failed
io_uring: add completion locking for iopoll
io_uring: kill io_cqring_ev_posted() and __io_cq_unlock_post()
...
Diffstat (limited to 'io_uring/notif.h')
-rw-r--r--	io_uring/notif.h	15
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/io_uring/notif.h b/io_uring/notif.h
index 5b4d710c8ca5..c88c800cd89d 100644
--- a/io_uring/notif.h
+++ b/io_uring/notif.h
@@ -13,16 +13,29 @@ struct io_notif_data {
 	struct file		*file;
 	struct ubuf_info	uarg;
 	unsigned long		account_pages;
+	bool			zc_report;
+	bool			zc_used;
+	bool			zc_copied;
 };
 
-void io_notif_flush(struct io_kiocb *notif);
 struct io_kiocb *io_alloc_notif(struct io_ring_ctx *ctx);
+void io_notif_set_extended(struct io_kiocb *notif);
 
 static inline struct io_notif_data *io_notif_to_data(struct io_kiocb *notif)
 {
 	return io_kiocb_to_cmd(notif, struct io_notif_data);
 }
 
+static inline void io_notif_flush(struct io_kiocb *notif)
+	__must_hold(&notif->ctx->uring_lock)
+{
+	struct io_notif_data *nd = io_notif_to_data(notif);
+
+	/* drop slot's master ref */
+	if (refcount_dec_and_test(&nd->uarg.refcnt))
+		io_req_task_work_add(notif);
+}
+
 static inline int io_notif_account_mem(struct io_kiocb *notif, unsigned len)
 {
 	struct io_ring_ctx *ctx = notif->ctx;
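The new inline io_notif_flush() follows a common deferred-teardown shape:
drop the notification slot's "master" reference and, only if that was the
final reference, punt the actual completion to task-work context. Below is
a minimal userspace sketch of that pattern, using C11 atomics as a
stand-in for the kernel's refcount_t and task_work machinery; every name
in it is illustrative, not kernel API.

	#include <stdatomic.h>
	#include <stdio.h>

	/*
	 * Illustrative stand-ins for struct io_kiocb and
	 * io_req_task_work_add(); only the drop-last-ref-then-defer
	 * shape matches the kernel code.
	 */
	struct notif {
		atomic_int refcnt;	/* mirrors nd->uarg.refcnt */
	};

	static void task_work_add(struct notif *n)
	{
		/* the kernel queues the notif's completion to task context */
		printf("completion deferred to task work\n");
	}

	static inline void notif_flush(struct notif *n)
	{
		/* drop the slot's master ref; the final ref queues completion */
		if (atomic_fetch_sub_explicit(&n->refcnt, 1,
					      memory_order_acq_rel) == 1)
			task_work_add(n);
	}

	int main(void)
	{
		struct notif n;

		atomic_init(&n.refcnt, 1);
		notif_flush(&n);	/* last ref dropped -> deferred completion */
		return 0;
	}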