| author | Jens Axboe <axboe@kernel.dk> | 2024-03-18 16:13:01 -0600 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2024-04-15 08:10:25 -0600 |
| commit | a9165b83c1937eeed1f0c731468216d6371d647f (patch) | |
| tree | b7444f11aca3a27908c5b1660344efe44cbc190e /include | |
| parent | d80f940701302e84d1398ecb103083468b566a69 (diff) | |
io_uring/rw: always setup io_async_rw for read/write requests
Read/write requests try to keep everything on the stack, and then allocate
and copy if a retry is needed. This necessitates a bunch of nasty code
that deals with intermediate state.
Get rid of this, and have the prep side setup everything that is needed
upfront, which greatly simplifies the opcode handlers.
This includes adding an alloc cache for io_async_rw, to make it cheap
to handle.
In terms of cost, this should be basically free and transparent. For
the worst case of {READ,WRITE}_FIXED which didn't need it before,
performance is unaffected in the normal peak workload that is being
used to test that. Still runs at 122M IOPS.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'include')
| -rw-r--r-- | include/linux/io_uring_types.h | 1 |
1 file changed, 1 insertion, 0 deletions
```diff
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 75b46119d4c8..60d7e35fc303 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -300,6 +300,7 @@ struct io_ring_ctx {
 	struct io_hash_table	cancel_table_locked;
 	struct io_alloc_cache	apoll_cache;
 	struct io_alloc_cache	netmsg_cache;
+	struct io_alloc_cache	rw_cache;
 	/*
 	 * Any cancelable uring_cmd is added to this list in
```
