author    Christian Brauner <brauner@kernel.org>  2026-03-09 15:28:27 +0100
committer Christian Brauner <brauner@kernel.org>  2026-03-10 10:29:18 +0100
commit    1b63f91d1c9013629fb2005ace48b7aeead32330 (patch)
tree      e25179eec2b5495581a2028120d92ad878ae7162 /tools/testing/selftests/syscall_user_dispatch
parent    969ebebc30fff0b9756130e3b4f6f3036e7c53ab (diff)
parent    6bbb4d96f797d42d4baef1691a27a62275727146 (diff)
Merge patch series "support file system generated / verified integrity information v4"
Christoph Hellwig <hch@lst.de> says:
This series adds support to generate and verify integrity information
(aka T10 PI) in the file system, instead of the automatic below the
covers support that is currently used.
There are two reasons for this:
a) to increase the protection envelope. Right now this is just a
minor step from the bottom of the block layer to the file system,
but it is required to support io_uring integrity data passthrough in
the file system similar to the currently existing support for block
devices, which will follow next. It also allows the file system to
directly see the integrity error and act upon it, e.g. when using
RAID either integrated (as in btrfs) or by supporting reading
redundant copies through the block layer.
b) to make the PI processing more efficient. This is primarily a
concern for reads, where the block layer auto PI has to schedule a
work item for each bio, and the file system then has to do it again
for bounce buffering. Additionally, the new iomap post-I/O
workqueue handling is a lot more efficient, as it supports merging
and avoids workqueue scheduling storms.
The implementation is based on refactoring the existing block layer PI
code to be reusable for this use case, and then adding relatively small
wrappers for the file system use case. These are then used in iomap
to implement the semantics, and wired up in XFS with a small amount of
glue code.
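For background on what "integrity information" means here: with T10 PI, each
protection interval (typically 512 or 4096 bytes of data) carries an 8-byte
tuple consisting of a guard tag (CRC-16/T10-DIF of the interval's data), an
application tag, and a reference tag (for Type 1 protection, the low 32 bits
of the target LBA). The following Python sketch illustrates how such tuples
are generated; the function names are purely illustrative and are not kernel
APIs from this series:

```python
import struct

def crc16_t10dif(data: bytes, crc: int = 0) -> int:
    # CRC-16/T10-DIF: poly 0x8BB7, init 0, no reflection, no final XOR.
    # This is the guard-tag CRC defined by T10 DIF (kernel: crc_t10dif()).
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def generate_pi(data: bytes, first_lba: int, interval: int = 512) -> bytes:
    # Emit one 8-byte Type 1 PI tuple per protection interval:
    # guard tag (CRC of the data), app tag (0 here), ref tag (low 32
    # bits of the LBA, incrementing per interval). Big-endian on the wire.
    out = bytearray()
    for i in range(0, len(data), interval):
        guard = crc16_t10dif(data[i:i + interval])
        ref = (first_lba + i // interval) & 0xFFFFFFFF
        out += struct.pack(">HHI", guard, 0, ref)
    return bytes(out)
```

Verification on the read side is the mirror image: recompute the guard tag
over each interval and compare it (and the expected ref tag) against the
tuple returned by the device; a mismatch is what the file system can now
observe directly and act upon.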
Compared to the baseline (iomap-bounce branch), this does not change
performance for writes, but increases read performance up to 15% for 4k
I/O, with the benefit decreasing with larger I/O sizes as even the
baseline maxes out the device quickly on my older enterprise SSD.
Anuj Gupta also measured a large decrease in QD1 latency on an Intel
Optane device for small I/O sizes, but also an increase for very large
ones.
Note that the upcoming XFS fsverity support also depends on some
infrastructure in this series.
* patches from https://patch.msgid.link/20260223132021.292832-1-hch@lst.de:
xfs: support T10 protection information
iomap: support T10 protection information
iomap: support ioends for buffered reads
iomap: add a bioset pointer to iomap_read_folio_ops
ntfs3: remove copy and pasted iomap code
iomap: allow file systems to hook into buffered read bio submission
iomap: only call into ->submit_read when there is a read_ctx
iomap: pass the iomap_iter to ->submit_read
iomap: refactor iomap_bio_read_folio_range
Link: https://patch.msgid.link/20260223132021.292832-1-hch@lst.de
Signed-off-by: Christian Brauner <brauner@kernel.org>
Diffstat (limited to 'tools/testing/selftests/syscall_user_dispatch')
0 files changed, 0 insertions, 0 deletions
