author    Al Viro <viro@zeniv.linux.org.uk>                  2017-09-24 10:21:15 -0400
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>    2017-10-18 09:35:41 +0200
commit    ed35ded9c781b2cd86bec3b7b91fd65f310e4700 (patch)
tree      c462a0051904c6cabb6490a0941057a166eabeef /block
parent    e67dfe75b6830279ef24bfa5237c1488e2890a8d (diff)
bio_copy_user_iov(): don't ignore ->iov_offset
commit 1cfd0ddd82232804e03f3023f6a58b50dfef0574 upstream.
Since "block: support large requests in blk_rq_map_user_iov" we have
started to call it with a partially drained iter; that works fine on
the write side, but reads create a copy of the iter for completion
time, and that copy needs to take the possibility of ->iov_offset != 0
into account...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'block')
-rw-r--r--   block/bio.c   4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index cbf2db1ed284..07f287b14cff 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1171,8 +1171,8 @@ struct bio *bio_copy_user_iov(struct request_queue *q,
 	 */
 	bmd->is_our_pages = map_data ? 0 : 1;
 	memcpy(bmd->iov, iter->iov, sizeof(struct iovec) * iter->nr_segs);
-	iov_iter_init(&bmd->iter, iter->type, bmd->iov,
-		      iter->nr_segs, iter->count);
+	bmd->iter = *iter;
+	bmd->iter.iov = bmd->iov;
 
 	ret = -ENOMEM;
 	bio = bio_kmalloc(gfp_mask, nr_pages);