author		Eric Biggers <ebiggers@google.com>		2019-04-09 23:46:29 -0700
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2019-05-22 07:38:37 +0200
commit		0a348941ad06718d08cbcc9c6fd4cf7758b47588
tree		5b0a8fc55edc46cbf4faec60c3cb9c1379edfed9
parent		25f1509c739f28aea0278e1eed9431052f208540
crypto: lrw - don't access already-freed walk.iv
commit aec286cd36eacfd797e3d5dab8d5d23c15d1bb5e upstream.
If the user-provided IV needs to be aligned to the algorithm's
alignmask, then skcipher_walk_virt() copies the IV into a new aligned
buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
if the caller unconditionally accesses walk.iv, it's a use-after-free.
Fix this in the LRW template by checking the return value of
skcipher_walk_virt().
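For illustration, a minimal before/after sketch of the pattern being
fixed (variable names follow xor_tweak() in crypto/lrw.c; surrounding
declarations are elided):

	/* Before: walk.iv is dereferenced even when the walk setup
	 * failed, at which point the buffer it points into may
	 * already have been freed. */
	err = skcipher_walk_virt(&w, req, false);
	iv = (__be32 *)w.iv;	/* use-after-free if err != 0 */

	/* After: propagate the error before touching walk.iv. */
	err = skcipher_walk_virt(&w, req, false);
	if (err)
		return err;

	iv = (__be32 *)w.iv;	/* safe: the walk was fully set up */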
This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation. When the extra
self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free
splat occurred during lrw(aes) testing.
Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 crypto/lrw.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/crypto/lrw.c b/crypto/lrw.c
index 08a0e458bc3e..cc5c89246193 100644
--- a/crypto/lrw.c
+++ b/crypto/lrw.c
@@ -162,8 +162,10 @@ static int xor_tweak(struct skcipher_request *req, bool second_pass)
 	}
 
 	err = skcipher_walk_virt(&w, req, false);
-	iv = (__be32 *)w.iv;
+	if (err)
+		return err;
 
+	iv = (__be32 *)w.iv;
 	counter[0] = be32_to_cpu(iv[3]);
 	counter[1] = be32_to_cpu(iv[2]);
 	counter[2] = be32_to_cpu(iv[1]);