path: root/include/crypto
2021-09-22  crypto: public_key: fix overflow during implicit conversion  (zhenwei pi)

commit f985911b7bc75d5c98ed24d8aaa8b94c590f7c6a upstream.

A kernel warning like the one below can be reproduced by verifying a 256-byte data file with the keyctl command, running this script:

    RAWDATA=rawdata
    SIGDATA=sigdata

    modprobe pkcs8_key_parser

    rm -rf *.der *.pem *.pfx
    rm -rf $RAWDATA
    dd if=/dev/random of=$RAWDATA bs=256 count=1

    openssl req -nodes -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem \
      -subj "/C=CN/ST=GD/L=SZ/O=vihoo/OU=dev/CN=xx.com/emailAddress=yy@xx.com"

    KEY_ID=`openssl pkcs8 -in key.pem -topk8 -nocrypt -outform DER | keyctl \
      padd asymmetric 123 @s`

    keyctl pkey_sign $KEY_ID 0 $RAWDATA enc=pkcs1 hash=sha1 > $SIGDATA
    keyctl pkey_verify $KEY_ID 0 $RAWDATA $SIGDATA enc=pkcs1 hash=sha1

The kernel then reports:

    WARNING: CPU: 5 PID: 344556 at crypto/rsa-pkcs1pad.c:540 pkcs1pad_verify+0x160/0x190
    ...
    Call Trace:
     public_key_verify_signature+0x282/0x380
     ? software_key_query+0x12d/0x180
     ? keyctl_pkey_params_get+0xd6/0x130
     asymmetric_key_verify_signature+0x66/0x80
     keyctl_pkey_verify+0xa5/0x100
     do_syscall_64+0x35/0xb0
     entry_SYSCALL_64_after_hwframe+0x44/0xae

The root cause: in asymmetric_key_verify_signature(), the assignment '.digest_size (u8) = params->in_len (u32)' overflows the u8 field, so use u32 instead of u8 for the digest_size field. Also reorder struct public_key_signature, which saves 8 bytes on a 64-bit machine.

Cc: stable@vger.kernel.org
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
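To make the narrowing concrete, here is a minimal, self-contained illustration (plain userspace C; the real struct public_key_signature has more members and different names):

  /* Illustration of the u32 -> u8 implicit narrowing; not the kernel code. */
  #include <stdint.h>
  #include <stdio.h>

  struct sig_u8  { uint8_t  digest_size; };   /* old field type */
  struct sig_u32 { uint32_t digest_size; };   /* fixed field type */

  int main(void)
  {
          uint32_t in_len = 256;               /* length passed in from userspace */
          struct sig_u8  buggy = { .digest_size = in_len };  /* silently truncates to 0 */
          struct sig_u32 fixed = { .digest_size = in_len };  /* keeps 256 */

          printf("u8: %u  u32: %u\n", (unsigned)buggy.digest_size, (unsigned)fixed.digest_size);
          return 0;
  }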
2021-07-14  crypto: shash - avoid comparing pointers to exported functions under CFI  (Ard Biesheuvel)
[ Upstream commit 22ca9f4aaf431a9413dcc115dd590123307f274f ] crypto_shash_alg_has_setkey() is implemented by testing whether the .setkey() member of a struct shash_alg points to the default version, called shash_no_setkey(). As crypto_shash_alg_has_setkey() is a static inline, this requires shash_no_setkey() to be exported to modules. Unfortunately, when building with CFI, function pointers are routed via CFI stubs which are private to each module (or to the kernel proper) and so this function pointer comparison may fail spuriously. Let's fix this by turning crypto_shash_alg_has_setkey() into an out of line function. Cc: Sami Tolvanen <samitolvanen@google.com> Cc: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
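For reference, the pre-fix check was essentially the pointer comparison below (a paraphrased sketch, not the verbatim header). Under CFI, each module compares against its own private stub address for shash_no_setkey(), so the test can fail even when no setkey is provided:

  /* Paraphrased sketch of the old static-inline test. */
  static inline bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
  {
          return alg->setkey != shash_no_setkey;  /* address differs per CFI stub */
  }

Moving the function out of line means the comparison always runs in crypto/shash.c, where shash_no_setkey() has a single canonical address.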
2021-05-11  crypto: api - check for ERR pointers in crypto_destroy_tfm()  (Ard Biesheuvel)
[ Upstream commit 83681f2bebb34dbb3f03fecd8f570308ab8b7c2c ] Given that crypto_alloc_tfm() may return ERR pointers, and to avoid crashes on obscure error paths where such pointers are presented to crypto_destroy_tfm() (such as [0]), add an ERR_PTR check there before dereferencing the second argument as a struct crypto_tfm pointer. [0] https://lore.kernel.org/linux-crypto/000000000000de949705bc59e0f6@google.com/ Reported-by: syzbot+12cf5fbfdeba210a89dd@syzkaller.appspotmail.com Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
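The shape of the fix is an early-return guard at the top of crypto_destroy_tfm(); a sketch (teardown details elided):

  /* Sketch: refuse to dereference a NULL or ERR_PTR "tfm" allocation. */
  void crypto_destroy_tfm(void *mem, struct crypto_tfm *tfm)
  {
          if (IS_ERR_OR_NULL(mem))
                  return;
          /* ... usual teardown: exit callback, drop the alg reference, free mem ... */
  }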
2021-03-20  crypto: x86 - Regularize glue function prototypes  (Kees Cook)
commit 9c1e8836edbbaf3656bc07437b59c04be034ac4e upstream. The crypto glue performed function prototype casting via macros to make indirect calls to assembly routines. Instead of performing casts at the call sites (which trips Control Flow Integrity prototype checking), switch each prototype to a common standard set of arguments which allows the removal of the existing macros. In order to keep pointer math unchanged, internal casting between u128 pointers and u8 pointers is added. Co-developed-by: João Moreira <joao.moreira@intel.com> Signed-off-by: João Moreira <joao.moreira@intel.com> Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Cc: Ard Biesheuvel <ardb@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
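A sketch of the idea (the wrapper and asm routine names here are hypothetical, not taken from the tree): rather than casting an asm routine's prototype at the call site, every glue helper gets the same (const void *ctx, u8 *dst, const u8 *src) shape, and any u128 pointer math moves inside the wrapper.

  /* Hypothetical example of the pattern described above. */
  asmlinkage void example_ecb_enc_asm(const void *ctx, u128 *dst, const u128 *src);

  static void example_ecb_enc_glue(const void *ctx, u8 *dst, const u8 *src)
  {
          /* cast once, inside the wrapper, instead of punning the function type */
          example_ecb_enc_asm(ctx, (u128 *)dst, (const u128 *)src);
  }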
2021-03-09  crypto - shash: reduce minimum alignment of shash_desc structure  (Ard Biesheuvel)

commit 660d2062190db131d2feaf19914e90f868fe285c upstream.

Unlike many other structure types defined in the crypto API, the 'shash_desc' structure is permitted to live on the stack, which implies its contents may not be accessed by DMA masters. (This is due to the fact that the stack may be located in the vmalloc area, which requires a different virtual-to-physical translation than the one implemented by the DMA subsystem.)

Our definition of CRYPTO_MINALIGN_ATTR is based on ARCH_KMALLOC_MINALIGN, which may take DMA constraints into account on architectures that support non-cache coherent DMA such as ARM and arm64. In this case, the value is chosen to reflect the largest cacheline size in the system, in order to ensure that explicit cache maintenance as required by non-coherent DMA masters does not affect adjacent, unrelated slab allocations. On arm64, this value is currently set at 128 bytes.

This means that applying CRYPTO_MINALIGN_ATTR to struct shash_desc is both unnecessary (as it is never used for DMA) and undesirable, given that it wastes stack space (on arm64, performing the alignment costs 112 bytes in the worst case, and the hole between the 'tfm' and '__ctx' members takes up another 120 bytes, resulting in an increased stack footprint of up to 232 bytes). So instead, let's switch to the minimum SLAB alignment, which does not take DMA constraints into account.

Note that this is a no-op for x86.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
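The resulting structure is roughly the following (editorial sketch, not the exact patch):

  /* Sketch of struct shash_desc after the change. */
  struct shash_desc {
          struct crypto_shash *tfm;
          /* was: void *__ctx[] CRYPTO_MINALIGN_ATTR;  (128-byte alignment on arm64) */
          void *__ctx[] __aligned(ARCH_SLAB_MINALIGN); /* stack-friendly, no DMA concern */
  };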
2020-08-21  crypto: algif_aead - Only wake up when ctx->more is zero  (Herbert Xu)
[ Upstream commit f3c802a1f30013f8f723b62d7fa49eb9e991da23 ] AEAD does not support partial requests so we must not wake up while ctx->more is set. In order to distinguish between the case of no data sent yet and a zero-length request, a new init flag has been added to ctx. SKCIPHER has also been modified to ensure that at least a block of data is available if there is more data to come. Fixes: 2d97591ef43d ("crypto: af_alg - consolidation of...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-07-09  crypto: af_alg - fix use-after-free in af_alg_accept() due to bh_lock_sock()  (Herbert Xu)
commit 34c86f4c4a7be3b3e35aa48bd18299d4c756064d upstream. The locking in af_alg_release_parent is broken as the BH socket lock can only be taken if there is a code-path to handle the case where the lock is owned by process-context. Instead of adding such handling, we can fix this by changing the ref counts to atomic_t. This patch also modifies the main refcnt to include both normal and nokey sockets. This way we don't have to fudge the nokey ref count when a socket changes from nokey to normal. Credits go to Mauricio Faria de Oliveira who diagnosed this bug and sent a patch for it: https://lore.kernel.org/linux-crypto/20200605161657.535043-1-mfo@canonical.com/ Reported-by: Brian Moyles <bmoyles@netflix.com> Reported-by: Mauricio Faria de Oliveira <mfo@canonical.com> Fixes: 37f96694cf73 ("crypto: af_alg - Use bh_lock_sock in...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-01-17  crypto: algif_skcipher - Use chunksize instead of blocksize  (Herbert Xu)

commit 5b0fe9552336338acb52756daf65dd7a4eeca73f upstream.

When algif_skcipher does a partial operation it always processes data that is a multiple of blocksize. However, for algorithms such as CTR this is wrong because even though it can process any number of bytes overall, the partial block must come at the very end and not in the middle.

This is exactly what chunksize is meant to describe, so this patch changes blocksize to chunksize.

Fixes: 8ff590903d5f ("crypto: algif_skcipher - User-space...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
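As a concrete illustration (values as for ctr(aes)): the advertised blocksize is 1 because CTR is a stream cipher, but the chunksize is the 16-byte AES block, so a partial operation must stop on a 16-byte boundary or the keystream position is lost across the split. A hedged sketch, assuming the crypto_skcipher_chunksize() accessor and a hypothetical helper name:

  /* Sketch: how much of a partial request may be processed now. */
  static unsigned int af_alg_usable_bytes(struct crypto_skcipher *tfm, unsigned int len)
  {
          unsigned int chunk = crypto_skcipher_chunksize(tfm); /* 16 for ctr(aes), not 1 */

          return len - (len % chunk); /* leave the trailing partial block for the final call */
  }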
2019-09-27  Merge branch 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity  (Linus Torvalds)

Pull integrity updates from Mimi Zohar:
 "The major feature this time is IMA support for measuring and appraising appended file signatures. In addition are a couple of bug fixes and code cleanup to use struct_size().

  In addition to the PE/COFF and IMA xattr signatures, the kexec kernel image may be signed with an appended signature, using the same scripts/sign-file tool that is used to sign kernel modules. Similarly, the initramfs may contain an appended signature.

  This contained a lot of refactoring of the existing appended signature verification code, so that IMA could retain the existing framework of calculating the file hash once, storing it in the IMA measurement list and extending the TPM, verifying the file's integrity based on a file hash or signature (eg. xattrs), and adding an audit record containing the file hash, all based on policy. (The IMA support for appended signatures patch set was posted and reviewed 11 times.)

  The support for appended signatures paves the way for adding other signature verification methods, such as fs-verity, based on a single system-wide policy. The file hash used for verifying the signature and the signature, itself, can be included in the IMA measurement list"

* 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity:
  ima: ima_api: Use struct_size() in kzalloc()
  ima: use struct_size() in kzalloc()
  sefltest/ima: support appended signatures (modsig)
  ima: Fix use after free in ima_read_modsig()
  MODSIGN: make new include file self contained
  ima: fix freeing ongoing ahash_request
  ima: always return negative code for error
  ima: Store the measurement again when appraising a modsig
  ima: Define ima-modsig template
  ima: Collect modsig
  ima: Implement support for module-style appended signatures
  ima: Factor xattr_verify() out of ima_appraise_measurement()
  ima: Add modsig appraise_type option for module-style appended signatures
  integrity: Select CONFIG_KEYS instead of depending on it
  PKCS#7: Introduce pkcs7_get_digest()
  PKCS#7: Refactor verify_pkcs7_signature()
  MODSIGN: Export module signature definitions
  ima: initialize the "template" field with the default template
2019-09-09  crypto: skcipher - add the ability to abort a skcipher walk  (Ard Biesheuvel)
After starting a skcipher walk, the only way to ensure that all resources it has tied up are released is to complete it. In some cases, it will be useful to be able to abort a walk cleanly after it has started, so add this ability to the skcipher walk API. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-05  crypto: sha256 - Remove sha256/224_init code duplication  (Hans de Goede)
lib/crypto/sha256.c and include/crypto/sha256_base.h define 99% identical functions to init a sha256_state struct for sha224 or sha256 use. This commit moves the functions from lib/crypto/sha256.c to include/crypto/sha.h (making them static inline) and makes the sha224/256_base_init static inline functions from include/crypto/sha256_base.h wrappers around the now also static inline include/crypto/sha.h functions. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
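The shared inline ends up along these lines (editorial sketch; SHA256_H0..H7 are the standard SHA-256 initial hash values already provided by the header, and a sha224 twin uses the SHA224_H0..H7 constants):

  /* Sketch of the single shared initializer. */
  static inline void sha256_init(struct sha256_state *sctx)
  {
          sctx->state[0] = SHA256_H0; sctx->state[1] = SHA256_H1;
          sctx->state[2] = SHA256_H2; sctx->state[3] = SHA256_H3;
          sctx->state[4] = SHA256_H4; sctx->state[5] = SHA256_H5;
          sctx->state[6] = SHA256_H6; sctx->state[7] = SHA256_H7;
          sctx->count = 0;
  }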
2019-09-05  crypto: sha256 - Merge crypto/sha256.h into crypto/sha.h  (Hans de Goede)
The generic sha256 implementation from lib/crypto/sha256.c uses data structs defined in crypto/sha.h, so lets move the function prototypes there too. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-22  crypto: sha256 - Add sha224 support to sha256 library code  (Hans de Goede)
Add sha224 support to the lib/crypto/sha256 library code. This will allow us to replace both the sha256 and sha224 parts of crypto/sha256_generic.c when we remove the code duplication in further patches in this series. Suggested-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-22  crypto: sha256 - Make lib/crypto/sha256.c suitable for generic use  (Hans de Goede)

Before this commit lib/crypto/sha256.c has only been used in the s390 and x86 purgatory code, make it suitable for generic use:
* Export interesting symbols
* Add -D__DISABLE_EXPORTS to CFLAGS_sha256.o for purgatory builds to avoid the exports for the purgatory builds
* Add to lib/crypto/Makefile and crypto/Kconfig

Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-22  crypto: sha256 - Move lib/sha256.c to lib/crypto  (Hans de Goede)
Generic crypto implementations belong under lib/crypto not directly in lib, likewise the header should be in include/crypto, not include/linux. Note that the code in lib/crypto/sha256.c is not yet available for generic use after this commit, it is still only used by the s390 and x86 purgatory code. Making it suitable for generic use is done in further patches in this series. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-22  crypto: des - remove now unused __des3_ede_setkey()  (Ard Biesheuvel)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-22  crypto: des - split off DES library from generic DES cipher driver  (Ard Biesheuvel)
Another one for the cipher museum: split off DES core processing into a separate module so other drivers (mostly for crypto accelerators) can reuse the code without pulling in the generic DES cipher itself. This will also permit the cipher interface to be made private to the crypto API itself once we move the only user in the kernel (CIFS) to this library interface. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-22  crypto: des - remove unused function  (Ard Biesheuvel)
Remove the old DES3 verification functions that are no longer used. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-22  crypto: des/3des_ede - add new helpers to verify keys  (Ard Biesheuvel)
The recently added helper routine to perform key strength validation of triple DES keys is slightly inadequate, since it comes in two versions, neither of which are highly useful for anything other than skciphers (and many drivers still use the older blkcipher interfaces). So let's add a new helper and, considering that this is a helper function that is only intended to be used by crypto code itself, put it in a new des.h header under crypto/internal. While at it, implement a similar helper for single DES, so that we can start replacing the pattern of calling des_ekey() into a temp buffer that occurs in many drivers in drivers/crypto. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-09  crypto: aes - helper function to validate key length for AES algorithms  (Iuliana Prodan)
Add inline helper function to check key length for AES algorithms. The key can be 128, 192 or 256 bits size. This function is used in the generic aes implementation. Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
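The check itself is just a switch over the three legal key sizes; a sketch of the shape such a helper takes (mirroring what include/crypto/aes.h is described as providing):

  /* Sketch: AES keys are exactly 16, 24 or 32 bytes. */
  static inline int aes_check_keylen(unsigned int keylen)
  {
          switch (keylen) {
          case AES_KEYSIZE_128:
          case AES_KEYSIZE_192:
          case AES_KEYSIZE_256:
                  return 0;
          default:
                  return -EINVAL;
          }
  }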
2019-08-09  crypto: gcm - helper functions for assoclen/authsize check  (Iuliana Prodan)
Added inline helper functions to check authsize and assoclen for gcm, rfc4106 and rfc4543. These are used in the generic implementation of gcm, rfc4106 and rfc4543. Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-05  PKCS#7: Introduce pkcs7_get_digest()  (Thiago Jung Bauermann)
IMA will need to access the digest of the PKCS7 message (as calculated by the kernel) before the signature is verified, so introduce pkcs7_get_digest() for that purpose. Also, modify pkcs7_digest() to detect when the digest was already calculated so that it doesn't have to do redundant work. Verifying that sinfo->sig->digest isn't NULL is sufficient because both places which allocate sinfo->sig (pkcs7_parse_message() and pkcs7_note_signed_info()) use kzalloc() so sig->digest is always initialized to zero. Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> Reviewed-by: Mimi Zohar <zohar@linux.ibm.com> Cc: David Howells <dhowells@redhat.com> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
2019-08-02  crypto: api - Remove redundant #ifdef in crypto_yield()  (Thomas Gleixner)
While looking at CONFIG_PREEMPT dependencies treewide the #ifdef in crypto_yield() matched. CONFIG_PREEMPT and CONFIG_PREEMPT_VOLUNTARY are mutually exclusive so the extra !CONFIG_PREEMPT conditional is redundant. cond_resched() has only an effect when CONFIG_PREEMPT_VOLUNTARY is set, otherwise it's a stub which the compiler optimizes out. Remove the whole conditional. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-crypto@vger.kernel.org Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
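After the cleanup the helper is essentially (sketch):

  /* Sketch: yield only when the caller said sleeping is allowed. */
  static inline void crypto_yield(u32 flags)
  {
          if (flags & CRYPTO_TFM_REQ_MAY_SLEEP)
                  cond_resched();
  }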
2019-08-02  crypto: user - fix potential warnings in cryptouser.h  (Masahiro Yamada)

Function definitions in headers are usually marked as 'static inline'. Since 'inline' is missing for crypto_reportstat(), if it were not referenced from a .c file that includes this header, it would produce a warning.

Also, 'struct crypto_user_alg' is not declared in this header. I included <linux/cryptouser.h> instead of adding the forward declaration as suggested [1].

Detected by compile-testing this header as a standalone unit:

  ./include/crypto/internal/cryptouser.h:6:44: warning: ‘struct crypto_user_alg’ declared inside parameter list will not be visible outside of this definition or declaration
   struct crypto_alg *crypto_alg_match(struct crypto_user_alg *p, int exact);
                                              ^~~~~~~~~~~~~~~
  ./include/crypto/internal/cryptouser.h:11:12: warning: ‘crypto_reportstat’ defined but not used [-Wunused-function]
   static int crypto_reportstat(struct sk_buff *in_skb, struct nlmsghdr *in_nlh, struct nlattr **attrs)
              ^~~~~~~~~~~~~~~~~

[1] https://lkml.org/lkml/2019/6/13/1121

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-08-02  crypto: add header include guards  (Masahiro Yamada)
Add header include guards in case they are included multiple times. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
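The pattern added is the standard guard; for example (the guard macro name below is illustrative):

  /* Standard include guard; the macro name is illustrative. */
  #ifndef _CRYPTO_EXAMPLE_H
  #define _CRYPTO_EXAMPLE_H

  /* ... declarations ... */

  #endif /* _CRYPTO_EXAMPLE_H */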
2019-07-27  crypto: ghash - add comment and improve help text  (Eric Biggers)
To help avoid confusion, add a comment to ghash-generic.c which explains the convention that the kernel's implementation of GHASH uses. Also update the Kconfig help text and module descriptions to call GHASH a "hash function" rather than a "message digest", since the latter normally means a real cryptographic hash function, which GHASH is not. Cc: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26  crypto: user - make NETLINK_CRYPTO work inside netns  (Ondrej Mosnacek)

Currently, NETLINK_CRYPTO works only in the init network namespace. It doesn't make much sense to cut it out of the other network namespaces, so do the minor plumbing work necessary to make it work in any network namespace. Code inspired by net/core/sock_diag.c.

Tested using kcapi-dgst from libkcapi [1]:

Before:
  # unshare -n kcapi-dgst -c sha256 </dev/null | wc -c
  libkcapi - Error: Netlink error: sendmsg failed
  libkcapi - Error: Netlink error: sendmsg failed
  libkcapi - Error: NETLINK_CRYPTO: cannot obtain cipher information for hmac(sha512) (is required crypto_user.c patch missing? see documentation)
  0

After:
  # unshare -n kcapi-dgst -c sha256 </dev/null | wc -c
  32

[1] https://github.com/smuellerDD/libkcapi

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26  crypto: morus - remove generic and x86 implementations  (Ard Biesheuvel)
MORUS was not selected as a winner in the CAESAR competition, which is not surprising since it is considered to be cryptographically broken [0]. (Note that this is not an implementation defect, but a flaw in the underlying algorithm). Since it is unlikely to be in use currently, let's remove it before we're stuck with it. [0] https://eprint.iacr.org/2019/172.pdf Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26  crypto: lib/aes - export sbox and inverse sbox  (Ard Biesheuvel)
There are a few copies of the AES S-boxes floating around, so export the ones from the AES library so that we can reuse them in other modules. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26  crypto: aes-generic - unexport last-round AES tables  (Ard Biesheuvel)
The versions of the AES lookup tables that are only used during the last round are never used outside of the driver, so there is no need to export their symbols. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26  crypto: ctr - add helper for performing a CTR encryption walk  (Ard Biesheuvel)
Add a static inline helper modeled after crypto_cbc_encrypt_walk() that can be reused for SIMD algorithms that need to implement a non-SIMD fallback for performing CTR encryption. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26  crypto: aes-generic - drop key expansion routine in favor of library version  (Ard Biesheuvel)
Drop aes-generic's version of crypto_aes_expand_key(), and switch to the key expansion routine provided by the AES library. AES key expansion is not performance critical, and it is better to have a single version shared by all AES implementations. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26  crypto: aes - create AES library based on the fixed time AES code  (Ard Biesheuvel)

Take the existing small footprint and mostly time invariant C code and turn it into an AES library that can be used for non-performance critical, casual use of AES, and as a fallback for, e.g., SIMD code that needs a secondary path that can be taken in contexts where the SIMD unit is off limits (e.g., in hard interrupts taken from kernel context).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
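A hedged usage sketch, assuming the aes_expandkey()/aes_encrypt() entry points of include/crypto/aes.h:

  /* Sketch: encrypt one 16-byte block with the AES library, no crypto API tfm needed. */
  #include <crypto/aes.h>
  #include <linux/string.h>

  static int encrypt_one_block(const u8 *key, u8 *dst, const u8 *src)
  {
          struct crypto_aes_ctx ctx;
          int err;

          err = aes_expandkey(&ctx, key, AES_KEYSIZE_256);
          if (err)
                  return err;

          aes_encrypt(&ctx, dst, src);         /* fixed-time C implementation */
          memzero_explicit(&ctx, sizeof(ctx)); /* wipe the expanded key schedule */
          return 0;
  }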
2019-07-08  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)

Pull crypto updates from Herbert Xu:
 "Here is the crypto update for 5.3:

  API:
   - Test shash interface directly in testmgr
   - cra_driver_name is now mandatory

  Algorithms:
   - Replace arc4 crypto_cipher with library helper
   - Implement 5 way interleave for ECB, CBC and CTR on arm64
   - Add xxhash
   - Add continuous self-test on noise source to drbg
   - Update jitter RNG

  Drivers:
   - Add support for SHA204A random number generator
   - Add support for 7211 in iproc-rng200
   - Fix fuzz test failures in inside-secure
   - Fix fuzz test failures in talitos
   - Fix fuzz test failures in qat"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (143 commits)
  crypto: stm32/hash - remove interruptible condition for dma
  crypto: stm32/hash - Fix hmac issue more than 256 bytes
  crypto: stm32/crc32 - rename driver file
  crypto: amcc - remove memset after dma_alloc_coherent
  crypto: ccp - Switch to SPDX license identifiers
  crypto: ccp - Validate the the error value used to index error messages
  crypto: doc - Fix formatting of new crypto engine content
  crypto: doc - Add parameter documentation
  crypto: arm64/aes-ce - implement 5 way interleave for ECB, CBC and CTR
  crypto: arm64/aes-ce - add 5 way interleave routines
  crypto: talitos - drop icv_ool
  crypto: talitos - fix hash on SEC1.
  crypto: talitos - move struct talitos_edesc into talitos.h
  lib/scatterlist: Fix mapping iterator when sg->offset is greater than PAGE_SIZE
  crypto/NX: Set receive window credits to max number of CRBs in RxFIFO
  crypto: asymmetric_keys - select CRYPTO_HASH where needed
  crypto: serpent - mark __serpent_setkey_sbox noinline
  crypto: testmgr - dynamically allocate crypto_shash
  crypto: testmgr - dynamically allocate testvec_config
  crypto: talitos - eliminate unneeded 'done' functions at build time
  ...
2019-06-20  crypto: arc4 - refactor arc4 core code into separate library  (Ard Biesheuvel)
Refactor the core rc4 handling so we can move most users to a library interface, permitting us to drop the cipher interface entirely in a future patch. This is part of an effort to simplify the crypto API and improve its robustness against incorrect use. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
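A usage sketch, assuming the arc4_setkey()/arc4_crypt() entry points exported by the new library header (include/crypto/arc4.h):

  /* Sketch: keystream-XOR a buffer in place, WEP/TKIP style. */
  #include <crypto/arc4.h>
  #include <linux/string.h>

  static int rc4_crypt_buf(const u8 *key, unsigned int keylen, u8 *buf, unsigned int len)
  {
          struct arc4_ctx ctx;
          int err;

          err = arc4_setkey(&ctx, key, keylen);  /* key must be 1..256 bytes */
          if (err)
                  return err;

          arc4_crypt(&ctx, buf, buf, len);       /* in-place operation is fine */
          memzero_explicit(&ctx, sizeof(ctx));
          return 0;
  }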
2019-06-20  Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Herbert Xu)
Merge crypto tree to pick up vmx changes.
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500  (Thomas Gleixner)
Based on 2 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation # extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Enrico Weigelt <info@metux.net> Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234  (Thomas Gleixner)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 503 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Enrico Weigelt <info@metux.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-13  crypto: chacha - constify ctx and iv arguments  (Eric Biggers)
Constify the ctx and iv arguments to crypto_chacha_init() and the various chacha*_stream_xor() functions. This makes it clear that they are not modified. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-06-13  crypto: skcipher - make chunksize and walksize accessors internal  (Eric Biggers)
The 'chunksize' and 'walksize' properties of skcipher algorithms are implementation details that users of the skcipher API should not be looking at. So move their accessor functions from <crypto/skcipher.h> to <crypto/internal/skcipher.h>. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-06-13  crypto: skcipher - un-inline encrypt and decrypt functions  (Eric Biggers)
crypto_skcipher_encrypt() and crypto_skcipher_decrypt() have grown to be more than a single indirect function call. They now also check whether a key has been set, and with CONFIG_CRYPTO_STATS=y they also update the crypto statistics. That can add up to a lot of bloat at every call site. Moreover, these always involve a function call anyway, which greatly limits the benefits of inlining. So change them to be non-inline. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-06-13  crypto: aead - un-inline encrypt and decrypt functions  (Eric Biggers)
crypto_aead_encrypt() and crypto_aead_decrypt() have grown to be more than a single indirect function call. They now also check whether a key has been set, the decryption side checks whether the input is at least as long as the authentication tag length, and with CONFIG_CRYPTO_STATS=y they also update the crypto statistics. That can add up to a lot of bloat at every call site. Moreover, these always involve a function call anyway, which greatly limits the benefits of inlining. So change them to be non-inline. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-06-05  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 335  (Thomas Gleixner)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms and conditions of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 51 franklin st fifth floor boston ma 02110 1301 usa extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 111 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190530000436.567572064@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-30  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152  (Thomas Gleixner)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 3029 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-30  crypto: algapi - remove crypto_tfm_in_queue()  (Eric Biggers)
Remove the crypto_tfm_in_queue() function, which is unused. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-05-30  crypto: cryptd - move kcrypto_wq into cryptd  (Eric Biggers)
kcrypto_wq is only used by cryptd, so move it into cryptd.c and change the workqueue name from "crypto" to "cryptd". Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-05-24  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 36  (Thomas Gleixner)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public licence as published by the free software foundation either version 2 of the licence or at your option any later version extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 114 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190520170857.552531963@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-23  crypto: drbg - add FIPS 140-2 CTRNG for noise source  (Stephan Mueller)
FIPS 140-2 section 4.9.2 requires a continuous self test of the noise source. Up to kernel 4.8 drivers/char/random.c provided this continuous self test. Afterwards it was moved to a location that is inconsistent with the FIPS 140-2 requirements. The relevant patch was e192be9d9a30555aae2ca1dc3aad37cba484cd4a . Thus, the FIPS 140-2 CTRNG is added to the DRBG when it obtains the seed. This patch resurrects the function drbg_fips_continous_test that existed some time ago and applies it to the noise sources. The patch that removed the drbg_fips_continous_test was b3614763059b82c26bdd02ffcb1c016c1132aad0 . The Jitter RNG implements its own FIPS 140-2 self test and thus does not need to be subjected to the test in the DRBG. The patch contains a tiny fix to ensure proper zeroization in case of an error during the Jitter RNG data gathering. Signed-off-by: Stephan Mueller <smueller@chronox.de> Reviewed-by: Yann Droneaud <ydroneaud@opteya.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-05-17  crypto: hash - fix incorrect HASH_MAX_DESCSIZE  (Eric Biggers)

The "hmac(sha3-224-generic)" algorithm has a descsize of 368 bytes, which is greater than HASH_MAX_DESCSIZE (360) which is only enough for sha3-224-generic. The check in shash_prepare_alg() doesn't catch this because the HMAC template doesn't set descsize on the algorithms, but rather sets it on each individual HMAC transform.

This causes a stack buffer overflow when SHASH_DESC_ON_STACK() is used with hmac(sha3-224-generic).

Fix it by increasing HASH_MAX_DESCSIZE to the real maximum. Also add a sanity check to hmac_init().

This was detected by the improved crypto self-tests in v5.2, by loading the tcrypt module with CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y enabled. I didn't notice this bug when I ran the self-tests by requesting the algorithms via AF_ALG (i.e., not using tcrypt), probably because the stack layout differs in the two cases and that made a difference here.

KASAN report:

  BUG: KASAN: stack-out-of-bounds in memcpy include/linux/string.h:359 [inline]
  BUG: KASAN: stack-out-of-bounds in shash_default_import+0x52/0x80 crypto/shash.c:223
  Write of size 360 at addr ffff8880651defc8 by task insmod/3689

  CPU: 2 PID: 3689 Comm: insmod Tainted: G E 5.1.0-10741-g35c99ffa20edd #11
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
  Call Trace:
   __dump_stack lib/dump_stack.c:77 [inline]
   dump_stack+0x86/0xc5 lib/dump_stack.c:113
   print_address_description+0x7f/0x260 mm/kasan/report.c:188
   __kasan_report+0x144/0x187 mm/kasan/report.c:317
   kasan_report+0x12/0x20 mm/kasan/common.c:614
   check_memory_region_inline mm/kasan/generic.c:185 [inline]
   check_memory_region+0x137/0x190 mm/kasan/generic.c:191
   memcpy+0x37/0x50 mm/kasan/common.c:125
   memcpy include/linux/string.h:359 [inline]
   shash_default_import+0x52/0x80 crypto/shash.c:223
   crypto_shash_import include/crypto/hash.h:880 [inline]
   hmac_import+0x184/0x240 crypto/hmac.c:102
   hmac_init+0x96/0xc0 crypto/hmac.c:107
   crypto_shash_init include/crypto/hash.h:902 [inline]
   shash_digest_unaligned+0x9f/0xf0 crypto/shash.c:194
   crypto_shash_digest+0xe9/0x1b0 crypto/shash.c:211
   generate_random_hash_testvec.constprop.11+0x1ec/0x5b0 crypto/testmgr.c:1331
   test_hash_vs_generic_impl+0x3f7/0x5c0 crypto/testmgr.c:1420
   __alg_test_hash+0x26d/0x340 crypto/testmgr.c:1502
   alg_test_hash+0x22e/0x330 crypto/testmgr.c:1552
   alg_test.part.7+0x132/0x610 crypto/testmgr.c:4931
   alg_test+0x1f/0x40 crypto/testmgr.c:4952

Fixes: b68a7ec1e9a3 ("crypto: hash - Remove VLA usage")
Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-25  crypto: shash - remove shash_desc::flags  (Eric Biggers)

The flags field in 'struct shash_desc' never actually does anything. The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP. However, no shash algorithm ever sleeps, making this flag a no-op.

With this being the case, inevitably some users who can't sleep wrongly pass MAY_SLEEP. These would all need to be fixed if any shash algorithm actually started sleeping. For example, the shash_ahash_*() functions, which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP from the ahash API to the shash API. However, the shash functions are called under kmap_atomic(), so actually they're assumed to never sleep.

Even if it turns out that some users do need preemption points while hashing large buffers, we could easily provide a helper function crypto_shash_update_large() which divides the data into smaller chunks and calls crypto_shash_update() and cond_resched() for each chunk. It's not necessary to have a flag in 'struct shash_desc', nor is it necessary to make individual shash algorithms aware of this at all.

Therefore, remove shash_desc::flags, and document that the crypto_shash_*() functions can be called from any context.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
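Spelling out the helper the message describes (hypothetical: crypto_shash_update_large() does not exist in the tree, and the 4096-byte chunk size is arbitrary):

  /* Hypothetical helper sketched from the commit message: hash a large buffer
   * in chunks and yield between chunks, removing any need for a MAY_SLEEP flag. */
  static int crypto_shash_update_large(struct shash_desc *desc,
                                       const u8 *data, unsigned int len)
  {
          while (len) {
                  unsigned int n = min_t(unsigned int, len, 4096); /* arbitrary chunk */
                  int err = crypto_shash_update(desc, data, n);

                  if (err)
                          return err;
                  data += n;
                  len -= n;
                  cond_resched();
          }
          return 0;
  }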