path: root/include/crypto
2020-01-09  crypto: algapi - fold crypto_init_spawn() into crypto_grab_spawn()  (Eric Biggers)

Now that crypto_init_spawn() is only called by crypto_grab_spawn(), simplify things by moving its functionality into crypto_grab_spawn(). In the process of doing this, also be more consistent about when the spawn and instance are updated, and remove the crypto_spawn::dropref flag since now it's always set.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: ahash - unexport crypto_ahash_type  (Eric Biggers)

Now that all the templates that need ahash spawns have been converted to use crypto_grab_ahash() rather than look up the algorithm directly, crypto_ahash_type is no longer used outside of ahash.c. Make it static.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: algapi - remove obsoleted instance creation helpers  (Eric Biggers)

Remove lots of helper functions that were previously used for instantiating crypto templates, but are now unused:

- crypto_get_attr_alg() and similar functions looked up an inner algorithm directly from a template parameter. These were replaced with getting the algorithm's name, then calling crypto_grab_*().

- crypto_init_spawn2() and similar functions initialized a spawn, given an algorithm. Similarly, these were replaced with crypto_grab_*().

- crypto_alloc_instance() and similar functions allocated an instance with a single spawn, given the inner algorithm. These aren't useful anymore since crypto_grab_*() need the instance allocated first.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

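For illustration, the old-versus-new instantiation pattern looks roughly like this (a hedged sketch: spawn, inst, tb, and mask stand for the usual locals of a template's create function, and crypto_spawn_cipher_alg() is the accessor from the cipher-spawn series):

  /* Old pattern: look up the inner algorithm directly, then init the spawn. */
  alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER, CRYPTO_ALG_TYPE_MASK);
  if (IS_ERR(alg))
          return PTR_ERR(alg);
  err = crypto_init_spawn(spawn, alg, inst, CRYPTO_ALG_TYPE_MASK);
  crypto_mod_put(alg);

  /* New pattern: grab the spawn by name; the spawn itself holds the
   * algorithm reference until it is dropped. */
  err = crypto_grab_cipher(spawn, inst, crypto_attr_alg_name(tb[1]), 0, mask);
  if (err)
          return err;
  alg = crypto_spawn_cipher_alg(spawn);
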
2020-01-09  crypto: cipher - make crypto_spawn_cipher() take a crypto_cipher_spawn  (Eric Biggers)

Now that all users of single-block cipher spawns have been converted to use 'struct crypto_cipher_spawn' rather than the less specifically typed 'struct crypto_spawn', make crypto_spawn_cipher() take a pointer to a 'struct crypto_cipher_spawn' rather than a 'struct crypto_spawn'.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: skcipher - use crypto_grab_cipher() and simplify error paths  (Eric Biggers)

Make skcipher_alloc_instance_simple() use the new function crypto_grab_cipher() to initialize its cipher spawn. This is needed to make all spawns be initialized in a consistent way.

Also simplify the error handling by taking advantage of crypto_drop_*() now accepting (as a no-op) spawns that haven't been initialized yet, and by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: cipher - introduce crypto_cipher_spawn and crypto_grab_cipher()  (Eric Biggers)

Currently, "cipher" (single-block cipher) spawns are usually initialized by using crypto_get_attr_alg() to look up the algorithm, then calling crypto_init_spawn(). In one case, crypto_grab_spawn() is used directly.

The former way is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity.

The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now.

Also, the cipher spawns are not strongly typed; e.g., the API requires that the user manually specify the flags CRYPTO_ALG_TYPE_CIPHER and CRYPTO_ALG_TYPE_MASK. Though the "cipher" algorithm type itself isn't yet strongly typed, we can start by making the spawns strongly typed.

So, let's introduce a new 'struct crypto_cipher_spawn', and functions crypto_grab_cipher() and crypto_drop_cipher() to grab and drop them.

Later patches will convert all cipher spawns to use these, then make crypto_spawn_cipher() take 'struct crypto_cipher_spawn' as well, instead of a bare 'struct crypto_spawn' as it currently does.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

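A minimal sketch of the new strongly typed pair, assuming a hypothetical template context struct that embeds the spawn (ctx, inst, cipher_name, and mask are the usual create-function locals):

  struct foo_instance_ctx {                  /* hypothetical */
          struct crypto_cipher_spawn spawn;
  };

  /* Grab: resolve the cipher by name and bind it to the instance; no
   * manual CRYPTO_ALG_TYPE_CIPHER / CRYPTO_ALG_TYPE_MASK flags needed. */
  err = crypto_grab_cipher(&ctx->spawn, inst, cipher_name, 0, mask);
  if (err)
          goto err_free_inst;

  /* Drop: releases the algorithm reference held by the spawn. */
  crypto_drop_cipher(&ctx->spawn);
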
2020-01-09  crypto: ahash - introduce crypto_grab_ahash()  (Eric Biggers)

Currently, ahash spawns are initialized by using ahash_attr_alg() or crypto_find_alg() to look up the ahash algorithm, then calling crypto_init_ahash_spawn(). This is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity.

The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now.

So, let's introduce crypto_grab_ahash() so that we can convert all templates to the same way of initializing their spawns.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: shash - introduce crypto_grab_shash()  (Eric Biggers)

Currently, shash spawns are initialized by using shash_attr_alg() or crypto_alg_mod_lookup() to look up the shash algorithm, then calling crypto_init_shash_spawn(). This is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity.

The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now.

So, let's introduce crypto_grab_shash() so that we can convert all templates to the same way of initializing their spawns.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: algapi - pass instance to crypto_grab_spawn()  (Eric Biggers)

Currently, crypto_spawn::inst is first used temporarily to pass the instance to crypto_grab_spawn(). Then crypto_init_spawn() overwrites it with crypto_spawn::next, which shares the same union. Finally, crypto_spawn::inst is set again when the instance is registered.

Make this less convoluted by just passing the instance as an argument to crypto_grab_spawn() instead.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

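The resulting prototype, per the description above (sketch):

  /* The instance is now an explicit argument instead of being smuggled
   * in through crypto_spawn::inst before the call. */
  int crypto_grab_spawn(struct crypto_spawn *spawn,
                        struct crypto_instance *inst,
                        const char *name, u32 type, u32 mask);
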
2020-01-09  crypto: akcipher - pass instance to crypto_grab_akcipher()  (Eric Biggers)

Initializing a crypto_akcipher_spawn currently requires:

1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_akcipher().

But there's no reason for these steps to be separate, and in fact this unneeded complication has caused at least one bug, the one fixed by commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

So just make crypto_grab_akcipher() take the instance as an argument.

To keep the function call from getting too unwieldy due to this extra argument, also introduce a 'mask' variable into pkcs1pad_create().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: aead - pass instance to crypto_grab_aead()  (Eric Biggers)

Initializing a crypto_aead_spawn currently requires:

1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_aead().

But there's no reason for these steps to be separate, and in fact this unneeded complication has caused at least one bug, the one fixed by commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

So just make crypto_grab_aead() take the instance as an argument.

To keep the function calls from getting too unwieldy due to this extra argument, also introduce a 'mask' variable into the affected places which weren't already using one.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: skcipher - pass instance to crypto_grab_skcipher()  (Eric Biggers)

Initializing a crypto_skcipher_spawn currently requires:

1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_skcipher().

But there's no reason for these steps to be separate, and in fact this unneeded complication has caused at least one bug, the one fixed by commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

So just make crypto_grab_skcipher() take the instance as an argument.

To keep the function calls from getting too unwieldy due to this extra argument, also introduce a 'mask' variable into the affected places which weren't already using one.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

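Before and after, as a sketch (crypto_set_skcipher_spawn() was the old step-1 helper that stored the instance pointer):

  /* Before: two separate steps. */
  crypto_set_skcipher_spawn(spawn, inst);
  err = crypto_grab_skcipher(spawn, name, 0, mask);

  /* After: a single call that takes the instance explicitly. */
  err = crypto_grab_skcipher(spawn, inst, name, 0, mask);
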
2020-01-09  crypto: ahash - make struct ahash_instance be the full size  (Eric Biggers)

Define struct ahash_instance in a way analogous to struct skcipher_instance, struct aead_instance, and struct akcipher_instance, where the struct is defined to include both the algorithm structure at the beginning and the additional crypto_instance fields at the end.

This is needed to allow allocating ahash instances directly using kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher, aead, and akcipher instances. In turn, that's needed to make spawns be initialized in a consistent way everywhere.

Also take advantage of the addition of the base instance to struct ahash_instance by simplifying the ahash_crypto_instance() and ahash_instance() functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

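The allocation pattern this enables, lifted from the description above (sketch; ictx stands for a template's per-instance context type, and ahash_instance_ctx() is the existing context accessor):

  struct ahash_instance *inst;

  /* One allocation now covers the ahash_instance and the template's
   * context, as skcipher/aead/akcipher templates already do. */
  inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL);
  if (!inst)
          return -ENOMEM;
  ictx = ahash_instance_ctx(inst);
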
2020-01-09  crypto: shash - make struct shash_instance be the full size  (Eric Biggers)

Define struct shash_instance in a way analogous to struct skcipher_instance, struct aead_instance, and struct akcipher_instance, where the struct is defined to include both the algorithm structure at the beginning and the additional crypto_instance fields at the end.

This is needed to allow allocating shash instances directly using kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher, aead, and akcipher instances. In turn, that's needed to make spawns be initialized in a consistent way everywhere.

Also take advantage of the addition of the base instance to struct shash_instance by simplifying the shash_crypto_instance() and shash_instance() functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: remove CRYPTO_TFM_RES_WEAK_KEY  (Eric Biggers)

The CRYPTO_TFM_RES_WEAK_KEY flag was apparently meant as a way to make the ->setkey() functions provide more information about errors. However, no one actually checks for this flag, which makes it pointless. There are also no tests that verify that all algorithms actually set (or don't set) it correctly.

This is also the last remaining CRYPTO_TFM_RES_* flag, which means that it's the only thing still needing all the boilerplate code which propagates these flags around from child => parent tfms.

And if someone ever needs to distinguish this error in the future (which is somewhat unlikely, as it's been unneeded for a long time), it would be much better to just define a new return value like -EKEYREJECTED. That would be much simpler, less error-prone, and easier to test.

So just remove this flag.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: remove CRYPTO_TFM_RES_BAD_KEY_LEN  (Eric Biggers)

The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to make the ->setkey() functions provide more information about errors. However, no one actually checks for this flag, which makes it pointless.

Also, many algorithms fail to set this flag when given a bad length key. Reviewing just the generic implementations, this is the case for aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309, rfc7539, rfc7539esp, salsa20, seqiv, and xcbc. But there are probably many more in arch/*/crypto/ and drivers/crypto/.

Some algorithms can even set this flag when the key is the correct length. For example, authenc and authencesn set it when the key payload is malformed in any way (not just a bad length), the atmel-sha and ccree drivers can set it if a memory allocation fails, and the chelsio driver sets it for bad auth tag lengths, not just bad key lengths.

So even if someone actually wanted to start checking this flag (which seems unlikely, since it's been unused for a long time), there would be a lot of work needed to get it working correctly. But it would probably be much better to go back to the drawing board and just define different return values, like -EINVAL if the key is invalid for the algorithm vs. -EKEYREJECTED if the key was rejected by a policy like "no weak keys". That would be much simpler, less error-prone, and easier to test.

So just remove this flag.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

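For illustration, a ->setkey() under this convention simply returns an error code, with no flag to set (hedged sketch; foo_setkey() and FOO_KEY_SIZE are hypothetical):

  static int foo_setkey(struct crypto_skcipher *tfm, const u8 *key,
                        unsigned int keylen)
  {
          if (keylen != FOO_KEY_SIZE)     /* hypothetical key size */
                  return -EINVAL;         /* no CRYPTO_TFM_RES_BAD_KEY_LEN */
          /* ... expand and store the key ... */
          return 0;
  }
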
2020-01-09  crypto: skcipher - remove skcipher_walk_aead()  (Eric Biggers)

skcipher_walk_aead() is unused and is identical to skcipher_walk_aead_encrypt(), so remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-27  crypto: skcipher - Add skcipher_ialg_simple helper  (Herbert Xu)

This patch introduces the skcipher_ialg_simple helper which fetches the crypto_alg structure from a simple skcipher instance's spawn. This allows us to remove the third argument from the function skcipher_alloc_instance_simple.

In doing so, the reference count to the algorithm is now maintained by the Crypto API and the caller no longer needs to drop the alg refcount.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-27  crypto: api - Retain alg refcount in crypto_grab_spawn  (Herbert Xu)

This patch changes crypto_grab_spawn to retain the reference count on the algorithm. This is because the caller needs to access the algorithm parameters and without the reference count the algorithm can be freed at any time. The reference count will be subsequently dropped by the crypto API once the instance has been registered.

The helper crypto_drop_spawn will also conditionally drop the reference count depending on whether it has been registered.

Note that the code is actually added to crypto_init_spawn. However, unless the caller activates this by setting spawn->dropref beforehand then nothing happens. The only caller that sets dropref is currently crypto_grab_spawn. Once all legacy users of crypto_init_spawn disappear, then we can kill the dropref flag.

Internally each instance will maintain a list of its spawns prior to registration. The memory used by this list is shared with other fields that are only used after registration. In order for this to work a new flag spawn->registered is added to indicate whether spawn->inst can be used.

Fixes: d6ef2f198d4c ("crypto: api - Add crypto_grab_spawn primitive")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-20  crypto: algapi - make unregistration functions return void  (Eric Biggers)

Some of the algorithm unregistration functions return -ENOENT when asked to unregister a non-registered algorithm, while others always return 0 or always return void. But no users check the return value, except for two of the bulk unregistration functions which print a message on error but still always return 0 to their caller, and crypto_del_alg() which calls crypto_unregister_instance() which always returns 0.

Since unregistering a non-registered algorithm is always a kernel bug but there isn't anything callers should do to handle this situation at runtime, let's simplify things by making all the unregistration functions return void, and moving the error message into crypto_unregister_alg() and upgrading it to a WARN().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: hmac - Use init_tfm/exit_tfm interface  (Herbert Xu)

This patch switches hmac over to the new init_tfm/exit_tfm interface as opposed to cra_init/cra_exit. This way the shash API can make sure that descsize does not exceed the maximum.

This patch also adds the API helper shash_alg_instance.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: shash - Add init_tfm/exit_tfm and verify descsize  (Herbert Xu)

The shash interface supports a dynamic descsize field because of the presence of fallbacks (it's just padlock-sha actually, perhaps we can remove it one day). As it is, the API does not verify the setting of descsize at all. It is up to the individual algorithms to ensure that descsize does not exceed the specified maximum value of HASH_MAX_DESCSIZE (going above would cause stack corruption).

In order to allow the API to impose this limit directly, this patch adds init_tfm/exit_tfm hooks to the shash_alg structure. We can then verify the descsize setting in the API directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

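A hedged sketch of a driver using the new hooks (the foo_* names are hypothetical; the hook signatures mirror the rest of struct shash_alg):

  static int foo_init_tfm(struct crypto_shash *tfm)
  {
          /* e.g. set up a fallback and adjust descsize here; the API can
           * then check the final value against HASH_MAX_DESCSIZE. */
          return 0;
  }

  static void foo_exit_tfm(struct crypto_shash *tfm)
  {
          /* tear down whatever foo_init_tfm() set up */
  }

  static struct shash_alg foo_alg = {
          /* ... digestsize, init/update/final, base, ... */
          .init_tfm = foo_init_tfm,
          .exit_tfm = foo_exit_tfm,
  };
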
2019-12-11  crypto: api - Do not zap spawn->alg  (Herbert Xu)

Currently when a spawn is removed we will zap its alg field. This is racy because the spawn could belong to an unregistered instance which may dereference the spawn->alg field.

This patch fixes this by keeping spawn->alg constant and instead adding a new spawn->dead field to indicate that a spawn is going away.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: chacha - fix warning message in header file  (Valdis Klētnieks)

Building with W=1 causes a warning:

  CC [M]  arch/x86/crypto/chacha_glue.o
In file included from arch/x86/crypto/chacha_glue.c:10:
./include/crypto/internal/chacha.h:37:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]
   37 | static int inline chacha12_setkey(struct crypto_skcipher *tfm, const u8 *key,
      | ^~~~~~

Straighten out the order to match the rest of the header file.

Signed-off-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: skcipher - add crypto_skcipher_min_keysize()  (Eric Biggers)

Add a helper function crypto_skcipher_min_keysize() to mirror crypto_skcipher_max_keysize(). This will be used by the self-tests.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

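Given the existing crypto_skcipher_max_keysize(), the new helper is presumably its obvious mirror (sketch):

  static inline unsigned int crypto_skcipher_min_keysize(
          struct crypto_skcipher *tfm)
  {
          return crypto_skcipher_alg(tfm)->min_keysize;
  }
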
2019-12-11  crypto: aead - move crypto_aead_maxauthsize() to <crypto/aead.h>  (Eric Biggers)

Move crypto_aead_maxauthsize() to <crypto/aead.h> so that it's available to users of the API, not just AEAD implementations. This will be used by the self-tests.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: shash - allow essiv and hmac to use OPTIONAL_KEY algorithms  (Eric Biggers)

The essiv and hmac templates refuse to use any hash algorithm that has a ->setkey() function, which includes not just algorithms that always need a key, but also algorithms that optionally take a key.

Previously the only optionally-keyed hash algorithms in the crypto API were non-cryptographic algorithms like crc32, so this didn't really matter. But that's changed with BLAKE2 support being added. BLAKE2 should work with essiv and hmac, just like any other cryptographic hash.

Fix this by allowing the use of both algorithms without a ->setkey() function and algorithms that have the OPTIONAL_KEY flag set.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

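Conceptually the acceptance test becomes something like the following (a sketch of the idea, not the exact hunk; CRYPTO_ALG_OPTIONAL_KEY is the existing flag):

  /* Reject only hashes that require a key; unkeyed hashes and
   * optionally keyed ones (e.g. BLAKE2) are both acceptable. */
  if (alg->setkey && !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
          return -EINVAL;
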
2019-12-11  crypto: skcipher - remove crypto_skcipher::decrypt  (Eric Biggers)

Due to the removal of the blkcipher and ablkcipher algorithm types, crypto_skcipher::decrypt is now redundant since it always equals crypto_skcipher_alg(tfm)->decrypt.

Remove it and update crypto_skcipher_decrypt() accordingly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: skcipher - remove crypto_skcipher::encrypt  (Eric Biggers)

Due to the removal of the blkcipher and ablkcipher algorithm types, crypto_skcipher::encrypt is now redundant since it always equals crypto_skcipher_alg(tfm)->encrypt.

Remove it and update crypto_skcipher_encrypt() accordingly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: skcipher - remove crypto_skcipher::setkey  (Eric Biggers)

Due to the removal of the blkcipher and ablkcipher algorithm types, crypto_skcipher::setkey now always points to skcipher_setkey().

Simplify by removing this function pointer and instead just making skcipher_setkey() be crypto_skcipher_setkey() directly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: skcipher - remove crypto_skcipher::keysize  (Eric Biggers)

Due to the removal of the blkcipher and ablkcipher algorithm types, crypto_skcipher::keysize is now redundant since it always equals crypto_skcipher_alg(tfm)->max_keysize.

Remove it and update crypto_skcipher_default_keysize() accordingly.

Also rename crypto_skcipher_default_keysize() to crypto_skcipher_max_keysize() to clarify that it specifically returns the maximum key size, not some unspecified "default".

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: skcipher - remove crypto_skcipher::ivsize  (Eric Biggers)

Due to the removal of the blkcipher and ablkcipher algorithm types, crypto_skcipher::ivsize is now redundant since it always equals crypto_skcipher_alg(tfm)->ivsize.

Remove it and update crypto_skcipher_ivsize() accordingly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: x86 - Regularize glue function prototypes  (Kees Cook)

The crypto glue performed function prototype casting via macros to make indirect calls to assembly routines. Instead of performing casts at the call sites (which trips Control Flow Integrity prototype checking), switch each prototype to a common standard set of arguments which allows the removal of the existing macros. In order to keep pointer math unchanged, internal casting between u128 pointers and u8 pointers is added.

Co-developed-by: João Moreira <joao.moreira@intel.com>
Signed-off-by: João Moreira <joao.moreira@intel.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

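Roughly, the change looks like this (an illustrative sketch with hypothetical foo_* names, not quoted from the patch):

  /* Before: per-cipher shapes, cast at the call site (trips CFI). */
  asmlinkage void foo_ecb_enc(struct foo_ctx *ctx, u128 *dst, const u128 *src);

  /* After: one common shape for all glue-called asm routines; any
   * u128/u8 pointer conversion happens internally, not via casts. */
  asmlinkage void foo_ecb_enc(const void *ctx, u8 *dst, const u8 *src);
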
2019-11-17  crypto: ablkcipher - remove deprecated and unused ablkcipher support  (Ard Biesheuvel)

Now that all users of the deprecated ablkcipher interface have been moved to the skcipher interface, ablkcipher is no longer used and can be removed.

Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: lib/chacha20poly1305 - reimplement crypt_from_sg() routine  (Ard Biesheuvel)

Reimplement the library routines to perform chacha20poly1305 en/decryption on scatterlists, without [ab]using the [deprecated] blkcipher interface, which is rather heavyweight and does things we don't really need.

Instead, we use the sg_miter API in a novel and clever way, to iterate over the scatterlist in-place (i.e., source == destination, which is the only way this library is expected to be used). That way, we don't have to iterate over two scatterlists in parallel.

Another optimization is that, instead of relying on the blkcipher walker to present the input in suitable chunks, we recognize that ChaCha is a streamcipher, and so we can simply deal with partial blocks by keeping a block of cipherstream on the stack and use crypto_xor() to mix it with the in/output.

Finally, we omit the scatterwalk_and_copy() call if the last element of the scatterlist covers the MAC as well (which is the common case), avoiding the need to walk the scatterlist and kmap() the page twice.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: chacha20poly1305 - import construction and selftest from Zinc  (Ard Biesheuvel)

This incorporates the chacha20poly1305 from the Zinc library, retaining the library interface, but replacing the implementation with calls into the code that already existed in the kernel's crypto API.

Note that this library API does not implement RFC7539 fully, given that it is limited to 64-bit nonces. (The 96-bit nonce version that was part of the selftest only has been removed, along with the 96-bit nonce test vectors that only tested the selftest but not the actual library itself)

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

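The retained library interface looks like this (a sketch of <crypto/chacha20poly1305.h> as I recall it; note the u64 nonce matching the 64-bit-nonce limitation described above):

  void chacha20poly1305_encrypt(u8 *dst, const u8 *src, const size_t src_len,
                                const u8 *ad, const size_t ad_len,
                                const u64 nonce,
                                const u8 key[CHACHA20POLY1305_KEY_SIZE]);

  /* Returns false if the authentication tag does not verify. */
  bool chacha20poly1305_decrypt(u8 *dst, const u8 *src, const size_t src_len,
                                const u8 *ad, const size_t ad_len,
                                const u64 nonce,
                                const u8 key[CHACHA20POLY1305_KEY_SIZE]);
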
2019-11-17  crypto: curve25519 - generic C library implementations  (Jason A. Donenfeld)

This contains two formally verified C implementations of the Curve25519 scalar multiplication function, one for 32-bit systems, and one for 64-bit systems whose compiler supports efficient 128-bit integer types. Not only are these implementations formally verified, but they are also the fastest available C implementations. They have been modified to be friendly to kernel space and to be generally less horrendous looking, but still an effort has been made to retain their formally verified characteristic, and so the C might look slightly unidiomatic.

The 64-bit version comes from HACL*: https://github.com/project-everest/hacl-star
The 32-bit version comes from Fiat: https://github.com/mit-plv/fiat-crypto

Information: https://cr.yp.to/ecdh.html

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
[ardb: - move from lib/zinc to lib/crypto
       - replace .c #includes with Kconfig based object selection
       - drop simd handling and simplify support for per-arch versions]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

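Callers use a single scalar-multiplication entry point (hedged sketch of the <crypto/curve25519.h> interface; my_secret and their_public are assumed 32-byte buffers):

  #include <crypto/curve25519.h>

  u8 shared[CURVE25519_KEY_SIZE];

  /* Returns false when the result is the all-zero point, i.e. the
   * peer's public key must be rejected. */
  if (!curve25519(shared, my_secret, their_public))
          return -EKEYREJECTED;
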
2019-11-17  crypto: blake2s - implement generic shash driver  (Ard Biesheuvel)

Wire up our newly added Blake2s implementation via the shash API.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: blake2s - generic C library implementation and selftest  (Jason A. Donenfeld)

The C implementation was originally based on Samuel Neves' public domain reference implementation but has since been heavily modified for the kernel. We're able to do compile-time optimizations by moving some scaffolding around the final function into the header file.

Information: https://blake2.net/

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
Co-developed-by: Samuel Neves <sneves@dei.uc.pt>
[ardb: - move from lib/zinc to lib/crypto
       - remove simd handling
       - rewrote selftest for better coverage
       - use fixed digest length for blake2s_hmac() and rename to blake2s256_hmac()]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

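For illustration, the one-shot library call (a sketch; the argument order follows <crypto/blake2s.h> as I recall it, and an incremental init/update/final interface exists as well):

  #include <crypto/blake2s.h>

  u8 digest[BLAKE2S_HASH_SIZE];

  /* Unkeyed one-shot hash: key = NULL, keylen = 0. */
  blake2s(digest, data, NULL, BLAKE2S_HASH_SIZE, data_len, 0);
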
2019-11-17  crypto: x86/poly1305 - depend on generic library not generic shash  (Ard Biesheuvel)

Remove the dependency on the generic Poly1305 driver. Instead, depend on the generic library so that we only reuse code without pulling in the generic skcipher implementation as well.

While at it, remove the logic that prefers the non-SIMD path for short inputs - this is no longer necessary after recent FPU handling changes on x86.

Since this removes the last remaining user of the routines exported by the generic shash driver, unexport them and make them static.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: poly1305 - expose init/update/final library interface  (Ard Biesheuvel)

Expose the existing generic Poly1305 code via a init/update/final library interface so that callers are not required to go through the crypto API's shash abstraction to access it.

At the same time, make some preparations so that the library implementation can be superseded by an accelerated arch-specific version in the future.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

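The exposed interface, per the description above (sketch; struct and constant names follow <crypto/poly1305.h>):

  #include <crypto/poly1305.h>

  struct poly1305_desc_ctx desc;
  u8 mac[POLY1305_DIGEST_SIZE];           /* 16-byte tag */

  poly1305_init(&desc, key);              /* key: POLY1305_KEY_SIZE bytes */
  poly1305_update(&desc, data, data_len);
  poly1305_final(&desc, mac);
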
2019-11-17  crypto: x86/poly1305 - unify Poly1305 state struct with generic code  (Ard Biesheuvel)

In preparation of exposing a Poly1305 library interface directly from the accelerated x86 driver, align the state descriptor of the x86 code with the one used by the generic driver. This is needed to make the library interface unified between all implementations.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: poly1305 - move core routines into a separate library  (Ard Biesheuvel)

Move the core Poly1305 routines shared between the generic Poly1305 shash driver and the Adiantum and NHPoly1305 drivers into a separate library so that using just these pieces does not pull in the crypto API pieces of the generic Poly1305 routine.

In a subsequent patch, we will augment this generic library with init/update/final routines so that the Poly1305 algorithm can be used directly without the need for using the crypto API's shash abstraction.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: chacha - unexport chacha_generic routines  (Ard Biesheuvel)

Now that all users of generic ChaCha code have moved to the core library, there is no longer a need for the generic ChaCha skcipher driver to export parts of its implementation for reuse by other drivers. So drop the exports, and make the symbols static.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-17  crypto: x86/chacha - expose SIMD ChaCha routine as library function  (Ard Biesheuvel)

Wire the existing x86 SIMD ChaCha code into the new ChaCha library interface, so that users of the library interface will get the accelerated version when available.

Given that calls into the library API will always go through the routines in this module if it is enabled, switch to static keys to select the optimal implementation available (which may be none at all, in which case we defer to the generic implementation for all invocations).

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

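The dispatch pattern described here, sketched (illustrative, not the exact patch; crypto_simd_usable() is the existing helper from <crypto/internal/simd.h> and chacha_crypt_generic() the library fallback):

  static __ro_after_init DEFINE_STATIC_KEY_FALSE(chacha_use_simd);

  void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src,
                         unsigned int bytes, int nrounds)
  {
          if (!static_branch_likely(&chacha_use_simd) ||
              !crypto_simd_usable()) {
                  chacha_crypt_generic(state, dst, src, bytes, nrounds);
                  return;
          }
          /* kernel_fpu_begin(); ... SIMD path ...; kernel_fpu_end(); */
  }
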
2019-11-17  crypto: chacha - move existing library code into lib/crypto  (Ard Biesheuvel)

Currently, our generic ChaCha implementation consists of a permute function in lib/chacha.c that operates on the 64-byte ChaCha state directly [and which is always included into the core kernel since it is used by the /dev/random driver], and the crypto API plumbing to expose it as a skcipher.

In order to support in-kernel users that need the ChaCha streamcipher but have no need [or tolerance] for going through the abstractions of the crypto API, let's expose the streamcipher bits via a library API as well, in a way that permits the implementation to be superseded by an architecture specific one if provided.

So move the streamcipher code into a separate module in lib/crypto, and expose the init() and crypt() routines to users of the library.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

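Library users then call the stream cipher directly (hedged sketch of the <crypto/chacha.h> entry points as I recall them):

  #include <crypto/chacha.h>

  u32 state[16];                           /* the 64-byte ChaCha state */

  chacha_init(state, key, iv);             /* key: 8 x u32, iv: 16 bytes */
  chacha_crypt(state, dst, src, len, 20);  /* 20 rounds = ChaCha20 */
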
2019-11-01  crypto: skcipher - remove the "blkcipher" algorithm type  (Eric Biggers)

Now that all "blkcipher" algorithms have been converted to "skcipher", remove the blkcipher algorithm type.

The skcipher (symmetric key cipher) algorithm type was introduced a few years ago to replace both blkcipher and ablkcipher (synchronous and asynchronous block cipher). The advantages of skcipher include:

- A much less confusing name, since none of these algorithm types have ever actually been for raw block ciphers, but rather for all length-preserving encryption modes including block cipher modes of operation, stream ciphers, and other length-preserving modes.

- It unified blkcipher and ablkcipher into a single algorithm type which supports both synchronous and asynchronous implementations. Note, blkcipher already operated only on scatterlists, so the fact that skcipher does too isn't a regression in functionality.

- Better type safety by using struct skcipher_alg, struct crypto_skcipher, etc. instead of crypto_alg, crypto_tfm, etc.

- It sometimes simplifies the implementations of algorithms.

Also, the blkcipher API was no longer being tested.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-11-01  crypto: skcipher - unify the crypto_has_skcipher*() functions  (Eric Biggers)

crypto_has_skcipher() and crypto_has_skcipher2() do the same thing: they check for the availability of an algorithm of type skcipher, blkcipher, or ablkcipher, which also meets any non-type constraints the caller specified. And they have exactly the same prototype.

Therefore, eliminate the redundancy by removing crypto_has_skcipher() and renaming crypto_has_skcipher2() to crypto_has_skcipher().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-10-05  crypto: algif_skcipher - Use chunksize instead of blocksize  (Herbert Xu)

When algif_skcipher does a partial operation it always processes data that is a multiple of blocksize. However, for algorithms such as CTR this is wrong because even though it can process any number of bytes overall, the partial block must come at the very end and not in the middle.

This is exactly what chunksize is meant to describe so this patch changes blocksize to chunksize.

Fixes: 8ff590903d5f ("crypto: algif_skcipher - User-space...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

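Illustratively, a partial operation now rounds down to the chunk size rather than the block size (a sketch of the idea, not the exact hunk; crypto_skcipher_chunksize() is the existing helper):

  /* For CTR-like modes the partial block may only come at the very
   * end, so stop a partial operation on a chunksize boundary. */
  unsigned int chunk = crypto_skcipher_chunksize(tfm);
  unsigned int used = len - (len % chunk);
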
2019-09-27  Merge branch 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity  (Linus Torvalds)

Pull integrity updates from Mimi Zohar:
 "The major feature in this time is IMA support for measuring and appraising appended file signatures. In addition are a couple of bug fixes and code cleanup to use struct_size().

  In addition to the PE/COFF and IMA xattr signatures, the kexec kernel image may be signed with an appended signature, using the same scripts/sign-file tool that is used to sign kernel modules. Similarly, the initramfs may contain an appended signature.

  This contained a lot of refactoring of the existing appended signature verification code, so that IMA could retain the existing framework of calculating the file hash once, storing it in the IMA measurement list and extending the TPM, verifying the file's integrity based on a file hash or signature (eg. xattrs), and adding an audit record containing the file hash, all based on policy. (The IMA support for appended signatures patch set was posted and reviewed 11 times.)

  The support for appended signature paves the way for adding other signature verification methods, such as fs-verity, based on a single system-wide policy. The file hash used for verifying the signature and the signature, itself, can be included in the IMA measurement list"

* 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity:
  ima: ima_api: Use struct_size() in kzalloc()
  ima: use struct_size() in kzalloc()
  sefltest/ima: support appended signatures (modsig)
  ima: Fix use after free in ima_read_modsig()
  MODSIGN: make new include file self contained
  ima: fix freeing ongoing ahash_request
  ima: always return negative code for error
  ima: Store the measurement again when appraising a modsig
  ima: Define ima-modsig template
  ima: Collect modsig
  ima: Implement support for module-style appended signatures
  ima: Factor xattr_verify() out of ima_appraise_measurement()
  ima: Add modsig appraise_type option for module-style appended signatures
  integrity: Select CONFIG_KEYS instead of depending on it
  PKCS#7: Introduce pkcs7_get_digest()
  PKCS#7: Refactor verify_pkcs7_signature()
  MODSIGN: Export module signature definitions
  ima: initialize the "template" field with the default template