path: root/arch/arm64/crypto

2025-12-10  crypto: arm64/ghash - Fix incorrect output from ghash-neon  (Eric Biggers)
Commit 9a7c987fb92b ("crypto: arm64/ghash - Use API partial block handling") made ghash_finup() pass the wrong buffer to ghash_do_simd_update(). As a result, ghash-neon now produces incorrect outputs when the message length isn't divisible by 16 bytes. Fix this.
(I didn't notice this earlier because this code is reached only on CPUs that support NEON but not PMULL. I haven't yet found a way to get qemu-system-aarch64 to emulate that configuration.)
Fixes: 9a7c987fb92b ("crypto: arm64/ghash - Use API partial block handling")
Cc: stable@vger.kernel.org
Reported-by: Diederik de Haas <diederik@cknow-tech.com>
Closes: https://lore.kernel.org/linux-crypto/DETXT7QI62KE.F3CGH2VWX1SC@cknow-tech.com/
Tested-by: Diederik de Haas <diederik@cknow-tech.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Link: https://lore.kernel.org/r/20251209223417.112294-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-12-09  crypto/arm64: sm4/xts - Merge ksimd scopes to reduce stack bloat  (Ard Biesheuvel)
Merge the two ksimd scopes in the implementation of SM4-XTS to prevent stack bloat in cases where the compiler fails to combine the stack slots for the kernel mode FP/SIMD buffers.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20251203163803.157541-6-ardb@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-12-09  crypto/arm64: aes/xts - Use single ksimd scope to reduce stack bloat  (Ard Biesheuvel)
The ciphertext stealing logic in the AES-XTS implementation creates a separate ksimd scope to call into the FP/SIMD core routines, and in some cases (CONFIG_KASAN_STACK is one, but there might be others), the 528 byte kernel mode FP/SIMD buffer that is allocated inside this scope is not shared with the preceding ksimd scope, resulting in unnecessary stack bloat.
Considering that
a) the XTS ciphertext stealing logic is never called for block encryption use cases, and XTS is rarely used for anything else,
b) in the vast majority of cases, the entire input block is processed during the first iteration of the loop,
we can combine both ksimd scopes into a single one with no practical impact on how often/how long FP/SIMD is en/disabled, allowing us to reuse the same stack slot for both FP/SIMD routine calls.
Fixes: ba3c1b3b5ac9 ("crypto/arm64: aes-blk - Switch to 'ksimd' scoped guard API")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20251203163803.157541-5-ardb@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
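
A hedged sketch of the shape of this change, assuming scoped_ksimd() is used like the other scoped guards; the helper names are made-up stand-ins for the FP/SIMD core routines, not the actual aes-glue.c code:

    /* Before: two ksimd scopes; the compiler may fail to share the
     * kernel-mode FP/SIMD save buffer between them. */
    scoped_ksimd() {
            xts_encrypt_blocks(ctx, dst, src, nblocks, iv);         /* hypothetical */
    }
    if (tail) {
            scoped_ksimd() {
                    xts_encrypt_cts_tail(ctx, dst, src, tail, iv);  /* hypothetical */
            }
    }

    /* After: a single scope, so both calls reuse one stack slot. */
    scoped_ksimd() {
            xts_encrypt_blocks(ctx, dst, src, nblocks, iv);
            if (tail)
                    xts_encrypt_cts_tail(ctx, dst, src, tail, iv);
    }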

2025-11-12  Merge tag 'arm64-fpsimd-on-stack-for-v6.19' into libcrypto-fpsimd-on-stack  (Eric Biggers)
Pull fpsimd-on-stack changes from Ard Biesheuvel:
"Shared tag/branch for arm64 FP/SIMD changes going through libcrypto"
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-11-12  crypto/arm64: sm4 - Switch to 'ksimd' scoped guard API  (Ard Biesheuvel)
Switch to the more abstract 'scoped_ksimd()' API, which will be modified in a future patch to transparently allocate a kernel mode FP/SIMD state buffer on the stack, so that kernel mode FP/SIMD code remains preemptible in principle, but without the memory overhead that adds 528 bytes to the size of struct task_struct.
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
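
For illustration, the conversion pattern at a call site might look like this; the NEON routine name is hypothetical, and the exact scoped_ksimd() semantics are assumed from its description as a scoped guard:

    /* Before: explicit begin/end around the NEON routine. */
    kernel_neon_begin();
    sm4_neon_crypt_block(rkey, dst, src);               /* hypothetical */
    kernel_neon_end();

    /* After: the scoped guard owns the kernel-mode FP/SIMD section and can
     * later pass a stack-allocated state buffer to the core code. */
    scoped_ksimd() {
            sm4_neon_crypt_block(rkey, dst, src);       /* hypothetical */
    }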

2025-11-12  crypto/arm64: sm3 - Switch to 'ksimd' scoped guard API  (Ard Biesheuvel)
Switch to the more abstract 'scoped_ksimd()' API, which will be modified in a future patch to transparently allocate a kernel mode FP/SIMD state buffer on the stack, so that kernel mode FP/SIMD code remains preemptible in principle, but without the memory overhead that adds 528 bytes to the size of struct task_struct.
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

2025-11-12  crypto/arm64: sha3 - Switch to 'ksimd' scoped guard API  (Ard Biesheuvel)
Switch to the more abstract 'scoped_ksimd()' API, which will be modified in a future patch to transparently allocate a kernel mode FP/SIMD state buffer on the stack, so that kernel mode FP/SIMD code remains preemptible in principle, but without the memory overhead that adds 528 bytes to the size of struct task_struct.
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

2025-11-12  crypto/arm64: polyval - Switch to 'ksimd' scoped guard API  (Ard Biesheuvel)
Switch to the more abstract 'scoped_ksimd()' API, which will be modified in a future patch to transparently allocate a kernel mode FP/SIMD state buffer on the stack, so that kernel mode FP/SIMD code remains preemptible in principle, but without the memory overhead that adds 528 bytes to the size of struct task_struct.
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

2025-11-12  crypto/arm64: nhpoly1305 - Switch to 'ksimd' scoped guard API  (Ard Biesheuvel)
Switch to the more abstract 'scoped_ksimd()' API, which will be modified in a future patch to transparently allocate a kernel mode FP/SIMD state buffer on the stack, so that kernel mode FP/SIMD code remains preemptible in principle, but without the memory overhead that adds 528 bytes to the size of struct task_struct.
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

2025-11-12  crypto/arm64: aes-gcm - Switch to 'ksimd' scoped guard API  (Ard Biesheuvel)
Switch to the more abstract 'scoped_ksimd()' API, which will be modified in a future patch to transparently allocate a kernel mode FP/SIMD state buffer on the stack, so that kernel mode FP/SIMD code remains preemptible in principle, but without the memory overhead that adds 528 bytes to the size of struct task_struct.
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

2025-11-12  crypto/arm64: aes-blk - Switch to 'ksimd' scoped guard API  (Ard Biesheuvel)
Switch to the more abstract 'scoped_ksimd()' API, which will be modified in a future patch to transparently allocate a kernel mode FP/SIMD state buffer on the stack, so that kernel mode FP/SIMD code remains preemptible in principle, but without the memory overhead that adds 528 bytes to the size of struct task_struct.
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

2025-11-12  crypto/arm64: aes-ccm - Switch to 'ksimd' scoped guard API  (Ard Biesheuvel)
Switch to the more abstract 'scoped_ksimd()' API, which will be modified in a future patch to transparently allocate a kernel mode FP/SIMD state buffer on the stack, so that kernel mode FP/SIMD code remains preemptible in principle, but without the memory overhead that adds 528 bytes to the size of struct task_struct.
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

2025-11-12  crypto/arm64: sm4-ce-gcm - Avoid pointless yield of the NEON unit  (Ard Biesheuvel)
Kernel mode NEON sections are now preemptible on arm64, and so there is no need to yield the NEON unit when calling APIs that may sleep.
Also, move the calls to kernel_neon_end() to the same scope as kernel_neon_begin(). This is needed for a subsequent change where a stack buffer is allocated transparently and passed to kernel_neon_begin().
While at it, simplify the logic.
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
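
A sketch of the pattern being removed; the chunk helper and loop structure are illustrative, not the literal sm4-ce-gcm code:

    /* Old pattern: yield the NEON unit around skcipher_walk_done(),
     * which may sleep. */
    kernel_neon_begin();
    while (walk.nbytes) {
            unsigned int n = gcm_do_chunk(&walk);       /* hypothetical */

            kernel_neon_end();                          /* yield ... */
            err = skcipher_walk_done(&walk, walk.nbytes - n);
            kernel_neon_begin();                        /* ... and re-acquire */
    }
    kernel_neon_end();

    /* New pattern: kernel-mode NEON is preemptible, so begin/end simply
     * bracket the SIMD work within a single scope inside the loop. */
    while (walk.nbytes) {
            unsigned int n;

            kernel_neon_begin();
            n = gcm_do_chunk(&walk);                    /* hypothetical */
            kernel_neon_end();

            err = skcipher_walk_done(&walk, walk.nbytes - n);
    }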

2025-11-12  crypto/arm64: sm4-ce-ccm - Avoid pointless yield of the NEON unit  (Ard Biesheuvel)
Kernel mode NEON sections are now preemptible on arm64, and so there is no need to yield the NEON unit when calling APIs that may sleep.
Also, move the calls to kernel_neon_end() to the same scope as kernel_neon_begin(). This is needed for a subsequent change where a stack buffer is allocated transparently and passed to kernel_neon_begin().
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

2025-11-12  crypto/arm64: aes-ce-ccm - Avoid pointless yield of the NEON unit  (Ard Biesheuvel)
Kernel mode NEON sections are now preemptible on arm64, and so there is no need to yield the NEON unit explicitly in order to prevent scheduling latency spikes.
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

2025-11-11  lib/crypto: arm64/polyval: Migrate optimized code into library  (Eric Biggers)
Migrate the arm64 implementation of POLYVAL into lib/crypto/, wiring it up to the POLYVAL library interface. This makes the POLYVAL library be properly optimized on arm64.
This drops the arm64 optimizations of polyval in the crypto_shash API. That's fine, since polyval will be removed from crypto_shash entirely, as it is unneeded there. But even if it comes back, the crypto_shash API could just be implemented on top of the library API, as usual.
Adjust the names and prototypes of the assembly functions to align more closely with the rest of the library code.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20251109234726.638437-5-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-11-05  lib/crypto: arm64/sha3: Migrate optimized code into library  (Eric Biggers)
Instead of exposing the arm64-optimized SHA-3 code via arm64-specific crypto_shash algorithms, just implement the sha3_absorb_blocks() and sha3_keccakf() library functions. This is much simpler, it makes the SHA-3 library functions be arm64-optimized, and it fixes the longstanding issue where the arm64-optimized SHA-3 code was disabled by default. SHA-3 still remains available through crypto_shash, but individual architectures no longer need to handle it.
Note: to see the diff from arch/arm64/crypto/sha3-ce-glue.c to lib/crypto/arm64/sha3.h, view this commit with 'git show -M10'.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20251026055032.1413733-10-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-11-05  crypto: arm64/sha3 - Update sha3_ce_transform() to prepare for library  (Eric Biggers)
- Use size_t lengths, to match the library.
- Pass the block size instead of digest size, and add support for the block size that SHAKE128 uses. This allows the code to be used with SHAKE128 and SHAKE256, which don't have the concept of a digest size. SHAKE256 has the same block size as SHA3-256, but SHAKE128 has a unique block size. Thus, there are now 5 supported block sizes.
Don't bother changing the "glue" code arm64_sha3_update() too much, as it gets deleted when the SHA-3 code is migrated into lib/crypto/ anyway.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20251026055032.1413733-9-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
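
For reference, the five Keccak rates (block sizes) involved, each equal to the 200-byte Keccak state minus the capacity (twice the output size for the fixed-output digests); the macro names below are illustrative, though <crypto/sha3.h> defines equivalents for the SHA-3 digests:

    #define SHA3_224_RATE   144     /* 200 - 2*28 */
    #define SHA3_256_RATE   136     /* 200 - 2*32, shared with SHAKE256 */
    #define SHA3_384_RATE   104     /* 200 - 2*48 */
    #define SHA3_512_RATE    72     /* 200 - 2*64 */
    #define SHAKE128_RATE   168     /* 200 - 2*16: unique to SHAKE128 */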

2025-11-03  crypto: arm64/sha3 - Rename conflicting function  (David Howells)
Rename the arm64 sha3_update() function to have an "arm64_" prefix to avoid a name conflict with the upcoming SHA-3 library.
Note: this code will be superseded later. This commit simply keeps the kernel building for the initial introduction of the library.
[EB: dropped unnecessary rename of sha3_finup(), and improved commit message]
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20251026055032.1413733-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-08-30  crypto: arm64/aes - use SHA-256 library instead of crypto_shash  (Eric Biggers)
In essiv_cbc_set_key(), just use the SHA-256 library instead of crypto_shash. This is simpler and also slightly faster.
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
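
A minimal sketch of the resulting key-derivation step, assuming the one-shot sha256() helper from <crypto/sha2.h>; the function and parameter names are illustrative, not the actual aes-glue.c code:

    #include <crypto/sha2.h>

    /* Derive the ESSIV tweak key as the SHA-256 digest of the cipher key;
     * no crypto_shash allocation and no error path needed. */
    static void essiv_derive_key(const u8 *in_key, unsigned int key_len,
                                 u8 digest[SHA256_DIGEST_SIZE])
    {
            sha256(in_key, key_len, digest);
    }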

2025-07-14  lib/crypto: arm64/sha1: Migrate optimized code into library  (Eric Biggers)
Instead of exposing the arm64-optimized SHA-1 code via arm64-specific crypto_shash algorithms, just implement the sha1_blocks() library function. This is much simpler, it makes the SHA-1 library functions be arm64-optimized, and it fixes the longstanding issue where the arm64-optimized SHA-1 code was disabled by default. SHA-1 still remains available through crypto_shash, but individual architectures no longer need to handle it.
Remove support for SHA-1 finalization from assembly code, since the library does not yet support architecture-specific overrides of the finalization. (Support for that has been omitted for now, for simplicity and because usually it isn't performance-critical.)
To match sha1_blocks(), change the type of the nblocks parameter and the return value of __sha1_ce_transform() from int to size_t. Update the assembly code accordingly.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-9-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
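
Roughly, the prototype change on the glue side looks like the following; treat the parameter names as approximate:

    /* before */
    asmlinkage int __sha1_ce_transform(struct sha1_ce_state *sst,
                                       u8 const *src, int blocks);

    /* after: size_t in and out, matching the library's sha1_blocks() */
    asmlinkage size_t __sha1_ce_transform(struct sha1_ce_state *sst,
                                          u8 const *src, size_t nblocks);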

2025-06-30  lib/crypto: arm64/sha512: Migrate optimized SHA-512 code to library  (Eric Biggers)
Instead of exposing the arm64-optimized SHA-512 code via arm64-specific crypto_shash algorithms, just implement the sha512_blocks() library function. This is much simpler, it makes the SHA-512 (and SHA-384) library functions be arm64-optimized, and it fixes the longstanding issue where the arm64-optimized SHA-512 code was disabled by default. SHA-512 still remains available through crypto_shash, but individual architectures no longer need to handle it.
To match sha512_blocks(), change the type of the nblocks parameter of the assembly functions from int or 'unsigned int' to size_t. Update the ARMv8 CE assembly function accordingly. The scalar assembly function actually already treated it as size_t.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160320.2888-9-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-06-30  crypto: sha512 - Rename conflicting symbols  (Eric Biggers)
Rename existing functions and structs in architecture-optimized SHA-512 code that had names conflicting with the upcoming library interface which will be added to <crypto/sha2.h>: sha384_init, sha512_init, sha512_update, sha384, and sha512.
Note: all affected code will be superseded by later commits that migrate the arch-optimized SHA-512 code into the library. This commit simply keeps the kernel building for the initial introduction of the library.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160320.2888-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

2025-05-05  crypto: arm64/sha256 - Add simd block function  (Herbert Xu)
Add CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD and a SIMD block function so that the caller can decide whether to use SIMD.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-05-05  crypto: arm64/sha256 - implement library instead of shash  (Eric Biggers)
Instead of providing crypto_shash algorithms for the arch-optimized SHA-256 code, implement the SHA-256 library. This is much simpler, it makes the SHA-256 library functions be arch-optimized, and it fixes the longstanding issue where the arch-optimized SHA-256 was disabled by default. SHA-256 still remains available through crypto_shash, but individual architectures no longer need to handle it.
Remove support for SHA-256 finalization from the ARMv8 CE assembly code, since the library does not yet support architecture-specific overrides of the finalization. (Support for that has been omitted for now, for simplicity and because usually it isn't performance-critical.)
To match sha256_blocks_arch(), change the type of the nblocks parameter of the assembly functions from int or 'unsigned int' to size_t. Update the ARMv8 CE assembly function accordingly. The scalar and NEON assembly functions actually already treated it as size_t.
While renaming the assembly files, also fix the naming quirks where "sha2" meant sha256, and "sha512" meant both sha256 and sha512.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-05-05  crypto: arm64/sha256 - remove obsolete chunking logic  (Eric Biggers)
Since kernel-mode NEON sections are now preemptible on arm64, there is no longer any need to limit their length.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-28  crypto: arm64/polyval - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-28  crypto: arm64 - move library functions to arch/arm64/lib/crypto/  (Eric Biggers)
Continue disentangling the crypto library functions from the generic crypto infrastructure by moving the arm64 ChaCha and Poly1305 library functions into a new directory arch/arm64/lib/crypto/ that does not depend on CRYPTO. This mirrors the distinction between crypto/ and lib/crypto/.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-28  crypto: arm64 - drop redundant dependencies on ARM64  (Eric Biggers)
arch/arm64/crypto/Kconfig is sourced only when CONFIG_ARM64=y, so there is no need for the symbols defined inside it to depend on ARM64.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
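
As an illustration, a Kconfig entry of this shape simply loses the explicit architecture dependency; the option below is representative, not a literal quote of the file:

    config CRYPTO_GHASH_ARM64_CE
            tristate "Hash functions: GHASH (ARMv8 Crypto Extensions)"
            # previously: depends on ARM64 && KERNEL_MODE_NEON
            depends on KERNEL_MODE_NEON
            select CRYPTO_HASH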

2025-04-26  crypto: arm64/sha1 - Set finalize for short finup  (Herbert Xu)
Always set sctx->finalize before calling finup as it may not have been set previously on a short final.
Reported-by: Corentin LABBE <clabbe.montjoie@gmail.com>
Fixes: b97d31100e36 ("crypto: arm64/sha1 - Use API partial block handling")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Corentin LABBE <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: arm64/sm4 - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: arm64/aes - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: arm64/sm3-neon - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: arm64/sm3-ce - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: arm64/sha512 - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: arm64/sha512-ce - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: arm64/sha3-ce - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: arm64/sha256 - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: arm64/sha256-ce - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: arm64/sha1 - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: arm64/ghash - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-19  crypto: lib/poly1305 - restore ability to remove modules  (Eric Biggers)
Though the module_exit functions are now no-ops, they should still be defined, since otherwise the modules become unremovable.
Fixes: 1f81c58279c7 ("crypto: arm/poly1305 - remove redundant shash algorithm")
Fixes: f4b1a73aec5c ("crypto: arm64/poly1305 - remove redundant shash algorithm")
Fixes: 378a337ab40f ("crypto: powerpc/poly1305 - implement library instead of shash")
Fixes: 21969da642a2 ("crypto: x86/poly1305 - remove redundant shash algorithm")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-19  crypto: lib/chacha - restore ability to remove modules  (Eric Biggers)
Though the module_exit functions are now no-ops, they should still be defined, since otherwise the modules become unremovable.
Fixes: 08820553f33a ("crypto: arm/chacha - remove the redundant skcipher algorithms")
Fixes: 8c28abede16c ("crypto: arm64/chacha - remove the skcipher algorithms")
Fixes: f7915484c020 ("crypto: powerpc/chacha - remove the skcipher algorithms")
Fixes: ceba0eda8313 ("crypto: riscv/chacha - implement library instead of skcipher")
Fixes: 632ab0978f08 ("crypto: x86/chacha - remove the skcipher algorithms")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
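
The shape of the fix for both the poly1305 and chacha modules is just a stub exit handler; a hedged sketch for one affected module follows (the function name is illustrative):

    #include <linux/module.h>

    static void __exit chacha_simd_mod_exit(void)
    {
            /* Nothing left to unregister, but the handler must exist,
             * otherwise the module cannot be unloaded. */
    }
    module_exit(chacha_simd_mod_exit);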

2025-04-16  crypto: arm64/poly1305 - remove redundant shash algorithm  (Eric Biggers)
Since crypto/poly1305.c now registers a poly1305-$(ARCH) shash algorithm that uses the architecture's Poly1305 library functions, individual architectures no longer need to do the same. Therefore, remove the redundant shash algorithm from the arch-specific code and leave just the library functions there.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-16  crypto: poly1305 - centralize the shash wrappers for arch code  (Eric Biggers)
Following the example of the crc32, crc32c, and chacha code, make the crypto subsystem register both generic and architecture-optimized poly1305 shash algorithms, both implemented on top of the appropriate library functions. This eliminates the need for every architecture to implement the same shash glue code.
Note that the poly1305 shash requires that the key be prepended to the data, which differs from the library functions where the key is simply a parameter to poly1305_init(). Previously this was handled at a fairly low level, polluting the library code with shash-specific code. Reorganize things so that the shash code handles this quirk itself.
Also, to register the architecture-optimized shashes only when architecture-optimized code is actually being used, add a function poly1305_is_arch_optimized() and make each arch implement it. Change each architecture's Poly1305 module_init function to arch_initcall so that the CPU feature detection is guaranteed to run before poly1305_is_arch_optimized() gets called by crypto/poly1305.c. (In cases where poly1305_is_arch_optimized() just returns true unconditionally, using arch_initcall is not strictly needed, but it's still good to be consistent across architectures.)
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
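
A sketch of the per-architecture hook on arm64, assuming the optimized code is gated by a NEON static key; the names follow the usual glue-code conventions but are not quoted from the tree:

    #include <linux/jump_label.h>
    #include <linux/module.h>
    #include <asm/cpufeature.h>

    static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);

    bool poly1305_is_arch_optimized(void)
    {
            return static_key_enabled(&have_neon);
    }
    EXPORT_SYMBOL(poly1305_is_arch_optimized);

    /* arch_initcall rather than module_init, so the CPU feature check has
     * already run when crypto/poly1305.c calls the hook above. */
    static int __init poly1305_arm64_mod_init(void)
    {
            if (cpu_have_named_feature(ASIMD))
                    static_branch_enable(&have_neon);
            return 0;
    }
    arch_initcall(poly1305_arm64_mod_init);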

2025-04-16  crypto: cbcmac - Set block size properly  (Herbert Xu)
The block size of a hash algorithm is meant to be the number of bytes its block function can handle. For cbcmac that should be the block size of the underlying block cipher instead of one. Set the block size of all cbcmac implementations accordingly.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-16  crypto: lib/sm3 - Move sm3 library into lib/crypto  (Herbert Xu)
Move the sm3 library code into lib/crypto.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-16  crypto: arm64/sha512 - Fix header inclusions  (Herbert Xu)
Instead of relying on linux/module.h being included through the header file sha512_base.h, include it directly.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-07  crypto: arm64/chacha - remove the skcipher algorithms  (Eric Biggers)
Since crypto/chacha.c now registers chacha20-$(ARCH), xchacha20-$(ARCH), and xchacha12-$(ARCH) skcipher algorithms that use the architecture's ChaCha and HChaCha library functions, individual architectures no longer need to do the same. Therefore, remove the redundant skcipher algorithms and leave just the library functions.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-07  crypto: chacha - centralize the skcipher wrappers for arch code  (Eric Biggers)
Following the example of the crc32 and crc32c code, make the crypto subsystem register both generic and architecture-optimized chacha20, xchacha20, and xchacha12 skcipher algorithms, all implemented on top of the appropriate library functions. This eliminates the need for every architecture to implement the same skcipher glue code.
To register the architecture-optimized skciphers only when architecture-optimized code is actually being used, add a function chacha_is_arch_optimized() and make each arch implement it. Change each architecture's ChaCha module_init function to arch_initcall so that the CPU feature detection is guaranteed to run before chacha_is_arch_optimized() gets called by crypto/chacha.c.
In the case of s390, remove the CPU feature based module autoloading, which is no longer needed since the module just gets pulled in via function linkage.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>